Column schema (types and observed ranges, recovered from the dataset viewer): id (string, 9-16 chars), submitter (string, 3-64 chars, nullable), authors (string, 5-6.63k chars), title (string, 7-245 chars), comments (string, 1-482 chars, nullable), journal-ref (string, 4-382 chars, nullable), doi (string, 9-151 chars, nullable), report-no (string, 984 classes), categories (string, 5-108 chars), license (string, 9 classes), abstract (string, 83-3.41k chars), versions (list, length 1-20), update_date (timestamp[s], 2007-05-23 to 2025-04-11), authors_parsed (list, length 1-427), prompt (string, 166-3.49k chars), label (string, 2 classes), prob (float64, 0.5-0.98).

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2412.10121 | Jonas Golde | Jonas Golde, Patrick Haller, Max Ploner, Fabio Barth, Nicolaas Jedema,
Alan Akbik | Familiarity: Better Evaluation of Zero-Shot Named Entity Recognition by
Quantifying Label Shifts in Synthetic Training Data | 9 pages, 4 figures, 5 tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Zero-shot named entity recognition (NER) is the task of detecting named
entities of specific types (such as 'Person' or 'Medicine') without any
training examples. Current research increasingly relies on large synthetic
datasets, automatically generated to cover tens of thousands of distinct entity
types, to train zero-shot NER models. However, in this paper, we find that
these synthetic datasets often contain entity types that are semantically
highly similar to (or even the same as) those in standard evaluation
benchmarks. Because of this overlap, we argue that reported F1 scores for
zero-shot NER overestimate the true capabilities of these approaches. Further,
we argue that current evaluation setups provide an incomplete picture of
zero-shot abilities since they do not quantify the label shift (i.e., the
similarity of labels) between training and evaluation datasets. To address
these issues, we propose Familiarity, a novel metric that captures the
semantic similarity between entity types in training and evaluation, as well as
their frequency in the training data, to provide an estimate of label shift. It
allows researchers to contextualize reported zero-shot NER scores when using
custom synthetic training datasets. Further, it enables researchers to generate
evaluation setups of various transfer difficulties for fine-grained analysis of
zero-shot NER.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2024 13:06:58 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 11:54:22 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Golde",
"Jonas",
""
],
[
"Haller",
"Patrick",
""
],
[
"Ploner",
"Max",
""
],
[
"Barth",
"Fabio",
""
],
[
"Jedema",
"Nicolaas",
""
],
[
"Akbik",
"Alan",
""
]
]
| TITLE: Familiarity: Better Evaluation of Zero-Shot Named Entity Recognition by
Quantifying Label Shifts in Synthetic Training Data
ABSTRACT: Zero-shot named entity recognition (NER) is the task of detecting named
entities of specific types (such as 'Person' or 'Medicine') without any
training examples. Current research increasingly relies on large synthetic
datasets, automatically generated to cover tens of thousands of distinct entity
types, to train zero-shot NER models. However, in this paper, we find that
these synthetic datasets often contain entity types that are semantically
highly similar to (or even the same as) those in standard evaluation
benchmarks. Because of this overlap, we argue that reported F1 scores for
zero-shot NER overestimate the true capabilities of these approaches. Further,
we argue that current evaluation setups provide an incomplete picture of
zero-shot abilities since they do not quantify the label shift (i.e., the
similarity of labels) between training and evaluation datasets. To address
these issues, we propose Familiarity, a novel metric that captures the
semantic similarity between entity types in training and evaluation, as well as
their frequency in the training data, to provide an estimate of label shift. It
allows researchers to contextualize reported zero-shot NER scores when using
custom synthetic training datasets. Further, it enables researchers to generate
evaluation setups of various transfer difficulties for fine-grained analysis of
zero-shot NER.
| no_new_dataset | 0.799833 |
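
The Familiarity metric described above is straightforward to approximate. Below is a minimal sketch, assuming entity-type embeddings come from any off-the-shelf sentence encoder; the frequency-weighted cosine aggregation is an assumption, since the paper's exact formula is not given in the abstract.

```python
# Hedged sketch of a Familiarity-style score: frequency-weighted semantic
# similarity between evaluation and training entity types. The aggregation
# below is an assumption; the paper's exact formula may differ.
import numpy as np

def familiarity(train_emb: np.ndarray, train_freq: np.ndarray,
                eval_emb: np.ndarray) -> float:
    """train_emb: (T, d) embeddings of training entity types.
    train_freq: (T,) counts of each type in the synthetic training data.
    eval_emb: (E, d) embeddings of evaluation benchmark entity types."""
    t = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    e = eval_emb / np.linalg.norm(eval_emb, axis=1, keepdims=True)
    sim = e @ t.T                        # (E, T) cosine similarities
    weights = train_freq / train_freq.sum()
    # Each eval label's similarity to the training label space, weighted by
    # how often each training label was actually seen.
    return float((sim * weights).sum(axis=1).mean())
```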
2412.10525 | Rahul Harsha Cheppally | Rahul Harsha Cheppally and Ajay Sharda | RowDetr: End-to-End Row Detection Using Polynomials | Code will be open sourced upon publication | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Crop row detection is essential for enabling autonomous navigation in
GPS-denied environments, such as under-canopy agricultural settings.
Traditional methods often struggle with occlusions, variable lighting
conditions, and the structural variability of crop rows. To address these
challenges, RowDetr, a novel end-to-end neural network architecture, is
introduced for robust and efficient row detection. A new dataset of
approximately 6,900 images is curated, capturing a diverse range of real-world
agricultural conditions, including occluded rows, uneven terrain, and varying
crop densities. Unlike previous approaches, RowDetr leverages smooth polynomial
functions to precisely delineate crop boundaries in the image space, ensuring a
more structured and interpretable representation of row geometry. A key
innovation of this approach is PolyOptLoss, a novel energy-based loss function
designed to enhance learning robustness, even in the presence of noisy or
imperfect labels. This loss function significantly improves model stability and
generalization by optimizing polynomial curve fitting directly in image space.
Extensive experiments demonstrate that RowDetr significantly outperforms
existing frameworks, including Agronav and RowColAttention, across key
performance metrics. Additionally, RowDetr achieves a sixfold speedup over
Agronav, making it highly suitable for real-time deployment on
resource-constrained edge devices. To facilitate better comparisons across
future studies, lane detection metrics from autonomous driving research are
adapted, providing a more standardized and meaningful evaluation framework for
crop row detection. This work establishes a new benchmark in under-canopy
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2024 19:38:36 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 00:00:57 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Cheppally",
"Rahul Harsha",
""
],
[
"Sharda",
"Ajay",
""
]
]
| TITLE: RowDetr: End-to-End Row Detection Using Polynomials
ABSTRACT: Crop row detection is essential for enabling autonomous navigation in
GPS-denied environments, such as under-canopy agricultural settings.
Traditional methods often struggle with occlusions, variable lighting
conditions, and the structural variability of crop rows. To address these
challenges, RowDetr, a novel end-to-end neural network architecture, is
introduced for robust and efficient row detection. A new dataset of
approximately 6,900 images is curated, capturing a diverse range of real-world
agricultural conditions, including occluded rows, uneven terrain, and varying
crop densities. Unlike previous approaches, RowDetr leverages smooth polynomial
functions to precisely delineate crop boundaries in the image space, ensuring a
more structured and interpretable representation of row geometry. A key
innovation of this approach is PolyOptLoss, a novel energy-based loss function
designed to enhance learning robustness, even in the presence of noisy or
imperfect labels. This loss function significantly improves model stability and
generalization by optimizing polynomial curve fitting directly in image space.
Extensive experiments demonstrate that RowDetr significantly outperforms
existing frameworks, including Agronav and RowColAttention, across key
performance metrics. Additionally, RowDetr achieves a sixfold speedup over
Agronav, making it highly suitable for real-time deployment on
resource-constrained edge devices. To facilitate better comparisons across
future studies, lane detection metrics from autonomous driving research are
adapted, providing a more standardized and meaningful evaluation framework for
crop row detection. This work establishes a new benchmark in under-canopy
| new_dataset | 0.907148 |
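
Since the abstract describes PolyOptLoss only as an energy-based loss that optimizes polynomial curve fitting directly in image space, here is a hedged sketch of that idea; the sampling density and the true energy formulation are assumptions, not the paper's loss.

```python
# Minimal sketch: penalize the image-space discrepancy between a predicted
# and a ground-truth polynomial row boundary, sampled along the vertical
# axis. An illustration of the stated idea, not the paper's exact loss.
import numpy as np

def poly_curve_loss(pred_coeffs, gt_coeffs, img_height=480, n_samples=64):
    """Coefficients in numpy.polyval order (highest degree first), modeling
    the row as x = poly(y) with y the vertical pixel coordinate."""
    ys = np.linspace(0.0, img_height - 1.0, n_samples)
    x_pred = np.polyval(pred_coeffs, ys)
    x_gt = np.polyval(gt_coeffs, ys)
    return float(np.mean((x_pred - x_gt) ** 2))

# Example: a slightly perturbed quadratic against the reference curve.
loss = poly_curve_loss([1e-4, 0.10, 200.0], [1e-4, 0.12, 198.0])
```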
2412.11542 | Ziyang Chen | Ziyang Chen, Yiwen Ye, Feilong Tang, Yongsheng Pan, and Yong Xia | Meta Curvature-Aware Minimization for Domain Generalization | 22 pages, 5 figures, 16 tables | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Domain generalization (DG) aims to enhance the ability of models trained on
source domains to generalize effectively to unseen domains. Recently,
Sharpness-Aware Minimization (SAM) has shown promise in this area by reducing
the sharpness of the loss landscape to obtain more generalized models. However,
SAM and its variants sometimes fail to guide the model toward a flat minimum,
and their training processes exhibit limitations, hindering further
improvements in model generalization. In this paper, we first propose an
improved model training process aimed at encouraging the model to converge to a
flat minimum. To achieve this, we design a curvature metric that has a minimal
effect when the model is far from convergence but becomes increasingly
influential in indicating the curvature of the minima as the model approaches a
local minimum. Then we derive a novel algorithm from this metric, called Meta
Curvature-Aware Minimization (MeCAM), to minimize the curvature around the
local minima. Specifically, the optimization objective of MeCAM simultaneously
minimizes the regular training loss, the surrogate gap of SAM, and the
surrogate gap of meta-learning. We provide theoretical analysis on MeCAM's
generalization error and convergence rate, and demonstrate its superiority over
existing DG methods through extensive experiments on five benchmark DG
datasets, including PACS, VLCS, OfficeHome, TerraIncognita, and DomainNet. Code
will be available on GitHub.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 08:22:23 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 10:39:41 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 05:49:35 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Chen",
"Ziyang",
""
],
[
"Ye",
"Yiwen",
""
],
[
"Tang",
"Feilong",
""
],
[
"Pan",
"Yongsheng",
""
],
[
"Xia",
"Yong",
""
]
]
| TITLE: Meta Curvature-Aware Minimization for Domain Generalization
ABSTRACT: Domain generalization (DG) aims to enhance the ability of models trained on
source domains to generalize effectively to unseen domains. Recently,
Sharpness-Aware Minimization (SAM) has shown promise in this area by reducing
the sharpness of the loss landscape to obtain more generalized models. However,
SAM and its variants sometimes fail to guide the model toward a flat minimum,
and their training processes exhibit limitations, hindering further
improvements in model generalization. In this paper, we first propose an
improved model training process aimed at encouraging the model to converge to a
flat minimum. To achieve this, we design a curvature metric that has a minimal
effect when the model is far from convergence but becomes increasingly
influential in indicating the curvature of the minima as the model approaches a
local minimum. Then we derive a novel algorithm from this metric, called Meta
Curvature-Aware Minimization (MeCAM), to minimize the curvature around the
local minima. Specifically, the optimization objective of MeCAM simultaneously
minimizes the regular training loss, the surrogate gap of SAM, and the
surrogate gap of meta-learning. We provide theoretical analysis on MeCAM's
generalization error and convergence rate, and demonstrate its superiority over
existing DG methods through extensive experiments on five benchmark DG
datasets, including PACS, VLCS, OfficeHome, TerraIncognita, and DomainNet. Code
will be available on GitHub.
| no_new_dataset | 0.953794 |
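
The abstract states that MeCAM's objective combines the regular training loss with the surrogate gap of SAM and a meta-learning surrogate gap. The PyTorch sketch below computes only the standard SAM surrogate gap, gap(w) = L(w + rho * g/||g||) - L(w), which is the published building block; the meta-learning gap and MeCAM's weighting are not specified in the abstract and are not reproduced.

```python
# SAM surrogate gap: loss at the adversarially perturbed weights minus the
# loss at the current weights. A sketch of one ingredient of MeCAM only.
import torch

def sam_surrogate_gap(model, loss_fn, x, y, rho=0.05):
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params)
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(rho * g / norm)        # step to the local worst case
        perturbed_loss = loss_fn(model(x), y)
        for p, g in zip(params, grads):
            p.sub_(rho * g / norm)        # restore the original weights
    return (perturbed_loss - loss).item()
```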
2412.13533 | Mingjian Li | Mingjian Li, Mingyuan Meng, Shuchang Ye, Michael Fulham, Lei Bi,
Jinman Kim | Language-guided Medical Image Segmentation with Target-informed
Multi-level Contrastive Alignments | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical image segmentation is crucial in modern medical image analysis, which
can aid in the diagnosis of various disease conditions. Recently, language-guided
segmentation methods have shown promising results in automating image
segmentation where text reports are incorporated as guidance. These text
reports, containing image impressions and insights given by clinicians,
provide auxiliary guidance. However, these methods neglect the inherent
pattern gaps between the two distinct modalities, which leads to sub-optimal
image-text feature fusion without proper cross-modality feature alignments.
Contrastive alignments are widely used to associate image-text semantics in
representation learning; however, they have not been exploited to bridge the
pattern gaps in language-guided segmentation, which relies on subtle low-level
image details to represent diseases. Existing contrastive alignment methods
typically align high-level global image semantics without involving low-level,
localized target information, and therefore fail to explore fine-grained text
guidance for language-guided segmentation. In this study, we propose a
language-guided segmentation network with Target-informed Multi-level
Contrastive Alignments (TMCA). TMCA enables target-informed cross-modality
alignments and fine-grained text guidance to bridge the pattern gaps in
language-guided segmentation. Specifically, we introduce: 1) a target-sensitive
semantic distance module that enables granular image-text alignment modelling,
and 2) a multi-level alignment strategy that directs text guidance on low-level
image features. In addition, a language-guided target enhancement module is
proposed to leverage the aligned text to redirect attention to focus on
critical localized image features. Extensive experiments on 4 image-text
datasets, involving 3 medical imaging modalities, demonstrated that our TMCA
achieved superior performance.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2024 06:19:03 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 13:13:02 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Li",
"Mingjian",
""
],
[
"Meng",
"Mingyuan",
""
],
[
"Ye",
"Shuchang",
""
],
[
"Fulham",
"Michael",
""
],
[
"Bi",
"Lei",
""
],
[
"Kim",
"Jinman",
""
]
]
| TITLE: Language-guided Medical Image Segmentation with Target-informed
Multi-level Contrastive Alignments
ABSTRACT: Medical image segmentation is crucial in modern medical image analysis, which
can aid in the diagnosis of various disease conditions. Recently, language-guided
segmentation methods have shown promising results in automating image
segmentation where text reports are incorporated as guidance. These text
reports, containing image impressions and insights given by clinicians,
provide auxiliary guidance. However, these methods neglect the inherent
pattern gaps between the two distinct modalities, which leads to sub-optimal
image-text feature fusion without proper cross-modality feature alignments.
Contrastive alignments are widely used to associate image-text semantics in
representation learning; however, they have not been exploited to bridge the
pattern gaps in language-guided segmentation, which relies on subtle low-level
image details to represent diseases. Existing contrastive alignment methods
typically align high-level global image semantics without involving low-level,
localized target information, and therefore fail to explore fine-grained text
guidance for language-guided segmentation. In this study, we propose a
language-guided segmentation network with Target-informed Multi-level
Contrastive Alignments (TMCA). TMCA enables target-informed cross-modality
alignments and fine-grained text guidance to bridge the pattern gaps in
language-guided segmentation. Specifically, we introduce: 1) a target-sensitive
semantic distance module that enables granular image-text alignment modelling,
and 2) a multi-level alignment strategy that directs text guidance on low-level
image features. In addition, a language-guided target enhancement module is
proposed to leverage the aligned text to redirect attention to focus on
critical localized image features. Extensive experiments on 4 image-text
datasets, involving 3 medical imaging modalities, demonstrated that our TMCA
achieved superior performance.
| no_new_dataset | 0.949482 |
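
For readers unfamiliar with the contrastive alignment machinery that TMCA builds on, the sketch below shows a standard symmetric image-text InfoNCE loss. TMCA's actual contributions (the target-sensitive semantic distance and the multi-level strategy) are not detailed in the abstract and are not attempted here.

```python
# Generic symmetric image-text contrastive alignment (InfoNCE), the common
# baseline that target-informed alignment methods extend.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(img_feat, txt_feat, temperature=0.07):
    """img_feat, txt_feat: (B, d) paired features; row i of each is a pair."""
    img = F.normalize(img_feat, dim=1)
    txt = F.normalize(txt_feat, dim=1)
    logits = img @ txt.t() / temperature            # (B, B) similarities
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```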
2412.14103 | R\'emi Marsal | R\'emi Marsal, Alexandre Chapoutot, Philippe Xu and David Filliat | A Simple yet Effective Test-Time Adaptation for Zero-Shot Monocular
Metric Depth Estimation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The recent development of foundation models for monocular depth estimation
such as Depth Anything paved the way to zero-shot monocular depth estimation.
Since it returns an affine-invariant disparity map, the favored technique to
recover the metric depth consists in fine-tuning the model. However, this stage
is not straightforward: it can be costly and time-consuming because of the
training and the creation of the dataset. The latter must contain images
captured by the camera that will be used at test time and the corresponding
ground truth. Moreover, the fine-tuning may also degrade the generalizing
capacity of the original model. Instead, we propose in this paper a new method
to rescale Depth Anything predictions using 3D points provided by sensors or
techniques such as low-resolution LiDAR or structure-from-motion with poses
given by an IMU. This approach avoids fine-tuning and preserves the
generalizing power of the original depth estimation model while being robust to
the noise of the sparse depth or of the depth model. Our experiments highlight
enhancements relative to zero-shot monocular metric depth estimation methods,
competitive results compared to fine-tuned approaches and a better robustness
than depth completion approaches. Code available at
https://gitlab.ensta.fr/ssh/monocular-depth-rescaling.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2024 17:50:15 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 11:02:33 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Marsal",
"Rémi",
""
],
[
"Chapoutot",
"Alexandre",
""
],
[
"Xu",
"Philippe",
""
],
[
"Filliat",
"David",
""
]
]
| TITLE: A Simple yet Effective Test-Time Adaptation for Zero-Shot Monocular
Metric Depth Estimation
ABSTRACT: The recent development of foundation models for monocular depth estimation
such as Depth Anything paved the way to zero-shot monocular depth estimation.
Since it returns an affine-invariant disparity map, the favored technique to
recover the metric depth consists in fine-tuning the model. However, this stage
is not straightforward: it can be costly and time-consuming because of the
training and the creation of the dataset. The latter must contain images
captured by the camera that will be used at test time and the corresponding
ground truth. Moreover, the fine-tuning may also degrade the generalizing
capacity of the original model. Instead, we propose in this paper a new method
to rescale Depth Anything predictions using 3D points provided by sensors or
techniques such as low-resolution LiDAR or structure-from-motion with poses
given by an IMU. This approach avoids fine-tuning and preserves the
generalizing power of the original depth estimation model while being robust to
the noise of the sparse depth or of the depth model. Our experiments highlight
enhancements relative to zero-shot monocular metric depth estimation methods,
competitive results compared to fine-tuned approaches and a better robustness
than depth completion approaches. Code available at
https://gitlab.ensta.fr/ssh/monocular-depth-rescaling.
| no_new_dataset | 0.948585 |
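
The rescaling step the abstract describes can be sketched as a least-squares fit of a scale and shift, under the common assumption that metric inverse depth relates affinely to the predicted disparity; the paper's robustness mechanisms against noisy sparse points are not reproduced.

```python
# Sketch: fit scale s and shift t so that s * d + t ~= 1 / z at the sparse
# metric points, then invert to a metric depth map. Robustness to outliers
# (e.g., RANSAC) is omitted.
import numpy as np

def rescale_depth(pred_disparity, sparse_xy, sparse_z):
    """pred_disparity: (H, W) affine-invariant disparity prediction.
    sparse_xy: (N, 2) integer pixel coordinates (x, y) of sparse 3D points.
    sparse_z: (N,) metric depths from low-res LiDAR or SfM with IMU poses."""
    d = pred_disparity[sparse_xy[:, 1], sparse_xy[:, 0]]
    A = np.stack([d, np.ones_like(d)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, 1.0 / sparse_z, rcond=None)
    metric_disp = np.clip(s * pred_disparity + t, 1e-6, None)
    return 1.0 / metric_disp
```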
2412.15429 | Ze Gong | Ze Gong, Akshat Kumar, Pradeep Varakantham | Offline Safe Reinforcement Learning Using Trajectory Classification | AAAI 2025 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Offline safe reinforcement learning (RL) has emerged as a promising approach
for learning safe behaviors without engaging in risky online interactions with
the environment. Most existing methods in offline safe RL rely on cost
constraints at each time step (derived from global cost constraints) and this
can result in either overly conservative policies or violation of safety
constraints. In this paper, we propose to learn a policy that generates
desirable trajectories and avoids undesirable trajectories. To be specific, we
first partition the pre-collected dataset of state-action trajectories into
desirable and undesirable subsets. Intuitively, the desirable set contains high
reward and safe trajectories, and the undesirable set contains unsafe trajectories
and low-reward safe trajectories. Second, we learn a policy that generates
desirable trajectories and avoids undesirable trajectories, where
(un)desirability scores are provided by a classifier learnt from the dataset of
desirable and undesirable trajectories. This approach bypasses the
computational complexity and stability issues of a min-max objective that is
employed in existing methods. Theoretically, we also show our approach's strong
connections to existing learning paradigms involving human feedback. Finally,
we extensively evaluate our method using the DSRL benchmark for offline safe
RL. Empirically, our method outperforms competitive baselines, achieving higher
rewards and better constraint satisfaction across a wide variety of benchmark
tasks.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2024 22:29:03 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Feb 2025 17:22:17 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 11:20:12 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Gong",
"Ze",
""
],
[
"Kumar",
"Akshat",
""
],
[
"Varakantham",
"Pradeep",
""
]
]
| TITLE: Offline Safe Reinforcement Learning Using Trajectory Classification
ABSTRACT: Offline safe reinforcement learning (RL) has emerged as a promising approach
for learning safe behaviors without engaging in risky online interactions with
the environment. Most existing methods in offline safe RL rely on cost
constraints at each time step (derived from global cost constraints) and this
can result in either overly conservative policies or violation of safety
constraints. In this paper, we propose to learn a policy that generates
desirable trajectories and avoids undesirable trajectories. To be specific, we
first partition the pre-collected dataset of state-action trajectories into
desirable and undesirable subsets. Intuitively, the desirable set contains high
reward and safe trajectories, and the undesirable set contains unsafe trajectories
and low-reward safe trajectories. Second, we learn a policy that generates
desirable trajectories and avoids undesirable trajectories, where
(un)desirability scores are provided by a classifier learnt from the dataset of
desirable and undesirable trajectories. This approach bypasses the
computational complexity and stability issues of a min-max objective that is
employed in existing methods. Theoretically, we also show our approach's strong
connections to existing learning paradigms involving human feedback. Finally,
we extensively evaluate our method using the DSRL benchmark for offline safe
RL. Empirically, our method outperforms competitive baselines, achieving higher
rewards and better constraint satisfaction across a wide variety of benchmark
tasks.
| no_new_dataset | 0.942242 |
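
The partitioning rule stated in the abstract maps directly to code. A minimal sketch, assuming each trajectory records its total reward and total cost; the thresholds are illustrative hyperparameters, not values from the paper.

```python
# Desirable: safe and high-reward. Undesirable: unsafe, or safe but low-reward.
def partition_trajectories(trajectories, cost_limit, reward_threshold):
    desirable, undesirable = [], []
    for traj in trajectories:
        is_safe = traj["cost"] <= cost_limit
        if is_safe and traj["reward"] >= reward_threshold:
            desirable.append(traj)
        else:
            undesirable.append(traj)
    return desirable, undesirable

# A classifier trained on these two sets then supplies the (un)desirability
# scores used to weight policy learning.
```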
2412.19225 | Zhiqiang Yan | Zhiqiang Yan and Zhengxue Wang and Kun Wang and Jun Li and Jian Yang | Completion as Enhancement: A Degradation-Aware Selective Image Guided
Network for Depth Completion | CVPR 2025 | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce the Selective Image Guided Network (SigNet), a
novel degradation-aware framework that transforms depth completion into depth
enhancement for the first time. Moving beyond direct completion using
convolutional neural networks (CNNs), SigNet initially densifies sparse depth
data through non-CNN densification tools to obtain coarse yet dense depth. This
approach eliminates the mismatch and ambiguity caused by direct convolution
over irregularly sampled sparse data. Subsequently, SigNet redefines completion
as enhancement, establishing a self-supervised degradation bridge between the
coarse depth and the targeted dense depth for effective RGB-D fusion. To
achieve this, SigNet leverages the implicit degradation to adaptively select
high-frequency components (e.g., edges) of RGB data to compensate for the
coarse depth. This degradation is further integrated into a multi-modal
conditional Mamba, dynamically generating the state parameters to enable
efficient global high-frequency information interaction. We conduct extensive
experiments on the NYUv2, DIML, SUN RGBD, and TOFDC datasets, demonstrating the
state-of-the-art (SOTA) performance of SigNet.
| [
{
"version": "v1",
"created": "Thu, 26 Dec 2024 14:05:01 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 15:33:32 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Yan",
"Zhiqiang",
""
],
[
"Wang",
"Zhengxue",
""
],
[
"Wang",
"Kun",
""
],
[
"Li",
"Jun",
""
],
[
"Yang",
"Jian",
""
]
]
| TITLE: Completion as Enhancement: A Degradation-Aware Selective Image Guided
Network for Depth Completion
ABSTRACT: In this paper, we introduce the Selective Image Guided Network (SigNet), a
novel degradation-aware framework that transforms depth completion into depth
enhancement for the first time. Moving beyond direct completion using
convolutional neural networks (CNNs), SigNet initially densifies sparse depth
data through non-CNN densification tools to obtain coarse yet dense depth. This
approach eliminates the mismatch and ambiguity caused by direct convolution
over irregularly sampled sparse data. Subsequently, SigNet redefines completion
as enhancement, establishing a self-supervised degradation bridge between the
coarse depth and the targeted dense depth for effective RGB-D fusion. To
achieve this, SigNet leverages the implicit degradation to adaptively select
high-frequency components (e.g., edges) of RGB data to compensate for the
coarse depth. This degradation is further integrated into a multi-modal
conditional Mamba, dynamically generating the state parameters to enable
efficient global high-frequency information interaction. We conduct extensive
experiments on the NYUv2, DIML, SUN RGBD, and TOFDC datasets, demonstrating the
state-of-the-art (SOTA) performance of SigNet.
| no_new_dataset | 0.950365 |
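
The abstract only says SigNet densifies sparse depth with non-CNN tools. One classical way to do that is sketched below with SciPy interpolation, which is an assumption about the kind of tool meant.

```python
# Classical (non-CNN) densification of a sparse depth map by interpolation,
# with a nearest-neighbor fallback outside the convex hull of the samples.
import numpy as np
from scipy.interpolate import griddata

def densify_sparse_depth(sparse_depth):
    """sparse_depth: (H, W) array with 0 where no measurement exists."""
    ys, xs = np.nonzero(sparse_depth)
    vals = sparse_depth[ys, xs]
    grid_y, grid_x = np.mgrid[0:sparse_depth.shape[0], 0:sparse_depth.shape[1]]
    coarse = griddata((ys, xs), vals, (grid_y, grid_x), method="linear")
    nearest = griddata((ys, xs), vals, (grid_y, grid_x), method="nearest")
    return np.where(np.isnan(coarse), nearest, coarse)
```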
2501.00962 | Sepehr Dehdashtian | Sepehr Dehdashtian, Gautam Sreekumar, Vishnu Naresh Boddeti | OASIS Uncovers: High-Quality T2I Models, Same Old Stereotypes | Accepted as a Spotlight paper at ICLR 2025 | null | null | null | cs.CV cs.CY cs.LG | http://creativecommons.org/licenses/by/4.0/ | Images generated by text-to-image (T2I) models often exhibit visual biases
and stereotypes of concepts such as culture and profession. Existing
quantitative measures of stereotypes are based on statistical parity that does
not align with the sociological definition of stereotypes and, therefore,
incorrectly categorizes biases as stereotypes. Instead of oversimplifying
stereotypes as biases, we propose a quantitative measure of stereotypes that
aligns with its sociological definition. We then propose OASIS to measure the
stereotypes in a generated dataset and understand their origins within the T2I
model. OASIS includes two scores to measure stereotypes from a generated image
dataset: (M1) Stereotype Score to measure the distributional violation of
stereotypical attributes, and (M2) WALS to measure spectral variance in the
images along a stereotypical attribute. OASIS also includes two methods to
understand the origins of stereotypes in T2I models: (U1) StOP to discover
attributes that the T2I model internally associates with a given concept, and
(U2) SPI to quantify the emergence of stereotypical attributes in the latent
space of the T2I model during image generation. Despite the considerable
progress in image fidelity, using OASIS, we conclude that newer T2I models such
as FLUX.1 and SDv3 contain strong stereotypical predispositions about concepts
and still generate images with widespread stereotypical attributes.
Additionally, the quantity of stereotypes worsens for nationalities with lower
Internet footprints.
| [
{
"version": "v1",
"created": "Wed, 1 Jan 2025 21:47:52 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Feb 2025 18:04:37 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 14:31:49 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Dehdashtian",
"Sepehr",
""
],
[
"Sreekumar",
"Gautam",
""
],
[
"Boddeti",
"Vishnu Naresh",
""
]
]
| TITLE: OASIS Uncovers: High-Quality T2I Models, Same Old Stereotypes
ABSTRACT: Images generated by text-to-image (T2I) models often exhibit visual biases
and stereotypes of concepts such as culture and profession. Existing
quantitative measures of stereotypes are based on statistical parity that does
not align with the sociological definition of stereotypes and, therefore,
incorrectly categorizes biases as stereotypes. Instead of oversimplifying
stereotypes as biases, we propose a quantitative measure of stereotypes that
aligns with its sociological definition. We then propose OASIS to measure the
stereotypes in a generated dataset and understand their origins within the T2I
model. OASIS includes two scores to measure stereotypes from a generated image
dataset: (M1) Stereotype Score to measure the distributional violation of
stereotypical attributes, and (M2) WALS to measure spectral variance in the
images along a stereotypical attribute. OASIS also includes two methods to
understand the origins of stereotypes in T2I models: (U1) StOP to discover
attributes that the T2I model internally associates with a given concept, and
(U2) SPI to quantify the emergence of stereotypical attributes in the latent
space of the T2I model during image generation. Despite the considerable
progress in image fidelity, using OASIS, we conclude that newer T2I models such
as FLUX.1 and SDv3 contain strong stereotypical predispositions about concepts
and still generate images with widespread stereotypical attributes.
Additionally, the quantity of stereotypes worsens for nationalities with lower
Internet footprints.
| no_new_dataset | 0.766556 |
2501.06259 | Farina Riaz Dr | Farina Riaz, Fakhar Zaman, Hajime Suzuki, Sharif Abuadbba, David
Nguyen | Quantum Down Sampling Filter for Variational Auto-encoder | 18 pages, 13 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Variational autoencoders (VAEs) are fundamental for generative modeling and
image reconstruction, yet their performance often struggles to maintain high
fidelity in reconstructions. This study introduces a hybrid model, quantum
variational autoencoder (Q-VAE), which integrates quantum encoding within the
encoder while utilizing fully connected layers to extract meaningful
representations. The decoder uses transposed convolution layers for
up-sampling. The Q-VAE is evaluated against the classical VAE and the classical
direct-passing VAE, which utilizes windowed pooling filters. Results on the
MNIST and USPS datasets demonstrate that Q-VAE consistently outperforms
classical approaches, achieving lower Fr\'echet inception distance scores,
thereby indicating superior image fidelity and enhanced reconstruction quality.
These findings highlight the potential of Q-VAE for high-quality synthetic data
generation and improved image reconstruction in generative models.
| [
{
"version": "v1",
"created": "Thu, 9 Jan 2025 11:08:55 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Jan 2025 00:31:45 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Mar 2025 23:10:14 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Riaz",
"Farina",
""
],
[
"Zaman",
"Fakhar",
""
],
[
"Suzuki",
"Hajime",
""
],
[
"Abuadbba",
"Sharif",
""
],
[
"Nguyen",
"David",
""
]
]
| TITLE: Quantum Down Sampling Filter for Variational Auto-encoder
ABSTRACT: Variational autoencoders (VAEs) are fundamental for generative modeling and
image reconstruction, yet their performance often struggles to maintain high
fidelity in reconstructions. This study introduces a hybrid model, quantum
variational autoencoder (Q-VAE), which integrates quantum encoding within the
encoder while utilizing fully connected layers to extract meaningful
representations. The decoder uses transposed convolution layers for
up-sampling. The Q-VAE is evaluated against the classical VAE and the classical
direct-passing VAE, which utilizes windowed pooling filters. Results on the
MNIST and USPS datasets demonstrate that Q-VAE consistently outperforms
classical approaches, achieving lower Fr\'echet inception distance scores,
thereby indicating superior image fidelity and enhanced reconstruction quality.
These findings highlight the potential of Q-VAE for high-quality synthetic data
generation and improved image reconstruction in generative models.
| no_new_dataset | 0.948106 |
2501.06826 | Bolei Ma | Stephanie Eckman, Bolei Ma, Christoph Kern, Rob Chew, Barbara Plank,
Frauke Kreuter | Correcting Annotator Bias in Training Data: Population-Aligned Instance
Replication (PAIR) | null | null | null | null | stat.ME cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Models trained on crowdsourced labels may not reflect broader population
views, because those who work as annotators do not represent the population. We
propose Population-Aligned Instance Replication (PAIR), a method to address
bias caused by non-representative annotator pools. Using a simulation study of
offensive language and hate speech, we create two types of annotators with
different labeling tendencies and generate datasets with varying proportions of
the types. We observe that models trained on unbalanced annotator pools show
poor calibration compared to those trained on representative data. By
duplicating labels from underrepresented annotator groups to match population
proportions, PAIR reduces bias without collecting additional annotations. These
results suggest that statistical techniques from survey research can improve
model performance. We conclude with practical recommendations for improving the
representativity of training data and model performance.
| [
{
"version": "v1",
"created": "Sun, 12 Jan 2025 14:39:26 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 17:32:57 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Eckman",
"Stephanie",
""
],
[
"Ma",
"Bolei",
""
],
[
"Kern",
"Christoph",
""
],
[
"Chew",
"Rob",
""
],
[
"Plank",
"Barbara",
""
],
[
"Kreuter",
"Frauke",
""
]
]
| TITLE: Correcting Annotator Bias in Training Data: Population-Aligned Instance
Replication (PAIR)
ABSTRACT: Models trained on crowdsourced labels may not reflect broader population
views, because those who work as annotators do not represent the population. We
propose Population-Aligned Instance Replication (PAIR), a method to address
bias caused by non-representative annotator pools. Using a simulation study of
offensive language and hate speech, we create two types of annotators with
different labeling tendencies and generate datasets with varying proportions of
the types. We observe that models trained on unbalanced annotator pools show
poor calibration compared to those trained on representative data. By
duplicating labels from underrepresented annotator groups to match population
proportions, PAIR reduces bias without collecting additional annotations. These
results suggest that statistical techniques from survey research can improve
model performance. We conclude with practical recommendations for improving the
representativity of training data and model performance.
| no_new_dataset | 0.955486 |
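
The replication step PAIR describes is simple to sketch: duplicate examples from underrepresented annotator groups until group shares match population proportions. The bookkeeping below is an illustration; the paper's exact procedure may differ.

```python
# Upsample-only rebalancing: every group is replicated up to the size implied
# by the most over-represented group, so no annotations are discarded.
import random

def pair_replicate(examples, population_share, seed=0):
    """examples: dicts with an 'annotator_group' key.
    population_share: dict mapping group -> target population proportion."""
    rng = random.Random(seed)
    by_group = {}
    for ex in examples:
        by_group.setdefault(ex["annotator_group"], []).append(ex)
    scale = max(len(v) / population_share[g] for g, v in by_group.items())
    balanced = []
    for g, items in by_group.items():
        target = round(scale * population_share[g])
        balanced.extend(items)
        balanced.extend(rng.choices(items, k=max(0, target - len(items))))
    return balanced
```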
2501.11926 | Yunseo Nam | Yunseo Nam and Jiwook Choi | Multi-Modal Variable-Rate CSI Reconstruction for FDD Massive MIMO
Systems | null | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In frequency division duplex (FDD) systems, acquiring channel state
information (CSI) at the base station (BS) traditionally relies on limited
feedback from mobile terminals (MTs). However, the accuracy of channel
reconstruction from feedback CSI is inherently constrained by the
rate-distortion trade-off. To overcome this limitation, we propose a
multi-modal channel reconstruction framework that leverages auxiliary data,
such as RGB images or uplink CSI, collected at the BS. By integrating
contextual information from these modalities, the framework mitigates CSI
distortions caused by noise, compression, and quantization. At its core, the
framework utilizes an autoencoder network capable of generating variable-length
CSI, tailored for rate-adaptive multi-modal channel reconstruction. By
augmenting the foundational autoencoder network using a transfer learning-based
multi-modal fusion strategy, we enable accurate channel reconstruction in both
single-modal and multi-modal scenarios. To train and evaluate the network under
diverse and realistic wireless conditions, we construct a synthetic dataset
that pairs wireless channel data with sensor data through 3D modeling and ray
tracing. Simulation results demonstrate that the proposed framework achieves
near-optimal beamforming gains in 5G New Radio (5G NR)-compliant scenarios,
highlighting the potential of sensor data integration to improve CSI
reconstruction accuracy.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 07:02:19 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 05:07:11 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Nam",
"Yunseo",
""
],
[
"Choi",
"Jiwook",
""
]
]
| TITLE: Multi-Modal Variable-Rate CSI Reconstruction for FDD Massive MIMO
Systems
ABSTRACT: In frequency division duplex (FDD) systems, acquiring channel state
information (CSI) at the base station (BS) traditionally relies on limited
feedback from mobile terminals (MTs). However, the accuracy of channel
reconstruction from feedback CSI is inherently constrained by the
rate-distortion trade-off. To overcome this limitation, we propose a
multi-modal channel reconstruction framework that leverages auxiliary data,
such as RGB images or uplink CSI, collected at the BS. By integrating
contextual information from these modalities, the framework mitigates CSI
distortions caused by noise, compression, and quantization. At its core, the
framework utilizes an autoencoder network capable of generating variable-length
CSI, tailored for rate-adaptive multi-modal channel reconstruction. By
augmenting the foundational autoencoder network using a transfer learning-based
multi-modal fusion strategy, we enable accurate channel reconstruction in both
single-modal and multi-modal scenarios. To train and evaluate the network under
diverse and realistic wireless conditions, we construct a synthetic dataset
that pairs wireless channel data with sensor data through 3D modeling and ray
tracing. Simulation results demonstrate that the proposed framework achieves
near-optimal beamforming gains in 5G New Radio (5G NR)-compliant scenarios,
highlighting the potential of sensor data integration to improve CSI
reconstruction accuracy.
| new_dataset | 0.969871 |
2501.13983 | Fan Yang | Yang Fan | AdEval: Alignment-based Dynamic Evaluation to Mitigate Data
Contamination in Large Language Models | There are serious academic problems in this paper, such as data
falsification and plagiarism in the paper's method | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As Large Language Models (LLMs) are pretrained on massive-scale corpora, the
issue of data contamination has become increasingly severe, leading to
potential overestimation of model performance during evaluation. To address
this, we propose AdEval (Alignment-based Dynamic Evaluation), a dynamic data
evaluation method aimed at mitigating the impact of data contamination on
evaluation reliability. Experimental results on multiple datasets demonstrate
that AdEval effectively reduces the impact of data contamination on evaluation
outcomes, enhancing both the fairness and reliability of the evaluation
process.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2025 06:57:24 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 15:07:55 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 02:06:47 GMT"
},
{
"version": "v4",
"created": "Fri, 7 Mar 2025 09:02:42 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Fan",
"Yang",
""
]
]
| TITLE: AdEval: Alignment-based Dynamic Evaluation to Mitigate Data
Contamination in Large Language Models
ABSTRACT: As Large Language Models (LLMs) are pretrained on massive-scale corpora, the
issue of data contamination has become increasingly severe, leading to
potential overestimation of model performance during evaluation. To address
this, we propose AdEval (Alignment-based Dynamic Evaluation), a dynamic data
evaluation method aimed at mitigating the impact of data contamination on
evaluation reliability. Experimental results on multiple datasets demonstrate
that AdEval effectively reduces the impact of data contamination on evaluation
outcomes, enhancing both the fairness and reliability of the evaluation
process.
| no_new_dataset | 0.94801 |
2501.15387 | Edi Sutoyo | Edi Sutoyo, Paris Avgeriou, Andrea Capiluppi | Tracing the Lifecycle of Architecture Technical Debt in Software
Systems: A Dependency Approach | Accepted for publication at the 22nd IEEE International Conference on
Software Architecture (ICSA 2025) | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Architectural technical debt (ATD) represents trade-offs in software
architecture that accelerate initial development but create long-term
maintenance challenges. ATD, in particular when self-admitted, impacts the
foundational structure of software, making it difficult to detect and resolve.
This study investigates the lifecycle of ATD, focusing on how it affects i) the
connectivity between classes and ii) the frequency of file modifications. We
aim to understand how ATD evolves from introduction to repayment and its
implications on software architectures. Our empirical approach was applied to a
dataset of SATD items extracted from various software artifacts. We isolated
ATD instances, filtered for architectural indicators, and calculated
dependencies at different lifecycle stages using FAN-IN and FAN-OUT metrics.
Statistical analyses, including the Mann-Whitney U test and Cliff's Delta, were
used to assess the significance and effect size of connectivity and dependency
changes over time. We observed that ATD repayment increased class connectivity,
with FAN-IN increasing by 57.5% on average and FAN-OUT by 26.7%, suggesting a
shift toward centralization and increased architectural complexity after
repayment. Moreover, ATD files were modified less frequently than Non-ATD
files, with changes accumulated in high-dependency portions of the code. Our
study shows that resolving ATD improves software quality in the short term, but
can make the architecture more complex by centralizing dependencies. Also, even
if dependency metrics (like FAN-IN and FAN-OUT) can help understand the impact
of ATD, they should be combined with other measures to capture other effects of
ATD on software maintainability.
| [
{
"version": "v1",
"created": "Sun, 26 Jan 2025 03:58:57 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 13:55:01 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Sutoyo",
"Edi",
""
],
[
"Avgeriou",
"Paris",
""
],
[
"Capiluppi",
"Andrea",
""
]
]
| TITLE: Tracing the Lifecycle of Architecture Technical Debt in Software
Systems: A Dependency Approach
ABSTRACT: Architectural technical debt (ATD) represents trade-offs in software
architecture that accelerate initial development but create long-term
maintenance challenges. ATD, in particular when self-admitted, impacts the
foundational structure of software, making it difficult to detect and resolve.
This study investigates the lifecycle of ATD, focusing on how it affects i) the
connectivity between classes and ii) the frequency of file modifications. We
aim to understand how ATD evolves from introduction to repayment and its
implications on software architectures. Our empirical approach was applied to a
dataset of SATD items extracted from various software artifacts. We isolated
ATD instances, filtered for architectural indicators, and calculated
dependencies at different lifecycle stages using FAN-IN and FAN-OUT metrics.
Statistical analyses, including the Mann-Whitney U test and Cliff's Delta, were
used to assess the significance and effect size of connectivity and dependency
changes over time. We observed that ATD repayment increased class connectivity,
with FAN-IN increasing by 57.5% on average and FAN-OUT by 26.7%, suggesting a
shift toward centralization and increased architectural complexity after
repayment. Moreover, ATD files were modified less frequently than Non-ATD
files, with changes accumulated in high-dependency portions of the code. Our
study shows that resolving ATD improves software quality in the short term, but
can make the architecture more complex by centralizing dependencies. Also, even
if dependency metrics (like FAN-IN and FAN-OUT) can help understand the impact
of ATD, they should be combined with other measures to capture other effects of
ATD on software maintainability.
| no_new_dataset | 0.944893 |
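
The two dependency metrics the study relies on are easy to pin down concretely. A sketch, assuming dependency edges have already been extracted by static analysis:

```python
# FAN-IN of a class: how many classes depend on it.
# FAN-OUT of a class: how many classes it depends on.
from collections import Counter

def fan_metrics(dependency_edges):
    """dependency_edges: iterable of (source, target) pairs, meaning
    'source depends on target'."""
    edges = list(dependency_edges)
    fan_out = Counter(src for src, _ in edges)
    fan_in = Counter(dst for _, dst in edges)
    return fan_in, fan_out

fan_in, fan_out = fan_metrics([("A", "B"), ("C", "B"), ("B", "D")])
assert fan_in["B"] == 2 and fan_out["B"] == 1
```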
2501.18478 | Daniel Bermuth | Daniel Bermuth, Alexander Poeppel, Wolfgang Reif | SimpleDepthPose: Fast and Reliable Human Pose Estimation with
RGBD-Images | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | In the rapidly advancing domain of computer vision, accurately estimating the
poses of multiple individuals from various viewpoints remains a significant
challenge, especially when reliability is a key requirement. This paper
introduces a novel algorithm that excels in multi-view, multi-person pose
estimation by incorporating depth information. An extensive evaluation
demonstrates that the proposed algorithm not only generalizes well to unseen
datasets and shows fast runtime performance, but is also adaptable to
different keypoints. To support further research, all of the work is publicly
accessible.
| [
{
"version": "v1",
"created": "Thu, 30 Jan 2025 16:51:40 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 10:40:43 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Bermuth",
"Daniel",
""
],
[
"Poeppel",
"Alexander",
""
],
[
"Reif",
"Wolfgang",
""
]
]
| TITLE: SimpleDepthPose: Fast and Reliable Human Pose Estimation with
RGBD-Images
ABSTRACT: In the rapidly advancing domain of computer vision, accurately estimating the
poses of multiple individuals from various viewpoints remains a significant
challenge, especially when reliability is a key requirement. This paper
introduces a novel algorithm that excels in multi-view, multi-person pose
estimation by incorporating depth information. An extensive evaluation
demonstrates that the proposed algorithm not only generalizes well to unseen
datasets and shows fast runtime performance, but is also adaptable to
different keypoints. To support further research, all of the work is publicly
accessible.
| no_new_dataset | 0.944944 |
2502.05605 | Yongcheng Zeng | Yongcheng Zeng, Xinyu Cui, Xuanfa Jin, Guoqing Liu, Zexu Sun, Quan He,
Dong Li, Ning Yang, Jianye Hao, Haifeng Zhang, Jun Wang | ARIES: Stimulating Self-Refinement of Large Language Models by Iterative
Preference Optimization | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A truly intelligent Large Language Model (LLM) should be capable of
correcting errors in its responses through external interactions. However, even
the most advanced models often face challenges in improving their outputs. In
this paper, we explore how to cultivate LLMs with the self-refinement
capability through iterative preference training, and how this ability can be
leveraged to improve model performance during inference. To this end, we
introduce a novel post-training and inference framework, called ARIES: Adaptive
Refinement and Iterative Enhancement Structure. This method iteratively
performs preference training and self-refinement-based data collection. During
training, ARIES strengthens the model's direct question-answering capability
while simultaneously unlocking its self-refinement potential. During inference,
ARIES harnesses this self-refinement capability to generate a series of
progressively refined responses, which are then filtered using either the
Reward Model Scoring or a simple yet effective Rule-Based Selection mechanism,
specifically tailored to our approach, to construct a dataset for the next
round of preference training. Experimental results demonstrate the remarkable
performance of ARIES. When applied to the Llama-3.1-8B model and under the
self-refinement setting, ARIES surpasses powerful models such as GPT-4o,
achieving a 62.3% length-controlled (LC) win rate and a 63.3% raw win rate on AlpacaEval
2, outperforming Iterative DPO by 27.8% and 35.5% respectively, as well as a
50.3% win rate on Arena-Hard, surpassing Iterative DPO by 26.6%. Furthermore,
ARIES consistently enhances performance on mathematical reasoning tasks like
GSM8K and MATH.
| [
{
"version": "v1",
"created": "Sat, 8 Feb 2025 15:21:55 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 08:35:00 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Zeng",
"Yongcheng",
""
],
[
"Cui",
"Xinyu",
""
],
[
"Jin",
"Xuanfa",
""
],
[
"Liu",
"Guoqing",
""
],
[
"Sun",
"Zexu",
""
],
[
"He",
"Quan",
""
],
[
"Li",
"Dong",
""
],
[
"Yang",
"Ning",
""
],
[
"Hao",
"Jianye",
""
],
[
"Zhang",
"Haifeng",
""
],
[
"Wang",
"Jun",
""
]
]
| TITLE: ARIES: Stimulating Self-Refinement of Large Language Models by Iterative
Preference Optimization
ABSTRACT: A truly intelligent Large Language Model (LLM) should be capable of
correcting errors in its responses through external interactions. However, even
the most advanced models often face challenges in improving their outputs. In
this paper, we explore how to cultivate LLMs with the self-refinement
capability through iterative preference training, and how this ability can be
leveraged to improve model performance during inference. To this end, we
introduce a novel post-training and inference framework, called ARIES: Adaptive
Refinement and Iterative Enhancement Structure. This method iteratively
performs preference training and self-refinement-based data collection. During
training, ARIES strengthens the model's direct question-answering capability
while simultaneously unlocking its self-refinement potential. During inference,
ARIES harnesses this self-refinement capability to generate a series of
progressively refined responses, which are then filtered using either the
Reward Model Scoring or a simple yet effective Rule-Based Selection mechanism,
specifically tailored to our approach, to construct a dataset for the next
round of preference training. Experimental results demonstrate the remarkable
performance of ARIES. When applied to the Llama-3.1-8B model and under the
self-refinement setting, ARIES surpasses powerful models such as GPT-4o,
achieving a 62.3% length-controlled (LC) win rate and a 63.3% raw win rate on AlpacaEval
2, outperforming Iterative DPO by 27.8% and 35.5% respectively, as well as a
50.3% win rate on Arena-Hard, surpassing Iterative DPO by 26.6%. Furthermore,
ARIES consistently enhances performance on mathematical reasoning tasks like
GSM8K and MATH.
| no_new_dataset | 0.940243 |
2502.08080 | Neha Srikanth | Neha Srikanth, Rachel Rudinger | NLI under the Microscope: What Atomic Hypothesis Decomposition Reveals | Accepted to NAACL 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Decomposition of text into atomic propositions is a flexible framework
allowing for the closer inspection of input and output text. We use atomic
decomposition of hypotheses in two natural language reasoning tasks,
traditional NLI and defeasible NLI, to form atomic sub-problems, or granular
inferences that models must weigh when solving the overall problem. These
atomic sub-problems serve as a tool to further understand the structure of both
NLI and defeasible reasoning, probe a model's consistency and understanding of
different inferences, and measure the diversity of examples in benchmark
datasets. Our results indicate that LLMs still struggle with logical
consistency on atomic NLI and defeasible NLI sub-problems. Lastly, we identify
critical atomic sub-problems of defeasible NLI examples, or those that most
contribute to the overall label, and propose a method to measure the
inferential consistency of a model, a metric designed to capture the degree to
which a model makes consistently correct or incorrect predictions about the
same fact under different contexts.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2025 02:54:12 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 15:17:43 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Srikanth",
"Neha",
""
],
[
"Rudinger",
"Rachel",
""
]
]
| TITLE: NLI under the Microscope: What Atomic Hypothesis Decomposition Reveals
ABSTRACT: Decomposition of text into atomic propositions is a flexible framework
allowing for the closer inspection of input and output text. We use atomic
decomposition of hypotheses in two natural language reasoning tasks,
traditional NLI and defeasible NLI, to form atomic sub-problems, or granular
inferences that models must weigh when solving the overall problem. These
atomic sub-problems serve as a tool to further understand the structure of both
NLI and defeasible reasoning, probe a model's consistency and understanding of
different inferences, and measure the diversity of examples in benchmark
datasets. Our results indicate that LLMs still struggle with logical
consistency on atomic NLI and defeasible NLI sub-problems. Lastly, we identify
critical atomic sub-problems of defeasible NLI examples, or those that most
contribute to the overall label, and propose a method to measure the
inferential consistency of a model, a metric designed to capture the degree to
which a model makes consistently correct or incorrect predictions about the
same fact under different contexts.
| no_new_dataset | 0.936981 |
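
The inferential-consistency measure can be sketched from the abstract's description: group a model's predictions by the underlying atomic fact and score the fraction of facts on which correctness never flips across contexts. The exact definition in the paper may differ.

```python
# Consistency = fraction of facts whose correctness is stable across contexts.
from collections import defaultdict

def inferential_consistency(records):
    """records: (fact_id, prediction_is_correct) pairs, one per context in
    which the same atomic fact is probed."""
    by_fact = defaultdict(list)
    for fact_id, correct in records:
        by_fact[fact_id].append(correct)
    consistent = sum(1 for outcomes in by_fact.values()
                     if len(set(outcomes)) == 1)
    return consistent / len(by_fact)

# f1 is answered consistently, f2 flips across contexts -> score 0.5
score = inferential_consistency(
    [("f1", True), ("f1", True), ("f2", True), ("f2", False)])
```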
2502.14195 | Zhenyu Li | Tianyi Shang, Zhenyu Li, Pengjie Xu, Jinwei Qiao, Gang Chen, Zihan
Ruan, Weijun Hu | Bridging Text and Vision: A Multi-View Text-Vision Registration Approach
for Cross-Modal Place Recognition | 8 pages, 4 figures, conference | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Mobile robots necessitate advanced natural language understanding
capabilities to accurately identify locations and perform tasks such as package
delivery. However, traditional visual place recognition (VPR) methods rely
solely on single-view visual information and cannot interpret human language
descriptions. To overcome this challenge, we bridge text and vision by
proposing a multiview (360{\deg} views of the surroundings) text-vision
registration approach called Text4VPR for the place recognition task, which is the
first method that exclusively utilizes textual descriptions to match a database
of images. Text4VPR employs the frozen T5 language model to extract global
textual embeddings. Additionally, it utilizes the Sinkhorn algorithm with
temperature coefficient to assign local tokens to their respective clusters,
thereby aggregating visual descriptors from images. During the training stage,
Text4VPR emphasizes the alignment between individual text-image pairs for
precise textual description. In the inference stage, Text4VPR uses the Cascaded
Cross-Attention Cosine Alignment (CCCA) to address the internal mismatch
between text and image groups. Subsequently, Text4VPR performs precise place
matching based on the descriptions of text-image groups. On Street360Loc, the
first text-to-image VPR dataset we created, Text4VPR builds a robust baseline,
achieving a leading top-1 accuracy of 57% and a leading top-10 accuracy of 92%
within a 5-meter radius on the test set, which indicates that localization from
textual descriptions to images is not only feasible but also holds significant
potential for further advancement, as shown in Figure 1.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 02:00:02 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 12:30:18 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Shang",
"Tianyi",
""
],
[
"Li",
"Zhenyu",
""
],
[
"Xu",
"Pengjie",
""
],
[
"Qiao",
"Jinwei",
""
],
[
"Chen",
"Gang",
""
],
[
"Ruan",
"Zihan",
""
],
[
"Hu",
"Weijun",
""
]
]
| TITLE: Bridging Text and Vision: A Multi-View Text-Vision Registration Approach
for Cross-Modal Place Recognition
ABSTRACT: Mobile robots necessitate advanced natural language understanding
capabilities to accurately identify locations and perform tasks such as package
delivery. However, traditional visual place recognition (VPR) methods rely
solely on single-view visual information and cannot interpret human language
descriptions. To overcome this challenge, we bridge text and vision by
proposing a multiview (360{\deg} views of the surroundings) text-vision
registration approach called Text4VPR for the place recognition task, which is the
first method that exclusively utilizes textual descriptions to match a database
of images. Text4VPR employs the frozen T5 language model to extract global
textual embeddings. Additionally, it utilizes the Sinkhorn algorithm with a
temperature coefficient to assign local tokens to their respective clusters,
thereby aggregating visual descriptors from images. During the training stage,
Text4VPR emphasizes the alignment between individual text-image pairs for
precise textual description. In the inference stage, Text4VPR uses the Cascaded
Cross-Attention Cosine Alignment (CCCA) to address the internal mismatch
between text and image groups. Subsequently, Text4VPR performs precise place
matching based on the descriptions of text-image groups. On Street360Loc, the
first text-to-image VPR dataset we created, Text4VPR builds a robust baseline,
achieving a leading top-1 accuracy of 57% and a leading top-10 accuracy of 92%
within a 5-meter radius on the test set, which indicates that localization from
textual descriptions to images is not only feasible but also holds significant
potential for further advancement, as shown in Figure 1.
| new_dataset | 0.962321 |
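The Sinkhorn-with-temperature step used above to assign local tokens to clusters follows a standard pattern; the sketch below shows that pattern in NumPy. The temperature, iteration count, and balanced-assignment target are assumptions, since the paper's exact variant is not specified in the abstract.

```python
import numpy as np

def sinkhorn_assign(scores, temperature=0.1, n_iters=50):
    """Soft-assign N local tokens to K clusters: exponentiate similarity
    scores scaled by a temperature, then alternately normalize rows and
    columns toward a doubly stochastic (balanced) assignment."""
    log_p = scores / temperature
    p = np.exp(log_p - log_p.max())          # numerical stability
    for _ in range(n_iters):
        p /= p.sum(axis=1, keepdims=True)    # each token sums to 1
        p /= p.sum(axis=0, keepdims=True)    # balance cluster usage
    return p / p.sum(axis=1, keepdims=True)  # final row normalization

tokens_to_clusters = sinkhorn_assign(np.random.randn(32, 8))
print(tokens_to_clusters.shape)  # (32, 8), rows sum to 1
```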
2502.15755 | Vasco Guerra | Matilde Valente, Tiago C. Dias, Vasco Guerra and Rodrigo Ventura | Physics-consistent machine learning: output projection onto physical
manifolds | 23 pages, 6 figures | null | null | null | cs.LG cs.AI physics.plasm-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Data-driven machine learning models often require extensive datasets, which
can be costly or inaccessible, and their predictions may fail to comply with
established physical laws. Current approaches for incorporating physical priors
mitigate these issues by penalizing deviations from known physical laws, as in
physics-informed neural networks, or by designing architectures that
automatically satisfy specific invariants. However, penalization approaches do
not guarantee compliance with physical constraints for unseen inputs, and
invariant-based methods lack flexibility and generality. We propose a novel
physics-consistent machine learning method that directly enforces compliance
with physical principles by projecting model outputs onto the manifold defined
by these laws. This procedure ensures that predictions inherently adhere to the
chosen physical constraints, improving reliability and interpretability. Our
method is demonstrated on two systems: a spring-mass system and a
low-temperature reactive plasma. Compared to purely data-driven models, our
approach significantly reduces errors in physical law compliance, enhances
predictive accuracy of physical quantities, and outperforms alternatives when
working with simpler models or limited datasets. The proposed projection-based
technique is versatile and can function independently or in conjunction with
existing physics-informed neural networks, offering a powerful, general, and
scalable solution for developing fast and reliable surrogate models of complex
physical systems, particularly in resource-constrained scenarios.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 13:18:19 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 21:52:47 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Valente",
"Matilde",
""
],
[
"Dias",
"Tiago C.",
""
],
[
"Guerra",
"Vasco",
""
],
[
"Ventura",
"Rodrigo",
""
]
]
| TITLE: Physics-consistent machine learning: output projection onto physical
manifolds
ABSTRACT: Data-driven machine learning models often require extensive datasets, which
can be costly or inaccessible, and their predictions may fail to comply with
established physical laws. Current approaches for incorporating physical priors
mitigate these issues by penalizing deviations from known physical laws, as in
physics-informed neural networks, or by designing architectures that
automatically satisfy specific invariants. However, penalization approaches do
not guarantee compliance with physical constraints for unseen inputs, and
invariant-based methods lack flexibility and generality. We propose a novel
physics-consistent machine learning method that directly enforces compliance
with physical principles by projecting model outputs onto the manifold defined
by these laws. This procedure ensures that predictions inherently adhere to the
chosen physical constraints, improving reliability and interpretability. Our
method is demonstrated on two systems: a spring-mass system and a
low-temperature reactive plasma. Compared to purely data-driven models, our
approach significantly reduces errors in physical law compliance, enhances
predictive accuracy of physical quantities, and outperforms alternatives when
working with simpler models or limited datasets. The proposed projection-based
technique is versatile and can function independently or in conjunction with
existing physics-informed neural networks, offering a powerful, general, and
scalable solution for developing fast and reliable surrogate models of complex
physical systems, particularly in resource-constrained scenarios.
| no_new_dataset | 0.946646 |
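For the spring-mass system mentioned above, a concrete instance of projecting a model output onto a physical manifold is enforcing energy conservation. The radial rescaling below is one simple projection choice and not necessarily the paper's; k, m, and E0 are illustrative parameters.

```python
import numpy as np

def project_to_energy_manifold(x, v, E0, k=1.0, m=1.0):
    """Map a predicted spring-mass state (x, v) onto the manifold of
    constant total energy E0 = 0.5*k*x^2 + 0.5*m*v^2. Scaling both
    coordinates by s scales the energy by s^2, so s = sqrt(E0/E_pred)
    lands exactly on the constraint surface."""
    E_pred = 0.5 * k * x**2 + 0.5 * m * v**2
    s = np.sqrt(E0 / E_pred)
    return s * x, s * v

x_proj, v_proj = project_to_energy_manifold(x=1.1, v=0.4, E0=0.5)
print(0.5 * x_proj**2 + 0.5 * v_proj**2)  # ~0.5, constraint satisfied
```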
2502.19103 | Siwei Wu | Siwei Wu, Yizhi Li, Xingwei Qu, Rishi Ravikumar, Yucheng Li, Tyler
Loakman, Shanghaoran Quan, Xiaoyong Wei, Riza Batista-Navarro, Chenghua Lin | LongEval: A Comprehensive Analysis of Long-Text Generation Through a
Plan-based Paradigm | Under review | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have achieved remarkable success in various
natural language processing tasks, yet their ability to generate long-form
content remains poorly understood and evaluated. Our analysis reveals that
current LLMs struggle with length requirements and information density in
long-text generation, with performance deteriorating as text length increases.
To quantitatively locate such performance degradation and provide further
insights on model development, we present LongEval, a benchmark that evaluates
long-text generation through both direct and plan-based generation paradigms,
inspired by cognitive and linguistic writing models. The comprehensive
experiments in this work reveal interesting findings, for example that while
model size correlates with generation ability, a small-scale model well trained
on long texts (e.g., LongWriter) achieves comparable performance. All code
and datasets are released in https://github.com/Wusiwei0410/LongEval.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 12:46:36 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 11:05:01 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Wu",
"Siwei",
""
],
[
"Li",
"Yizhi",
""
],
[
"Qu",
"Xingwei",
""
],
[
"Ravikumar",
"Rishi",
""
],
[
"Li",
"Yucheng",
""
],
[
"Loakman",
"Tyler",
""
],
[
"Quan",
"Shanghaoran",
""
],
[
"Wei",
"Xiaoyong",
""
],
[
"Batista-Navarro",
"Riza",
""
],
[
"Lin",
"Chenghua",
""
]
]
| TITLE: LongEval: A Comprehensive Analysis of Long-Text Generation Through a
Plan-based Paradigm
ABSTRACT: Large Language Models (LLMs) have achieved remarkable success in various
natural language processing tasks, yet their ability to generate long-form
content remains poorly understood and evaluated. Our analysis reveals that
current LLMs struggle with length requirements and information density in
long-text generation, with performance deteriorating as text length increases.
To quantitatively locate such performance degradation and provide further
insights on model development, we present LongEval, a benchmark that evaluates
long-text generation through both direct and plan-based generation paradigms,
inspired by cognitive and linguistic writing models. The comprehensive
experiments in this work reveal interesting findings, for example that while
model size correlates with generation ability, a small-scale model well trained
on long texts (e.g., LongWriter) achieves comparable performance. All code
and datasets are released in https://github.com/Wusiwei0410/LongEval.
| new_dataset | 0.971913 |
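One way to make the length-requirement failures described above measurable is a simple compliance score; the function below is a hypothetical illustration, not LongEval's official metric, and the tolerance parameter is an assumption.

```python
def length_compliance(text, target_words, tolerance=0.1):
    """Score how well a generation meets a length requirement: 1.0 when
    the word count is within `tolerance` of the target, decaying
    linearly with relative deviation beyond that."""
    n = len(text.split())
    deviation = abs(n - target_words) / target_words
    return max(0.0, 1.0 - max(0.0, deviation - tolerance))

print(length_compliance("word " * 950, target_words=1000))  # 1.0, within 10%
print(length_compliance("word " * 500, target_words=1000))  # 0.6
```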
2502.19202 | Nghia Hieu Nguyen | Thanh-Phong Le, Trung Le Chi Phan, Nghia Hieu Nguyen, Kiet Van Nguyen | LiGT: Layout-infused Generative Transformer for Visual Question
Answering on Vietnamese Receipts | Accepted at IJDAR | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Document Visual Question Answering (Document VQA) challenges multimodal
systems to holistically handle textual, layout, and visual modalities to
provide appropriate answers. Document VQA has gained popularity in recent years
due to the increasing number of documents and the high demand for digitization.
Nonetheless, most document VQA datasets are developed in high-resource
languages such as English. In this paper, we present ReceiptVQA
(\textbf{Receipt} \textbf{V}isual \textbf{Q}uestion \textbf{A}nswering), the
first large-scale document VQA dataset in Vietnamese dedicated to receipts, a
document type with high commercial potential. The dataset encompasses
\textbf{9,000+} receipt images and \textbf{60,000+} manually annotated
question-answer pairs. In addition to our study, we introduce LiGT
(\textbf{L}ayout-\textbf{i}nfused \textbf{G}enerative \textbf{T}ransformer), a
layout-aware encoder-decoder architecture designed to leverage the embedding
layers of language models to process layout embeddings, minimizing the use of
additional neural modules. Experiments on ReceiptVQA show that our architecture
yielded promising performance, achieving competitive results compared with
outstanding baselines. Furthermore, in analyzing the experimental results, we
found clear evidence that encoder-only model architectures are at a
considerable disadvantage compared with architectures that can generate
answers. We also observed that it is necessary to combine multiple modalities
to tackle our dataset, despite the critical role of semantic understanding from
language models. We hope that our work will encourage and facilitate future
development in Vietnamese document VQA, contributing to a diverse multimodal
research community in the Vietnamese language.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 15:09:28 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 16:11:10 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Le",
"Thanh-Phong",
""
],
[
"Phan",
"Trung Le Chi",
""
],
[
"Nguyen",
"Nghia Hieu",
""
],
[
"Van Nguyen",
"Kiet",
""
]
]
| TITLE: LiGT: Layout-infused Generative Transformer for Visual Question
Answering on Vietnamese Receipts
ABSTRACT: Document Visual Question Answering (Document VQA) challenges multimodal
systems to holistically handle textual, layout, and visual modalities to
provide appropriate answers. Document VQA has gained popularity in recent years
due to the increasing number of documents and the high demand for digitization.
Nonetheless, most document VQA datasets are developed in high-resource
languages such as English. In this paper, we present ReceiptVQA
(\textbf{Receipt} \textbf{V}isual \textbf{Q}uestion \textbf{A}nswering), the
first large-scale document VQA dataset in Vietnamese dedicated to receipts, a
document type with high commercial potential. The dataset encompasses
\textbf{9,000+} receipt images and \textbf{60,000+} manually annotated
question-answer pairs. In addition to our study, we introduce LiGT
(\textbf{L}ayout-\textbf{i}nfused \textbf{G}enerative \textbf{T}ransformer), a
layout-aware encoder-decoder architecture designed to leverage the embedding
layers of language models to process layout embeddings, minimizing the use of
additional neural modules. Experiments on ReceiptVQA show that our architecture
yielded promising performance, achieving competitive results compared with
outstanding baselines. Furthermore, in analyzing the experimental results, we
found clear evidence that encoder-only model architectures are at a
considerable disadvantage compared with architectures that can generate
answers. We also observed that it is necessary to combine multiple modalities
to tackle our dataset, despite the critical role of semantic understanding from
language models. We hope that our work will encourage and facilitate future
development in Vietnamese document VQA, contributing to a diverse multimodal
research community in the Vietnamese language.
| new_dataset | 0.945801 |
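The layout-infusion idea of reusing a language model's embedding layers can be made concrete: learn small embedding tables for discretized bounding-box coordinates and add them to token embeddings. The module below is a minimal PyTorch illustration; the bucket count, hidden size, and additive combination are assumptions, not LiGT's exact design.

```python
import torch
import torch.nn as nn

class LayoutInfusedEmbedding(nn.Module):
    """Sketch of layout infusion: per-coordinate embedding tables over
    discretized bounding boxes, summed with the token embeddings."""
    def __init__(self, vocab_size, hidden, n_buckets=1000):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, hidden)
        self.x0 = nn.Embedding(n_buckets, hidden)
        self.y0 = nn.Embedding(n_buckets, hidden)
        self.x1 = nn.Embedding(n_buckets, hidden)
        self.y1 = nn.Embedding(n_buckets, hidden)

    def forward(self, token_ids, boxes):
        # boxes: (batch, seq, 4) integer coordinates in [0, n_buckets)
        return (self.tok(token_ids) + self.x0(boxes[..., 0]) +
                self.y0(boxes[..., 1]) + self.x1(boxes[..., 2]) +
                self.y1(boxes[..., 3]))

emb = LayoutInfusedEmbedding(vocab_size=100, hidden=32)
ids = torch.randint(0, 100, (2, 5))
boxes = torch.randint(0, 1000, (2, 5, 4))
print(emb(ids, boxes).shape)  # torch.Size([2, 5, 32])
```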
2502.19320 | Cornelius Emde | Cornelius Emde, Alasdair Paren, Preetham Arvind, Maxime Kayser, Tom
Rainforth, Thomas Lukasiewicz, Bernard Ghanem, Philip H.S. Torr, Adel Bibi | Shh, don't say that! Domain Certification in LLMs | 10 pages, includes appendix Published in International Conference on
Learning Representations (ICLR) 2025 | International Conference on Learning Representations (ICLR) 2025 | null | null | cs.CL cs.AI cs.CR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) are often deployed to perform constrained tasks,
with narrow domains. For example, customer support bots can be built on top of
LLMs, relying on their broad language understanding and capabilities to enhance
performance. However, these LLMs are adversarially susceptible, potentially
generating outputs outside the intended domain. To formalize, assess, and
mitigate this risk, we introduce domain certification; a guarantee that
accurately characterizes the out-of-domain behavior of language models. We then
propose a simple yet effective approach, which we call VALID that provides
adversarial bounds as a certificate. Finally, we evaluate our method across a
diverse set of datasets, demonstrating that it yields meaningful certificates,
which bound the probability of out-of-domain samples tightly with minimum
penalty to refusal behavior.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 17:13:19 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 21:49:11 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Emde",
"Cornelius",
""
],
[
"Paren",
"Alasdair",
""
],
[
"Arvind",
"Preetham",
""
],
[
"Kayser",
"Maxime",
""
],
[
"Rainforth",
"Tom",
""
],
[
"Lukasiewicz",
"Thomas",
""
],
[
"Ghanem",
"Bernard",
""
],
[
"Torr",
"Philip H. S.",
""
],
[
"Bibi",
"Adel",
""
]
]
| TITLE: Shh, don't say that! Domain Certification in LLMs
ABSTRACT: Large language models (LLMs) are often deployed to perform constrained tasks
in narrow domains. For example, customer support bots can be built on top of
LLMs, relying on their broad language understanding and capabilities to enhance
performance. However, these LLMs are adversarially susceptible, potentially
generating outputs outside the intended domain. To formalize, assess, and
mitigate this risk, we introduce domain certification; a guarantee that
accurately characterizes the out-of-domain behavior of language models. We then
propose a simple yet effective approach, which we call VALID that provides
adversarial bounds as a certificate. Finally, we evaluate our method across a
diverse set of datasets, demonstrating that it yields meaningful certificates,
which bound the probability of out-of-domain samples tightly with minimum
penalty to refusal behavior.
| no_new_dataset | 0.936923 |
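The abstract above does not detail the VALID certificate, but the flavor of a domain-certification test can be illustrated with a likelihood-ratio refusal rule; everything below, threshold included, is a toy stand-in rather than the paper's procedure.

```python
def should_refuse(logp_domain: float, logp_base: float,
                  threshold: float = 5.0) -> bool:
    """Toy domain-certification-style test: refuse whenever the
    in-domain model does not assign the input a sufficiently higher
    log-likelihood than a broad base model, so out-of-domain behavior
    is bounded by refusal rather than free generation."""
    return (logp_domain - logp_base) < threshold

print(should_refuse(logp_domain=-12.0, logp_base=-30.0))  # False: in-domain
print(should_refuse(logp_domain=-25.0, logp_base=-22.0))  # True: refuse
```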
2502.19723 | Yu Zhao | Yu Zhao and Songping Huang and Dongsheng Zhou and Zhaoyun Ding and Fei
Wang and Aixin Nian | CNsum: Automatic Summarization for Chinese News Text | This withdrawal is due to the lack of authorization from all
co-authors for the publication of this version | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Obtaining valuable information from massive data efficiently has become our
research goal in the era of Big Data. Text summarization technology has been
continuously developed to meet this demand. Recent work has also shown that
transformer-based pre-trained language models have achieved great success on
various tasks in Natural Language Processing (NLP). Addressing the problem of
Chinese news text summarization and the application of the Transformer
architecture to Chinese, this paper proposes a Chinese news text summarization
model (CNsum) based on the Transformer architecture and tests it on Chinese datasets
such as THUCNews. The results of the conducted experiments show that CNsum
achieves better ROUGE scores than the baseline models, verifying its superior
performance.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 03:25:34 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 15:07:28 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 14:56:45 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Zhao",
"Yu",
""
],
[
"Huang",
"Songping",
""
],
[
"Zhou",
"Dongsheng",
""
],
[
"Ding",
"Zhaoyun",
""
],
[
"Wang",
"Fei",
""
],
[
"Nian",
"Aixin",
""
]
]
| TITLE: CNsum: Automatic Summarization for Chinese News Text
ABSTRACT: Obtaining valuable information from massive data efficiently has become our
research goal in the era of Big Data. Text summarization technology has been
continuously developed to meet this demand. Recent work has also shown that
transformer-based pre-trained language models have achieved great success on
various tasks in Natural Language Processing (NLP). Addressing the problem of
Chinese news text summarization and the application of the Transformer
architecture to Chinese, this paper proposes a Chinese news text summarization
model (CNsum) based on the Transformer architecture and tests it on Chinese datasets
such as THUCNews. The results of the conducted experiments show that CNsum
achieves better ROUGE scores than the baseline models, verifying its superior
performance.
| no_new_dataset | 0.949763 |
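Since the comparison above rests on ROUGE, a minimal sketch of ROUGE-1 recall helps make the score concrete; real evaluations use the rouge-score package and proper Chinese tokenization, and treating characters as tokens here is a simplification.

```python
from collections import Counter

def rouge1_recall(reference_tokens, candidate_tokens):
    """ROUGE-1 recall: clipped unigram overlap divided by the number of
    reference unigrams. For Chinese, single characters are a common
    (if crude) tokenization choice."""
    ref, cand = Counter(reference_tokens), Counter(candidate_tokens)
    overlap = sum(min(ref[t], cand[t]) for t in ref)
    return overlap / max(1, sum(ref.values()))

print(rouge1_recall(list("今天天气很好"), list("天气好")))  # 0.5
```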
2502.20242 | Chao Feng | Chao Feng, Alberto Huertas Celdr\'an, Xi Cheng, G\'er\^ome Bovet,
Burkhard Stiller | GreenDFL: a Framework for Assessing the Sustainability of Decentralized
Federated Learning Systems | null | null | null | null | cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Decentralized Federated Learning (DFL) is an emerging paradigm that enables
collaborative model training without centralized data and model aggregation,
enhancing privacy and resilience. However, its sustainability remains
underexplored, as energy consumption and carbon emissions vary across different
system configurations. Understanding the environmental impact of DFL is crucial
for optimizing its design and deployment. This work aims to develop a
comprehensive and operational framework for assessing the sustainability of DFL
systems. To this end, this work provides a systematic method for quantifying
energy consumption and carbon emissions, offering insights into improving the
sustainability of DFL. This work proposes GreenDFL, a fully implementable
framework that has been integrated into a real-world DFL platform. GreenDFL
systematically analyzes the impact of various factors, including hardware
accelerators, model architecture, communication medium, data distribution,
network topology, and federation size, on the sustainability of DFL systems.
In addition, a sustainability-aware aggregation algorithm (GreenDFL-SA) and a node
selection algorithm (GreenDFL-SN) are developed to optimize energy efficiency
and reduce carbon emissions in DFL training. Empirical experiments are
conducted on multiple datasets, measuring energy consumption and carbon
emissions at different phases of the DFL lifecycle. The proposed GreenDFL
provides a comprehensive and practical approach for assessing the
sustainability of DFL systems. Furthermore, it offers best practices for
improving environmental efficiency in DFL, making sustainability considerations
more actionable in real-world deployments.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 16:27:42 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 08:04:54 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Feng",
"Chao",
""
],
[
"Celdrán",
"Alberto Huertas",
""
],
[
"Cheng",
"Xi",
""
],
[
"Bovet",
"Gérôme",
""
],
[
"Stiller",
"Burkhard",
""
]
]
| TITLE: GreenDFL: a Framework for Assessing the Sustainability of Decentralized
Federated Learning Systems
ABSTRACT: Decentralized Federated Learning (DFL) is an emerging paradigm that enables
collaborative model training without centralized data and model aggregation,
enhancing privacy and resilience. However, its sustainability remains
underexplored, as energy consumption and carbon emissions vary across different
system configurations. Understanding the environmental impact of DFL is crucial
for optimizing its design and deployment. This work aims to develop a
comprehensive and operational framework for assessing the sustainability of DFL
systems. To this end, this work provides a systematic method for quantifying
energy consumption and carbon emissions, offering insights into improving the
sustainability of DFL. This work proposes GreenDFL, a fully implementable
framework that has been integrated into a real-world DFL platform. GreenDFL
systematically analyzes the impact of various factors, including hardware
accelerators, model architecture, communication medium, data distribution,
network topology, and federation size, on the sustainability of DFL systems.
In addition, a sustainability-aware aggregation algorithm (GreenDFL-SA) and a node
selection algorithm (GreenDFL-SN) are developed to optimize energy efficiency
and reduce carbon emissions in DFL training. Empirical experiments are
conducted on multiple datasets, measuring energy consumption and carbon
emissions at different phases of the DFL lifecycle. The proposed GreenDFL
provides a comprehensive and practical approach for assessing the
sustainability of DFL systems. Furthermore, it offers best practices for
improving environmental efficiency in DFL, making sustainability considerations
more actionable in real-world deployments.
| no_new_dataset | 0.948965 |
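GreenDFL-SA's internals are not given in the abstract; as a hypothetical illustration of sustainability-aware aggregation, the sketch below down-weights updates from high-carbon nodes, with the exponential weighting and alpha parameter as assumptions.

```python
import numpy as np

def green_aggregate(updates, carbon_costs, alpha=1.0):
    """Hypothetical sustainability-aware aggregation: weight each
    neighbor's model update inversely to its estimated carbon cost, so
    greener nodes contribute more to the aggregated model."""
    weights = np.exp(-alpha * np.asarray(carbon_costs, dtype=float))
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

agg = green_aggregate([np.ones(3), np.zeros(3)], carbon_costs=[0.2, 0.9])
print(agg)  # biased toward the low-carbon node's update
```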
2502.21314 | Junyan Wang | Zhiyu Tan, Junyan Wang, Hao Yang, Luozheng Qin, Hesen Chen, Qiang
Zhou, Hao Li | Raccoon: Multi-stage Diffusion Training with Coarse-to-Fine Curating
Videos | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Text-to-video generation has demonstrated promising progress with the advent
of diffusion models, yet existing approaches are limited by dataset quality and
computational resources. To address these limitations, this paper presents a
comprehensive approach that advances both data curation and model design. We
introduce CFC-VIDS-1M, a high-quality video dataset constructed through a
systematic coarse-to-fine curation pipeline. The pipeline first evaluates video
quality across multiple dimensions, followed by a fine-grained stage that
leverages vision-language models to enhance text-video alignment and semantic
richness. Building upon the curated dataset's emphasis on visual quality and
temporal coherence, we develop RACCOON, a transformer-based architecture with
decoupled spatial-temporal attention mechanisms. The model is trained through a
progressive four-stage strategy designed to efficiently handle the complexities
of video generation. Extensive experiments demonstrate that our integrated
approach of high-quality data curation and efficient training strategy
generates visually appealing and temporally coherent videos while maintaining
computational efficiency. We will release our dataset, code, and models.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 18:56:35 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 06:46:50 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Tan",
"Zhiyu",
""
],
[
"Wang",
"Junyan",
""
],
[
"Yang",
"Hao",
""
],
[
"Qin",
"Luozheng",
""
],
[
"Chen",
"Hesen",
""
],
[
"Zhou",
"Qiang",
""
],
[
"Li",
"Hao",
""
]
]
| TITLE: Raccoon: Multi-stage Diffusion Training with Coarse-to-Fine Curating
Videos
ABSTRACT: Text-to-video generation has demonstrated promising progress with the advent
of diffusion models, yet existing approaches are limited by dataset quality and
computational resources. To address these limitations, this paper presents a
comprehensive approach that advances both data curation and model design. We
introduce CFC-VIDS-1M, a high-quality video dataset constructed through a
systematic coarse-to-fine curation pipeline. The pipeline first evaluates video
quality across multiple dimensions, followed by a fine-grained stage that
leverages vision-language models to enhance text-video alignment and semantic
richness. Building upon the curated dataset's emphasis on visual quality and
temporal coherence, we develop RACCOON, a transformer-based architecture with
decoupled spatial-temporal attention mechanisms. The model is trained through a
progressive four-stage strategy designed to efficiently handle the complexities
of video generation. Extensive experiments demonstrate that our integrated
approach of high-quality data curation and efficient training strategy
generates visually appealing and temporally coherent videos while maintaining
computational efficiency. We will release our dataset, code, and models.
| new_dataset | 0.955236 |
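The coarse-to-fine curation pipeline described above can be summarized as filter-then-recaption; the sketch below shows that control flow with stand-in scorer and captioner callables, since the paper's concrete quality dimensions and vision-language model are not specified in the abstract.

```python
def curate(videos, quality_scorers, vlm_captioner, threshold=0.5):
    """Sketch of coarse-to-fine curation: first keep videos that pass
    every quality scorer (the coarse stage), then re-caption survivors
    with a vision-language model to improve text-video alignment (the
    fine stage). All components are stand-ins."""
    kept = [v for v in videos
            if all(score(v) >= threshold for score in quality_scorers)]
    return [(v, vlm_captioner(v)) for v in kept]

videos = ["clip_a", "clip_b"]
print(curate(videos, [lambda v: 0.8], lambda v: f"caption for {v}"))
```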
2503.00198 | Ivan Ezhov | Martin Hartenberger, Huzeyfe Ayaz, Fatih Ozlugedik, Charly Caredda,
Luca Giannoni, Fr\'ed\'eric Lange, Laurin Lux, Jonas Weidner, Alex Berger,
Florian Kofler, Martin Menten, Bruno Montcel, Ilias Tachtsidis, Daniel
Rueckert, Ivan Ezhov | Redefining spectral unmixing for in-vivo brain tissue analysis from
hyperspectral imaging | null | null | null | null | physics.med-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a methodology for extracting molecular tumor
biomarkers from hyperspectral imaging (HSI), an emerging technology for
intraoperative tissue assessment. To achieve this, we employ spectral unmixing,
which allows us to decompose the spectral signals recorded by the HSI camera into
their constituent molecular components. Traditional unmixing approaches are
based on physical models that establish a relationship between tissue molecules
and the recorded spectra. However, these methods commonly assume a linear
relationship between the spectra and molecular content, which does not capture
the whole complexity of light-matter interaction. To address this limitation,
we introduce a novel unmixing procedure that accounts for non-linear optical
effects while preserving the computational benefits of
linear spectral unmixing. We validate our methodology on an in-vivo brain
tissue HSI dataset and demonstrate that the extracted molecular information
leads to superior classification performance.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 21:35:56 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 20:48:54 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Hartenberger",
"Martin",
""
],
[
"Ayaz",
"Huzeyfe",
""
],
[
"Ozlugedik",
"Fatih",
""
],
[
"Caredda",
"Charly",
""
],
[
"Giannoni",
"Luca",
""
],
[
"Lange",
"Frédéric",
""
],
[
"Lux",
"Laurin",
""
],
[
"Weidner",
"Jonas",
""
],
[
"Berger",
"Alex",
""
],
[
"Kofler",
"Florian",
""
],
[
"Menten",
"Martin",
""
],
[
"Montcel",
"Bruno",
""
],
[
"Tachtsidis",
"Ilias",
""
],
[
"Rueckert",
"Daniel",
""
],
[
"Ezhov",
"Ivan",
""
]
]
| TITLE: Redefining spectral unmixing for in-vivo brain tissue analysis from
hyperspectral imaging
ABSTRACT: In this paper, we propose a methodology for extracting molecular tumor
biomarkers from hyperspectral imaging (HSI), an emerging technology for
intraoperative tissue assessment. To achieve this, we employ spectral unmixing,
which allows us to decompose the spectral signals recorded by the HSI camera into
their constituent molecular components. Traditional unmixing approaches are
based on physical models that establish a relationship between tissue molecules
and the recorded spectra. However, these methods commonly assume a linear
relationship between the spectra and molecular content, which does not capture
the whole complexity of light-matter interaction. To address this limitation,
we introduce a novel unmixing procedure that accounts for non-linear optical
effects while preserving the computational benefits of
linear spectral unmixing. We validate our methodology on an in-vivo brain
tissue HSI dataset and demonstrate that the extracted molecular information
leads to superior classification performance.
| no_new_dataset | 0.930332 |
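The linear-unmixing baseline that the paper extends solves y = Ec for non-negative concentrations c given known component spectra E; the non-negative least-squares sketch below shows that baseline with random stand-in spectra (the paper's non-linear correction is not reproduced here).

```python
import numpy as np
from scipy.optimize import nnls

# Linear unmixing: recorded spectrum y (W wavelengths) modeled as E @ c,
# where E holds the absorption spectra of M molecular components and
# c >= 0 are their concentrations.
W, M = 100, 3
E = np.abs(np.random.randn(W, M))   # stand-in endmember spectra
c_true = np.array([0.5, 0.2, 0.3])
y = E @ c_true
c_est, residual = nnls(E, y)        # non-negative least squares
print(np.round(c_est, 3))           # ~ [0.5, 0.2, 0.3]
```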
2503.00357 | Yu-Ting Zhan | Yu-Ting Zhan, Cheng-Yuan Ho, Hebi Yang, Yi-Hsin Chen, Jui Chiu Chiang,
Yu-Lun Liu, Wen-Hsiao Peng | CAT-3DGS: A Context-Adaptive Triplane Approach to
Rate-Distortion-Optimized 3DGS Compression | Accepted for Publication in International Conference on Learning
Representations (ICLR) | ICLR 2025 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | 3D Gaussian Splatting (3DGS) has recently emerged as a promising 3D
representation. Much research has been focused on reducing its storage
requirements and memory footprint. However, the need to compress and transmit
the 3DGS representation to a remote receiver has been overlooked. This new application
calls for rate-distortion-optimized 3DGS compression. How to quantize and
entropy encode sparse Gaussian primitives in the 3D space remains largely
unexplored. A few early attempts resort to the hyperprior framework from
learned image compression. However, they fail to fully utilize the inter- and
intra-correlation inherent in Gaussian primitives. Built on ScaffoldGS, this work,
termed CAT-3DGS, introduces a context-adaptive triplane approach to their
rate-distortion-optimized coding. It features multi-scale triplanes, oriented
according to the principal axes of Gaussian primitives in the 3D space, to
capture their inter-correlation (i.e., spatial correlation) for spatial
autoregressive coding in the projected 2D planes. With these triplanes serving
as the hyperprior, we further perform channel-wise autoregressive coding to
leverage the intra-correlation within each individual Gaussian primitive. Our
CAT-3DGS incorporates a view frequency-aware masking mechanism. It actively
skips coding those Gaussian primitives that potentially have little impact
on the rendering quality. When trained end-to-end to strike a good
rate-distortion trade-off, our CAT-3DGS achieves the state-of-the-art
compression performance on the commonly used real-world datasets.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 05:42:52 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 06:20:13 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Zhan",
"Yu-Ting",
""
],
[
"Ho",
"Cheng-Yuan",
""
],
[
"Yang",
"Hebi",
""
],
[
"Chen",
"Yi-Hsin",
""
],
[
"Chiang",
"Jui Chiu",
""
],
[
"Liu",
"Yu-Lun",
""
],
[
"Peng",
"Wen-Hsiao",
""
]
]
| TITLE: CAT-3DGS: A Context-Adaptive Triplane Approach to
Rate-Distortion-Optimized 3DGS Compression
ABSTRACT: 3D Gaussian Splatting (3DGS) has recently emerged as a promising 3D
representation. Much research has been focused on reducing its storage
requirements and memory footprint. However, the need to compress and transmit
the 3DGS representation to a remote receiver has been overlooked. This new application
calls for rate-distortion-optimized 3DGS compression. How to quantize and
entropy encode sparse Gaussian primitives in the 3D space remains largely
unexplored. A few early attempts resort to the hyperprior framework from
learned image compression. However, they fail to fully utilize the inter- and
intra-correlation inherent in Gaussian primitives. Built on ScaffoldGS, this work,
termed CAT-3DGS, introduces a context-adaptive triplane approach to their
rate-distortion-optimized coding. It features multi-scale triplanes, oriented
according to the principal axes of Gaussian primitives in the 3D space, to
capture their inter-correlation (i.e., spatial correlation) for spatial
autoregressive coding in the projected 2D planes. With these triplanes serving
as the hyperprior, we further perform channel-wise autoregressive coding to
leverage the intra-correlation within each individual Gaussian primitive. Our
CAT-3DGS incorporates a view frequency-aware masking mechanism. It actively
skips coding those Gaussian primitives that potentially have little impact
on the rendering quality. When trained end-to-end to strike a good
rate-distortion trade-off, our CAT-3DGS achieves the state-of-the-art
compression performance on the commonly used real-world datasets.
| no_new_dataset | 0.941708 |
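Training "end-to-end to strike a good rate-distortion trade-off" means optimizing distortion plus a weighted bit cost; the sketch below shows that generic objective in PyTorch, with MSE distortion, the lambda value, and the bit estimate all as illustrative stand-ins for CAT-3DGS's actual terms.

```python
import torch

def rd_loss(rendered, target, bits, lam=0.01):
    """Generic rate-distortion objective: distortion (here MSE between
    rendered and ground-truth views) plus lambda times the estimated
    bit cost of the coded Gaussian primitives."""
    distortion = torch.mean((rendered - target) ** 2)
    return distortion + lam * bits

loss = rd_loss(torch.rand(3, 64, 64), torch.rand(3, 64, 64),
               bits=torch.tensor(1.2e4))
print(loss.item())
```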
2503.00435 | Maria Lymperaiou | Andreas Evangelatos, Giorgos Filandrianos, Maria Lymperaiou,
Athanasios Voulodimos, Giorgos Stamou | AILS-NTUA at SemEval-2025 Task 8: Language-to-Code prompting and Error
Fixing for Tabular Question Answering | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper, we present our submission to SemEval-2025 Task 8: Question
Answering over Tabular Data. This task, evaluated on the DataBench dataset,
assesses Large Language Models' (LLMs) ability to answer natural language
questions over structured data while addressing topic diversity and table size
limitations in previous benchmarks. We propose a system that employs effective
LLM prompting to translate natural language queries into executable code,
enabling accurate responses, error correction, and interpretability. Our
approach ranks first in both subtasks of the competition in the proprietary
model category, significantly outperforming the organizer's baseline.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 10:24:42 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 14:33:10 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Evangelatos",
"Andreas",
""
],
[
"Filandrianos",
"Giorgos",
""
],
[
"Lymperaiou",
"Maria",
""
],
[
"Voulodimos",
"Athanasios",
""
],
[
"Stamou",
"Giorgos",
""
]
]
| TITLE: AILS-NTUA at SemEval-2025 Task 8: Language-to-Code prompting and Error
Fixing for Tabular Question Answering
ABSTRACT: In this paper, we present our submission to SemEval-2025 Task 8: Question
Answering over Tabular Data. This task, evaluated on the DataBench dataset,
assesses Large Language Models' (LLMs) ability to answer natural language
questions over structured data while addressing topic diversity and table size
limitations in previous benchmarks. We propose a system that employs effective
LLM prompting to translate natural language queries into executable code,
enabling accurate responses, error correction, and interpretability. Our
approach ranks first in both subtasks of the competition in the proprietary
model category, significantly outperforming the organizer's baseline.
| no_new_dataset | 0.948585 |
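The language-to-code-with-error-fixing pattern described above typically runs as a generate-execute-repair loop; the sketch below shows that loop, where `llm` is an assumed prompt-to-code callable and the convention that generated code sets an `answer` variable is an illustrative choice, not the submission's exact protocol.

```python
import pandas as pd

def answer_with_error_fixing(question, table, llm, max_retries=3):
    """Ask an LLM for pandas code answering `question` over `table`,
    execute it, and on failure feed the traceback back for a corrected
    attempt."""
    prompt = f"Write Python using DataFrame `df` to answer: {question}"
    for _ in range(max_retries):
        code = llm(prompt)
        try:
            scope = {"df": table}
            exec(code, scope)          # generated code sets `answer`
            return scope["answer"]
        except Exception as e:
            prompt += f"\nPrevious code failed with: {e!r}. Fix it."
    return None

fake_llm = lambda prompt: "answer = df['price'].mean()"  # stand-in LLM
df = pd.DataFrame({"price": [1.0, 3.0]})
print(answer_with_error_fixing("average price?", df, fake_llm))  # 2.0
```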
2503.00691 | Seonghyeon Lee | Seonghyeon Lee, Heejae Chon, Joonwon Jang, Dongha Lee, Hwanjo Yu | How Diversely Can Language Models Solve Problems? Exploring the
Algorithmic Diversity of Model-Generated Code | null | null | null | null | cs.SE cs.AI cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Language models (LMs) have exhibited impressive abilities in generating code
from natural language requirements. In this work, we highlight the diversity of
code generated by LMs as a critical criterion for evaluating their code
generation capabilities. There is a lack of studies focused on assessing the
diversity of generated code, which overlooks its importance in code LMs.
Therefore, we propose a systematic approach to evaluate code diversity,
introducing various metrics with inter-code similarity. Specifically, we
introduce code clustering methods that leverage LMs' capabilities in code
understanding and reasoning, resulting in a set of metrics that represent the
number of algorithms in model-generated solutions. We extensively investigate
the properties of model-generated solutions by contrasting them with
human-written ones and quantifying the impact of various factors on code
diversity: model size, temperature, instruction tuning, and problem complexity.
Our analysis demonstrates that model-generated solutions exhibit low
algorithmic diversity, which was neglected by the research community. Moreover,
we explore methods to increase code diversity by combining solutions from
different models and increasing sampling temperatures. Our findings highlight
that code diversity can be enhanced with the help of heterogeneous models and
by setting the temperature beyond 1.0, a regime not fully explored so far due
to the degradation in functional correctness. To facilitate our research direction, we
publicly share our code and datasets through open-source repositories.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 02:04:58 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 05:38:47 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Lee",
"Seonghyeon",
""
],
[
"Chon",
"Heejae",
""
],
[
"Jang",
"Joonwon",
""
],
[
"Lee",
"Dongha",
""
],
[
"Yu",
"Hwanjo",
""
]
]
| TITLE: How Diversely Can Language Models Solve Problems? Exploring the
Algorithmic Diversity of Model-Generated Code
ABSTRACT: Language models (LMs) have exhibited impressive abilities in generating code
from natural language requirements. In this work, we highlight the diversity of
code generated by LMs as a critical criterion for evaluating their code
generation capabilities. There is a lack of studies focused on assessing the
diversity of generated code, which overlooks its importance in code LMs.
Therefore, we propose a systematic approach to evaluate code diversity,
introducing various metrics with inter-code similarity. Specifically, we
introduce code clustering methods that leverage LMs' capabilities in code
understanding and reasoning, resulting in a set of metrics that represent the
number of algorithms in model-generated solutions. We extensively investigate
the properties of model-generated solutions by contrasting them with
human-written ones and quantifying the impact of various factors on code
diversity: model size, temperature, instruction tuning, and problem complexity.
Our analysis demonstrates that model-generated solutions exhibit low
algorithmic diversity, which was neglected by the research community. Moreover,
we explore methods to increase code diversity by combining solutions from
different models and increasing sampling temperatures. Our findings highlight
that code diversity can be enhanced with the help of heterogeneous models and
by setting the temperature beyond 1.0, a regime not fully explored so far due
to the degradation in functional correctness. To facilitate our research direction, we
publicly share our code and datasets through open-source repositories.
| no_new_dataset | 0.943034 |
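A count of distinct algorithms among sampled solutions, the kind of metric the abstract describes, can be built on any pairwise same-algorithm judgment; in the paper that judgment comes from an LM, while the sketch below accepts an arbitrary boolean callable and uses greedy clustering as a simplification.

```python
def algorithmic_diversity(solutions, same_algorithm):
    """Count distinct algorithm clusters among generated solutions by
    greedy clustering under a pairwise judgment same_algorithm(a, b)."""
    clusters = []
    for s in solutions:
        for c in clusters:
            if same_algorithm(c[0], s):
                c.append(s)
                break
        else:
            clusters.append([s])
    return len(clusters)

print(algorithmic_diversity(["iterative", "recursive", "iterativ2"],
                            lambda a, b: a[:5] == b[:5]))  # -> 2
```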
2503.01155 | Yiqun Zhang | Yiqun Zhang, Peng Ye, Xiaocui Yang, Shi Feng, Shufei Zhang, Lei Bai,
Wanli Ouyang, Shuyue Hu | Nature-Inspired Population-Based Evolution of Large Language Models | preprint | null | null | null | cs.CL cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evolution, the engine behind the survival and growth of life on Earth,
operates through the population-based process of reproduction. Inspired by this
principle, this paper formally defines a newly emerging problem -- the
population-based evolution of large language models (LLMs) -- and introduces a
novel framework. Starting with a population of parent LLMs, our framework
enables the population to evolve through four key operations: (i) crossover,
merging the weights of different parents to create offspring LLMs, (ii)
mutation, introducing small, random changes to model weights to foster
diversity, (iii) selection, prioritizing high-performing models, and (iv)
succession, transferring the learned experience from parent to offspring LLMs.
With only 200 samples per new task, the LLM population evolves rapidly to adapt
to the task at hand, without any gradients. Experiments on 12 datasets show
that our framework consistently outperforms existing multi-LLM merging and
adaptation methods, achieving accuracy gains of up to 54.8% over the best LLM
in the initial population. Moreover, our framework allows for the evolution of
LLMs across multiple new tasks simultaneously, scaling effectively with
populations of up to 40 LLMs, and even zero-shot generalization to unseen
held-out tasks. We have open-sourced the code on GitHub and released the
weights of 10 parent LLMs, fine-tuned from gemma-2-2b-it, on HuggingFace,
enabling reproduction of our proposed framework using just a single 4090 GPU
with 24GB memory, without any performance degradation.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 04:03:31 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Zhang",
"Yiqun",
""
],
[
"Ye",
"Peng",
""
],
[
"Yang",
"Xiaocui",
""
],
[
"Feng",
"Shi",
""
],
[
"Zhang",
"Shufei",
""
],
[
"Bai",
"Lei",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Hu",
"Shuyue",
""
]
]
| TITLE: Nature-Inspired Population-Based Evolution of Large Language Models
ABSTRACT: Evolution, the engine behind the survival and growth of life on Earth,
operates through the population-based process of reproduction. Inspired by this
principle, this paper formally defines a newly emerging problem -- the
population-based evolution of large language models (LLMs) -- and introduces a
novel framework. Starting with a population of parent LLMs, our framework
enables the population to evolve through four key operations: (i) crossover,
merging the weights of different parents to create offspring LLMs, (ii)
mutation, introducing small, random changes to model weights to foster
diversity, (iii) selection, prioritizing high-performing models, and (iv)
succession, transferring the learned experience from parent to offspring LLMs.
With only 200 samples per new task, the LLM population evolves rapidly to adapt
to the task at hand, without any gradients. Experiments on 12 datasets show
that our framework consistently outperforms existing multi-LLM merging and
adaptation methods, achieving accuracy gains of up to 54.8% over the best LLM
in the initial population. Moreover, our framework allows for the evolution of
LLMs across multiple new tasks simultaneously, scaling effectively with
populations of up to 40 LLMs, and even zero-shot generalization to unseen
held-out tasks. We have open-sourced the code on GitHub and released the
weights of 10 parent LLMs, fine-tuned from gemma-2-2b-it, on HuggingFace,
enabling reproduction of our proposed framework using just a single 4090 GPU
with 24GB memory, without any performance degradation.
| no_new_dataset | 0.948346 |
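The four operations above map naturally onto a classic evolutionary loop; the sketch below runs crossover as weight averaging, mutation as Gaussian noise, and truncation selection, with flat NumPy vectors standing in for LLM weights. Population size, noise scale, and merge rule are illustrative, not the paper's settings.

```python
import numpy as np

def evolve(population, fitness, n_gens=10, sigma=0.01, rng=None):
    """Population-based evolution over weight vectors: keep the fitter
    half (selection), average random parent pairs (crossover), and add
    small Gaussian noise (mutation)."""
    rng = rng or np.random.default_rng(0)
    for _ in range(n_gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]      # selection
        children = []
        for _ in range(len(population) - len(parents)):
            a, b = rng.choice(len(parents), 2, replace=False)
            child = 0.5 * (parents[a] + parents[b])       # crossover
            children.append(child + sigma * rng.standard_normal(child.shape))
        population = parents + children
    return max(population, key=fitness)

best = evolve([np.random.randn(4) for _ in range(8)],
              fitness=lambda w: -np.sum(w ** 2))
print(best)  # drifts toward the zero vector, the fitness optimum
```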
2503.01428 | Naifu Xue | Naifu Xue, Zhaoyang Jia, Jiahao Li, Bin Li, Yuan Zhang, Yan Lu | DLF: Extreme Image Compression with Dual-generative Latent Fusion | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies in extreme image compression have achieved remarkable
performance by compressing the tokens from generative tokenizers. However,
these methods often prioritize clustering common semantics within the dataset,
while overlooking the diverse details of individual objects. Consequently, this
results in suboptimal reconstruction fidelity, especially at low bitrates. To
address this issue, we introduce a Dual-generative Latent Fusion (DLF)
paradigm. DLF decomposes the latent into semantic and detail elements,
compressing them through two distinct branches. The semantic branch clusters
high-level information into compact tokens, while the detail branch encodes
perceptually critical details to enhance the overall fidelity. Additionally, we
propose a cross-branch interactive design to reduce redundancy between the two
branches, thereby minimizing the overall bit cost. Experimental results
demonstrate the impressive reconstruction quality of DLF even below 0.01 bits
per pixel (bpp). On the CLIC2020 test set, our method achieves bitrate savings
of up to 27.93% on LPIPS and 53.55% on DISTS compared to MS-ILLM. Furthermore,
DLF surpasses recent diffusion-based codecs in visual fidelity while
maintaining a comparable level of generative realism. Code will be available
later.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 11:29:35 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 08:21:10 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Xue",
"Naifu",
""
],
[
"Jia",
"Zhaoyang",
""
],
[
"Li",
"Jiahao",
""
],
[
"Li",
"Bin",
""
],
[
"Zhang",
"Yuan",
""
],
[
"Lu",
"Yan",
""
]
]
| TITLE: DLF: Extreme Image Compression with Dual-generative Latent Fusion
ABSTRACT: Recent studies in extreme image compression have achieved remarkable
performance by compressing the tokens from generative tokenizers. However,
these methods often prioritize clustering common semantics within the dataset,
while overlooking the diverse details of individual objects. Consequently, this
results in suboptimal reconstruction fidelity, especially at low bitrates. To
address this issue, we introduce a Dual-generative Latent Fusion (DLF)
paradigm. DLF decomposes the latent into semantic and detail elements,
compressing them through two distinct branches. The semantic branch clusters
high-level information into compact tokens, while the detail branch encodes
perceptually critical details to enhance the overall fidelity. Additionally, we
propose a cross-branch interactive design to reduce redundancy between the two
branches, thereby minimizing the overall bit cost. Experimental results
demonstrate the impressive reconstruction quality of DLF even below 0.01 bits
per pixel (bpp). On the CLIC2020 test set, our method achieves bitrate savings
of up to 27.93% on LPIPS and 53.55% on DISTS compared to MS-ILLM. Furthermore,
DLF surpasses recent diffusion-based codecs in visual fidelity while
maintaining a comparable level of generative realism. Code will be available
later.
| no_new_dataset | 0.948251 |
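A toy version of the semantic/detail decomposition above is to pool the latent for a coarse semantic component and keep the residual as detail; the split below is only meant to convey the idea, since DLF's branches involve tokenization and entropy coding not shown here.

```python
import torch
import torch.nn.functional as F

def dual_latent_split(latent, pool=4):
    """Toy dual-branch decomposition: a coarse semantic component
    (spatially pooled latent) plus a detail residual that restores what
    pooling discards. Real codecs quantize and entropy-code each branch
    separately."""
    semantic = F.avg_pool2d(latent, pool)
    upsampled = F.interpolate(semantic, scale_factor=pool)
    detail = latent - upsampled
    return semantic, detail

z = torch.rand(1, 8, 16, 16)
s, d = dual_latent_split(z)
print(s.shape, d.shape)  # (1, 8, 4, 4) (1, 8, 16, 16)
```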
2503.01565 | Yuheng Xu | Yuheng Xu, Shijie Yang, Xin Liu, Jie Liu, Jie Tang, Gangshan Wu | AutoLUT: LUT-Based Image Super-Resolution with Automatic Sampling and
Adaptive Residual Learning | Accepted by CVPR2025 | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the increasing popularity of Hi-DPI screens has driven a
rising demand for high-resolution images. However, the limited computational
power of edge devices poses a challenge in deploying complex super-resolution
neural networks, highlighting the need for efficient methods. While prior works
have made significant progress, they have not fully exploited pixel-level
information. Moreover, their reliance on fixed sampling patterns limits both
accuracy and the ability to capture fine details in low-resolution images. To
address these challenges, we introduce two plug-and-play modules designed to
capture and leverage pixel information effectively in Look-Up Table (LUT) based
super-resolution networks. Our method introduces Automatic Sampling
(AutoSample), a flexible LUT sampling approach where sampling weights are
automatically learned during training to adapt to pixel variations and expand
the receptive field without added inference cost. We also incorporate Adaptive
Residual Learning (AdaRL) to enhance inter-layer connections, enabling detailed
information flow and improving the network's ability to reconstruct fine
details. Our method achieves significant performance improvements on both MuLUT
and SPF-LUT while maintaining similar storage sizes. Specifically, for MuLUT,
we achieve a PSNR improvement of approximately +0.20 dB on average
across five datasets. For SPF-LUT, with more than a 50% reduction in storage
space and about a 2/3 reduction in inference time, our method still maintains
performance comparable to the original. The code is available at
https://github.com/SuperKenVery/AutoLUT.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:09:36 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 16:08:17 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Xu",
"Yuheng",
""
],
[
"Yang",
"Shijie",
""
],
[
"Liu",
"Xin",
""
],
[
"Liu",
"Jie",
""
],
[
"Tang",
"Jie",
""
],
[
"Wu",
"Gangshan",
""
]
]
| TITLE: AutoLUT: LUT-Based Image Super-Resolution with Automatic Sampling and
Adaptive Residual Learning
ABSTRACT: In recent years, the increasing popularity of Hi-DPI screens has driven a
rising demand for high-resolution images. However, the limited computational
power of edge devices poses a challenge in deploying complex super-resolution
neural networks, highlighting the need for efficient methods. While prior works
have made significant progress, they have not fully exploited pixel-level
information. Moreover, their reliance on fixed sampling patterns limits both
accuracy and the ability to capture fine details in low-resolution images. To
address these challenges, we introduce two plug-and-play modules designed to
capture and leverage pixel information effectively in Look-Up Table (LUT) based
super-resolution networks. Our method introduces Automatic Sampling
(AutoSample), a flexible LUT sampling approach where sampling weights are
automatically learned during training to adapt to pixel variations and expand
the receptive field without added inference cost. We also incorporate Adaptive
Residual Learning (AdaRL) to enhance inter-layer connections, enabling detailed
information flow and improving the network's ability to reconstruct fine
details. Our method achieves significant performance improvements on both MuLUT
and SPF-LUT while maintaining similar storage sizes. Specifically, for MuLUT,
we achieve a PSNR improvement of approximately +0.20 dB on average
across five datasets. For SPF-LUT, with more than a 50% reduction in storage
space and about a 2/3 reduction in inference time, our method still maintains
performance comparable to the original. The code is available at
https://github.com/SuperKenVery/AutoLUT.
| no_new_dataset | 0.94699 |
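The AutoSample idea of learning sampling weights instead of fixing the pattern can be caricatured as a learned, softmax-normalized neighborhood kernel; the module below is that caricature in PyTorch, with the kernel size and normalization as assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class AutoSample(nn.Module):
    """Sketch of automatic sampling: learn per-position weights over a
    local neighborhood so the effective receptive field adapts during
    training, instead of a fixed (e.g., 2x2) sampling pattern."""
    def __init__(self, k=3):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(1, 1, k, k))

    def forward(self, x):
        # x: (B, 1, H, W); softmax over the k*k kernel yields a learned
        # weighted sample of each pixel's neighborhood.
        w = torch.softmax(self.weights.flatten(), dim=0).view_as(self.weights)
        return nn.functional.conv2d(x, w, padding=self.weights.shape[-1] // 2)

y = AutoSample()(torch.rand(1, 1, 8, 8))
print(y.shape)  # torch.Size([1, 1, 8, 8])
```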
2503.01743 | Young Jin Kim | Microsoft: Abdelrahman Abouelenin, Atabak Ashfaq, Adam Atkinson, Hany
Awadalla, Nguyen Bach, Jianmin Bao, Alon Benhaim, Martin Cai, Vishrav
Chaudhary, Congcong Chen, Dong Chen, Dongdong Chen, Junkun Chen, Weizhu Chen,
Yen-Chun Chen, Yi-ling Chen, Qi Dai, Xiyang Dai, Ruchao Fan, Mei Gao, Min
Gao, Amit Garg, Abhishek Goswami, Junheng Hao, Amr Hendy, Yuxuan Hu, Xin Jin,
Mahmoud Khademi, Dongwoo Kim, Young Jin Kim, Gina Lee, Jinyu Li, Yunsheng Li,
Chen Liang, Xihui Lin, Zeqi Lin, Mengchen Liu, Yang Liu, Gilsinia Lopez,
Chong Luo, Piyush Madan, Vadim Mazalov, Arindam Mitra, Ali Mousavi, Anh
Nguyen, Jing Pan, Daniel Perez-Becker, Jacob Platin, Thomas Portet, Kai Qiu,
Bo Ren, Liliang Ren, Sambuddha Roy, Ning Shang, Yelong Shen, Saksham Singhal,
Subhojit Som, Xia Song, Tetyana Sych, Praneetha Vaddamanu, Shuohang Wang,
Yiming Wang, Zhenghao Wang, Haibin Wu, Haoran Xu, Weijian Xu, Yifan Yang,
Ziyi Yang, Donghan Yu, Ishmam Zabir, Jianwen Zhang, Li Lyna Zhang, Yunan
Zhang, Xiren Zhou | Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language
Models via Mixture-of-LoRAs | 39 pages | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | We introduce Phi-4-Mini and Phi-4-Multimodal, compact yet highly capable
language and multimodal models. Phi-4-Mini is a 3.8-billion-parameter language
model trained on high-quality web and synthetic data, significantly
outperforming recent open-source models of similar size and matching the
performance of models twice its size on math and coding tasks requiring complex
reasoning. This achievement is driven by a carefully curated synthetic data
recipe emphasizing high-quality math and coding datasets. Compared to its
predecessor, Phi-3.5-Mini, Phi-4-Mini features an expanded vocabulary size of
200K tokens to better support multilingual applications, as well as group query
attention for more efficient long-sequence generation. Phi-4-Multimodal is a
multimodal model that integrates text, vision, and speech/audio input
modalities into a single model. Its novel modality extension approach leverages
LoRA adapters and modality-specific routers to allow multiple inference modes
combining various modalities without interference. For example, it now ranks
first in the OpenASR leaderboard to date, although the LoRA component of the
speech/audio modality has just 460 million parameters. Phi-4-Multimodal
supports scenarios involving (vision + language), (vision + speech), and
(speech/audio) inputs, outperforming larger vision-language and speech-language
models on a wide range of tasks. Additionally, we experiment with further
training Phi-4-Mini to enhance its reasoning capabilities. Despite its compact
3.8-billion-parameter size, this experimental version achieves reasoning
performance on par with or surpassing significantly larger models, including
DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-Llama-8B.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 17:05:52 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 09:05:58 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Microsoft",
"",
""
],
[
":",
"",
""
],
[
"Abouelenin",
"Abdelrahman",
""
],
[
"Ashfaq",
"Atabak",
""
],
[
"Atkinson",
"Adam",
""
],
[
"Awadalla",
"Hany",
""
],
[
"Bach",
"Nguyen",
""
],
[
"Bao",
"Jianmin",
""
],
[
"Benhaim",
"Alon",
""
],
[
"Cai",
"Martin",
""
],
[
"Chaudhary",
"Vishrav",
""
],
[
"Chen",
"Congcong",
""
],
[
"Chen",
"Dong",
""
],
[
"Chen",
"Dongdong",
""
],
[
"Chen",
"Junkun",
""
],
[
"Chen",
"Weizhu",
""
],
[
"Chen",
"Yen-Chun",
""
],
[
"Chen",
"Yi-ling",
""
],
[
"Dai",
"Qi",
""
],
[
"Dai",
"Xiyang",
""
],
[
"Fan",
"Ruchao",
""
],
[
"Gao",
"Mei",
""
],
[
"Gao",
"Min",
""
],
[
"Garg",
"Amit",
""
],
[
"Goswami",
"Abhishek",
""
],
[
"Hao",
"Junheng",
""
],
[
"Hendy",
"Amr",
""
],
[
"Hu",
"Yuxuan",
""
],
[
"Jin",
"Xin",
""
],
[
"Khademi",
"Mahmoud",
""
],
[
"Kim",
"Dongwoo",
""
],
[
"Kim",
"Young Jin",
""
],
[
"Lee",
"Gina",
""
],
[
"Li",
"Jinyu",
""
],
[
"Li",
"Yunsheng",
""
],
[
"Liang",
"Chen",
""
],
[
"Lin",
"Xihui",
""
],
[
"Lin",
"Zeqi",
""
],
[
"Liu",
"Mengchen",
""
],
[
"Liu",
"Yang",
""
],
[
"Lopez",
"Gilsinia",
""
],
[
"Luo",
"Chong",
""
],
[
"Madan",
"Piyush",
""
],
[
"Mazalov",
"Vadim",
""
],
[
"Mitra",
"Arindam",
""
],
[
"Mousavi",
"Ali",
""
],
[
"Nguyen",
"Anh",
""
],
[
"Pan",
"Jing",
""
],
[
"Perez-Becker",
"Daniel",
""
],
[
"Platin",
"Jacob",
""
],
[
"Portet",
"Thomas",
""
],
[
"Qiu",
"Kai",
""
],
[
"Ren",
"Bo",
""
],
[
"Ren",
"Liliang",
""
],
[
"Roy",
"Sambuddha",
""
],
[
"Shang",
"Ning",
""
],
[
"Shen",
"Yelong",
""
],
[
"Singhal",
"Saksham",
""
],
[
"Som",
"Subhojit",
""
],
[
"Song",
"Xia",
""
],
[
"Sych",
"Tetyana",
""
],
[
"Vaddamanu",
"Praneetha",
""
],
[
"Wang",
"Shuohang",
""
],
[
"Wang",
"Yiming",
""
],
[
"Wang",
"Zhenghao",
""
],
[
"Wu",
"Haibin",
""
],
[
"Xu",
"Haoran",
""
],
[
"Xu",
"Weijian",
""
],
[
"Yang",
"Yifan",
""
],
[
"Yang",
"Ziyi",
""
],
[
"Yu",
"Donghan",
""
],
[
"Zabir",
"Ishmam",
""
],
[
"Zhang",
"Jianwen",
""
],
[
"Zhang",
"Li Lyna",
""
],
[
"Zhang",
"Yunan",
""
],
[
"Zhou",
"Xiren",
""
]
]
| TITLE: Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language
Models via Mixture-of-LoRAs
ABSTRACT: We introduce Phi-4-Mini and Phi-4-Multimodal, compact yet highly capable
language and multimodal models. Phi-4-Mini is a 3.8-billion-parameter language
model trained on high-quality web and synthetic data, significantly
outperforming recent open-source models of similar size and matching the
performance of models twice its size on math and coding tasks requiring complex
reasoning. This achievement is driven by a carefully curated synthetic data
recipe emphasizing high-quality math and coding datasets. Compared to its
predecessor, Phi-3.5-Mini, Phi-4-Mini features an expanded vocabulary size of
200K tokens to better support multilingual applications, as well as group query
attention for more efficient long-sequence generation. Phi-4-Multimodal is a
multimodal model that integrates text, vision, and speech/audio input
modalities into a single model. Its novel modality extension approach leverages
LoRA adapters and modality-specific routers to allow multiple inference modes
combining various modalities without interference. For example, it now ranks
first in the OpenASR leaderboard to date, although the LoRA component of the
speech/audio modality has just 460 million parameters. Phi-4-Multimodal
supports scenarios involving (vision + language), (vision + speech), and
(speech/audio) inputs, outperforming larger vision-language and speech-language
models on a wide range of tasks. Additionally, we experiment with further
training Phi-4-Mini to enhance its reasoning capabilities. Despite its compact
3.8-billion-parameter size, this experimental version achieves reasoning
performance on par with or surpassing significantly larger models, including
DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-Llama-8B.
| no_new_dataset | 0.951863 |
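The Mixture-of-LoRAs design above, frozen base weights plus modality-specific low-rank adapters selected by a router, can be sketched at the level of a single linear layer; the sizes, hard modality routing, and adapter shape below are illustrative stand-ins, not the report's architecture.

```python
import torch
import torch.nn as nn

class MixtureOfLoRAs(nn.Module):
    """Sketch of Mixture-of-LoRAs: a frozen base linear layer plus one
    low-rank adapter per modality; a simple router here is just picking
    the adapter matching the input's modality tag."""
    def __init__(self, d, r=8, modalities=("vision", "speech")):
        super().__init__()
        self.base = nn.Linear(d, d)
        for p in self.base.parameters():
            p.requires_grad = False                  # base stays frozen
        self.adapters = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(d, r, bias=False),
                             nn.Linear(r, d, bias=False))
            for m in modalities})

    def forward(self, x, modality):
        return self.base(x) + self.adapters[modality](x)

layer = MixtureOfLoRAs(d=16)
print(layer(torch.rand(2, 16), "vision").shape)  # torch.Size([2, 16])
```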
2503.01879 | Yingji Zhang | Che Liu, Yingji Zhang, Dong Zhang, Weijie Zhang, Chenggong Gong,
Haohan Li, Yu Lu, Shilin Zhou, Yue Lu, Ziliang Gan, Ziao Wang, Junwei Liao,
Haipang Wu, Ji Liu, Andr\'e Freitas, Qifan Wang, Zenglin Xu, Rongjuncheng
Zhang, Yong Dai | Nexus-O: An Omni-Perceptive And -Interactive Model for Language, Audio,
And Vision | null | null | null | null | cs.MM cs.CV cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Human beings perceive the real world through a spectrum of sensory
modalities, encompassing auditory, visual, and linguistic faculties. The
journey towards achieving Artificial General Intelligence (AGI) necessitates
the development of models that can emulate these multifaceted perceptual
capabilities and comprehensively understand these diversified data. To this
end, we introduce \textbf{Nexus-O}, an industry-level \textbf{omni-perceptive
and -interactive} model capable of efficiently processing Audio, Image, Video,
and Text data in any combination and outputting audio/text in an end-to-end way. We
systematically investigate Nexus-O by addressing three key research questions:
First, how can models be efficiently designed and trained to achieve tri-modal
alignment, understanding and reasoning capabilities across multiple modalities?
Second, what approaches can be implemented to evaluate tri-modal model
robustness, ensuring reliable performance and applicability in real-world
scenarios? Third, what strategies can be employed to curate and obtain
high-quality, real-life scenario speech datasets? For the first question, we
design and pre-train Nexus-O based on a vision-language model, rather than
a language model. By pre-training the model over high-quality synthetic audio
data, our model is capable of tri-modal perception and interaction. For the
second question, we introduce a new audio testbed, Nexus-O-audio, comprising
diverse Automatic Speech Recognition (ASR) samples, spanning various real-world
scenarios, such as corporate meetings and live streams. For the third question,
we design the speech data synthesis pipeline to obtain high-quality speech
training datasets, covering various real-world scenarios. Comprehensive
experimentation and an in-depth analysis of tri-modal alignment over latent
space demonstrate the advantages of our model on downstream tasks.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 17:26:36 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 09:21:40 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Liu",
"Che",
""
],
[
"Zhang",
"Yingji",
""
],
[
"Zhang",
"Dong",
""
],
[
"Zhang",
"Weijie",
""
],
[
"Gong",
"Chenggong",
""
],
[
"Li",
"Haohan",
""
],
[
"Lu",
"Yu",
""
],
[
"Zhou",
"Shilin",
""
],
[
"Lu",
"Yue",
""
],
[
"Gan",
"Ziliang",
""
],
[
"Wang",
"Ziao",
""
],
[
"Liao",
"Junwei",
""
],
[
"Wu",
"Haipang",
""
],
[
"Liu",
"Ji",
""
],
[
"Freitas",
"André",
""
],
[
"Wang",
"Qifan",
""
],
[
"Xu",
"Zenglin",
""
],
[
"Zhang",
"Rongjuncheng",
""
],
[
"Dai",
"Yong",
""
]
]
| TITLE: Nexus-O: An Omni-Perceptive And -Interactive Model for Language, Audio,
And Vision
ABSTRACT: Human beings perceive the real world through a spectrum of sensory
modalities, encompassing auditory, visual, and linguistic faculties. The
journey towards achieving Artificial General Intelligence (AGI) necessitates
the development of models that can emulate these multifaceted perceptual
capabilities and comprehensively understand these diversified data. To this
end, we introduce \textbf{Nexus-O}, an industry-level \textbf{omni-perceptive
and -interactive} model capable of efficiently processing Audio, Image, Video,
and Text data in any combination and outputting audio/text in an end-to-end way. We
systematically investigate Nexus-O by addressing three key research questions:
First, how can models be efficiently designed and trained to achieve tri-modal
alignment, understanding and reasoning capabilities across multiple modalities?
Second, what approaches can be implemented to evaluate tri-modal model
robustness, ensuring reliable performance and applicability in real-world
scenarios? Third, what strategies can be employed to curate and obtain
high-quality, real-life scenario speech datasets? For the first question, we
design and pre-train Nexus-O based on a vision-language model, rather than
a language model. By pre-training the model over high-quality synthetic audio
data, our model is capable of tri-modal perception and interaction. For the
second question, we introduce a new audio testbed, Nexus-O-audio, comprising
diverse Automatic Speech Recognition (ASR) samples, spanning various real-world
scenarios, such as corporate meetings and live streams. For the third question,
we design the speech data synthesis pipeline to obtain high-quality speech
training datasets, covering various real-world scenarios. Comprehensive
experimentation and an in-depth analysis of tri-modal alignment over latent
space demonstrate the advantages of our model on downstream tasks.
| no_new_dataset | 0.950365 |
2503.03360 | Afnan Sultan | Afnan Sultan, Max Rausch-Dupont, Shahrukh Khan, Olga Kalinina, Andrea
Volkamer, and Dietrich Klakow | Transformers for molecular property prediction: Domain adaptation
efficiently improves performance | null | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Most of the current transformer-based chemical language models are
pre-trained on millions to billions of molecules. However, the improvement from
such scaling in dataset size is not confidently linked to improved molecular
property prediction. The aim of this study is to investigate and overcome some
of the limitations of transformer models in predicting molecular properties.
Specifically, we examine the impact of pre-training dataset size and diversity
on the performance of transformer models and investigate the use of domain
adaptation as a technique for improving model performance. First, our findings
indicate that increasing pre-training dataset size beyond 400K molecules from
the GuacaMol dataset does not result in a significant improvement on four ADME
endpoints, namely, solubility, permeability, microsomal stability, and plasma
protein binding. Second, our results demonstrate that using domain adaptation
by further training the transformer model on a small set of domain-relevant
molecules, i.e., a few hundred to a few thousand, using multi-task regression
of physicochemical properties was sufficient to significantly improve
performance for three out of the four investigated ADME endpoints (P-value <
0.001). Finally, we observe that a model pre-trained on 400K molecules and
domain adapted on a few hundred/thousand molecules performs similarly (P-value
> 0.05) to more complicated transformer models like MolBERT (pre-trained on 1.3M
molecules) and MolFormer (pre-trained on 100M molecules). A comparison to a
random forest model trained on basic physicochemical properties showed similar
performance to the examined transformer models. We believe that current
transformer models can be improved through further systematic analysis of
pre-training and downstream data, pre-training objectives, and scaling laws,
ultimately leading to better and more helpful models.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 10:40:09 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 08:55:13 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Sultan",
"Afnan",
""
],
[
"Rausch-Dupont",
"Max",
""
],
[
"Khan",
"Shahrukh",
""
],
[
"Kalinina",
"Olga",
""
],
[
"Volkamer",
"Andrea",
""
],
[
"Klakow",
"Dietrich",
""
]
]
| TITLE: Transformers for molecular property prediction: Domain adaptation
efficiently improves performance
ABSTRACT: Most of the current transformer-based chemical language models are
pre-trained on millions to billions of molecules. However, the improvement from
such scaling in dataset size is not confidently linked to improved molecular
property prediction. The aim of this study is to investigate and overcome some
of the limitations of transformer models in predicting molecular properties.
Specifically, we examine the impact of pre-training dataset size and diversity
on the performance of transformer models and investigate the use of domain
adaptation as a technique for improving model performance. First, our findings
indicate that increasing pre-training dataset size beyond 400K molecules from
the GuacaMol dataset does not result in a significant improvement on four ADME
endpoints, namely, solubility, permeability, microsomal stability, and plasma
protein binding. Second, our results demonstrate that using domain adaptation
by further training the transformer model on a small set of domain-relevant
molecules, i.e., a few hundred to a few thousand, using multi-task regression
of physicochemical properties was sufficient to significantly improve
performance for three out of the four investigated ADME endpoints (P-value <
0.001). Finally, we observe that a model pre-trained on 400K molecules and
domain adapted on a few hundred/thousand molecules performs similarly (P-value
> 0.05) to more complicated transformer models like MolBERT (pre-trained on 1.3M
molecules) and MolFormer (pre-trained on 100M molecules). A comparison to a
random forest model trained on basic physicochemical properties showed similar
performance to the examined transformer models. We believe that current
transformer models can be improved through further systematic analysis of
pre-training and downstream data, pre-training objectives, and scaling laws,
ultimately leading to better and more helpful models.
| no_new_dataset | 0.952086 |
2503.04325 | Cecilia Diana-Albelda | Cecilia Diana-Albelda, Roberto Alcover-Couso, \'Alvaro
Garc\'ia-Mart\'in, Jesus Bescos, Marcos Escudero-Vi\~nolo | GBT-SAM: A Parameter-Efficient Depth-Aware Model for Generalizable Brain
tumour Segmentation on mp-MRI | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Gliomas are brain tumours that stand out for their highly lethal and
aggressive nature, which demands a precise approach in their diagnosis. Medical
image segmentation plays a crucial role in the evaluation and follow-up of
these tumours, allowing specialists to analyse their morphology. However,
existing methods for automatic glioma segmentation often lack generalization
capability across other brain tumour domains, require extensive computational
resources, or fail to fully utilize the multi-parametric MRI (mp-MRI) data used
to delineate them. In this work, we introduce GBT-SAM, a novel Generalizable
Brain Tumour (GBT) framework that extends the Segment Anything Model (SAM) to
brain tumour segmentation tasks. Our method employs a two-step training
protocol: first, fine-tuning the patch embedding layer to process the entire
mp-MRI modalities, and second, incorporating parameter-efficient LoRA blocks
and a Depth-Condition block into the Vision Transformer (ViT) to capture
inter-slice correlations. GBT-SAM achieves state-of-the-art performance on the
Adult Glioma dataset (Dice Score of $93.54$) while demonstrating robust
generalization across Meningioma, Pediatric Glioma, and Sub-Saharan Glioma
datasets. Furthermore, GBT-SAM uses less than 6.5M trainable parameters, thus
offering an efficient solution for brain tumour segmentation. Our code and
models are available at https://github.com/vpulab/med-sam-brain .
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 11:18:22 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 10:22:10 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Diana-Albelda",
"Cecilia",
""
],
[
"Alcover-Couso",
"Roberto",
""
],
[
"García-Martín",
"Álvaro",
""
],
[
"Bescos",
"Jesus",
""
],
[
"Escudero-Viñolo",
"Marcos",
""
]
]
| TITLE: GBT-SAM: A Parameter-Efficient Depth-Aware Model for Generalizable Brain
tumour Segmentation on mp-MRI
ABSTRACT: Gliomas are brain tumours that stand out for their highly lethal and
aggressive nature, which demands a precise approach in their diagnosis. Medical
image segmentation plays a crucial role in the evaluation and follow-up of
these tumours, allowing specialists to analyse their morphology. However,
existing methods for automatic glioma segmentation often lack generalization
capability across other brain tumour domains, require extensive computational
resources, or fail to fully utilize the multi-parametric MRI (mp-MRI) data used
to delineate them. In this work, we introduce GBT-SAM, a novel Generalizable
Brain Tumour (GBT) framework that extends the Segment Anything Model (SAM) to
brain tumour segmentation tasks. Our method employs a two-step training
protocol: first, fine-tuning the patch embedding layer to process the entire
mp-MRI modalities, and second, incorporating parameter-efficient LoRA blocks
and a Depth-Condition block into the Vision Transformer (ViT) to capture
inter-slice correlations. GBT-SAM achieves state-of-the-art performance on the
Adult Glioma dataset (Dice Score of $93.54$) while demonstrating robust
generalization across Meningioma, Pediatric Glioma, and Sub-Saharan Glioma
datasets. Furthermore, GBT-SAM uses less than 6.5M trainable parameters, thus
offering an efficient solution for brain tumour segmentation. Our code and
models are available at https://github.com/vpulab/med-sam-brain .
| no_new_dataset | 0.935524 |
2503.04638 | Mohammad Ali Vahedifar | Mohammad Ali Vahedifar and Qi Zhang | No Forgetting Learning: Memory-free Continual Learning | This paper is submitted to ICCV 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continual Learning (CL) remains a central challenge in deep learning, where
models must sequentially acquire new knowledge while mitigating Catastrophic
Forgetting (CF) of prior tasks. Existing approaches often struggle with
efficiency and scalability, requiring extensive memory or model buffers. This
work introduces ``No Forgetting Learning'' (NFL), a memory-free CL framework
that leverages knowledge distillation to maintain stability while preserving
plasticity. Memory-free means that NFL does not rely on any memory buffer.
Through extensive evaluations on three benchmark datasets, we demonstrate that
NFL achieves competitive performance while utilizing approximately 14.75 times
less memory than state-of-the-art methods. Furthermore, we introduce a new
metric to better assess CL's plasticity-stability trade-off.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 17:25:46 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 09:18:06 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Vahedifar",
"Mohammad Ali",
""
],
[
"Zhang",
"Qi",
""
]
]
| TITLE: No Forgetting Learning: Memory-free Continual Learning
ABSTRACT: Continual Learning (CL) remains a central challenge in deep learning, where
models must sequentially acquire new knowledge while mitigating Catastrophic
Forgetting (CF) of prior tasks. Existing approaches often struggle with
efficiency and scalability, requiring extensive memory or model buffers. This
work introduces ``No Forgetting Learning'' (NFL), a memory-free CL framework
that leverages knowledge distillation to maintain stability while preserving
plasticity. Memory-free means that NFL does not rely on any memory buffer.
Through extensive evaluations on three benchmark datasets, we demonstrate that
NFL achieves competitive performance while utilizing approximately 14.75 times
less memory than state-of-the-art methods. Furthermore, we introduce a new
metric to better assess CL's plasticity-stability trade-off.
| no_new_dataset | 0.947527 |
2503.04685 | Krish Sharma | Krish Sharma, Niyar R Barman, Akshay Chaturvedi, Nicholas Asher | DIMSUM: Discourse in Mathematical Reasoning as a Supervision Module | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We look at reasoning on GSM8k, a dataset of short texts presenting primary
school math problems. We find, with Mirzadeh et al. (2024), that current LLM
progress on the dataset may not be explained by better reasoning but by
exposure to a broader pretraining data distribution. We then introduce a novel
information source for helping models with less data or inferior training
reason better: discourse structure. We show that discourse structure improves
performance for models like Llama2 13b by up to 160%. Even for models that have
most likely memorized the dataset, adding discourse structural information to
the model still improves predictions and dramatically improves large model
performance on out-of-distribution examples.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 18:27:41 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 08:19:07 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Sharma",
"Krish",
""
],
[
"Barman",
"Niyar R",
""
],
[
"Chaturvedi",
"Akshay",
""
],
[
"Asher",
"Nicholas",
""
]
]
| TITLE: DIMSUM: Discourse in Mathematical Reasoning as a Supervision Module
ABSTRACT: We look at reasoning on GSM8k, a dataset of short texts presenting primary
school math problems. We find, with Mirzadeh et al. (2024), that current LLM
progress on the dataset may not be explained by better reasoning but by
exposure to a broader pretraining data distribution. We then introduce a novel
information source for helping models with less data or inferior training
reason better: discourse structure. We show that discourse structure improves
performance for models like Llama2 13b by up to 160%. Even for models that have
most likely memorized the dataset, adding discourse structural information to
the model still improves predictions and dramatically improves large model
performance on out-of-distribution examples.
| no_new_dataset | 0.941654 |
2503.04728 | Anmolika Singh | Anmolika Singh and Yuhang Diao | Leveraging Large Language Models For Optimized Item Categorization using
UNSPSC Taxonomy | 10 Pages, International Conference on NLP, AI, Computer Science &
Engineering (NLAICSE 2024), December 2024, ISBN : 978-1-923107-45-8 | International Journal on Cybernetics & Informatics. 13. (2024) | 10.5121/ijci.2024.130601 | null | cs.CL cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Effective item categorization is vital for businesses, enabling the
transformation of unstructured datasets into organized categories that
streamline inventory management. Despite its importance, item categorization
remains highly subjective and lacks a uniform standard across industries and
businesses. The United Nations Standard Products and Services Code (UNSPSC)
provides a standardized system for cataloguing inventory, yet employing UNSPSC
categorizations often demands significant manual effort. This paper
investigates the deployment of Large Language Models (LLMs) to automate the
classification of inventory data into UNSPSC codes based on Item Descriptions.
We evaluate the accuracy and efficiency of LLMs in categorizing diverse
datasets, exploring their language processing capabilities and their potential
as a tool for standardizing inventory classification. Our findings reveal that
LLMs can substantially diminish the manual labor involved in item
categorization while maintaining high accuracy, offering a scalable solution
for businesses striving to enhance their inventory management practices.
| [
{
"version": "v1",
"created": "Sat, 28 Dec 2024 00:12:13 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Singh",
"Anmolika",
""
],
[
"Diao",
"Yuhang",
""
]
]
| TITLE: Leveraging Large Language Models For Optimized Item Categorization using
UNSPSC Taxonomy
ABSTRACT: Effective item categorization is vital for businesses, enabling the
transformation of unstructured datasets into organized categories that
streamline inventory management. Despite its importance, item categorization
remains highly subjective and lacks a uniform standard across industries and
businesses. The United Nations Standard Products and Services Code (UNSPSC)
provides a standardized system for cataloguing inventory, yet employing UNSPSC
categorizations often demands significant manual effort. This paper
investigates the deployment of Large Language Models (LLMs) to automate the
classification of inventory data into UNSPSC codes based on Item Descriptions.
We evaluate the accuracy and efficiency of LLMs in categorizing diverse
datasets, exploring their language processing capabilities and their potential
as a tool for standardizing inventory classification. Our findings reveal that
LLMs can substantially diminish the manual labor involved in item
categorization while maintaining high accuracy, offering a scalable solution
for businesses striving to enhance their inventory management practices.
| no_new_dataset | 0.95222 |
2503.04751 | Prashant Mahajan Dr | Prashant Mahajan | What is Ethical: AIHED Driving Humans or Human-Driven AIHED? A
Conceptual Framework enabling the Ethos of AI-driven Higher education | 9 tables, 6 figures | null | null | null | cs.CY cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | The rapid integration of Artificial Intelligence (AI) in Higher Education
(HE) is transforming personalized learning, administrative automation, and
decision-making. However, this progress presents a duality, as AI adoption also
introduces ethical and institutional challenges, including algorithmic bias,
data privacy risks, and governance inconsistencies. To address these concerns,
this study introduces the Human-Driven AI in Higher Education (HD-AIHED)
Framework, ensuring compliance with UNESCO and OECD ethical standards. This
conceptual research employs a qualitative meta-synthesis approach, integrating
qualitative and quantitative studies to identify patterns, contradictions, and
gaps in AI adoption within HE. It reinterprets existing datasets through
theoretical and ethical lenses to develop governance frameworks. The study
applies a participatory integrated co-system, Phased Human Intelligence, SWOC
analysis, and AI ethical review boards to assess AI readiness and governance
strategies for universities and HE institutions. The HD-AIHED model bridges AI
research gaps, addresses global real-time challenges, and provides tailored,
scalable, and ethical strategies for diverse educational contexts. By
emphasizing interdisciplinary collaboration among stakeholders, this study
envisions AIHED as a transparent and equitable force for innovation. The
HD-AIHED framework ensures AI acts as a collaborative and ethical enabler
rather than a disruptive replacement for human intelligence while advocating
for responsible AI implementation in HE.
| [
{
"version": "v1",
"created": "Fri, 7 Feb 2025 11:13:31 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Mahajan",
"Prashant",
""
]
]
| TITLE: What is Ethical: AIHED Driving Humans or Human-Driven AIHED? A
Conceptual Framework enabling the Ethos of AI-driven Higher education
ABSTRACT: The rapid integration of Artificial Intelligence (AI) in Higher Education
(HE) is transforming personalized learning, administrative automation, and
decision-making. However, this progress presents a duality, as AI adoption also
introduces ethical and institutional challenges, including algorithmic bias,
data privacy risks, and governance inconsistencies. To address these concerns,
this study introduces the Human-Driven AI in Higher Education (HD-AIHED)
Framework, ensuring compliance with UNESCO and OECD ethical standards. This
conceptual research employs a qualitative meta-synthesis approach, integrating
qualitative and quantitative studies to identify patterns, contradictions, and
gaps in AI adoption within HE. It reinterprets existing datasets through
theoretical and ethical lenses to develop governance frameworks. The study
applies a participatory integrated co-system, Phased Human Intelligence, SWOC
analysis, and AI ethical review boards to assess AI readiness and governance
strategies for universities and HE institutions. The HD-AIHED model bridges AI
research gaps, addresses global real-time challenges, and provides tailored,
scalable, and ethical strategies for diverse educational contexts. By
emphasizing interdisciplinary collaboration among stakeholders, this study
envisions AIHED as a transparent and equitable force for innovation. The
HD-AIHED framework ensures AI acts as a collaborative and ethical enabler
rather than a disruptive replacement for human intelligence while advocating
for responsible AI implementation in HE.
| no_new_dataset | 0.953449 |
2503.04755 | Thorsten Ruprechter | Thorsten Ruprechter, Marion Garaus, Ivo Ponocny, Denis Helic | NutriTransform: Estimating Nutritional Information From Online Food
Posts | under review | null | null | null | cs.CY cs.CL | http://creativecommons.org/licenses/by/4.0/ | Deriving nutritional information from online food posts is challenging,
particularly when users do not explicitly log the macro-nutrients of a shared
meal. In this work, we present an efficient and straightforward approach to
approximating macro-nutrients based solely on the titles of food posts. Our
method combines a public food database from the U.S. Department of Agriculture
with advanced text embedding techniques. We evaluate the approach on a labeled
food dataset, demonstrating its effectiveness, and apply it to over 500,000
real-world posts from Reddit's popular /r/food subreddit to uncover trends in
food-sharing behavior based on the estimated macro-nutrient content.
Altogether, this work lays a foundation for researchers and practitioners
aiming to estimate caloric and nutritional content using only text data.
| [
{
"version": "v1",
"created": "Sun, 9 Feb 2025 10:33:29 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Ruprechter",
"Thorsten",
""
],
[
"Garaus",
"Marion",
""
],
[
"Ponocny",
"Ivo",
""
],
[
"Helic",
"Denis",
""
]
]
| TITLE: NutriTransform: Estimating Nutritional Information From Online Food
Posts
ABSTRACT: Deriving nutritional information from online food posts is challenging,
particularly when users do not explicitly log the macro-nutrients of a shared
meal. In this work, we present an efficient and straightforward approach to
approximating macro-nutrients based solely on the titles of food posts. Our
method combines a public food database from the U.S. Department of Agriculture
with advanced text embedding techniques. We evaluate the approach on a labeled
food dataset, demonstrating its effectiveness, and apply it to over 500,000
real-world posts from Reddit's popular /r/food subreddit to uncover trends in
food-sharing behavior based on the estimated macro-nutrient content.
Altogether, this work lays a foundation for researchers and practitioners
aiming to estimate caloric and nutritional content using only text data.
| no_new_dataset | 0.953708 |
2503.04763 | Jules Viennot | Jules Viennot, Guillaume Baudart, Emilio Jes\`us Gallego Arias, Marc
Lelarge | MiniF2F in Rocq: Automatic Translation Between Proof Assistants -- A
Case Study | null | null | null | null | cs.LO cs.CL cs.LG cs.PL | http://creativecommons.org/licenses/by/4.0/ | In this work, we conduct an experiment using state-of-the-art LLMs to
translate MiniF2F into Rocq. The translation task focuses on generating a Rocq
theorem based on three sources: a natural language description, the Lean
formalization, and the Isabelle formalization. We conducted our experiment in 3
stages of increasing complexity, from basic one-shot prompting to multi-turn
conversations that incorporate feedback from unsuccessful attempts. At each
stage, we perform multiple rounds of translation using increasingly advanced
models: GPT-4o mini, Claude 3.5 Sonnet, o1 mini, and o1. We successfully
translated 478 out of 488 theorems. The dataset is open source:
https://github.com/LLM4Rocq/miniF2F-rocq.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 09:32:55 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Viennot",
"Jules",
""
],
[
"Baudart",
"Guillaume",
""
],
[
"Arias",
"Emilio Jesùs Gallego",
""
],
[
"Lelarge",
"Marc",
""
]
]
| TITLE: MiniF2F in Rocq: Automatic Translation Between Proof Assistants -- A
Case Study
ABSTRACT: In this work, we conduct an experiment using state-of-the-art LLMs to
translate MiniF2F into Rocq. The translation task focuses on generating a Rocq
theorem based on three sources: a natural language description, the Lean
formalization, and the Isabelle formalization. We conducted our experiment in 3
stages of increasing complexity, from basic one-shot prompting to multi-turn
conversations that incorporate feedback from unsuccessful attempts. At each
stage, we perform multiple rounds of translation using increasingly advanced
models: GPT-4o mini, Claude 3.5 Sonnet, o1 mini, and o1. We successfully
translated 478 out of 488 theorems. The dataset is open source:
https://github.com/LLM4Rocq/miniF2F-rocq.
| new_dataset | 0.942295 |
2503.04772 | David Yin | David Yin and Jing Gao | Generating Millions Of Lean Theorems With Proofs By Exploring State
Transition Graphs | null | null | null | null | cs.LO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have demonstrated significant potential in
generating mathematical proofs. However, a persistent challenge is that LLMs
occasionally make mistakes, while even a minor mistake can invalidate an entire
proof. Proof assistants like Lean offer a great remedy. They are designed for
verifying each step of a proof in a formal language, and in recent years
researchers have created AI models to generate proofs in their languages.
However, the scarcity of large-scale datasets of Lean proofs restricts the
performance of such Automated Theorem Proving (ATP) models.
We developed LeanNavigator, a novel method for generating a large-scale
dataset of Lean theorems and proofs by finding new ways to prove existing Lean
theorems. By leveraging an interactive Lean client and an efficient method for
proof step generation, LeanNavigator efficiently produces new theorems with
corresponding proofs. Applying this approach to Mathlib4, we generated 4.7
million theorems totaling 1 billion tokens, surpassing previous datasets by
more than an order of magnitude. Using this extensive dataset, we trained an AI
model that outperforms the state-of-the-art ReProver model in theorem-proving
tasks. These results confirm our hypothesis and demonstrate the critical role
of large datasets in improving the performance of automated theorem provers.
| [
{
"version": "v1",
"created": "Sun, 16 Feb 2025 06:20:39 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Yin",
"David",
""
],
[
"Gao",
"Jing",
""
]
]
| TITLE: Generating Millions Of Lean Theorems With Proofs By Exploring State
Transition Graphs
ABSTRACT: Large Language Models (LLMs) have demonstrated significant potential in
generating mathematical proofs. However, a persistent challenge is that LLMs
occasionally make mistakes, while even a minor mistake can invalidate an entire
proof. Proof assistants like Lean offer a great remedy. They are designed for
verifying each step of a proof in a formal language, and in recent years
researchers have created AI models to generate proofs in their languages.
However, the scarcity of large-scale datasets of Lean proofs restricts the
performance of such Automated Theorem Proving (ATP) models.
We developed LeanNavigator, a novel method for generating a large-scale
dataset of Lean theorems and proofs by finding new ways to prove existing Lean
theorems. By leveraging an interactive Lean client and an efficient method for
proof step generation, LeanNavigator efficiently produces new theorems with
corresponding proofs. Applying this approach to Mathlib4, we generated 4.7
million theorems totaling 1 billion tokens, surpassing previous datasets by
more than an order of magnitude. Using this extensive dataset, we trained an AI
model that outperforms the state-of-the-art ReProver model in theorem-proving
tasks. These results confirm our hypothesis and demonstrate the critical role
of large datasets in improving the performance of automated theorem provers.
| no_new_dataset | 0.72287 |
2503.04783 | Anichur Rahman | Anichur Rahman, Shahariar Hossain Mahir, Md Tanjum An Tashrif, Airin
Afroj Aishi, Md Ahsan Karim, Dipanjali Kundu, Tanoy Debnath, Md. Abul Ala
Moududi, and MD. Zunead Abedin Eidmum | Comparative Analysis Based on DeepSeek, ChatGPT, and Google Gemini:
Features, Techniques, Performance, Future Prospects | null | null | null | null | cs.CL cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, DeepSeek, ChatGPT, and Google Gemini are the most trending and
exciting Large Language Model (LLM) technologies for reasoning, multimodal
capabilities, and general linguistic performance worldwide. DeepSeek employs a
Mixture-of-Experts (MoE) approach, activating only the parameters most relevant
to the task at hand, which makes it especially effective for domain-specific
work. On the other hand, ChatGPT relies on a dense transformer model enhanced
through reinforcement learning from human feedback (RLHF), and then Google
Gemini actually uses a multimodal transformer architecture that integrates
text, code, and images into a single framework. However, by using those
technologies, people can be able to mine their desired text, code, images, etc,
in a cost-effective and domain-specific inference. People may choose those
techniques based on the best performance. In this regard, we offer a
comparative study based on the DeepSeek, ChatGPT, and Gemini techniques in this
research. Initially, we focus on their methods and materials, appropriately
including the data selection criteria. Then, we present state-of-the-art
features of DeepSeek, ChatGPT, and Gemini based on their applications. Most
importantly, we present a technological comparison among them and also cover
dataset analysis for various applications. Finally, we address extensive
research areas and potential future guidance regarding LLM-based AI research
for the community.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 19:55:35 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Rahman",
"Anichur",
""
],
[
"Mahir",
"Shahariar Hossain",
""
],
[
"Tashrif",
"Md Tanjum An",
""
],
[
"Aishi",
"Airin Afroj",
""
],
[
"Karim",
"Md Ahsan",
""
],
[
"Kundu",
"Dipanjali",
""
],
[
"Debnath",
"Tanoy",
""
],
[
"Moududi",
"Md. Abul Ala",
""
],
[
"Eidmum",
"MD. Zunead Abedin",
""
]
]
| TITLE: Comparative Analysis Based on DeepSeek, ChatGPT, and Google Gemini:
Features, Techniques, Performance, Future Prospects
ABSTRACT: Nowadays, DeepSeek, ChatGPT, and Google Gemini are the most trending and
exciting Large Language Model (LLM) technologies for reasoning, multimodal
capabilities, and general linguistic performance worldwide. DeepSeek employs a
Mixture-of-Experts (MoE) approach, activating only the parameters most relevant
to the task at hand, which makes it especially effective for domain-specific
work. On the other hand, ChatGPT relies on a dense transformer model enhanced
through reinforcement learning from human feedback (RLHF), while Google
Gemini uses a multimodal transformer architecture that integrates
text, code, and images into a single framework. Using these
technologies, people are able to mine their desired text, code, images, etc.,
in a cost-effective and domain-specific manner. People may choose among these
techniques based on performance. In this regard, we offer a
comparative study of the DeepSeek, ChatGPT, and Gemini techniques in this
research. Initially, we focus on their methods and materials, including
the data selection criteria. Then, we present state-of-the-art
features of DeepSeek, ChatGPT, and Gemini based on their applications. Most
importantly, we present a technological comparison among them and also cover
dataset analysis for various applications. Finally, we address extensive
research areas and potential future guidance regarding LLM-based AI research
for the community.
| no_new_dataset | 0.941815 |
2503.04793 | Wenjie Qiu | Wenjie Qiu, Yi-Chen Li, Xuqin Zhang, Tianyi Zhang, Yihang Zhang,
Zongzhang Zhang, Yang Yu | Sentence-level Reward Model can Generalize Better for Aligning LLM from
Human Preference | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning reward models from human preference datasets and subsequently
optimizing language models via reinforcement learning has emerged as a
fundamental paradigm for aligning LLMs with human preferences. The performance
of the reward model plays a crucial role in the effectiveness of alignment.
Previous reward models operate at a coarse-grained level, requiring the
generation of a complete response to obtain a reward value. The sparse reward
may present challenges for downstream reinforcement learning. While recent
efforts have attempted to learn token-level reward models, the lack of explicit
semantic information makes it difficult to model the credit of every individual
token. In this paper, we propose assigning scores to every sentence,
introducing an intermediate-grained reward model. By segmenting the complete
response into sentences and applying differential operations to reward output
at the start and end positions of each sentence, we can effectively model the
rewards of sentences. Moreover, a novel attention mechanism is introduced to
aggregate the scores of all sentences into a response-level score, which allows
it to be trained using the Bradley-Terry model. On common benchmarks, our
method outperforms the response-level reward model by 2.7% on RewardBench (for
reward modeling evaluation) and surpasses all baselines on AlpacaEval (for
alignment evaluation).
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 14:11:04 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Qiu",
"Wenjie",
""
],
[
"Li",
"Yi-Chen",
""
],
[
"Zhang",
"Xuqin",
""
],
[
"Zhang",
"Tianyi",
""
],
[
"Zhang",
"Yihang",
""
],
[
"Zhang",
"Zongzhang",
""
],
[
"Yu",
"Yang",
""
]
]
| TITLE: Sentence-level Reward Model can Generalize Better for Aligning LLM from
Human Preference
ABSTRACT: Learning reward models from human preference datasets and subsequently
optimizing language models via reinforcement learning has emerged as a
fundamental paradigm for aligning LLMs with human preferences. The performance
of the reward model plays a crucial role in the effectiveness of alignment.
Previous reward models operate at a coarse-grained level, requiring the
generation of a complete response to obtain a reward value. The sparse reward
may present challenges for downstream reinforcement learning. While recent
efforts have attempted to learn token-level reward models, the lack of explicit
semantic information makes it difficult to model the credit of every individual
token. In this paper, we propose assigning scores to every sentence,
introducing an intermediate-grained reward model. By segmenting the complete
response into sentences and applying differential operations to reward output
at the start and end positions of each sentence, we can effectively model the
rewards of sentences. Moreover, a novel attention mechanism is introduced to
aggregate the scores of all sentences into a response-level score, which allows
it to be trained using the Bradley-Terry model. On common benchmarks, our
method outperforms the response-level reward model by 2.7% on RewardBench (for
reward modeling evaluation) and surpasses all baselines on AlpacaEval (for
alignment evaluation).
| no_new_dataset | 0.944022 |
2503.04796 | Jiaen Lin | Jiaen Lin, Jingyu Liu | Optimizing Multi-Hop Document Retrieval Through Intermediate
Representations | null | null | null | null | cs.CL cs.AI cs.IR | http://creativecommons.org/licenses/by-sa/4.0/ | Retrieval-augmented generation (RAG) encounters challenges when addressing
complex queries, particularly multi-hop questions. While several methods tackle
multi-hop queries by iteratively generating internal queries and retrieving
external documents, these approaches are computationally expensive. In this
paper, we identify a three-stage information processing pattern in LLMs during
layer-by-layer reasoning, consisting of extraction, processing, and subsequent
extraction steps. This observation suggests that the representations in
intermediate layers contain richer information compared to those in other
layers. Building on this insight, we propose Layer-wise RAG (L-RAG). Unlike
prior methods that focus on generating new internal queries, L-RAG leverages
intermediate representations from the middle layers, which capture next-hop
information, to retrieve external knowledge. L-RAG achieves performance
comparable to multi-step approaches while maintaining inference overhead
similar to that of standard RAG. Experimental results show that L-RAG
outperforms existing RAG methods on open-domain multi-hop question-answering
datasets, including MuSiQue, HotpotQA, and 2WikiMultiHopQA. The code is
available at https://anonymous.4open.science/r/L-RAG-ADD5/
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 11:33:22 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Lin",
"Jiaen",
""
],
[
"Liu",
"Jingyu",
""
]
]
| TITLE: Optimizing Multi-Hop Document Retrieval Through Intermediate
Representations
ABSTRACT: Retrieval-augmented generation (RAG) encounters challenges when addressing
complex queries, particularly multi-hop questions. While several methods tackle
multi-hop queries by iteratively generating internal queries and retrieving
external documents, these approaches are computationally expensive. In this
paper, we identify a three-stage information processing pattern in LLMs during
layer-by-layer reasoning, consisting of extraction, processing, and subsequent
extraction steps. This observation suggests that the representations in
intermediate layers contain richer information compared to those in other
layers. Building on this insight, we propose Layer-wise RAG (L-RAG). Unlike
prior methods that focus on generating new internal queries, L-RAG leverages
intermediate representations from the middle layers, which capture next-hop
information, to retrieve external knowledge. L-RAG achieves performance
comparable to multi-step approaches while maintaining inference overhead
similar to that of standard RAG. Experimental results show that L-RAG
outperforms existing RAG methods on open-domain multi-hop question-answering
datasets, including MuSiQue, HotpotQA, and 2WikiMultiHopQA. The code is
available at https://anonymous.4open.science/r/L-RAG-ADD5/
| no_new_dataset | 0.942665 |
2503.04797 | Rahul Raja | Rahul Raja, Arpita Vats | Parallel Corpora for Machine Translation in Low-resource Indic
Languages: A Comprehensive Review | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Parallel corpora play an important role in training machine translation (MT)
models, particularly for low-resource languages where high-quality bilingual
data is scarce. This review provides a comprehensive overview of available
parallel corpora for Indic languages, which span diverse linguistic families,
scripts, and regional variations. We categorize these corpora into
text-to-text, code-switched, and various categories of multimodal datasets,
highlighting their significance in the development of robust multilingual MT
systems. Beyond resource enumeration, we critically examine the challenges
faced in corpus creation, including linguistic diversity, script variation,
data scarcity, and the prevalence of informal textual content. We also discuss
and evaluate these corpora along dimensions such as alignment quality and
domain representativeness. Furthermore, we address open challenges such as data
imbalance across Indic languages, the trade-off between quality and quantity,
and the impact of noisy, informal, and dialectal data on MT performance.
Finally, we outline future directions, including leveraging cross-lingual
transfer learning, expanding multilingual datasets, and integrating multimodal
resources to enhance translation quality. To the best of our knowledge, this
paper presents the first comprehensive review of parallel corpora specifically
tailored for low-resource Indic languages in the context of machine
translation.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 21:22:53 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Raja",
"Rahul",
""
],
[
"Vats",
"Arpita",
""
]
]
| TITLE: Parallel Corpora for Machine Translation in Low-resource Indic
Languages: A Comprehensive Review
ABSTRACT: Parallel corpora play an important role in training machine translation (MT)
models, particularly for low-resource languages where high-quality bilingual
data is scarce. This review provides a comprehensive overview of available
parallel corpora for Indic languages, which span diverse linguistic families,
scripts, and regional variations. We categorize these corpora into
text-to-text, code-switched, and various categories of multimodal datasets,
highlighting their significance in the development of robust multilingual MT
systems. Beyond resource enumeration, we critically examine the challenges
faced in corpus creation, including linguistic diversity, script variation,
data scarcity, and the prevalence of informal textual content. We also discuss
and evaluate these corpora along dimensions such as alignment quality and
domain representativeness. Furthermore, we address open challenges such as data
imbalance across Indic languages, the trade-off between quality and quantity,
and the impact of noisy, informal, and dialectal data on MT performance.
Finally, we outline future directions, including leveraging cross-lingual
transfer learning, expanding multilingual datasets, and integrating multimodal
resources to enhance translation quality. To the best of our knowledge, this
paper presents the first comprehensive review of parallel corpora specifically
tailored for low-resource Indic languages in the context of machine
translation.
| no_new_dataset | 0.950411 |
2503.04800 | Jie Ouyang | Jie Ouyang, Tingyue Pan, Mingyue Cheng, Ruiran Yan, Yucong Luo,
Jiaying Lin, Qi Liu | HoH: A Dynamic Benchmark for Evaluating the Impact of Outdated
Information on Retrieval-Augmented Generation | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While Retrieval-Augmented Generation (RAG) has emerged as an effective
approach for addressing the knowledge outdating problem in Large Language
Models (LLMs), it faces a critical challenge: the prevalence of outdated
information in knowledge bases. Current research primarily focuses on
incorporating up-to-date information, yet the impact of outdated information
coexisting in retrieval sources remains inadequately addressed. To bridge this
gap, we introduce HoH, the first benchmark specifically designed to evaluate
the impact of outdated information on RAG. Our benchmark leverages token-level
diff algorithms combined with LLM pipelines to efficiently create a large-scale
QA dataset that accurately captures temporal knowledge evolution in real-world
facts. Through comprehensive experiments, we reveal that outdated information
significantly degrades RAG performance in two critical ways: (1) it
substantially reduces response accuracy by distracting models from correct
information, and (2) it can mislead models into generating potentially harmful
outputs, even when current information is available. Current RAG approaches
struggle with both retrieval and generation aspects when handling outdated
information. These findings highlight the urgent need for innovative solutions
to address the temporal challenges in RAG.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 06:54:05 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Ouyang",
"Jie",
""
],
[
"Pan",
"Tingyue",
""
],
[
"Cheng",
"Mingyue",
""
],
[
"Yan",
"Ruiran",
""
],
[
"Luo",
"Yucong",
""
],
[
"Lin",
"Jiaying",
""
],
[
"Liu",
"Qi",
""
]
]
| TITLE: HoH: A Dynamic Benchmark for Evaluating the Impact of Outdated
Information on Retrieval-Augmented Generation
ABSTRACT: While Retrieval-Augmented Generation (RAG) has emerged as an effective
approach for addressing the knowledge outdating problem in Large Language
Models (LLMs), it faces a critical challenge: the prevalence of outdated
information in knowledge bases. Current research primarily focuses on
incorporating up-to-date information, yet the impact of outdated information
coexisting in retrieval sources remains inadequately addressed. To bridge this
gap, we introduce HoH, the first benchmark specifically designed to evaluate
the impact of outdated information on RAG. Our benchmark leverages token-level
diff algorithms combined with LLM pipelines to efficiently create a large-scale
QA dataset that accurately captures temporal knowledge evolution in real-world
facts. Through comprehensive experiments, we reveal that outdated information
significantly degrades RAG performance in two critical ways: (1) it
substantially reduces response accuracy by distracting models from correct
information, and (2) it can mislead models into generating potentially harmful
outputs, even when current information is available. Current RAG approaches
struggle with both retrieval and generation aspects when handling outdated
information. These findings highlight the urgent need for innovative solutions
to address the temporal challenges in RAG.
| new_dataset | 0.959875 |
2503.04801 | Boyu Jia | Boyu Jia, Junzhe Zhang, Huixuan Zhang, Xiaojun Wan | Exploring and Evaluating Multimodal Knowledge Reasoning Consistency of
Multimodal Large Language Models | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, multimodal large language models (MLLMs) have achieved
significant breakthroughs, enhancing understanding across text and vision.
However, current MLLMs still face challenges in effectively integrating
knowledge across these modalities during multimodal knowledge reasoning,
leading to inconsistencies in reasoning outcomes. To systematically explore
this issue, we propose four evaluation tasks and construct a new dataset. We
conduct a series of experiments on this dataset to analyze and compare the
extent of consistency degradation in multimodal knowledge reasoning within
MLLMs. Based on the experimental results, we identify factors contributing to
the observed degradation in consistency. Our research provides new insights
into the challenges of multimodal knowledge reasoning and offers valuable
guidance for future efforts aimed at improving MLLMs.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 09:01:51 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Jia",
"Boyu",
""
],
[
"Zhang",
"Junzhe",
""
],
[
"Zhang",
"Huixuan",
""
],
[
"Wan",
"Xiaojun",
""
]
]
| TITLE: Exploring and Evaluating Multimodal Knowledge Reasoning Consistency of
Multimodal Large Language Models
ABSTRACT: In recent years, multimodal large language models (MLLMs) have achieved
significant breakthroughs, enhancing understanding across text and vision.
However, current MLLMs still face challenges in effectively integrating
knowledge across these modalities during multimodal knowledge reasoning,
leading to inconsistencies in reasoning outcomes. To systematically explore
this issue, we propose four evaluation tasks and construct a new dataset. We
conduct a series of experiments on this dataset to analyze and compare the
extent of consistency degradation in multimodal knowledge reasoning within
MLLMs. Based on the experimental results, we identify factors contributing to
the observed degradation in consistency. Our research provides new insights
into the challenges of multimodal knowledge reasoning and offers valuable
guidance for future efforts aimed at improving MLLMs.
| new_dataset | 0.957278 |
2503.04812 | Zhibin Lan | Zhibin Lan, Liqiang Niu, Fandong Meng, Jie Zhou, Jinsong Su | LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted
Contrastive Learning | Preprint | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Universal multimodal embedding models play a critical role in tasks such as
interleaved image-text retrieval, multimodal RAG, and multimodal clustering.
However, our empirical results indicate that existing LMM-based embedding
models trained with the standard InfoNCE loss exhibit a high degree of overlap
in similarity distribution between positive and negative pairs, making it
challenging to distinguish hard negative pairs effectively. To deal with this
issue, we propose a simple yet effective framework that dynamically improves
the embedding model's representation learning for negative pairs based on their
discriminative difficulty. Within this framework, we train a series of models,
named LLaVE, and evaluate them on the MMEB benchmark, which covers 4 meta-tasks
and 36 datasets. Experimental results show that LLaVE establishes stronger
baselines that achieve state-of-the-art (SOTA) performance while demonstrating
strong scalability and efficiency. Specifically, LLaVE-2B surpasses the
previous SOTA 7B models, while LLaVE-7B achieves a further performance
improvement of 6.2 points. Although LLaVE is trained on image-text data, it can
generalize to text-video retrieval tasks in a zero-shot manner and achieve
strong performance, demonstrating its remarkable potential for transfer to
other embedding tasks.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 10:21:57 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Lan",
"Zhibin",
""
],
[
"Niu",
"Liqiang",
""
],
[
"Meng",
"Fandong",
""
],
[
"Zhou",
"Jie",
""
],
[
"Su",
"Jinsong",
""
]
]
| TITLE: LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted
Contrastive Learning
ABSTRACT: Universal multimodal embedding models play a critical role in tasks such as
interleaved image-text retrieval, multimodal RAG, and multimodal clustering.
However, our empirical results indicate that existing LMM-based embedding
models trained with the standard InfoNCE loss exhibit a high degree of overlap
in similarity distribution between positive and negative pairs, making it
challenging to distinguish hard negative pairs effectively. To deal with this
issue, we propose a simple yet effective framework that dynamically improves
the embedding model's representation learning for negative pairs based on their
discriminative difficulty. Within this framework, we train a series of models,
named LLaVE, and evaluate them on the MMEB benchmark, which covers 4 meta-tasks
and 36 datasets. Experimental results show that LLaVE establishes stronger
baselines that achieve state-of-the-art (SOTA) performance while demonstrating
strong scalability and efficiency. Specifically, LLaVE-2B surpasses the
previous SOTA 7B models, while LLaVE-7B achieves a further performance
improvement of 6.2 points. Although LLaVE is trained on image-text data, it can
generalize to text-video retrieval tasks in a zero-shot manner and achieve
strong performance, demonstrating its remarkable potential for transfer to
other embedding tasks.
| no_new_dataset | 0.936692 |
2503.04819 | Matthew Turner | Matthew J. Turner, Mike Carenzo, Jackie Lasky, James Morris-King,
James Ross | Technique Inference Engine: A Recommender Model to Support Cyber Threat
Hunting | null | null | null | null | cs.CR cs.AI | http://creativecommons.org/licenses/by/4.0/ | Cyber threat hunting is the practice of proactively searching for latent
threats in a network. Engaging in threat hunting can be difficult due to the
volume of network traffic, variety of adversary techniques, and constantly
evolving vulnerabilities. To aid analysts in identifying techniques which may
be co-occurring as part of a campaign, we present the Technique Inference
Engine, a tool to infer tactics, techniques, and procedures (TTPs) which may be
related to existing observations of adversarial behavior. We compile the
largest (to our knowledge) available dataset of cyber threat intelligence (CTI)
reports labeled with relevant TTPs. With the knowledge that techniques are
chronically under-reported in CTI, we apply several implicit feedback
recommender models to the data in order to predict additional techniques which
may be part of a given campaign. We evaluate the results in the context of the
cyber analyst's use case and apply t-SNE to visualize the model embeddings. We
provide our code and a web interface.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 22:31:43 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Turner",
"Matthew J.",
""
],
[
"Carenzo",
"Mike",
""
],
[
"Lasky",
"Jackie",
""
],
[
"Morris-King",
"James",
""
],
[
"Ross",
"James",
""
]
]
| TITLE: Technique Inference Engine: A Recommender Model to Support Cyber Threat
Hunting
ABSTRACT: Cyber threat hunting is the practice of proactively searching for latent
threats in a network. Engaging in threat hunting can be difficult due to the
volume of network traffic, variety of adversary techniques, and constantly
evolving vulnerabilities. To aid analysts in identifying techniques which may
be co-occurring as part of a campaign, we present the Technique Inference
Engine, a tool to infer tactics, techniques, and procedures (TTPs) which may be
related to existing observations of adversarial behavior. We compile the
largest (to our knowledge) available dataset of cyber threat intelligence (CTI)
reports labeled with relevant TTPs. With the knowledge that techniques are
chronically under-reported in CTI, we apply several implicit feedback
recommender models to the data in order to predict additional techniques which
may be part of a given campaign. We evaluate the results in the context of the
cyber analyst's use case and apply t-SNE to visualize the model embeddings. We
provide our code and a web interface.
| new_dataset | 0.95452 |
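As background for the implicit-feedback recommender models mentioned above, here is a minimal NumPy sketch of classic implicit ALS (Hu et al. 2008) on a binary report-by-technique matrix. It is one representative of the model family the paper evaluates, not its specific implementation; all names and hyperparameters are assumptions.

```python
import numpy as np

def implicit_als(R, factors=32, reg=0.1, alpha=40.0, iters=10, seed=0):
    """Implicit-feedback ALS on a binary (reports x techniques) matrix R.
    Confidence C = 1 + alpha*R; returns factor matrices whose product
    scores unobserved techniques for each report."""
    rng = np.random.default_rng(seed)
    n_r, n_t = R.shape
    X = rng.normal(scale=0.01, size=(n_r, factors))   # report factors
    Y = rng.normal(scale=0.01, size=(n_t, factors))   # technique factors
    C = 1.0 + alpha * R
    I = reg * np.eye(factors)
    for _ in range(iters):
        for u in range(n_r):                          # solve each report row
            Cu = np.diag(C[u])
            X[u] = np.linalg.solve(Y.T @ Cu @ Y + I, Y.T @ Cu @ R[u])
        for i in range(n_t):                          # solve each technique row
            Ci = np.diag(C[:, i])
            Y[i] = np.linalg.solve(X.T @ Ci @ X + I, X.T @ Ci @ R[:, i])
    return X, Y

# Ranking unseen techniques for report u: np.argsort(-(X[u] @ Y.T))
```

The confidence weighting is what makes this formulation suitable for CTI, where a technique's absence from a report is weak evidence (under-reporting) rather than a true negative.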
2503.04821 | Zelin Meng | Zelin Meng, and Takanori Fukao | RTFusion: A depth estimation network based on multimodal fusion in
challenging scenarios | 8 pages, 2 figures | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Depth estimation in complex real-world scenarios is a challenging task,
especially when relying solely on a single modality such as visible light or
thermal infrared (THR) imagery. This paper proposes a novel multimodal depth
estimation model, RTFusion, which enhances depth estimation accuracy and
robustness by integrating the complementary strengths of RGB and THR data. The
RGB modality provides rich texture and color information, while the THR
modality captures thermal patterns, ensuring stability under adverse lighting
conditions such as extreme illumination. The model incorporates a unique fusion
mechanism, EGFusion, consisting of the Mutual Complementary Attention (MCA)
module for cross-modal feature alignment and the Edge Saliency Enhancement
Module (ESEM) to improve edge detail preservation. Comprehensive experiments on
the MS2 and ViViD++ datasets demonstrate that the proposed model consistently
produces high-quality depth maps across various challenging environments,
including nighttime, rainy, and high-glare conditions. The experimental results
highlight the potential of the proposed method in applications requiring
reliable depth estimation, such as autonomous driving, robotics, and augmented
reality.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 01:35:14 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Meng",
"Zelin",
""
],
[
"Fukao",
"Takanori",
""
]
]
| TITLE: RTFusion: A depth estimation network based on multimodal fusion in
challenging scenarios
ABSTRACT: Depth estimation in complex real-world scenarios is a challenging task,
especially when relying solely on a single modality such as visible light or
thermal infrared (THR) imagery. This paper proposes a novel multimodal depth
estimation model, RTFusion, which enhances depth estimation accuracy and
robustness by integrating the complementary strengths of RGB and THR data. The
RGB modality provides rich texture and color information, while the THR
modality captures thermal patterns, ensuring stability under adverse lighting
conditions such as extreme illumination. The model incorporates a unique fusion
mechanism, EGFusion, consisting of the Mutual Complementary Attention (MCA)
module for cross-modal feature alignment and the Edge Saliency Enhancement
Module (ESEM) to improve edge detail preservation. Comprehensive experiments on
the MS2 and ViViD++ datasets demonstrate that the proposed model consistently
produces high-quality depth maps across various challenging environments,
including nighttime, rainy, and high-glare conditions. The experimental results
highlight the potential of the proposed method in applications requiring
reliable depth estimation, such as autonomous driving, robotics, and augmented
reality.
| no_new_dataset | 0.951908 |
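The abstract names but does not define the MCA module, so the following PyTorch sketch shows one generic bidirectional cross-attention fusion of RGB and thermal tokens as a plausible reading; the dimensions, residual wiring, and projection layer are assumptions.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Generic stand-in for cross-modal feature alignment: each modality's
    tokens attend to the other's, then the two streams are fused."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.rgb_to_thr = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.thr_to_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, rgb, thr):                       # (B, N, dim) tokens
        rgb_ctx, _ = self.rgb_to_thr(rgb, thr, thr)    # RGB queries attend THR
        thr_ctx, _ = self.thr_to_rgb(thr, rgb, rgb)    # THR queries attend RGB
        fused = torch.cat([rgb + rgb_ctx, thr + thr_ctx], dim=-1)
        return self.proj(fused)                        # (B, N, dim)
```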
2503.04822 | Yuxia Wu | Shujie Li, Yuxia Wu, Chuan Shi, Yuan Fang | HeTGB: A Comprehensive Benchmark for Heterophilic Text-Attributed Graphs | Under review | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph neural networks (GNNs) have demonstrated success in modeling relational
data primarily under the assumption of homophily. However, many real-world
graphs exhibit heterophily, where linked nodes belong to different categories
or possess diverse attributes. Additionally, nodes in many domains are
associated with textual descriptions, forming heterophilic text-attributed
graphs (TAGs). Despite their significance, the study of heterophilic TAGs
remains underexplored due to the lack of comprehensive benchmarks. To address
this gap, we introduce the Heterophilic Text-attributed Graph Benchmark
(HeTGB), a novel benchmark comprising five real-world heterophilic graph
datasets from diverse domains, with nodes enriched by extensive textual
descriptions. HeTGB enables systematic evaluation of GNNs, pre-trained language
models (PLMs) and co-training methods on the node classification task. Through
extensive benchmarking experiments, we showcase the utility of text attributes
in heterophilic graphs, analyze the challenges posed by heterophilic TAGs and
the limitations of existing models, and provide insights into the interplay
between graph structures and textual attributes. We have publicly released
HeTGB with baseline implementations to facilitate further research in this
field.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 02:00:32 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Li",
"Shujie",
""
],
[
"Wu",
"Yuxia",
""
],
[
"Shi",
"Chuan",
""
],
[
"Fang",
"Yuan",
""
]
]
| TITLE: HeTGB: A Comprehensive Benchmark for Heterophilic Text-Attributed Graphs
ABSTRACT: Graph neural networks (GNNs) have demonstrated success in modeling relational
data primarily under the assumption of homophily. However, many real-world
graphs exhibit heterophily, where linked nodes belong to different categories
or possess diverse attributes. Additionally, nodes in many domains are
associated with textual descriptions, forming heterophilic text-attributed
graphs (TAGs). Despite their significance, the study of heterophilic TAGs
remains underexplored due to the lack of comprehensive benchmarks. To address
this gap, we introduce the Heterophilic Text-attributed Graph Benchmark
(HeTGB), a novel benchmark comprising five real-world heterophilic graph
datasets from diverse domains, with nodes enriched by extensive textual
descriptions. HeTGB enables systematic evaluation of GNNs, pre-trained language
models (PLMs) and co-training methods on the node classification task. Through
extensive benchmarking experiments, we showcase the utility of text attributes
in heterophilic graphs, analyze the challenges posed by heterophilic TAGs and
the limitations of existing models, and provide insights into the interplay
between graph structures and textual attributes. We have publicly released
HeTGB with baseline implementations to facilitate further research in this
field.
| new_dataset | 0.965932 |
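For readers new to the heterophily notion this benchmark targets, the standard edge-homophily ratio is a one-liner (this metric is common background, not part of HeTGB itself):

```python
import torch

def edge_homophily(edge_index, y):
    """Fraction of edges whose endpoints share a label. Homophilic graphs
    score near 1; heterophilic graphs score low.
    edge_index: (2, E) long tensor; y: (N,) node labels."""
    src, dst = edge_index
    return (y[src] == y[dst]).float().mean().item()
```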
2503.04826 | Haiyue Zu | Haiyue Zu, Jun Ge, Heting Xiao, Jile Xie, Zhangzhe Zhou, Yifan Meng,
Jiayi Ni, Junjie Niu, Linlin Zhang, Li Ni, Huilin Yang | Rethinking Few-Shot Medical Image Segmentation by SAM2: A Training-Free
Framework with Augmentative Prompting and Dynamic Matching | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reliance on large labeled datasets presents a significant challenge in
medical image segmentation. Few-shot learning offers a potential solution, but
existing methods often still require substantial training data. This paper
proposes a novel approach that leverages the Segment Anything Model 2 (SAM2), a
vision foundation model with strong video segmentation capabilities. We
conceptualize 3D medical image volumes as video sequences, departing from the
traditional slice-by-slice paradigm. Our core innovation is a support-query
matching strategy: we perform extensive data augmentation on a single labeled
support image and, for each frame in the query volume, algorithmically select
the most analogous augmented support image. This selected image, along with its
corresponding mask, is used as a mask prompt, driving SAM2's video
segmentation. This approach entirely avoids model retraining or parameter
updates. We demonstrate state-of-the-art performance on benchmark few-shot
medical image segmentation datasets, achieving significant improvements in
accuracy and annotation efficiency. This plug-and-play method offers a powerful
and generalizable solution for 3D medical image segmentation.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 06:12:13 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Zu",
"Haiyue",
""
],
[
"Ge",
"Jun",
""
],
[
"Xiao",
"Heting",
""
],
[
"Xie",
"Jile",
""
],
[
"Zhou",
"Zhangzhe",
""
],
[
"Meng",
"Yifan",
""
],
[
"Ni",
"Jiayi",
""
],
[
"Niu",
"Junjie",
""
],
[
"Zhang",
"Linlin",
""
],
[
"Ni",
"Li",
""
],
[
"Yang",
"Huilin",
""
]
]
| TITLE: Rethinking Few-Shot Medical Image Segmentation by SAM2: A Training-Free
Framework with Augmentative Prompting and Dynamic Matching
ABSTRACT: The reliance on large labeled datasets presents a significant challenge in
medical image segmentation. Few-shot learning offers a potential solution, but
existing methods often still require substantial training data. This paper
proposes a novel approach that leverages the Segment Anything Model 2 (SAM2), a
vision foundation model with strong video segmentation capabilities. We
conceptualize 3D medical image volumes as video sequences, departing from the
traditional slice-by-slice paradigm. Our core innovation is a support-query
matching strategy: we perform extensive data augmentation on a single labeled
support image and, for each frame in the query volume, algorithmically select
the most analogous augmented support image. This selected image, along with its
corresponding mask, is used as a mask prompt, driving SAM2's video
segmentation. This approach entirely avoids model retraining or parameter
updates. We demonstrate state-of-the-art performance on benchmark few-shot
medical image segmentation datasets, achieving significant improvements in
accuracy and annotation efficiency. This plug-and-play method offers a powerful
and generalizable solution for 3D medical image segmentation.
| no_new_dataset | 0.951142 |
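A minimal sketch of the support-query matching step described above: embed the augmented support pool and each query frame with any frozen encoder, then pick the most similar augmentation per frame to supply the mask prompt. The encoder choice and cosine metric are assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_support(query_feats, support_feats):
    """query_feats: (T, D) per-frame features of the query volume;
    support_feats: (A, D) features of A augmented support images.
    Returns, per frame, the index of the best-matching augmentation,
    whose image/mask pair then prompts SAM2."""
    q = F.normalize(query_feats, dim=-1)
    s = F.normalize(support_feats, dim=-1)
    return (q @ s.t()).argmax(dim=-1)        # (T,) augmentation indices
```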
2503.04828 | Shreya Agrawal | Vishakha Agrawal, Archie Chaudhury, Shreya Agrawal | Beyond Next Word Prediction: Developing Comprehensive Evaluation
Frameworks for measuring LLM performance on real world applications | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | While Large Language Models (LLMs) are fundamentally next-token prediction
systems, their practical applications extend far beyond this basic function.
From natural language processing and text generation to conversational
assistants and software use, LLMs have numerous use-cases, and have already
acquired a significant degree of enterprise adoption. To evaluate such models,
static evaluation datasets, consisting of a set of prompts and their
corresponding ground truths, are often used to benchmark the efficacy of the
model for a particular task. In this paper, we provide the basis for a more
comprehensive evaluation framework, based upon a traditional game and
tool-based architecture that enables a more overarching measurement of a
model's capabilities. For simplicity, we provide a generalized foundation that
can be extended, without significant alteration, to numerous scenarios, from
specific use cases such as supply chain management or financial reasoning, to
abstract measurements such as ethics or safety.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 06:44:38 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Agrawal",
"Vishakha",
""
],
[
"Chaudhury",
"Archie",
""
],
[
"Agrawal",
"Shreya",
""
]
]
| TITLE: Beyond Next Word Prediction: Developing Comprehensive Evaluation
Frameworks for measuring LLM performance on real world applications
ABSTRACT: While Large Language Models (LLMs) are fundamentally next-token prediction
systems, their practical applications extend far beyond this basic function.
From natural language processing and text generation to conversational
assistants and software use, LLMs have numerous use-cases, and have already
acquired a significant degree of enterprise adoption. To evaluate such models,
static evaluation datasets, consisting of a set of prompts and their
corresponding ground truths, are often used to benchmark the efficacy of the
model for a particular task. In this paper, we provide the basis for a more
comprehensive evaluation framework, based upon a traditional game and
tool-based architecture that enables a more overarching measurement of a
model's capabilities. For simplicity, we provide a generalized foundation that
can be extended, without significant alteration, to numerous scenarios, from
specific use cases such as supply chain management or financial reasoning, to
abstract measurements such as ethics or safety.
| no_new_dataset | 0.939913 |
2503.04829 | Tao Wang | Tao Wang, Zhihua Wu, Qiaozhi He, Jiaming Chu, Ling Qian, Yu Cheng,
Junliang Xing, Jian Zhao, Lei Jin | StickMotion: Generating 3D Human Motions by Drawing a Stickman | 11 pages, 5 figures, accepted by CVPR2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-to-motion generation, which translates textual descriptions into human
motions, has been challenging in accurately capturing detailed user-imagined
motions from simple text inputs. This paper introduces StickMotion, an
efficient diffusion-based network designed for multi-condition scenarios, which
generates desired motions based on traditional text and our proposed stickman
conditions for global and local control of these motions, respectively. We
address the challenges introduced by the user-friendly stickman from three
perspectives: 1) Data generation. We develop an algorithm to generate
hand-drawn stickmen automatically across different dataset formats. 2)
Multi-condition fusion. We propose a multi-condition module that integrates
into the diffusion process and obtains outputs of all possible condition
combinations, reducing computational complexity and enhancing StickMotion's
performance compared to conventional approaches with the self-attention module.
3) Dynamic supervision. We empower StickMotion to make minor adjustments to the
stickman's position within the output sequences, generating more natural
movements through our proposed dynamic supervision strategy. Quantitative
experiments and user studies show that sketching stickmen saves users about
51.5% of the time needed to generate motions consistent with their imagination. Our
codes, demos, and relevant data will be released to facilitate further research
and validation within the scientific community.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 07:16:14 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Wang",
"Tao",
""
],
[
"Wu",
"Zhihua",
""
],
[
"He",
"Qiaozhi",
""
],
[
"Chu",
"Jiaming",
""
],
[
"Qian",
"Ling",
""
],
[
"Cheng",
"Yu",
""
],
[
"Xing",
"Junliang",
""
],
[
"Zhao",
"Jian",
""
],
[
"Jin",
"Lei",
""
]
]
| TITLE: StickMotion: Generating 3D Human Motions by Drawing a Stickman
ABSTRACT: Text-to-motion generation, which translates textual descriptions into human
motions, has been challenging in accurately capturing detailed user-imagined
motions from simple text inputs. This paper introduces StickMotion, an
efficient diffusion-based network designed for multi-condition scenarios, which
generates desired motions based on traditional text and our proposed stickman
conditions for global and local control of these motions, respectively. We
address the challenges introduced by the user-friendly stickman from three
perspectives: 1) Data generation. We develop an algorithm to generate
hand-drawn stickmen automatically across different dataset formats. 2)
Multi-condition fusion. We propose a multi-condition module that integrates
into the diffusion process and obtains outputs of all possible condition
combinations, reducing computational complexity and enhancing StickMotion's
performance compared to conventional approaches with the self-attention module.
3) Dynamic supervision. We empower StickMotion to make minor adjustments to the
stickman's position within the output sequences, generating more natural
movements through our proposed dynamic supervision strategy. Quantitative
experiments and user studies show that sketching stickmen saves users about
51.5% of the time needed to generate motions consistent with their imagination. Our
codes, demos, and relevant data will be released to facilitate further research
and validation within the scientific community.
| no_new_dataset | 0.947721 |
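The multi-condition module itself is not specified in the abstract; for orientation, one common way to combine two conditions (text and stickman) in a diffusion sampler is classifier-free-guidance-style mixing. The sketch below is a hypothetical stand-in, not StickMotion's module; `model`, the embeddings, and the guidance weights are all assumptions.

```python
def multi_condition_eps(model, x_t, t, text_emb, stick_emb,
                        w_text=2.0, w_stick=2.0):
    """Combine per-condition noise predictions a la classifier-free guidance.
    model(x_t, t, text, stick) -> predicted noise; None drops a condition."""
    uncond = model(x_t, t, None, None)
    text_only = model(x_t, t, text_emb, None)
    stick_only = model(x_t, t, None, stick_emb)
    return (uncond
            + w_text * (text_only - uncond)      # global (text) control
            + w_stick * (stick_only - uncond))   # local (stickman) control
```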
2503.04831 | florian lecourt | Florian Lecourt (LIRMM | ADVANSE), Madalina Croitoru (GRAPHIK),
Konstantin Todorov (LIRMM | WEB3, LIRMM, WEB3) | "Only ChatGPT gets me": An Empirical Analysis of GPT versus other Large
Language Models for Emotion Detection in Text | null | WWW '25 - ACM Web Conference (formerly International World Wide
Web Conference), Apr 2025, Sydney, Australia | 10.1145/3701716.3718375 | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work investigates the capabilities of large language models (LLMs) in
detecting and understanding human emotions through text. Drawing upon emotion
models from psychology, we adopt an interdisciplinary perspective that
integrates computational and affective sciences insights. The main goal is to
assess how accurately they can identify emotions expressed in textual
interactions and compare different models on this specific task. This research
contributes to broader efforts to enhance human-computer interaction, making
artificial intelligence technologies more responsive and sensitive to users'
emotional nuances. By employing a methodology that involves comparisons with a
state-of-the-art model on the GoEmotions dataset, we aim to gauge LLMs'
effectiveness as a system for emotional analysis, paving the way for potential
applications in various fields that require a nuanced understanding of human
language.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 09:47:49 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Lecourt",
"Florian",
"",
"LIRMM | ADVANSE"
],
[
"Croitoru",
"Madalina",
"",
"GRAPHIK"
],
[
"Todorov",
"Konstantin",
"",
"LIRMM | WEB3, LIRMM, WEB3"
]
]
| TITLE: "Only ChatGPT gets me": An Empirical Analysis of GPT versus other Large
Language Models for Emotion Detection in Text
ABSTRACT: This work investigates the capabilities of large language models (LLMs) in
detecting and understanding human emotions through text. Drawing upon emotion
models from psychology, we adopt an interdisciplinary perspective that
integrates insights from the computational and affective sciences. The main goal is to
assess how accurately they can identify emotions expressed in textual
interactions and compare different models on this specific task. This research
contributes to broader efforts to enhance human-computer interaction, making
artificial intelligence technologies more responsive and sensitive to users'
emotional nuances. By employing a methodology that involves comparisons with a
state-of-the-art model on the GoEmotions dataset, we aim to gauge LLMs'
effectiveness as systems for emotional analysis, paving the way for potential
applications in various fields that require a nuanced understanding of human
language.
| no_new_dataset | 0.945851 |
2503.04835 | Donghyeok Shin | Donghyeok Shin, HeeSun Bae, Gyuwon Sim, Wanmo Kang, Il-Chul Moon | Distilling Dataset into Neural Field | The Thirteenth International Conference on Learning Representations
(ICLR 2025) | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Utilizing a large-scale dataset is essential for training high-performance
deep learning models, but it also comes with substantial computation and
storage costs. To overcome these challenges, dataset distillation has emerged
as a promising solution by compressing the large-scale dataset into a smaller
synthetic dataset that retains the essential information needed for training.
This paper proposes a novel parameterization framework for dataset
distillation, coined Distilling Dataset into Neural Field (DDiF), which
leverages the neural field to store the necessary information of the
large-scale dataset. Due to the unique nature of the neural field, which takes
coordinates as input and outputs the corresponding quantities, DDiF effectively preserves the
information and easily generates various shapes of data. We theoretically
confirm that DDiF exhibits greater expressiveness than some prior
parameterizations when the budget for a single synthetic instance is the same. Through
extensive experiments, we demonstrate that DDiF achieves superior performance
on several benchmark datasets, extending beyond the image domain to include
video, audio, and 3D voxel. We release the code at
https://github.com/aailab-kaist/DDiF.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 14:33:29 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Shin",
"Donghyeok",
""
],
[
"Bae",
"HeeSun",
""
],
[
"Sim",
"Gyuwon",
""
],
[
"Kang",
"Wanmo",
""
],
[
"Moon",
"Il-Chul",
""
]
]
| TITLE: Distilling Dataset into Neural Field
ABSTRACT: Utilizing a large-scale dataset is essential for training high-performance
deep learning models, but it also comes with substantial computation and
storage costs. To overcome these challenges, dataset distillation has emerged
as a promising solution by compressing the large-scale dataset into a smaller
synthetic dataset that retains the essential information needed for training.
This paper proposes a novel parameterization framework for dataset
distillation, coined Distilling Dataset into Neural Field (DDiF), which
leverages the neural field to store the necessary information of the
large-scale dataset. Due to the unique nature of the neural field, which takes
coordinates as input and outputs the corresponding quantities, DDiF effectively preserves the
information and easily generates various shapes of data. We theoretically
confirm that DDiF exhibits greater expressiveness than some prior
parameterizations when the budget for a single synthetic instance is the same. Through
extensive experiments, we demonstrate that DDiF achieves superior performance
on several benchmark datasets, extending beyond the image domain to include
video, audio, and 3D voxel. We release the code at
https://github.com/aailab-kaist/DDiF.
| no_new_dataset | 0.946498 |
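A minimal sketch of the core storage idea: a tiny coordinate network mapping (x, y) to RGB, so one synthetic 'image' lives in the network's weights and can be decoded at any resolution. The SIREN-style architecture and sizes below are illustrative assumptions, not DDiF's actual network.

```python
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    """Coordinate MLP: (x, y) in [-1, 1]^2 -> RGB."""
    def __init__(self, hidden=64, w0=30.0):
        super().__init__()
        self.w0 = w0
        self.layers = nn.ModuleList(
            [nn.Linear(2, hidden), nn.Linear(hidden, hidden),
             nn.Linear(hidden, 3)])

    def forward(self, coords):                           # (N, 2)
        h = torch.sin(self.w0 * self.layers[0](coords))  # sine activations
        h = torch.sin(self.layers[1](h))
        return self.layers[2](h)                         # (N, 3)

def decode(field, H, W):
    """Render the stored field onto any H x W pixel grid."""
    ys, xs = torch.linspace(-1, 1, H), torch.linspace(-1, 1, W)
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), -1)
    return field(grid.reshape(-1, 2)).reshape(H, W, 3)
```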
2503.04836 | Yanfei Li | Yanfei Li, Teng Yin, Wenyi Shang, Jingyu Liu, Xi Wang, Kaiyang Zhao | PGAD: Prototype-Guided Adaptive Distillation for Multi-Modal Learning in
AD Diagnosis | null | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Missing modalities pose a major issue in Alzheimer's Disease (AD) diagnosis,
as many subjects lack full imaging data due to cost and clinical constraints.
While multi-modal learning leverages complementary information, most existing
methods train only on complete data, ignoring the large proportion of
incomplete samples in real-world datasets like ADNI. This reduces the effective
training set and limits the full use of valuable medical data. While some
methods incorporate incomplete samples, they fail to effectively address
inter-modal feature alignment and knowledge transfer challenges under high
missing rates. To address this, we propose a Prototype-Guided Adaptive
Distillation (PGAD) framework that directly incorporates incomplete multi-modal
data into training. PGAD enhances missing modality representations through
prototype matching and balances learning with a dynamic sampling strategy. We
validate PGAD on the ADNI dataset with varying missing rates (20%, 50%, and
70%) and demonstrate that it significantly outperforms state-of-the-art
approaches. Ablation studies confirm the effectiveness of prototype matching
and adaptive sampling, highlighting the potential of our framework for robust
and scalable AD diagnosis in real-world clinical settings.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 14:39:31 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Li",
"Yanfei",
""
],
[
"Yin",
"Teng",
""
],
[
"Shang",
"Wenyi",
""
],
[
"Liu",
"Jingyu",
""
],
[
"Wang",
"Xi",
""
],
[
"Zhao",
"Kaiyang",
""
]
]
| TITLE: PGAD: Prototype-Guided Adaptive Distillation for Multi-Modal Learning in
AD Diagnosis
ABSTRACT: Missing modalities pose a major issue in Alzheimer's Disease (AD) diagnosis,
as many subjects lack full imaging data due to cost and clinical constraints.
While multi-modal learning leverages complementary information, most existing
methods train only on complete data, ignoring the large proportion of
incomplete samples in real-world datasets like ADNI. This reduces the effective
training set and limits the full use of valuable medical data. While some
methods incorporate incomplete samples, they fail to effectively address
inter-modal feature alignment and knowledge transfer challenges under high
missing rates. To address this, we propose a Prototype-Guided Adaptive
Distillation (PGAD) framework that directly incorporates incomplete multi-modal
data into training. PGAD enhances missing modality representations through
prototype matching and balances learning with a dynamic sampling strategy. We
validate PGAD on the ADNI dataset with varying missing rates (20%, 50%, and
70%) and demonstrate that it significantly outperforms state-of-the-art
approaches. Ablation studies confirm the effectiveness of prototype matching
and adaptive sampling, highlighting the potential of our framework for robust
and scalable AD diagnosis in real-world clinical settings.
| no_new_dataset | 0.946745 |
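One plain reading of the prototype-matching step, as a hedged sketch: snap a missing-modality feature to its nearest class prototype by cosine similarity. PGAD's actual module is richer (it also distills and balances sampling), so this is only the skeleton of the idea.

```python
import torch
import torch.nn.functional as F

def prototype_match(feats, prototypes):
    """feats: (B, D) features from the incomplete-modality path;
    prototypes: (K, D) class prototypes (e.g., running class means).
    Returns the nearest prototype per sample to guide the representation."""
    f = F.normalize(feats, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    idx = (f @ p.t()).argmax(dim=-1)      # nearest prototype index
    return prototypes[idx]
```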
2503.04837 | Ziyuan Yang | Ziyuan Yang, Yingyu Chen, Chengrui Gao, Andrew Beng Jin Teoh, Bob
Zhang, Yi Zhang | FedPalm: A General Federated Learning Framework for Closed- and Open-Set
Palmprint Verification | null | null | null | null | cs.CV cs.AI cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current deep learning (DL)-based palmprint verification models rely on
centralized training with large datasets, which raises significant privacy
concerns due to biometric data's sensitive and immutable nature. Federated
learning~(FL), a privacy-preserving distributed learning paradigm, offers a
compelling alternative by enabling collaborative model training without the
need for data sharing. However, FL-based palmprint verification faces critical
challenges, including data heterogeneity from diverse identities and the
absence of standardized evaluation benchmarks. This paper addresses these gaps
by establishing a comprehensive benchmark for FL-based palmprint verification,
which explicitly defines and evaluates two practical scenarios: closed-set and
open-set verification. We propose FedPalm, a unified FL framework that balances
local adaptability with global generalization. Each client trains a
personalized textural expert tailored to local data and collaboratively
contributes to a shared global textural expert for extracting generalized
features. To further enhance verification performance, we introduce a Textural
Expert Interaction Module that dynamically routes textural features among
experts to generate refined side textural features. Learnable parameters are
employed to model relationships between original and side features, fostering
cross-texture-expert interaction and improving feature discrimination.
Extensive experiments validate the effectiveness of FedPalm, demonstrating
robust performance across both scenarios and providing a promising foundation
for advancing FL-based palmprint verification research.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 14:49:42 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Yang",
"Ziyuan",
""
],
[
"Chen",
"Yingyu",
""
],
[
"Gao",
"Chengrui",
""
],
[
"Teoh",
"Andrew Beng Jin",
""
],
[
"Zhang",
"Bob",
""
],
[
"Zhang",
"Yi",
""
]
]
| TITLE: FedPalm: A General Federated Learning Framework for Closed- and Open-Set
Palmprint Verification
ABSTRACT: Current deep learning (DL)-based palmprint verification models rely on
centralized training with large datasets, which raises significant privacy
concerns due to biometric data's sensitive and immutable nature. Federated
learning~(FL), a privacy-preserving distributed learning paradigm, offers a
compelling alternative by enabling collaborative model training without the
need for data sharing. However, FL-based palmprint verification faces critical
challenges, including data heterogeneity from diverse identities and the
absence of standardized evaluation benchmarks. This paper addresses these gaps
by establishing a comprehensive benchmark for FL-based palmprint verification,
which explicitly defines and evaluates two practical scenarios: closed-set and
open-set verification. We propose FedPalm, a unified FL framework that balances
local adaptability with global generalization. Each client trains a
personalized textural expert tailored to local data and collaboratively
contributes to a shared global textural expert for extracting generalized
features. To further enhance verification performance, we introduce a Textural
Expert Interaction Module that dynamically routes textural features among
experts to generate refined side textural features. Learnable parameters are
employed to model relationships between original and side features, fostering
cross-texture-expert interaction and improving feature discrimination.
Extensive experiments validate the effectiveness of FedPalm, demonstrating
robust performance across both scenarios and providing a promising foundation
for advancing FL-based palmprint verification research.
| no_new_dataset | 0.953405 |
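For context, the aggregation primitive that FL frameworks like FedPalm build on is weighted parameter averaging (FedAvg); a minimal sketch follows, with FedPalm's textural-expert routing and interaction module deliberately omitted.

```python
import copy

def fedavg(global_model, client_states, weights):
    """Average client state dicts into the shared global model.
    weights should sum to 1 (e.g., proportional to client data sizes)."""
    avg = copy.deepcopy(client_states[0])
    for name in avg:
        mixed = sum(w * s[name].float()
                    for s, w in zip(client_states, weights))
        avg[name] = mixed.to(client_states[0][name].dtype)
    global_model.load_state_dict(avg)
    return global_model
```

Only model parameters leave each client; raw palmprints never do, which is the privacy argument the abstract makes.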
2503.04849 | Ramteja Sajja | Likith Kadiyala, Ramteja Sajja, Yusuf Sermet, Ibrahim Demir | Enhancing Collective Intelligence in Large Language Models Through
Emotional Integration | 23 pages, 8 figures | null | null | null | cs.CL cs.AI cs.CY cs.HC cs.MA | http://creativecommons.org/licenses/by/4.0/ | This research investigates the integration of emotional diversity into Large
Language Models (LLMs) to enhance collective intelligence. Inspired by the
human wisdom of crowds phenomenon, where group decisions often outperform
individual judgments, we fine-tuned the DarkIdol-Llama-3.1-8B model using
Google's GoEmotions dataset and Low-Rank Adaptation (LoRA) to simulate
emotionally diverse responses. Evaluating the model on a distance estimation
task between Fargo, ND, and Seattle, WA, across 15,064 unique persona
configurations, we analyzed how emotional states and social attributes
influence decision-making. Our findings demonstrate that emotional integration
shapes response patterns while maintaining acceptable prediction accuracy,
revealing its potential to enhance artificial collective intelligence. This
study provides valuable insights into the interplay of emotional diversity and
decision-making in LLMs, suggesting pathways for creating emotionally aware AI
systems that balance emotional depth with analytical precision.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 23:42:48 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Kadiyala",
"Likith",
""
],
[
"Sajja",
"Ramteja",
""
],
[
"Sermet",
"Yusuf",
""
],
[
"Demir",
"Ibrahim",
""
]
]
| TITLE: Enhancing Collective Intelligence in Large Language Models Through
Emotional Integration
ABSTRACT: This research investigates the integration of emotional diversity into Large
Language Models (LLMs) to enhance collective intelligence. Inspired by the
human wisdom of crowds phenomenon, where group decisions often outperform
individual judgments, we fine-tuned the DarkIdol-Llama-3.1-8B model using
Google's GoEmotions dataset and Low-Rank Adaptation (LoRA) to simulate
emotionally diverse responses. Evaluating the model on a distance estimation
task between Fargo, ND, and Seattle, WA, across 15,064 unique persona
configurations, we analyzed how emotional states and social attributes
influence decision-making. Our findings demonstrate that emotional integration
shapes response patterns while maintaining acceptable prediction accuracy,
revealing its potential to enhance artificial collective intelligence. This
study provides valuable insights into the interplay of emotional diversity and
decision-making in LLMs, suggesting pathways for creating emotionally aware AI
systems that balance emotional depth with analytical precision.
| no_new_dataset | 0.943556 |
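A hedged sketch of the fine-tuning setup the abstract describes, using the Hugging Face `peft` library for LoRA. The base-model identifier is a placeholder (the paper fine-tunes DarkIdol-Llama-3.1-8B), and the rank, alpha, and target modules are assumed hyperparameters.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.1-8B"   # placeholder for DarkIdol-Llama-3.1-8B
model = AutoModelForCausalLM.from_pretrained(base)
config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
    task_type="CAUSAL_LM")
model = get_peft_model(model, config)      # only low-rank adapters train
model.print_trainable_parameters()
# Training would then proceed on GoEmotions-derived (text, emotion) pairs.
```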
2503.04852 | Yiran Qiao | Disheng Liu, Yiran Qiao, Wuche Liu, Yiren Lu, Yunlai Zhou, Tuo Liang,
Yu Yin, Jing Ma | CAUSAL3D: A Comprehensive Benchmark for Causal Learning from Visual Data | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | True intelligence hinges on the ability to uncover and leverage hidden causal
relations. Despite significant progress in AI and computer vision (CV), there
remains a lack of benchmarks for assessing models' abilities to infer latent
causality from complex visual data. In this paper, we introduce
\textsc{\textbf{Causal3D}}, a novel and comprehensive benchmark that integrates
structured data (tables) with corresponding visual representations (images) to
evaluate causal reasoning. Designed within a systematic framework, Causal3D
comprises 19 3D-scene datasets capturing diverse causal relations, views, and
backgrounds, enabling evaluations across scenes of varying complexity. We
assess multiple state-of-the-art methods, including classical causal discovery,
causal representation learning, and large/vision-language models (LLMs/VLMs).
Our experiments show that as causal structures grow more complex without prior
knowledge, performance declines significantly, highlighting the challenges even
advanced methods face in complex causal scenarios. Causal3D serves as a vital
resource for advancing causal reasoning in CV and fostering trustworthy AI in
critical domains.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 03:40:01 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Liu",
"Disheng",
""
],
[
"Qiao",
"Yiran",
""
],
[
"Liu",
"Wuche",
""
],
[
"Lu",
"Yiren",
""
],
[
"Zhou",
"Yunlai",
""
],
[
"Liang",
"Tuo",
""
],
[
"Yin",
"Yu",
""
],
[
"Ma",
"Jing",
""
]
]
| TITLE: CAUSAL3D: A Comprehensive Benchmark for Causal Learning from Visual Data
ABSTRACT: True intelligence hinges on the ability to uncover and leverage hidden causal
relations. Despite significant progress in AI and computer vision (CV), there
remains a lack of benchmarks for assessing models' abilities to infer latent
causality from complex visual data. In this paper, we introduce
\textsc{\textbf{Causal3D}}, a novel and comprehensive benchmark that integrates
structured data (tables) with corresponding visual representations (images) to
evaluate causal reasoning. Designed within a systematic framework, Causal3D
comprises 19 3D-scene datasets capturing diverse causal relations, views, and
backgrounds, enabling evaluations across scenes of varying complexity. We
assess multiple state-of-the-art methods, including classical causal discovery,
causal representation learning, and large/vision-language models (LLMs/VLMs).
Our experiments show that as causal structures grow more complex without prior
knowledge, performance declines significantly, highlighting the challenges even
advanced methods face in complex causal scenarios. Causal3D serves as a vital
resource for advancing causal reasoning in CV and fostering trustworthy AI in
critical domains.
| no_new_dataset | 0.867766 |
2503.04853 | Huaibing Peng | Yansong Gao, Huaibing Peng, Hua Ma, Zhiyang Dai, Shuo Wang, Hongsheng
Hu, Anmin Fu, Minhui Xue | From Pixels to Trajectory: Universal Adversarial Example Detection via
Temporal Imprints | null | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For the first time, we unveil discernible temporal (or historical) trajectory
imprints resulting from adversarial example (AE) attacks. Standing in contrast
to existing studies all focusing on spatial (or static) imprints within the
targeted underlying victim models, we present a fresh temporal paradigm for
understanding these attacks. Our key discovery is that these imprints are
encapsulated within a single loss metric, spanning universally across diverse
tasks such as classification and regression, and modalities including image,
text, and audio. Recognizing the distinct nature of loss between adversarial
and clean examples, we exploit this temporal imprint for AE detection by
proposing TRAIT (TRaceable Adversarial temporal trajectory ImprinTs). TRAIT
operates under minimal assumptions without prior knowledge of attacks, thereby
framing the detection challenge as a one-class classification problem. However,
detecting AEs is still challenged by significant overlaps between the
constructed synthetic losses of adversarial and clean examples due to the
absence of ground truth for incoming inputs. TRAIT addresses this challenge by
converting the synthetic loss into a spectrum signature, using the technique of
Fast Fourier Transform to highlight the discrepancies, drawing inspiration from
the temporal nature of the imprints, analogous to time-series signals. Across
12 AE attacks including SMACK (USENIX Sec'2023), TRAIT demonstrates consistent
outstanding performance across comprehensively evaluated modalities, tasks,
datasets, and model architectures. In all scenarios, TRAIT achieves an AE
detection accuracy exceeding 97%, often around 99%, while maintaining a false
rejection rate of 1%. TRAIT remains effective under the formulated strong
adaptive attacks.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 06:00:04 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Gao",
"Yansong",
""
],
[
"Peng",
"Huaibing",
""
],
[
"Ma",
"Hua",
""
],
[
"Dai",
"Zhiyang",
""
],
[
"Wang",
"Shuo",
""
],
[
"Hu",
"Hongsheng",
""
],
[
"Fu",
"Anmin",
""
],
[
"Xue",
"Minhui",
""
]
]
| TITLE: From Pixels to Trajectory: Universal Adversarial Example Detection via
Temporal Imprints
ABSTRACT: For the first time, we unveil discernible temporal (or historical) trajectory
imprints resulting from adversarial example (AE) attacks. Standing in contrast
to existing studies all focusing on spatial (or static) imprints within the
targeted underlying victim models, we present a fresh temporal paradigm for
understanding these attacks. Our key discovery is that these imprints are
encapsulated within a single loss metric, spanning universally across diverse
tasks such as classification and regression, and modalities including image,
text, and audio. Recognizing the distinct nature of loss between adversarial
and clean examples, we exploit this temporal imprint for AE detection by
proposing TRAIT (TRaceable Adversarial temporal trajectory ImprinTs). TRAIT
operates under minimal assumptions without prior knowledge of attacks, thereby
framing the detection challenge as a one-class classification problem. However,
detecting AEs is still challenged by significant overlaps between the
constructed synthetic losses of adversarial and clean examples due to the
absence of ground truth for incoming inputs. TRAIT addresses this challenge by
converting the synthetic loss into a spectrum signature, using the technique of
Fast Fourier Transform to highlight the discrepancies, drawing inspiration from
the temporal nature of the imprints, analogous to time-series signals. Across
12 AE attacks including SMACK (USENIX Sec'2023), TRAIT demonstrates consistent
outstanding performance across comprehensively evaluated modalities, tasks,
datasets, and model architectures. In all scenarios, TRAIT achieves an AE
detection accuracy exceeding 97%, often around 99%, while maintaining a false
rejection rate of 1%. TRAIT remains effective under the formulated strong
adaptive attacks.
| no_new_dataset | 0.944536 |
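A minimal sketch of the spectrum-signature step as described: normalize a per-step loss trajectory and take its one-sided FFT magnitude spectrum. TRAIT's full pipeline (synthetic-loss construction and the one-class detector on top) is not reproduced here.

```python
import numpy as np

def spectrum_signature(losses):
    """losses: per-step loss trajectory (list/array of floats).
    Returns a unit-norm magnitude spectrum for downstream one-class
    classification of clean vs. adversarial imprints."""
    x = np.asarray(losses, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-8)    # normalize the series
    mag = np.abs(np.fft.rfft(x))             # one-sided magnitude spectrum
    return mag / (np.linalg.norm(mag) + 1e-8)
```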
2503.04856 | Junwoo Ha | Junwoo Ha, Hyunjun Kim, Sangyoon Yu, Haon Park, Ashkan Yousefpour,
Yuna Park, Suhyun Kim | One-Shot is Enough: Consolidating Multi-Turn Attacks into Efficient
Single-Turn Prompts for LLMs | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Despite extensive safety enhancements in large language models (LLMs),
multi-turn "jailbreak" conversations crafted by skilled human adversaries can
still breach even the most sophisticated guardrails. However, these multi-turn
attacks demand considerable manual effort, limiting their scalability. In this
work, we introduce a novel approach called Multi-turn-to-Single-turn (M2S) that
systematically converts multi-turn jailbreak prompts into single-turn attacks.
Specifically, we propose three conversion strategies - Hyphenize, Numberize,
and Pythonize - each preserving sequential context yet packaging it in a single
query. Our experiments on the Multi-turn Human Jailbreak (MHJ) dataset show
that M2S often increases or maintains high Attack Success Rates (ASRs) compared
to original multi-turn conversations. Notably, using a StrongREJECT-based
evaluation of harmfulness, M2S achieves up to 95.9% ASR on Mistral-7B and
outperforms original multi-turn prompts by as much as 17.5% in absolute
improvement on GPT-4o. Further analysis reveals that certain adversarial
tactics, when consolidated into a single prompt, exploit structural formatting
cues to evade standard policy checks. These findings underscore that
single-turn attacks - despite being simpler and cheaper to conduct - can be
just as potent as, if not more potent than, their multi-turn counterparts. Our findings
underscore the urgent need to reevaluate and reinforce LLM safety strategies,
given how adversarial queries can be compacted into a single prompt while still
retaining sufficient complexity to bypass existing safety measures.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 07:34:51 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Ha",
"Junwoo",
""
],
[
"Kim",
"Hyunjun",
""
],
[
"Yu",
"Sangyoon",
""
],
[
"Park",
"Haon",
""
],
[
"Yousefpour",
"Ashkan",
""
],
[
"Park",
"Yuna",
""
],
[
"Kim",
"Suhyun",
""
]
]
| TITLE: One-Shot is Enough: Consolidating Multi-Turn Attacks into Efficient
Single-Turn Prompts for LLMs
ABSTRACT: Despite extensive safety enhancements in large language models (LLMs),
multi-turn "jailbreak" conversations crafted by skilled human adversaries can
still breach even the most sophisticated guardrails. However, these multi-turn
attacks demand considerable manual effort, limiting their scalability. In this
work, we introduce a novel approach called Multi-turn-to-Single-turn (M2S) that
systematically converts multi-turn jailbreak prompts into single-turn attacks.
Specifically, we propose three conversion strategies - Hyphenize, Numberize,
and Pythonize - each preserving sequential context yet packaging it in a single
query. Our experiments on the Multi-turn Human Jailbreak (MHJ) dataset show
that M2S often increases or maintains high Attack Success Rates (ASRs) compared
to original multi-turn conversations. Notably, using a StrongREJECT-based
evaluation of harmfulness, M2S achieves up to 95.9% ASR on Mistral-7B and
outperforms original multi-turn prompts by as much as 17.5% in absolute
improvement on GPT-4o. Further analysis reveals that certain adversarial
tactics, when consolidated into a single prompt, exploit structural formatting
cues to evade standard policy checks. These findings underscore that
single-turn attacks - despite being simpler and cheaper to conduct - can be
just as potent as, if not more potent than, their multi-turn counterparts. Our findings
underscore the urgent need to reevaluate and reinforce LLM safety strategies,
given how adversarial queries can be compacted into a single prompt while still
retaining sufficient complexity to bypass existing safety measures.
| no_new_dataset | 0.722821 |
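Plain-reading sketches of the three conversion strategies; the paper's exact prompt templates are not given in the abstract, so the framing sentences below are assumptions.

```python
def hyphenize(turns):
    """Fold multi-turn content into one hyphen-list prompt."""
    body = "\n".join(f"- {t}" for t in turns)
    return f"Please answer the following requests in order:\n{body}"

def numberize(turns):
    """Same idea with a numbered list."""
    body = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(turns))
    return f"Please answer the following requests in order:\n{body}"

def pythonize(turns):
    """Package the turns as a Python list the model is asked to walk."""
    items = ",\n    ".join(repr(t) for t in turns)
    return f"Answer each element of `steps` in order:\nsteps = [\n    {items}\n]"
```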
2503.04857 | Alessandro Gabbana | Abhisek Ganguly, Alessandro Gabbana, Vybhav Rao, Sauro Succi, Santosh
Ansumali | A kinetic-based regularization method for data science applications | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | We propose a physics-based regularization technique for function learning,
inspired by statistical mechanics. By drawing an analogy between optimizing the
parameters of an interpolator and minimizing the energy of a system, we
introduce corrections that impose constraints on the lower-order moments of the
data distribution. This minimizes the discrepancy between the discrete and
continuum representations of the data, in turn allowing access to more
favorable energy landscapes, thus improving the accuracy of the interpolator.
Our approach improves performance in both interpolation and regression tasks,
even in high-dimensional spaces. Unlike traditional methods, it does not
require empirical parameter tuning, making it particularly effective for
handling noisy data. We also show that thanks to its local nature, the method
offers computational and memory efficiency advantages over Radial Basis
Function interpolators, especially for large datasets.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 08:12:01 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Ganguly",
"Abhisek",
""
],
[
"Gabbana",
"Alessandro",
""
],
[
"Rao",
"Vybhav",
""
],
[
"Succi",
"Sauro",
""
],
[
"Ansumali",
"Santosh",
""
]
]
| TITLE: A kinetic-based regularization method for data science applications
ABSTRACT: We propose a physics-based regularization technique for function learning,
inspired by statistical mechanics. By drawing an analogy between optimizing the
parameters of an interpolator and minimizing the energy of a system, we
introduce corrections that impose constraints on the lower-order moments of the
data distribution. This minimizes the discrepancy between the discrete and
continuum representations of the data, in turn allowing access to more
favorable energy landscapes, thus improving the accuracy of the interpolator.
Our approach improves performance in both interpolation and regression tasks,
even in high-dimensional spaces. Unlike traditional methods, it does not
require empirical parameter tuning, making it particularly effective for
handling noisy data. We also show that thanks to its local nature, the method
offers computational and memory efficiency advantages over Radial Basis
Function interpolators, especially for large datasets.
| no_new_dataset | 0.946498 |
2503.04859 | Stefano De Paoli Prof | Stefano De Paoli and Walter Stan Mathis | Codebook Reduction and Saturation: Novel observations on Inductive
Thematic Saturation for Large Language Models and initial coding in Thematic
Analysis | null | null | null | null | cs.CL cs.AI cs.CY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper reflects on the process of performing Thematic Analysis with Large
Language Models (LLMs). Specifically, the paper deals with the problem of
analytical saturation of initial codes, as produced by LLMs. Thematic Analysis
is a well-established qualitative analysis method composed of interlinked
phases. A key phase is the initial coding, where the analysts assign labels to
discrete components of a dataset. Saturation is a way to measure the validity
of a qualitative analysis and relates to the recurrence and repetition of
initial codes. In the paper we reflect on how well LLMs achieve analytical
saturation and also propose a novel technique to measure Inductive Thematic
Saturation (ITS). This technique leverages a programming framework called
DSPy. The proposed novel approach allows a precise measurement of ITS.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 08:52:03 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"De Paoli",
"Stefano",
""
],
[
"Mathis",
"Walter Stan",
""
]
]
| TITLE: Codebook Reduction and Saturation: Novel observations on Inductive
Thematic Saturation for Large Language Models and initial coding in Thematic
Analysis
ABSTRACT: This paper reflects on the process of performing Thematic Analysis with Large
Language Models (LLMs). Specifically, the paper deals with the problem of
analytical saturation of initial codes, as produced by LLMs. Thematic Analysis
is a well-established qualitative analysis method composed of interlinked
phases. A key phase is the initial coding, where the analysts assign labels to
discrete components of a dataset. Saturation is a way to measure the validity
of a qualitative analysis and relates to the recurrence and repetition of
initial codes. In the paper we reflect on how well LLMs achieve analytical
saturation and also propose a novel technique to measure Inductive Thematic
Saturation (ITS). This technique leverages a programming framework called
DSPy. The proposed novel approach allows a precise measurement of ITS.
| no_new_dataset | 0.945096 |
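As a simple point of reference for inductive thematic saturation (distinct from the paper's DSPy-based measure), one can track how the set of unique initial codes grows as documents are coded; a flattening curve signals saturation.

```python
def its_curve(codes_per_doc):
    """codes_per_doc: iterable of code sets, one per coded document.
    Returns the cumulative number of distinct codes after each document."""
    seen, curve = set(), []
    for codes in codes_per_doc:
        seen.update(codes)
        curve.append(len(seen))
    return curve

# its_curve([{"a", "b"}, {"b"}, {"b", "c"}, {"c"}]) -> [2, 2, 3, 3]
```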
2503.04863 | Ziyue Zhao | Ziyue Zhao, Qining Qi, Jianfa Ma | Manboformer: Learning Gaussian Representations via Spatial-temporal
Attention Mechanism | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of 3D semantic occupancy
prediction for autonomous driving, GaussianFormer proposed describing scenes
with sparse, object-centric 3D semantic Gaussians, an alternative to
voxel-based grid prediction with lower memory requirements. Each 3D Gaussian
represents a flexible region of interest and its semantic features, which are
iteratively refined by an attention mechanism. Experiments reveal, however,
that this method requires more Gaussians than the query resolution of the
original dense grid network, which impairs performance. We therefore optimize
GaussianFormer by exploiting temporal information that it leaves unused: we
adapt the spatial-temporal self-attention mechanism of earlier grid-based
occupancy networks to GaussianFormer. Experiments on the NuScenes dataset are
currently underway.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 09:40:46 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Zhao",
"Ziyue",
""
],
[
"Qi",
"Qining",
""
],
[
"Ma",
"Jianfa",
""
]
]
| TITLE: Manboformer: Learning Gaussian Representations via Spatial-temporal
Attention Mechanism
ABSTRACT: In the field of 3D semantic occupancy prediction for autonomous
driving, GaussianFormer proposed describing scenes with sparse, object-centric
3D semantic Gaussians, an alternative to voxel-based grid prediction with
lower memory requirements. Each 3D Gaussian represents a flexible region of
interest and its semantic features, which are iteratively refined by an
attention mechanism. Experiments reveal, however, that this method requires
more Gaussians than the query resolution of the original dense grid network,
which impairs performance. We therefore optimize GaussianFormer by exploiting
temporal information that it leaves unused: we adapt the spatial-temporal
self-attention mechanism of earlier grid-based occupancy networks to
GaussianFormer. Experiments on the NuScenes dataset are currently underway.
| no_new_dataset | 0.951818 |
2503.04869 | Bo Yuan | Bo Yuan, Yulin Chen, Zhen Tan, Wang Jinyan, Huan Liu, Yin Zhang | Label Distribution Learning-Enhanced Dual-KNN for Text Classification | Accepted by SDM 2024 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Many text classification methods usually introduce external information
(e.g., label descriptions and knowledge bases) to improve the classification
performance. Compared to external information, some internal information
generated by the model itself during training, like text embeddings and
predicted label probability distributions, are exploited poorly when predicting
the outcomes of some texts. In this paper, we focus on leveraging this internal
information, proposing a dual $k$ nearest neighbor (D$k$NN) framework with two
$k$NN modules, to retrieve several neighbors from the training set and augment
the distribution of labels. The $k$NN module, however, is easily confused and may
produce incorrect predictions when retrieving nearest neighbors from noisy
datasets (datasets with labeling errors) or similar datasets (datasets with
similar labels). To address this issue, we also introduce a label distribution
learning module that can learn label similarity, and generate a better label
distribution to help models distinguish texts more effectively. This module
eases model overfitting and improves final classification performance, hence
enhancing the quality of the retrieved neighbors by $k$NN modules during
inference. Extensive experiments on the benchmark datasets verify the
effectiveness of our method.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 15:15:26 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Yuan",
"Bo",
""
],
[
"Chen",
"Yulin",
""
],
[
"Tan",
"Zhen",
""
],
[
"Jinyan",
"Wang",
""
],
[
"Liu",
"Huan",
""
],
[
"Zhang",
"Yin",
""
]
]
| TITLE: Label Distribution Learning-Enhanced Dual-KNN for Text Classification
ABSTRACT: Many text classification methods usually introduce external information
(e.g., label descriptions and knowledge bases) to improve the classification
performance. Compared to external information, some internal information
generated by the model itself during training, like text embeddings and
predicted label probability distributions, are exploited poorly when predicting
the outcomes of some texts. In this paper, we focus on leveraging this internal
information, proposing a dual $k$ nearest neighbor (D$k$NN) framework with two
$k$NN modules, to retrieve several neighbors from the training set and augment
the distribution of labels. The $k$NN module, however, is easily confused and may
produce incorrect predictions when retrieving nearest neighbors from noisy
datasets (datasets with labeling errors) or similar datasets (datasets with
similar labels). To address this issue, we also introduce a label distribution
learning module that can learn label similarity, and generate a better label
distribution to help models distinguish texts more effectively. This module
eases model overfitting and improves final classification performance, hence
enhancing the quality of the retrieved neighbors by $k$NN modules during
inference. Extensive experiments on the benchmark datasets verify the
effectiveness of our method.
| no_new_dataset | 0.949623 |
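A generic sketch of kNN label-distribution augmentation at inference time, which is the retrieval role the abstract assigns to the $k$NN modules; the cosine retrieval and the mixing weight `lam` are assumptions, and the learned label-distribution module is not shown.

```python
import torch
import torch.nn.functional as F

def knn_augment(logits, query_emb, bank_emb, bank_labels,
                n_classes, k=8, lam=0.5):
    """Blend model probabilities with the label histogram of the k nearest
    training embeddings. bank_emb/bank_labels come from the training set."""
    q = F.normalize(query_emb, dim=-1)           # (B, D)
    b = F.normalize(bank_emb, dim=-1)            # (N, D)
    idx = (q @ b.t()).topk(k, dim=-1).indices    # (B, k) neighbor indices
    knn_dist = F.one_hot(bank_labels[idx], n_classes).float().mean(dim=1)
    return lam * logits.softmax(-1) + (1 - lam) * knn_dist
```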
2503.04874 | Sebastian Vallejo Vera | Joan C. Timoneda and Sebasti\'an Vallejo Vera | Memory Is All You Need: Testing How Model Memory Affects LLM Performance
in Annotation Tasks | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Generative Large Language Models (LLMs) have shown promising results in text
annotation using zero-shot and few-shot learning. Yet these approaches do not
allow the model to retain information from previous annotations, making each
response independent of the preceding ones. This raises the question of
whether model memory -- the LLM having knowledge about its own previous
annotations in the same task -- affects performance. In this article, using
OpenAI's GPT-4o and Meta's Llama 3.1 on two political science datasets, we
demonstrate that allowing the model to retain information about its own
previous classifications yields significant performance improvements: between 5
and 25\% when compared to zero-shot and few-shot learning. Moreover, memory
reinforcement, a novel approach we propose that combines model memory and
reinforcement learning, yields additional performance gains in three out of our
four tests. These findings have important implications for applied researchers
looking to improve performance and efficiency in LLM annotation tasks.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 16:39:18 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Timoneda",
"Joan C.",
""
],
[
"Vera",
"Sebastián Vallejo",
""
]
]
| TITLE: Memory Is All You Need: Testing How Model Memory Affects LLM Performance
in Annotation Tasks
ABSTRACT: Generative Large Language Models (LLMs) have shown promising results in text
annotation using zero-shot and few-shot learning. Yet these approaches do not
allow the model to retain information from previous annotations, making each
response independent of the preceding ones. This raises the question of
whether model memory -- the LLM having knowledge about its own previous
annotations in the same task -- affects performance. In this article, using
OpenAI's GPT-4o and Meta's Llama 3.1 on two political science datasets, we
demonstrate that allowing the model to retain information about its own
previous classifications yields significant performance improvements: between 5
and 25\% when compared to zero-shot and few-shot learning. Moreover, memory
reinforcement, a novel approach we propose that combines model memory and
reinforcement learning, yields additional performance gains in three out of our
four tests. These findings have important implications for applied researchers
looking to improve performance and efficiency in LLM annotation tasks.
| no_new_dataset | 0.953665 |
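A hedged sketch of annotation with model memory: carry the model's own recent (text, label) decisions in the prompt so later annotations can condition on earlier ones. `llm` is a stand-in callable, and the template and window size are assumptions; the paper's memory-reinforcement variant is not shown.

```python
def annotate_with_memory(llm, texts, instruction, window=20):
    """Sequentially label texts, feeding back the model's own prior labels."""
    memory, labels = [], []
    for t in texts:
        context = "\n".join(f"Text: {x} -> Label: {y}"
                            for x, y in memory[-window:])
        label = llm(f"{instruction}\n{context}\nText: {t} -> Label:").strip()
        memory.append((t, label))
        labels.append(label)
    return labels
```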
2503.04930 | Abeed Sarker | Yao Ge, Yuting Guo, Sudeshna Das, Swati Rajwal, Selen Bozkurt, Abeed
Sarker | HILGEN: Hierarchically-Informed Data Generation for Biomedical NER Using
Knowledgebases and Large Language Models | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present HILGEN, a Hierarchically-Informed Data Generation approach that
combines domain knowledge from the Unified Medical Language System (UMLS) with
synthetic data generated by large language models (LLMs), specifically GPT-3.5.
Our approach leverages UMLS's hierarchical structure to expand training data
with related concepts, while incorporating contextual information from LLMs
through targeted prompts aimed at automatically generating synthetic examples
for sparsely occurring named entities. The performance of the HILGEN approach
was evaluated across four biomedical NER datasets (MIMIC III, BC5CDR,
NCBI-Disease, and Med-Mentions) using BERT-Large and DANN (Data Augmentation
with Nearest Neighbor Classifier) models, applying various data generation
strategies, including UMLS, GPT-3.5, and their best ensemble. For the
BERT-Large model, incorporating UMLS led to an average F1 score improvement of
40.36%, while using GPT-3.5 resulted in a comparable average increase of
40.52%. The Best-Ensemble approach using BERT-Large achieved the highest
improvement, with an average increase of 42.29%. The DANN model's F1 score improved
by 22.74% on average using the UMLS-only approach. The GPT-3.5-based method
resulted in a 21.53% increase, and the Best-Ensemble DANN model showed a more
notable improvement, with an average increase of 25.03%. Our proposed HILGEN
approach improves NER performance in few-shot settings without requiring
additional manually annotated data. Our experiments demonstrate that an
effective strategy for optimizing biomedical NER is to combine biomedical
knowledge curated in the past, such as the UMLS, and generative LLMs to create
synthetic training instances. Our future research will focus on exploring
additional innovative synthetic data generation strategies for further
improving NER performance.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 20:02:19 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Ge",
"Yao",
""
],
[
"Guo",
"Yuting",
""
],
[
"Das",
"Sudeshna",
""
],
[
"Rajwal",
"Swati",
""
],
[
"Bozkurt",
"Selen",
""
],
[
"Sarker",
"Abeed",
""
]
]
| TITLE: HILGEN: Hierarchically-Informed Data Generation for Biomedical NER Using
Knowledgebases and Large Language Models
ABSTRACT: We present HILGEN, a Hierarchically-Informed Data Generation approach that
combines domain knowledge from the Unified Medical Language System (UMLS) with
synthetic data generated by large language models (LLMs), specifically GPT-3.5.
Our approach leverages UMLS's hierarchical structure to expand training data
with related concepts, while incorporating contextual information from LLMs
through targeted prompts aimed at automatically generating synthetic examples
for sparsely occurring named entities. The performance of the HILGEN approach
was evaluated across four biomedical NER datasets (MIMIC III, BC5CDR,
NCBI-Disease, and Med-Mentions) using BERT-Large and DANN (Data Augmentation
with Nearest Neighbor Classifier) models, applying various data generation
strategies, including UMLS, GPT-3.5, and their best ensemble. For the
BERT-Large model, incorporating UMLS led to an average F1 score improvement of
40.36%, while using GPT-3.5 resulted in a comparable average increase of
40.52%. The Best-Ensemble approach using BERT-Large achieved the highest
improvement, with an average increase of 42.29%. The DANN model's F1 score improved
by 22.74% on average using the UMLS-only approach. The GPT-3.5-based method
resulted in a 21.53% increase, and the Best-Ensemble DANN model showed a more
notable improvement, with an average increase of 25.03%. Our proposed HILGEN
approach improves NER performance in few-shot settings without requiring
additional manually annotated data. Our experiments demonstrate that an
effective strategy for optimizing biomedical NER is to combine biomedical
knowledge curated in the past, such as the UMLS, and generative LLMs to create
synthetic training instances. Our future research will focus on exploring
additional innovative synthetic data generation strategies for further
improving NER performance.
| no_new_dataset | 0.95418 |
2503.04940 | Mohammad Mahdi Samiei | Mohammad Mahdi Samiei Paqaleh, Mahdieh Soleymani Baghshah | VQEL: Enabling Self-Developed Symbolic Language in Agents through Vector
Quantization in Emergent Language Games | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In the field of emergent language, efforts have traditionally focused on
developing communication protocols through interactions between agents in
referential games. However, the aspect of internal language learning, where
language serves not only as a communicative tool with others but also as a
means for individual thinking, self-reflection, and problem-solving, remains
underexplored. Developing a language through self-play, without another agent's
involvement, poses a unique challenge. It requires an agent to craft symbolic
representations and train them using direct gradient methods. The challenge
here is that if an agent attempts to learn symbolic representations through
self-play using conventional modeling and techniques such as REINFORCE, the
solution will offer no advantage over previous multi-agent approaches. We
introduce VQEL, a novel method that incorporates Vector Quantization into the
agents' architecture, enabling them to autonomously invent and develop discrete
symbolic representations in a self-play referential game. Following the
self-play phase, agents can enhance their language through reinforcement
learning and interactions with other agents in the mutual-play phase. Our
experiments across various datasets demonstrate that VQEL not only outperforms
the traditional REINFORCE method but also benefits from improved control and
reduced susceptibility to collapse, thanks to the incorporation of vector
quantization.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 20:15:51 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Paqaleh",
"Mohammad Mahdi Samiei",
""
],
[
"Baghshah",
"Mahdieh Soleymani",
""
]
]
| TITLE: VQEL: Enabling Self-Developed Symbolic Language in Agents through Vector
Quantization in Emergent Language Games
ABSTRACT: In the field of emergent language, efforts have traditionally focused on
developing communication protocols through interactions between agents in
referential games. However, the aspect of internal language learning, where
language serves not only as a communicative tool with others but also as a
means for individual thinking, self-reflection, and problem-solving, remains
underexplored. Developing a language through self-play, without another agent's
involvement, poses a unique challenge. It requires an agent to craft symbolic
representations and train them using direct gradient methods. The challenge
here is that if an agent attempts to learn symbolic representations through
self-play using conventional modeling and techniques such as REINFORCE, the
solution will offer no advantage over previous multi-agent approaches. We
introduce VQEL, a novel method that incorporates Vector Quantization into the
agents' architecture, enabling them to autonomously invent and develop discrete
symbolic representations in a self-play referential game. Following the
self-play phase, agents can enhance their language through reinforcement
learning and interactions with other agents in the mutual-play phase. Our
experiments across various datasets demonstrate that VQEL not only outperforms
the traditional REINFORCE method but also benefits from improved control and
reduced susceptibility to collapse, thanks to the incorporation of vector
quantization.
| no_new_dataset | 0.941439 |
2503.04944 | Anja Sheppard | Anja Sheppard and Katherine A. Skinner | MarsLGPR: Mars Rover Localization with Ground Penetrating Radar | null | null | null | null | cs.RO cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose the use of Ground Penetrating Radar (GPR) for rover
localization on Mars. Precise pose estimation is an important task for mobile
robots exploring planetary surfaces, as they operate in GPS-denied
environments. Although visual odometry provides accurate localization, it is
computationally expensive and can fail in dim or high-contrast lighting. Wheel
encoders can also provide odometry estimation, but are prone to slipping on the
sandy terrain encountered on Mars. Although traditionally a scientific
surveying sensor, GPR has been used on Earth for terrain classification and
localization through subsurface feature matching. The Perseverance rover and
the upcoming ExoMars rover have GPR sensors already equipped to aid in the
search of water and mineral resources. We propose to leverage GPR to aid in
Mars rover localization. Specifically, we develop a novel GPR-based deep
learning model that predicts 1D relative pose translation. We fuse our GPR pose
prediction method with inertial and wheel encoder data in a filtering framework
to output rover localization. We perform experiments in a Mars analog
environment and demonstrate that our GPR-based displacement predictions both
outperform wheel encoders and improve multi-modal filtering estimates in
high-slip environments. Lastly, we present the first dataset aimed at GPR-based
localization in Mars analog environments, which will be made publicly available
upon publication.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 20:19:21 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Sheppard",
"Anja",
""
],
[
"Skinner",
"Katherine A.",
""
]
]
| TITLE: MarsLGPR: Mars Rover Localization with Ground Penetrating Radar
ABSTRACT: In this work, we propose the use of Ground Penetrating Radar (GPR) for rover
localization on Mars. Precise pose estimation is an important task for mobile
robots exploring planetary surfaces, as they operate in GPS-denied
environments. Although visual odometry provides accurate localization, it is
computationally expensive and can fail in dim or high-contrast lighting. Wheel
encoders can also provide odometry estimation, but are prone to slipping on the
sandy terrain encountered on Mars. Although traditionally a scientific
surveying sensor, GPR has been used on Earth for terrain classification and
localization through subsurface feature matching. The Perseverance rover and
the upcoming ExoMars rover have GPR sensors already equipped to aid in the
search of water and mineral resources. We propose to leverage GPR to aid in
Mars rover localization. Specifically, we develop a novel GPR-based deep
learning model that predicts 1D relative pose translation. We fuse our GPR pose
prediction method with inertial and wheel encoder data in a filtering framework
to output rover localization. We perform experiments in a Mars analog
environment and demonstrate that our GPR-based displacement predictions both
outperform wheel encoders and improve multi-modal filtering estimates in
high-slip environments. Lastly, we present the first dataset aimed at GPR-based
localization in Mars analog environments, which will be made publicly available
upon publication.
| new_dataset | 0.964355 |
2503.04945 | Dongwon Lee | Jooyoung Lee, Xiaochen Zhu, Georgi Karadzhov, Tom Stafford, Andreas
Vlachos, Dongwon Lee | Collaborative Evaluation of Deepfake Text with Deliberation-Enhancing
Dialogue Systems | 15 | null | null | null | cs.CL cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | The proliferation of generative models has presented significant challenges
in distinguishing authentic human-authored content from deepfake content.
Collaborative human efforts, augmented by AI tools, present a promising
solution. In this study, we explore the potential of DeepFakeDeLiBot, a
deliberation-enhancing chatbot, to support groups in detecting deepfake text.
Our findings reveal that group-based problem-solving significantly improves the
accuracy of identifying machine-generated paragraphs compared to individual
efforts. While engagement with DeepFakeDeLiBot does not yield substantial
performance gains overall, it enhances group dynamics by fostering greater
participant engagement, consensus building, and the frequency and diversity of
reasoning-based utterances. Additionally, participants with higher perceived
effectiveness of group collaboration exhibited performance benefits from
DeepFakeDeLiBot. These findings underscore the potential of deliberative
chatbots in fostering interactive and productive group dynamics while ensuring
accuracy in collaborative deepfake text detection. Dataset and source
code used in this study will be made publicly available upon acceptance of the
manuscript.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 20:19:38 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Lee",
"Jooyoung",
""
],
[
"Zhu",
"Xiaochen",
""
],
[
"Karadzhov",
"Georgi",
""
],
[
"Stafford",
"Tom",
""
],
[
"Vlachos",
"Andreas",
""
],
[
"Lee",
"Dongwon",
""
]
]
| TITLE: Collaborative Evaluation of Deepfake Text with Deliberation-Enhancing
Dialogue Systems
ABSTRACT: The proliferation of generative models has presented significant challenges
in distinguishing authentic human-authored content from deepfake content.
Collaborative human efforts, augmented by AI tools, present a promising
solution. In this study, we explore the potential of DeepFakeDeLiBot, a
deliberation-enhancing chatbot, to support groups in detecting deepfake text.
Our findings reveal that group-based problem-solving significantly improves the
accuracy of identifying machine-generated paragraphs compared to individual
efforts. While engagement with DeepFakeDeLiBot does not yield substantial
performance gains overall, it enhances group dynamics by fostering greater
participant engagement, consensus building, and the frequency and diversity of
reasoning-based utterances. Additionally, participants with higher perceived
effectiveness of group collaboration exhibited performance benefits from
DeepFakeDeLiBot. These findings underscore the potential of deliberative
chatbots in fostering interactive and productive group dynamics while ensuring
accuracy in collaborative deepfake text detection. Dataset and source
code used in this study will be made publicly available upon acceptance of the
manuscript.
| no_new_dataset | 0.936518 |
2503.04946 | Changchang Yin | Changchang Yin, Hong-You Chen, Wei-Lun Chao, Ping Zhang | Federated Inverse Probability Treatment Weighting for Individual
Treatment Effect Estimation | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Individual treatment effect (ITE) estimation aims to evaluate the causal
effects of treatment strategies on some important outcomes, which is a crucial
problem in healthcare. Most existing ITE estimation methods are designed for
centralized settings. However, in real-world clinical scenarios, the raw data
are usually not shareable among hospitals due to the potential privacy and
security risks, which renders these methods inapplicable. In this work, we study
the ITE estimation task in a federated setting, which allows us to harness the
decentralized data from multiple hospitals. Due to the unavoidable confounding
bias in the collected data, a model directly learned from it would be
inaccurate. One well-known solution is Inverse Probability Treatment Weighting
(IPTW), which uses the conditional probability of treatment given the
covariates to re-weight each training example. Applying IPTW in a federated
setting, however, is non-trivial. We found that even with a well-estimated
conditional probability, the local model training step using each hospital's
data alone would still suffer from confounding bias. To address this, we
propose FED-IPTW, a novel algorithm to extend IPTW into a federated setting
that enforces both global (over all the data) and local (within each hospital)
decorrelation between covariates and treatments. We validated our approach on
the task of comparing the treatment effects of mechanical ventilation on
improving survival probability for patients with breathing difficulties in the
intensive care unit (ICU). We conducted experiments on both synthetic and
real-world eICU datasets, and the results show that FED-IPTW outperforms
state-of-the-art methods on all metrics in factual prediction and ITE
estimation tasks, paving the way for personalized treatment strategy design in
mechanical ventilation usage.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 20:24:34 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Yin",
"Changchang",
""
],
[
"Chen",
"Hong-You",
""
],
[
"Chao",
"Wei-Lun",
""
],
[
"Zhang",
"Ping",
""
]
]
| TITLE: Federated Inverse Probability Treatment Weighting for Individual
Treatment Effect Estimation
ABSTRACT: Individual treatment effect (ITE) estimation aims to evaluate the causal
effects of treatment strategies on some important outcomes, which is a crucial
problem in healthcare. Most existing ITE estimation methods are designed for
centralized settings. However, in real-world clinical scenarios, the raw data
are usually not shareable among hospitals due to the potential privacy and
security risks, which renders these methods inapplicable. In this work, we study
the ITE estimation task in a federated setting, which allows us to harness the
decentralized data from multiple hospitals. Due to the unavoidable confounding
bias in the collected data, a model directly learned from it would be
inaccurate. One well-known solution is Inverse Probability Treatment Weighting
(IPTW), which uses the conditional probability of treatment given the
covariates to re-weight each training example. Applying IPTW in a federated
setting, however, is non-trivial. We found that even with a well-estimated
conditional probability, the local model training step using each hospital's
data alone would still suffer from confounding bias. To address this, we
propose FED-IPTW, a novel algorithm to extend IPTW into a federated setting
that enforces both global (over all the data) and local (within each hospital)
decorrelation between covariates and treatments. We validated our approach on
the task of comparing the treatment effects of mechanical ventilation on
improving survival probability for patients with breathing difficulties in the
intensive care unit (ICU). We conducted experiments on both synthetic and
real-world eICU datasets, and the results show that FED-IPTW outperforms
state-of-the-art methods on all metrics in factual prediction and ITE
estimation tasks, paving the way for personalized treatment strategy design in
mechanical ventilation usage.
| no_new_dataset | 0.952131 |
2503.04952 | Yihong Tang | Yihong Tang, Wei Ma | INTENT: Trajectory Prediction Framework with Intention-Guided
Contrastive Clustering | null | null | null | null | cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate trajectory prediction of road agents (e.g., pedestrians, vehicles)
is an essential prerequisite for various intelligent systems applications, such
as autonomous driving and robotic navigation. Recent research highlights the
importance of environmental contexts (e.g., maps) and the "multi-modality" of
trajectories, leading to increasingly complex model structures. However,
real-world deployments require lightweight models that can quickly migrate and
adapt to new environments. Additionally, the core motivations of road agents,
referred to as their intentions, deserve further exploration. In this study,
we advocate that understanding and reasoning about road agents' intentions plays a key
role in trajectory prediction tasks, and the main challenge is that the concept
of intention is fuzzy and abstract. To this end, we present INTENT, an
efficient intention-guided trajectory prediction model that relies solely on
information contained in the road agent's trajectory. Our model distinguishes
itself from existing models in several key aspects: (i) We explicitly model
road agents' intentions through contrastive clustering, accommodating the
fuzziness and abstraction of human intention in their trajectories. (ii) The
proposed INTENT is based solely on multi-layer perceptrons (MLPs), resulting in
reduced training and inference time, making it very efficient and more suitable
for real-world deployment. (iii) By leveraging estimated intentions and an
innovative algorithm for transforming trajectory observations, we obtain more
robust trajectory representations that lead to superior prediction accuracy.
Extensive experiments on real-world trajectory datasets for pedestrians and
autonomous vehicles demonstrate the effectiveness and efficiency of INTENT.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 20:31:11 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Tang",
"Yihong",
""
],
[
"Ma",
"Wei",
""
]
]
| TITLE: INTENT: Trajectory Prediction Framework with Intention-Guided
Contrastive Clustering
ABSTRACT: Accurate trajectory prediction of road agents (e.g., pedestrians, vehicles)
is an essential prerequisite for various intelligent systems applications, such
as autonomous driving and robotic navigation. Recent research highlights the
importance of environmental contexts (e.g., maps) and the "multi-modality" of
trajectories, leading to increasingly complex model structures. However,
real-world deployments require lightweight models that can quickly migrate and
adapt to new environments. Additionally, the core motivations of road agents,
referred to as their intentions, deserve further exploration. In this study,
we advocate that understanding and reasoning about road agents' intentions plays a key
role in trajectory prediction tasks, and the main challenge is that the concept
of intention is fuzzy and abstract. To this end, we present INTENT, an
efficient intention-guided trajectory prediction model that relies solely on
information contained in the road agent's trajectory. Our model distinguishes
itself from existing models in several key aspects: (i) We explicitly model
road agents' intentions through contrastive clustering, accommodating the
fuzziness and abstraction of human intention in their trajectories. (ii) The
proposed INTENT is based solely on multi-layer perceptrons (MLPs), resulting in
reduced training and inference time, making it very efficient and more suitable
for real-world deployment. (iii) By leveraging estimated intentions and an
innovative algorithm for transforming trajectory observations, we obtain more
robust trajectory representations that lead to superior prediction accuracy.
Extensive experiments on real-world trajectory datasets for pedestrians and
autonomous vehicles demonstrate the effectiveness and efficiency of INTENT.
| no_new_dataset | 0.949482 |
2503.04969 | Zhenghao Peng | Zhenghao Peng, Zhizheng Liu, Bolei Zhou | Data-Efficient Learning from Human Interventions for Mobile Robots | ICRA 2025. Webpage: https://metadriverse.github.io/pvp4real/ | null | null | null | cs.RO cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Mobile robots are essential in applications such as autonomous delivery and
hospitality services. Applying learning-based methods to address mobile robot
tasks has gained popularity due to its robustness and generalizability.
Traditional methods such as Imitation Learning (IL) and Reinforcement Learning
(RL) offer adaptability but require large datasets, carefully crafted reward
functions, and face sim-to-real gaps, making them challenging for efficient and
safe real-world deployment. We propose an online human-in-the-loop learning
method PVP4Real that combines IL and RL to address these issues. PVP4Real
enables efficient real-time policy learning from online human intervention and
demonstration, without reward or any pretraining, significantly improving data
efficiency and training safety. We validate our method by training two
different robots -- a legged quadruped, and a wheeled delivery robot -- in two
mobile robot tasks, one of which even uses raw RGBD images as observations. The
training finishes within 15 minutes. Our experiments show the promising future
of human-in-the-loop learning in addressing the data efficiency issue in
real-world robotic tasks. More information is available at:
https://metadriverse.github.io/pvp4real/
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 21:02:02 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Peng",
"Zhenghao",
""
],
[
"Liu",
"Zhizheng",
""
],
[
"Zhou",
"Bolei",
""
]
]
| TITLE: Data-Efficient Learning from Human Interventions for Mobile Robots
ABSTRACT: Mobile robots are essential in applications such as autonomous delivery and
hospitality services. Applying learning-based methods to address mobile robot
tasks has gained popularity due to its robustness and generalizability.
Traditional methods such as Imitation Learning (IL) and Reinforcement Learning
(RL) offer adaptability but require large datasets, carefully crafted reward
functions, and face sim-to-real gaps, making them challenging for efficient and
safe real-world deployment. We propose an online human-in-the-loop learning
method PVP4Real that combines IL and RL to address these issues. PVP4Real
enables efficient real-time policy learning from online human intervention and
demonstration, without reward or any pretraining, significantly improving data
efficiency and training safety. We validate our method by training two
different robots -- a legged quadruped, and a wheeled delivery robot -- in two
mobile robot tasks, one of which even uses raw RGBD images as observations. The
training finishes within 15 minutes. Our experiments show the promising future
of human-in-the-loop learning in addressing the data efficiency issue in
real-world robotic tasks. More information is available at:
https://metadriverse.github.io/pvp4real/
| no_new_dataset | 0.949809 |
2503.04973 | Fabio Petroni | Giulio Corallo, Orion Weller, Fabio Petroni, Paolo Papotti | Beyond RAG: Task-Aware KV Cache Compression for Comprehensive Knowledge
Reasoning | null | null | null | null | cs.CL cs.AI cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Incorporating external knowledge in large language models (LLMs) enhances
their utility across diverse applications, but existing methods have
trade-offs. Retrieval-Augmented Generation (RAG) fetches evidence via
similarity search, but key information may fall outside top ranked results.
Long-context models can process multiple documents but are computationally
expensive and limited by context window size. Inspired by students condensing
study material for open-book exams, we propose task-aware key-value (KV) cache
compression, which compresses external knowledge in a zero- or few-shot setup.
This enables LLMs to reason efficiently over a compacted representation of all
relevant information. Experiments show our approach outperforms both RAG and
task-agnostic compression methods. On LongBench v2, it improves accuracy by up
to 7 absolute points over RAG with a 30x compression rate, while reducing
inference latency from 0.43s to 0.16s. A synthetic dataset highlights that RAG
performs well when sparse evidence suffices, whereas task-aware compression is
superior for broad knowledge tasks.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 21:07:41 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Corallo",
"Giulio",
""
],
[
"Weller",
"Orion",
""
],
[
"Petroni",
"Fabio",
""
],
[
"Papotti",
"Paolo",
""
]
]
| TITLE: Beyond RAG: Task-Aware KV Cache Compression for Comprehensive Knowledge
Reasoning
ABSTRACT: Incorporating external knowledge in large language models (LLMs) enhances
their utility across diverse applications, but existing methods have
trade-offs. Retrieval-Augmented Generation (RAG) fetches evidence via
similarity search, but key information may fall outside top-ranked results.
Long-context models can process multiple documents but are computationally
expensive and limited by context window size. Inspired by students condensing
study material for open-book exams, we propose task-aware key-value (KV) cache
compression, which compresses external knowledge in a zero- or few-shot setup.
This enables LLMs to reason efficiently over a compacted representation of all
relevant information. Experiments show our approach outperforms both RAG and
task-agnostic compression methods. On LongBench v2, it improves accuracy by up
to 7 absolute points over RAG with a 30x compression rate, while reducing
inference latency from 0.43s to 0.16s. A synthetic dataset highlights that RAG
performs well when sparse evidence suffices, whereas task-aware compression is
superior for broad knowledge tasks.
| no_new_dataset | 0.926736 |
2503.04974 | Yutian Pang | Yutian Pang, Andrew Paul Kendall, Alex Porcayo, Mariah Barsotti,
Anahita Jain, John-Paul Clarke | From Voice to Safety: Language AI Powered Pilot-ATC Communication
Understanding for Airport Surface Movement Collision Risk Assessment | null | null | null | null | eess.AS cs.SD | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This work integrates language AI-based voice communication understanding with
collision risk assessment. The proposed framework consists of two major parts,
(a) Automatic Speech Recognition (ASR); (b) surface collision risk modeling.
ASR module generates information tables by processing voice communication
transcripts, which serve as references for producing potential taxi plans and
calculating the surface movement collision risk. For ASR, we collect and
annotate our own Named Entity Recognition (NER) dataset based on open-sourced
video recordings and safety investigation reports. Additionally, we refer to
FAA Order JO 7110.65W and FAA Order JO 7340.2N to get the list of heuristic
rules and phrase contractions of communication between the pilot and the Air
Traffic Controller (ATCo) used in daily aviation operations. Then, we propose
the novel ATC Rule-Enhanced NER method, which integrates the heuristic rules
into the model training and inference stages, resulting in a hybrid rule-based
NER model. We show the effectiveness of this hybrid approach by comparing
different setups with different token-level embedding models. For the risk
modeling, we adopt the node-link airport layout graph from NASA FACET and model
the aircraft taxi speed at each link as a log-normal distribution and derive
the total taxi time distribution. Then, we propose a spatiotemporal formulation
of the risk probability of two aircraft moving across potential collision nodes
during ground movement. We show the effectiveness of our approach by simulating
two case studies: (a) the Haneda airport runway collision that happened in
January 2024; (b) the KATL taxiway collision that happened in September 2024. We
show that, by understanding the pilot-ATC communication transcripts and
analyzing surface movement patterns, the proposed model improves airport safety
by providing timely risk assessment.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 21:08:07 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Pang",
"Yutian",
""
],
[
"Kendall",
"Andrew Paul",
""
],
[
"Porcayo",
"Alex",
""
],
[
"Barsotti",
"Mariah",
""
],
[
"Jain",
"Anahita",
""
],
[
"Clarke",
"John-Paul",
""
]
]
| TITLE: From Voice to Safety: Language AI Powered Pilot-ATC Communication
Understanding for Airport Surface Movement Collision Risk Assessment
ABSTRACT: This work integrates language AI-based voice communication understanding with
collision risk assessment. The proposed framework consists of two major parts,
(a) Automatic Speech Recognition (ASR); (b) surface collision risk modeling.
ASR module generates information tables by processing voice communication
transcripts, which serve as references for producing potential taxi plans and
calculating the surface movement collision risk. For ASR, we collect and
annotate our own Named Entity Recognition (NER) dataset based on open-sourced
video recordings and safety investigation reports. Additionally, we refer to
FAA Order JO 7110.65W and FAA Order JO 7340.2N to get the list of heuristic
rules and phrase contractions of communication between the pilot and the Air
Traffic Controller (ATCo) used in daily aviation operations. Then, we propose
the novel ATC Rule-Enhanced NER method, which integrates the heuristic rules
into the model training and inference stages, resulting in a hybrid rule-based
NER model. We show the effectiveness of this hybrid approach by comparing
different setups with different token-level embedding models. For the risk
modeling, we adopt the node-link airport layout graph from NASA FACET and model
the aircraft taxi speed at each link as a log-normal distribution and derive
the total taxi time distribution. Then, we propose a spatiotemporal formulation
of the risk probability of two aircraft moving across potential collision nodes
during ground movement. We show the effectiveness of our approach by simulating
two case studies: (a) the Haneda airport runway collision that happened in
January 2024; (b) the KATL taxiway collision that happened in September 2024. We
show that, by understanding the pilot-ATC communication transcripts and
analyzing surface movement patterns, the proposed model improves airport safety
by providing timely risk assessment.
| new_dataset | 0.887253 |
2503.04979 | Doron Serebro | Doron Serebro and Tammy Riklin-Raviv | HyDA: Hypernetworks for Test Time Domain Adaptation in Medical Imaging
Analysis | submitted to MICCAI 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Medical imaging datasets often vary due to differences in acquisition
protocols, patient demographics, and imaging devices. These variations in data
distribution, known as domain shift, present a significant challenge in
adapting imaging analysis models for practical healthcare applications.
Most current domain adaptation (DA) approaches aim either to align the
distributions between the source and target domains or to learn an invariant
feature space that generalizes well across all domains. However, both
strategies require access to a sufficient number of examples, though not
necessarily annotated, from the test domain during training. This limitation
hinders the widespread deployment of models in clinical settings, where target
domain data may only be accessible in real time.
In this work, we introduce HyDA, a novel hypernetwork framework that
leverages domain characteristics rather than suppressing them, enabling dynamic
adaptation at inference time. Specifically, HyDA learns implicit domain
representations and uses them to adjust model parameters on-the-fly,
effectively interpolating to unseen domains. We validate HyDA on two clinically
relevant applications - MRI brain age prediction and chest X-ray pathology
classification - demonstrating its ability to generalize across tasks and
modalities. Our code is available at TBD.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 21:17:40 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Serebro",
"Doron",
""
],
[
"Riklin-Raviv",
"Tammy",
""
]
]
| TITLE: HyDA: Hypernetworks for Test Time Domain Adaptation in Medical Imaging
Analysis
ABSTRACT: Medical imaging datasets often vary due to differences in acquisition
protocols, patient demographics, and imaging devices. These variations in data
distribution, known as domain shift, present a significant challenge in
adapting imaging analysis models for practical healthcare applications.
Most current domain adaptation (DA) approaches aim either to align the
distributions between the source and target domains or to learn an invariant
feature space that generalizes well across all domains. However, both
strategies require access to a sufficient number of examples, though not
necessarily annotated, from the test domain during training. This limitation
hinders the widespread deployment of models in clinical settings, where target
domain data may only be accessible in real time.
In this work, we introduce HyDA, a novel hypernetwork framework that
leverages domain characteristics rather than suppressing them, enabling dynamic
adaptation at inference time. Specifically, HyDA learns implicit domain
representations and uses them to adjust model parameters on-the-fly,
effectively interpolating to unseen domains. We validate HyDA on two clinically
relevant applications - MRI brain age prediction and chest X-ray pathology
classification - demonstrating its ability to generalize across tasks and
modalities. Our code is available at TBD.
| no_new_dataset | 0.948585 |
2503.04981 | Jifan Zhang | Jifan Zhang, Fangxin Wang, Philip S. Yu, Kaize Ding, Shixiang Zhu | Topology-Aware Conformal Prediction for Stream Networks | 16 pages, 6 figures | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | Stream networks, a unique class of spatiotemporal graphs, exhibit complex
directional flow constraints and evolving dependencies, making uncertainty
quantification a critical yet challenging task. Traditional conformal
prediction methods struggle in this setting due to the need for joint
predictions across multiple interdependent locations and the intricate
spatio-temporal dependencies inherent in stream networks. Existing approaches
either neglect dependencies, leading to overly conservative predictions, or
rely solely on data-driven estimations, failing to capture the rich topological
structure of the network. To address these challenges, we propose
Spatio-Temporal Adaptive Conformal Inference (\texttt{STACI}), a novel
framework that integrates network topology and temporal dynamics into the
conformal prediction framework. \texttt{STACI} introduces a topology-aware
nonconformity score that respects directional flow constraints and dynamically
adjusts prediction sets to account for temporal distributional shifts. We
provide theoretical guarantees on the validity of our approach and demonstrate
its superior performance on both synthetic and real-world datasets. Our results
show that \texttt{STACI} effectively balances prediction efficiency and
coverage, outperforming existing conformal prediction methods for stream
networks.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 21:21:15 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Zhang",
"Jifan",
""
],
[
"Wang",
"Fangxin",
""
],
[
"Yu",
"Philip S.",
""
],
[
"Ding",
"Kaize",
""
],
[
"Zhu",
"Shixiang",
""
]
]
| TITLE: Topology-Aware Conformal Prediction for Stream Networks
ABSTRACT: Stream networks, a unique class of spatiotemporal graphs, exhibit complex
directional flow constraints and evolving dependencies, making uncertainty
quantification a critical yet challenging task. Traditional conformal
prediction methods struggle in this setting due to the need for joint
predictions across multiple interdependent locations and the intricate
spatio-temporal dependencies inherent in stream networks. Existing approaches
either neglect dependencies, leading to overly conservative predictions, or
rely solely on data-driven estimations, failing to capture the rich topological
structure of the network. To address these challenges, we propose
Spatio-Temporal Adaptive Conformal Inference (\texttt{STACI}), a novel
framework that integrates network topology and temporal dynamics into the
conformal prediction framework. \texttt{STACI} introduces a topology-aware
nonconformity score that respects directional flow constraints and dynamically
adjusts prediction sets to account for temporal distributional shifts. We
provide theoretical guarantees on the validity of our approach and demonstrate
its superior performance on both synthetic and real-world datasets. Our results
show that \texttt{STACI} effectively balances prediction efficiency and
coverage, outperforming existing conformal prediction methods for stream
networks.
| no_new_dataset | 0.947284 |
2503.04982 | Sungduk Yu | Souvik Kundu, Anahita Bhiwandiwalla, Sungduk Yu, Phillip Howard, Tiep
Le, Sharath Nittur Sridhar, David Cobbley, Hao Kang, Vasudev Lal | LVLM-Compress-Bench: Benchmarking the Broader Impact of Large
Vision-Language Model Compression | This work has been accepted to NAACL 2025 | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite recent efforts in understanding the compression impact on large
language models (LLMs) in terms of their downstream task performance and
trustworthiness on relatively simpler uni-modal benchmarks (for example,
question answering, common sense reasoning), their detailed study on
multi-modal Large Vision-Language Models (LVLMs) is yet to be unveiled. Towards
mitigating this gap, we present LVLM-Compress-Bench, a framework to first
thoroughly study the broad impact of compression on the generative performance
of LVLMs with multi-modal input-driven tasks. Specifically, we consider two
major classes of compression for autoregressive models, namely KV cache and
weight compression, for the dynamically growing intermediate cache and static
weights, respectively.
We use four LVLM variants of the popular LLaVA framework to present our
analysis via integrating various state-of-the-art KV and weight compression
methods including uniform, outlier-reduced, and group quantization for the KV
cache and weights. With this framework, we demonstrate results on ten different
multi-modal datasets covering different capabilities, including recognition,
knowledge, language generation, spatial awareness, visual reasoning,
hallucination and visual illusion identification, toxicity, stereotypes and
bias. Specifically, our framework demonstrates the compression impact on both
general and ethically critical metrics leveraging a combination of real world
and synthetic datasets to encompass diverse societal intersectional attributes.
Extensive experimental evaluations yield diverse and intriguing observations on
the behavior of LVLMs at different quantization budgets of KV and weights, in
both maintaining and losing performance as compared to the baseline model with
FP16 data format.
Code will be open-sourced at
https://github.com/opengear-project/LVLM-compress-bench.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 21:21:18 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Kundu",
"Souvik",
""
],
[
"Bhiwandiwalla",
"Anahita",
""
],
[
"Yu",
"Sungduk",
""
],
[
"Howard",
"Phillip",
""
],
[
"Le",
"Tiep",
""
],
[
"Sridhar",
"Sharath Nittur",
""
],
[
"Cobbley",
"David",
""
],
[
"Kang",
"Hao",
""
],
[
"Lal",
"Vasudev",
""
]
]
| TITLE: LVLM-Compress-Bench: Benchmarking the Broader Impact of Large
Vision-Language Model Compression
ABSTRACT: Despite recent efforts in understanding the compression impact on large
language models (LLMs) in terms of their downstream task performance and
trustworthiness on relatively simpler uni-modal benchmarks (for example,
question answering, common sense reasoning), their detailed study on
multi-modal Large Vision-Language Models (LVLMs) is yet to be unveiled. Towards
mitigating this gap, we present LVLM-Compress-Bench, a framework to first
thoroughly study the broad impact of compression on the generative performance
of LVLMs with multi-modal input-driven tasks. Specifically, we consider two
major classes of compression for autoregressive models, namely KV cache and
weight compression, for the dynamically growing intermediate cache and static
weights, respectively.
We use four LVLM variants of the popular LLaVA framework to present our
analysis via integrating various state-of-the-art KV and weight compression
methods including uniform, outlier-reduced, and group quantization for the KV
cache and weights. With this framework, we demonstrate results on ten different
multi-modal datasets covering different capabilities, including recognition,
knowledge, language generation, spatial awareness, visual reasoning,
hallucination and visual illusion identification, toxicity, stereotypes and
bias. Specifically, our framework demonstrates the compression impact on both
general and ethically critical metrics leveraging a combination of real world
and synthetic datasets to encompass diverse societal intersectional attributes.
Extensive experimental evaluations yield diverse and intriguing observations on
the behavior of LVLMs at different quantization budgets of KV and weights, in
both maintaining and losing performance as compared to the baseline model with
FP16 data format.
Code will be open-sourced at
https://github.com/opengear-project/LVLM-compress-bench.
| no_new_dataset | 0.946151 |
2503.04983 | Ivan Jarsky | Boris Malashenko, Ivan Jarsky, Valeria Efimova | Leveraging Large Language Models For Scalable Vector Graphics
Processing: A Review | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In recent years, rapid advances in computer vision have significantly
improved the processing and generation of raster images. However, vector
graphics, which are essential in digital design due to their scalability and
ease of editing, have been relatively understudied. Traditional vectorization
techniques, which are often used in vector generation, suffer from long
processing times and excessive output complexity, limiting their usability in
practical applications. The advent of large language models (LLMs) has opened
new possibilities for the generation, editing, and analysis of vector graphics,
particularly in the SVG format, which is inherently text-based and well-suited
for integration with LLMs.
This paper provides a systematic review of existing LLM-based approaches for
SVG processing, categorizing them into three main tasks: generation, editing,
and understanding. We review notable models such as IconShop, StrokeNUWA, and
StarVector, highlighting their strengths and limitations. Furthermore, we
analyze benchmark datasets designed for assessing SVG-related tasks, including
SVGEditBench, VGBench, and SGP-Bench, and conduct a series of experiments to
evaluate various LLMs in these domains. Our results demonstrate that for vector
graphics, reasoning-enhanced models outperform standard LLMs, particularly in
generation and understanding tasks. Furthermore, our findings underscore the
need to develop more diverse and richly annotated datasets to further improve
LLM capabilities in vector graphics tasks.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 21:23:17 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Malashenko",
"Boris",
""
],
[
"Jarsky",
"Ivan",
""
],
[
"Efimova",
"Valeria",
""
]
]
| TITLE: Leveraging Large Language Models For Scalable Vector Graphics
Processing: A Review
ABSTRACT: In recent years, rapid advances in computer vision have significantly
improved the processing and generation of raster images. However, vector
graphics, which are essential in digital design due to their scalability and
ease of editing, have been relatively understudied. Traditional vectorization
techniques, which are often used in vector generation, suffer from long
processing times and excessive output complexity, limiting their usability in
practical applications. The advent of large language models (LLMs) has opened
new possibilities for the generation, editing, and analysis of vector graphics,
particularly in the SVG format, which is inherently text-based and well-suited
for integration with LLMs.
This paper provides a systematic review of existing LLM-based approaches for
SVG processing, categorizing them into three main tasks: generation, editing,
and understanding. We review notable models such as IconShop, StrokeNUWA, and
StarVector, highlighting their strengths and limitations. Furthermore, we
analyze benchmark datasets designed for assessing SVG-related tasks, including
SVGEditBench, VGBench, and SGP-Bench, and conduct a series of experiments to
evaluate various LLMs in these domains. Our results demonstrate that for vector
graphics, reasoning-enhanced models outperform standard LLMs, particularly in
generation and understanding tasks. Furthermore, our findings underscore the
need to develop more diverse and richly annotated datasets to further improve
LLM capabilities in vector graphics tasks.
| no_new_dataset | 0.9434 |
2503.04989 | Tomaso Erseghe | Ali Aghababaei, Jan Nikadon, Magdalena Formanowicz, Maria Laura
Bettinsoli, Carmen Cervone, Caterina Suitner, Tomaso Erseghe | Application of integrated gradients explainability to sociopsychological
semantic markers | Submitted to IEEE Trans. on Affective Computing | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Classification of textual data in terms of sentiment, or more nuanced
sociopsychological markers (e.g., agency), is now a popular approach commonly
applied at the sentence level. In this paper, we exploit the integrated
gradient (IG) method to capture the classification output at the word level,
revealing which words actually contribute to the classification process. This
approach improves explainability and provides in-depth insights into the text.
We focus on sociopsychological markers beyond sentiment and investigate how to
effectively train IG in agency, one of the very few markers for which a
verified deep learning classifier, BERTAgent, is currently available.
Performance and system parameters are carefully tested, alternatives to the IG
approach are evaluated, and the usefulness of the result is verified in a
relevant application scenario. The method is also applied in a scenario where
only a small labeled dataset is available, with the aim of exploiting IG to
identify the salient words that contribute to building the different classes
that relate to relevant sociopsychological markers. To achieve this, an
uncommon training procedure that encourages overfitting is employed to enhance
the distinctiveness of each class. The results are analyzed through the lens of
social psychology, offering valuable insights.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 21:35:24 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Aghababaei",
"Ali",
""
],
[
"Nikadon",
"Jan",
""
],
[
"Formanowicz",
"Magdalena",
""
],
[
"Bettinsoli",
"Maria Laura",
""
],
[
"Cervone",
"Carmen",
""
],
[
"Suitner",
"Caterina",
""
],
[
"Erseghe",
"Tomaso",
""
]
]
| TITLE: Application of integrated gradients explainability to sociopsychological
semantic markers
ABSTRACT: Classification of textual data in terms of sentiment, or more nuanced
sociopsychological markers (e.g., agency), is now a popular approach commonly
applied at the sentence level. In this paper, we exploit the integrated
gradient (IG) method to capture the classification output at the word level,
revealing which words actually contribute to the classification process. This
approach improves explainability and provides in-depth insights into the text.
We focus on sociopsychological markers beyond sentiment and investigate how to
effectively train IG in agency, one of the very few markers for which a
verified deep learning classifier, BERTAgent, is currently available.
Performance and system parameters are carefully tested, alternatives to the IG
approach are evaluated, and the usefulness of the result is verified in a
relevant application scenario. The method is also applied in a scenario where
only a small labeled dataset is available, with the aim of exploiting IG to
identify the salient words that contribute to building the different classes
that relate to relevant sociopsychological markers. To achieve this, an
uncommon training procedure that encourages overfitting is employed to enhance
the distinctiveness of each class. The results are analyzed through the lens of
social psychology, offering valuable insights.
| no_new_dataset | 0.951233 |
2503.04994 | Laura Zheng | Laura Zheng, Hamidreza Yaghoubi Araghi, Tony Wu, Sandeep Thalapanane,
Tianyi Zhou, and Ming C. Lin | Quantifying and Modeling Driving Styles in Trajectory Forecasting | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Trajectory forecasting has become a popular deep learning task due to its
relevance for scenario simulation for autonomous driving. Specifically,
trajectory forecasting predicts the trajectory of a short-horizon future for
specific human drivers in a particular traffic scenario. Robust and accurate
future predictions can enable autonomous driving planners to optimize for
low-risk and predictable outcomes for human drivers around them. Although some
work has been done to model driving style in planning and personalized
autonomous polices, a gap exists in explicitly modeling human driving styles
for trajectory forecasting of human behavior. Human driving style is most
certainly a correlating factor to decision making, especially in edge-case
scenarios where risk is nontrivial, as justified by the large amount of traffic
psychology literature on risky driving. So far, the current real-world datasets
for trajectory forecasting lack insight on the variety of represented driving
styles. While the datasets may represent real-world distributions of driving
styles, we posit that fringe driving style types may also be correlated with
edge-case safety scenarios. In this work, we conduct analyses on existing
real-world trajectory datasets for driving and dissect these works from the
lens of driving styles, which are often intangible and non-standardized.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 21:47:49 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Zheng",
"Laura",
""
],
[
"Araghi",
"Hamidreza Yaghoubi",
""
],
[
"Wu",
"Tony",
""
],
[
"Thalapanane",
"Sandeep",
""
],
[
"Zhou",
"Tianyi",
""
],
[
"Lin",
"Ming C.",
""
]
]
| TITLE: Quantifying and Modeling Driving Styles in Trajectory Forecasting
ABSTRACT: Trajectory forecasting has become a popular deep learning task due to its
relevance for scenario simulation for autonomous driving. Specifically,
trajectory forecasting predicts the trajectory of a short-horizon future for
specific human drivers in a particular traffic scenario. Robust and accurate
future predictions can enable autonomous driving planners to optimize for
low-risk and predictable outcomes for human drivers around them. Although some
work has been done to model driving style in planning and personalized
autonomous policies, a gap exists in explicitly modeling human driving styles
for trajectory forecasting of human behavior. Human driving style is most
certainly a correlating factor to decision making, especially in edge-case
scenarios where risk is nontrivial, as justified by the large amount of traffic
psychology literature on risky driving. So far, the current real-world datasets
for trajectory forecasting lack insight into the variety of represented driving
styles. While the datasets may represent real-world distributions of driving
styles, we posit that fringe driving style types may also be correlated with
edge-case safety scenarios. In this work, we conduct analyses on existing
real-world trajectory datasets for driving and dissect these works from the
lens of driving styles, which are often intangible and non-standardized.
| no_new_dataset | 0.938801 |
2503.05009 | Divakar Vashisth | Divakar Vashisth, Rohan Sharma, Tapan Mukerji and Mrinal K. Sen | Seismic inversion using hybrid quantum neural networks | null | null | null | null | quant-ph cs.LG physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantum computing leverages qubits, exploiting superposition and entanglement
to solve problems intractable for classical computers, offering significant
computational advantages. Quantum machine learning (QML), which integrates
quantum computing with machine learning, holds immense potential across various
fields but remains largely unexplored in geosciences. However, its progress is
hindered by the limitations of current NISQ hardware. To address these
challenges, hybrid quantum neural networks (HQNNs) have emerged, combining
quantum layers within classical neural networks to leverage the strengths of
both paradigms. To the best of our knowledge, this study presents the first
application of QML to subsurface imaging through the development of hybrid
quantum physics-informed neural networks (HQ-PINNs) for seismic inversion. We
apply the HQ-PINN framework to invert pre-stack and post-stack seismic
datasets, estimating P- and S-impedances. The proposed HQ-PINN architecture
follows an encoder-decoder structure, where the encoder (HQNN) processes
seismic data to estimate elastic parameters, while the decoder utilizes these
parameters to generate the corresponding seismic data based on geophysical
relationships. The HQ-PINN model is trained by minimizing the misfit between
the input and predicted seismic data generated by the decoder. We
systematically evaluate the effect of various quantum layer configurations,
differentiation methods, and quantum device simulators on inversion performance, and
demonstrate real-world applicability through the individual and simultaneous
inversion cases of the Sleipner dataset. The HQ-PINN framework consistently and
efficiently estimated accurate subsurface impedances across the synthetic and
field case studies, establishing the feasibility of leveraging QML for seismic
inversion, thereby paving the way for broader applications of quantum computing
in geosciences.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 22:21:45 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Vashisth",
"Divakar",
""
],
[
"Sharma",
"Rohan",
""
],
[
"Mukerji",
"Tapan",
""
],
[
"Sen",
"Mrinal K.",
""
]
]
| TITLE: Seismic inversion using hybrid quantum neural networks
ABSTRACT: Quantum computing leverages qubits, exploiting superposition and entanglement
to solve problems intractable for classical computers, offering significant
computational advantages. Quantum machine learning (QML), which integrates
quantum computing with machine learning, holds immense potential across various
fields but remains largely unexplored in geosciences. However, its progress is
hindered by the limitations of current NISQ hardware. To address these
challenges, hybrid quantum neural networks (HQNNs) have emerged, combining
quantum layers within classical neural networks to leverage the strengths of
both paradigms. To the best of our knowledge, this study presents the first
application of QML to subsurface imaging through the development of hybrid
quantum physics-informed neural networks (HQ-PINNs) for seismic inversion. We
apply the HQ-PINN framework to invert pre-stack and post-stack seismic
datasets, estimating P- and S-impedances. The proposed HQ-PINN architecture
follows an encoder-decoder structure, where the encoder (HQNN) processes
seismic data to estimate elastic parameters, while the decoder utilizes these
parameters to generate the corresponding seismic data based on geophysical
relationships. The HQ-PINN model is trained by minimizing the misfit between
the input and predicted seismic data generated by the decoder. We
systematically evaluate various quantum layer configurations, differentiation
methods, and quantum device simulators on the inversion performance, and
demonstrate real-world applicability through the individual and simultaneous
inversion cases of the Sleipner dataset. The HQ-PINN framework consistently and
efficiently estimated accurate subsurface impedances across the synthetic and
field case studies, establishing the feasibility of leveraging QML for seismic
inversion, thereby paving the way for broader applications of quantum computing
in geosciences.
| no_new_dataset | 0.947039 |
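The hybrid encoder described in the record above (classical layers wrapped around trainable quantum circuits) is straightforward to prototype. Below is a minimal sketch, assuming PennyLane with its PyTorch interface; the qubit count, circuit templates, and layer widths are illustrative choices, not the authors' HQ-PINN architecture.

```python
# Minimal sketch of a hybrid quantum-classical layer of the kind HQNNs use.
# Not the authors' HQ-PINN; circuit templates and sizes are illustrative.
import torch
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    # Encode classical features as rotation angles, then entangle.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (2, n_qubits)}  # two entangling layers
qlayer = qml.qnn.TorchLayer(circuit, weight_shapes)

# Classical pre/post-processing around the quantum layer (encoder-style).
model = torch.nn.Sequential(
    torch.nn.Linear(8, n_qubits),
    qlayer,
    torch.nn.Linear(n_qubits, 2),  # e.g. P- and S-impedance estimates
)
print(model(torch.rand(5, 8)).shape)  # torch.Size([5, 2])
```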
2503.05020 | Siyu Ma | Siyu Ma, Wenxin Du, Chang Yu, Ying Jiang, Zeshun Zong, Tianyi Xie,
Yunuo Chen, Yin Yang, Xuchen Han, Chenfanfu Jiang | GRIP: A General Robotic Incremental Potential Contact Simulation Dataset
for Unified Deformable-Rigid Coupled Grasping | We release GRIP to advance research in robotic manipulation,
soft-gripper control, and physics-driven simulation at:
https://bell0o.github.io/GRIP/ | null | null | null | cs.RO cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Grasping is fundamental to robotic manipulation, and recent advances in
large-scale grasping datasets have provided essential training data and
evaluation benchmarks, accelerating the development of learning-based methods
for robust object grasping. However, most existing datasets exclude deformable
bodies due to the lack of scalable, robust simulation pipelines, limiting the
development of generalizable models for compliant grippers and soft
manipulands. To address these challenges, we present GRIP, a General Robotic
Incremental Potential contact simulation dataset for universal grasping. GRIP
leverages an optimized Incremental Potential Contact (IPC)-based simulator for
multi-environment data generation, achieving up to 48x speedup while ensuring
efficient, intersection- and inversion-free simulations for compliant grippers
and deformable objects. Our fully automated pipeline generates and evaluates
diverse grasp interactions across 1,200 objects and 100,000 grasp poses,
incorporating both soft and rigid grippers. The GRIP dataset enables
applications such as neural grasp generation and stress field prediction.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 22:46:13 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Ma",
"Siyu",
""
],
[
"Du",
"Wenxin",
""
],
[
"Yu",
"Chang",
""
],
[
"Jiang",
"Ying",
""
],
[
"Zong",
"Zeshun",
""
],
[
"Xie",
"Tianyi",
""
],
[
"Chen",
"Yunuo",
""
],
[
"Yang",
"Yin",
""
],
[
"Han",
"Xuchen",
""
],
[
"Jiang",
"Chenfanfu",
""
]
]
| TITLE: GRIP: A General Robotic Incremental Potential Contact Simulation Dataset
for Unified Deformable-Rigid Coupled Grasping
ABSTRACT: Grasping is fundamental to robotic manipulation, and recent advances in
large-scale grasping datasets have provided essential training data and
evaluation benchmarks, accelerating the development of learning-based methods
for robust object grasping. However, most existing datasets exclude deformable
bodies due to the lack of scalable, robust simulation pipelines, limiting the
development of generalizable models for compliant grippers and soft
manipulands. To address these challenges, we present GRIP, a General Robotic
Incremental Potential contact simulation dataset for universal grasping. GRIP
leverages an optimized Incremental Potential Contact (IPC)-based simulator for
multi-environment data generation, achieving up to 48x speedup while ensuring
efficient, intersection- and inversion-free simulations for compliant grippers
and deformable objects. Our fully automated pipeline generates and evaluates
diverse grasp interactions across 1,200 objects and 100,000 grasp poses,
incorporating both soft and rigid grippers. The GRIP dataset enables
applications such as neural grasp generation and stress field prediction.
| new_dataset | 0.960063 |
2503.05037 | Mohsen Fayyaz | Mohsen Fayyaz, Ali Modarressi, Hinrich Schuetze, Nanyun Peng | Collapse of Dense Retrievers: Short, Early, and Literal Biases
Outranking Factual Evidence | null | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dense retrieval models are commonly used in Information Retrieval (IR)
applications, such as Retrieval-Augmented Generation (RAG). Since they often
serve as the first step in these systems, their robustness is critical to avoid
failures. In this work, by repurposing a relation extraction dataset (e.g.
Re-DocRED), we design controlled experiments to quantify the impact of
heuristic biases, such as favoring shorter documents, in retrievers like
Dragon+ and Contriever. Our findings reveal significant vulnerabilities:
retrievers often rely on superficial patterns like over-prioritizing document
beginnings, shorter documents, repeated entities, and literal matches.
Additionally, they tend to overlook whether the document contains the query's
answer, lacking deep semantic understanding. Notably, when multiple biases
combine, models exhibit catastrophic performance degradation, selecting the
answer-containing document in less than 3% of cases over a biased document
without the answer. Furthermore, we show that these biases have direct
consequences for downstream applications like RAG, where retrieval-preferred
documents can mislead LLMs, resulting in a 34% performance drop compared to not
providing any documents at all.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 23:23:13 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Fayyaz",
"Mohsen",
""
],
[
"Modarressi",
"Ali",
""
],
[
"Schuetze",
"Hinrich",
""
],
[
"Peng",
"Nanyun",
""
]
]
| TITLE: Collapse of Dense Retrievers: Short, Early, and Literal Biases
Outranking Factual Evidence
ABSTRACT: Dense retrieval models are commonly used in Information Retrieval (IR)
applications, such as Retrieval-Augmented Generation (RAG). Since they often
serve as the first step in these systems, their robustness is critical to avoid
failures. In this work, by repurposing a relation extraction dataset (e.g.
Re-DocRED), we design controlled experiments to quantify the impact of
heuristic biases, such as favoring shorter documents, in retrievers like
Dragon+ and Contriever. Our findings reveal significant vulnerabilities:
retrievers often rely on superficial patterns like over-prioritizing document
beginnings, shorter documents, repeated entities, and literal matches.
Additionally, they tend to overlook whether the document contains the query's
answer, lacking deep semantic understanding. Notably, when multiple biases
combine, models exhibit catastrophic performance degradation, selecting the
answer-containing document in less than 3% of cases over a biased document
without the answer. Furthermore, we show that these biases have direct
consequences for downstream applications like RAG, where retrieval-preferred
documents can mislead LLMs, resulting in a 34% performance drop compared to not
providing any documents at all.
| no_new_dataset | 0.943034 |
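The bias probes in the record above amount to comparing a dual encoder's similarity scores for an answer-bearing passage against a short, literally matching distractor. A toy version of that comparison follows, with a stand-in sentence-transformer rather than the Dragon+ or Contriever checkpoints the paper studies:

```python
# Toy probe in the spirit of the paper's bias tests: does a dense retriever
# prefer a short, literally matching distractor over a longer passage that
# actually contains the answer? Model choice is a stand-in, not the paper's.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

query = "Where was Marie Curie born?"
answer_doc = ("Marie Curie, a pioneering physicist and chemist famous for her "
              "research on radioactivity, was born in Warsaw, Poland, in 1867.")
biased_doc = "Marie Curie born? Marie Curie born?"  # short, literal, answer-free

q, a, b = model.encode([query, answer_doc, biased_doc])
print("answer-doc score :", float(util.cos_sim(q, a)))
print("biased-doc score :", float(util.cos_sim(q, b)))
```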
2503.05047 | Grace Proebsting | Grace Proebsting and Adam Poliak | Biases in Large Language Model-Elicited Text: A Case Study in Natural
Language Inference | arXiv admin note: substantial text overlap with arXiv:2410.08996 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We test whether NLP datasets created with Large Language Models (LLMs)
contain annotation artifacts and social biases like NLP datasets elicited from
crowd-source workers. We recreate a portion of the Stanford Natural Language
Inference corpus using GPT-4, Llama-2 70b for Chat, and Mistral 7b Instruct. We
train hypothesis-only classifiers to determine whether LLM-elicited NLI
datasets contain annotation artifacts. Next, we use pointwise mutual
information to identify the words in each dataset that are associated with
gender, race, and age-related terms. On our LLM-generated NLI datasets,
fine-tuned BERT hypothesis-only classifiers achieve between 86-96% accuracy.
Our analyses further characterize the annotation artifacts and stereotypical
biases in LLM-generated datasets.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 23:49:30 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Proebsting",
"Grace",
""
],
[
"Poliak",
"Adam",
""
]
]
| TITLE: Biases in Large Language Model-Elicited Text: A Case Study in Natural
Language Inference
ABSTRACT: We test whether NLP datasets created with Large Language Models (LLMs)
contain annotation artifacts and social biases like NLP datasets elicited from
crowd-source workers. We recreate a portion of the Stanford Natural Language
Inference corpus using GPT-4, Llama-2 70b for Chat, and Mistral 7b Instruct. We
train hypothesis-only classifiers to determine whether LLM-elicited NLI
datasets contain annotation artifacts. Next, we use pointwise mutual
information to identify the words in each dataset that are associated with
gender, race, and age-related terms. On our LLM-generated NLI datasets,
fine-tuned BERT hypothesis-only classifiers achieve between 86-96% accuracy.
Our analyses further characterize the annotation artifacts and stereotypical
biases in LLM-generated datasets.
| no_new_dataset | 0.947527 |
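The pointwise mutual information probe used above is a small computation: for a target demographic term, compare each word's co-occurrence rate with what independence would predict. A self-contained toy example with document-level counts (the corpus is invented, not the paper's data):

```python
# Sketch of the pointwise-mutual-information probe used to surface words
# associated with demographic terms in a corpus; the documents are toy values.
import math
from collections import Counter

docs = [
    "she is a nurse", "she is a doctor", "he is a doctor", "he is an engineer",
]
target = "she"
word_counts = Counter(w for d in docs for w in set(d.split()))
joint = Counter(w for d in docs if target in d.split() for w in set(d.split()))
n = len(docs)
p_t = word_counts[target] / n

for w, c in joint.items():
    if w == target:
        continue
    # PMI(target, w) = log2( p(target, w) / (p(target) * p(w)) )
    pmi = math.log2((c / n) / (p_t * (word_counts[w] / n)))
    print(f"PMI({target!r}, {w!r}) = {pmi:.2f}")
```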
2503.05049 | Preetam Prabhu Srikar Dammu | Preetam Prabhu Srikar Dammu, Himanshu Naidu, Chirag Shah | Dynamic-KGQA: A Scalable Framework for Generating Adaptive Question
Answering Datasets | null | null | null | null | cs.CL cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As question answering (QA) systems advance alongside the rapid evolution of
foundation models, the need for robust, adaptable, and large-scale evaluation
benchmarks becomes increasingly critical. Traditional QA benchmarks are often
static and publicly available, making them susceptible to data contamination
and memorization by large language models (LLMs). Consequently, static
benchmarks may overestimate model generalization and hinder a reliable
assessment of real-world performance. In this work, we introduce Dynamic-KGQA,
a scalable framework for generating adaptive QA datasets from knowledge graphs
(KGs), designed to mitigate memorization risks while maintaining statistical
consistency across iterations. Unlike fixed benchmarks, Dynamic-KGQA generates
a new dataset variant on every run while preserving the underlying
distribution, enabling fair and reproducible evaluations. Furthermore, our
framework provides fine-grained control over dataset characteristics,
supporting domain-specific and topic-focused QA dataset generation.
Additionally, Dynamic-KGQA produces compact, semantically coherent subgraphs
that facilitate both training and evaluation of KGQA models, enhancing their
ability to leverage structured knowledge effectively. To align with existing
evaluation protocols, we also provide static large-scale train/test/validation
splits, ensuring comparability with prior methods. By introducing a dynamic,
customizable benchmarking paradigm, Dynamic-KGQA enables a more rigorous and
adaptable evaluation of QA systems.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 23:58:01 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Dammu",
"Preetam Prabhu Srikar",
""
],
[
"Naidu",
"Himanshu",
""
],
[
"Shah",
"Chirag",
""
]
]
| TITLE: Dynamic-KGQA: A Scalable Framework for Generating Adaptive Question
Answering Datasets
ABSTRACT: As question answering (QA) systems advance alongside the rapid evolution of
foundation models, the need for robust, adaptable, and large-scale evaluation
benchmarks becomes increasingly critical. Traditional QA benchmarks are often
static and publicly available, making them susceptible to data contamination
and memorization by large language models (LLMs). Consequently, static
benchmarks may overestimate model generalization and hinder a reliable
assessment of real-world performance. In this work, we introduce Dynamic-KGQA,
a scalable framework for generating adaptive QA datasets from knowledge graphs
(KGs), designed to mitigate memorization risks while maintaining statistical
consistency across iterations. Unlike fixed benchmarks, Dynamic-KGQA generates
a new dataset variant on every run while preserving the underlying
distribution, enabling fair and reproducible evaluations. Furthermore, our
framework provides fine-grained control over dataset characteristics,
supporting domain-specific and topic-focused QA dataset generation.
Additionally, Dynamic-KGQA produces compact, semantically coherent subgraphs
that facilitate both training and evaluation of KGQA models, enhancing their
ability to leverage structured knowledge effectively. To align with existing
evaluation protocols, we also provide static large-scale train/test/validation
splits, ensuring comparability with prior methods. By introducing a dynamic,
customizable benchmarking paradigm, Dynamic-KGQA enables a more rigorous and
adaptable evaluation of QA systems.
| no_new_dataset | 0.92657 |
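One way to picture the per-run dataset variation Dynamic-KGQA describes is seeded subgraph sampling: each run draws a fresh, locally coherent subgraph from the KG and verbalizes it into QA pairs. The sketch below is a guess at the general mechanism, with an invented toy graph and question template, not the paper's pipeline:

```python
# Illustrative per-run QA generation from a knowledge graph: sample a small
# connected subgraph with a run-specific seed, then verbalize one edge.
import random
import networkx as nx

kg = nx.Graph()
kg.add_edges_from([
    ("Marie Curie", "Warsaw", {"rel": "born_in"}),
    ("Marie Curie", "physics", {"rel": "field"}),
    ("Warsaw", "Poland", {"rel": "located_in"}),
])

def sample_qa(seed: int, hops: int = 2):
    rng = random.Random(seed)
    start = rng.choice(list(kg.nodes))
    sub = nx.ego_graph(kg, start, radius=hops)   # coherent local subgraph
    u, v = rng.choice(list(sub.edges))
    rel = sub.edges[u, v]["rel"]
    return f"What is the {rel.replace('_', ' ')} of {u}?", v

print(sample_qa(seed=0))
print(sample_qa(seed=1))  # different run, different dataset variant
```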
2503.05051 | Di Xu | Di Xu, Hengjie Liu, Xin Miao, Daniel O'Connor, Jessica E. Scholey,
Wensha Yang, Mary Feng, Michael Ohliger, Hui Lin, Dan Ruan, Yang Yang and Ke
Sheng | Accelerated Patient-specific Non-Cartesian MRI Reconstruction using
Implicit Neural Representations | null | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | The scanning time for a fully sampled MRI can be undesirably lengthy.
Compressed sensing has been developed to minimize image artifacts in
accelerated scans, but the required iterative reconstruction is computationally
complex and difficult to generalize on new cases. Image-domain-based deep
learning methods (e.g., convolutional neural networks) emerged as a faster
alternative but face challenges in modeling continuous k-space, a problem
amplified with non-Cartesian sampling commonly used in accelerated acquisition.
In comparison, implicit neural representations can model continuous signals in
the frequency domain and thus are compatible with arbitrary k-space sampling
patterns. The current study develops a novel generative-adversarially trained
implicit neural representation (k-GINR) for de novo undersampled non-Cartesian
k-space reconstruction. k-GINR consists of two stages: 1) supervised training
on an existing patient cohort; 2) self-supervised patient-specific
optimization. In stage 1, the network is trained with the
generative-adversarial network on diverse patients of the same anatomical
region supervised by fully sampled acquisition. In stage 2, undersampled
k-space data of individual patients is used to tailor the prior-embedded
network for patient-specific optimization. The proposed framework was evaluated
on the UCSF StarVIBE T1-weighted liver dataset. k-GINR is compared with an
image-domain deep learning method, Deep Cascade CNN, and a compressed sensing
method. k-GINR consistently outperformed the baselines with a larger
performance advantage observed at very high accelerations (e.g., 20 times).
k-GINR offers great value for direct non-Cartesian k-space reconstruction for
new incoming patients across a wide range of accelerations in liver anatomy.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 00:05:43 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Xu",
"Di",
""
],
[
"Liu",
"Hengjie",
""
],
[
"Miao",
"Xin",
""
],
[
"O'Connor",
"Daniel",
""
],
[
"Scholey",
"Jessica E.",
""
],
[
"Yang",
"Wensha",
""
],
[
"Feng",
"Mary",
""
],
[
"Ohliger",
"Michael",
""
],
[
"Lin",
"Hui",
""
],
[
"Ruan",
"Dan",
""
],
[
"Yang",
"Yang",
""
],
[
"Sheng",
"Ke",
""
]
]
| TITLE: Accelerated Patient-specific Non-Cartesian MRI Reconstruction using
Implicit Neural Representations
ABSTRACT: The scanning time for a fully sampled MRI can be undesirably lengthy.
Compressed sensing has been developed to minimize image artifacts in
accelerated scans, but the required iterative reconstruction is computationally
complex and difficult to generalize on new cases. Image-domain-based deep
learning methods (e.g., convolutional neural networks) emerged as a faster
alternative but face challenges in modeling continuous k-space, a problem
amplified with non-Cartesian sampling commonly used in accelerated acquisition.
In comparison, implicit neural representations can model continuous signals in
the frequency domain and thus are compatible with arbitrary k-space sampling
patterns. The current study develops a novel generative-adversarially trained
implicit neural representation (k-GINR) for de novo undersampled non-Cartesian
k-space reconstruction. k-GINR consists of two stages: 1) supervised training
on an existing patient cohort; 2) self-supervised patient-specific
optimization. In stage 1, the network is trained with the
generative-adversarial network on diverse patients of the same anatomical
region supervised by fully sampled acquisition. In stage 2, undersampled
k-space data of individual patients is used to tailor the prior-embedded
network for patient-specific optimization. The proposed framework was evaluated
on the UCSF StarVIBE T1-weighted liver dataset. k-GINR is compared with an
image-domain deep learning method, Deep Cascade CNN, and a compressed sensing
method. k-GINR consistently outperformed the baselines with a larger
performance advantage observed at very high accelerations (e.g., 20 times).
k-GINR offers great value for direct non-Cartesian k-space reconstruction for
new incoming patients across a wide range of accelerations in liver anatomy.
| no_new_dataset | 0.956634 |
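The core of an implicit neural representation over k-space is a coordinate network: continuous sample locations in, complex signal values out, so arbitrary non-Cartesian trajectories can supervise it. A minimal PyTorch sketch with random Fourier features follows; the sizes and feature mapping are illustrative, not the k-GINR design:

```python
# Minimal coordinate-MLP sketch: map continuous k-space coordinates to
# complex signal values, so any non-Cartesian sampling can supervise it.
import math
import torch

class KSpaceINR(torch.nn.Module):
    def __init__(self, n_freqs=16, hidden=128):
        super().__init__()
        # Random Fourier feature basis (fixed, not trained).
        self.register_buffer("B", torch.randn(2, n_freqs) * 10.0)
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2 * n_freqs, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 2),  # real + imaginary part
        )

    def forward(self, coords):          # coords: (N, 2) in [-0.5, 0.5]
        proj = 2 * math.pi * coords @ self.B
        feats = torch.cat([proj.sin(), proj.cos()], dim=-1)
        return self.net(feats)

model = KSpaceINR()
spokes = torch.rand(1024, 2) - 0.5      # arbitrary non-Cartesian sample points
print(model(spokes).shape)              # torch.Size([1024, 2])
```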
2503.05060 | Yosuke Yamagishi | Yosuke Yamagishi, Tomohiro Kikuchi, Shouhei Hanaoka, Takeharu
Yoshikawa, Osamu Abe | ModernBERT is More Efficient than Conventional BERT for Chest CT
Findings Classification in Japanese Radiology Reports | 23 pages, 8 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Objective: This study aims to evaluate and compare the performance of two
Japanese language models, the conventional Bidirectional Encoder Representations
from Transformers (BERT) and the newer ModernBERT, in classifying findings from
chest CT reports, with a focus on tokenization efficiency, processing time, and
classification performance. Methods: We conducted a retrospective study using
the CT-RATE-JPN dataset containing 22,778 training reports and 150 test
reports. Both models were fine-tuned for multi-label classification of 18
common chest CT conditions. The training data was split 18,222:4,556 into
training and validation sets. Performance was evaluated using F1 scores for each
condition and exact match accuracy across all 18 labels. Results: ModernBERT
demonstrated superior tokenization efficiency, requiring 24.0% fewer tokens per
document (258.1 vs. 339.6) compared to BERT Base. This translated to
significant performance improvements, with ModernBERT completing training in
1877.67 seconds versus BERT's 3090.54 seconds (39% reduction). ModernBERT
processed 38.82 samples per second during training (1.65x faster) and 139.90
samples per second during inference (1.66x faster). Despite these efficiency
gains, classification performance remained comparable, with ModernBERT
achieving superior F1 scores in 8 conditions, while BERT performed better in 4
conditions. Overall exact match accuracy was slightly higher for ModernBERT
(74.67% vs. 72.67%), though this difference was not statistically significant
(p=0.6291). Conclusion: ModernBERT offers substantial improvements in
tokenization efficiency and training speed without sacrificing classification
performance. These results suggest that ModernBERT is a promising candidate for
clinical applications in Japanese radiology reports analysis.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 00:28:08 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Yamagishi",
"Yosuke",
""
],
[
"Kikuchi",
"Tomohiro",
""
],
[
"Hanaoka",
"Shouhei",
""
],
[
"Yoshikawa",
"Takeharu",
""
],
[
"Abe",
"Osamu",
""
]
]
| TITLE: ModernBERT is More Efficient than Conventional BERT for Chest CT
Findings Classification in Japanese Radiology Reports
ABSTRACT: Objective: This study aims to evaluate and compare the performance of two
Japanese language models, the conventional Bidirectional Encoder Representations
from Transformers (BERT) and the newer ModernBERT, in classifying findings from
chest CT reports, with a focus on tokenization efficiency, processing time, and
classification performance. Methods: We conducted a retrospective study using
the CT-RATE-JPN dataset containing 22,778 training reports and 150 test
reports. Both models were fine-tuned for multi-label classification of 18
common chest CT conditions. The training data was split 18,222:4,556 into
training and validation sets. Performance was evaluated using F1 scores for each
condition and exact match accuracy across all 18 labels. Results: ModernBERT
demonstrated superior tokenization efficiency, requiring 24.0% fewer tokens per
document (258.1 vs. 339.6) compared to BERT Base. This translated to
significant performance improvements, with ModernBERT completing training in
1877.67 seconds versus BERT's 3090.54 seconds (39% reduction). ModernBERT
processed 38.82 samples per second during training (1.65x faster) and 139.90
samples per second during inference (1.66x faster). Despite these efficiency
gains, classification performance remained comparable, with ModernBERT
achieving superior F1 scores in 8 conditions, while BERT performed better in 4
conditions. Overall exact match accuracy was slightly higher for ModernBERT
(74.67% vs. 72.67%), though this difference was not statistically significant
(p=0.6291). Conclusion: ModernBERT offers substantial improvements in
tokenization efficiency and training speed without sacrificing classification
performance. These results suggest that ModernBERT is a promising candidate for
clinical applications in Japanese radiology reports analysis.
| no_new_dataset | 0.951097 |
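The multi-label setup the study compares reduces to an encoder with 18 sigmoid outputs. A sketch using the Hugging Face transformers API, assuming a transformers version recent enough to include ModernBERT; the English checkpoint named below is a stand-in for the Japanese models the paper fine-tunes, and the CT-RATE-JPN loading and training loop are omitted:

```python
# Sketch of the multi-label classification setup: 18 sigmoid outputs over an
# encoder. Checkpoint is illustrative, not the paper's Japanese model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "answerdotai/ModernBERT-base"  # swap in a Japanese ModernBERT/BERT
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(
    ckpt, num_labels=18, problem_type="multi_label_classification"
)

report = "Both lungs show no nodules. Mild cardiomegaly is noted."
batch = tok(report, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**batch).logits)
print(probs.shape)  # torch.Size([1, 18]), one probability per finding
```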
2503.05061 | Michael Krumdick | Michael Krumdick, Charles Lovering, Varshini Reddy, Seth Ebner, Chris
Tanner | No Free Labels: Limitations of LLM-as-a-Judge Without Human Grounding | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | LLM-as-a-Judge is a framework that uses an LLM (large language model) to
evaluate the quality of natural language text - typically text that is also
generated by an LLM. This framework holds great promise due to its relative
low-cost, ease of use, and strong correlations with human stylistic
preferences. However, LLM Judges have been shown to exhibit biases that can
distort their judgments. We evaluate how well LLM Judges can grade whether a
given response to a conversational question is correct, an ability crucial to
soundly estimating the overall response quality. To do so, we create and
publicly release a human-annotated dataset with labels of correctness for 1,200
LLM responses. We source questions from a combination of existing datasets and
a novel, challenging benchmark (BFF-Bench) created for this analysis. We
demonstrate a strong connection between an LLM's ability to correctly answer a
question and grade responses to that question. Although aggregate-level
statistics might imply a judge has high agreement with human annotators, it
will struggle on the subset of questions it could not answer. To address this
issue, we recommend a simple solution: provide the judge with a correct,
human-written reference answer. We perform an in-depth analysis on how
reference quality can affect the performance of an LLM Judge. We show that
providing a weaker judge (e.g. Qwen 2.5 7B) with higher quality references
reaches better agreement with human annotators than a stronger judge (e.g.
GPT-4o) with synthetic references.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 00:42:08 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Krumdick",
"Michael",
""
],
[
"Lovering",
"Charles",
""
],
[
"Reddy",
"Varshini",
""
],
[
"Ebner",
"Seth",
""
],
[
"Tanner",
"Chris",
""
]
]
| TITLE: No Free Labels: Limitations of LLM-as-a-Judge Without Human Grounding
ABSTRACT: LLM-as-a-Judge is a framework that uses an LLM (large language model) to
evaluate the quality of natural language text - typically text that is also
generated by an LLM. This framework holds great promise due to its relative
low-cost, ease of use, and strong correlations with human stylistic
preferences. However, LLM Judges have been shown to exhibit biases that can
distort their judgments. We evaluate how well LLM Judges can grade whether a
given response to a conversational question is correct, an ability crucial to
soundly estimating the overall response quality. To do so, we create and
publicly release a human-annotated dataset with labels of correctness for 1,200
LLM responses. We source questions from a combination of existing datasets and
a novel, challenging benchmark (BFF-Bench) created for this analysis. We
demonstrate a strong connection between an LLM's ability to correctly answer a
question and grade responses to that question. Although aggregate-level
statistics might imply a judge has high agreement with human annotators, it
will struggle on the subset of questions it could not answer. To address this
issue, we recommend a simple solution: provide the judge with a correct,
human-written reference answer. We perform an in-depth analysis on how
reference quality can affect the performance of an LLM Judge. We show that
providing a weaker judge (e.g. Qwen 2.5 7B) with higher quality references
reaches better agreement with human annotators than a stronger judge (e.g.
GPT-4o) with synthetic references.
| new_dataset | 0.957397 |
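The paper's recommended fix, handing the judge a human-written reference answer, is mostly a prompting change. A hedged sketch of such a prompt builder follows; the wording is invented, not the authors' template, and the resulting string can be sent to any judge model:

```python
# Sketch of reference-grounded judging: give the judge a human-written
# reference alongside the response. Prompt wording is illustrative.
def build_judge_prompt(question, response, reference):
    return (
        "You are grading whether a response to a question is correct.\n"
        f"Question: {question}\n"
        f"Reference answer (human-written): {reference}\n"
        f"Response to grade: {response}\n"
        "Reply with exactly one word: CORRECT or INCORRECT."
    )

prompt = build_judge_prompt(
    question="What year did the Apollo 11 mission land on the Moon?",
    response="It landed in 1969.",
    reference="Apollo 11 landed on the Moon on July 20, 1969.",
)
print(prompt)  # send to any judge model, e.g. via an OpenAI-compatible API
```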
2503.05074 | Yao Meng | Yao Meng, Sean P. Cornelius, Yang-Yu Liu, Aming Li | Personalised strategy of allocating social goods in structured
populations | 11 pages, 6 figures | null | null | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cooperation underlies many aspects of the evolution of human and animal
societies, where cooperators produce social goods to benefit others. Explaining
the emergence of cooperation among selfish individuals has become a major
research interest in evolutionary dynamics. Previous studies typically use
complex networks to capture the interactions between individuals, and assume
that cooperators distribute benefits equally to their neighbors. In practice,
the distribution of social goods is often non-uniform, and individuals may
selectively provide benefits to those they interact with based on their
personal preferences. Here, we develop an efficient algorithm to optimize the
placement of donation structure in any given network to minimize the threshold
for the emergence of cooperation. We find when cooperators allocate the
benefits preferentially compared to the traditional settings of donating to all
neighbors, cooperation tends to be maximally promoted. Furthermore, the optimal
donation structure is strongly disassortative -- the low-degree nodes tend to
donate to high-degree ones preferentially and vice versa. Based on this
finding, we offer a local heuristic strategy based on degree thresholds for
personalizing the allocation of social goods and choosing each cooperator's
recipient, and we demonstrate its effectiveness on empirical datasets. Our
findings advance the understanding of mechanisms for promoting cooperation with
strategic allocations of social goods.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 01:35:30 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Meng",
"Yao",
""
],
[
"Cornelius",
"Sean P.",
""
],
[
"Liu",
"Yang-Yu",
""
],
[
"Li",
"Aming",
""
]
]
| TITLE: Personalised strategy of allocating social goods in structured
populations
ABSTRACT: Cooperation underlies many aspects of the evolution of human and animal
societies, where cooperators produce social goods to benefit others. Explaining
the emergence of cooperation among selfish individuals has become a major
research interest in evolutionary dynamics. Previous studies typically use
complex networks to capture the interactions between individuals, and assume
that cooperators distribute benefits equally to their neighbors. In practice,
the distribution of social goods is often non-uniform, and individuals may
selectively provide benefits to those they interact with based on their
personal preferences. Here, we develop an efficient algorithm to optimize the
placement of donation structure in any given network to minimize the threshold
for the emergence of cooperation. We find when cooperators allocate the
benefits preferentially compared to the traditional settings of donating to all
neighbors, cooperation tends to be maximally promoted. Furthermore, the optimal
donation structure is strongly disassortative -- the low-degree nodes tend to
donate to high-degree ones preferentially and vice versa. Based on this
finding, we offer a local heuristic strategy based on degree thresholds for
personalizing the allocation of social goods and choosing each cooperator's
recipient, and we demonstrate its effectiveness on empirical datasets. Our
findings advance the understanding of mechanisms for promoting cooperation with
strategic allocations of social goods.
| no_new_dataset | 0.945651 |
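The degree-threshold heuristic above can be stated in a few lines: cooperators below a degree cutoff donate to their highest-degree neighbour, and those above it donate to their lowest-degree neighbour, producing the disassortative donation structure the paper finds optimal. A toy networkx version with an illustrative median-degree threshold:

```python
# Toy version of the degree-threshold heuristic: low-degree cooperators
# donate upward to hubs, hubs donate downward, giving a disassortative
# donation structure. Graph and threshold choice are illustrative.
import networkx as nx

g = nx.barabasi_albert_graph(50, 2, seed=0)
median_deg = sorted(d for _, d in g.degree)[len(g) // 2]

def pick_recipient(node):
    nbrs = list(g.neighbors(node))
    if g.degree[node] <= median_deg:                 # low-degree node
        return max(nbrs, key=lambda n: g.degree[n])  # donate to a hub
    return min(nbrs, key=lambda n: g.degree[n])      # hub donates downward

donations = {v: pick_recipient(v) for v in g.nodes}
print(list(donations.items())[:5])
```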
2503.05086 | Anith Selvakumar | Anith Selvakumar and Manasa Bharadwaj | Fake It To Make It: Virtual Multiviews to Enhance Monocular Indoor
Semantic Scene Completion | Submitted to IROS 2025 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monocular Indoor Semantic Scene Completion (SSC) aims to reconstruct a 3D
semantic occupancy map from a single RGB image of an indoor scene, inferring
spatial layout and object categories from 2D image cues. The challenge of this
task arises from the depth, scale, and shape ambiguities that emerge when
transforming a 2D image into 3D space, particularly within the complex and
often heavily occluded environments of indoor scenes. Current SSC methods often
struggle with these ambiguities, resulting in distorted or missing object
representations. To overcome these limitations, we introduce an innovative
approach that leverages novel view synthesis and multiview fusion.
Specifically, we demonstrate how virtual cameras can be placed around the scene
to emulate multiview inputs that enhance contextual scene information. We also
introduce a Multiview Fusion Adaptor (MVFA) to effectively combine the
multiview 3D scene predictions into a unified 3D semantic occupancy map.
Finally, we identify and study the inherent limitation of generative techniques
when applied to SSC, specifically the Novelty-Consistency tradeoff. Our system,
GenFuSE, demonstrates IoU score improvements of up to 2.8% for Scene Completion
and 4.9% for Semantic Scene Completion when integrated with existing SSC
networks on the NYUv2 dataset. This work introduces GenFuSE as a standard
framework for advancing monocular SSC with synthesized inputs.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 02:09:38 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Selvakumar",
"Anith",
""
],
[
"Bharadwaj",
"Manasa",
""
]
]
| TITLE: Fake It To Make It: Virtual Multiviews to Enhance Monocular Indoor
Semantic Scene Completion
ABSTRACT: Monocular Indoor Semantic Scene Completion (SSC) aims to reconstruct a 3D
semantic occupancy map from a single RGB image of an indoor scene, inferring
spatial layout and object categories from 2D image cues. The challenge of this
task arises from the depth, scale, and shape ambiguities that emerge when
transforming a 2D image into 3D space, particularly within the complex and
often heavily occluded environments of indoor scenes. Current SSC methods often
struggle with these ambiguities, resulting in distorted or missing object
representations. To overcome these limitations, we introduce an innovative
approach that leverages novel view synthesis and multiview fusion.
Specifically, we demonstrate how virtual cameras can be placed around the scene
to emulate multiview inputs that enhance contextual scene information. We also
introduce a Multiview Fusion Adaptor (MVFA) to effectively combine the
multiview 3D scene predictions into a unified 3D semantic occupancy map.
Finally, we identify and study the inherent limitation of generative techniques
when applied to SSC, specifically the Novelty-Consistency tradeoff. Our system,
GenFuSE, demonstrates IoU score improvements of up to 2.8% for Scene Completion
and 4.9% for Semantic Scene Completion when integrated with existing SSC
networks on the NYUv2 dataset. This work introduces GenFuSE as a standard
framework for advancing monocular SSC with synthesized inputs.
| no_new_dataset | 0.951142 |
2503.05102 | HengRui Xing | Hengrui Xing, Cong Tian, Liang Zhao, Zhi Ma, WenSheng Wang, Nan Zhang,
Chao Huang, Zhenhua Duan | AutoTestForge: A Multidimensional Automated Testing Framework for
Natural Language Processing Models | 15 pages, 4 figures, Under review | null | null | null | cs.SE cs.CL cs.CR | http://creativecommons.org/licenses/by/4.0/ | In recent years, the application of behavioral testing in Natural Language
Processing (NLP) model evaluation has grown substantially. However, existing
methods remain limited by their reliance on manual labor and the narrow scope
of capability assessment.
To address these limitations, we introduce AutoTestForge, an automated and
multidimensional testing framework for NLP models in this paper. Within
AutoTestForge, through the utilization of Large Language Models (LLMs) to
automatically generate test templates and instantiate them, manual involvement
is significantly reduced. Additionally, a mechanism for the validation of test
case labels based on differential testing is implemented which makes use of a
multi-model voting system to guarantee the quality of test cases. The framework
also extends the test suite across three dimensions, taxonomy, fairness, and
robustness, offering a comprehensive evaluation of the capabilities of NLP
models. This expansion enables a more in-depth and thorough assessment of the
models, providing valuable insights into their strengths and weaknesses. A
comprehensive evaluation across sentiment analysis (SA) and semantic textual
similarity (STS) tasks demonstrates that AutoTestForge consistently outperforms
existing datasets and testing tools, achieving higher error detection rates (an
average of $30.89\%$ for SA and $34.58\%$ for STS). Moreover, different
generation strategies exhibit stable effectiveness, with error detection rates
ranging from $29.03\% - 36.82\%$.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 02:44:17 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Xing",
"Hengrui",
""
],
[
"Tian",
"Cong",
""
],
[
"Zhao",
"Liang",
""
],
[
"Ma",
"Zhi",
""
],
[
"Wang",
"WenSheng",
""
],
[
"Zhang",
"Nan",
""
],
[
"Huang",
"Chao",
""
],
[
"Duan",
"Zhenhua",
""
]
]
| TITLE: AutoTestForge: A Multidimensional Automated Testing Framework for
Natural Language Processing Models
ABSTRACT: In recent years, the application of behavioral testing in Natural Language
Processing (NLP) model evaluation has grown substantially. However, existing
methods remain limited by their reliance on manual labor and the narrow scope
of capability assessment.
To address these limitations, we introduce AutoTestForge, an automated and
multidimensional testing framework for NLP models in this paper. Within
AutoTestForge, through the utilization of Large Language Models (LLMs) to
automatically generate test templates and instantiate them, manual involvement
is significantly reduced. Additionally, a mechanism for the validation of test
case labels based on differential testing is implemented which makes use of a
multi-model voting system to guarantee the quality of test cases. The framework
also extends the test suite across three dimensions, taxonomy, fairness, and
robustness, offering a comprehensive evaluation of the capabilities of NLP
models. This expansion enables a more in-depth and thorough assessment of the
models, providing valuable insights into their strengths and weaknesses. A
comprehensive evaluation across sentiment analysis (SA) and semantic textual
similarity (STS) tasks demonstrates that AutoTestForge consistently outperforms
existing datasets and testing tools, achieving higher error detection rates (an
average of $30.89\%$ for SA and $34.58\%$ for STS). Moreover, different
generation strategies exhibit stable effectiveness, with error detection rates
ranging from $29.03\% - 36.82\%$.
| no_new_dataset | 0.943764 |
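The differential-testing step above, validating auto-generated labels by multi-model voting, has a simple skeleton: keep a test case only when enough independent models agree on its label. A sketch with stubbed model callables (in AutoTestForge these would be distinct NLP models):

```python
# Sketch of label validation by multi-model voting: accept an auto-generated
# test case only if enough models agree. Models are stubbed for illustration.
from collections import Counter

def validate_label(case, models, min_agreement=2):
    votes = Counter(m(case) for m in models)
    label, count = votes.most_common(1)[0]
    return (label, True) if count >= min_agreement else (None, False)

# Stub "models" for a sentiment test case.
models = [lambda t: "positive", lambda t: "positive", lambda t: "negative"]
print(validate_label("The movie was delightful.", models))  # ('positive', True)
```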
2503.05106 | Ruinan Wang Raynham | Ruinan Wang, Ian Nabney, Mohammad Golbabaee | Grouped Sequential Optimization Strategy -- the Application of
Hyperparameter Importance Assessment in Deep Learning | 12 pages | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hyperparameter optimization (HPO) is a critical component of machine learning
pipelines, significantly affecting model robustness, stability, and
generalization. However, HPO is often a time-consuming and computationally
intensive task. Traditional HPO methods, such as grid search and random search,
often suffer from inefficiency. Bayesian optimization, while more efficient,
still struggles with high-dimensional search spaces. In this paper, we
contribute to the field by exploring how insights gained from hyperparameter
importance assessment (HIA) can be leveraged to accelerate HPO, reducing both
time and computational resources. Building on prior work that quantified
hyperparameter importance by evaluating 10 hyperparameters on CNNs using 10
common image classification datasets, we implement a novel HPO strategy called
'Sequential Grouping.' That prior work assessed the importance weights of the
investigated hyperparameters based on their influence on model performance,
providing valuable insights that we leverage to optimize our HPO process. Our
experiments, validated across six additional image classification datasets,
demonstrate that incorporating hyperparameter importance assessment (HIA) can
significantly accelerate HPO without compromising model performance, reducing
optimization time by an average of 31.9\% compared to the conventional
simultaneous strategy.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 03:01:00 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Wang",
"Ruinan",
""
],
[
"Nabney",
"Ian",
""
],
[
"Golbabaee",
"Mohammad",
""
]
]
| TITLE: Grouped Sequential Optimization Strategy -- the Application of
Hyperparameter Importance Assessment in Deep Learning
ABSTRACT: Hyperparameter optimization (HPO) is a critical component of machine learning
pipelines, significantly affecting model robustness, stability, and
generalization. However, HPO is often a time-consuming and computationally
intensive task. Traditional HPO methods, such as grid search and random search,
often suffer from inefficiency. Bayesian optimization, while more efficient,
still struggles with high-dimensional search spaces. In this paper, we
contribute to the field by exploring how insights gained from hyperparameter
importance assessment (HIA) can be leveraged to accelerate HPO, reducing both
time and computational resources. Building on prior work that quantified
hyperparameter importance by evaluating 10 hyperparameters on CNNs using 10
common image classification datasets, we implement a novel HPO strategy called
'Sequential Grouping.' That prior work assessed the importance weights of the
investigated hyperparameters based on their influence on model performance,
providing valuable insights that we leverage to optimize our HPO process. Our
experiments, validated across six additional image classification datasets,
demonstrate that incorporating hyperparameter importance assessment (HIA) can
significantly accelerate HPO without compromising model performance, reducing
optimization time by an average of 31.9\% compared to the conventional
simultaneous strategy.
| no_new_dataset | 0.949248 |
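'Sequential Grouping' can be read as: order hyperparameter groups by previously measured importance, then search each group in turn while freezing the best values found so far. A toy sketch with an invented objective and illustrative groups; a real run would substitute actual training and validation:

```python
# Toy 'Sequential Grouping': tune hyperparameters group-by-group in
# descending importance, freezing earlier groups at their best values.
import random

random.seed(0)
groups = [  # ordered by importance from a prior HIA study (illustrative)
    {"lr": [1e-4, 1e-3, 1e-2]},
    {"batch_size": [32, 64, 128], "weight_decay": [0.0, 1e-4]},
    {"dropout": [0.0, 0.3, 0.5]},
]

def score(cfg):  # stand-in for a real validation-accuracy measurement
    return -abs(cfg["lr"] - 1e-3) - 0.1 * abs(cfg["dropout"] - 0.3)

best = {"lr": 1e-2, "batch_size": 32, "weight_decay": 0.0, "dropout": 0.0}
for group in groups:
    trials = []
    for _ in range(8):           # small search over the current group only
        cfg = dict(best)
        for name, choices in group.items():
            cfg[name] = random.choice(choices)
        trials.append((score(cfg), cfg))
    best = max(trials, key=lambda t: t[0])[1]
print(best)
```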
2503.05112 | Chaoran Xiong | Chaoran Xiong, Litao Wei, Kehui Ma, Zhen Sun, Yan Xiang, Zihan Nan,
Trieu-Kien Truong and Ling Pei | THE-SEAN: A Heart Rate Variation-Inspired Temporally High-Order
Event-Based Visual Odometry with Self-Supervised Spiking Event Accumulation
Networks | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event-based visual odometry has recently gained attention for its high
accuracy and real-time performance in fast-motion systems. Unlike traditional
synchronous estimators that rely on constant-frequency (zero-order) triggers,
event-based visual odometry can actively accumulate information to generate
temporally high-order estimation triggers. However, existing methods primarily
focus on adaptive event representation after estimation triggers, neglecting
the decision-making process for efficient temporal triggering itself. This
oversight leads to computational redundancy and noise accumulation. In this
paper, we introduce a temporally high-order event-based visual odometry with
spiking event accumulation networks (THE-SEAN). To the best of our knowledge,
it is the first event-based visual odometry capable of dynamically adjusting
its estimation trigger decision in response to motion and environmental
changes. Inspired by biological systems that regulate hormone secretion to
modulate heart rate, a self-supervised spiking neural network is designed to
generate estimation triggers. This spiking network extracts temporal features
to produce triggers, with rewards based on block matching points and Fisher
information matrix (FIM) trace acquired from the estimator itself. Finally,
THE-SEAN is evaluated across several open datasets, thereby demonstrating
average improvements of 13\% in estimation accuracy, 9\% in smoothness, and
38\% in triggering efficiency compared to the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 03:16:32 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Xiong",
"Chaoran",
""
],
[
"Wei",
"Litao",
""
],
[
"Ma",
"Kehui",
""
],
[
"Sun",
"Zhen",
""
],
[
"Xiang",
"Yan",
""
],
[
"Nan",
"Zihan",
""
],
[
"Truong",
"Trieu-Kien",
""
],
[
"Pei",
"Ling",
""
]
]
| TITLE: THE-SEAN: A Heart Rate Variation-Inspired Temporally High-Order
Event-Based Visual Odometry with Self-Supervised Spiking Event Accumulation
Networks
ABSTRACT: Event-based visual odometry has recently gained attention for its high
accuracy and real-time performance in fast-motion systems. Unlike traditional
synchronous estimators that rely on constant-frequency (zero-order) triggers,
event-based visual odometry can actively accumulate information to generate
temporally high-order estimation triggers. However, existing methods primarily
focus on adaptive event representation after estimation triggers, neglecting
the decision-making process for efficient temporal triggering itself. This
oversight leads to computational redundancy and noise accumulation. In this
paper, we introduce a temporally high-order event-based visual odometry with
spiking event accumulation networks (THE-SEAN). To the best of our knowledge,
it is the first event-based visual odometry capable of dynamically adjusting
its estimation trigger decision in response to motion and environmental
changes. Inspired by biological systems that regulate hormone secretion to
modulate heart rate, a self-supervised spiking neural network is designed to
generate estimation triggers. This spiking network extracts temporal features
to produce triggers, with rewards based on block matching points and Fisher
information matrix (FIM) trace acquired from the estimator itself. Finally,
THE-SEAN is evaluated across several open datasets, thereby demonstrating
average improvements of 13\% in estimation accuracy, 9\% in smoothness, and
38\% in triggering efficiency compared to the state-of-the-art methods.
| no_new_dataset | 0.95594 |
2503.05142 | Tianjun Wei | Tianjun Wei, Wei Wen, Ruizhi Qiao, Xing Sun, Jianghong Ma | RocketEval: Efficient Automated LLM Evaluation via Grading Checklist | Accepted by ICLR 2025: https://openreview.net/forum?id=zJjzNj6QUe | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Evaluating large language models (LLMs) in diverse and challenging scenarios
is essential to align them with human preferences. To mitigate the prohibitive
costs associated with human evaluations, utilizing a powerful LLM as a judge
has emerged as a favored approach. Nevertheless, this methodology encounters
several challenges, including substantial expenses, concerns regarding privacy
and security, and reproducibility. In this paper, we propose a straightforward,
replicable, and accurate automated evaluation method by leveraging a
lightweight LLM as the judge, named RocketEval. Initially, we identify that the
performance disparity between lightweight and powerful LLMs in evaluation tasks
primarily stems from their ability to conduct comprehensive analyses, which is
not easily enhanced through techniques such as chain-of-thought reasoning. By
reframing the evaluation task as a multi-faceted Q&A using an instance-specific
checklist, we demonstrate that the limited judgment accuracy of lightweight
LLMs is largely attributable to high uncertainty and positional bias. To address
these challenges, we introduce an automated evaluation process grounded in
checklist grading, which is designed to accommodate a variety of scenarios and
questions. This process encompasses the creation of checklists, the grading of
these checklists by lightweight LLMs, and the reweighting of checklist items to
align with the supervised annotations. Our experiments carried out on the
automated evaluation benchmarks, MT-Bench and WildBench datasets, reveal that
RocketEval, when using Gemma-2-2B as the judge, achieves a high correlation
(0.965) with human preferences, which is comparable to GPT-4o. Moreover,
RocketEval provides a cost reduction exceeding 50-fold for large-scale
evaluation and comparison scenarios. Our code is available at
https://github.com/Joinn99/RocketEval-ICLR .
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 04:51:30 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Wei",
"Tianjun",
""
],
[
"Wen",
"Wei",
""
],
[
"Qiao",
"Ruizhi",
""
],
[
"Sun",
"Xing",
""
],
[
"Ma",
"Jianghong",
""
]
]
| TITLE: RocketEval: Efficient Automated LLM Evaluation via Grading Checklist
ABSTRACT: Evaluating large language models (LLMs) in diverse and challenging scenarios
is essential to align them with human preferences. To mitigate the prohibitive
costs associated with human evaluations, utilizing a powerful LLM as a judge
has emerged as a favored approach. Nevertheless, this methodology encounters
several challenges, including substantial expenses, concerns regarding privacy
and security, and reproducibility. In this paper, we propose a straightforward,
replicable, and accurate automated evaluation method by leveraging a
lightweight LLM as the judge, named RocketEval. Initially, we identify that the
performance disparity between lightweight and powerful LLMs in evaluation tasks
primarily stems from their ability to conduct comprehensive analyses, which is
not easily enhanced through techniques such as chain-of-thought reasoning. By
reframing the evaluation task as a multi-faceted Q&A using an instance-specific
checklist, we demonstrate that the limited judgment accuracy of lightweight
LLMs is largely attributable to high uncertainty and positional bias. To address
these challenges, we introduce an automated evaluation process grounded in
checklist grading, which is designed to accommodate a variety of scenarios and
questions. This process encompasses the creation of checklists, the grading of
these checklists by lightweight LLMs, and the reweighting of checklist items to
align with the supervised annotations. Our experiments carried out on the
automated evaluation benchmarks, MT-Bench and WildBench datasets, reveal that
RocketEval, when using Gemma-2-2B as the judge, achieves a high correlation
(0.965) with human preferences, which is comparable to GPT-4o. Moreover,
RocketEval provides a cost reduction exceeding 50-fold for large-scale
evaluation and comparison scenarios. Our code is available at
https://github.com/Joinn99/RocketEval-ICLR .
| no_new_dataset | 0.94474 |
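Checklist grading reduces each evaluation to a set of yes/no questions whose answers are combined with learned weights. A minimal sketch with invented checklist items, a stubbed judge, and hand-set weights standing in for the supervised reweighting step:

```python
# Sketch of checklist-based grading: a lightweight judge answers yes/no per
# item; the score is a weighted sum. Items, weights, and judge are stubs.
checklist = [
    "Does the response answer the user's actual question?",
    "Is every factual claim in the response correct?",
    "Is the response free of irrelevant padding?",
]
weights = [0.5, 0.35, 0.15]  # would be fit against supervised annotations

def grade(response, judge):
    answers = [judge(item, response) for item in checklist]  # 1 = yes, 0 = no
    return sum(w * a for w, a in zip(weights, answers))

stub_judge = lambda item, resp: 1  # replace with a small LLM, e.g. Gemma-2-2B
print(grade("Paris is the capital of France.", stub_judge))  # 1.0
```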
2503.05143 | Wenhao Wang | Wenhao Wang, Zijie Yu, Rui Ye, Jianqing Zhang, Siheng Chen, Yanfeng
Wang | FedMABench: Benchmarking Mobile Agents on Decentralized Heterogeneous
User Data | null | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Mobile agents have attracted tremendous research participation recently.
Traditional approaches to mobile agent training rely on centralized data
collection, leading to high cost and limited scalability. Distributed training
utilizing federated learning offers an alternative by harnessing real-world
user data, providing scalability and reducing costs. However, pivotal
challenges, including the absence of standardized benchmarks, hinder progress
in this field.
To tackle the challenges, we introduce FedMABench, the first benchmark for
federated training and evaluation of mobile agents, specifically designed for
heterogeneous scenarios. FedMABench features 6 datasets with 30+ subsets, 8
federated algorithms, 10+ base models, and over 800 apps across 5 categories,
providing a comprehensive framework for evaluating mobile agents across diverse
environments. Through extensive experiments, we uncover several key insights:
federated algorithms consistently outperform local training; the distribution
of specific apps plays a crucial role in heterogeneity; and, even apps from
distinct categories can exhibit correlations during training. FedMABench is
publicly available at: https://github.com/wwh0411/FedMABench with the datasets
at: https://huggingface.co/datasets/wwh0411/FedMABench.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 04:52:20 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Wang",
"Wenhao",
""
],
[
"Yu",
"Zijie",
""
],
[
"Ye",
"Rui",
""
],
[
"Zhang",
"Jianqing",
""
],
[
"Chen",
"Siheng",
""
],
[
"Wang",
"Yanfeng",
""
]
]
| TITLE: FedMABench: Benchmarking Mobile Agents on Decentralized Heterogeneous
User Data
ABSTRACT: Mobile agents have attracted tremendous research participation recently.
Traditional approaches to mobile agent training rely on centralized data
collection, leading to high cost and limited scalability. Distributed training
utilizing federated learning offers an alternative by harnessing real-world
user data, providing scalability and reducing costs. However, pivotal
challenges, including the absence of standardized benchmarks, hinder progress
in this field.
To tackle the challenges, we introduce FedMABench, the first benchmark for
federated training and evaluation of mobile agents, specifically designed for
heterogeneous scenarios. FedMABench features 6 datasets with 30+ subsets, 8
federated algorithms, 10+ base models, and over 800 apps across 5 categories,
providing a comprehensive framework for evaluating mobile agents across diverse
environments. Through extensive experiments, we uncover several key insights:
federated algorithms consistently outperform local training; the distribution
of specific apps plays a crucial role in heterogeneity; and, even apps from
distinct categories can exhibit correlations during training. FedMABench is
publicly available at: https://github.com/wwh0411/FedMABench with the datasets
at: https://huggingface.co/datasets/wwh0411/FedMABench.
| no_new_dataset | 0.526343 |
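The federated algorithms benchmarked above build on rounds of weighted model averaging. A minimal FedAvg round in PyTorch, with toy client models and dataset sizes:

```python
# Minimal FedAvg round: average client weights, weighted by local data size.
# Model and client sizes are toy values, not the benchmark's agents.
import torch

def fed_avg(client_states, client_sizes):
    total = sum(client_sizes)
    avg = {}
    for key in client_states[0]:
        avg[key] = sum(
            s[key] * (n / total) for s, n in zip(client_states, client_sizes)
        )
    return avg

clients = [torch.nn.Linear(4, 2) for _ in range(3)]
new_global = fed_avg([c.state_dict() for c in clients],
                     client_sizes=[100, 50, 25])
print({k: v.shape for k, v in new_global.items()})
```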
2503.05150 | Bowen Wu | Bowen Wu, Wenqing Wang, Haoran Li, Ying Li, Jingsong Yu, Baoxun Wang | Interpersonal Memory Matters: A New Task for Proactive Dialogue
Utilizing Conversational History | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Proactive dialogue systems aim to empower chatbots with the capability of
leading conversations towards specific targets, thereby enhancing user
engagement and service autonomy. Existing systems typically target pre-defined
keywords or entities, neglecting user attributes and preferences implicit in
dialogue history, hindering the development of long-term user intimacy. To
address these challenges, we take a radical step towards building a more
human-like conversational agent by integrating proactive dialogue systems with
long-term memory into a unified framework. Specifically, we define a novel task
named Memory-aware Proactive Dialogue (MapDia). By decomposing the task, we
then propose an automatic data construction method and create the first Chinese
Memory-aware Proactive Dataset (ChMapData). Furthermore, we introduce a joint
framework based on Retrieval Augmented Generation (RAG), featuring three
modules: Topic Summarization, Topic Retrieval, and Proactive Topic-shifting
Detection and Generation, designed to steer dialogues towards relevant
historical topics at the right time. The effectiveness of our dataset and
models is validated through both automatic and human evaluations. We release
the open-source framework and dataset at
https://github.com/FrontierLabs/MapDia.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 05:19:17 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Wu",
"Bowen",
""
],
[
"Wang",
"Wenqing",
""
],
[
"Li",
"Haoran",
""
],
[
"Li",
"Ying",
""
],
[
"Yu",
"Jingsong",
""
],
[
"Wang",
"Baoxun",
""
]
]
| TITLE: Interpersonal Memory Matters: A New Task for Proactive Dialogue
Utilizing Conversational History
ABSTRACT: Proactive dialogue systems aim to empower chatbots with the capability of
leading conversations towards specific targets, thereby enhancing user
engagement and service autonomy. Existing systems typically target pre-defined
keywords or entities, neglecting user attributes and preferences implicit in
dialogue history, hindering the development of long-term user intimacy. To
address these challenges, we take a radical step towards building a more
human-like conversational agent by integrating proactive dialogue systems with
long-term memory into a unified framework. Specifically, we define a novel task
named Memory-aware Proactive Dialogue (MapDia). By decomposing the task, we
then propose an automatic data construction method and create the first Chinese
Memory-aware Proactive Dataset (ChMapData). Furthermore, we introduce a joint
framework based on Retrieval Augmented Generation (RAG), featuring three
modules: Topic Summarization, Topic Retrieval, and Proactive Topic-shifting
Detection and Generation, designed to steer dialogues towards relevant
historical topics at the right time. The effectiveness of our dataset and
models is validated through both automatic and human evaluations. We release
the open-source framework and dataset at
https://github.com/FrontierLabs/MapDia.
| new_dataset | 0.953966 |
2503.05161 | Bo Yu | Zheng Zhou, Zhe Li, Bo Yu, Lina Hu, Liang Dong, Zijian Yang, Xiaoli
Liu, Ning Xu, Ziwei Wang, Yonghao Dang, Jianqin Yin | GaussianCAD: Robust Self-Supervised CAD Reconstruction from Three
Orthographic Views Using 3D Gaussian Splatting | null | null | null | null | cs.CV cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The automatic reconstruction of 3D computer-aided design (CAD) models from
CAD sketches has recently gained significant attention in the computer vision
community. Most existing methods, however, rely on vector CAD sketches and 3D
ground truth for supervision, which are often difficult to obtain in
industrial applications and are sensitive to noisy inputs. We propose viewing
CAD reconstruction as a specific instance of sparse-view 3D reconstruction to
overcome these limitations. While this reformulation offers a promising
perspective, existing 3D reconstruction methods typically require natural
images and corresponding camera poses as inputs, which introduces two
significant challenges: (1) a modality discrepancy between CAD sketches and
natural images, and (2) the difficulty of accurate camera pose estimation for CAD
sketches. To solve these issues, we first transform the CAD sketches into
representations resembling natural images and extract corresponding masks.
Next, we manually calculate the camera poses for the orthographic views to
ensure accurate alignment within the 3D coordinate system. Finally, we employ a
customized sparse-view 3D reconstruction method to achieve high-quality
reconstructions from aligned orthographic views. By leveraging raster CAD
sketches for self-supervision, our approach eliminates the reliance on vector
CAD sketches and 3D ground truth. Experiments on the Sub-Fusion360 dataset
demonstrate that our proposed method significantly outperforms previous
approaches in CAD reconstruction performance and exhibits strong robustness to
noisy inputs.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 05:55:50 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Zhou",
"Zheng",
""
],
[
"Li",
"Zhe",
""
],
[
"Yu",
"Bo",
""
],
[
"Hu",
"Lina",
""
],
[
"Dong",
"Liang",
""
],
[
"Yang",
"Zijian",
""
],
[
"Liu",
"Xiaoli",
""
],
[
"Xu",
"Ning",
""
],
[
"Wang",
"Ziwei",
""
],
[
"Dang",
"Yonghao",
""
],
[
"Yin",
"Jianqin",
""
]
]
| TITLE: GaussianCAD: Robust Self-Supervised CAD Reconstruction from Three
Orthographic Views Using 3D Gaussian Splatting
ABSTRACT: The automatic reconstruction of 3D computer-aided design (CAD) models from
CAD sketches has recently gained significant attention in the computer vision
community. Most existing methods, however, rely on vector CAD sketches and 3D
ground truth for supervision, which are often difficult to obtain in
industrial applications and are sensitive to noisy inputs. We propose viewing
CAD reconstruction as a specific instance of sparse-view 3D reconstruction to
overcome these limitations. While this reformulation offers a promising
perspective, existing 3D reconstruction methods typically require natural
images and corresponding camera poses as inputs, which introduces two
significant challenges: (1) a modality discrepancy between CAD sketches and
natural images, and (2) the difficulty of accurate camera pose estimation for CAD
sketches. To solve these issues, we first transform the CAD sketches into
representations resembling natural images and extract corresponding masks.
Next, we manually calculate the camera poses for the orthographic views to
ensure accurate alignment within the 3D coordinate system. Finally, we employ a
customized sparse-view 3D reconstruction method to achieve high-quality
reconstructions from aligned orthographic views. By leveraging raster CAD
sketches for self-supervision, our approach eliminates the reliance on vector
CAD sketches and 3D ground truth. Experiments on the Sub-Fusion360 dataset
demonstrate that our proposed method significantly outperforms previous
approaches in CAD reconstruction performance and exhibits strong robustness to
noisy inputs.
| no_new_dataset | 0.950869 |
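
A minimal sketch of the "manually calculate the camera poses for the orthographic views" step described in the abstract above. The look-at convention (OpenGL-style, camera looking down -z), the assumption that the object is centered at the origin, and the camera distance are all illustrative choices of ours; the paper's exact conventions may differ, and since the views are orthographic, an orthographic projection would accompany these extrinsics rather than the usual perspective intrinsics.

```python
# Illustrative sketch: fixed extrinsics for the three orthographic views
# (front, side, top). Conventions and distance are assumptions, not the paper's.
import numpy as np

def look_at(eye: np.ndarray, target: np.ndarray, up: np.ndarray) -> np.ndarray:
    """Build a 4x4 world-to-camera view matrix (OpenGL convention: -z forward)."""
    f = target - eye
    f = f / np.linalg.norm(f)                 # forward direction
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)                 # camera right axis
    u = np.cross(s, f)                        # recomputed camera up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = view[:3, :3] @ (-eye)       # translation = -R @ eye
    return view

center = np.zeros(3)  # object assumed centered at the world origin
d = 2.0               # assumed camera distance from the object

poses = {
    "front": look_at(np.array([0.0, 0.0, d]), center, np.array([0.0, 1.0, 0.0])),
    "side":  look_at(np.array([d, 0.0, 0.0]), center, np.array([0.0, 1.0, 0.0])),
    "top":   look_at(np.array([0.0, d, 0.0]), center, np.array([0.0, 0.0, -1.0])),
}
for name, pose in poses.items():
    print(name)
    print(np.round(pose, 3))
```

Because the three views are axis-aligned by construction, these poses can be written down once and reused for every sketch, which is what lets the method sidestep camera pose estimation entirely.
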