id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.15107 | Emre Anakok | Emre Anakok, Pierre Barbillon, Colin Fontaine, Elisa Thebault | Interpretability of Graph Neural Networks to Assess Effects of Global
Change Drivers on Ecological Networks | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pollinators play a crucial role for plant reproduction, either in natural
ecosystems or in human-modified landscapes. Global change drivers, including
climate change or land use modifications, can alter the plant-pollinator
interactions. To assess the potential influence of global change drivers on
pollination, large-scale interactions, climate and land use data are required.
While recent machine learning methods, such as graph neural networks (GNNs),
allow the analysis of such datasets, interpreting their results can be
challenging. We explore existing methods for interpreting GNNs in order to
highlight the effects of various environmental covariates on pollination
network connectivity. A large simulation study is performed to confirm whether
these methods can detect the interactive effect between a covariate and a genus
of plant on connectivity, and whether the application of debiasing techniques
influences the estimation of these effects. An application on the Spipoll
dataset, with and without accounting for sampling effects, highlights the
potential impact of land use on network connectivity and shows that accounting
for sampling effects partially alters the estimation of these effects.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 11:04:53 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 20:08:35 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Anakok",
"Emre",
""
],
[
"Barbillon",
"Pierre",
""
],
[
"Fontaine",
"Colin",
""
],
[
"Thebault",
"Elisa",
""
]
] | TITLE: Interpretability of Graph Neural Networks to Assess Effects of Global
Change Drivers on Ecological Networks
ABSTRACT: Pollinators play a crucial role for plant reproduction, either in natural
ecosystems or in human-modified landscapes. Global change drivers, including
climate change or land use modifications, can alter the plant-pollinator
interactions. To assess the potential influence of global change drivers on
pollination, large-scale interactions, climate and land use data are required.
While recent machine learning methods, such as graph neural networks (GNNs),
allow the analysis of such datasets, interpreting their results can be
challenging. We explore existing methods for interpreting GNNs in order to
highlight the effects of various environmental covariates on pollination
network connectivity. A large simulation study is performed to confirm whether
these methods can detect the interactive effect between a covariate and a genus
of plant on connectivity, and whether the application of debiasing techniques
influences the estimation of these effects. An application on the Spipoll
dataset, with and without accounting for sampling effects, highlights the
potential impact of land use on network connectivity and shows that accounting
for sampling effects partially alters the estimation of these effects.
|
2503.15383 | Alexandre Bousse | Corentin Vazia, Thore Dassow, Alexandre Bousse, Jacques Froment,
B\'eatrice Vedel, Franck Vermet, Alessandro Perelli, Jean-Pierre Tasu and
Dimitris Visvikis | Material Decomposition in Photon-Counting Computed Tomography with
Diffusion Models: Comparative Study and Hybridization with Variational
Regularizers | 12 pages, 10 figures, 4 tables | null | null | null | physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | Photon-counting computed tomography (PCCT) enables spectral imaging and
material decomposition (MD) but often suffers from low signal-to-noise ratios
due to constraints like low photon counts and sparse-view settings. Traditional
variational methods depend heavily on handcrafted regularizers, while AI-based
approaches, particularly convolutional neural networks (CNNs), have become
state-of-the-art. More recently, diffusion models (DMs) have gained prominence
in generative modeling by learning distribution functions, which can serve as
priors for inverse problems. This work explores DMs as regularizers for MD
tasks in PCCT using diffusion posterior sampling (DPS). We evaluate three
DPS-based approaches: image-domain two-step DPS (im-TDPS), projection-domain
two-step DPS (proj-TDPS), and one-step DPS (ODPS). Im-TDPS first samples
spectral images via DPS, then performs image-based MD; proj-TDPS applies
projection-based MD before sampling material images via DPS; ODPS directly
samples material images from measurement data. Results show ODPS outperforms
im-TDPS and proj-TDPS, producing sharper, noise-free, and crosstalk-free
images. Additionally, we propose a hybrid ODPS method integrating DM priors
with variational regularizers to handle materials absent from the training
dataset. This approach enhances material reconstruction quality over standard
variational methods.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 16:21:16 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 18:58:04 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Vazia",
"Corentin",
""
],
[
"Dassow",
"Thore",
""
],
[
"Bousse",
"Alexandre",
""
],
[
"Froment",
"Jacques",
""
],
[
"Vedel",
"Béatrice",
""
],
[
"Vermet",
"Franck",
""
],
[
"Perelli",
"Alessandro",
""
],
[
"Tasu",
"Jean-Pierre",
""
],
[
"Visvikis",
"Dimitris",
""
]
] | TITLE: Material Decomposition in Photon-Counting Computed Tomography with
Diffusion Models: Comparative Study and Hybridization with Variational
Regularizers
ABSTRACT: Photon-counting computed tomography (PCCT) enables spectral imaging and
material decomposition (MD) but often suffers from low signal-to-noise ratios
due to constraints like low photon counts and sparse-view settings. Traditional
variational methods depend heavily on handcrafted regularizers, while AI-based
approaches, particularly convolutional neural networks (CNNs), have become
state-of-the-art. More recently, diffusion models (DMs) have gained prominence
in generative modeling by learning distribution functions, which can serve as
priors for inverse problems. This work explores DMs as regularizers for MD
tasks in PCCT using diffusion posterior sampling (DPS). We evaluate three
DPS-based approaches: image-domain two-step DPS (im-TDPS), projection-domain
two-step DPS (proj-TDPS), and one-step DPS (ODPS). Im-TDPS first samples
spectral images via DPS, then performs image-based MD; proj-TDPS applies
projection-based MD before sampling material images via DPS; ODPS directly
samples material images from measurement data. Results show ODPS outperforms
im-TDPS and proj-TDPS, producing sharper, noise-free, and crosstalk-free
images. Additionally, we propose a hybrid ODPS method integrating DM priors
with variational regularizers to handle materials absent from the training
dataset. This approach enhances material reconstruction quality over standard
variational methods.
|
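The abstract above describes diffusion posterior sampling (DPS) only at a high level. As a rough, hedged illustration of the general DPS idea (not the paper's im-TDPS/proj-TDPS/ODPS implementations), the sketch below shows one reverse-diffusion step augmented with a measurement-consistency gradient. The forward operator `A`, the noise schedule `alpha_bar`, the score model interface, and the step size `zeta` are all placeholder assumptions.

```python
import torch

def dps_step(x_t, t, score_model, A, y, alpha_bar, zeta=1.0):
    """One illustrative DPS update (assumes t > 0): denoise with the diffusion
    prior, then nudge the sample toward measurement consistency ||y - A(x0_hat)||.
    `score_model(x_t, t)` is assumed to predict the noise eps_theta."""
    x_t = x_t.detach().requires_grad_(True)

    eps = score_model(x_t, t)                         # predicted noise
    a_bar = alpha_bar[t]
    x0_hat = (x_t - torch.sqrt(1 - a_bar) * eps) / torch.sqrt(a_bar)  # Tweedie estimate

    # data-consistency term evaluated on the posterior-mean estimate
    residual = torch.linalg.vector_norm(y - A(x0_hat))
    grad = torch.autograd.grad(residual, x_t)[0]

    # simplified deterministic (DDIM-style) prior step toward t-1
    x_prev = torch.sqrt(alpha_bar[t - 1]) * x0_hat + torch.sqrt(1 - alpha_bar[t - 1]) * eps

    return x_prev - zeta * grad                       # measurement-guided sample
```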
2503.17794 | Ketan Suhaas Saichandran | Ketan Suhaas Saichandran, Xavier Thomas, Prakhar Kaushik, Deepti
Ghadiyaram | Progressive Prompt Detailing for Improved Alignment in Text-to-Image
Generative Models | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Text-to-image generative models often struggle with long prompts detailing
complex scenes, diverse objects with distinct visual characteristics and
spatial relationships. In this work, we propose SCoPE (Scheduled interpolation
of Coarse-to-fine Prompt Embeddings), a training-free method to improve
text-to-image alignment by progressively refining the input prompt in a
coarse-to-fine-grained manner. Given a detailed input prompt, we first
decompose it into multiple sub-prompts which evolve from describing broad scene
layout to highly intricate details. During inference, we interpolate between
these sub-prompts and thus progressively introduce finer-grained details into
the generated image. Our training-free plug-and-play approach significantly
enhances prompt alignment, achieving an average improvement of up to +4% in
Visual Question Answering (VQA) scores over the Stable Diffusion baselines on
85% of the prompts from the GenAI-Bench dataset.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 15:05:21 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 02:03:32 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Saichandran",
"Ketan Suhaas",
""
],
[
"Thomas",
"Xavier",
""
],
[
"Kaushik",
"Prakhar",
""
],
[
"Ghadiyaram",
"Deepti",
""
]
] | TITLE: Progressive Prompt Detailing for Improved Alignment in Text-to-Image
Generative Models
ABSTRACT: Text-to-image generative models often struggle with long prompts detailing
complex scenes, diverse objects with distinct visual characteristics and
spatial relationships. In this work, we propose SCoPE (Scheduled interpolation
of Coarse-to-fine Prompt Embeddings), a training-free method to improve
text-to-image alignment by progressively refining the input prompt in a
coarse-to-fine-grained manner. Given a detailed input prompt, we first
decompose it into multiple sub-prompts which evolve from describing broad scene
layout to highly intricate details. During inference, we interpolate between
these sub-prompts and thus progressively introduce finer-grained details into
the generated image. Our training-free plug-and-play approach significantly
enhances prompt alignment, achieving an average improvement of up to +4% in
Visual Question Answering (VQA) scores over the Stable Diffusion baselines on
85% of the prompts from the GenAI-Bench dataset.
|
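A minimal sketch of the coarse-to-fine prompt-embedding interpolation described in the SCoPE abstract above, assuming a Stable-Diffusion-style pipeline where the text conditioning can be swapped per denoising step. The linear schedule and the way sub-prompts are encoded are illustrative assumptions, not the paper's exact SCoPE schedule.

```python
import torch

def interpolated_prompt_embedding(sub_prompt_embs, step, num_steps):
    """Blend a list of text embeddings (ordered coarse -> fine) for a given
    denoising step: early (high-noise) steps lean on the coarse scene layout,
    later steps progressively mix in finer-grained detail."""
    n = len(sub_prompt_embs)
    # map step in [0, num_steps-1] to a fractional position in [0, n-1]
    pos = (step / max(num_steps - 1, 1)) * (n - 1)
    lo, hi = int(pos), min(int(pos) + 1, n - 1)
    w = pos - lo
    return (1.0 - w) * sub_prompt_embs[lo] + w * sub_prompt_embs[hi]

# usage sketch: pass the blended embedding to the denoiser at each step
# for step in range(num_steps):
#     cond = interpolated_prompt_embedding(embs, step, num_steps)
#     noise_pred = unet(latents, t, encoder_hidden_states=cond).sample
```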
2503.18175 | Amanpreet Singh Saimbhi | Amanpreet Singh Saimbhi | Enhancing Software Vulnerability Detection Using Code Property Graphs
and Convolutional Neural Networks | null | 2025 International Conference on Computational, Communication and
Information Technology (ICCCIT), Indore, India, 2025, pp. 435-440 | 10.1109/ICCCIT62592.2025.10928033 | null | cs.SE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing complexity of modern software systems has led to a rise in
vulnerabilities that malicious actors can exploit. Traditional methods of
vulnerability detection, such as static and dynamic analysis, have limitations
in scalability and automation. This paper proposes a novel approach to
detecting software vulnerabilities using a combination of code property graphs
and machine learning techniques. By leveraging code property graphs, which
integrate abstract syntax trees, control flow graphs, and program dependency
graphs, we achieve a detailed representation of software code that enhances the
accuracy and granularity of vulnerability detection. We introduce various
neural network models, including convolutional neural networks adapted for
graph data, to process these representations. Our approach provides a scalable
and automated solution for vulnerability detection, addressing the shortcomings
of existing methods. We also present a newly generated dataset labeled with
function-level vulnerability types sourced from open-source repositories. Our
contributions include a methodology for transforming software code into code
property graphs, the implementation of a convolutional neural network model for
graph data, and the creation of a comprehensive dataset for training and
evaluation. This work lays the foundation for more effective and efficient
vulnerability detection in complex software systems.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 19:12:07 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Saimbhi",
"Amanpreet Singh",
""
]
] | TITLE: Enhancing Software Vulnerability Detection Using Code Property Graphs
and Convolutional Neural Networks
ABSTRACT: The increasing complexity of modern software systems has led to a rise in
vulnerabilities that malicious actors can exploit. Traditional methods of
vulnerability detection, such as static and dynamic analysis, have limitations
in scalability and automation. This paper proposes a novel approach to
detecting software vulnerabilities using a combination of code property graphs
and machine learning techniques. By leveraging code property graphs, which
integrate abstract syntax trees, control flow graphs, and program dependency
graphs, we achieve a detailed representation of software code that enhances the
accuracy and granularity of vulnerability detection. We introduce various
neural network models, including convolutional neural networks adapted for
graph data, to process these representations. Our approach provides a scalable
and automated solution for vulnerability detection, addressing the shortcomings
of existing methods. We also present a newly generated dataset labeled with
function-level vulnerability types sourced from open-source repositories. Our
contributions include a methodology for transforming software code into code
property graphs, the implementation of a convolutional neural network model for
graph data, and the creation of a comprehensive dataset for training and
evaluation. This work lays the foundation for more effective and efficient
vulnerability detection in complex software systems.
|
2503.18296 | Mengya Xu | Mengya Xu, Zhongzhen Huang, Jie Zhang, Xiaofan Zhang, and Qi Dou | Surgical Action Planning with Large Language Models | 10 pages,4 figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In robot-assisted minimally invasive surgery, we introduce the Surgical
Action Planning (SAP) task, which generates future action plans from visual
inputs to address the absence of intraoperative predictive planning in current
intelligent applications. SAP shows great potential for enhancing
intraoperative guidance and automating procedures. However, it faces challenges
such as understanding instrument-action relationships and tracking surgical
progress. Large Language Models (LLMs) show promise in understanding surgical
video content but remain underexplored for predictive decision-making in SAP,
as they focus mainly on retrospective analysis. Challenges like data privacy,
computational demands, and modality-specific constraints further highlight
significant research gaps. To tackle these challenges, we introduce LLM-SAP, a
Large Language Models-based Surgical Action Planning framework that predicts
future actions and generates text responses by interpreting natural language
prompts of surgical goals. The text responses potentially support surgical
education, intraoperative decision-making, procedure documentation, and skill
analysis. LLM-SAP integrates two novel modules: the Near-History Focus Memory
Module (NHF-MM) for modeling historical states and the prompts factory for
action planning. We evaluate LLM-SAP on our constructed CholecT50-SAP dataset
using models like Qwen2.5 and Qwen2-VL, demonstrating its effectiveness in
next-action prediction. Pre-trained LLMs are tested in a zero-shot setting, and
supervised fine-tuning (SFT) with LoRA is implemented. Our experiments show
that Qwen2.5-72B-SFT surpasses Qwen2.5-72B with a 19.3% higher accuracy.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 03:02:04 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 15:29:24 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Xu",
"Mengya",
""
],
[
"Huang",
"Zhongzhen",
""
],
[
"Zhang",
"Jie",
""
],
[
"Zhang",
"Xiaofan",
""
],
[
"Dou",
"Qi",
""
]
] | TITLE: Surgical Action Planning with Large Language Models
ABSTRACT: In robot-assisted minimally invasive surgery, we introduce the Surgical
Action Planning (SAP) task, which generates future action plans from visual
inputs to address the absence of intraoperative predictive planning in current
intelligent applications. SAP shows great potential for enhancing
intraoperative guidance and automating procedures. However, it faces challenges
such as understanding instrument-action relationships and tracking surgical
progress. Large Language Models (LLMs) show promise in understanding surgical
video content but remain underexplored for predictive decision-making in SAP,
as they focus mainly on retrospective analysis. Challenges like data privacy,
computational demands, and modality-specific constraints further highlight
significant research gaps. To tackle these challenges, we introduce LLM-SAP, a
Large Language Models-based Surgical Action Planning framework that predicts
future actions and generates text responses by interpreting natural language
prompts of surgical goals. The text responses potentially support surgical
education, intraoperative decision-making, procedure documentation, and skill
analysis. LLM-SAP integrates two novel modules: the Near-History Focus Memory
Module (NHF-MM) for modeling historical states and the prompts factory for
action planning. We evaluate LLM-SAP on our constructed CholecT50-SAP dataset
using models like Qwen2.5 and Qwen2-VL, demonstrating its effectiveness in
next-action prediction. Pre-trained LLMs are tested in a zero-shot setting, and
supervised fine-tuning (SFT) with LoRA is implemented. Our experiments show
that Qwen2.5-72B-SFT surpasses Qwen2.5-72B with a 19.3% higher accuracy.
|
2503.18589 | Guillem Capellera Font | Guillem Capellera, Antonio Rubio, Luis Ferraz, Antonio Agudo | Unified Uncertainty-Aware Diffusion for Multi-Agent Trajectory Modeling | Accepted to CVPR 2025 conference | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Multi-agent trajectory modeling has primarily focused on forecasting future
states, often overlooking broader tasks like trajectory completion, which are
crucial for real-world applications such as correcting tracking data. Existing
methods also generally predict agents' states without offering any state-wise
measure of uncertainty. Moreover, popular multi-modal sampling methods lack any
error probability estimates for each generated scene under the same prior
observations, making it difficult to rank the predictions during inference
time. We introduce U2Diff, a \textbf{unified} diffusion model designed to
handle trajectory completion while providing state-wise \textbf{uncertainty}
estimates jointly. This uncertainty estimation is achieved by augmenting the
simple denoising loss with the negative log-likelihood of the predicted noise
and propagating latent space uncertainty to the real state space. Additionally,
we incorporate a Rank Neural Network in post-processing to enable \textbf{error
probability} estimation for each generated mode, demonstrating a strong
correlation with the error relative to ground truth. Our method outperforms the
state-of-the-art solutions in trajectory completion and forecasting across four
challenging sports datasets (NBA, Basketball-U, Football-U, Soccer-U),
highlighting the effectiveness of uncertainty and error probability estimation.
Video at https://youtu.be/ngw4D4eJToE
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 11:46:58 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 11:06:03 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Capellera",
"Guillem",
""
],
[
"Rubio",
"Antonio",
""
],
[
"Ferraz",
"Luis",
""
],
[
"Agudo",
"Antonio",
""
]
] | TITLE: Unified Uncertainty-Aware Diffusion for Multi-Agent Trajectory Modeling
ABSTRACT: Multi-agent trajectory modeling has primarily focused on forecasting future
states, often overlooking broader tasks like trajectory completion, which are
crucial for real-world applications such as correcting tracking data. Existing
methods also generally predict agents' states without offering any state-wise
measure of uncertainty. Moreover, popular multi-modal sampling methods lack any
error probability estimates for each generated scene under the same prior
observations, making it difficult to rank the predictions during inference
time. We introduce U2Diff, a \textbf{unified} diffusion model designed to
handle trajectory completion while providing state-wise \textbf{uncertainty}
estimates jointly. This uncertainty estimation is achieved by augmenting the
simple denoising loss with the negative log-likelihood of the predicted noise
and propagating latent space uncertainty to the real state space. Additionally,
we incorporate a Rank Neural Network in post-processing to enable \textbf{error
probability} estimation for each generated mode, demonstrating a strong
correlation with the error relative to ground truth. Our method outperforms the
state-of-the-art solutions in trajectory completion and forecasting across four
challenging sports datasets (NBA, Basketball-U, Football-U, Soccer-U),
highlighting the effectiveness of uncertainty and error probability estimation.
Video at https://youtu.be/ngw4D4eJToE
|
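The U2Diff abstract above states that the simple denoising loss is augmented with the negative log-likelihood of the predicted noise. One plausible reading (an assumption, not the authors' exact formulation) is a heteroscedastic Gaussian NLL in which the network also predicts a per-state log-variance that doubles as the state-wise uncertainty estimate:

```python
import torch

def uncertainty_aware_denoising_loss(eps_pred, log_var, eps_true):
    """Simple denoising MSE plus a Gaussian negative log-likelihood term.
    `log_var` is the network's predicted per-state log-variance, used here
    as a latent-space uncertainty estimate (illustrative assumption)."""
    mse = (eps_pred - eps_true) ** 2
    nll = 0.5 * (mse * torch.exp(-log_var) + log_var)  # Gaussian NLL up to a constant
    return mse.mean() + nll.mean()
```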
2503.18959 | Antoine Lemasson | M. Rejmund and A. Lemasson | Seven-dimensional Trajectory Reconstruction for VAMOS++ | Accepted for publication in Nucl. Instr. and Methods A | Nucl. Instr. and Method A 1076, 170445 (2025) | 10.1016/j.nima.2025.170445 | null | physics.ins-det nucl-ex | http://creativecommons.org/licenses/by/4.0/ | The VAMOS++ magnetic spectrometer is characterized by a large angular and
momentum acceptance and highly non-linear ion optics properties requiring the
use of software ion trajectory reconstruction methods to measure the ion
magnetic rigidity and the trajectory length between the beam interaction point
and the focal plane of the spectrometer. Standard measurements, involving the
use of a thin target and a narrow beam spot, allow the assumption of a
point-like beam interaction volume for ion trajectory reconstruction. However,
this represents a limitation for the case of large beam spot size or extended
gaseous target volume. To overcome this restriction, a seven-dimensional
reconstruction method incorporating the reaction position coordinates was
developed, making use of artificial deep neural networks. The neural networks
were trained on a theoretical dataset generated by standard magnetic
ray-tracing code. Future application to a voluminous gas target, necessitating
the explicit inclusion of the three-dimensional position of the beam
interaction point within the target in the trajectory reconstruction method, is
discussed. The performance of the new method is presented along with a
comparison of the mass resolution obtained with a previously reported model for the
case of thin-target experimental data.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 13:14:26 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Rejmund",
"M.",
""
],
[
"Lemasson",
"A.",
""
]
] | TITLE: Seven-dimensional Trajectory Reconstruction for VAMOS++
ABSTRACT: The VAMOS++ magnetic spectrometer is characterized by a large angular and
momentum acceptance and highly non-linear ion optics properties requiring the
use of software ion trajectory reconstruction methods to measure the ion
magnetic rigidity and the trajectory length between the beam interaction point
and the focal plane of the spectrometer. Standard measurements, involving the
use of a thin target and a narrow beam spot, allow the assumption of a
point-like beam interaction volume for ion trajectory reconstruction. However,
this represents a limitation for the case of large beam spot size or extended
gaseous target volume. To overcome this restriction, a seven-dimensional
reconstruction method incorporating the reaction position coordinates was
developed, making use of artificial deep neural networks. The neural networks
were trained on a theoretical dataset generated by standard magnetic
ray-tracing code. Future application to a voluminous gas target, necessitating
the explicit inclusion of the three-dimensional position of the beam
interaction point within the target in the trajectory reconstruction method, is
discussed. The performance of the new method is presented along with a
comparison of the mass resolution obtained with a previously reported model for the
case of thin-target experimental data.
|
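As a rough sketch of the reconstruction network described in the VAMOS++ abstract above: a small fully connected network mapping seven input coordinates (assumed here to be focal-plane position/angle measurements plus the three-dimensional reaction position) to magnetic rigidity and trajectory length, trained on ray-tracing-generated pairs. The layer sizes, input ordering, and training loop are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryReconstructionNet(nn.Module):
    """Maps 7 measured/assumed coordinates to (magnetic rigidity, path length)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(7, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),        # outputs: B-rho and trajectory length
        )

    def forward(self, x):
        return self.net(x)

# training sketch on a simulated (ray-traced) dataset of (inputs, targets) pairs
model = TrajectoryReconstructionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inputs = torch.randn(1024, 7)    # placeholder for ray-tracing input samples
targets = torch.randn(1024, 2)   # placeholder for (B-rho, path length) targets
for _ in range(10):
    loss = nn.functional.mse_loss(model(inputs), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```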
2503.18985 | Xuan Liu | Xuan Liu, Xiaobin Chang | LoRA Subtraction for Drift-Resistant Space in Exemplar-Free Continual
Learning | Accepted to CVPR 2025 | null | null | null | cs.LG cs.AI cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In continual learning (CL), catastrophic forgetting often arises due to
feature drift. This challenge is particularly prominent in the exemplar-free
continual learning (EFCL) setting, where samples from previous tasks cannot be
retained, making it difficult to preserve prior knowledge. To address this
issue, some EFCL methods aim to identify feature spaces that minimize the
impact on previous tasks while accommodating new ones. However, they rely on
static features or outdated statistics stored from old tasks, which prevents
them from capturing the dynamic evolution of the feature space in CL, leading
to performance degradation over time. In this paper, we introduce the
Drift-Resistant Space (DRS), which effectively handles feature drifts without
requiring explicit feature modeling or the storage of previous tasks. A novel
parameter-efficient fine-tuning approach called Low-Rank Adaptation Subtraction
(LoRA-) is proposed to develop the DRS. This method subtracts the LoRA weights
of old tasks from the initial pre-trained weight before processing new task
data to establish the DRS for model training. Therefore, LoRA- enhances
stability, improves efficiency, and simplifies implementation. Furthermore,
stabilizing feature drifts allows for better plasticity by learning with a
triplet loss. Our method consistently achieves state-of-the-art results,
especially for long task sequences, across multiple datasets.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 07:38:53 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 12:47:09 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Xuan",
""
],
[
"Chang",
"Xiaobin",
""
]
] | TITLE: LoRA Subtraction for Drift-Resistant Space in Exemplar-Free Continual
Learning
ABSTRACT: In continual learning (CL), catastrophic forgetting often arises due to
feature drift. This challenge is particularly prominent in the exemplar-free
continual learning (EFCL) setting, where samples from previous tasks cannot be
retained, making it difficult to preserve prior knowledge. To address this
issue, some EFCL methods aim to identify feature spaces that minimize the
impact on previous tasks while accommodating new ones. However, they rely on
static features or outdated statistics stored from old tasks, which prevents
them from capturing the dynamic evolution of the feature space in CL, leading
to performance degradation over time. In this paper, we introduce the
Drift-Resistant Space (DRS), which effectively handles feature drifts without
requiring explicit feature modeling or the storage of previous tasks. A novel
parameter-efficient fine-tuning approach called Low-Rank Adaptation Subtraction
(LoRA-) is proposed to develop the DRS. This method subtracts the LoRA weights
of old tasks from the initial pre-trained weight before processing new task
data to establish the DRS for model training. Therefore, LoRA- enhances
stability, improves efficiency, and simplifies implementation. Furthermore,
stabilizing feature drifts allows for better plasticity by learning with a
triplet loss. Our method consistently achieves state-of-the-art results,
especially for long task sequences, across multiple datasets.
|
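A minimal sketch of the LoRA subtraction (LoRA-) idea described in the abstract above, assuming standard LoRA factors stored per old task for a given weight matrix; the scaling convention and which layers are affected are assumptions, not the paper's exact recipe.

```python
import torch

def drift_resistant_weight(W0, old_task_loras, alpha=1.0):
    """Subtract the accumulated LoRA updates of old tasks from the initial
    pre-trained weight W0 to obtain the weight used when learning a new task.
    `old_task_loras` is a list of (A, B) low-rank factors with delta_W = B @ A."""
    W = W0.clone()
    for A, B in old_task_loras:
        W -= alpha * (B @ A)
    return W

# usage sketch: W0 is (out, in); A is (r, in) and B is (out, r) for each old task
# W_drs = drift_resistant_weight(layer.weight.data, stored_loras)
```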
2503.19173 | Robert Nerem | Robert R. Nerem, Samantha Chen, Sanjoy Dasgupta, and Yusu Wang | Graph neural networks extrapolate out-of-distribution for shortest paths | null | null | null | null | cs.LG cs.DS | http://creativecommons.org/licenses/by/4.0/ | Neural networks (NNs), despite their success and wide adoption, still
struggle to extrapolate out-of-distribution (OOD), i.e., to inputs that are not
well-represented by their training dataset. Addressing the OOD generalization
gap is crucial when models are deployed in environments significantly different
from the training set, such as applying Graph Neural Networks (GNNs) trained on
small graphs to large, real-world graphs. One promising approach for achieving
robust OOD generalization is the framework of neural algorithmic alignment,
which incorporates ideas from classical algorithms by designing neural
architectures that resemble specific algorithmic paradigms (e.g. dynamic
programming). The hope is that trained models of this form would have superior
OOD capabilities, in much the same way that classical algorithms work for all
instances. We rigorously analyze the role of algorithmic alignment in achieving
OOD generalization, focusing on graph neural networks (GNNs) applied to the
canonical shortest path problem. We prove that GNNs, trained to minimize a
sparsity-regularized loss over a small set of shortest path instances, exactly
implement the Bellman-Ford (BF) algorithm for shortest paths. In fact, if a GNN
minimizes this loss within an error of $\epsilon$, it implements the BF
algorithm with an error of $O(\epsilon)$. Consequently, despite limited
training data, these GNNs are guaranteed to extrapolate to arbitrary
shortest-path problems, including instances of any size. Our empirical results
support our theory by showing that NNs trained by gradient descent are able to
minimize this loss and extrapolate in practice.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 21:52:05 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 00:46:30 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Nerem",
"Robert R.",
""
],
[
"Chen",
"Samantha",
""
],
[
"Dasgupta",
"Sanjoy",
""
],
[
"Wang",
"Yusu",
""
]
] | TITLE: Graph neural networks extrapolate out-of-distribution for shortest paths
ABSTRACT: Neural networks (NNs), despite their success and wide adoption, still
struggle to extrapolate out-of-distribution (OOD), i.e., to inputs that are not
well-represented by their training dataset. Addressing the OOD generalization
gap is crucial when models are deployed in environments significantly different
from the training set, such as applying Graph Neural Networks (GNNs) trained on
small graphs to large, real-world graphs. One promising approach for achieving
robust OOD generalization is the framework of neural algorithmic alignment,
which incorporates ideas from classical algorithms by designing neural
architectures that resemble specific algorithmic paradigms (e.g. dynamic
programming). The hope is that trained models of this form would have superior
OOD capabilities, in much the same way that classical algorithms work for all
instances. We rigorously analyze the role of algorithmic alignment in achieving
OOD generalization, focusing on graph neural networks (GNNs) applied to the
canonical shortest path problem. We prove that GNNs, trained to minimize a
sparsity-regularized loss over a small set of shortest path instances, exactly
implement the Bellman-Ford (BF) algorithm for shortest paths. In fact, if a GNN
minimizes this loss within an error of $\epsilon$, it implements the BF
algorithm with an error of $O(\epsilon)$. Consequently, despite limited
training data, these GNNs are guaranteed to extrapolate to arbitrary
shortest-path problems, including instances of any size. Our empirical results
support our theory by showing that NNs trained by gradient descent are able to
minimize this loss and extrapolate in practice.
|
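To illustrate the algorithmic alignment the abstract above refers to, here is a toy message-passing layer whose update mirrors one Bellman-Ford relaxation step, d_v <- min(d_v, min_u (d_u + w_uv)). This is a hand-written illustration of the alignment, not the paper's trained GNN.

```python
import torch

def bellman_ford_layer(dist, edge_index, edge_weight):
    """One relaxation step over a graph.
    dist: (num_nodes,) current distance estimates.
    edge_index: (2, num_edges) int64 tensor of (source, target) node indices.
    edge_weight: (num_edges,) non-negative edge weights."""
    src, dst = edge_index
    candidate = dist[src] + edge_weight   # relaxed distance via each edge
    new_dist = dist.clone()
    # take the minimum over incoming candidates and the existing estimate
    new_dist.scatter_reduce_(0, dst, candidate, reduce="amin")
    return new_dist

# usage sketch: num_nodes - 1 layers reach the shortest-path fixed point
# for _ in range(num_nodes - 1):
#     dist = bellman_ford_layer(dist, edge_index, edge_weight)
```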
2503.19367 | Zizhi Chen | Zizhi Chen, Minghao Han, Xukun Zhang, Shuwei Ma, Tao Liu, Xing Wei,
Lihua Zhang | VGAT: A Cancer Survival Analysis Framework Transitioning from Generative
Visual Question Answering to Genomic Reconstruction | Accepted by ICME2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal learning combining pathology images and genomic sequences enhances
cancer survival analysis but faces clinical implementation barriers due to
limited access to genomic sequencing in under-resourced regions. To enable
survival prediction using only whole-slide images (WSI), we propose the
Visual-Genomic Answering-Guided Transformer (VGAT), a framework integrating
Visual Question Answering (VQA) techniques for genomic modality reconstruction.
By adapting VQA's text feature extraction approach, we derive stable genomic
representations that circumvent dimensionality challenges in raw genomic data.
Simultaneously, a cluster-based visual prompt module selectively enhances
discriminative WSI patches, addressing noise from unfiltered image regions.
Evaluated across five TCGA datasets, VGAT outperforms existing WSI-only
methods, demonstrating the viability of genomic-informed inference without
sequencing. This approach bridges multimodal research and clinical feasibility
in resource-constrained settings. The code link is
https://github.com/CZZZZZZZZZZZZZZZZZ/VGAT.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 05:48:31 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 12:05:53 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Zizhi",
""
],
[
"Han",
"Minghao",
""
],
[
"Zhang",
"Xukun",
""
],
[
"Ma",
"Shuwei",
""
],
[
"Liu",
"Tao",
""
],
[
"Wei",
"Xing",
""
],
[
"Zhang",
"Lihua",
""
]
] | TITLE: VGAT: A Cancer Survival Analysis Framework Transitioning from Generative
Visual Question Answering to Genomic Reconstruction
ABSTRACT: Multimodal learning combining pathology images and genomic sequences enhances
cancer survival analysis but faces clinical implementation barriers due to
limited access to genomic sequencing in under-resourced regions. To enable
survival prediction using only whole-slide images (WSI), we propose the
Visual-Genomic Answering-Guided Transformer (VGAT), a framework integrating
Visual Question Answering (VQA) techniques for genomic modality reconstruction.
By adapting VQA's text feature extraction approach, we derive stable genomic
representations that circumvent dimensionality challenges in raw genomic data.
Simultaneously, a cluster-based visual prompt module selectively enhances
discriminative WSI patches, addressing noise from unfiltered image regions.
Evaluated across five TCGA datasets, VGAT outperforms existing WSI-only
methods, demonstrating the viability of genomic-informed inference without
sequencing. This approach bridges multimodal research and clinical feasibility
in resource-constrained settings. The code link is
https://github.com/CZZZZZZZZZZZZZZZZZ/VGAT.
|
2503.19612 | Yunhao Tang | Yunhao Tang, Taco Cohen, David W. Zhang, Michal Valko, R\'emi Munos | RL-finetuning LLMs from on- and off-policy data with a single algorithm | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel reinforcement learning algorithm (AGRO, for
Any-Generation Reward Optimization) for fine-tuning large-language models. AGRO
leverages the concept of generation consistency, which states that the optimal
policy satisfies the notion of consistency across any possible generation of
the model. We derive algorithms that find optimal solutions via the
sample-based policy gradient and provide theoretical guarantees on their
convergence. Our experiments demonstrate the effectiveness of AGRO in both
on-policy and off-policy settings, showing improved performance on the
mathematical reasoning dataset over baseline algorithms.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 12:52:38 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 18:02:54 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Tang",
"Yunhao",
""
],
[
"Cohen",
"Taco",
""
],
[
"Zhang",
"David W.",
""
],
[
"Valko",
"Michal",
""
],
[
"Munos",
"Rémi",
""
]
] | TITLE: RL-finetuning LLMs from on- and off-policy data with a single algorithm
ABSTRACT: We introduce a novel reinforcement learning algorithm (AGRO, for
Any-Generation Reward Optimization) for fine-tuning large-language models. AGRO
leverages the concept of generation consistency, which states that the optimal
policy satisfies the notion of consistency across any possible generation of
the model. We derive algorithms that find optimal solutions via the
sample-based policy gradient and provide theoretical guarantees on their
convergence. Our experiments demonstrate the effectiveness of AGRO in both
on-policy and off-policy settings, showing improved performance on the
mathematical reasoning dataset over baseline algorithms.
|
2503.19653 | Yabin Wang | Yabin Wang, Zhiwu Huang, Xiaopeng Hong | OpenSDI: Spotting Diffusion-Generated Images in the Open World | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper identifies OpenSDI, a challenge for spotting diffusion-generated
images in open-world settings. In response to this challenge, we define a new
benchmark, the OpenSDI dataset (OpenSDID), which stands out from existing
datasets due to its diverse use of large vision-language models that simulate
open-world diffusion-based manipulations. Another outstanding feature of
OpenSDID is its inclusion of both detection and localization tasks for images
manipulated globally and locally by diffusion models. To address the OpenSDI
challenge, we propose a Synergizing Pretrained Models (SPM) scheme to build up
a mixture of foundation models. This approach exploits a collaboration
mechanism with multiple pretrained foundation models to enhance generalization
in the OpenSDI context, moving beyond traditional training by synergizing
multiple pretrained models through prompting and attending strategies. Building
on this scheme, we introduce MaskCLIP, an SPM-based model that aligns
Contrastive Language-Image Pre-Training (CLIP) with Masked Autoencoder (MAE).
Extensive evaluations on OpenSDID show that MaskCLIP significantly outperforms
current state-of-the-art methods for the OpenSDI challenge, achieving
remarkable relative improvements of 14.23% in IoU (14.11% in F1) and 2.05% in
accuracy (2.38% in F1) compared to the second-best model in localization and
detection tasks, respectively. Our dataset and code are available at
https://github.com/iamwangyabin/OpenSDI.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 13:43:16 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 11:48:54 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Yabin",
""
],
[
"Huang",
"Zhiwu",
""
],
[
"Hong",
"Xiaopeng",
""
]
] | TITLE: OpenSDI: Spotting Diffusion-Generated Images in the Open World
ABSTRACT: This paper identifies OpenSDI, a challenge for spotting diffusion-generated
images in open-world settings. In response to this challenge, we define a new
benchmark, the OpenSDI dataset (OpenSDID), which stands out from existing
datasets due to its diverse use of large vision-language models that simulate
open-world diffusion-based manipulations. Another outstanding feature of
OpenSDID is its inclusion of both detection and localization tasks for images
manipulated globally and locally by diffusion models. To address the OpenSDI
challenge, we propose a Synergizing Pretrained Models (SPM) scheme to build up
a mixture of foundation models. This approach exploits a collaboration
mechanism with multiple pretrained foundation models to enhance generalization
in the OpenSDI context, moving beyond traditional training by synergizing
multiple pretrained models through prompting and attending strategies. Building
on this scheme, we introduce MaskCLIP, an SPM-based model that aligns
Contrastive Language-Image Pre-Training (CLIP) with Masked Autoencoder (MAE).
Extensive evaluations on OpenSDID show that MaskCLIP significantly outperforms
current state-of-the-art methods for the OpenSDI challenge, achieving
remarkable relative improvements of 14.23% in IoU (14.11% in F1) and 2.05% in
accuracy (2.38% in F1) compared to the second-best model in localization and
detection tasks, respectively. Our dataset and code are available at
https://github.com/iamwangyabin/OpenSDI.
|
2503.19654 | Mehdi Moshtaghi | Mehdi Moshtaghi, Siavash H. Khajavi, Joni Pajarinen | RGB-Th-Bench: A Dense benchmark for Visual-Thermal Understanding of
Vision Language Models | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | We introduce RGB-Th-Bench, the first benchmark designed to evaluate the
ability of Vision-Language Models (VLMs) to comprehend RGB-Thermal image pairs.
While VLMs have demonstrated remarkable progress in visual reasoning and
multimodal understanding, their evaluation has been predominantly limited to
RGB-based benchmarks, leaving a critical gap in assessing their capabilities in
infrared vision tasks. Existing visible-infrared datasets are either
task-specific or lack high-quality annotations necessary for rigorous model
evaluation. To address these limitations, RGB-Th-Bench provides a comprehensive
evaluation framework covering 14 distinct skill dimensions, with a total of
1,600+ expert-annotated Yes/No questions. The benchmark employs two accuracy
metrics: a standard question-level accuracy and a stricter skill-level
accuracy, which evaluates model robustness across multiple questions within
each skill dimension. This design ensures a thorough assessment of model
performance, including resilience to adversarial and hallucinated responses. We
conduct extensive evaluations on 19 state-of-the-art VLMs, revealing
significant performance gaps in RGB-Thermal understanding. Our results show
that even the strongest models struggle with thermal image comprehension, with
performance heavily constrained by their RGB-based capabilities. Additionally,
the lack of large-scale application-specific and expert-annotated
thermal-caption-pair datasets in pre-training is an important reason for the
observed performance gap. RGB-Th-Bench highlights the urgent need for further
advancements in multimodal learning to bridge the gap between visible and
thermal image understanding. The dataset is available through this link, and
the evaluation code will also be made publicly available.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 13:43:47 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 10:11:22 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Mar 2025 15:08:23 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Moshtaghi",
"Mehdi",
""
],
[
"Khajavi",
"Siavash H.",
""
],
[
"Pajarinen",
"Joni",
""
]
] | TITLE: RGB-Th-Bench: A Dense benchmark for Visual-Thermal Understanding of
Vision Language Models
ABSTRACT: We introduce RGB-Th-Bench, the first benchmark designed to evaluate the
ability of Vision-Language Models (VLMs) to comprehend RGB-Thermal image pairs.
While VLMs have demonstrated remarkable progress in visual reasoning and
multimodal understanding, their evaluation has been predominantly limited to
RGB-based benchmarks, leaving a critical gap in assessing their capabilities in
infrared vision tasks. Existing visible-infrared datasets are either
task-specific or lack high-quality annotations necessary for rigorous model
evaluation. To address these limitations, RGB-Th-Bench provides a comprehensive
evaluation framework covering 14 distinct skill dimensions, with a total of
1,600+ expert-annotated Yes/No questions. The benchmark employs two accuracy
metrics: a standard question-level accuracy and a stricter skill-level
accuracy, which evaluates model robustness across multiple questions within
each skill dimension. This design ensures a thorough assessment of model
performance, including resilience to adversarial and hallucinated responses. We
conduct extensive evaluations on 19 state-of-the-art VLMs, revealing
significant performance gaps in RGB-Thermal understanding. Our results show
that even the strongest models struggle with thermal image comprehension, with
performance heavily constrained by their RGB-based capabilities. Additionally,
the lack of large-scale application-specific and expert-annotated
thermal-caption-pair datasets in pre-training is an important reason for the
observed performance gap. RGB-Th-Bench highlights the urgent need for further
advancements in multimodal learning to bridge the gap between visible and
thermal image understanding. The dataset is available through this link, and
the evaluation code will also be made publicly available.
|
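The RGB-Th-Bench abstract above contrasts question-level accuracy with a stricter skill-level accuracy computed across the questions within each skill dimension. One plausible reading (an assumption about the exact aggregation rule) is that a skill counts as passed only when all of its Yes/No questions are answered correctly:

```python
def question_and_skill_accuracy(results):
    """results: dict mapping skill name -> list of (prediction, gold) Yes/No pairs."""
    all_pairs = [p for pairs in results.values() for p in pairs]
    question_acc = sum(pred == gold for pred, gold in all_pairs) / len(all_pairs)
    # stricter metric: a skill is correct only if every question in it is correct
    skill_acc = sum(
        all(pred == gold for pred, gold in pairs) for pairs in results.values()
    ) / len(results)
    return question_acc, skill_acc

# usage sketch with hypothetical data
# acc_q, acc_s = question_and_skill_accuracy(
#     {"thermal crossing": [("Yes", "Yes"), ("No", "Yes")]}
# )
```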
2503.20084 | Simiao Ren | Simiao Ren, Yao Yao, Kidus Zewde, Zisheng Liang, Tsang (Dennis) Ng,
Ning-Yau Cheng, Xiaoou Zhan, Qinzhe Liu, Yifei Chen, and Hengwei Xu | Can Multi-modal (reasoning) LLMs work as deepfake detectors? | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deepfake detection remains a critical challenge in the era of advanced
generative models, particularly as synthetic media becomes more sophisticated.
In this study, we explore the potential of state-of-the-art multi-modal
(reasoning) large language models (LLMs) for deepfake image detection such as
(OpenAI O1/4o, Gemini thinking Flash 2, Deepseek Janus, Grok 3, llama 3.2, Qwen
2/2.5 VL, Mistral Pixtral, Claude 3.5/3.7 Sonnet). We benchmark the 12 latest
multi-modal LLMs against traditional deepfake detection methods across multiple
datasets, including recently published real-world deepfake imagery. To enhance
performance, we employ prompt tuning and conduct an in-depth analysis of the
models' reasoning pathways to identify key contributing factors in their
decision-making process. Our findings indicate that best multi-modal LLMs
achieve competitive performance with promising generalization ability with zero
shot, even surpass traditional deepfake detection pipelines in
out-of-distribution datasets, while the rest of the LLM families perform
extremely disappointingly, with some worse than random guessing. Furthermore, we
found that newer model versions and reasoning capabilities do not contribute to
performance in such niche tasks as deepfake detection, while model size does help
in some cases. This study highlights the potential of integrating multi-modal
reasoning in future deepfake detection frameworks and provides insights into
model interpretability for robustness in real-world scenarios.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 21:47:29 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 19:19:14 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ren",
"Simiao",
"",
"Dennis"
],
[
"Yao",
"Yao",
"",
"Dennis"
],
[
"Zewde",
"Kidus",
"",
"Dennis"
],
[
"Liang",
"Zisheng",
"",
"Dennis"
],
[
"Tsang",
"",
"",
"Dennis"
],
[
"Ng",
"",
"",
"Dennis"
],
[
"Cheng",
"Ning-Yau",
""
],
[
"Zhan",
"Xiaoou",
""
],
[
"Liu",
"Qinzhe",
""
],
[
"Chen",
"Yifei",
""
],
[
"Xu",
"Hengwei",
""
]
] | TITLE: Can Multi-modal (reasoning) LLMs work as deepfake detectors?
ABSTRACT: Deepfake detection remains a critical challenge in the era of advanced
generative models, particularly as synthetic media becomes more sophisticated.
In this study, we explore the potential of state-of-the-art multi-modal
(reasoning) large language models (LLMs) for deepfake image detection such as
(OpenAI O1/4o, Gemini thinking Flash 2, Deepseek Janus, Grok 3, llama 3.2, Qwen
2/2.5 VL, Mistral Pixtral, Claude 3.5/3.7 Sonnet). We benchmark the 12 latest
multi-modal LLMs against traditional deepfake detection methods across multiple
datasets, including recently published real-world deepfake imagery. To enhance
performance, we employ prompt tuning and conduct an in-depth analysis of the
models' reasoning pathways to identify key contributing factors in their
decision-making process. Our findings indicate that best multi-modal LLMs
achieve competitive performance with promising generalization ability with zero
shot, even surpass traditional deepfake detection pipelines in
out-of-distribution datasets, while the rest of the LLM families perform
extremely disappointingly, with some worse than random guessing. Furthermore, we
found that newer model versions and reasoning capabilities do not contribute to
performance in such niche tasks as deepfake detection, while model size does help
in some cases. This study highlights the potential of integrating multi-modal
reasoning in future deepfake detection frameworks and provides insights into
model interpretability for robustness in real-world scenarios.
|
2503.20294 | Xinghao Wang | Xinghao Wang, Tao Gong, Qi Chu, Bin Liu and Nenghai Yu | Context-Aware Weakly Supervised Image Manipulation Localization with SAM
Refinement | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Malicious image manipulation poses societal risks, increasing the importance
of effective image manipulation detection methods. Recent approaches in image
manipulation detection have largely been driven by fully supervised approaches,
which require labor-intensive pixel-level annotations. Thus, it is essential to
explore weakly supervised image manipulation localization methods that only
require image-level binary labels for training. However, existing weakly
supervised image manipulation methods overlook the importance of edge
information for accurate localization, leading to suboptimal localization
performance. To address this, we propose a Context-Aware Boundary Localization
(CABL) module to aggregate boundary features and learn context-inconsistency
for localizing manipulated areas. Furthermore, by leveraging Class Activation
Mapping (CAM) and Segment Anything Model (SAM), we introduce the CAM-Guided SAM
Refinement (CGSR) module to generate more accurate manipulation localization
maps. By integrating two modules, we present a novel weakly supervised
framework based on a dual-branch Transformer-CNN architecture. Our method
achieves outstanding localization performance across multiple datasets.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 07:35:09 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 04:54:08 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Xinghao",
""
],
[
"Gong",
"Tao",
""
],
[
"Chu",
"Qi",
""
],
[
"Liu",
"Bin",
""
],
[
"Yu",
"Nenghai",
""
]
] | TITLE: Context-Aware Weakly Supervised Image Manipulation Localization with SAM
Refinement
ABSTRACT: Malicious image manipulation poses societal risks, increasing the importance
of effective image manipulation detection methods. Recent approaches in image
manipulation detection have largely been driven by fully supervised approaches,
which require labor-intensive pixel-level annotations. Thus, it is essential to
explore weakly supervised image manipulation localization methods that only
require image-level binary labels for training. However, existing weakly
supervised image manipulation methods overlook the importance of edge
information for accurate localization, leading to suboptimal localization
performance. To address this, we propose a Context-Aware Boundary Localization
(CABL) module to aggregate boundary features and learn context-inconsistency
for localizing manipulated areas. Furthermore, by leveraging Class Activation
Mapping (CAM) and Segment Anything Model (SAM), we introduce the CAM-Guided SAM
Refinement (CGSR) module to generate more accurate manipulation localization
maps. By integrating two modules, we present a novel weakly supervised
framework based on a dual-branch Transformer-CNN architecture. Our method
achieves outstanding localization performance across multiple datasets.
|
2503.20308 | Lee Chae-Yeon | Lee Chae-Yeon, Oh Hyun-Bin, Han EunGi, Kim Sung-Bin, Suekyeong Nam,
Tae-Hyun Oh | Perceptually Accurate 3D Talking Head Generation: New Definitions,
Speech-Mesh Representation, and Evaluation Metrics | CVPR 2025. Project page:
https://perceptual-3d-talking-head.github.io/ | null | null | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in speech-driven 3D talking head generation have made
significant progress in lip synchronization. However, existing models still
struggle to capture the perceptual alignment between varying speech
characteristics and corresponding lip movements. In this work, we claim that
three criteria -- Temporal Synchronization, Lip Readability, and Expressiveness
-- are crucial for achieving perceptually accurate lip movements. Motivated by
our hypothesis that a desirable representation space exists to meet these three
criteria, we introduce a speech-mesh synchronized representation that captures
intricate correspondences between speech signals and 3D face meshes. We found
that our learned representation exhibits desirable characteristics, and we plug
it into existing models as a perceptual loss to better align lip movements to
the given speech. In addition, we utilize this representation as a perceptual
metric and introduce two other physically grounded lip synchronization metrics
to assess how well the generated 3D talking heads align with these three
criteria. Experiments show that training 3D talking head generation models with
our perceptual loss significantly improves all three aspects of perceptually
accurate lip synchronization. Codes and datasets are available at
https://perceptual-3d-talking-head.github.io/.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 08:18:57 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 04:19:30 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 16:08:23 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chae-Yeon",
"Lee",
""
],
[
"Hyun-Bin",
"Oh",
""
],
[
"EunGi",
"Han",
""
],
[
"Sung-Bin",
"Kim",
""
],
[
"Nam",
"Suekyeong",
""
],
[
"Oh",
"Tae-Hyun",
""
]
] | TITLE: Perceptually Accurate 3D Talking Head Generation: New Definitions,
Speech-Mesh Representation, and Evaluation Metrics
ABSTRACT: Recent advancements in speech-driven 3D talking head generation have made
significant progress in lip synchronization. However, existing models still
struggle to capture the perceptual alignment between varying speech
characteristics and corresponding lip movements. In this work, we claim that
three criteria -- Temporal Synchronization, Lip Readability, and Expressiveness
-- are crucial for achieving perceptually accurate lip movements. Motivated by
our hypothesis that a desirable representation space exists to meet these three
criteria, we introduce a speech-mesh synchronized representation that captures
intricate correspondences between speech signals and 3D face meshes. We found
that our learned representation exhibits desirable characteristics, and we plug
it into existing models as a perceptual loss to better align lip movements to
the given speech. In addition, we utilize this representation as a perceptual
metric and introduce two other physically grounded lip synchronization metrics
to assess how well the generated 3D talking heads align with these three
criteria. Experiments show that training 3D talking head generation models with
our perceptual loss significantly improves all three aspects of perceptually
accurate lip synchronization. Codes and datasets are available at
https://perceptual-3d-talking-head.github.io/.
|
2503.21080 | Yunbo Long | Yuhan Liu, Yunbo Long | EQ-Negotiator: An Emotion-Reasoning LLM Agent in Credit Dialogues | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | While large language model (LLM)-based chatbots have been applied for
effective engagement in credit dialogues, their capacity for dynamic emotional
expression remains limited. Current agents primarily rely on passive empathy
rather than affective reasoning. For instance, when faced with persistent
client negativity, the agent should employ strategic emotional adaptation by
expressing measured anger to discourage counterproductive behavior and guide
the conversation toward resolution. This context-aware emotional modulation is
essential for imitating the nuanced decision-making of human negotiators. This
paper introduces an EQ-negotiator that combines emotion sensing from
pre-trained language models (PLMs) with emotional reasoning based on Game
Theory and Hidden Markov Models. It takes into account both the current and
historical emotions of the client to better manage and address negative
emotions during interactions. By fine-tuning pre-trained language models (PLMs)
on public emotion datasets and validating them on the credit dialogue datasets,
our approach enables LLM-based agents to effectively capture shifts in client
emotions and dynamically adjust their response tone based on our emotion
decision policies in real-world financial negotiations. This EQ-negotiator can
also help credit agencies foster positive client relationships, enhancing
satisfaction in credit services.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 01:41:34 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 10:57:38 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 17:55:35 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Yuhan",
""
],
[
"Long",
"Yunbo",
""
]
] | TITLE: EQ-Negotiator: An Emotion-Reasoning LLM Agent in Credit Dialogues
ABSTRACT: While large language model (LLM)-based chatbots have been applied for
effective engagement in credit dialogues, their capacity for dynamic emotional
expression remains limited. Current agents primarily rely on passive empathy
rather than affective reasoning. For instance, when faced with persistent
client negativity, the agent should employ strategic emotional adaptation by
expressing measured anger to discourage counterproductive behavior and guide
the conversation toward resolution. This context-aware emotional modulation is
essential for imitating the nuanced decision-making of human negotiators. This
paper introduces an EQ-negotiator that combines emotion sensing from
pre-trained language models (PLMs) with emotional reasoning based on Game
Theory and Hidden Markov Models. It takes into account both the current and
historical emotions of the client to better manage and address negative
emotions during interactions. By fine-tuning pre-trained language models (PLMs)
on public emotion datasets and validating them on the credit dialogue datasets,
our approach enables LLM-based agents to effectively capture shifts in client
emotions and dynamically adjust their response tone based on our emotion
decision policies in real-world financial negotiations. This EQ-negotiator can
also help credit agencies foster positive client relationships, enhancing
satisfaction in credit services.
|
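A toy, purely illustrative Python sketch of the history-aware tone selection the EQ-Negotiator record above alludes to: track the client's recent emotions and only switch to a firmer tone after persistent negativity. The emotion labels, window size, and tone names are assumptions, not the paper's actual Game-Theory/HMM policy.

```python
# Toy illustration (not the paper's policy) of history-aware tone selection:
# escalate to measured firmness only after a run of negative client emotions.
from collections import deque

NEGATIVE = {"anger", "disgust", "frustration"}  # assumed label set

def choose_tone(history: deque, current_emotion: str, window: int = 3) -> str:
    history.append(current_emotion)
    recent = list(history)[-window:]
    if len(recent) == window and all(e in NEGATIVE for e in recent):
        return "measured_firmness"      # discourage counterproductive behavior
    if current_emotion in NEGATIVE:
        return "empathetic"
    return "neutral_professional"

history = deque(maxlen=10)
for emotion in ["neutral", "anger", "anger", "anger"]:
    print(emotion, "->", choose_tone(history, emotion))
```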
2503.21620 | Zhengxi Lu | Zhengxi Lu, Yuxiang Chai, Yaxuan Guo, Xi Yin, Liang Liu, Hao Wang,
Guanjing Xiong, Hongsheng Li | UI-R1: Enhancing Action Prediction of GUI Agents by Reinforcement
Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The recent DeepSeek-R1 has showcased the emergence of reasoning capabilities
in LLMs through reinforcement learning (RL) with rule-based rewards. Building
on this idea, we are the first to explore how rule-based RL can enhance the
reasoning capabilities of multimodal large language models (MLLMs) for graphical
user interface (GUI) action prediction tasks. To this end, we curate a small
yet high-quality dataset of 136 challenging tasks, encompassing five common
action types on mobile devices. We also introduce a unified rule-based action
reward, enabling model optimization via policy-based algorithms such as Group
Relative Policy Optimization (GRPO). Experimental results demonstrate that our
proposed data-efficient model, UI-R1-3B, achieves substantial improvements on
both in-domain (ID) and out-of-domain (OOD) tasks. Specifically, on the ID
benchmark AndroidControl, the action type accuracy improves by 15%, while
grounding accuracy increases by 10.3%, compared with the base model (i.e.
Qwen2.5-VL-3B). On the OOD GUI grounding benchmark ScreenSpot-Pro, our model
surpasses the base model by 6.0% and achieves competitive performance with
larger models (e.g., OS-Atlas-7B), which are trained via supervised fine-tuning
(SFT) on 76K data. These results underscore the potential of rule-based
reinforcement learning to advance GUI understanding and control, paving the way
for future research in this domain.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 15:39:30 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 13:05:16 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lu",
"Zhengxi",
""
],
[
"Chai",
"Yuxiang",
""
],
[
"Guo",
"Yaxuan",
""
],
[
"Yin",
"Xi",
""
],
[
"Liu",
"Liang",
""
],
[
"Wang",
"Hao",
""
],
[
"Xiong",
"Guanjing",
""
],
[
"Li",
"Hongsheng",
""
]
] | TITLE: UI-R1: Enhancing Action Prediction of GUI Agents by Reinforcement
Learning
ABSTRACT: The recent DeepSeek-R1 has showcased the emergence of reasoning capabilities
in LLMs through reinforcement learning (RL) with rule-based rewards. Building
on this idea, we are the first to explore how rule-based RL can enhance the
reasoning capabilities of multimodal large language models (MLLMs) for graphical
user interface (GUI) action prediction tasks. To this end, we curate a small
yet high-quality dataset of 136 challenging tasks, encompassing five common
action types on mobile devices. We also introduce a unified rule-based action
reward, enabling model optimization via policy-based algorithms such as Group
Relative Policy Optimization (GRPO). Experimental results demonstrate that our
proposed data-efficient model, UI-R1-3B, achieves substantial improvements on
both in-domain (ID) and out-of-domain (OOD) tasks. Specifically, on the ID
benchmark AndroidControl, the action type accuracy improves by 15%, while
grounding accuracy increases by 10.3%, compared with the base model (i.e.
Qwen2.5-VL-3B). On the OOD GUI grounding benchmark ScreenSpot-Pro, our model
surpasses the base model by 6.0% and achieves competitive performance with
larger models (e.g., OS-Atlas-7B), which are trained via supervised fine-tuning
(SFT) on 76K data. These results underscore the potential of rule-based
reinforcement learning to advance GUI understanding and control, paving the way
for future research in this domain.
|
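A minimal Python sketch of the two ingredients the UI-R1 record above describes: a rule-based action reward (action-type match plus grounding accuracy for localized actions) and group-relative advantages in the GRPO style. The action schema, weights, and tolerance are illustrative assumptions, not the paper's exact definitions.

```python
# Hypothetical action schema: {"type": "click", "x": 0.42, "y": 0.61} with
# normalized screen coordinates; all weights and tolerances are illustrative.
def action_reward(pred: dict, gold: dict, tol: float = 0.05) -> float:
    reward = 0.0
    if pred.get("type") == gold.get("type"):
        reward += 0.5                      # correct action type
        if gold.get("type") == "click":
            dx = abs(pred.get("x", 2.0) - gold["x"])
            dy = abs(pred.get("y", 2.0) - gold["y"])
            if dx <= tol and dy <= tol:
                reward += 0.5              # accurate grounding
        else:
            reward += 0.5                  # non-localized actions: type suffices
    return reward


def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style normalization of rewards within one group of sampled outputs."""
    mean = sum(rewards) / len(rewards)
    std = max((sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5, 1e-8)
    return [(r - mean) / std for r in rewards]


# Example: four sampled completions for one GUI task.
rewards = [action_reward(p, {"type": "click", "x": 0.4, "y": 0.6})
           for p in ({"type": "click", "x": 0.41, "y": 0.62},
                     {"type": "click", "x": 0.9, "y": 0.1},
                     {"type": "scroll"},
                     {"type": "click", "x": 0.39, "y": 0.58})]
print(group_relative_advantages(rewards))  # higher advantage for accurate clicks
```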
2503.21679 | Yunze Xiao | Yunze Xiao, Tingyu He, Lionel Z. Wang, Yiming Ma, Xingyu Song,
Xiaohang Xu, Irene Li and Ka Chung Ng | JiraiBench: A Bilingual Benchmark for Evaluating Large Language Models'
Detection of Human Self-Destructive Behavior Content in Jirai Community | 20 pages, 1 figures | null | null | null | cs.CL cs.CY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper introduces JiraiBench, the first bilingual benchmark for
evaluating large language models' effectiveness in detecting self-destructive
content across Chinese and Japanese social media communities. Focusing on the
transnational "Jirai" (landmine) online subculture that encompasses multiple
forms of self-destructive behaviors including drug overdose, eating disorders,
and self-harm, we present a comprehensive evaluation framework incorporating
both linguistic and cultural dimensions. Our dataset comprises 10,419 Chinese
posts and 5,000 Japanese posts with multidimensional annotation along three
behavioral categories, achieving substantial inter-annotator agreement.
Experimental evaluations across four state-of-the-art models reveal significant
performance variations based on instructional language, with Japanese prompts
unexpectedly outperforming Chinese prompts when processing Chinese content.
This emergent cross-cultural transfer suggests that cultural proximity can
sometimes outweigh linguistic similarity in detection tasks. Cross-lingual
transfer experiments with fine-tuned models further demonstrate the potential
for knowledge transfer between these language systems without explicit target
language training. These findings highlight the need for culturally-informed
approaches to multilingual content moderation and provide empirical evidence
for the importance of cultural context in developing more effective detection
systems for vulnerable online communities.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 16:48:58 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 14:02:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Xiao",
"Yunze",
""
],
[
"He",
"Tingyu",
""
],
[
"Wang",
"Lionel Z.",
""
],
[
"Ma",
"Yiming",
""
],
[
"Song",
"Xingyu",
""
],
[
"Xu",
"Xiaohang",
""
],
[
"Li",
"Irene",
""
],
[
"Ng",
"Ka Chung",
""
]
] | TITLE: JiraiBench: A Bilingual Benchmark for Evaluating Large Language Models'
Detection of Human Self-Destructive Behavior Content in Jirai Community
ABSTRACT: This paper introduces JiraiBench, the first bilingual benchmark for
evaluating large language models' effectiveness in detecting self-destructive
content across Chinese and Japanese social media communities. Focusing on the
transnational "Jirai" (landmine) online subculture that encompasses multiple
forms of self-destructive behaviors including drug overdose, eating disorders,
and self-harm, we present a comprehensive evaluation framework incorporating
both linguistic and cultural dimensions. Our dataset comprises 10,419 Chinese
posts and 5,000 Japanese posts with multidimensional annotation along three
behavioral categories, achieving substantial inter-annotator agreement.
Experimental evaluations across four state-of-the-art models reveal significant
performance variations based on instructional language, with Japanese prompts
unexpectedly outperforming Chinese prompts when processing Chinese content.
This emergent cross-cultural transfer suggests that cultural proximity can
sometimes outweigh linguistic similarity in detection tasks. Cross-lingual
transfer experiments with fine-tuned models further demonstrate the potential
for knowledge transfer between these language systems without explicit target
language training. These findings highlight the need for culturally-informed
approaches to multilingual content moderation and provide empirical evidence
for the importance of cultural context in developing more effective detection
systems for vulnerable online communities.
|
2503.21804 | Shusaku Egami | Shusaku Egami, Kyoumoto Matsushita, Takanori Ugai, Ken Fukuda | Comparison of Metadata Representation Models for Knowledge Graph
Embeddings | 11 pages, 9 Figures | null | null | null | cs.LG cs.AI cs.IR | http://creativecommons.org/licenses/by/4.0/ | Hyper-relational Knowledge Graphs (HRKGs) extend traditional KGs beyond
binary relations, enabling the representation of contextual, provenance, and
temporal information in domains, such as historical events, sensor data, video
content, and narratives. HRKGs can be structured using several Metadata
Representation Models (MRMs), including Reification (REF), Singleton Property
(SGP), and RDF-star (RDR). However, the effects of different MRMs on KG
Embedding (KGE) and Link Prediction (LP) models remain unclear. This study
evaluates MRMs in the context of LP tasks, identifies the limitations of
existing evaluation frameworks, and introduces a new task that ensures fair
comparisons across MRMs. Furthermore, we propose a framework that effectively
reflects the knowledge representations of the three MRMs in latent space.
Experiments on two types of datasets reveal that REF performs well in simple
HRKGs, whereas SGP is less effective. However, in complex HRKGs, the
differences among MRMs in the LP tasks are minimal. Our findings contribute to
an optimal knowledge representation strategy for HRKGs in LP tasks.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 04:46:23 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 04:31:23 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Egami",
"Shusaku",
""
],
[
"Matsushita",
"Kyoumoto",
""
],
[
"Ugai",
"Takanori",
""
],
[
"Fukuda",
"Ken",
""
]
] | TITLE: Comparison of Metadata Representation Models for Knowledge Graph
Embeddings
ABSTRACT: Hyper-relational Knowledge Graphs (HRKGs) extend traditional KGs beyond
binary relations, enabling the representation of contextual, provenance, and
temporal information in domains, such as historical events, sensor data, video
content, and narratives. HRKGs can be structured using several Metadata
Representation Models (MRMs), including Reification (REF), Singleton Property
(SGP), and RDF-star (RDR). However, the effects of different MRMs on KG
Embedding (KGE) and Link Prediction (LP) models remain unclear. This study
evaluates MRMs in the context of LP tasks, identifies the limitations of
existing evaluation frameworks, and introduces a new task that ensures fair
comparisons across MRMs. Furthermore, we propose a framework that effectively
reflects the knowledge representations of the three MRMs in latent space.
Experiments on two types of datasets reveal that REF performs well in simple
HRKGs, whereas SGP is less effective. However, in complex HRKGs, the
differences among MRMs in the LP tasks are minimal. Our findings contribute to
an optimal knowledge representation strategy for HRKGs in LP tasks.
|
2503.22236 | Yushuang Wu | Chongjie Ye, Yushuang Wu, Ziteng Lu, Jiahao Chang, Xiaoyang Guo,
Jiaqing Zhou, Hao Zhao, Xiaoguang Han | Hi3DGen: High-fidelity 3D Geometry Generation from Images via Normal
Bridging | https://stable-x.github.io/Hi3DGen | null | null | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the growing demand for high-fidelity 3D models from 2D images, existing
methods still face significant challenges in accurately reproducing
fine-grained geometric details due to limitations in domain gaps and inherent
ambiguities in RGB images. To address these issues, we propose Hi3DGen, a novel
framework for generating high-fidelity 3D geometry from images via normal
bridging. Hi3DGen consists of three key components: (1) an image-to-normal
estimator that decouples the low-high frequency image pattern with noise
injection and dual-stream training to achieve generalizable, stable, and sharp
estimation; (2) a normal-to-geometry learning approach that uses
normal-regularized latent diffusion learning to enhance 3D geometry generation
fidelity; and (3) a 3D data synthesis pipeline that constructs a high-quality
dataset to support training. Extensive experiments demonstrate the
effectiveness and superiority of our framework in generating rich geometric
details, outperforming state-of-the-art methods in terms of fidelity. Our work
provides a new direction for high-fidelity 3D geometry generation from images
by leveraging normal maps as an intermediate representation.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 08:39:20 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 03:41:01 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ye",
"Chongjie",
""
],
[
"Wu",
"Yushuang",
""
],
[
"Lu",
"Ziteng",
""
],
[
"Chang",
"Jiahao",
""
],
[
"Guo",
"Xiaoyang",
""
],
[
"Zhou",
"Jiaqing",
""
],
[
"Zhao",
"Hao",
""
],
[
"Han",
"Xiaoguang",
""
]
] | TITLE: Hi3DGen: High-fidelity 3D Geometry Generation from Images via Normal
Bridging
ABSTRACT: With the growing demand for high-fidelity 3D models from 2D images, existing
methods still face significant challenges in accurately reproducing
fine-grained geometric details due to limitations in domain gaps and inherent
ambiguities in RGB images. To address these issues, we propose Hi3DGen, a novel
framework for generating high-fidelity 3D geometry from images via normal
bridging. Hi3DGen consists of three key components: (1) an image-to-normal
estimator that decouples the low-high frequency image pattern with noise
injection and dual-stream training to achieve generalizable, stable, and sharp
estimation; (2) a normal-to-geometry learning approach that uses
normal-regularized latent diffusion learning to enhance 3D geometry generation
fidelity; and (3) a 3D data synthesis pipeline that constructs a high-quality
dataset to support training. Extensive experiments demonstrate the
effectiveness and superiority of our framework in generating rich geometric
details, outperforming state-of-the-art methods in terms of fidelity. Our work
provides a new direction for high-fidelity 3D geometry generation from images
by leveraging normal maps as an intermediate representation.
|
2503.22241 | Ziye Chen | Ziye Chen, Yiqun Duan, Riheng Zhu, Zhenbang Sun, Mingming Gong | Agent-Centric Personalized Multiple Clustering with Multi-Modal LLMs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personalized multiple clustering aims to generate diverse partitions of a
dataset based on different user-specific aspects, rather than a single
clustering. It has recently drawn research interest for accommodating varying
user preferences. Recent approaches primarily use CLIP embeddings with proxy
learning to extract representations biased toward user clustering preferences.
However, CLIP primarily focuses on coarse image-text alignment, lacking a deep
contextual understanding of user interests. To overcome these limitations, we
propose an agent-centric personalized clustering framework that leverages
multi-modal large language models (MLLMs) as agents to comprehensively traverse
a relational graph to search for clusters based on user interests. Due to the
advanced reasoning mechanism of MLLMs, the obtained clusters align more closely
with user-defined criteria than those obtained from CLIP-based representations.
To reduce computational overhead, we shorten the agents' traversal path by
constructing a relational graph using user-interest-biased embeddings extracted
by MLLMs. A large number of weakly connected edges can be filtered out based on
embedding similarity, facilitating an efficient traversal search for agents.
Experimental results show that the proposed method achieves NMI scores of
0.9667 and 0.9481 on the Card Order and Card Suits benchmarks, respectively,
largely improving the SOTA model by over 140%.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 08:45:15 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 02:56:24 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Ziye",
""
],
[
"Duan",
"Yiqun",
""
],
[
"Zhu",
"Riheng",
""
],
[
"Sun",
"Zhenbang",
""
],
[
"Gong",
"Mingming",
""
]
] | TITLE: Agent-Centric Personalized Multiple Clustering with Multi-Modal LLMs
ABSTRACT: Personalized multiple clustering aims to generate diverse partitions of a
dataset based on different user-specific aspects, rather than a single
clustering. It has recently drawn research interest for accommodating varying
user preferences. Recent approaches primarily use CLIP embeddings with proxy
learning to extract representations biased toward user clustering preferences.
However, CLIP primarily focuses on coarse image-text alignment, lacking a deep
contextual understanding of user interests. To overcome these limitations, we
propose an agent-centric personalized clustering framework that leverages
multi-modal large language models (MLLMs) as agents to comprehensively traverse
a relational graph to search for clusters based on user interests. Due to the
advanced reasoning mechanism of MLLMs, the obtained clusters align more closely
with user-defined criteria than those obtained from CLIP-based representations.
To reduce computational overhead, we shorten the agents' traversal path by
constructing a relational graph using user-interest-biased embeddings extracted
by MLLMs. A large number of weakly connected edges can be filtered out based on
embedding similarity, facilitating an efficient traversal search for agents.
Experimental results show that the proposed method achieves NMI scores of
0.9667 and 0.9481 on the Card Order and Card Suits benchmarks, respectively,
largely improving the SOTA model by over 140%.
|
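The edge-filtering step mentioned in the clustering record above (dropping weakly connected edges from the relational graph based on embedding similarity) can be sketched roughly as follows; the cosine-similarity construction and threshold value are assumptions for illustration, not the authors' implementation.

```python
# Toy sketch: build a relational graph from embedding cosine similarity and
# keep only strong edges, shortening the agents' traversal paths.
import numpy as np

def build_filtered_graph(embeddings: np.ndarray, threshold: float = 0.6):
    """embeddings: (n, d) user-interest-biased item embeddings (placeholder)."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T                      # pairwise cosine similarity
    np.fill_diagonal(sim, 0.0)                   # no self-loops
    edges = [(i, j, float(sim[i, j]))
             for i in range(len(sim)) for j in range(i + 1, len(sim))
             if sim[i, j] >= threshold]          # drop weakly connected edges
    return edges

print(build_filtered_graph(np.random.default_rng(0).normal(size=(5, 8))))
```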
2503.22370 | Haofei Lu | Haofei Lu, Yifei Dong, Zehang Weng, Jens Lundell, Danica Kragic | Grasping a Handful: Sequential Multi-Object Dexterous Grasp Generation | 8 pages, 7 figures | null | null | null | cs.RO cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the sequential multi-object robotic grasp sampling algorithm
SeqGrasp that can robustly synthesize stable grasps on diverse objects using
the robotic hand's partial Degrees of Freedom (DoF). We use SeqGrasp to
construct the large-scale Allegro Hand sequential grasping dataset SeqDataset
and use it for training the diffusion-based sequential grasp generator
SeqDiffuser. We experimentally evaluate SeqGrasp and SeqDiffuser against the
state-of-the-art non-sequential multi-object grasp generation method MultiGrasp
in simulation and on a real robot. The experimental results demonstrate that
SeqGrasp and SeqDiffuser reach an 8.71%-43.33% higher grasp success rate than
MultiGrasp. Furthermore, SeqDiffuser is approximately 1000 times faster at
generating grasps than SeqGrasp and MultiGrasp.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 12:24:26 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 09:06:26 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lu",
"Haofei",
""
],
[
"Dong",
"Yifei",
""
],
[
"Weng",
"Zehang",
""
],
[
"Lundell",
"Jens",
""
],
[
"Kragic",
"Danica",
""
]
] | TITLE: Grasping a Handful: Sequential Multi-Object Dexterous Grasp Generation
ABSTRACT: We introduce the sequential multi-object robotic grasp sampling algorithm
SeqGrasp that can robustly synthesize stable grasps on diverse objects using
the robotic hand's partial Degrees of Freedom (DoF). We use SeqGrasp to
construct the large-scale Allegro Hand sequential grasping dataset SeqDataset
and use it for training the diffusion-based sequential grasp generator
SeqDiffuser. We experimentally evaluate SeqGrasp and SeqDiffuser against the
state-of-the-art non-sequential multi-object grasp generation method MultiGrasp
in simulation and on a real robot. The experimental results demonstrate that
SeqGrasp and SeqDiffuser reach an 8.71%-43.33% higher grasp success rate than
MultiGrasp. Furthermore, SeqDiffuser is approximately 1000 times faster at
generating grasps than SeqGrasp and MultiGrasp.
|
2503.22661 | Danish Khan | Danish Khan | Non-linear and non-empirical double hybrid density functional | null | null | null | null | physics.chem-ph cond-mat.mtrl-sci cond-mat.str-el | http://creativecommons.org/licenses/by/4.0/ | We develop a non-linear and non-empirical (nLanE) double hybrid density
functional derived from an accurate interpolation of the adiabatic connection
in density functional theory, incorporating the correct asymptotic expansions.
By bridging the second-order perturbative weak correlation limit with the fully
interacting limit from the semi-local SCAN functional, nLanE-SCAN is free of
fitted parameters while providing improved energetic predictions compared to
SCAN for moderately and strongly correlated systems alike. It delivers accurate
predictions for atomic total energies and multiple reaction datasets from the
GMTKN55 benchmark while significantly outperforming traditional linear hybrids
and double hybrids for non-covalent interactions without requiring dispersion
corrections. Due to the exact constraints at the weak correlation limit,
nLanE-SCAN has reduced delocalization errors as evident through SIE4x4 and bond
dissociations of H\(_2^+\) and He\(_2^+\). Its proper asymptotic behavior
ensures stability in strongly correlated systems, improving H\(_2\) and N\(_2\)
bond dissociation profiles compared to conventional functionals.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 17:45:25 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 16:04:20 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Khan",
"Danish",
""
]
] | TITLE: Non-linear and non-empirical double hybrid density functional
ABSTRACT: We develop a non-linear and non-empirical (nLanE) double hybrid density
functional derived from an accurate interpolation of the adiabatic connection
in density functional theory, incorporating the correct asymptotic expansions.
By bridging the second-order perturbative weak correlation limit with the fully
interacting limit from the semi-local SCAN functional, nLanE-SCAN is free of
fitted parameters while providing improved energetic predictions compared to
SCAN for moderately and strongly correlated systems alike. It delivers accurate
predictions for atomic total energies and multiple reaction datasets from the
GMTKN55 benchmark while significantly outperforming traditional linear hybrids
and double hybrids for non-covalent interactions without requiring dispersion
corrections. Due to the exact constraints at the weak correlation limit,
nLanE-SCAN has reduced delocalization errors as evident through SIE4x4 and bond
dissociations of H\(_2^+\) and He\(_2^+\). Its proper asymptotic behavior
ensures stability in strongly correlated systems, improving H\(_2\) and N\(_2\)
bond dissociation profiles compared to conventional functionals.
|
2503.22684 | Md Ahnaf Akif | Md Ahnaf Akif | Binary and Multi-Class Intrusion Detection in IoT Using Standalone and
Hybrid Machine and Deep Learning Models | Master's thesis, 80 pages, 18 figures, 4 tables | Procedia Computer Science, Volume No: 233, Pages: 670-681, Year:
2024 | null | null | cs.CR cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Maintaining security in IoT systems depends on intrusion detection since
these networks' sensitivity to cyber-attacks is growing. Based on the IoT23
dataset, this study explores the use of several Machine Learning (ML) and Deep
Learning (DL) along with the hybrid models for binary and multi-class intrusion
detection. The standalone machine and deep learning models like Random Forest
(RF), Extreme Gradient Boosting (XGBoost), Artificial Neural Network (ANN),
K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Convolutional
Neural Network (CNN) were used. Furthermore, two hybrid models were created by
combining machine learning techniques: RF, XGBoost, AdaBoost, KNN, and SVM and
these hybrid models were voting based hybrid classifier. Where one is for
binary, and the other one is for multi-class classification. These models vi
were tested using precision, recall, accuracy, and F1-score criteria and
compared the performance of each model. This work thoroughly explains how
hybrid, standalone ML and DL techniques could improve IDS (Intrusion Detection
System) in terms of accuracy and scalability in IoT (Internet of Things).
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 17:47:38 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Akif",
"Md Ahnaf",
""
]
] | TITLE: Binary and Multi-Class Intrusion Detection in IoT Using Standalone and
Hybrid Machine and Deep Learning Models
ABSTRACT: Maintaining security in IoT systems depends on intrusion detection since
these networks' sensitivity to cyber-attacks is growing. Based on the IoT23
dataset, this study explores the use of several Machine Learning (ML) and Deep
Learning (DL) along with the hybrid models for binary and multi-class intrusion
detection. The standalone machine and deep learning models like Random Forest
(RF), Extreme Gradient Boosting (XGBoost), Artificial Neural Network (ANN),
K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Convolutional
Neural Network (CNN) were used. Furthermore, two hybrid voting-based classifiers
were created by combining machine learning techniques (RF, XGBoost, AdaBoost,
KNN, and SVM): one for binary classification and the other for multi-class
classification. These models were evaluated using precision, recall, accuracy,
and F1-score, and their performance was compared. This work thoroughly explains how
hybrid, standalone ML and DL techniques could improve IDS (Intrusion Detection
System) in terms of accuracy and scalability in IoT (Internet of Things).
|
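A minimal sketch of the kind of voting-based hybrid classifier described in the record above, assuming scikit-learn and xgboost are available; preprocessing of the IoT-23 flows and the estimator settings are omitted or purely illustrative.

```python
# Soft-voting hybrid of RF, XGBoost, AdaBoost, KNN, and SVM (illustrative).
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

hybrid = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
        ("ada", AdaBoostClassifier(n_estimators=100)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("svm", SVC(probability=True)),  # probability=True enables soft voting
    ],
    voting="soft",  # average predicted class probabilities across members
)

# X_train / y_train stand in for the preprocessed IoT-23 features and labels
# (binary or multi-class); they are placeholders, not provided here.
# hybrid.fit(X_train, y_train)
# y_pred = hybrid.predict(X_test)
```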
2503.22687 | Jinming Chen | Jinming Chen, Jingyi Fang, Yuanzhong Zheng, Yaoxuan Wang, Haojun Fei | Qieemo: Speech Is All You Need in the Emotion Recognition in
Conversations | null | null | null | null | eess.AS cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Emotion recognition plays a pivotal role in intelligent human-machine
interaction systems. Multimodal approaches benefit from the fusion of diverse
modalities, thereby improving the recognition accuracy. However, the lack of
high-quality multimodal data and the challenge of achieving optimal alignment
between different modalities significantly limit the potential for improvement
in multimodal approaches. In this paper, the proposed Qieemo framework
effectively utilizes the pretrained automatic speech recognition (ASR) model
backbone, which contains naturally frame-aligned textual and emotional features,
to achieve precise emotion classification solely based on the audio modality.
Furthermore, we design the multimodal fusion (MMF) module and cross-modal
attention (CMA) module in order to fuse the phonetic posteriorgram (PPG) and
emotional features extracted by the ASR encoder for improving recognition
accuracy. The experimental results on the IEMOCAP dataset demonstrate that
Qieemo outperforms the benchmark unimodal, multimodal, and self-supervised
models with absolute improvements of 3.0%, 1.2%, and 1.9% respectively.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 07:02:30 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Jinming",
""
],
[
"Fang",
"Jingyi",
""
],
[
"Zheng",
"Yuanzhong",
""
],
[
"Wang",
"Yaoxuan",
""
],
[
"Fei",
"Haojun",
""
]
] | TITLE: Qieemo: Speech Is All You Need in the Emotion Recognition in
Conversations
ABSTRACT: Emotion recognition plays a pivotal role in intelligent human-machine
interaction systems. Multimodal approaches benefit from the fusion of diverse
modalities, thereby improving the recognition accuracy. However, the lack of
high-quality multimodal data and the challenge of achieving optimal alignment
between different modalities significantly limit the potential for improvement
in multimodal approaches. In this paper, the proposed Qieemo framework
effectively utilizes the pretrained automatic speech recognition (ASR) model
backbone, which contains naturally frame-aligned textual and emotional features,
to achieve precise emotion classification solely based on the audio modality.
Furthermore, we design the multimodal fusion (MMF) module and cross-modal
attention (CMA) module in order to fuse the phonetic posteriorgram (PPG) and
emotional features extracted by the ASR encoder for improving recognition
accuracy. The experimental results on the IEMOCAP dataset demonstrate that
Qieemo outperforms the benchmark unimodal, multimodal, and self-supervised
models with absolute improvements of 3.0%, 1.2%, and 1.9% respectively.
|
2503.22689 | Hongru Du | Chenzhi Ma, Hongru Du, Shengzhi Luan, Ensheng Dong, Lauren M. Gardner,
and Thomas Gernay | From Occurrence to Consequence: A Comprehensive Data-driven Analysis of
Building Fire Risk | null | null | null | null | cs.LG physics.data-an stat.AP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Building fires pose a persistent threat to life, property, and
infrastructure, emphasizing the need for advanced risk mitigation strategies.
This study presents a data-driven framework analyzing U.S. fire risks by
integrating over one million fire incident reports with diverse fire-relevant
datasets, including social determinants, building inventories, weather
conditions, and incident-specific factors. By adapting machine learning models,
we identify key risk factors influencing fire occurrence and consequences. Our
findings show that vulnerable communities, characterized by socioeconomic
disparities or the prevalence of outdated or vacant buildings, face higher fire
risks. Incident-specific factors, such as fire origins and safety features,
strongly influence fire consequences. Buildings equipped with fire detectors
and automatic extinguishing systems experience significantly lower fire spread
and injury risks. By pinpointing high-risk areas and populations, this research
supports targeted interventions, including mandating fire safety systems and
providing subsidies for disadvantaged communities. These measures can enhance
fire prevention, protect vulnerable groups, and promote safer, more equitable
communities.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 14:55:31 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ma",
"Chenzhi",
""
],
[
"Du",
"Hongru",
""
],
[
"Luan",
"Shengzhi",
""
],
[
"Dong",
"Ensheng",
""
],
[
"Gardner",
"Lauren M.",
""
],
[
"Gernay",
"Thomas",
""
]
] | TITLE: From Occurrence to Consequence: A Comprehensive Data-driven Analysis of
Building Fire Risk
ABSTRACT: Building fires pose a persistent threat to life, property, and
infrastructure, emphasizing the need for advanced risk mitigation strategies.
This study presents a data-driven framework analyzing U.S. fire risks by
integrating over one million fire incident reports with diverse fire-relevant
datasets, including social determinants, building inventories, weather
conditions, and incident-specific factors. By adapting machine learning models,
we identify key risk factors influencing fire occurrence and consequences. Our
findings show that vulnerable communities, characterized by socioeconomic
disparities or the prevalence of outdated or vacant buildings, face higher fire
risks. Incident-specific factors, such as fire origins and safety features,
strongly influence fire consequences. Buildings equipped with fire detectors
and automatic extinguishing systems experience significantly lower fire spread
and injury risks. By pinpointing high-risk areas and populations, this research
supports targeted interventions, including mandating fire safety systems and
providing subsidies for disadvantaged communities. These measures can enhance
fire prevention, protect vulnerable groups, and promote safer, more equitable
communities.
|
2503.22692 | Shokoufeh Mirzaei | Shokoufeh Mirzaei, Jesse Arzate, Yukti Vijay | Enhancing Aviation Communication Transcription: Fine-Tuning
Distil-Whisper with LoRA | 14 pages, 4 Figures, 4 Tables, Under review by Journal of Aerospace
Information Systems | null | null | null | eess.AS cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Transcription of aviation communications has several applications, from
assisting air traffic controllers in identifying the accuracy of read-back
errors to search and rescue operations. Recent advances in artificial
intelligence have provided unprecedented opportunities for improving aviation
communication transcription tasks. OpenAI's Whisper is one of the leading
automatic speech recognition models. However, fine-tuning Whisper for aviation
communication transcription is not computationally efficient. Thus, this paper
aims to use a Parameter-Efficient Fine-tuning method called Low-Rank Adaptation
to fine-tune a more computationally efficient version of Whisper,
distil-Whisper. To perform the fine-tuning, we used the Air Traffic Control
Corpus dataset from the Linguistic Data Consortium, which contains
approximately 70 hours of controller and pilot transmissions near three major
airports in the US. The objective was to reduce the word error rate to enhance
accuracy in the transcription of aviation communication. First, starting with
an initial set of hyperparameters for LoRA (Alpha = 64 and Rank = 32), we
performed a grid search. We applied a 5-fold cross-validation to find the best
combination of distil-Whisper hyperparameters. Then, we fine-tuned the model
for LoRA hyperparameters, achieving an impressive average word error rate of
3.86% across five folds. This result highlights the model's potential for use
in the cockpit.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 22:12:45 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Mirzaei",
"Shokoufeh",
""
],
[
"Arzate",
"Jesse",
""
],
[
"Vijay",
"Yukti",
""
]
] | TITLE: Enhancing Aviation Communication Transcription: Fine-Tuning
Distil-Whisper with LoRA
ABSTRACT: Transcription of aviation communications has several applications, from
assisting air traffic controllers in identifying the accuracy of read-back
errors to search and rescue operations. Recent advances in artificial
intelligence have provided unprecedented opportunities for improving aviation
communication transcription tasks. OpenAI's Whisper is one of the leading
automatic speech recognition models. However, fine-tuning Whisper for aviation
communication transcription is not computationally efficient. Thus, this paper
aims to use a Parameter-Efficient Fine-tuning method called Low-Rank Adaptation
to fine-tune a more computationally efficient version of Whisper,
distil-Whisper. To perform the fine-tuning, we used the Air Traffic Control
Corpus dataset from the Linguistic Data Consortium, which contains
approximately 70 hours of controller and pilot transmissions near three major
airports in the US. The objective was to reduce the word error rate to enhance
accuracy in the transcription of aviation communication. First, starting with
an initial set of hyperparameters for LoRA (Alpha = 64 and Rank = 32), we
performed a grid search. We applied a 5-fold cross-validation to find the best
combination of distil-Whisper hyperparameters. Then, we fine-tuned the model
for LoRA hyperparameters, achieving an impressive average word error rate of
3.86% across five folds. This result highlights the model's potential for use
in the cockpit.
|
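For reference, attaching LoRA adapters with the rank and alpha reported in the record above (Rank = 32, Alpha = 64) might look roughly like the sketch below using Hugging Face peft; the checkpoint name, target modules, and dropout are assumptions rather than the authors' actual training configuration.

```python
# Illustrative LoRA setup for a distil-Whisper checkpoint (not the authors' script).
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

base = WhisperForConditionalGeneration.from_pretrained(
    "distil-whisper/distil-large-v2"  # assumed checkpoint
)

lora_cfg = LoraConfig(
    r=32,               # rank reported in the abstract
    lora_alpha=64,      # alpha reported in the abstract
    lora_dropout=0.05,  # assumed value
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the low-rank adapters are trainable
```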
2503.22706 | Loukas Triantafyllopoulos Dr | Francesca Meimeti, Loukas Triantafyllopoulos, Aikaterini Sakagianni,
Vasileios Kaldis, Lazaros Tzelves, Nikolaos Theodorakis, Evgenia Paxinou,
Georgios Feretzakis, Dimitris Kalles, Vassilios S. Verykios | Validating Emergency Department Admission Predictions Based on Local
Data Through MIMIC-IV | 36 pages, 3 figures, 6 tables | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The effective management of Emergency Department (ED) overcrowding is
essential for improving patient outcomes and optimizing healthcare resource
allocation. This study validates hospital admission prediction models initially
developed using a small local dataset from a Greek hospital by leveraging the
comprehensive MIMIC-IV dataset. After preprocessing the MIMIC-IV data, five
algorithms were evaluated: Linear Discriminant Analysis (LDA), K-Nearest
Neighbors (KNN), Random Forest (RF), Recursive Partitioning and Regression
Trees (RPART), and Support Vector Machines (SVM Radial). Among these, RF
demonstrated superior performance, achieving an Area Under the Receiver
Operating Characteristic Curve (AUC-ROC) of 0.9999, sensitivity of 0.9997, and
specificity of 0.9999 when applied to the MIMIC-IV data. These findings
highlight the robustness of RF in handling complex datasets for admission
prediction, establish MIMIC-IV as a valuable benchmark for validating models
based on smaller local datasets, and provide actionable insights for improving
ED management strategies.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 13:54:28 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Meimeti",
"Francesca",
""
],
[
"Triantafyllopoulos",
"Loukas",
""
],
[
"Sakagianni",
"Aikaterini",
""
],
[
"Kaldis",
"Vasileios",
""
],
[
"Tzelves",
"Lazaros",
""
],
[
"Theodorakis",
"Nikolaos",
""
],
[
"Paxinou",
"Evgenia",
""
],
[
"Feretzakis",
"Georgios",
""
],
[
"Kalles",
"Dimitris",
""
],
[
"Verykios",
"Vassilios S.",
""
]
] | TITLE: Validating Emergency Department Admission Predictions Based on Local
Data Through MIMIC-IV
ABSTRACT: The effective management of Emergency Department (ED) overcrowding is
essential for improving patient outcomes and optimizing healthcare resource
allocation. This study validates hospital admission prediction models initially
developed using a small local dataset from a Greek hospital by leveraging the
comprehensive MIMIC-IV dataset. After preprocessing the MIMIC-IV data, five
algorithms were evaluated: Linear Discriminant Analysis (LDA), K-Nearest
Neighbors (KNN), Random Forest (RF), Recursive Partitioning and Regression
Trees (RPART), and Support Vector Machines (SVM Radial). Among these, RF
demonstrated superior performance, achieving an Area Under the Receiver
Operating Characteristic Curve (AUC-ROC) of 0.9999, sensitivity of 0.9997, and
specificity of 0.9999 when applied to the MIMIC-IV data. These findings
highlight the robustness of RF in handling complex datasets for admission
prediction, establish MIMIC-IV as a valuable benchmark for validating models
based on smaller local datasets, and provide actionable insights for improving
ED management strategies.
|
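A hedged sketch of the metric computation referenced in the record above: AUC-ROC, sensitivity, and specificity from a fitted admission classifier. Loading and preprocessing MIMIC-IV is out of scope and assumed already done; the toy arrays are placeholders.

```python
# Compute AUC-ROC, sensitivity, and specificity for admission predictions.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def admission_metrics(y_true: np.ndarray, y_proba: np.ndarray, thr: float = 0.5):
    auc = roc_auc_score(y_true, y_proba)
    tn, fp, fn, tp = confusion_matrix(y_true, (y_proba >= thr).astype(int)).ravel()
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return auc, sensitivity, specificity

# Example with toy predictions (placeholders for real MIMIC-IV outputs):
print(admission_metrics(np.array([0, 0, 1, 1]), np.array([0.1, 0.4, 0.35, 0.8])))
```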
2503.22712 | Zijun Jia | Zijun Jia | Risk-Calibrated Affective Speech Recognition via Conformal Coverage
Guarantees: A Stochastic Calibrative Framework for Emergent Uncertainty
Quantification | null | null | null | null | cs.SD cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic safety challenges arising from extreme driver emotions highlight the
urgent need for reliable emotion recognition systems. Traditional deep learning
approaches in speech emotion recognition suffer from overfitting and poorly
calibrated confidence estimates. We propose a framework integrating Conformal
Prediction (CP) and Risk Control,using Mel-spectrogram features processed
through a pre-trained convolutional neural network. Our key innovation is the
development of a nonconformity score that heuristically measures how closely a
classifier's predictions align with given inputs. Through calibration samples,
we compute this score and derive a statistically rigorous threshold based on
user-specified risk level $\alpha$, constructing prediction sets with provable
coverage guarantees ($\geq 1-\alpha$). The Risk Control framework enables
task-specific adaptation through customizable loss functions, dynamically
adjusting prediction set sizes while maintaining coverage guarantees.
Cross-dataset experiments on IEMOCAP and TESS demonstrate: 1) Strict coverage
guarantee, 2) Significant negative correlation between Average Prediction Set
Size (APSS) and $\alpha$, revealing reduced model uncertainty under high-risk
conditions. We further propose APSS as a novel metric for evaluating
classification uncertainty. This approach enhances speech emotion recognition
reliability, with direct applications in intelligent transportation systems and
real-time emotion monitoring.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 12:26:28 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Jia",
"Zijun",
""
]
] | TITLE: Risk-Calibrated Affective Speech Recognition via Conformal Coverage
Guarantees: A Stochastic Calibrative Framework for Emergent Uncertainty
Quantification
ABSTRACT: Traffic safety challenges arising from extreme driver emotions highlight the
urgent need for reliable emotion recognition systems. Traditional deep learning
approaches in speech emotion recognition suffer from overfitting and poorly
calibrated confidence estimates. We propose a framework integrating Conformal
Prediction (CP) and Risk Control, using Mel-spectrogram features processed
through a pre-trained convolutional neural network. Our key innovation is the
development of a nonconformity score that heuristically measures how closely a
classifier's predictions align with given inputs. Through calibration samples,
we compute this score and derive a statistically rigorous threshold based on
user-specified risk level $\alpha$, constructing prediction sets with provable
coverage guarantees ($\geq 1-\alpha$). The Risk Control framework enables
task-specific adaptation through customizable loss functions, dynamically
adjusting prediction set sizes while maintaining coverage guarantees.
Cross-dataset experiments on IEMOCAP and TESS demonstrate: 1) Strict coverage
guarantee, 2) Significant negative correlation between Average Prediction Set
Size (APSS) and $\alpha$, revealing reduced model uncertainty under high-risk
conditions. We further propose APSS as a novel metric for evaluating
classification uncertainty. This approach enhances speech emotion recognition
reliability, with direct applications in intelligent transportation systems and
real-time emotion monitoring.
|
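The coverage-guarantee recipe summarized in the record above follows the standard split-conformal pattern, sketched below; the particular nonconformity score (one minus the softmax probability of the true class) is a common textbook choice standing in for the paper's heuristic score.

```python
# Split-conformal calibration: threshold from calibration scores, then
# prediction sets with coverage >= 1 - alpha (sketch under assumptions).
import numpy as np

def conformal_threshold(cal_probs: np.ndarray, cal_labels: np.ndarray, alpha: float) -> float:
    """cal_probs: (n, K) softmax outputs on a calibration set; cal_labels: (n,) ints."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]   # nonconformity scores
    k = int(np.ceil((n + 1) * (1.0 - alpha)))             # finite-sample rank
    return float(np.sort(scores)[min(k, n) - 1])          # empirical quantile

def prediction_set(test_probs: np.ndarray, threshold: float) -> np.ndarray:
    """Boolean mask over classes kept in the set for each test example."""
    return (1.0 - test_probs) <= threshold
```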
2503.22715 | Jiahao Qin | Jiahao Qin, Feng Liu, Lu Zong | Hierarchical Adaptive Expert for Multimodal Sentiment Analysis | 11 pages, 3 figures | null | null | null | cs.LG cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal sentiment analysis has emerged as a critical tool for
understanding human emotions across diverse communication channels. While
existing methods have made significant strides, they often struggle to
effectively differentiate and integrate modality-shared and modality-specific
information, limiting the performance of multimodal learning. To address this
challenge, we propose the Hierarchical Adaptive Expert for Multimodal Sentiment
Analysis (HAEMSA), a novel framework that synergistically combines evolutionary
optimization, cross-modal knowledge transfer, and multi-task learning. HAEMSA
employs a hierarchical structure of adaptive experts to capture both global and
local modality representations, enabling more nuanced sentiment analysis. Our
approach leverages evolutionary algorithms to dynamically optimize network
architectures and modality combinations, adapting to both partial and full
modality scenarios. Extensive experiments demonstrate HAEMSA's superior
performance across multiple benchmark datasets. On CMU-MOSEI, HAEMSA achieves a
2.6% increase in 7-class accuracy and a 0.059 decrease in MAE compared to the
previous best method. For CMU-MOSI, we observe a 6.3% improvement in 7-class
accuracy and a 0.058 reduction in MAE. On IEMOCAP, HAEMSA outperforms the
state-of-the-art by 2.84% in weighted-F1 score for emotion recognition. These
results underscore HAEMSA's effectiveness in capturing complex multimodal
interactions and generalizing across different emotional contexts.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 09:52:08 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Qin",
"Jiahao",
""
],
[
"Liu",
"Feng",
""
],
[
"Zong",
"Lu",
""
]
] | TITLE: Hierarchical Adaptive Expert for Multimodal Sentiment Analysis
ABSTRACT: Multimodal sentiment analysis has emerged as a critical tool for
understanding human emotions across diverse communication channels. While
existing methods have made significant strides, they often struggle to
effectively differentiate and integrate modality-shared and modality-specific
information, limiting the performance of multimodal learning. To address this
challenge, we propose the Hierarchical Adaptive Expert for Multimodal Sentiment
Analysis (HAEMSA), a novel framework that synergistically combines evolutionary
optimization, cross-modal knowledge transfer, and multi-task learning. HAEMSA
employs a hierarchical structure of adaptive experts to capture both global and
local modality representations, enabling more nuanced sentiment analysis. Our
approach leverages evolutionary algorithms to dynamically optimize network
architectures and modality combinations, adapting to both partial and full
modality scenarios. Extensive experiments demonstrate HAEMSA's superior
performance across multiple benchmark datasets. On CMU-MOSEI, HAEMSA achieves a
2.6% increase in 7-class accuracy and a 0.059 decrease in MAE compared to the
previous best method. For CMU-MOSI, we observe a 6.3% improvement in 7-class
accuracy and a 0.058 reduction in MAE. On IEMOCAP, HAEMSA outperforms the
state-of-the-art by 2.84% in weighted-F1 score for emotion recognition. These
results underscore HAEMSA's effectiveness in capturing complex multimodal
interactions and generalizing across different emotional contexts.
|
2503.22719 | Sarah Martinson | Sarah Martinson, Lingkai Kong, Cheol Woo Kim, Aparna Taneja, Milind
Tambe | LLM-based Agent Simulation for Maternal Health Interventions:
Uncertainty Estimation and Decision-focused Evaluation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Agent-based simulation is crucial for modeling complex human behavior, yet
traditional approaches require extensive domain knowledge and large datasets.
In data-scarce healthcare settings where historic and counterfactual data are
limited, large language models (LLMs) offer a promising alternative by
leveraging broad world knowledge. This study examines an LLM-driven simulation
of a maternal mobile health program, predicting beneficiaries' listening
behavior when they receive health information via automated messages (control)
or live representatives (intervention). Since uncertainty quantification is
critical for decision-making in health interventions, we propose an LLM
epistemic uncertainty estimation method based on binary entropy across multiple
samples. We enhance model robustness through ensemble approaches, improving F1
score and model calibration compared to individual models. Beyond direct
evaluation, we take a decision-focused approach, demonstrating how LLM
predictions inform intervention feasibility and trial implementation in
data-limited settings. The proposed method extends to public health, disaster
response, and other domains requiring rapid intervention assessment under
severe data constraints. All code and prompts used for this work can be found
at https://github.com/sarahmart/LLM-ABS-ARMMAN-prediction.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 20:24:47 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Martinson",
"Sarah",
""
],
[
"Kong",
"Lingkai",
""
],
[
"Kim",
"Cheol Woo",
""
],
[
"Taneja",
"Aparna",
""
],
[
"Tambe",
"Milind",
""
]
] | TITLE: LLM-based Agent Simulation for Maternal Health Interventions:
Uncertainty Estimation and Decision-focused Evaluation
ABSTRACT: Agent-based simulation is crucial for modeling complex human behavior, yet
traditional approaches require extensive domain knowledge and large datasets.
In data-scarce healthcare settings where historic and counterfactual data are
limited, large language models (LLMs) offer a promising alternative by
leveraging broad world knowledge. This study examines an LLM-driven simulation
of a maternal mobile health program, predicting beneficiaries' listening
behavior when they receive health information via automated messages (control)
or live representatives (intervention). Since uncertainty quantification is
critical for decision-making in health interventions, we propose an LLM
epistemic uncertainty estimation method based on binary entropy across multiple
samples. We enhance model robustness through ensemble approaches, improving F1
score and model calibration compared to individual models. Beyond direct
evaluation, we take a decision-focused approach, demonstrating how LLM
predictions inform intervention feasibility and trial implementation in
data-limited settings. The proposed method extends to public health, disaster
response, and other domains requiring rapid intervention assessment under
severe data constraints. All code and prompts used for this work can be found
at https://github.com/sarahmart/LLM-ABS-ARMMAN-prediction.
|
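A small sketch of the binary-entropy uncertainty estimate described in the record above: sample the LLM's yes/no prediction several times and score uncertainty as the entropy of the empirical frequency. The sampling interface itself is a placeholder assumption.

```python
# Epistemic uncertainty from repeated binary LLM predictions (illustrative).
import math

def binary_entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def epistemic_uncertainty(samples: list[int]) -> float:
    """samples: repeated 0/1 predictions from the LLM for one beneficiary."""
    p = sum(samples) / len(samples)
    return binary_entropy(p)   # 0 = unanimous, 1 = maximally uncertain

# Example: 7 of 10 sampled generations predict "will listen".
print(epistemic_uncertainty([1, 1, 1, 0, 1, 0, 1, 1, 0, 1]))  # ~0.881
```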
2503.22725 | Jinxu Lin | Jinxu Lin, Linwei Tao, Minjing Dong, Chang Xu | Uncertainty Weighted Gradients for Model Calibration | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Model calibration is essential for ensuring that the predictions of deep
neural networks accurately reflect true probabilities in real-world
classification tasks. However, deep networks often produce over-confident or
under-confident predictions, leading to miscalibration. Various methods have
been proposed to address this issue by designing effective loss functions for
calibration, such as focal loss. In this paper, we analyze its effectiveness
and provide a unified loss framework of focal loss and its variants, where we
mainly attribute their superiority in model calibration to the loss weighting
factor that estimates sample-wise uncertainty. Based on our analysis, existing
loss functions fail to achieve optimal calibration performance due to two main
issues: misalignment during optimization and insufficient precision
in uncertainty estimation. Specifically, focal loss cannot align sample
uncertainty with gradient scaling and the single logit cannot indicate the
uncertainty. To address these issues, we reformulate the optimization from the
perspective of gradients, which focuses on uncertain samples. Meanwhile, we
propose using the Brier Score as the loss weight factor, which provides a more
accurate uncertainty estimation via all the logits. Extensive experiments on
various models and datasets demonstrate that our method achieves
state-of-the-art (SOTA) performance.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 04:16:05 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lin",
"Jinxu",
""
],
[
"Tao",
"Linwei",
""
],
[
"Dong",
"Minjing",
""
],
[
"Xu",
"Chang",
""
]
] | TITLE: Uncertainty Weighted Gradients for Model Calibration
ABSTRACT: Model calibration is essential for ensuring that the predictions of deep
neural networks accurately reflect true probabilities in real-world
classification tasks. However, deep networks often produce over-confident or
under-confident predictions, leading to miscalibration. Various methods have
been proposed to address this issue by designing effective loss functions for
calibration, such as focal loss. In this paper, we analyze its effectiveness
and provide a unified loss framework of focal loss and its variants, where we
mainly attribute their superiority in model calibration to the loss weighting
factor that estimates sample-wise uncertainty. Based on our analysis, existing
loss functions fail to achieve optimal calibration performance due to two main
issues: misalignment during optimization and insufficient precision
in uncertainty estimation. Specifically, focal loss cannot align sample
uncertainty with gradient scaling and the single logit cannot indicate the
uncertainty. To address these issues, we reformulate the optimization from the
perspective of gradients, which focuses on uncertain samples. Meanwhile, we
propose using the Brier Score as the loss weight factor, which provides a more
accurate uncertainty estimation via all the logits. Extensive experiments on
various models and datasets demonstrate that our method achieves
state-of-the-art (SOTA) performance.
|
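A hedged PyTorch sketch of the idea in the record above: weight each sample's cross-entropy by a Brier score computed from all logits so that gradients focus on uncertain samples. The paper's exact weighting scheme may differ; this is an illustrative re-implementation of the concept.

```python
# Brier-score-weighted cross-entropy (conceptual sketch, not the paper's code).
import torch
import torch.nn.functional as F

def brier_weighted_ce(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """logits: (N, K); targets: (N,) integer class labels."""
    probs = logits.softmax(dim=-1)
    one_hot = F.one_hot(targets, num_classes=logits.size(-1)).float()
    # Per-sample Brier score, using every logit rather than only the max one.
    brier = ((probs - one_hot) ** 2).sum(dim=-1)
    ce = F.cross_entropy(logits, targets, reduction="none")
    # Detach the weight so it scales gradients without being optimized itself.
    return (brier.detach() * ce).mean()

loss = brier_weighted_ce(torch.randn(8, 5), torch.randint(0, 5, (8,)))
print(loss)
```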
2503.22729 | Jiahao Qin | Jiahao Qin and Feng Liu and Lu Zong | Ancestral Mamba: Enhancing Selective Discriminant Space Model with
Online Visual Prototype Learning for Efficient and Robust Discriminant
Approach | 10 pages, 3 figures | null | null | null | cs.GR cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the realm of computer graphics, the ability to learn continuously from
non-stationary data streams while adapting to new visual patterns and
mitigating catastrophic forgetting is of paramount importance. Existing
approaches often struggle to capture and represent the essential
characteristics of evolving visual concepts, hindering their applicability to
dynamic graphics tasks. In this paper, we propose Ancestral Mamba, a novel
approach that integrates online prototype learning into a selective
discriminant space model for efficient and robust online continual learning.
The key components of our approach include Ancestral Prototype Adaptation
(APA), which continuously refines and builds upon learned visual prototypes,
and Mamba Feedback (MF), which provides targeted feedback to adapt to
challenging visual patterns. APA enables the model to continuously adapt its
prototypes, building upon ancestral knowledge to tackle new challenges, while
MF acts as a targeted feedback mechanism, focusing on challenging classes and
refining their representations. Extensive experiments on graphics-oriented
datasets, such as CIFAR-10 and CIFAR-100, demonstrate the superior performance
of Ancestral Mamba compared to state-of-the-art baselines, achieving
significant improvements in accuracy and forgetting mitigation.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 08:36:05 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Qin",
"Jiahao",
""
],
[
"Liu",
"Feng",
""
],
[
"Zong",
"Lu",
""
]
] | TITLE: Ancestral Mamba: Enhancing Selective Discriminant Space Model with
Online Visual Prototype Learning for Efficient and Robust Discriminant
Approach
ABSTRACT: In the realm of computer graphics, the ability to learn continuously from
non-stationary data streams while adapting to new visual patterns and
mitigating catastrophic forgetting is of paramount importance. Existing
approaches often struggle to capture and represent the essential
characteristics of evolving visual concepts, hindering their applicability to
dynamic graphics tasks. In this paper, we propose Ancestral Mamba, a novel
approach that integrates online prototype learning into a selective
discriminant space model for efficient and robust online continual learning.
The key components of our approach include Ancestral Prototype Adaptation
(APA), which continuously refines and builds upon learned visual prototypes,
and Mamba Feedback (MF), which provides targeted feedback to adapt to
challenging visual patterns. APA enables the model to continuously adapt its
prototypes, building upon ancestral knowledge to tackle new challenges, while
MF acts as a targeted feedback mechanism, focusing on challenging classes and
refining their representations. Extensive experiments on graphics-oriented
datasets, such as CIFAR-10 and CIFAR-100, demonstrate the superior performance
of Ancestral Mamba compared to state-of-the-art baselines, achieving
significant improvements in accuracy and forgetting mitigation.
|
2503.22730 | Abdoulaye SAKHO | Abdoulaye Sakho (LPSM), Emmanuel Malherbe, Carl-Erik Gauthier, Erwan
Scornet (LPSM) | Harnessing Mixed Features for Imbalance Data Oversampling: Application
to Bank Customers Scoring | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study investigates rare event detection on tabular data within binary
classification. Standard techniques to handle class imbalance include SMOTE,
which generates synthetic samples from the minority class. However, SMOTE is
intrinsically designed for continuous input variables. In fact, despite
SMOTE-NC-its default extension to handle mixed features (continuous and
categorical variables)-very few works propose procedures to synthesize mixed
features. On the other hand, many real-world classification tasks, such as in
the banking sector, deal with mixed features, which have a significant impact on
predictive performances. To this end, we introduce MGS-GRF, an oversampling
strategy designed for mixed features. This method uses a kernel density
estimator with locally estimated full-rank covariances to generate continuous
features, while categorical ones are drawn from the original samples through a
generalized random forest. Empirically, contrary to SMOTE-NC, we show that
MGS-GRF exhibits two important properties: (i) coherence, i.e. the ability
to only generate combinations of categorical features that are already present
in the original dataset, and (ii) association, i.e. the ability to preserve the
dependence between continuous and categorical features. We also evaluate the
predictive performances of LightGBM classifiers trained on data sets, augmented
with synthetic samples from various strategies. Our comparison is performed on
simulated and public real-world data sets, as well as on a private data set
from a leading financial institution. We observe that synthetic procedures that
have the properties of coherence and association display better predictive
performances in terms of various predictive metrics (PR and ROC AUC...), with
MGS-GRF being the best one. Furthermore, our method exhibits promising results
for the private banking application, with a development pipeline compliant
with regulatory constraints.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 08:53:40 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Sakho",
"Abdoulaye",
"",
"LPSM"
],
[
"Malherbe",
"Emmanuel",
"",
"LPSM"
],
[
"Gauthier",
"Carl-Erik",
"",
"LPSM"
],
[
"Scornet",
"Erwan",
"",
"LPSM"
]
] | TITLE: Harnessing Mixed Features for Imbalance Data Oversampling: Application
to Bank Customers Scoring
ABSTRACT: This study investigates rare event detection on tabular data within binary
classification. Standard techniques to handle class imbalance include SMOTE,
which generates synthetic samples from the minority class. However, SMOTE is
intrinsically designed for continuous input variables. In fact, despite
SMOTE-NC-its default extension to handle mixed features (continuous and
categorical variables)-very few works propose procedures to synthesize mixed
features. On the other hand, many real-world classification tasks, such as in
the banking sector, deal with mixed features, which have a significant impact on
predictive performances. To this end, we introduce MGS-GRF, an oversampling
strategy designed for mixed features. This method uses a kernel density
estimator with locally estimated full-rank covariances to generate continuous
features, while categorical ones are drawn from the original samples through a
generalized random forest. Empirically, contrary to SMOTE-NC, we show that
MGS-GRF exhibits two important properties: (i) coherence, i.e. the ability
to only generate combinations of categorical features that are already present
in the original dataset, and (ii) association, i.e. the ability to preserve the
dependence between continuous and categorical features. We also evaluate the
predictive performances of LightGBM classifiers trained on data sets, augmented
with synthetic samples from various strategies. Our comparison is performed on
simulated and public real-world data sets, as well as on a private data set
from a leading financial institution. We observe that synthetic procedures that
have the properties of coherence and association display better predictive
performances in terms of various predictive metrics (PR and ROC AUC...), with
MGS-GRF being the best one. Furthermore, our method exhibits promising results
for the private banking application, with a development pipeline compliant
with regulatory constraints.
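
A simplified sketch of the oversampling idea follows: continuous features are drawn from a local Gaussian whose mean and full covariance are estimated on the k nearest minority neighbours, while categorical features are copied from the observed seed sample so that no unseen category combination is created. The paper uses a generalized random forest for the categorical part; copying from the seed is an illustrative simplification, and the parameter values are assumptions.

import numpy as np

def oversample_mixed(X_num, X_cat, n_new, k=5, rng=np.random.default_rng(0)):
    # X_num: continuous minority features, X_cat: categorical minority features.
    n = len(X_num)
    synth_num, synth_cat = [], []
    for _ in range(n_new):
        i = rng.integers(n)
        d = np.linalg.norm(X_num - X_num[i], axis=1)
        nn = np.argsort(d)[:k]                          # local neighbourhood
        mu = X_num[nn].mean(axis=0)
        cov = np.cov(X_num[nn], rowvar=False) + 1e-6 * np.eye(X_num.shape[1])
        synth_num.append(rng.multivariate_normal(mu, cov))
        synth_cat.append(X_cat[i])                      # coherent categorical part
    return np.array(synth_num), np.array(synth_cat)

X_num = np.random.randn(30, 3)                          # minority-class samples
X_cat = np.random.choice(["A", "B"], size=(30, 2))
new_num, new_cat = oversample_mixed(X_num, X_cat, n_new=10)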
|
2503.22734 | Chiara Francalanci | Michela Corvino, Filippo Daffin\`a, Chiara Francalanci, Paolo
Giacomazzi, Martina Magliani, Paolo Ravanelli, Torbjorn Stahl | A Methodology to extract Geo-Referenced Standard Routes from AIS Data | null | null | null | ITADATA/2024/02 | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Maritime AIS (Automatic Identification Systems) data serve as a valuable
resource for studying vessel behavior. This study proposes a methodology to
analyze routes between maritime points of interest and extract geo-referenced
standard routes, as maritime patterns of life, from raw AIS data. The
underlying assumption is that ships adhere to consistent patterns when
travelling in certain maritime areas due to geographical, environmental, or
economic factors. Deviations from these patterns may be attributed to weather
conditions, seasonality, or illicit activities. This enables maritime
surveillance authorities to analyze the navigational behavior between ports,
providing insights on vessel route patterns, possibly categorized by vessel
characteristics (type, flag, or size). Our methodological process begins by
segmenting AIS data into distinct routes using a finite state machine (FSM),
which describes routes as segments connecting pairs of points of interest. The
extracted segments are aggregated based on their departure and destination
ports and then modelled using iterative density-based clustering to connect
these ports. The clustering parameters are assigned manually to a sample and
then extended to the entire dataset using linear regression. Overall, the
approach proposed in this paper is unsupervised and does not require any ground
truth to be trained. The approach has been tested on a six-year
AIS dataset covering the Arctic region and the Europe, Middle East, and North
Africa areas. The total size of our dataset is 1.15 Tbytes. The approach has
proved effective in extracting standard routes, with less than 5% outliers,
mostly due to routes with either their departure or their destination port not
included in the test areas.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 13:29:41 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Corvino",
"Michela",
""
],
[
"Daffinà",
"Filippo",
""
],
[
"Francalanci",
"Chiara",
""
],
[
"Giacomazzi",
"Paolo",
""
],
[
"Magliani",
"Martina",
""
],
[
"Ravanelli",
"Paolo",
""
],
[
"Stahl",
"Torbjorn",
""
]
] | TITLE: A Methodology to extract Geo-Referenced Standard Routes from AIS Data
ABSTRACT: Maritime AIS (Automatic Identification Systems) data serve as a valuable
resource for studying vessel behavior. This study proposes a methodology to
analyze routes between maritime points of interest and extract geo-referenced
standard routes, as maritime patterns of life, from raw AIS data. The
underlying assumption is that ships adhere to consistent patterns when
travelling in certain maritime areas due to geographical, environmental, or
economic factors. Deviations from these patterns may be attributed to weather
conditions, seasonality, or illicit activities. This enables maritime
surveillance authorities to analyze the navigational behavior between ports,
providing insights on vessel route patterns, possibly categorized by vessel
characteristics (type, flag, or size). Our methodological process begins by
segmenting AIS data into distinct routes using a finite state machine (FSM),
which describes routes as segments connecting pairs of points of interest. The
extracted segments are aggregated based on their departure and destination
ports and then modelled using iterative density-based clustering to connect
these ports. The clustering parameters are assigned manually to a sample and
then extended to the entire dataset using linear regression. Overall, the
approach proposed in this paper is unsupervised and does not require any ground
truth to be trained. The approach has been tested on a six-year
AIS dataset covering the Arctic region and the Europe, Middle East, and North
Africa areas. The total size of our dataset is 1.15 Tbytes. The approach has
proved effective in extracting standard routes, with less than 5% outliers,
mostly due to routes with either their departure or their destination port not
included in the test areas.
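
The clustering stage can be sketched as follows: trajectories sharing a departure/destination port pair are resampled to a fixed number of points, flattened, and grouped with density-based clustering, with each cluster averaged into one geo-referenced standard route. The resampling scheme and parameter values are assumptions, not the paper's exact procedure.

import numpy as np
from sklearn.cluster import DBSCAN

def resample(traj, n=20):
    # Interpolate a (lat, lon) trajectory onto n evenly spaced points.
    t = np.linspace(0, 1, len(traj))
    ti = np.linspace(0, 1, n)
    return np.column_stack([np.interp(ti, t, traj[:, 0]),
                            np.interp(ti, t, traj[:, 1])])

def standard_routes(trajectories, eps=0.5, min_samples=3):
    X = np.array([resample(t).ravel() for t in trajectories])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    routes = {}
    for lab in set(labels) - {-1}:                      # -1 marks outlier voyages
        routes[lab] = X[labels == lab].mean(axis=0).reshape(-1, 2)
    return routes, labels

trajs = [np.cumsum(np.random.randn(50, 2) * 0.1, axis=0) for _ in range(20)]
routes, labels = standard_routes(trajs)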
|
2503.22736 | Kai North | Kai North and Christopher Ormerod | Cyborg Data: Merging Human with AI Generated Training Data | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated scoring (AS) systems used in large-scale assessment have
traditionally used small statistical models that require a large quantity of
hand-scored data to make accurate predictions, which can be time-consuming and
costly. Generative Large Language Models are trained on many tasks and have
shown impressive abilities to generalize to new tasks with little to no data.
While these models require substantially more computational power to make
predictions, they still require some fine-tuning to meet operational standards.
Evidence suggests that these models can exceed human-human levels of agreement
even when fine-tuned on small amounts of data. With this in mind, we propose a
model distillation pipeline in which a large generative model, a Teacher,
teaches a much smaller model, a Student. The Teacher, trained on a small subset
of the training data, is used to provide scores on the remaining training data,
which is then used to train the Student. We call the resulting dataset "Cyborg
Data", as it combines human and machine-scored responses. Our findings show
that Student models trained on "Cyborg Data" show performance comparable to
training on the entire dataset, while only requiring 10% of the original
hand-scored data.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 16:38:20 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"North",
"Kai",
""
],
[
"Ormerod",
"Christopher",
""
]
] | TITLE: Cyborg Data: Merging Human with AI Generated Training Data
ABSTRACT: Automated scoring (AS) systems used in large-scale assessment have
traditionally used small statistical models that require a large quantity of
hand-scored data to make accurate predictions, which can be time-consuming and
costly. Generative Large Language Models are trained on many tasks and have
shown impressive abilities to generalize to new tasks with little to no data.
While these models require substantially more computational power to make
predictions, they still require some fine-tuning to meet operational standards.
Evidence suggests that these models can exceed human-human levels of agreement
even when fine-tuned on small amounts of data. With this in mind, we propose a
model distillation pipeline in which a large generative model, a Teacher,
teaches a much smaller model, a Student. The Teacher, trained on a small subset
of the training data, is used to provide scores on the remaining training data,
which is then used to train the Student. We call the resulting dataset "Cyborg
Data", as it combines human and machine-scored responses. Our findings show
that Student models trained on "Cyborg Data" show performance comparable to
training on the entire dataset, while only requiring 10% of the original
hand-scored data.
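
The distillation pipeline can be sketched as follows, with a gradient-boosting model standing in for the LLM Teacher, a ridge model standing in for the Student, and synthetic features standing in for encoded responses; names and sizes are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                         # stand-in encoded responses
y = X[:, 0] * 2 + rng.normal(scale=0.1, size=1000)      # "human" scores

n_hand = 100                                            # ~10% hand-scored subset
teacher = GradientBoostingRegressor().fit(X[:n_hand], y[:n_hand])
pseudo = teacher.predict(X[n_hand:])                    # machine-scored remainder

# "Cyborg Data": human scores for the subset plus teacher scores for the rest.
y_cyborg = np.concatenate([y[:n_hand], pseudo])
student = Ridge().fit(X, y_cyborg)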
|
2503.22738 | Zhaorun Chen | Zhaorun Chen, Mintong Kang, Bo Li | ShieldAgent: Shielding Agents via Verifiable Safety Policy Reasoning | null | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous agents powered by foundation models have seen widespread adoption
across various real-world applications. However, they remain highly vulnerable
to malicious instructions and attacks, which can result in severe consequences
such as privacy breaches and financial losses. More critically, existing
guardrails for LLMs are not applicable due to the complex and dynamic nature of
agents. To tackle these challenges, we propose ShieldAgent, the first guardrail
agent designed to enforce explicit safety policy compliance for the action
trajectory of other protected agents through logical reasoning. Specifically,
ShieldAgent first constructs a safety policy model by extracting verifiable
rules from policy documents and structuring them into a set of action-based
probabilistic rule circuits. Given the action trajectory of the protected
agent, ShieldAgent retrieves relevant rule circuits and generates a shielding
plan, leveraging its comprehensive tool library and executable code for formal
verification. In addition, given the lack of guardrail benchmarks for agents,
we introduce ShieldAgent-Bench, a dataset with 3K safety-related pairs of agent
instructions and action trajectories, collected via SOTA attacks across 6 web
environments and 7 risk categories. Experiments show that ShieldAgent achieves
SOTA on ShieldAgent-Bench and three existing benchmarks, outperforming prior
methods by 11.3% on average with a high recall of 90.1%. Additionally,
ShieldAgent reduces API queries by 64.7% and inference time by 58.2%,
demonstrating its high precision and efficiency in safeguarding agents.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:58:40 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Zhaorun",
""
],
[
"Kang",
"Mintong",
""
],
[
"Li",
"Bo",
""
]
] | TITLE: ShieldAgent: Shielding Agents via Verifiable Safety Policy Reasoning
ABSTRACT: Autonomous agents powered by foundation models have seen widespread adoption
across various real-world applications. However, they remain highly vulnerable
to malicious instructions and attacks, which can result in severe consequences
such as privacy breaches and financial losses. More critically, existing
guardrails for LLMs are not applicable due to the complex and dynamic nature of
agents. To tackle these challenges, we propose ShieldAgent, the first guardrail
agent designed to enforce explicit safety policy compliance for the action
trajectory of other protected agents through logical reasoning. Specifically,
ShieldAgent first constructs a safety policy model by extracting verifiable
rules from policy documents and structuring them into a set of action-based
probabilistic rule circuits. Given the action trajectory of the protected
agent, ShieldAgent retrieves relevant rule circuits and generates a shielding
plan, leveraging its comprehensive tool library and executable code for formal
verification. In addition, given the lack of guardrail benchmarks for agents,
we introduce ShieldAgent-Bench, a dataset with 3K safety-related pairs of agent
instructions and action trajectories, collected via SOTA attacks across 6 web
environments and 7 risk categories. Experiments show that ShieldAgent achieves
SOTA on ShieldAgent-Bench and three existing benchmarks, outperforming prior
methods by 11.3% on average with a high recall of 90.1%. Additionally,
ShieldAgent reduces API queries by 64.7% and inference time by 58.2%,
demonstrating its high precision and efficiency in safeguarding agents.
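
A minimal sketch of the rule-based shielding step: each policy rule is a predicate over a proposed action with a weight, and the action is blocked when the weighted share of violated rules exceeds a threshold. Rule extraction from policy documents, probabilistic rule circuits, and formal verification are out of scope here; the rules and threshold are assumptions.

RULES = [
    ("no_payment_without_confirmation",
     lambda a: not (a["type"] == "payment" and not a.get("user_confirmed")), 0.9),
    ("no_external_data_upload",
     lambda a: a["type"] != "upload_user_data", 0.7),
]

def shield(action, threshold=0.5):
    # Weighted share of violated rules decides whether the action is blocked.
    violated = [w for _, ok, w in RULES if not ok(action)]
    risk = sum(violated) / sum(w for _, _, w in RULES)
    return ("block", risk) if risk >= threshold else ("allow", risk)

print(shield({"type": "payment", "user_confirmed": False}))
print(shield({"type": "search", "query": "weather"}))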
|
2503.22742 | Suhas K M | William Claster, Suhas KM, Dhairya Gundechia | Adaptive Integrated Layered Attention (AILA) | null | null | null | null | cs.LG cs.AI cs.CL cs.CV cs.IR cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Adaptive Integrated Layered Attention (AILA), a neural network
architecture that combines dense skip connections with different mechanisms for
adaptive feature reuse across network layers. We evaluate AILA on three
challenging tasks: price forecasting for various commodities and indices (S&P
500, Gold, US dollar Futures, Coffee, Wheat), image recognition using the
CIFAR-10 dataset, and sentiment analysis on the IMDB movie review dataset. In
all cases, AILA matches strong deep learning baselines (LSTMs, Transformers,
and ResNets), doing so at a fraction of the training and inference time.
Notably, we implement and test two versions of the model - AILA-Architecture 1,
which uses simple linear layers as the connection mechanism between layers, and
AILA-Architecture 2, which implements an attention mechanism to selectively
focus on outputs from previous layers. Both architectures are applied in a
single-task learning setting, with each model trained separately for individual
tasks. Results confirm that AILA's adaptive inter-layer connections yield
robust gains by flexibly reusing pertinent features at multiple network depths.
The AILA approach thus presents an extension to existing architectures,
improving long-range sequence modeling, image recognition with optimised
computational speed, and SOTA classification performance in practice.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 19:32:31 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Claster",
"William",
""
],
[
"KM",
"Suhas",
""
],
[
"Gundechia",
"Dhairya",
""
]
] | TITLE: Adaptive Integrated Layered Attention (AILA)
ABSTRACT: We propose Adaptive Integrated Layered Attention (AILA), a neural network
architecture that combines dense skip connections with different mechanisms for
adaptive feature reuse across network layers. We evaluate AILA on three
challenging tasks: price forecasting for various commodities and indices (S&P
500, Gold, US dollar Futures, Coffee, Wheat), image recognition using the
CIFAR-10 dataset, and sentiment analysis on the IMDB movie review dataset. In
all cases, AILA matches strong deep learning baselines (LSTMs, Transformers,
and ResNets), doing so at a fraction of the training and inference time.
Notably, we implement and test two versions of the model - AILA-Architecture 1,
which uses simple linear layers as the connection mechanism between layers, and
AILA-Architecture 2, which implements an attention mechanism to selectively
focus on outputs from previous layers. Both architectures are applied in a
single-task learning setting, with each model trained separately for individual
tasks. Results confirm that AILA's adaptive inter-layer connections yield
robust gains by flexibly reusing pertinent features at multiple network depths.
The AILA approach thus presents an extension to existing architectures,
improving long-range sequence modeling, image recognition with optimised
computational speed, and SOTA classification performance in practice.
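
A sketch of the layer-attention idea of AILA-Architecture 2 follows: each block attends over the outputs of all previous layers and mixes them into its input. The dimensions, the attention form, and the classification head are illustrative assumptions.

import torch
import torch.nn as nn

class LayerAttentionBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, x, history):
        h = torch.stack(history, dim=1)                 # (B, n_prev, dim)
        scores = torch.einsum("bd,bnd->bn", self.query(x), h)
        mix = (torch.softmax(scores, dim=1).unsqueeze(-1) * h).sum(dim=1)
        return self.ff(x + mix)                         # reuse selected earlier features

class AILALike(nn.Module):
    def __init__(self, dim=32, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList([LayerAttentionBlock(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, 10)

    def forward(self, x):
        history = [x]
        for blk in self.blocks:
            x = blk(x, history)
            history.append(x)
        return self.head(x)

logits = AILALike()(torch.randn(8, 32))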
|
2503.22743 | Chao Li | Alice Zhang, Chao Li | Adaptive State-Space Mamba for Real-Time Sensor Data Anomaly Detection | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-space modeling has emerged as a powerful paradigm for sequence analysis
in various tasks such as natural language processing, time-series forecasting,
and signal processing. In this work, we propose an \emph{Adaptive State-Space
Mamba} (\textbf{ASSM}) framework for real-time sensor data anomaly detection.
While state-space models have been previously employed for image processing
applications (e.g., style transfer \cite{wang2024stylemamba}), our approach
leverages the core idea of sequential hidden states to tackle a significantly
different domain: detecting anomalies on streaming sensor data.
In particular, we introduce an adaptive gating mechanism that dynamically
modulates the hidden state update based on contextual and learned statistical
cues. This design ensures that our model remains computationally efficient and
scalable, even under rapid data arrival rates. Extensive experiments on
real-world and synthetic sensor datasets demonstrate that our method achieves
superior detection performance compared to existing baselines. Our approach is
easily extensible to other time-series tasks that demand rapid and reliable
detection capabilities.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 21:37:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhang",
"Alice",
""
],
[
"Li",
"Chao",
""
]
] | TITLE: Adaptive State-Space Mamba for Real-Time Sensor Data Anomaly Detection
ABSTRACT: State-space modeling has emerged as a powerful paradigm for sequence analysis
in various tasks such as natural language processing, time-series forecasting,
and signal processing. In this work, we propose an \emph{Adaptive State-Space
Mamba} (\textbf{ASSM}) framework for real-time sensor data anomaly detection.
While state-space models have been previously employed for image processing
applications (e.g., style transfer \cite{wang2024stylemamba}), our approach
leverages the core idea of sequential hidden states to tackle a significantly
different domain: detecting anomalies on streaming sensor data.
In particular, we introduce an adaptive gating mechanism that dynamically
modulates the hidden state update based on contextual and learned statistical
cues. This design ensures that our model remains computationally efficient and
scalable, even under rapid data arrival rates. Extensive experiments on
real-world and synthetic sensor datasets demonstrate that our method achieves
superior detection performance compared to existing baselines. Our approach is
easily extensible to other time-series tasks that demand rapid and reliable
detection capabilities.
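
A toy sketch of an adaptively gated state update for streaming anomaly scoring: the hidden state is an exponentially weighted summary of the stream, and the gate shrinks the update when an observation deviates strongly from recent statistics. The gate form and the threshold are assumptions, not the paper's parameterization.

import numpy as np

def stream_anomaly_scores(x_stream, alpha=0.1):
    mean, var, state = 0.0, 1.0, 0.0
    scores = []
    for x in x_stream:
        z = abs(x - mean) / np.sqrt(var + 1e-8)         # contextual deviation cue
        gate = 1.0 / (1.0 + z)                          # damp updates on outliers
        state = (1 - alpha * gate) * state + alpha * gate * x
        scores.append(z)
        mean = (1 - alpha) * mean + alpha * x           # running statistics
        var = (1 - alpha) * var + alpha * (x - mean) ** 2
    return np.array(scores)

stream = np.concatenate([np.random.randn(200), np.random.randn(5) + 8])
scores = stream_anomaly_scores(stream)
anomalies = np.where(scores > 4)[0]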
|
2503.22744 | Chao Li | Emily Wang, Michael Chen, and Chao Li | Uncertainty-Aware Graph Self-Training with Expectation-Maximization
Regularization | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel \emph{uncertainty-aware graph
self-training} approach for semi-supervised node classification. Our method
introduces an Expectation-Maximization (EM) regularization scheme to
incorporate an uncertainty mechanism during pseudo-label generation and model
retraining. Unlike conventional graph self-training pipelines that rely on
fixed pseudo-labels, our approach iteratively refines label confidences with an
EM-inspired uncertainty measure. This ensures that the predictive model focuses
on reliable graph regions while gradually incorporating ambiguous nodes.
Inspired by prior work on uncertainty-aware self-training
techniques~\cite{wang2024uncertainty}, our framework is designed to handle
noisy graph structures and feature spaces more effectively. Through extensive
experiments on several benchmark graph datasets, we demonstrate that our method
outperforms strong baselines by a margin of up to 2.5\% in accuracy while
maintaining lower variance in performance across multiple runs.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 21:52:21 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Emily",
""
],
[
"Chen",
"Michael",
""
],
[
"Li",
"Chao",
""
]
] | TITLE: Uncertainty-Aware Graph Self-Training with Expectation-Maximization
Regularization
ABSTRACT: In this paper, we propose a novel \emph{uncertainty-aware graph
self-training} approach for semi-supervised node classification. Our method
introduces an Expectation-Maximization (EM) regularization scheme to
incorporate an uncertainty mechanism during pseudo-label generation and model
retraining. Unlike conventional graph self-training pipelines that rely on
fixed pseudo-labels, our approach iteratively refines label confidences with an
EM-inspired uncertainty measure. This ensures that the predictive model focuses
on reliable graph regions while gradually incorporating ambiguous nodes.
Inspired by prior work on uncertainty-aware self-training
techniques~\cite{wang2024uncertainty}, our framework is designed to handle
noisy graph structures and feature spaces more effectively. Through extensive
experiments on several benchmark graph datasets, we demonstrate that our method
outperforms strong baselines by a margin of up to 2.5\% in accuracy while
maintaining lower variance in performance across multiple runs.
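
A generic sketch of uncertainty-weighted self-training with EM-style refinement: the E-step scores unlabeled examples and turns confidence into sample weights, and the M-step refits on labeled plus confidently pseudo-labeled data. Graph structure is omitted here for brevity; in the paper both the model and the uncertainty estimates are graph-aware, and the classifier, threshold, and round count below are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def em_self_training(X_l, y_l, X_u, rounds=5, tau=0.8):
    clf = LogisticRegression(max_iter=1000).fit(X_l, y_l)
    for _ in range(rounds):
        proba = clf.predict_proba(X_u)                        # E-step: score unlabeled
        conf = proba.max(axis=1)
        keep = conf >= tau
        X_aug = np.vstack([X_l, X_u[keep]])
        y_aug = np.concatenate([y_l, proba[keep].argmax(axis=1)])
        w = np.concatenate([np.ones(len(y_l)), conf[keep]])   # confidence weights
        clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug, sample_weight=w)
    return clf

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5)); y = (X[:, 0] > 0).astype(int)
clf = em_self_training(X[:30], y[:30], X[30:])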
|
2503.22745 | Chao Li | Tom Liu, Anna Wu, Chao Li | Graph-Based Uncertainty-Aware Self-Training with Stochastic Node
Labeling | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-training has become a popular semi-supervised learning technique for
leveraging unlabeled data. However, the over-confidence of pseudo-labels
remains a key challenge. In this paper, we propose a novel \emph{graph-based
uncertainty-aware self-training} (GUST) framework to combat over-confidence in
node classification. Drawing inspiration from the uncertainty integration idea
introduced by Wang \emph{et al.}~\cite{wang2024uncertainty}, our method largely
diverges from previous self-training approaches by focusing on \emph{stochastic
node labeling} grounded in the graph topology. Specifically, we deploy a
Bayesian-inspired module to estimate node-level uncertainty, incorporate these
estimates into the pseudo-label generation process via an
expectation-maximization (EM)-like step, and iteratively update both node
embeddings and adjacency-based transformations. Experimental results on several
benchmark graph datasets demonstrate that our GUST framework achieves
state-of-the-art performance, especially in settings where labeled data is
extremely sparse.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 21:54:19 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Tom",
""
],
[
"Wu",
"Anna",
""
],
[
"Li",
"Chao",
""
]
] | TITLE: Graph-Based Uncertainty-Aware Self-Training with Stochastic Node
Labeling
ABSTRACT: Self-training has become a popular semi-supervised learning technique for
leveraging unlabeled data. However, the over-confidence of pseudo-labels
remains a key challenge. In this paper, we propose a novel \emph{graph-based
uncertainty-aware self-training} (GUST) framework to combat over-confidence in
node classification. Drawing inspiration from the uncertainty integration idea
introduced by Wang \emph{et al.}~\cite{wang2024uncertainty}, our method largely
diverges from previous self-training approaches by focusing on \emph{stochastic
node labeling} grounded in the graph topology. Specifically, we deploy a
Bayesian-inspired module to estimate node-level uncertainty, incorporate these
estimates into the pseudo-label generation process via an
expectation-maximization (EM)-like step, and iteratively update both node
embeddings and adjacency-based transformations. Experimental results on several
benchmark graph datasets demonstrate that our GUST framework achieves
state-of-the-art performance, especially in settings where labeled data is
extremely sparse.
|
2503.22746 | Sangjoon Park | Kyung Ho Lim, Ujin Kang, Xiang Li, Jin Sung Kim, Young-Chul Jung,
Sangjoon Park, Byung-Hoon Kim | Susceptibility of Large Language Models to User-Driven Factors in
Medical Queries | null | null | null | null | cs.CL cs.AI cs.CY | http://creativecommons.org/licenses/by-sa/4.0/ | Large language models (LLMs) are increasingly used in healthcare, but their
reliability is heavily influenced by user-driven factors such as question
phrasing and the completeness of clinical information. In this study, we
examined how misinformation framing, source authority, model persona, and
omission of key clinical details affect the diagnostic accuracy and reliability
of LLM outputs. We conducted two experiments: one introducing misleading
external opinions with varying assertiveness (perturbation test), and another
removing specific categories of patient information (ablation test). Using
public datasets (MedQA and Medbullets), we evaluated proprietary models
(GPT-4o, Claude 3.5 Sonnet, Claude 3.5 Haiku, Gemini 1.5 Pro, Gemini 1.5 Flash)
and open-source models (LLaMA 3 8B, LLaMA 3 Med42 8B, DeepSeek R1 8B). All
models were vulnerable to user-driven misinformation, with proprietary models
especially affected by definitive and authoritative language. Assertive tone
had the greatest negative impact on accuracy. In the ablation test, omitting
physical exam findings and lab results caused the most significant performance
drop. Although proprietary models had higher baseline accuracy, their
performance declined sharply under misinformation. These results highlight the
need for well-structured prompts and complete clinical context. Users should
avoid authoritative framing of misinformation and provide full clinical
details, especially for complex cases.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 23:28:21 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lim",
"Kyung Ho",
""
],
[
"Kang",
"Ujin",
""
],
[
"Li",
"Xiang",
""
],
[
"Kim",
"Jin Sung",
""
],
[
"Jung",
"Young-Chul",
""
],
[
"Park",
"Sangjoon",
""
],
[
"Kim",
"Byung-Hoon",
""
]
] | TITLE: Susceptibility of Large Language Models to User-Driven Factors in
Medical Queries
ABSTRACT: Large language models (LLMs) are increasingly used in healthcare, but their
reliability is heavily influenced by user-driven factors such as question
phrasing and the completeness of clinical information. In this study, we
examined how misinformation framing, source authority, model persona, and
omission of key clinical details affect the diagnostic accuracy and reliability
of LLM outputs. We conducted two experiments: one introducing misleading
external opinions with varying assertiveness (perturbation test), and another
removing specific categories of patient information (ablation test). Using
public datasets (MedQA and Medbullets), we evaluated proprietary models
(GPT-4o, Claude 3.5 Sonnet, Claude 3.5 Haiku, Gemini 1.5 Pro, Gemini 1.5 Flash)
and open-source models (LLaMA 3 8B, LLaMA 3 Med42 8B, DeepSeek R1 8B). All
models were vulnerable to user-driven misinformation, with proprietary models
especially affected by definitive and authoritative language. Assertive tone
had the greatest negative impact on accuracy. In the ablation test, omitting
physical exam findings and lab results caused the most significant performance
drop. Although proprietary models had higher baseline accuracy, their
performance declined sharply under misinformation. These results highlight the
need for well-structured prompts and complete clinical context. Users should
avoid authoritative framing of misinformation and provide full clinical
details, especially for complex cases.
|
2503.22748 | Gongzhu Yin | Gongzhu Yin, Hongli Zhang, Yi Luo, Yuchen Yang, Kun Lu, Chao Meng | Ignite Forecasting with SPARK: An Efficient Generative Framework for
Refining LLMs in Temporal Knowledge Graph Forecasting | To be published in the 30th International Conference on Database
Systems for Advanced Applications (DASFAA 2025) | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Temporal Knowledge Graph (TKG) forecasting is crucial for predicting future
events using historical data. With the surge of Large Language Models (LLMs),
recent studies have begun exploring their integration into TKG forecasting and
achieved some success. However, they still face limitations such as limited
input length, inefficient output generation, and resource-intensive refinement,
which undermine their performance and practical applicability. To address these
limitations, we introduce SPARK, a Sequence-level Proxy-Adapting framework for
Refining LLMs in TKG forecasting. Inspired by inference-time algorithms adopted
in controlling generation, SPARK offers a cost-effective, plug-and-play
solution through two key innovations: (1) Beam Sequence-Level Generation, which
reframes TKG forecasting as a top-K sequence-level generation task, using beam
search for efficiently generating next-entity distribution in a single forward
pass. (2) TKG Adapter for Refinement, which employs traditional TKG models as
trainable proxy adapters to leverage global graph information and refine LLM
outputs, overcoming both the input length and the resource-intensive
fine-tuning problems. Experiments across diverse datasets validate SPARK's
forecasting performance, robust generalization capabilities, and high
efficiency. We release source codes at https://github.com/yin-gz/SPARK.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 03:02:02 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yin",
"Gongzhu",
""
],
[
"Zhang",
"Hongli",
""
],
[
"Luo",
"Yi",
""
],
[
"Yang",
"Yuchen",
""
],
[
"Lu",
"Kun",
""
],
[
"Meng",
"Chao",
""
]
] | TITLE: Ignite Forecasting with SPARK: An Efficient Generative Framework for
Refining LLMs in Temporal Knowledge Graph Forecasting
ABSTRACT: Temporal Knowledge Graph (TKG) forecasting is crucial for predicting future
events using historical data. With the surge of Large Language Models (LLMs),
recent studies have begun exploring their integration into TKG forecasting and
achieved some success. However, they still face limitations such as limited
input length, inefficient output generation, and resource-intensive refinement,
which undermine their performance and practical applicability. To address these
limitations, we introduce SPARK, a Sequence-level Proxy-Adapting framework for
Refining LLMs in TKG forecasting. Inspired by inference-time algorithms adopted
in controlling generation, SPARK offers a cost-effective, plug-and-play
solution through two key innovations: (1) Beam Sequence-Level Generation, which
reframes TKG forecasting as a top-K sequence-level generation task, using beam
search for efficiently generating next-entity distribution in a single forward
pass. (2) TKG Adapter for Refinement, which employs traditional TKG models as
trainable proxy adapters to leverage global graph information and refine LLM
outputs, overcoming both the input length and the resource-intensive
fine-tuning problems. Experiments across diverse datasets validate SPARK's
forecasting performance, robust generalization capabilities, and high
efficiency. We release source codes at https://github.com/yin-gz/SPARK.
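
A toy sketch of beam sequence-level generation with proxy-adapter reranking: candidate token sequences are expanded with beam search under an LLM-like next-token distribution, and finished sequences are rescored with a blend of the LLM score and a proxy TKG-model score. Both scoring functions below are random stand-ins; only the control flow is meant to be illustrative.

import numpy as np

VOCAB = ["par", "is", "lon", "don", "<eos>"]

def llm_logprobs(prefix, rng):
    # Stand-in for the LLM's next-token distribution given a prefix.
    return np.log(rng.dirichlet(np.ones(len(VOCAB))))

def beam_generate(k=3, max_len=4, alpha=0.5, seed=0):
    rng = np.random.default_rng(seed)
    beams, finished = [([], 0.0)], []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            logp = llm_logprobs(seq, rng)
            for tok_id, lp in enumerate(logp):
                new_seq = seq + [VOCAB[tok_id]]
                (finished if VOCAB[tok_id] == "<eos>" else candidates).append(
                    (new_seq, score + lp))
        beams = sorted(candidates, key=lambda b: -b[1])[:k]   # keep top-K sequences
    finished += beams
    tkg_score = lambda seq: rng.normal()                      # proxy adapter stand-in
    reranked = sorted(finished, key=lambda b: -(b[1] + alpha * tkg_score(b[0])))
    return reranked[:k]

top_k = beam_generate()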
|
2503.22749 | Kanishka Ranaweera Mr. | Kanishka Ranaweera, Dinh C. Nguyen, Pubudu N. Pathirana, David Smith,
Ming Ding, Thierry Rakotoarivelo and Aruna Seneviratne | Adaptive Clipping for Privacy-Preserving Few-Shot Learning: Enhancing
Generalization with Limited Data | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the era of data-driven machine-learning applications, privacy concerns and
the scarcity of labeled data have become paramount challenges. These challenges
are particularly pronounced in the domain of few-shot learning, where the
ability to learn from limited labeled data is crucial. Privacy-preserving
few-shot learning algorithms have emerged as a promising solution to address
such pronounced challenges. However, it is well-known that privacy-preserving
techniques often lead to a drop in utility due to the fundamental trade-off
between data privacy and model performance. To enhance the utility of
privacy-preserving few-shot learning methods, we introduce a novel approach
called Meta-Clip. This technique is specifically designed for meta-learning
algorithms, including Differentially Private (DP) model-agnostic meta-learning,
DP-Reptile, and DP-MetaSGD algorithms, with the objective of balancing data
privacy preservation with learning capacity maximization. By dynamically
adjusting clipping thresholds during the training process, our Adaptive
Clipping method provides fine-grained control over the disclosure of sensitive
information, mitigating overfitting on small datasets and significantly
improving the generalization performance of meta-learning models. Through
comprehensive experiments on diverse benchmark datasets, we demonstrate the
effectiveness of our approach in minimizing utility degradation, showcasing a
superior privacy-utility trade-off compared to existing privacy-preserving
techniques. The adoption of Adaptive Clipping represents a substantial step
forward in the field of privacy-preserving few-shot learning, empowering the
development of secure and accurate models for real-world applications,
especially in scenarios where data availability is limited.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 05:14:18 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ranaweera",
"Kanishka",
""
],
[
"Nguyen",
"Dinh C.",
""
],
[
"Pathirana",
"Pubudu N.",
""
],
[
"Smith",
"David",
""
],
[
"Ding",
"Ming",
""
],
[
"Rakotoarivelo",
"Thierry",
""
],
[
"Seneviratne",
"Aruna",
""
]
] | TITLE: Adaptive Clipping for Privacy-Preserving Few-Shot Learning: Enhancing
Generalization with Limited Data
ABSTRACT: In the era of data-driven machine-learning applications, privacy concerns and
the scarcity of labeled data have become paramount challenges. These challenges
are particularly pronounced in the domain of few-shot learning, where the
ability to learn from limited labeled data is crucial. Privacy-preserving
few-shot learning algorithms have emerged as a promising solution to address
such pronounced challenges. However, it is well-known that privacy-preserving
techniques often lead to a drop in utility due to the fundamental trade-off
between data privacy and model performance. To enhance the utility of
privacy-preserving few-shot learning methods, we introduce a novel approach
called Meta-Clip. This technique is specifically designed for meta-learning
algorithms, including Differentially Private (DP) model-agnostic meta-learning,
DP-Reptile, and DP-MetaSGD algorithms, with the objective of balancing data
privacy preservation with learning capacity maximization. By dynamically
adjusting clipping thresholds during the training process, our Adaptive
Clipping method provides fine-grained control over the disclosure of sensitive
information, mitigating overfitting on small datasets and significantly
improving the generalization performance of meta-learning models. Through
comprehensive experiments on diverse benchmark datasets, we demonstrate the
effectiveness of our approach in minimizing utility degradation, showcasing a
superior privacy-utility trade-off compared to existing privacy-preserving
techniques. The adoption of Adaptive Clipping represents a substantial step
forward in the field of privacy-preserving few-shot learning, empowering the
development of secure and accurate models for real-world applications,
especially in scenarios where data availability is limited.
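
A sketch of the adaptive-clipping idea in a differentially private style update: per-example gradients are clipped to a threshold that tracks a quantile of recent gradient norms, then noised and averaged. The quantile rule, learning rate, and noise scale are illustrative assumptions, not the paper's schedule.

import numpy as np

def dp_step(w, per_example_grads, clip, noise_mult, rng):
    norms = np.linalg.norm(per_example_grads, axis=1)
    factors = np.minimum(1.0, clip / (norms + 1e-12))        # per-example clipping
    clipped = per_example_grads * factors[:, None]
    noise = rng.normal(scale=noise_mult * clip, size=w.shape)
    grad = (clipped.sum(axis=0) + noise) / len(per_example_grads)
    new_clip = np.quantile(norms, 0.5)                       # adapt threshold to data
    return w - 0.1 * grad, 0.9 * clip + 0.1 * new_clip

rng = np.random.default_rng(0)
w, clip = np.zeros(5), 1.0
for _ in range(100):
    grads = rng.normal(size=(32, 5))                         # stand-in per-example grads
    w, clip = dp_step(w, grads, clip, noise_mult=1.0, rng=rng)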
|
2503.22751 | Zahratu Shabrina Dr | Nicholas Robert Fisk, Matthew Ng Kok Ming, Zahratu Shabrina | Advancing Spatiotemporal Prediction using Artificial Intelligence:
Extending the Framework of Geographically and Temporally Weighted Neural
Network (GTWNN) for Differing Geographical and Temporal Contexts | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper aims at improving predictive crime models by extending the
mathematical framework of Artificial Neural Networks (ANNs) tailored to general
spatiotemporal problems and appropriately applying them. Recent advancements in
the geospatial-temporal modelling field have focused on the inclusion of
geographical weighting in their deep learning models to account for spatial
non-stationarity, which is often apparent in spatial data. We formulate a novel
semi-analytical approach to solving Geographically and Temporally Weighted
Regression (GTWR), and applying it to London crime data. The results produce
high-accuracy predictive evaluation scores that affirm the validity of the
assumptions and approximations in the approach. This paper presents
mathematical advances to the Geographically and Temporally Weighted Neural
Network (GTWNN) framework, which offers a novel contribution to the field.
Insights from past literature are harmoniously employed with the assumptions
and approximations to generate three mathematical extensions to GTWNN's
framework. Combinations of these extensions produce five novel ANNs, applied to
the London and Detroit datasets. The results suggest that one of the extensions
is redundant and is generally surpassed by another extension, which we term the
history-dependent module. The remaining extensions form three novel ANN designs
that pose potential GTWNN improvements. We evaluated the efficacy of various
models in both the London and Detroit crime datasets, highlighting the
importance of accounting for specific geographic and temporal characteristics
when selecting modelling strategies to improve model suitability. In general,
the proposed methods provide the foundations for a more context-aware,
accurate, and robust ANN approach in spatio-temporal modelling.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 06:45:59 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Fisk",
"Nicholas Robert",
""
],
[
"Ming",
"Matthew Ng Kok",
""
],
[
"Shabrina",
"Zahratu",
""
]
] | TITLE: Advancing Spatiotemporal Prediction using Artificial Intelligence:
Extending the Framework of Geographically and Temporally Weighted Neural
Network (GTWNN) for Differing Geographical and Temporal Contexts
ABSTRACT: This paper aims at improving predictive crime models by extending the
mathematical framework of Artificial Neural Networks (ANNs) tailored to general
spatiotemporal problems and appropriately applying them. Recent advancements in
the geospatial-temporal modelling field have focused on the inclusion of
geographical weighting in their deep learning models to account for spatial
non-stationarity, which is often apparent in spatial data. We formulate a novel
semi-analytical approach to solving Geographically and Temporally Weighted
Regression (GTWR), and applying it to London crime data. The results produce
high-accuracy predictive evaluation scores that affirm the validity of the
assumptions and approximations in the approach. This paper presents
mathematical advances to the Geographically and Temporally Weighted Neural
Network (GTWNN) framework, which offers a novel contribution to the field.
Insights from past literature are harmoniously employed with the assumptions
and approximations to generate three mathematical extensions to GTWNN's
framework. Combinations of these extensions produce five novel ANNs, applied to
the London and Detroit datasets. The results suggest that one of the extensions
is redundant and is generally surpassed by another extension, which we term the
history-dependent module. The remaining extensions form three novel ANN designs
that pose potential GTWNN improvements. We evaluated the efficacy of various
models in both the London and Detroit crime datasets, highlighting the
importance of accounting for specific geographic and temporal characteristics
when selecting modelling strategies to improve model suitability. In general,
the proposed methods provide the foundations for a more context-aware,
accurate, and robust ANN approach in spatio-temporal modelling.
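
A minimal GTWR-style sketch: a locally weighted least-squares fit at a query location and time, with weights from a Gaussian kernel over combined spatial and temporal distance. The bandwidths, kernel form, and data below are illustrative assumptions.

import numpy as np

def gtwr_predict(coords, times, X, y, q_coord, q_time, x_q, hs=1.0, ht=1.0):
    # Space-time Gaussian kernel weights around the query point.
    d2 = (np.linalg.norm(coords - q_coord, axis=1) / hs) ** 2 \
         + ((times - q_time) / ht) ** 2
    w = np.exp(-0.5 * d2)
    Xb = np.hstack([np.ones((len(X), 1)), X])
    W = np.diag(w)
    beta = np.linalg.solve(Xb.T @ W @ Xb + 1e-6 * np.eye(Xb.shape[1]),
                           Xb.T @ W @ y)                 # local coefficients
    return np.concatenate([[1.0], x_q]) @ beta

rng = np.random.default_rng(0)
coords, times = rng.uniform(size=(200, 2)), rng.uniform(size=200)
X = rng.normal(size=(200, 3)); y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200)
pred = gtwr_predict(coords, times, X, y, np.array([0.5, 0.5]), 0.5, np.zeros(3))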
|
2503.22752 | Ngoc Luyen LE | Ngoc Luyen Le (Heudiasyc), Marie-H\'el\`ene Abel (Heudiasyc) | From Individual to Group: Developing a Context-Aware Multi-Criteria
Group Recommender System | The 16th International Conference on Management of Digital
EcoSystems, Nov 2024, Naples, Italy | null | null | null | cs.LG cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Group decision-making is becoming increasingly common in areas such as
education, dining, travel, and finance, where collaborative choices must
balance diverse individual preferences. While conventional recommender systems
are effective in personalization, they fall short in group settings due to
their inability to manage conflicting preferences, contextual factors, and
multiple evaluation criteria. This study presents the development of a
Context-Aware Multi-Criteria Group Recommender System (CA-MCGRS) designed to
address these challenges by integrating contextual factors and multiple
criteria to enhance recommendation accuracy. By leveraging a Multi-Head
Attention mechanism, our model dynamically weighs the importance of different
features. Experiments conducted on an educational dataset with varied ratings
and contextual variables demonstrate that CA-MCGRS consistently outperforms
other approaches across four scenarios. Our findings underscore the importance
of incorporating context and multi-criteria evaluations to improve group
recommendations, offering valuable insights for developing more effective group
recommender systems.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 09:01:45 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Le",
"Ngoc Luyen",
"",
"Heudiasyc"
],
[
"Abel",
"Marie-Hélène",
"",
"Heudiasyc"
]
] | TITLE: From Individual to Group: Developing a Context-Aware Multi-Criteria
Group Recommender System
ABSTRACT: Group decision-making is becoming increasingly common in areas such as
education, dining, travel, and finance, where collaborative choices must
balance diverse individual preferences. While conventional recommender systems
are effective in personalization, they fall short in group settings due to
their inability to manage conflicting preferences, contextual factors, and
multiple evaluation criteria. This study presents the development of a
Context-Aware Multi-Criteria Group Recommender System (CA-MCGRS) designed to
address these challenges by integrating contextual factors and multiple
criteria to enhance recommendation accuracy. By leveraging a Multi-Head
Attention mechanism, our model dynamically weighs the importance of different
features. Experiments conducted on an educational dataset with varied ratings
and contextual variables demonstrate that CA-MCGRS consistently outperforms
other approaches across four scenarios. Our findings underscore the importance
of incorporating context and multi-criteria evaluations to improve group
recommendations, offering valuable insights for developing more effective group
recommender systems.
|
2503.22753 | Tisha Ghosh | Tisha Ghosh | Combating the Bullwhip Effect in Rival Online Food Delivery Platforms
Using Deep Learning | null | null | null | null | cs.LG cs.CY stat.AP stat.ML | http://creativecommons.org/licenses/by-sa/4.0/ | The wastage of perishable items has led to significant health and economic
crises, increasing business uncertainty and fluctuating customer demand. This
issue is worsened by online food delivery services, where frequent and
unpredictable orders create inefficiencies in supply chain management,
contributing to the bullwhip effect. This effect results in stockouts, excess
inventory, and inefficiencies. Accurate demand forecasting helps stabilize
inventory, optimize supplier orders, and reduce waste. This paper presents a
Third-Party Logistics (3PL) supply chain model involving restaurants, online
food apps, and customers, along with a deep learning-based demand forecasting
model using a two-phase Long Short-Term Memory (LSTM) network.
Phase one, intra-day forecasting, captures short-term variations, while phase
two, daily forecasting, predicts overall demand. A two-year dataset from
January 2023 to January 2025 from Swiggy and Zomato is used, employing discrete
event simulation and grid search for optimal LSTM hyperparameters. The proposed
method is evaluated using RMSE, MAE, and R-squared score, with R-squared as the
primary accuracy measure. Phase one achieves an R-squared score of 0.69 for
Zomato and 0.71 for Swiggy with a training time of 12 minutes, while phase two
improves to 0.88 for Zomato and 0.90 for Swiggy with a training time of 8
minutes.
To mitigate demand fluctuations, restaurant inventory is dynamically managed
using the newsvendor model, adjusted based on forecasted demand. The proposed
framework significantly reduces the bullwhip effect, improving forecasting
accuracy and supply chain efficiency. For phase one, supply chain instability
decreases from 2.61 to 0.96, and for phase two, from 2.19 to 0.80. This
demonstrates the model's effectiveness in minimizing food waste and maintaining
optimal restaurant inventory levels.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 10:22:52 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ghosh",
"Tisha",
""
]
] | TITLE: Combating the Bullwhip Effect in Rival Online Food Delivery Platforms
Using Deep Learning
ABSTRACT: The wastage of perishable items has led to significant health and economic
crises, increasing business uncertainty and fluctuating customer demand. This
issue is worsened by online food delivery services, where frequent and
unpredictable orders create inefficiencies in supply chain management,
contributing to the bullwhip effect. This effect results in stockouts, excess
inventory, and inefficiencies. Accurate demand forecasting helps stabilize
inventory, optimize supplier orders, and reduce waste. This paper presents a
Third-Party Logistics (3PL) supply chain model involving restaurants, online
food apps, and customers, along with a deep learning-based demand forecasting
model using a two-phase Long Short-Term Memory (LSTM) network.
Phase one, intra-day forecasting, captures short-term variations, while phase
two, daily forecasting, predicts overall demand. A two-year dataset from
January 2023 to January 2025 from Swiggy and Zomato is used, employing discrete
event simulation and grid search for optimal LSTM hyperparameters. The proposed
method is evaluated using RMSE, MAE, and R-squared score, with R-squared as the
primary accuracy measure. Phase one achieves an R-squared score of 0.69 for
Zomato and 0.71 for Swiggy with a training time of 12 minutes, while phase two
improves to 0.88 for Zomato and 0.90 for Swiggy with a training time of 8
minutes.
To mitigate demand fluctuations, restaurant inventory is dynamically managed
using the newsvendor model, adjusted based on forecasted demand. The proposed
framework significantly reduces the bullwhip effect, improving forecasting
accuracy and supply chain efficiency. For phase one, supply chain instability
decreases from 2.61 to 0.96, and for phase two, from 2.19 to 0.80. This
demonstrates the model's effectiveness in minimizing food waste and maintaining
optimal restaurant inventory levels.
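
The inventory-adjustment step can be sketched with the classical newsvendor rule: the forecasted demand distribution and the critical ratio of underage to overage cost set the order-up-to level. The forecast values and costs below are illustrative; in the paper the forecasts come from the two-phase LSTM.

import numpy as np
from scipy.stats import norm

def newsvendor_order(mu, sigma, cost_under, cost_over):
    # Order-up-to level at the critical-ratio quantile of forecasted demand.
    critical_ratio = cost_under / (cost_under + cost_over)
    return mu + sigma * norm.ppf(critical_ratio)

forecast_mu, forecast_sigma = 120.0, 15.0               # e.g. forecasted daily orders
order_qty = newsvendor_order(forecast_mu, forecast_sigma,
                             cost_under=5.0, cost_over=2.0)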
|
2503.22754 | Moncef Garouani | Moncef Garouani, Franck Ravat, Nathalie Valles-Parlangeau | Model Lake: a New Alternative for Machine Learning Models Management and
Governance | null | null | 10.1007/978-981-96-0573-6_10 | null | cs.LG cs.AI cs.SE | http://creativecommons.org/licenses/by/4.0/ | The rise of artificial intelligence and data science across industries
underscores the pressing need for effective management and governance of
machine learning (ML) models. Traditional approaches to ML models management
often involve disparate storage systems and lack standardized methodologies for
versioning, audit, and re-use. Inspired by data lake concepts, this paper
develops the concept of ML Model Lake as a centralized management framework for
datasets, codes, and models within organizational environments. We provide an
in-depth exploration of the Model Lake concept, delineating its architectural
foundations, key components, operational benefits, and practical challenges. We
discuss the transformative potential of adopting a Model Lake approach, such as
enhanced model lifecycle management, discovery, audit, and reusability.
Furthermore, we illustrate a real-world application of Model Lake and its
transformative impact on data, code and model management practices.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 10:35:51 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Garouani",
"Moncef",
""
],
[
"Ravat",
"Franck",
""
],
[
"Valles-Parlangeau",
"Nathalie",
""
]
] | TITLE: Model Lake: a New Alternative for Machine Learning Models Management and
Governance
ABSTRACT: The rise of artificial intelligence and data science across industries
underscores the pressing need for effective management and governance of
machine learning (ML) models. Traditional approaches to ML models management
often involve disparate storage systems and lack standardized methodologies for
versioning, audit, and re-use. Inspired by data lake concepts, this paper
develops the concept of ML Model Lake as a centralized management framework for
datasets, codes, and models within organizational environments. We provide an
in-depth exploration of the Model Lake concept, delineating its architectural
foundations, key components, operational benefits, and practical challenges. We
discuss the transformative potential of adopting a Model Lake approach, such as
enhanced model lifecycle management, discovery, audit, and reusability.
Furthermore, we illustrate a real-world application of Model Lake and its
transformative impact on data, code and model management practices.
|
2503.22758 | Han Siyu | Siyu Han, Lihan Jia, Lanzhe Guo | Multiple Embeddings for Quantum Machine Learning | null | null | null | null | quant-ph cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work addresses the insufficient fitting
capability of current quantum machine learning methods, which results from the
over-reliance on a single data embedding strategy. We propose a novel quantum
machine learning framework that integrates multiple quantum data embedding
strategies, allowing the model to fully exploit the diversity of quantum
computing when processing various datasets. Experimental results validate the
effectiveness of the proposed framework, demonstrating significant improvements
over existing state-of-the-art methods and achieving superior performance in
practical applications.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 15:16:53 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Han",
"Siyu",
""
],
[
"Jia",
"Lihan",
""
],
[
"Guo",
"Lanzhe",
""
]
] | TITLE: Multiple Embeddings for Quantum Machine Learning
ABSTRACT: This work addresses the insufficient fitting
capability of current quantum machine learning methods, which results from the
over-reliance on a single data embedding strategy. We propose a novel quantum
machine learning framework that integrates multiple quantum data embedding
strategies, allowing the model to fully exploit the diversity of quantum
computing when processing various datasets. Experimental results validate the
effectiveness of the proposed framework, demonstrating significant improvements
over existing state-of-the-art methods and achieving superior performance in
practical applications.
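
A classical toy sketch of combining two embedding strategies: an angle-style product-state embedding and an amplitude-style normalised embedding, whose fidelity kernels are averaged. The actual circuits and combination rule of the proposed framework are not specified here; everything below is an assumption for illustration.

import numpy as np

def angle_state(x):
    # Product state over one qubit per feature, rotated by the feature value.
    states = [np.array([np.cos(v / 2), np.sin(v / 2)]) for v in x]
    out = states[0]
    for s in states[1:]:
        out = np.kron(out, s)
    return out

def amplitude_state(x):
    v = np.asarray(x, dtype=float)
    return v / np.linalg.norm(v)

def combined_kernel(x1, x2):
    k_angle = np.abs(angle_state(x1) @ angle_state(x2)) ** 2
    k_amp = np.abs(amplitude_state(x1) @ amplitude_state(x2)) ** 2
    return 0.5 * (k_angle + k_amp)                      # average of the two fidelities

k = combined_kernel([0.1, 0.7, 1.2, 0.4], [0.2, 0.5, 1.0, 0.3])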
|
2503.22760 | Md Rafiqul Islam Rabin | Rafiqul Rabin, Sean McGregor, Nick Judd | Malicious and Unintentional Disclosure Risks in Large Language Models
for Code Generation | The 3rd International Workshop on Mining Software Repositories
Applications for Privacy and Security (MSR4P&S), co-located with SANER 2025 | null | null | null | cs.CR cs.LG cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores the risk that a large language model (LLM) trained for
code generation on data mined from software repositories will generate content
that discloses sensitive information included in its training data. We
decompose this risk, known in the literature as ``unintended memorization,''
into two components: unintentional disclosure (where an LLM presents secrets to
users without the user seeking them out) and malicious disclosure (where an LLM
presents secrets to an attacker equipped with partial knowledge of the training
data). We observe that while existing work mostly anticipates malicious
disclosure, unintentional disclosure is also a concern. We describe methods to
assess unintentional and malicious disclosure risks side-by-side across
different releases of training datasets and models. We demonstrate these
methods through an independent assessment of the Open Language Model (OLMo)
family of models and its Dolma training datasets. Our results show, first, that
changes in data source and processing are associated with substantial changes
in unintended memorization risk; second, that the same set of operational
changes may increase one risk while mitigating another; and, third, that the
risk of disclosing sensitive information varies not only by prompt strategies
or test datasets but also by the types of sensitive information. These
contributions rely on data mining to enable greater privacy and security
testing required for the LLM training data supply chain.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 16:09:23 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Rabin",
"Rafiqul",
""
],
[
"McGregor",
"Sean",
""
],
[
"Judd",
"Nick",
""
]
] | TITLE: Malicious and Unintentional Disclosure Risks in Large Language Models
for Code Generation
ABSTRACT: This paper explores the risk that a large language model (LLM) trained for
code generation on data mined from software repositories will generate content
that discloses sensitive information included in its training data. We
decompose this risk, known in the literature as ``unintended memorization,''
into two components: unintentional disclosure (where an LLM presents secrets to
users without the user seeking them out) and malicious disclosure (where an LLM
presents secrets to an attacker equipped with partial knowledge of the training
data). We observe that while existing work mostly anticipates malicious
disclosure, unintentional disclosure is also a concern. We describe methods to
assess unintentional and malicious disclosure risks side-by-side across
different releases of training datasets and models. We demonstrate these
methods through an independent assessment of the Open Language Model (OLMo)
family of models and its Dolma training datasets. Our results show, first, that
changes in data source and processing are associated with substantial changes
in unintended memorization risk; second, that the same set of operational
changes may increase one risk while mitigating another; and, third, that the
risk of disclosing sensitive information varies not only by prompt strategies
or test datasets but also by the types of sensitive information. These
contributions rely on data mining to enable greater privacy and security
testing required for the LLM training data supply chain.
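To make the two disclosure modes concrete, below is a minimal sketch (not the
paper's actual methodology) of how they could be probed: unintentional
disclosure by scanning completions of ordinary prompts for secret-like strings,
and malicious disclosure by checking whether a known training-data prefix
elicits its secret suffix. The `generate` stub and the regex patterns are
illustrative assumptions.

```python
import re

# Hypothetical stand-in for a code LLM's sampling API; not part of OLMo or any
# specific library. Replace with the model call actually available.
def generate(prompt: str, n_samples: int = 5) -> list[str]:
    raise NotImplementedError("plug in your model's generation call here")

# Secret-like patterns (AWS-style key ids, inline API keys); the exact regexes
# are illustrative assumptions, not the paper's detectors.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
]

def unintentional_disclosure_rate(benign_prompts: list[str]) -> float:
    """Fraction of ordinary prompts whose completions contain secret-like strings."""
    hits = 0
    for prompt in benign_prompts:
        completions = generate(prompt)
        if any(p.search(c) for c in completions for p in SECRET_PATTERNS):
            hits += 1
    return hits / max(len(benign_prompts), 1)

def malicious_disclosure_rate(secrets: list[tuple[str, str]]) -> float:
    """Fraction of (prefix, suffix) training-data secrets the model completes
    when an attacker supplies the prefix."""
    leaked = 0
    for prefix, suffix in secrets:
        completions = generate(prefix)
        if any(suffix in c for c in completions):
            leaked += 1
    return leaked / max(len(secrets), 1)
```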
|
2503.22773 | Abdul Jabbar | Abdul Jabbar, Ethan Grooby, Jack Crozier, Alexander Gallon, Vivian
Pham, Khawza I Ahmad, Md Hassanuzzaman, Raqibul Mostafa, Ahsan H. Khandoker,
Faezeh Marzbanrad | Congenital Heart Disease Classification Using Phonocardiograms: A
Scalable Screening Tool for Diverse Environments | 12 pages, 6 figures | null | null | null | eess.AS cs.LG | http://creativecommons.org/licenses/by/4.0/ | Congenital heart disease (CHD) is a critical condition that demands early
detection, particularly in infancy and childhood. This study presents a deep
learning model designed to detect CHD using phonocardiogram (PCG) signals, with
a focus on its application in global health. We evaluated our model on several
datasets, including the primary dataset from Bangladesh, achieving a high
accuracy of 94.1%, sensitivity of 92.7%, and specificity of 96.3%. The model also
demonstrated robust performance on the public PhysioNet Challenge 2022 and 2016
datasets, underscoring its generalizability to diverse populations and data
sources. We assessed the performance of the algorithm for single and multiple
auscultation sites on the chest, demonstrating that the model maintains over
85% accuracy even when using a single location. Furthermore, our algorithm was
able to achieve an accuracy of 80% on low-quality recordings, which
cardiologists deemed non-diagnostic. This research suggests that an AI-driven
digital stethoscope could serve as a cost-effective screening tool for CHD in
resource-limited settings, enhancing clinical decision support and ultimately
improving patient outcomes.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 05:47:44 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Jabbar",
"Abdul",
""
],
[
"Grooby",
"Ethan",
""
],
[
"Crozier",
"Jack",
""
],
[
"Gallon",
"Alexander",
""
],
[
"Pham",
"Vivian",
""
],
[
"Ahmad",
"Khawza I",
""
],
[
"Hassanuzzaman",
"Md",
""
],
[
"Mostafa",
"Raqibul",
""
],
[
"Khandoker",
"Ahsan H.",
""
],
[
"Marzbanrad",
"Faezeh",
""
]
] | TITLE: Congenital Heart Disease Classification Using Phonocardiograms: A
Scalable Screening Tool for Diverse Environments
ABSTRACT: Congenital heart disease (CHD) is a critical condition that demands early
detection, particularly in infancy and childhood. This study presents a deep
learning model designed to detect CHD using phonocardiogram (PCG) signals, with
a focus on its application in global health. We evaluated our model on several
datasets, including the primary dataset from Bangladesh, achieving a high
accuracy of 94.1%, sensitivity of 92.7%, and specificity of 96.3%. The model also
demonstrated robust performance on the public PhysioNet Challenge 2022 and 2016
datasets, underscoring its generalizability to diverse populations and data
sources. We assessed the performance of the algorithm for single and multiple
auscultation sites on the chest, demonstrating that the model maintains over
85% accuracy even when using a single location. Furthermore, our algorithm was
able to achieve an accuracy of 80% on low-quality recordings, which
cardiologists deemed non-diagnostic. This research suggests that an AI-driven
digital stethoscope could serve as a cost-effective screening tool for CHD in
resource-limited settings, enhancing clinical decision support and ultimately
improving patient outcomes.
|
2503.22777 | Peng Zhang | Peng Zhang and Branson Blaylock | A reduced-scale autonomous morphing vehicle prototype with enhanced
aerodynamic efficiency | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Road vehicles contribute to significant levels of greenhouse gas (GHG)
emissions. A potential strategy for improving their aerodynamic efficiency and
reducing emissions is through active adaptation of their exterior shapes to the
aerodynamic environment. In this study, we present a reduced-scale morphing
vehicle prototype capable of actively interacting with the aerodynamic
environment to enhance fuel economy. Morphing is accomplished by retrofitting a
deformable structure actively actuated by built-in motors. The morphing vehicle
prototype is integrated with an optimization algorithm that can autonomously
identify the structural shape that minimizes aerodynamic drag. The performance
of the morphing vehicle prototype is investigated through an extensive
experimental campaign in a large-scale wind tunnel facility. The autonomous
optimization algorithm identifies an optimal morphing shape that can elicit an
8.5% reduction in the mean drag force. Our experiments provide a comprehensive
dataset that validates the efficiency of shape morphing, demonstrating a clear
and consistent decrease in the drag force as the vehicle transitions from a
suboptimal to the optimal shape. Insights gained from experiments on
scaled-down models provide valuable guidelines for the design of full-size
morphing vehicles, which could lead to appreciable energy savings and
reductions in GHG emissions. This study highlights the feasibility and benefits
of real-time shape morphing under conditions representative of realistic road
environments, paving the way for the realization of full-scale morphing
vehicles with enhanced aerodynamic efficiency and reduced GHG emissions.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 15:55:33 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhang",
"Peng",
""
],
[
"Blaylock",
"Branson",
""
]
] | TITLE: A reduced-scale autonomous morphing vehicle prototype with enhanced
aerodynamic efficiency
ABSTRACT: Road vehicles contribute to significant levels of greenhouse gas (GHG)
emissions. A potential strategy for improving their aerodynamic efficiency and
reducing emissions is through active adaptation of their exterior shapes to the
aerodynamic environment. In this study, we present a reduced-scale morphing
vehicle prototype capable of actively interacting with the aerodynamic
environment to enhance fuel economy. Morphing is accomplished by retrofitting a
deformable structure actively actuated by built-in motors. The morphing vehicle
prototype is integrated with an optimization algorithm that can autonomously
identify the structural shape that minimizes aerodynamic drag. The performance
of the morphing vehicle prototype is investigated through an extensive
experimental campaign in a large-scale wind tunnel facility. The autonomous
optimization algorithm identifies an optimal morphing shape that can elicit an
8.5% reduction in the mean drag force. Our experiments provide a comprehensive
dataset that validates the efficiency of shape morphing, demonstrating a clear
and consistent decrease in the drag force as the vehicle transitions from a
suboptimal to the optimal shape. Insights gained from experiments on
scaled-down models provide valuable guidelines for the design of full-size
morphing vehicles, which could lead to appreciable energy savings and
reductions in GHG emissions. This study highlights the feasibility and benefits
of real-time shape morphing under conditions representative of realistic road
environments, paving the way for the realization of full-scale morphing
vehicles with enhanced aerodynamic efficiency and reduced GHG emissions.
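The abstract does not specify the optimization algorithm; purely as an
illustration, a derivative-free search of the kind such a rig could run is
sketched below, with `measure_mean_drag` standing in for the wind-tunnel force
measurement. Actuator count, bounds, and the hill-climbing strategy are
assumptions.

```python
import random

def measure_mean_drag(actuator_positions: list[float]) -> float:
    # Placeholder for a time-averaged force-balance reading; in the real rig
    # this would command the built-in motors and sample the drag sensor.
    raise NotImplementedError

def greedy_shape_search(n_actuators: int = 6, iterations: int = 100,
                        step: float = 0.05, lo: float = 0.0, hi: float = 1.0):
    """Simple derivative-free hill-climbing over normalized actuator positions."""
    best = [0.5] * n_actuators
    best_drag = measure_mean_drag(best)
    for _ in range(iterations):
        candidate = [min(hi, max(lo, x + random.uniform(-step, step))) for x in best]
        drag = measure_mean_drag(candidate)
        if drag < best_drag:          # keep the new shape only if drag decreases
            best, best_drag = candidate, drag
    return best, best_drag
```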
|
2503.22810 | Philippe Talatchian | Jonathan Peters and Philippe Talatchian | Harnessing uncertainty when learning through Equilibrium Propagation in
neural networks | 8 pages, 5 figures | null | null | null | cs.LG cond-mat.mtrl-sci physics.app-ph | http://creativecommons.org/licenses/by/4.0/ | Equilibrium Propagation (EP) is a supervised learning algorithm that trains
network parameters using local neuronal activity. This is in stark contrast to
backpropagation, where updating the parameters of the network requires
significant data shuffling. Avoiding data movement makes EP particularly
compelling as a learning framework for energy-efficient training on
neuromorphic systems. In this work, we assess the ability of EP to learn on
hardware that contains physical uncertainties. This is particularly important
for researchers concerned with hardware implementations of self-learning
systems that utilize EP. Our results demonstrate that deep, multi-layer neural
network architectures can be trained successfully using EP in the presence of
finite uncertainties, up to a critical limit. This limit is independent of the
training dataset, and can be scaled through sampling the network according to
the central limit theorem. Additionally, we demonstrate improved model
convergence and performance for finite levels of uncertainty on the MNIST,
KMNIST and FashionMNIST datasets. Optimal performance is found for networks
trained with uncertainties close to the critical limit. Our research supports
future work to build self-learning hardware in situ with EP.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 18:16:39 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Peters",
"Jonathan",
""
],
[
"Talatchian",
"Philippe",
""
]
] | TITLE: Harnessing uncertainty when learning through Equilibrium Propagation in
neural networks
ABSTRACT: Equilibrium Propagation (EP) is a supervised learning algorithm that trains
network parameters using local neuronal activity. This is in stark contrast to
backpropagation, where updating the parameters of the network requires
significant data shuffling. Avoiding data movement makes EP particularly
compelling as a learning framework for energy-efficient training on
neuromorphic systems. In this work, we assess the ability of EP to learn on
hardware that contains physical uncertainties. This is particularly important
for researchers concerned with hardware implementations of self-learning
systems that utilize EP. Our results demonstrate that deep, multi-layer neural
network architectures can be trained successfully using EP in the presence of
finite uncertainties, up to a critical limit. This limit is independent of the
training dataset, and can be scaled through sampling the network according to
the central limit theorem. Additionally, we demonstrate improved model
convergence and performance for finite levels of uncertainty on the MNIST,
KMNIST and FashionMNIST datasets. Optimal performance is found for networks
trained with uncertainties close to the critical limit. Our research supports
future work to build self-learning hardware in situ with EP.
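For readers unfamiliar with EP, the sketch below implements a standard
single-hidden-layer Equilibrium Propagation update (free phase, weakly nudged
phase, contrastive local weight update) and adds Gaussian noise to each applied
update as a crude stand-in for device-level uncertainty. It is a minimal
illustration of the algorithm the paper builds on, not the authors' multi-layer
setup; layer shapes and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(s):                       # hard-sigmoid activation used in standard EP
    return np.clip(s, 0.0, 1.0)

def drho(s):
    return ((s > 0.0) & (s < 1.0)).astype(float)

def relax(x, target, W1, W2, beta, steps=30, dt=0.5):
    """Let hidden/output states settle toward a fixed point of the (nudged) energy."""
    h = np.zeros(W1.shape[1])
    y = np.zeros(W2.shape[1])
    for _ in range(steps):
        dh = -h + drho(h) * (rho(x) @ W1 + rho(y) @ W2.T)
        dy = -y + drho(y) * (rho(h) @ W2)
        if beta != 0.0:
            dy += beta * (target - y)          # weak clamping toward the label
        h, y = h + dt * dh, y + dt * dy
    return h, y

def ep_update(x, target, W1, W2, beta=0.5, lr=0.05, noise=0.0):
    h0, y0 = relax(x, target, W1, W2, beta=0.0)    # free phase
    hb, yb = relax(x, target, W1, W2, beta=beta)   # nudged phase
    dW1 = (np.outer(rho(x), rho(hb)) - np.outer(rho(x), rho(h0))) / beta
    dW2 = (np.outer(rho(hb), rho(yb)) - np.outer(rho(h0), rho(y0))) / beta
    # Device-level uncertainty: corrupt each applied update with Gaussian noise.
    W1 += lr * (dW1 + noise * rng.standard_normal(dW1.shape))
    W2 += lr * (dW2 + noise * rng.standard_normal(dW2.shape))
    return W1, W2
```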
|
2503.22828 | Alexander Gurung | Alexander Gurung, Mirella Lapata | Learning to Reason for Long-Form Story Generation | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Generating high-quality stories spanning thousands of tokens requires
competency across a variety of skills, from tracking plot and character arcs to
keeping a consistent and engaging style. Due to the difficulty of sourcing
labeled datasets and precise quality measurements, most work using large
language models (LLMs) for long-form story generation uses combinations of
hand-designed prompting techniques to elicit author-like behavior. This is a
manual process that is highly dependent on the specific story-generation task.
Motivated by the recent success of applying RL with Verifiable Rewards to
domains like math and coding, we propose a general story-generation task
(Next-Chapter Prediction) and a reward formulation (Verified Rewards via
Completion Likelihood Improvement) that allows us to use an unlabeled book
dataset as a learning signal for reasoning. We learn to reason over a story's
condensed information and generate a detailed plan for the next chapter. Our
reasoning is evaluated via the chapters it helps a story-generator create, and
compared against non-trained and supervised finetuning (SFT) baselines.
Pairwise human judgments reveal the chapters our learned reasoning produces are
preferred across almost all metrics, and the effect is more pronounced in Scifi
and Fantasy genres.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 18:48:26 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Gurung",
"Alexander",
""
],
[
"Lapata",
"Mirella",
""
]
] | TITLE: Learning to Reason for Long-Form Story Generation
ABSTRACT: Generating high-quality stories spanning thousands of tokens requires
competency across a variety of skills, from tracking plot and character arcs to
keeping a consistent and engaging style. Due to the difficulty of sourcing
labeled datasets and precise quality measurements, most work using large
language models (LLMs) for long-form story generation uses combinations of
hand-designed prompting techniques to elicit author-like behavior. This is a
manual process that is highly dependent on the specific story-generation task.
Motivated by the recent success of applying RL with Verifiable Rewards to
domains like math and coding, we propose a general story-generation task
(Next-Chapter Prediction) and a reward formulation (Verified Rewards via
Completion Likelihood Improvement) that allows us to use an unlabeled book
dataset as a learning signal for reasoning. We learn to reason over a story's
condensed information and generate a detailed plan for the next chapter. Our
reasoning is evaluated via the chapters it helps a story-generator create, and
compared against non-trained and supervised finetuning (SFT) baselines.
Pairwise human judgments reveal the chapters our learned reasoning produces are
preferred across almost all metrics, and the effect is more pronounced in Scifi
and Fantasy genres.
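The exact reward formulation is defined in the paper; one plausible reading of
"Verified Rewards via Completion Likelihood Improvement" is sketched below:
score a generated plan by how much it raises a frozen LM's likelihood of the
true next chapter relative to a no-plan baseline. The `chapter_logprob` stub
and the prompt layout are assumptions.

```python
def chapter_logprob(chapter: str, conditioning: str) -> float:
    """Average per-token log-likelihood of `chapter` under a frozen LM,
    conditioned on `conditioning`. Stub: wire up an LM scorer of your choice."""
    raise NotImplementedError

def completion_likelihood_improvement(story_so_far: str, plan: str,
                                      gold_next_chapter: str) -> float:
    """Reward a reasoning trace/plan by how much it raises the likelihood of
    the real next chapter relative to conditioning on the story alone."""
    with_plan = chapter_logprob(gold_next_chapter,
                                story_so_far + "\n\nPlan:\n" + plan)
    without_plan = chapter_logprob(gold_next_chapter, story_so_far)
    return with_plan - without_plan
```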
|
2503.22849 | Alberto Padoan | Alberto Padoan and Jeremy Coulson | Distances between finite-horizon linear behaviors | IEEE Control Systems Letters / 64th IEEE Conference on Decision and
Control | null | null | null | math.OC cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | The paper introduces a class of distances for linear behaviors over finite
time horizons. These distances allow for comparisons between finite-horizon
linear behaviors represented by matrices of possibly different dimensions. They
remain invariant under coordinate changes, rotations, and permutations,
ensuring independence from input-output partitions. Moreover, they naturally
encode complexity-misfit trade-offs for Linear Time-Invariant (LTI) behaviors,
providing a principled solution to a longstanding puzzle in behavioral systems
theory. The resulting framework characterizes modeling as a minimum distance
problem, identifying the Most Powerful Unfalsified Model (MPUM) as optimal
among all systems unfalsified by a given dataset.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 19:57:09 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Padoan",
"Alberto",
""
],
[
"Coulson",
"Jeremy",
""
]
] | TITLE: Distances between finite-horizon linear behaviors
ABSTRACT: The paper introduces a class of distances for linear behaviors over finite
time horizons. These distances allow for comparisons between finite-horizon
linear behaviors represented by matrices of possibly different dimensions. They
remain invariant under coordinate changes, rotations, and permutations,
ensuring independence from input-output partitions. Moreover, they naturally
encode complexity-misfit trade-offs for Linear Time-Invariant (LTI) behaviors,
providing a principled solution to a longstanding puzzle in behavioral systems
theory. The resulting framework characterizes modeling as a minimum distance
problem, identifying the Most Powerful Unfalsified Model (MPUM) as optimal
among all systems unfalsified by a given dataset.
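The abstract does not state the distance itself; purely as an illustration of a
subspace distance with the invariances described (coordinate changes,
rotations, permutations), one classical construction uses orthogonal projectors
onto the spans of the representing matrices. This is not necessarily the
paper's definition and assumes both behaviors are represented over the same
time horizon.

```latex
\[
  d(\mathcal{B}_1,\mathcal{B}_2) \;=\; \bigl\lVert \Pi_1 - \Pi_2 \bigr\rVert_2,
  \qquad
  \Pi_i \;=\; B_i\bigl(B_i^{\top}B_i\bigr)^{-1}B_i^{\top},
\]
```

Here B_i is any full-column-rank matrix whose columns span behavior i; since
each projector depends only on the column span, d is unchanged by invertible
recombinations of columns, and applying one orthogonal transformation (e.g. a
rotation or row permutation) to both behaviors leaves the value fixed.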
|
2503.22856 | Shanshan Bai | Shanshan Bai, Anna Kruspe, Xiaoxiang Zhu | Generating Synthetic Oracle Datasets to Analyze Noise Impact: A Study on
Building Function Classification Using Tweets | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Tweets provide valuable semantic context for earth observation tasks and
serves as a complementary modality to remote sensing imagery. In building
function classification (BFC), tweets are often collected using geographic
heuristics and labeled via external databases, an inherently weakly supervised
process that introduces both label noise and sentence level feature noise
(e.g., irrelevant or uninformative tweets). While label noise has been widely
studied, the impact of sentence level feature noise remains underexplored,
largely due to the lack of clean benchmark datasets for controlled analysis. In
this work, we propose a method for generating a synthetic oracle dataset using
LLM, designed to contain only tweets that are both correctly labeled and
semantically relevant to their associated buildings. This oracle dataset
enables systematic investigation of noise impacts that are otherwise difficult
to isolate in real-world data. To assess its utility, we compare model
performance using Naive Bayes and mBERT classifiers under three configurations:
real vs. synthetic training data, and cross-domain generalization. Results show
that noise in real tweets significantly degrades the contextual learning
capacity of mBERT, reducing its performance to that of a simple keyword-based
model. In contrast, the clean synthetic dataset allows mBERT to learn
effectively, outperforming Naive Bayes by a large margin. These findings
highlight that addressing feature noise is more critical than model complexity
in this task. Our synthetic dataset offers a novel experimental environment for
future noise injection studies and is publicly available on GitHub.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 20:18:28 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Bai",
"Shanshan",
""
],
[
"Kruspe",
"Anna",
""
],
[
"Zhu",
"Xiaoxiang",
""
]
] | TITLE: Generating Synthetic Oracle Datasets to Analyze Noise Impact: A Study on
Building Function Classification Using Tweets
ABSTRACT: Tweets provide valuable semantic context for earth observation tasks and
serves as a complementary modality to remote sensing imagery. In building
function classification (BFC), tweets are often collected using geographic
heuristics and labeled via external databases, an inherently weakly supervised
process that introduces both label noise and sentence level feature noise
(e.g., irrelevant or uninformative tweets). While label noise has been widely
studied, the impact of sentence level feature noise remains underexplored,
largely due to the lack of clean benchmark datasets for controlled analysis. In
this work, we propose a method for generating a synthetic oracle dataset using
LLM, designed to contain only tweets that are both correctly labeled and
semantically relevant to their associated buildings. This oracle dataset
enables systematic investigation of noise impacts that are otherwise difficult
to isolate in real-world data. To assess its utility, we compare model
performance using Naive Bayes and mBERT classifiers under three configurations:
real vs. synthetic training data, and cross-domain generalization. Results show
that noise in real tweets significantly degrades the contextual learning
capacity of mBERT, reducing its performance to that of a simple keyword-based
model. In contrast, the clean synthetic dataset allows mBERT to learn
effectively, outperforming Naive Bayes by a large margin. These findings
highlight that addressing feature noise is more critical than model complexity
in this task. Our synthetic dataset offers a novel experimental environment for
future noise injection studies and is publicly available on GitHub.
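As a rough sketch of the Naive Bayes side of the comparison described above
(the mBERT side is omitted), the snippet below trains a bag-of-words classifier
on either the real weakly labeled tweets or the synthetic oracle tweets and
scores both on a shared test set. Variable names, features, and the macro-F1
metric are assumptions, not the paper's exact protocol.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import f1_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def evaluate(train_texts, train_labels, test_texts, test_labels):
    """Bag-of-words Naive Bayes baseline; returns macro-F1 on the test set."""
    clf = make_pipeline(CountVectorizer(ngram_range=(1, 2), min_df=2),
                        MultinomialNB())
    clf.fit(train_texts, train_labels)
    return f1_score(test_labels, clf.predict(test_texts), average="macro")

# Hypothetical splits: `real_*` are weakly labeled tweets, `synthetic_*` are the
# LLM-generated oracle tweets; the held-out test set is shared by both runs.
# f1_real = evaluate(real_texts, real_labels, test_texts, test_labels)
# f1_oracle = evaluate(synthetic_texts, synthetic_labels, test_texts, test_labels)
```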
|
2503.22862 | Soumitri Chattopadhyay | Soumitri Chattopadhyay and Basar Demir and Marc Niethammer | Zero-shot Domain Generalization of Foundational Models for 3D Medical
Image Segmentation: An Experimental Study | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Domain shift, caused by variations in imaging modalities and acquisition
protocols, limits model generalization in medical image segmentation. While
foundation models (FMs) trained on diverse large-scale data hold promise for
zero-shot generalization, their application to volumetric medical data remains
underexplored. In this study, we examine their ability towards domain
generalization (DG), by conducting a comprehensive experimental study
encompassing 6 medical segmentation FMs and 12 public datasets spanning
multiple modalities and anatomies. Our findings reveal the potential of
promptable FMs in bridging the domain gap via smart prompting techniques.
Additionally, by probing into multiple facets of zero-shot DG, we offer
valuable insights into the viability of FMs for DG and identify promising
avenues for future research.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 20:33:41 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chattopadhyay",
"Soumitri",
""
],
[
"Demir",
"Basar",
""
],
[
"Niethammer",
"Marc",
""
]
] | TITLE: Zero-shot Domain Generalization of Foundational Models for 3D Medical
Image Segmentation: An Experimental Study
ABSTRACT: Domain shift, caused by variations in imaging modalities and acquisition
protocols, limits model generalization in medical image segmentation. While
foundation models (FMs) trained on diverse large-scale data hold promise for
zero-shot generalization, their application to volumetric medical data remains
underexplored. In this study, we examine their ability towards domain
generalization (DG), by conducting a comprehensive experimental study
encompassing 6 medical segmentation FMs and 12 public datasets spanning
multiple modalities and anatomies. Our findings reveal the potential of
promptable FMs in bridging the domain gap via smart prompting techniques.
Additionally, by probing into multiple facets of zero-shot DG, we offer
valuable insights into the viability of FMs for DG and identify promising
avenues for future research.
|
2503.22877 | Bruno Coelho | Bruno Coelho, Shujaat Mirza, Yuyuan Cui, Christina P\"opper, Damon
McCoy | Understanding Inequality of LLM Fact-Checking over Geographic Regions
with Agent and Retrieval models | null | null | null | null | cs.CL cs.AI cs.IR | http://creativecommons.org/licenses/by/4.0/ | Fact-checking is a potentially useful application of Large Language Models
(LLMs) to combat the growing dissemination of disinformation. However, the
performance of LLMs varies across geographic regions. In this paper, we
evaluate the factual accuracy of open and private models across a diverse set
of regions and scenarios.
Using a dataset containing 600 fact-checked statements balanced across six
global regions, we examine three experimental setups of fact-checking a
statement: (1) when just the statement is available, (2) when an LLM-based
agent with Wikipedia access is utilized, and (3) as a best case scenario when a
Retrieval-Augmented Generation (RAG) system provided with the official fact
check is employed. Our findings reveal that regardless of the scenario and LLM
used, including GPT-4, Claude Sonnet, and LLaMA, statements from the Global
North perform substantially better than those from the Global South.
Furthermore, this gap is broadened for the more realistic case of a Wikipedia
agent-based system, highlighting that overly general knowledge bases have a
limited ability to address region-specific nuances. These results underscore
the urgent need for better dataset balancing and robust retrieval strategies to
enhance LLM fact-checking capabilities, particularly in geographically diverse
contexts.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 21:07:43 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Coelho",
"Bruno",
""
],
[
"Mirza",
"Shujaat",
""
],
[
"Cui",
"Yuyuan",
""
],
[
"Pöpper",
"Christina",
""
],
[
"McCoy",
"Damon",
""
]
] | TITLE: Understanding Inequality of LLM Fact-Checking over Geographic Regions
with Agent and Retrieval models
ABSTRACT: Fact-checking is a potentially useful application of Large Language Models
(LLMs) to combat the growing dissemination of disinformation. However, the
performance of LLMs varies across geographic regions. In this paper, we
evaluate the factual accuracy of open and private models across a diverse set
of regions and scenarios.
Using a dataset containing 600 fact-checked statements balanced across six
global regions, we examine three experimental setups of fact-checking a
statement: (1) when just the statement is available, (2) when an LLM-based
agent with Wikipedia access is utilized, and (3) as a best case scenario when a
Retrieval-Augmented Generation (RAG) system provided with the official fact
check is employed. Our findings reveal that regardless of the scenario and LLM
used, including GPT-4, Claude Sonnet, and LLaMA, statements from the Global
North perform substantially better than those from the Global South.
Furthermore, this gap is broadened for the more realistic case of a Wikipedia
agent-based system, highlighting that overly general knowledge bases have a
limited ability to address region-specific nuances. These results underscore
the urgent need for better dataset balancing and robust retrieval strategies to
enhance LLM fact-checking capabilities, particularly in geographically diverse
contexts.
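A small helper of the kind such an evaluation implies is sketched below:
per-region accuracy over a set of fact-checking records, which would surface a
Global North / Global South gap as systematically lower scores for some keys.
The record field names are assumptions.

```python
from collections import defaultdict

def accuracy_by_region(records):
    """`records` is an iterable of dicts with (assumed) keys
    'region', 'gold_verdict', and 'model_verdict'."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["region"]] += 1
        correct[r["region"]] += int(r["model_verdict"] == r["gold_verdict"])
    return {region: correct[region] / total[region] for region in total}
```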
|
2503.22880 | Matias Valdenegro-Toro | Matias Valdenegro-Toro and Deepan Chakravarthi Padmanabhan and Deepak
Singh and Bilal Wehbe and Yvan Petillot | The Marine Debris Forward-Looking Sonar Datasets | 10 pages, 12 figures, Oceans Brest 2025 camera ready | null | null | null | cs.CV cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | Sonar sensing is fundamental for underwater robotics, but limited by
capabilities of AI systems, which need large training datasets. Public data in
sonar modalities is lacking. This paper presents the Marine Debris
Forward-Looking Sonar datasets, with three different settings (watertank,
turntable, flooded quarry) increasing dataset diversity and multiple computer
vision tasks: object classification, object detection, semantic segmentation,
patch matching, and unsupervised learning. We provide full dataset description,
basic analysis and initial results for some tasks. We expect the research
community will benefit from this dataset, which is publicly available at
https://doi.org/10.5281/zenodo.15101686
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 21:12:03 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Valdenegro-Toro",
"Matias",
""
],
[
"Padmanabhan",
"Deepan Chakravarthi",
""
],
[
"Singh",
"Deepak",
""
],
[
"Wehbe",
"Bilal",
""
],
[
"Petillot",
"Yvan",
""
]
] | TITLE: The Marine Debris Forward-Looking Sonar Datasets
ABSTRACT: Sonar sensing is fundamental for underwater robotics, but limited by
capabilities of AI systems, which need large training datasets. Public data in
sonar modalities is lacking. This paper presents the Marine Debris
Forward-Looking Sonar datasets, with three different settings (watertank,
turntable, flooded quarry) increasing dataset diversity and multiple computer
vision tasks: object classification, object detection, semantic segmentation,
patch matching, and unsupervised learning. We provide full dataset description,
basic analysis and initial results for some tasks. We expect the research
community will benefit from this dataset, which is publicly available at
https://doi.org/10.5281/zenodo.15101686
|
2503.22881 | Lauren Shrack | Lauren Shrack, Timm Haucke, Antoine Sala\"un, Arjun Subramonian, Sara
Beery | Pairwise Matching of Intermediate Representations for Fine-grained
Explainability | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | The differences between images belonging to fine-grained categories are often
subtle and highly localized, and existing explainability techniques for deep
learning models are often too diffuse to provide useful and interpretable
explanations. We propose a new explainability method (PAIR-X) that leverages
both intermediate model activations and backpropagated relevance scores to
generate fine-grained, highly-localized pairwise visual explanations. We use
animal and building re-identification (re-ID) as a primary case study of our
method, and we demonstrate qualitatively improved results over a diverse set of
explainability baselines on 35 public re-ID datasets. In interviews, animal
re-ID experts were in unanimous agreement that PAIR-X was an improvement over
existing baselines for deep model explainability, and suggested that its
visualizations would be directly applicable to their work. We also propose a
novel quantitative evaluation metric for our method, and demonstrate that
PAIR-X visualizations appear more plausible for correct image matches than
incorrect ones even when the model similarity score for the pairs is the same.
By improving interpretability, PAIR-X enables humans to better distinguish
correct and incorrect matches. Our code is available at:
https://github.com/pairx-explains/pairx
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 21:13:43 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Shrack",
"Lauren",
""
],
[
"Haucke",
"Timm",
""
],
[
"Salaün",
"Antoine",
""
],
[
"Subramonian",
"Arjun",
""
],
[
"Beery",
"Sara",
""
]
] | TITLE: Pairwise Matching of Intermediate Representations for Fine-grained
Explainability
ABSTRACT: The differences between images belonging to fine-grained categories are often
subtle and highly localized, and existing explainability techniques for deep
learning models are often too diffuse to provide useful and interpretable
explanations. We propose a new explainability method (PAIR-X) that leverages
both intermediate model activations and backpropagated relevance scores to
generate fine-grained, highly-localized pairwise visual explanations. We use
animal and building re-identification (re-ID) as a primary case study of our
method, and we demonstrate qualitatively improved results over a diverse set of
explainability baselines on 35 public re-ID datasets. In interviews, animal
re-ID experts were in unanimous agreement that PAIR-X was an improvement over
existing baselines for deep model explainability, and suggested that its
visualizations would be directly applicable to their work. We also propose a
novel quantitative evaluation metric for our method, and demonstrate that
PAIR-X visualizations appear more plausible for correct image matches than
incorrect ones even when the model similarity score for the pairs is the same.
By improving interpretability, PAIR-X enables humans to better distinguish
correct and incorrect matches. Our code is available at:
https://github.com/pairx-explains/pairx
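The snippet below is not PAIR-X itself; it only illustrates the ingredient the
method builds on, matching intermediate (spatial) CNN activations between the
two images of a pair to obtain localized correspondences. Feature shapes and
the cosine-similarity choice are assumptions.

```python
import numpy as np

def pairwise_match_map(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """feat_a, feat_b: intermediate activations of shape (C, H, W) for the two
    images. Returns an (H*W, H*W) cosine-similarity matrix between all spatial
    positions; its large entries localize candidate matches between the pair."""
    a = feat_a.reshape(feat_a.shape[0], -1).T      # (H*W, C)
    b = feat_b.reshape(feat_b.shape[0], -1).T
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-8)
    return a @ b.T

def best_matches(sim: np.ndarray, w: int, top_k: int = 5):
    """Top-k most similar position pairs as ((row_a, col_a), (row_b, col_b)),
    i.e. fine-grained correspondences one might visualize."""
    flat = np.argsort(sim, axis=None)[::-1][:top_k]
    ia, ib = np.unravel_index(flat, sim.shape)
    return [((int(i) // w, int(i) % w), (int(j) // w, int(j) % w))
            for i, j in zip(ia, ib)]
```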
|
2503.22884 | Yi-Ting Shen | Yi-Ting Shen, Sungmin Eum, Doheon Lee, Rohit Shete, Chiao-Yi Wang,
Heesung Kwon, Shuvra S. Bhattacharyya | AutoComPose: Automatic Generation of Pose Transition Descriptions for
Composed Pose Retrieval Using Multimodal LLMs | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Composed pose retrieval (CPR) enables users to search for human poses by
specifying a reference pose and a transition description, but progress in this
field is hindered by the scarcity and inconsistency of annotated pose
transitions. Existing CPR datasets rely on costly human annotations or
heuristic-based rule generation, both of which limit scalability and diversity.
In this work, we introduce AutoComPose, the first framework that leverages
multimodal large language models (MLLMs) to automatically generate rich and
structured pose transition descriptions. Our method enhances annotation quality
by structuring transitions into fine-grained body part movements and
introducing mirrored/swapped variations, while a cyclic consistency constraint
ensures logical coherence between forward and reverse transitions. To advance
CPR research, we construct and release two dedicated benchmarks, AIST-CPR and
PoseFixCPR, supplementing prior datasets with enhanced attributes. Extensive
experiments demonstrate that training retrieval models with AutoComPose yields
superior performance over human-annotated and heuristic-based methods,
significantly reducing annotation costs while improving retrieval quality. Our
work pioneers the automatic annotation of pose transitions, establishing a
scalable foundation for future CPR research.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 21:21:35 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Shen",
"Yi-Ting",
""
],
[
"Eum",
"Sungmin",
""
],
[
"Lee",
"Doheon",
""
],
[
"Shete",
"Rohit",
""
],
[
"Wang",
"Chiao-Yi",
""
],
[
"Kwon",
"Heesung",
""
],
[
"Bhattacharyya",
"Shuvra S.",
""
]
] | TITLE: AutoComPose: Automatic Generation of Pose Transition Descriptions for
Composed Pose Retrieval Using Multimodal LLMs
ABSTRACT: Composed pose retrieval (CPR) enables users to search for human poses by
specifying a reference pose and a transition description, but progress in this
field is hindered by the scarcity and inconsistency of annotated pose
transitions. Existing CPR datasets rely on costly human annotations or
heuristic-based rule generation, both of which limit scalability and diversity.
In this work, we introduce AutoComPose, the first framework that leverages
multimodal large language models (MLLMs) to automatically generate rich and
structured pose transition descriptions. Our method enhances annotation quality
by structuring transitions into fine-grained body part movements and
introducing mirrored/swapped variations, while a cyclic consistency constraint
ensures logical coherence between forward and reverse transitions. To advance
CPR research, we construct and release two dedicated benchmarks, AIST-CPR and
PoseFixCPR, supplementing prior datasets with enhanced attributes. Extensive
experiments demonstrate that training retrieval models with AutoComPose yields
superior performance over human-annotated and heuristic-based methods,
significantly reducing annotation costs while improving retrieval quality. Our
work pioneers the automatic annotation of pose transitions, establishing a
scalable foundation for future CPR research.
|
2503.22890 | Ke Zhang | Ke Zhang, Vishal M. Patel | MedCL: Learning Consistent Anatomy Distribution for Scribble-supervised
Medical Image Segmentation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Curating large-scale fully annotated datasets is expensive, laborious, and
cumbersome, especially for medical images. Several methods have been proposed
in the literature that make use of weak annotations in the form of scribbles.
However, these approaches require large amounts of scribble annotations, and
are only applied to the segmentation of regular organs, which are often
unavailable for the disease species that fall in the long-tailed distribution.
Motivated by the fact that the medical labels have anatomy distribution priors,
we propose a scribble-supervised clustering-based framework, called MedCL, to
learn the inherent anatomy distribution of medical labels. Our approach
consists of two steps: i) Mix the features with intra- and inter-image mix
operations, and ii) Perform feature clustering and regularize the anatomy
distribution at both local and global levels. Combined with a small amount of
weak supervision, the proposed MedCL is able to segment both regular organs and
challenging irregular pathologies. We implement MedCL based on SAM and UNet
backbones, and evaluate the performance on three open datasets of regular
structure (MSCMRseg), multiple organs (BTCV) and irregular pathology (MyoPS).
It is shown that even with less scribble supervision, MedCL substantially
outperforms the conventional segmentation methods. Our code is available at
https://github.com/BWGZK/MedCL.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 21:41:44 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhang",
"Ke",
""
],
[
"Patel",
"Vishal M.",
""
]
] | TITLE: MedCL: Learning Consistent Anatomy Distribution for Scribble-supervised
Medical Image Segmentation
ABSTRACT: Curating large-scale fully annotated datasets is expensive, laborious, and
cumbersome, especially for medical images. Several methods have been proposed
in the literature that make use of weak annotations in the form of scribbles.
However, these approaches require large amounts of scribble annotations, and
are only applied to the segmentation of regular organs, which are often
unavailable for the disease species that fall in the long-tailed distribution.
Motivated by the fact that the medical labels have anatomy distribution priors,
we propose a scribble-supervised clustering-based framework, called MedCL, to
learn the inherent anatomy distribution of medical labels. Our approach
consists of two steps: i) Mix the features with intra- and inter-image mix
operations, and ii) Perform feature clustering and regularize the anatomy
distribution at both local and global levels. Combined with a small amount of
weak supervision, the proposed MedCL is able to segment both regular organs and
challenging irregular pathologies. We implement MedCL based on SAM and UNet
backbones, and evaluate the performance on three open datasets of regular
structure (MSCMRseg), multiple organs (BTCV) and irregular pathology (MyoPS).
It is shown that even with less scribble supervision, MedCL substantially
outperforms the conventional segmentation methods. Our code is available at
https://github.com/BWGZK/MedCL.
|
2503.22902 | Md Fazle Rabbi | Barisha Chowdhury, Md Fazle Rabbi, S. M. Mahedy Hasan, Minhaz F.
Zibran | Insights into Dependency Maintenance Trends in the Maven Ecosystem | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | As modern software development increasingly relies on reusable libraries and
components, managing dependencies has become critical for ensuring software
stability and security. However, challenges such as outdated dependencies,
missed releases, and the complexity of interdependent libraries can
significantly impact project maintenance. In this paper, we present a
quantitative analysis of the Neo4j dataset using the Goblin framework to
uncover patterns of freshness in projects with different numbers of
dependencies. Our analysis reveals that releases with fewer dependencies have a
higher number of missed releases. Additionally, our study shows that the
dependencies in the latest releases have positive freshness scores, indicating
better software management efficacy. These results can encourage better
management practices and contribute to the overall health of software
ecosystems.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 22:20:24 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chowdhury",
"Barisha",
""
],
[
"Rabbi",
"Md Fazle",
""
],
[
"Hasan",
"S. M. Mahedy",
""
],
[
"Zibran",
"Minhaz F.",
""
]
] | TITLE: Insights into Dependency Maintenance Trends in the Maven Ecosystem
ABSTRACT: As modern software development increasingly relies on reusable libraries and
components, managing dependencies has become critical for ensuring software
stability and security. However, challenges such as outdated dependencies,
missed releases, and the complexity of interdependent libraries can
significantly impact project maintenance. In this paper, we present a
quantitative analysis of the Neo4j dataset using the Goblin framework to
uncover patterns of freshness in projects with different numbers of
dependencies. Our analysis reveals that releases with fewer dependencies have a
higher number of missed releases. Additionally, our study shows that the
dependencies in the latest releases have positive freshness scores, indicating
better software management efficacy. These results can encourage better
management practices and contribute to the overall health of software
ecosystems.
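Goblin's freshness definition is not given in the abstract; a common proxy
consistent with the discussion is counting, for each dependency of a release,
how many newer versions had already been published when that release shipped
("missed releases"), then averaging over the release's dependencies. The sketch
below assumes that reading and hypothetical input structures.

```python
from datetime import datetime

def missed_releases(dep_versions: list[tuple[str, datetime]],
                    used_version: str,
                    release_date: datetime) -> int:
    """Number of versions of a dependency newer than the one actually used and
    already published when the depending release shipped. `dep_versions` is the
    dependency's full (version, publish_date) history."""
    published = [(v, d) for v, d in dep_versions if d <= release_date]
    published.sort(key=lambda vd: vd[1])              # chronological order
    used_index = next((i for i, (v, _) in enumerate(published)
                       if v == used_version), None)
    if used_index is None:
        return 0                                      # version unknown; skip
    return len(published) - 1 - used_index

def release_freshness(deps: list[dict], release_date: datetime) -> float:
    """Average missed releases across a release's dependencies; each dict is
    assumed to carry 'history' and 'used' keys."""
    if not deps:
        return 0.0
    return sum(missed_releases(d["history"], d["used"], release_date)
               for d in deps) / len(deps)
```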
|
2503.22906 | Heng Yu | Heng Yu, Juze Zhang, Changan Chen, Tiange Xiang, Yusu Fang, Juan
Carlos Niebles, Ehsan Adeli | SocialGen: Modeling Multi-Human Social Interaction with Language Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Human interactions in everyday life are inherently social, involving
engagements with diverse individuals across various contexts. Modeling these
social interactions is fundamental to a wide range of real-world applications.
In this paper, we introduce SocialGen, the first unified motion-language model
capable of modeling interaction behaviors among varying numbers of individuals,
to address this crucial yet challenging problem. Unlike prior methods that are
limited to two-person interactions, we propose a novel social motion
representation that supports tokenizing the motions of an arbitrary number of
individuals and aligning them with the language space. This alignment enables
the model to leverage rich, pretrained linguistic knowledge to better
understand and reason about human social behaviors. To tackle the challenges of
data scarcity, we curate a comprehensive multi-human interaction dataset,
SocialX, enriched with textual annotations. Leveraging this dataset, we
establish the first comprehensive benchmark for multi-human interaction tasks.
Our method achieves state-of-the-art performance across motion-language tasks,
setting a new standard for multi-human interaction modeling.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 22:57:25 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yu",
"Heng",
""
],
[
"Zhang",
"Juze",
""
],
[
"Chen",
"Changan",
""
],
[
"Xiang",
"Tiange",
""
],
[
"Fang",
"Yusu",
""
],
[
"Niebles",
"Juan Carlos",
""
],
[
"Adeli",
"Ehsan",
""
]
] | TITLE: SocialGen: Modeling Multi-Human Social Interaction with Language Models
ABSTRACT: Human interactions in everyday life are inherently social, involving
engagements with diverse individuals across various contexts. Modeling these
social interactions is fundamental to a wide range of real-world applications.
In this paper, we introduce SocialGen, the first unified motion-language model
capable of modeling interaction behaviors among varying numbers of individuals,
to address this crucial yet challenging problem. Unlike prior methods that are
limited to two-person interactions, we propose a novel social motion
representation that supports tokenizing the motions of an arbitrary number of
individuals and aligning them with the language space. This alignment enables
the model to leverage rich, pretrained linguistic knowledge to better
understand and reason about human social behaviors. To tackle the challenges of
data scarcity, we curate a comprehensive multi-human interaction dataset,
SocialX, enriched with textual annotations. Leveraging this dataset, we
establish the first comprehensive benchmark for multi-human interaction tasks.
Our method achieves state-of-the-art performance across motion-language tasks,
setting a new standard for multi-human interaction modeling.
|
2503.22909 | Anas Berka | Anas Berka, Mohamed El Hajji, Raphael Canals, Youssef Es-saady, Adel
Hafiane | Enhancing DeepLabV3+ to Fuse Aerial and Satellite Images for Semantic
Segmentation | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aerial and satellite imagery are inherently complementary remote sensing
sources, offering high-resolution detail alongside expansive spatial coverage.
However, the use of these sources for land cover segmentation introduces
several challenges, prompting the development of a variety of segmentation
methods. Among these approaches, the DeepLabV3+ architecture is considered as a
promising approach in the field of single-source image segmentation. However,
despite its reliable results for segmentation, there is still a need to
increase its robustness and improve its performance. This is particularly
crucial for multimodal image segmentation, where the fusion of diverse types of
information is essential.
An interesting approach involves enhancing this architectural framework
through the integration of novel components and the modification of certain
internal processes.
In this paper, we enhance the DeepLabV3+ architecture by introducing a new
transposed convolutional layers block for upsampling a second input to fuse it
with high-level features. This block is designed to amplify and integrate
information from satellite images, thereby enriching the segmentation process
through fusion with aerial images.
For experiments, we used the LandCover.ai (Land Cover from Aerial Imagery)
dataset for aerial images, alongside the corresponding dataset sourced from
Sentinel 2 data.
Through the fusion of both sources, the model achieved a mean Intersection
over Union (mIoU) of 84.91% without data augmentation.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 23:07:39 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Berka",
"Anas",
""
],
[
"Hajji",
"Mohamed El",
""
],
[
"Canals",
"Raphael",
""
],
[
"Es-saady",
"Youssef",
""
],
[
"Hafiane",
"Adel",
""
]
] | TITLE: Enhancing DeepLabV3+ to Fuse Aerial and Satellite Images for Semantic
Segmentation
ABSTRACT: Aerial and satellite imagery are inherently complementary remote sensing
sources, offering high-resolution detail alongside expansive spatial coverage.
However, the use of these sources for land cover segmentation introduces
several challenges, prompting the development of a variety of segmentation
methods. Among these approaches, the DeepLabV3+ architecture is considered as a
promising approach in the field of single-source image segmentation. However,
despite its reliable results for segmentation, there is still a need to
increase its robustness and improve its performance. This is particularly
crucial for multimodal image segmentation, where the fusion of diverse types of
information is essential.
An interesting approach involves enhancing this architectural framework
through the integration of novel components and the modification of certain
internal processes.
In this paper, we enhance the DeepLabV3+ architecture by introducing a new
transposed convolutional layers block for upsampling a second input to fuse it
with high-level features. This block is designed to amplify and integrate
information from satellite images, thereby enriching the segmentation process
through fusion with aerial images.
For experiments, we used the LandCover.ai (Land Cover from Aerial Imagery)
dataset for aerial images, alongside the corresponding dataset sourced from
Sentinel 2 data.
Through the fusion of both sources, the model achieved a mean Intersection
over Union (mIoU) of 84.91% without data augmentation.
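A sketch of the kind of block described, under assumed channel counts and
upsampling factors, is shown below: transposed convolutions upsample the coarse
satellite features, which are then concatenated with the high-level aerial
features and fused by a 1x1 convolution inside a DeepLabV3+-style decoder. This
is illustrative, not the paper's exact layer configuration.

```python
import torch
import torch.nn as nn

class SatelliteUpsampleFusion(nn.Module):
    """Upsamples a low-resolution satellite feature map with transposed
    convolutions and fuses it (concatenation + 1x1 conv) with high-level
    aerial features. Channel sizes and scales are illustrative assumptions."""
    def __init__(self, sat_channels=64, aerial_channels=256, out_channels=256):
        super().__init__()
        self.upsample = nn.Sequential(
            nn.ConvTranspose2d(sat_channels, 128, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 128, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
        )
        self.fuse = nn.Sequential(
            nn.Conv2d(128 + aerial_channels, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels), nn.ReLU(inplace=True),
        )

    def forward(self, sat_feat, aerial_feat):
        sat_up = self.upsample(sat_feat)
        # Align spatial sizes before concatenation (the two grids rarely match
        # exactly after fixed-stride upsampling).
        sat_up = nn.functional.interpolate(sat_up, size=aerial_feat.shape[-2:],
                                           mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([sat_up, aerial_feat], dim=1))
```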
|
2503.22912 | Xin Liang | Xin Liang, Yogesh S Rawat | DIFFER: Disentangling Identity Features via Semantic Cues for
Clothes-Changing Person Re-ID | Accepted in CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Clothes-changing person re-identification (CC-ReID) aims to recognize
individuals under different clothing scenarios. Current CC-ReID approaches
either concentrate on modeling body shape using additional modalities including
silhouette, pose, and body mesh, potentially causing the model to overlook
other critical biometric traits such as gender, age, and style, or they
incorporate supervision through additional labels that the model tries to
disregard or emphasize, such as clothing or personal attributes. However, these
annotations are discrete in nature and do not capture comprehensive
descriptions.
In this work, we propose DIFFER: Disentangle Identity Features From Entangled
Representations, a novel adversarial learning method that leverages textual
descriptions to disentangle identity features. Recognizing that image features
inherently mix inseparable information, DIFFER introduces NBDetach, a mechanism
designed for feature disentanglement by leveraging the separable nature of text
descriptions as supervision. It partitions the feature space into distinct
subspaces and, through gradient reversal layers, effectively separates
identity-related features from non-biometric features. We evaluate DIFFER on 4
different benchmark datasets (LTCC, PRCC, CelebReID-Light, and CCVID) to
demonstrate its effectiveness and provide state-of-the-art performance across
all the benchmarks. DIFFER consistently outperforms the baseline method, with
improvements in top-1 accuracy of 3.6% on LTCC, 3.4% on PRCC, 2.5% on
CelebReID-Light, and 1% on CCVID. Our code can be found here.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 23:40:59 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liang",
"Xin",
""
],
[
"Rawat",
"Yogesh S",
""
]
] | TITLE: DIFFER: Disentangling Identity Features via Semantic Cues for
Clothes-Changing Person Re-ID
ABSTRACT: Clothes-changing person re-identification (CC-ReID) aims to recognize
individuals under different clothing scenarios. Current CC-ReID approaches
either concentrate on modeling body shape using additional modalities including
silhouette, pose, and body mesh, potentially causing the model to overlook
other critical biometric traits such as gender, age, and style, or they
incorporate supervision through additional labels that the model tries to
disregard or emphasize, such as clothing or personal attributes. However, these
annotations are discrete in nature and do not capture comprehensive
descriptions.
In this work, we propose DIFFER: Disentangle Identity Features From Entangled
Representations, a novel adversarial learning method that leverages textual
descriptions to disentangle identity features. Recognizing that image features
inherently mix inseparable information, DIFFER introduces NBDetach, a mechanism
designed for feature disentanglement by leveraging the separable nature of text
descriptions as supervision. It partitions the feature space into distinct
subspaces and, through gradient reversal layers, effectively separates
identity-related features from non-biometric features. We evaluate DIFFER on 4
different benchmark datasets (LTCC, PRCC, CelebReID-Light, and CCVID) to
demonstrate its effectiveness and provide state-of-the-art performance across
all the benchmarks. DIFFER consistently outperforms the baseline method, with
improvements in top-1 accuracy of 3.6% on LTCC, 3.4% on PRCC, 2.5% on
CelebReID-Light, and 1% on CCVID. Our code can be found here.
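The gradient reversal layers mentioned above follow a standard construction
(identity in the forward pass, negated and scaled gradient in the backward
pass); a generic PyTorch version is sketched below. It shows the mechanism
only, not DIFFER's full NBDetach module or its text-derived subspaces.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity forward; multiplies the gradient by -lambda backward, so the
    upstream encoder is trained to remove whatever the downstream head
    (e.g. a clothing/attribute predictor) can exploit."""
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
    return GradReverse.apply(x, lambd)

# Usage sketch: features = encoder(images)
#               attr_logits = attribute_head(grad_reverse(features))
# Minimizing the attribute loss then pushes `encoder` to discard attribute cues.
```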
|
2503.22934 | Yongkai Wu | Yucong Dai, Jie Ji, Xiaolong Ma, Yongkai Wu | FairSAM: Fair Classification on Corrupted Data Through Sharpness-Aware
Minimization | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Image classification models trained on clean data often suffer from
significant performance degradation when exposed to corrupted test data,
such as images with impulse noise, Gaussian noise, or environmental noise. This
degradation not only impacts overall performance but also disproportionately
affects various demographic subgroups, raising critical algorithmic bias
concerns. Although robust learning algorithms like Sharpness-Aware Minimization
(SAM) have shown promise in improving overall model robustness and
generalization, they fall short in addressing the biased performance
degradation across demographic subgroups. Existing fairness-aware machine
learning methods - such as fairness constraints and reweighing strategies - aim
to reduce performance disparities but hardly maintain robust and equitable
accuracy across demographic subgroups when faced with data corruption. This
reveals an inherent tension between robustness and fairness when dealing with
corrupted data. To address these challenges, we introduce one novel metric
specifically designed to assess performance degradation across subgroups under
data corruption. Additionally, we propose \textbf{FairSAM}, a new framework
that integrates \underline{Fair}ness-oriented strategies into \underline{SAM}
to deliver equalized performance across demographic groups under corrupted
conditions. Our experiments on multiple real-world datasets and various
predictive tasks show that FairSAM successfully reconciles robustness and
fairness, offering a structured solution for equitable and resilient image
classification in the presence of data corruption.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 01:51:59 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Dai",
"Yucong",
""
],
[
"Ji",
"Jie",
""
],
[
"Ma",
"Xiaolong",
""
],
[
"Wu",
"Yongkai",
""
]
] | TITLE: FairSAM: Fair Classification on Corrupted Data Through Sharpness-Aware
Minimization
ABSTRACT: Image classification models trained on clean data often suffer from
significant performance degradation when exposed to corrupted test data,
such as images with impulse noise, Gaussian noise, or environmental noise. This
degradation not only impacts overall performance but also disproportionately
affects various demographic subgroups, raising critical algorithmic bias
concerns. Although robust learning algorithms like Sharpness-Aware Minimization
(SAM) have shown promise in improving overall model robustness and
generalization, they fall short in addressing the biased performance
degradation across demographic subgroups. Existing fairness-aware machine
learning methods - such as fairness constraints and reweighing strategies - aim
to reduce performance disparities but hardly maintain robust and equitable
accuracy across demographic subgroups when faced with data corruption. This
reveals an inherent tension between robustness and fairness when dealing with
corrupted data. To address these challenges, we introduce one novel metric
specifically designed to assess performance degradation across subgroups under
data corruption. Additionally, we propose \textbf{FairSAM}, a new framework
that integrates \underline{Fair}ness-oriented strategies into \underline{SAM}
to deliver equalized performance across demographic groups under corrupted
conditions. Our experiments on multiple real-world datasets and various
predictive tasks show that FairSAM successfully reconciles robustness and
fairness, offering a structured solution for equitable and resilient image
classification in the presence of data corruption.
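FairSAM builds on Sharpness-Aware Minimization; a bare-bones SAM step (Foret et
al.) is sketched below for reference: ascend to a worst-case weight
perturbation of radius rho, compute the gradient there, then update the
original weights. The fairness-oriented subgroup reweighting that FairSAM adds
is not shown, and the helper's interface is an assumption.

```python
import torch

def sam_step(model, loss_fn, inputs, targets, optimizer, rho: float = 0.05):
    """One Sharpness-Aware Minimization step; FairSAM would additionally
    reweight the loss across demographic subgroups."""
    # 1) gradient at the current weights
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.detach().clone() for p in params]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12

    # 2) perturb weights toward the locally sharpest direction
    eps = [rho * g / grad_norm for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)
    optimizer.zero_grad()

    # 3) gradient at the perturbed point, then undo the perturbation and update
    loss_fn(model(inputs), targets).backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```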
|
2503.22935 | Xueqing Liu | Xueqing Liu, Jiangrui Zheng, Guanqun Yang, Siyan Wen, Qiushi Liu | Improving the Context Length and Efficiency of Code Retrieval for
Tracing Security Vulnerability Fixes | null | null | null | null | cs.CR cs.SE | http://creativecommons.org/licenses/by/4.0/ | In recent years, the rapid increase of security vulnerabilities has caused
major challenges in managing them. One critical task in vulnerability
management is tracing the patches that fix a vulnerability. By accurately
tracing the patching commits, security stakeholders can precisely identify
affected software components, determine vulnerable and fixed versions, assess
the severity, etc., which facilitates rapid deployment of mitigations. However,
previous work has shown that the patch information is often missing in
vulnerability databases, including both the National Vulnerability Databases
(NVD) and the GitHub Advisory Database, which increases the risk of delayed
mitigation, incorrect vulnerability assessment, and potential exploits.
Although existing work has proposed several approaches for patch tracing,
they suffer from two major challenges: (1) the lack of scalability to the
full-repository level, and (2) the lack of study on how to model the semantic
similarity between the CVE and the full diff code. Upon identifying this gap,
we propose SITPatchTracer, a scalable full-repo full-context retrieval system
for security vulnerability patch tracing. SITPatchTracer leverages
ElasticSearch, learning-to-rank, and a hierarchical embedding approach based on
GritLM, a top-ranked LLM for text embedding with unlimited context length and
fast inference speed. The evaluation of SITPatchTracer shows that it achieves a
high recall on both evaluated datasets. SITPatchTracer's recall not only
outperforms several existing works (PatchFinder, PatchScout, VFCFinder), but
also Voyage, the SOTA commercial code embedding API, by 13\% and 28\%.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 01:53:07 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Xueqing",
""
],
[
"Zheng",
"Jiangrui",
""
],
[
"Yang",
"Guanqun",
""
],
[
"Wen",
"Siyan",
""
],
[
"Liu",
"Qiushi",
""
]
] | TITLE: Improving the Context Length and Efficiency of Code Retrieval for
Tracing Security Vulnerability Fixes
ABSTRACT: In recent years, the rapid increase of security vulnerabilities has caused
major challenges in managing them. One critical task in vulnerability
management is tracing the patches that fix a vulnerability. By accurately
tracing the patching commits, security stakeholders can precisely identify
affected software components, determine vulnerable and fixed versions, assess
the severity, etc., which facilitates rapid deployment of mitigations. However,
previous work has shown that the patch information is often missing in
vulnerability databases, including both the National Vulnerability Databases
(NVD) and the GitHub Advisory Database, which increases the risk of delayed
mitigation, incorrect vulnerability assessment, and potential exploits.
Although existing work has proposed several approaches for patch tracing,
they suffer from two major challenges: (1) the lack of scalability to the
full-repository level, and (2) the lack of study on how to model the semantic
similarity between the CVE and the full diff code. Upon identifying this gap,
we propose SITPatchTracer, a scalable full-repo full-context retrieval system
for security vulnerability patch tracing. SITPatchTracer leverages
ElasticSearch, learning-to-rank, and a hierarchical embedding approach based on
GritLM, a top-ranked LLM for text embedding with unlimited context length and
fast inference speed. The evaluation of SITPatchTracer shows that it achieves a
high recall on both evaluated datasets. SITPatchTracer's recall not only
outperforms several existing works (PatchFinder, PatchScout, VFCFinder) but
also exceeds that of Voyage, the SOTA commercial code embedding API, by 13\%
and 28\% on the two datasets.
|
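The SITPatchTracer record above hinges on a hierarchical embedding approach for matching a CVE description against full-repository commit diffs. As a rough illustration of that idea (not the paper's actual pipeline), the sketch below chunks a long diff, embeds each chunk, and scores a commit by its best-matching chunk; the `embed` function is a hypothetical stand-in for a real embedding model such as GritLM, and the max-pooling aggregation is an assumption.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a text-embedding model (e.g. GritLM).
    Here: a fixed-size bag-of-character-trigrams vector, just to keep the
    sketch self-contained and runnable."""
    vec = np.zeros(256)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % 256] += 1.0
    n = np.linalg.norm(vec)
    return vec / n if n > 0 else vec

def chunk(diff: str, max_chars: int = 2000) -> list[str]:
    """Split a long commit diff into fixed-size chunks (the lower level of the hierarchy)."""
    return [diff[i:i + max_chars] for i in range(0, len(diff), max_chars)] or [""]

def score_commit(cve_text: str, diff: str) -> float:
    """Score = max cosine similarity between the CVE description and any chunk of the diff."""
    q = embed(cve_text)
    return max(float(q @ embed(c)) for c in chunk(diff))

# Toy usage: rank two candidate commits for one CVE description.
cve = "Buffer overflow in the image parser allows remote code execution."
commits = {
    "abc123": "fix: add bounds check to image parser buffer copy",
    "def456": "docs: update README badges",
}
ranked = sorted(commits, key=lambda c: score_commit(cve, commits[c]), reverse=True)
print(ranked)  # the bounds-check fix should rank first
```

In a real system the chunk vectors would come from the embedding model itself and the resulting scores would feed a learning-to-rank stage, as the abstract describes.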
2503.22941 | Yugen Sato | Yugen Sato, Tomohiro Takagi | Identifying Multi-modal Knowledge Neurons in Pretrained Transformers via
Two-stage Filtering | null | null | null | null | cs.AI cs.LG cs.MM | http://creativecommons.org/licenses/by/4.0/ | Recent advances in large language models (LLMs) have led to the development
of multimodal LLMs (MLLMs) in the fields of natural language processing (NLP)
and computer vision. Although these models allow for integrated visual and
language understanding, they present challenges such as opaque internal
processing and the generation of hallucinations and misinformation. Therefore,
there is a need for a method to clarify the location of knowledge in MLLMs.
In this study, we propose a method to identify neurons associated with
specific knowledge using MiniGPT-4, a Transformer-based MLLM. Specifically, we
extract knowledge neurons through two stages: activation differences filtering
using inpainting and gradient-based filtering using GradCAM. Experiments on the
image caption generation task with the MS COCO 2017 dataset, quantitative
evaluation with BLEU, ROUGE, and BERTScore, and qualitative evaluation with an
activation heatmap show that our method locates knowledge with higher accuracy
than existing methods.
This study contributes to the visualization and explainability of knowledge
in MLLMs and shows the potential for future knowledge editing and control.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 02:16:15 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Sato",
"Yugen",
""
],
[
"Takagi",
"Tomohiro",
""
]
] | TITLE: Identifying Multi-modal Knowledge Neurons in Pretrained Transformers via
Two-stage Filtering
ABSTRACT: Recent advances in large language models (LLMs) have led to the development
of multimodal LLMs (MLLMs) in the fields of natural language processing (NLP)
and computer vision. Although these models allow for integrated visual and
language understanding, they present challenges such as opaque internal
processing and the generation of hallucinations and misinformation. Therefore,
there is a need for a method to clarify the location of knowledge in MLLMs.
In this study, we propose a method to identify neurons associated with
specific knowledge using MiniGPT-4, a Transformer-based MLLM. Specifically, we
extract knowledge neurons through two stages: activation differences filtering
using inpainting and gradient-based filtering using GradCAM. Experiments on the
image caption generation task with the MS COCO 2017 dataset, quantitative
evaluation with BLEU, ROUGE, and BERTScore, and qualitative evaluation with an
activation heatmap show that our method locates knowledge with higher accuracy
than existing methods.
This study contributes to the visualization and explainability of knowledge
in MLLMs and shows the potential for future knowledge editing and control.
|
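The two-stage filtering described in the record above (activation differences from inpainting, then gradient-based filtering) can be pictured as intersecting two ranked sets of neurons. The sketch below uses synthetic scores purely to illustrate that intersection; the score definitions and the top-k thresholds are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 1000

# Stage 1: activation differences between the original image and an inpainted
# version with the target object removed (values here are synthetic).
act_original = rng.normal(size=n_neurons)
act_inpainted = rng.normal(size=n_neurons)
act_diff = np.abs(act_original - act_inpainted)

# Stage 2: gradient-based relevance of each neuron for the target token
# (GradCAM-style attribution; again synthetic for the sketch).
grad_score = np.abs(rng.normal(size=n_neurons))

def top_k(scores: np.ndarray, k: int) -> set:
    """Indices of the k highest-scoring neurons."""
    return set(np.argsort(scores)[-k:])

# Keep only the neurons that survive both filters.
knowledge_neurons = top_k(act_diff, 50) & top_k(grad_score, 50)
print(len(knowledge_neurons), "candidate knowledge neurons")
```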
2503.22948 | Xiaoze Liu | Tianyang Xu, Xiaoze Liu, Feijie Wu, Xiaoqian Wang, Jing Gao | SUV: Scalable Large Language Model Copyright Compliance with Regularized
Selective Unlearning | null | null | null | null | cs.CL cs.AI cs.CY cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have transformed natural language processing by
learning from massive datasets, yet this rapid progress has also drawn legal
scrutiny, as the ability to unintentionally generate copyrighted content has
already prompted several prominent lawsuits. In this work, we introduce SUV
(Selective Unlearning for Verbatim data), a selective unlearning framework
designed to prevent LLM from memorizing copyrighted content while preserving
its overall utility. In detail, the proposed method constructs a dataset that
captures instances of copyrighted infringement cases by the targeted LLM. With
the dataset, we unlearn the content from the LLM by means of Direct Preference
Optimization (DPO), which replaces the verbatim copyrighted content with
plausible and coherent alternatives. Since DPO may hinder the LLM's performance
in other unrelated tasks, we integrate gradient projection and Fisher
information regularization to mitigate the degradation. We validate our
approach using a large-scale dataset of 500 famous books (predominantly
copyrighted works) and demonstrate that SUV significantly reduces verbatim
memorization with negligible impact on the performance on unrelated tasks.
Extensive experiments on both our dataset and public benchmarks confirm the
scalability and efficacy of our approach, offering a promising solution for
mitigating copyright risks in real-world LLM applications.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 02:33:26 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Xu",
"Tianyang",
""
],
[
"Liu",
"Xiaoze",
""
],
[
"Wu",
"Feijie",
""
],
[
"Wang",
"Xiaoqian",
""
],
[
"Gao",
"Jing",
""
]
] | TITLE: SUV: Scalable Large Language Model Copyright Compliance with Regularized
Selective Unlearning
ABSTRACT: Large Language Models (LLMs) have transformed natural language processing by
learning from massive datasets, yet this rapid progress has also drawn legal
scrutiny, as the ability to unintentionally generate copyrighted content has
already prompted several prominent lawsuits. In this work, we introduce SUV
(Selective Unlearning for Verbatim data), a selective unlearning framework
designed to prevent an LLM from memorizing copyrighted content while preserving
its overall utility. In detail, the proposed method constructs a dataset that
captures instances of copyright infringement by the targeted LLM. With
the dataset, we unlearn the content from the LLM by means of Direct Preference
Optimization (DPO), which replaces the verbatim copyrighted content with
plausible and coherent alternatives. Since DPO may hinder the LLM's performance
in other unrelated tasks, we integrate gradient projection and Fisher
information regularization to mitigate the degradation. We validate our
approach using a large-scale dataset of 500 famous books (predominantly
copyrighted works) and demonstrate that SUV significantly reduces verbatim
memorization with negligible impact on the performance on unrelated tasks.
Extensive experiments on both our dataset and public benchmarks confirm the
scalability and efficacy of our approach, offering a promising solution for
mitigating copyright risks in real-world LLM applications.
|
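The SUV record above mentions gradient projection as one of the regularizers that keeps DPO-based unlearning from harming unrelated tasks. A common form of gradient projection removes the component of the unlearning gradient that conflicts with a retain-task gradient; the sketch below shows that single operation under the assumption that SUV's projection resembles it (the abstract does not give the exact rule).

```python
import numpy as np

def project_out_conflict(g_unlearn: np.ndarray, g_retain: np.ndarray) -> np.ndarray:
    """If the unlearning gradient points against the retain-task gradient,
    remove its conflicting component so the update does not hurt retained
    capabilities. This mirrors standard gradient-projection tricks; whether
    SUV uses exactly this rule is an assumption."""
    dot = g_unlearn @ g_retain
    if dot >= 0:               # no conflict: leave the gradient unchanged
        return g_unlearn
    return g_unlearn - (dot / (g_retain @ g_retain)) * g_retain

# Toy usage on 3-dimensional "parameter" vectors.
g_u = np.array([1.0, -2.0, 0.5])   # gradient of the unlearning (DPO) loss
g_r = np.array([0.0, 1.0, 0.0])    # gradient of a retain/regularization objective
print(project_out_conflict(g_u, g_r))   # -> [1.0, 0.0, 0.5]
```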
2503.22955 | Yihang Lu | Yihang Lu, Mahwish Yousaf, Xianwei Meng, Enhong Chen | MNT-TNN: Spatiotemporal Traffic Data Imputation via Compact Multimode
Nonlinear Transform-based Tensor Nuclear Norm | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Imputation of random or non-random missing data is a long-standing research
topic and a crucial application for Intelligent Transportation Systems (ITS).
However, with the advent of modern communication technologies such as Global
Satellite Navigation Systems (GNSS), traffic data collection has outpaced
traditional methods, introducing new challenges in random missing value
imputation and increasing demands for spatiotemporal dependency modeling. To
address these issues, we propose a novel spatiotemporal traffic imputation
method, Multimode Nonlinear Transformed Tensor Nuclear Norm (MNT-TNN), grounded
in the Transform-based Tensor Nuclear Norm (TTNN) optimization framework which
exhibits efficient mathematical representations and theoretical guarantees for
the recovery of random missing values. Specifically, we strictly extend the
single-mode transform in TTNN to a multimode transform with nonlinear
activation, effectively capturing the intrinsic multimode spatiotemporal
correlations and low-rankness of the traffic tensor, represented as location
$\times$ location $\times$ time. To solve the nonconvex optimization problem,
we design a proximal alternating minimization (PAM) algorithm with theoretical
convergence guarantees. We suggest an Augmented Transform-based Tensor Nuclear
Norm Families (ATTNNs) framework to enhance the imputation results of TTNN
techniques, especially at very high miss rates. Extensive experiments on real
datasets demonstrate that our proposed MNT-TNN and ATTNNs can outperform the
compared state-of-the-art imputation methods, completing the benchmark of
random missing traffic value imputation.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 02:58:31 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lu",
"Yihang",
""
],
[
"Yousaf",
"Mahwish",
""
],
[
"Meng",
"Xianwei",
""
],
[
"Chen",
"Enhong",
""
]
] | TITLE: MNT-TNN: Spatiotemporal Traffic Data Imputation via Compact Multimode
Nonlinear Transform-based Tensor Nuclear Norm
ABSTRACT: Imputation of random or non-random missing data is a long-standing research
topic and a crucial application for Intelligent Transportation Systems (ITS).
However, with the advent of modern communication technologies such as Global
Satellite Navigation Systems (GNSS), traffic data collection has outpaced
traditional methods, introducing new challenges in random missing value
imputation and increasing demands for spatiotemporal dependency modeling. To
address these issues, we propose a novel spatiotemporal traffic imputation
method, Multimode Nonlinear Transformed Tensor Nuclear Norm (MNT-TNN), grounded
in the Transform-based Tensor Nuclear Norm (TTNN) optimization framework which
exhibits efficient mathematical representations and theoretical guarantees for
the recovery of random missing values. Specifically, we strictly extend the
single-mode transform in TTNN to a multimode transform with nonlinear
activation, effectively capturing the intrinsic multimode spatiotemporal
correlations and low-rankness of the traffic tensor, represented as location
$\times$ location $\times$ time. To solve the nonconvex optimization problem,
we design a proximal alternating minimization (PAM) algorithm with theoretical
convergence guarantees. We suggest an Augmented Transform-based Tensor Nuclear
Norm Families (ATTNNs) framework to enhance the imputation results of TTNN
techniques, especially at very high miss rates. Extensive experiments on real
datasets demonstrate that our proposed MNT-TNN and ATTNNs can outperform the
compared state-of-the-art imputation methods, completing the benchmark of
random missing traffic value imputation.
|
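Transform-based tensor nuclear norm methods such as the TTNN framework referenced in the record above rely on singular value thresholding (SVT), the proximal operator of the nuclear norm, applied inside an alternating minimization loop. The sketch below shows generic SVT-based completion of a single matrix slice with random missing entries; it is a building block for intuition, not the MNT-TNN or PAM algorithm itself.

```python
import numpy as np

def svt(M: np.ndarray, tau: float) -> np.ndarray:
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def impute_lowrank(X: np.ndarray, mask: np.ndarray, tau: float = 0.1, iters: int = 100):
    """Fill missing entries (mask == False) by alternating SVT with data fitting.
    This is a generic low-rank completion loop, not MNT-TNN itself."""
    Y = np.where(mask, X, 0.0)
    for _ in range(iters):
        L = svt(Y, tau)
        Y = np.where(mask, X, L)   # keep observed entries, update the rest
    return Y

# Toy traffic slice: rank-1 ground truth with roughly 40% of entries missing.
rng = np.random.default_rng(1)
truth = np.outer(rng.random(20), rng.random(30))
mask = rng.random(truth.shape) > 0.4
recovered = impute_lowrank(truth, mask)
print(np.abs(recovered - truth)[~mask].mean())   # error on the held-out entries
```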
2503.22962 | Tianren Zhang | Tianren Zhang and Dai-Bei Yang | Multimodal machine learning with large language embedding model for
polymer property prediction | null | null | null | null | cs.LG cond-mat.mtrl-sci physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contemporary large language models (LLMs), such as GPT-4 and Llama, have
harnessed extensive computational power and diverse text corpora to achieve
remarkable proficiency in interpreting and generating domain-specific content,
including materials science. To leverage the domain knowledge embedded within
these models, we propose a simple yet effective multimodal architecture,
PolyLLMem, which integrates text embeddings generated by Llama 3 with molecular
structure embeddings derived from Uni-Mol, for polymer property prediction
tasks. In our model, Low-rank adaptation (LoRA) layers were also incorporated
during the property prediction tasks to refine the embeddings based on our
limited polymer dataset, thereby enhancing their chemical relevance for polymer
SMILES representation. This balanced fusion of fine-tuned textual and
structural information enables PolyLLMem to accurately predict a variety of
polymer properties despite the scarcity of training data. Its performance is
comparable to, and in some cases exceeds, that of graph-based models, as well
as transformer-based models that typically require pretraining on millions of
polymer samples. These findings demonstrate that LLMs, such as Llama, can
effectively capture chemical information encoded in polymer PSMILES, and
underscore the efficacy of multimodal fusion of LLM embeddings and molecular
structure embeddings in overcoming data scarcity and accelerating the discovery
of advanced polymeric materials.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 03:48:11 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhang",
"Tianren",
""
],
[
"Yang",
"Dai-Bei",
""
]
] | TITLE: Multimodal machine learning with large language embedding model for
polymer property prediction
ABSTRACT: Contemporary large language models (LLMs), such as GPT-4 and Llama, have
harnessed extensive computational power and diverse text corpora to achieve
remarkable proficiency in interpreting and generating domain-specific content,
including materials science. To leverage the domain knowledge embedded within
these models, we propose a simple yet effective multimodal architecture,
PolyLLMem, which integrates text embeddings generated by Llama 3 with molecular
structure embeddings derived from Uni-Mol, for polymer property prediction
tasks. In our model, Low-rank adaptation (LoRA) layers were also incorporated
during the property prediction tasks to refine the embeddings based on our
limited polymer dataset, thereby enhancing their chemical relevance for polymer
SMILES representation. This balanced fusion of fine-tuned textual and
structural information enables PolyLLMem to accurately predict a variety of
polymer properties despite the scarcity of training data. Its performance is
comparable to, and in some cases exceeds, that of graph-based models, as well
as transformer-based models that typically require pretraining on millions of
polymer samples. These findings demonstrate that LLMs, such as Llama, can
effectively capture chemical information encoded in polymer PSMILES, and
underscore the efficacy of multimodal fusion of LLM embeddings and molecular
structure embeddings in overcoming data scarcity and accelerating the discovery
of advanced polymeric materials.
|
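The PolyLLMem record above describes fusing Llama text embeddings of polymer PSMILES with Uni-Mol structure embeddings before predicting properties. One minimal way to picture such a fusion head is concatenation followed by a small MLP; the dimensions and the concatenation rule below are assumptions for illustration, not the paper's architecture (which additionally uses LoRA adaptation).

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_head(x, W1, b1, W2, b2):
    """Two-layer regression head over the fused embedding."""
    h = np.maximum(W1 @ x + b1, 0.0)          # ReLU
    return W2 @ h + b2

# Assumed embedding sizes for the two modalities.
text_emb = rng.normal(size=512)               # e.g. pooled text embedding of the PSMILES string
struct_emb = rng.normal(size=256)             # e.g. molecular-structure embedding

fused = np.concatenate([text_emb, struct_emb])  # simple concatenation fusion (assumption)

W1 = rng.normal(scale=0.02, size=(128, fused.size)); b1 = np.zeros(128)
W2 = rng.normal(scale=0.02, size=(1, 128));          b2 = np.zeros(1)

pred_property = mlp_head(fused, W1, b1, W2, b2)
print(float(pred_property[0]))                 # predicted polymer property (toy value)
```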
2503.22963 | Peiyu Chen | Peiyu Chen, Fuling Lin, Weipeng Guan, Peng Lu | SuperEIO: Self-Supervised Event Feature Learning for Event Inertial
Odometry | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event cameras asynchronously output low-latency event streams, promising for
state estimation in high-speed motion and challenging lighting conditions. As
opposed to frame-based cameras, the motion-dependent nature of event cameras
presents persistent challenges in achieving robust event feature detection and
matching. In recent years, learning-based approaches have demonstrated superior
robustness over traditional handcrafted methods in feature detection and
matching, particularly under aggressive motion and HDR scenarios. In this
paper, we propose SuperEIO, a novel framework that leverages learning-based
event-only detection and IMU measurements to achieve event-inertial odometry.
Our event-only feature detection employs a convolutional neural network under
continuous event streams. Moreover, our system adopts the graph neural network
to achieve event descriptor matching for loop closure. The proposed system
utilizes TensorRT to accelerate the inference speed of deep networks, which
ensures low-latency processing and robust real-time operation on
resource-limited platforms. Besides, we evaluate our method extensively on
multiple public datasets, demonstrating its superior accuracy and robustness
compared to other state-of-the-art event-based methods. We have also
open-sourced our pipeline to facilitate research in the field:
https://github.com/arclab-hku/SuperEIO.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 03:58:15 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Peiyu",
""
],
[
"Lin",
"Fuling",
""
],
[
"Guan",
"Weipeng",
""
],
[
"Lu",
"Peng",
""
]
] | TITLE: SuperEIO: Self-Supervised Event Feature Learning for Event Inertial
Odometry
ABSTRACT: Event cameras asynchronously output low-latency event streams, promising for
state estimation in high-speed motion and challenging lighting conditions. As
opposed to frame-based cameras, the motion-dependent nature of event cameras
presents persistent challenges in achieving robust event feature detection and
matching. In recent years, learning-based approaches have demonstrated superior
robustness over traditional handcrafted methods in feature detection and
matching, particularly under aggressive motion and HDR scenarios. In this
paper, we propose SuperEIO, a novel framework that leverages learning-based
event-only detection and IMU measurements to achieve event-inertial odometry.
Our event-only feature detection employs a convolutional neural network under
continuous event streams. Moreover, our system adopts the graph neural network
to achieve event descriptor matching for loop closure. The proposed system
utilizes TensorRT to accelerate the inference speed of deep networks, which
ensures low-latency processing and robust real-time operation on
resource-limited platforms. Besides, we evaluate our method extensively on
multiple public datasets, demonstrating its superior accuracy and robustness
compared to other state-of-the-art event-based methods. We have also
open-sourced our pipeline to facilitate research in the field:
https://github.com/arclab-hku/SuperEIO.
|
2503.22965 | Henri Mueller | Henri Mueller, Yechan Kim, Trevor Gee, Mahla Nejati | Pallet Detection And Localisation From Synthetic Data | 10 pages, 9 images, 4 tables, submitted and accepted to ACRA 2024
(https://www.araa.asn.au/conference/acra-2024/) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The global warehousing industry is experiencing rapid growth, with the market
size projected to grow at an annual rate of 8.1% from 2024 to 2030 [Grand View
Research, 2021]. This expansion has led to a surge in demand for efficient
pallet detection and localisation systems. While automation can significantly
streamline warehouse operations, the development of such systems often requires
extensive manual data annotation, with an average of 35 seconds per image, for
a typical computer vision project. This paper presents a novel approach to
enhance pallet detection and localisation using purely synthetic data and
geometric features derived from their side faces. By implementing a domain
randomisation engine in Unity, the need for time-consuming manual annotation is
eliminated while achieving high-performance results. The proposed method
demonstrates a pallet detection performance of 0.995 mAP50 for single pallets
on a real-world dataset. Additionally, an average position accuracy of less
than 4.2 cm and an average rotation accuracy of 8.2{\deg} were achieved for
pallets within a 5-meter range, with the pallet positioned head-on.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 04:06:02 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Mueller",
"Henri",
""
],
[
"Kim",
"Yechan",
""
],
[
"Gee",
"Trevor",
""
],
[
"Nejati",
"Mahla",
""
]
] | TITLE: Pallet Detection And Localisation From Synthetic Data
ABSTRACT: The global warehousing industry is experiencing rapid growth, with the market
size projected to grow at an annual rate of 8.1% from 2024 to 2030 [Grand View
Research, 2021]. This expansion has led to a surge in demand for efficient
pallet detection and localisation systems. While automation can significantly
streamline warehouse operations, the development of such systems often requires
extensive manual data annotation, with an average of 35 seconds per image, for
a typical computer vision project. This paper presents a novel approach to
enhance pallet detection and localisation using purely synthetic data and
geometric features derived from their side faces. By implementing a domain
randomisation engine in Unity, the need for time-consuming manual annotation is
eliminated while achieving high-performance results. The proposed method
demonstrates a pallet detection performance of 0.995 mAP50 for single pallets
on a real-world dataset. Additionally, an average position accuracy of less
than 4.2 cm and an average rotation accuracy of 8.2{\deg} were achieved for
pallets within a 5-meter range, with the pallet positioned head-on.
|
2503.22970 | Kuntai Cai | Kuntai Cai, Xiaokui Xiao, Yin Yang | PrivPetal: Relational Data Synthesis via Permutation Relations | This is the extended version of a SIGMOD 2025 paper | null | null | null | cs.DB | http://creativecommons.org/licenses/by/4.0/ | Releasing relational databases while preserving privacy is an important
research problem with numerous applications. A canonical approach is to
generate synthetic data under differential privacy (DP), which provides a
strong, rigorous privacy guarantee. The problem is particularly challenging
when the data involve not only entities (e.g., represented by records in
tables) but also relationships (represented by foreign-key references), since
if we generate random records for each entity independently, the resulting
synthetic data usually fail to exhibit realistic relationships. The current
state of the art, PrivLava, addresses this issue by generating random join key
attributes through a sophisticated expectation-maximization (EM) algorithm.
This method, however, is rather costly in terms of privacy budget consumption,
due to the numerous EM iterations needed to retain high data utility.
Consequently, the privacy cost of PrivLava can be prohibitive for some
real-world scenarios.
We present PrivPetal, an approach that addresses the above issues
via a novel concept: permutation relation, which is constructed as a surrogate
to synthesize the flattened relation, avoiding the generation of a
high-dimensional relation directly. The synthesis is done using a refined
Markov random field mechanism, backed by fine-grained privacy analysis.
Extensive experiments using multiple real datasets and the TPC-H benchmark
demonstrate that PrivPetal significantly outperforms existing methods in terms
of aggregate query accuracy on the synthetic data.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 04:24:41 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Cai",
"Kuntai",
""
],
[
"Xiao",
"Xiaokui",
""
],
[
"Yang",
"Yin",
""
]
] | TITLE: PrivPetal: Relational Data Synthesis via Permutation Relations
ABSTRACT: Releasing relational databases while preserving privacy is an important
research problem with numerous applications. A canonical approach is to
generate synthetic data under differential privacy (DP), which provides a
strong, rigorous privacy guarantee. The problem is particularly challenging
when the data involve not only entities (e.g., represented by records in
tables) but also relationships (represented by foreign-key references), since
if we generate random records for each entity independently, the resulting
synthetic data usually fail to exhibit realistic relationships. The current
state of the art, PrivLava, addresses this issue by generating random join key
attributes through a sophisticated expectation-maximization (EM) algorithm.
This method, however, is rather costly in terms of privacy budget consumption,
due to the numerous EM iterations needed to retain high data utility.
Consequently, the privacy cost of PrivLava can be prohibitive for some
real-world scenarios.
We present PrivPetal, an approach that addresses the above issues
via a novel concept: permutation relation, which is constructed as a surrogate
to synthesize the flattened relation, avoiding the generation of a
high-dimensional relation directly. The synthesis is done using a refined
Markov random field mechanism, backed by fine-grained privacy analysis.
Extensive experiments using multiple real datasets and the TPC-H benchmark
demonstrate that PrivPetal significantly outperforms existing methods in terms
of aggregate query accuracy on the synthetic data.
|
2503.22971 | Kanishka Ranaweera Mr. | Kanishka Ranaweera, Azadeh Ghari Neiat, Xiao Liu, Bipasha Kashyap and
Pubudu N. Pathirana | Enhancing Federated Learning Through Secure Cluster-Weighted Client
Aggregation | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Federated learning (FL) has emerged as a promising paradigm in machine
learning, enabling collaborative model training across decentralized devices
without the need for raw data sharing. In FL, a global model is trained
iteratively on local datasets residing on individual devices, each contributing
to the model's improvement. However, the heterogeneous nature of these local
datasets, stemming from diverse user behaviours, device capabilities, and data
distributions, poses a significant challenge. The inherent heterogeneity in
federated learning gives rise to various issues, including model performance
discrepancies, convergence challenges, and potential privacy concerns. As the
global model progresses through rounds of training, the disparities in local
data quality and quantity can impede the overall effectiveness of federated
learning systems. Moreover, maintaining fairness and privacy across diverse
user groups becomes a paramount concern. To address this issue, this paper
introduces a novel FL framework, ClusterGuardFL, that employs dissimilarity
scores, k-means clustering, and reconciliation confidence scores to dynamically
assign weights to client updates. The dissimilarity scores between global and
local models guide the formation of clusters, with cluster size influencing the
weight allocation. Within each cluster, a reconciliation confidence score is
calculated for individual data points, and a softmax layer generates customized
weights for clients. These weights are utilized in the aggregation process,
enhancing the model's robustness and privacy. Experimental results demonstrate
the efficacy of the proposed approach in achieving improved model performance
in diverse datasets.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 04:29:24 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ranaweera",
"Kanishka",
""
],
[
"Neiat",
"Azadeh Ghari",
""
],
[
"Liu",
"Xiao",
""
],
[
"Kashyap",
"Bipasha",
""
],
[
"Pathirana",
"Pubudu N.",
""
]
] | TITLE: Enhancing Federated Learning Through Secure Cluster-Weighted Client
Aggregation
ABSTRACT: Federated learning (FL) has emerged as a promising paradigm in machine
learning, enabling collaborative model training across decentralized devices
without the need for raw data sharing. In FL, a global model is trained
iteratively on local datasets residing on individual devices, each contributing
to the model's improvement. However, the heterogeneous nature of these local
datasets, stemming from diverse user behaviours, device capabilities, and data
distributions, poses a significant challenge. The inherent heterogeneity in
federated learning gives rise to various issues, including model performance
discrepancies, convergence challenges, and potential privacy concerns. As the
global model progresses through rounds of training, the disparities in local
data quality and quantity can impede the overall effectiveness of federated
learning systems. Moreover, maintaining fairness and privacy across diverse
user groups becomes a paramount concern. To address this issue, this paper
introduces a novel FL framework, ClusterGuardFL, that employs dissimilarity
scores, k-means clustering, and reconciliation confidence scores to dynamically
assign weights to client updates. The dissimilarity scores between global and
local models guide the formation of clusters, with cluster size influencing the
weight allocation. Within each cluster, a reconciliation confidence score is
calculated for individual data points, and a softmax layer generates customized
weights for clients. These weights are utilized in the aggregation process,
enhancing the model's robustness and privacy. Experimental results demonstrate
the efficacy of the proposed approach in achieving improved model performance
in diverse datasets.
|
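The ClusterGuardFL record above sketches a concrete aggregation recipe: score each client update's dissimilarity from the global model, cluster the scores, and convert cluster-informed confidences into softmax weights. The toy sketch below follows that flow with an L2 dissimilarity, a tiny one-dimensional k-means, and a cluster-size-based confidence; all three choices are assumptions standing in for the paper's actual dissimilarity and reconciliation scores.

```python
import numpy as np

def kmeans_1d(x: np.ndarray, k: int = 2, iters: int = 20) -> np.ndarray:
    """Tiny k-means on scalar dissimilarity scores; returns cluster labels."""
    centers = np.linspace(x.min(), x.max(), k)
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

global_model = np.array([0.0, 0.0, 0.0])
client_updates = np.array([
    [0.1, 0.0, 0.1],    # well-behaved clients
    [0.2, 0.1, 0.0],
    [5.0, -4.0, 3.0],   # outlier / low-quality update
])

# Dissimilarity of each update from the global model (assumed: L2 distance).
dissim = np.linalg.norm(client_updates - global_model, axis=1)
labels = kmeans_1d(dissim, k=2)

# Assumed confidence: larger clusters of mutually similar updates are trusted more.
cluster_size = np.array([np.sum(labels == l) for l in labels])
weights = softmax(cluster_size - dissim)

new_global = (weights[:, None] * client_updates).sum(axis=0)
print(weights.round(3), new_global.round(3))   # the outlier receives a near-zero weight
```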
2503.22973 | Vivek Iyer | Vivek Iyer, Ricardo Rei, Pinzhen Chen and Alexandra Birch | XL-Instruct: Synthetic Data for Cross-Lingual Open-Ended Generation | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Cross-lingual open-ended generation -- i.e. generating responses in a desired
language different from that of the user's query -- is an important yet
understudied problem. We introduce XL-AlpacaEval, a new benchmark for
evaluating cross-lingual generation capabilities in Large Language Models
(LLMs), and propose XL-Instruct, a high-quality synthetic data generation
method. Fine-tuning with just 8K XL-Instruct-generated instructions
significantly improves model performance, increasing the win rate against
GPT-4o-Mini from 7.4% to 21.5%, and improving on several fine-grained quality
metrics. Additionally, models fine-tuned on XL-Instruct exhibit strong
zero-shot transfer to both English-only and multilingual generation tasks.
Given its consistent gains across the board, we strongly recommend
incorporating XL-Instruct in the post-training pipeline of future multilingual
LLMs. To facilitate further research, we will publicly and freely release the
XL-Instruct and XL-AlpacaEval datasets, which constitute two of the few
cross-lingual resources currently available in the literature.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 04:34:03 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Iyer",
"Vivek",
""
],
[
"Rei",
"Ricardo",
""
],
[
"Chen",
"Pinzhen",
""
],
[
"Birch",
"Alexandra",
""
]
] | TITLE: XL-Instruct: Synthetic Data for Cross-Lingual Open-Ended Generation
ABSTRACT: Cross-lingual open-ended generation -- i.e. generating responses in a desired
language different from that of the user's query -- is an important yet
understudied problem. We introduce XL-AlpacaEval, a new benchmark for
evaluating cross-lingual generation capabilities in Large Language Models
(LLMs), and propose XL-Instruct, a high-quality synthetic data generation
method. Fine-tuning with just 8K XL-Instruct-generated instructions
significantly improves model performance, increasing the win rate against
GPT-4o-Mini from 7.4% to 21.5%, and improving on several fine-grained quality
metrics. Additionally, models fine-tuned on XL-Instruct exhibit strong
zero-shot transfer to both English-only and multilingual generation tasks.
Given its consistent gains across the board, we strongly recommend
incorporating XL-Instruct in the post-training pipeline of future multilingual
LLMs. To facilitate further research, we will publicly and freely release the
XL-Instruct and XL-AlpacaEval datasets, which constitute two of the few
cross-lingual resources currently available in the literature.
|
2503.22983 | Ashesh Ashesh | Ashesh Ashesh, Florian Jug | indiSplit: Bringing Severity Cognizance to Image Decomposition in
Fluorescence Microscopy | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Fluorescence microscopy, while being a key driver for progress in the life
sciences, is also subject to technical limitations. To overcome them,
computational multiplexing techniques have recently been proposed, which allow
multiple cellular structures to be captured in a single image and later be
unmixed. Existing image decomposition methods are trained on a set of
superimposed input images and the respective unmixed target images. It is
critical to note that the relative strength (mixing ratio) of the superimposed
images for a given input is a priori unknown. However, existing methods are
trained on a fixed intensity ratio of superimposed inputs, making them not
cognizant of the range of relative intensities that can occur in fluorescence
microscopy. In this work, we propose a novel method called indiSplit that is
cognizant of the severity of the above-mentioned mixing ratio. Our idea is
based on InDI, a popular iterative method for image restoration, and an ideal
starting point to embrace the unknown mixing ratio in any given input. We
introduce (i) a suitably trained regressor network that predicts the
degradation level (mixing asymmetry) of a given input image and (ii) a
degradation-specific normalization module, enabling degradation-aware inference
across all mixing ratios. We show that this method solves two relevant tasks in
fluorescence microscopy, namely image splitting and bleedthrough removal, and
empirically demonstrate the applicability of indiSplit on $5$ public datasets.
We will release all sources under a permissive license.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 06:00:40 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ashesh",
"Ashesh",
""
],
[
"Jug",
"Florian",
""
]
] | TITLE: indiSplit: Bringing Severity Cognizance to Image Decomposition in
Fluorescence Microscopy
ABSTRACT: Fluorescence microscopy, while being a key driver for progress in the life
sciences, is also subject to technical limitations. To overcome them,
computational multiplexing techniques have recently been proposed, which allow
multiple cellular structures to be captured in a single image and later be
unmixed. Existing image decomposition methods are trained on a set of
superimposed input images and the respective unmixed target images. It is
critical to note that the relative strength (mixing ratio) of the superimposed
images for a given input is a priori unknown. However, existing methods are
trained on a fixed intensity ratio of superimposed inputs, making them not
cognizant of the range of relative intensities that can occur in fluorescence
microscopy. In this work, we propose a novel method called indiSplit that is
cognizant of the severity of the above-mentioned mixing ratio. Our idea is
based on InDI, a popular iterative method for image restoration, and an ideal
starting point to embrace the unknown mixing ratio in any given input. We
introduce (i) a suitably trained regressor network that predicts the
degradation level (mixing asymmetry) of a given input image and (ii) a
degradation-specific normalization module, enabling degradation-aware inference
across all mixing ratios. We show that this method solves two relevant tasks in
fluorescence microscopy, namely image splitting and bleedthrough removal, and
empirically demonstrate the applicability of indiSplit on $5$ public datasets.
We will release all sources under a permissive license.
|
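The indiSplit record above pairs a regressor that predicts the mixing asymmetry of a superimposed input with a degradation-specific normalization module. The sketch below mimics that interface with a crude intensity-based ratio guess and a simple rescaling; both are placeholders for the trained networks and are not the paper's actual modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_mixing_ratio(mixed: np.ndarray) -> float:
    """Stand-in for the regressor network: a crude intensity-based guess
    (the real predictor is a trained network; this is purely illustrative)."""
    return float(np.clip(mixed.mean() / (mixed.max() + 1e-8), 0.05, 0.95))

def degradation_aware_normalize(mixed: np.ndarray, ratio: float) -> np.ndarray:
    """Rescale the input according to the predicted mixing asymmetry so the
    downstream splitter sees inputs in a consistent range (assumed form)."""
    return (mixed - mixed.mean()) / (mixed.std() + 1e-8) * (1.0 / max(ratio, 1e-3))

# Toy superimposed image: channel A at 70% strength plus channel B at 30%.
a, b = rng.random((64, 64)), rng.random((64, 64))
mixed = 0.7 * a + 0.3 * b
ratio = predict_mixing_ratio(mixed)
normalized = degradation_aware_normalize(mixed, ratio)
print(round(ratio, 2), round(float(normalized.std()), 2))
```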
2503.22984 | Tianchen Zhao Dr. | Zhuowei Li, Tianchen Zhao, Xiang Xu, Zheng Zhang, Zhihua Li, Xuanbai
Chen, Qin Zhang, Alessandro Bergamo, Anil K. Jain, Yifan Xing | Optimal Transport-Guided Source-Free Adaptation for Face Anti-Spoofing | 15 pages, 7 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Developing a face anti-spoofing model that meets the security requirements of
clients worldwide is challenging due to the domain gap between training
datasets and diverse end-user test data. Moreover, for security and privacy
reasons, it is undesirable for clients to share a large amount of their face
data with service providers. In this work, we introduce a novel method in which
the face anti-spoofing model can be adapted by the client itself to a target
domain at test time using only a small sample of data while keeping model
parameters and training data inaccessible to the client. Specifically, we
develop a prototype-based base model and an optimal transport-guided adaptor
that enables adaptation in either a lightweight training or training-free
fashion, without updating the base model's parameters. Furthermore, we propose
geodesic mixup, an optimal transport-based synthesis method that generates
augmented training data along the geodesic path between source prototypes and
target data distribution. This allows training a lightweight classifier to
effectively adapt to target-specific characteristics while retaining essential
knowledge learned from the source domain. In cross-domain and cross-attack
settings, compared with recent methods, our method achieves average relative
improvements of 19.17% in HTER and 8.58% in AUC, respectively.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 06:10:34 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Zhuowei",
""
],
[
"Zhao",
"Tianchen",
""
],
[
"Xu",
"Xiang",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Li",
"Zhihua",
""
],
[
"Chen",
"Xuanbai",
""
],
[
"Zhang",
"Qin",
""
],
[
"Bergamo",
"Alessandro",
""
],
[
"Jain",
"Anil K.",
""
],
[
"Xing",
"Yifan",
""
]
] | TITLE: Optimal Transport-Guided Source-Free Adaptation for Face Anti-Spoofing
ABSTRACT: Developing a face anti-spoofing model that meets the security requirements of
clients worldwide is challenging due to the domain gap between training
datasets and diverse end-user test data. Moreover, for security and privacy
reasons, it is undesirable for clients to share a large amount of their face
data with service providers. In this work, we introduce a novel method in which
the face anti-spoofing model can be adapted by the client itself to a target
domain at test time using only a small sample of data while keeping model
parameters and training data inaccessible to the client. Specifically, we
develop a prototype-based base model and an optimal transport-guided adaptor
that enables adaptation in either a lightweight training or training-free
fashion, without updating the base model's parameters. Furthermore, we propose
geodesic mixup, an optimal transport-based synthesis method that generates
augmented training data along the geodesic path between source prototypes and
target data distribution. This allows training a lightweight classifier to
effectively adapt to target-specific characteristics while retaining essential
knowledge learned from the source domain. In cross-domain and cross-attack
settings, compared with recent methods, our method achieves average relative
improvements of 19.17% in HTER and 8.58% in AUC, respectively.
|
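The record above names "geodesic mixup": synthesizing augmented features along the geodesic between source prototypes and target-domain data. If features live on the unit hypersphere, the geodesic interpolation is the standard slerp formula, which the sketch below implements; the unit-sphere assumption and the mixing-coefficient range are ours, not necessarily the paper's construction.

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between the directions of a and b."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(a @ b, -1.0, 1.0))
    if omega < 1e-8:                       # nearly identical directions
        return a
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def geodesic_mixup(prototype: np.ndarray, target_feat: np.ndarray, rng) -> np.ndarray:
    """Draw a mixing coefficient and move the source prototype part of the way
    toward a target-domain feature along the sphere's geodesic."""
    t = rng.uniform(0.2, 0.8)              # assumed mixing range
    return slerp(prototype, target_feat, t)

rng = np.random.default_rng(0)
proto = rng.normal(size=128)               # source-domain class prototype
target = rng.normal(size=128)              # unlabeled target-domain feature
augmented = geodesic_mixup(proto, target, rng)
print(np.linalg.norm(augmented))           # stays on the unit sphere (about 1.0)
```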
2503.22985 | Zhengyi Zhao | Zhengyi Zhao, Shubo Zhang, Zezhong Wang, Bin Liang, Binyang Li,
Kam-Fai Wong | FReM: A Flexible Reasoning Mechanism for Balancing Quick and Slow
Thinking in Long-Context Question Answering | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Long-context question-answering (LCQA) systems have greatly benefited from
the powerful reasoning capabilities of large language models (LLMs), which can
be categorized into slow and quick reasoning modes. However, both modes have
their limitations. Slow thinking generally tends to explore every possible
reasoning path, which leads to heavy overthinking and wasted time. Quick
thinking usually relies on pattern matching rather than truly understanding the
query logic, and thus often misses a proper understanding of the question. To
address these issues, we
propose FReM: Flexible Reasoning Mechanism, a method that adjusts reasoning
depth according to the complexity of each question. Specifically, FReM
leverages synthetic reference QA examples to provide an explicit chain of
thought, enabling efficient handling of simple queries while allowing deeper
reasoning for more complex ones. By doing so, FReM helps quick-thinking models
move beyond superficial pattern matching and narrows the reasoning space for
slow-thinking models to avoid unnecessary exploration. Experiments on seven QA
datasets show that FReM improves reasoning accuracy and scalability,
particularly for complex multihop questions, indicating its potential to
advance LCQA methodologies.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 06:20:12 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhao",
"Zhengyi",
""
],
[
"Zhang",
"Shubo",
""
],
[
"Wang",
"Zezhong",
""
],
[
"Liang",
"Bin",
""
],
[
"Li",
"Binyang",
""
],
[
"Wong",
"Kam-Fai",
""
]
] | TITLE: FReM: A Flexible Reasoning Mechanism for Balancing Quick and Slow
Thinking in Long-Context Question Answering
ABSTRACT: Long-context question-answering (LCQA) systems have greatly benefited from
the powerful reasoning capabilities of large language models (LLMs), which can
be categorized into slow and quick reasoning modes. However, both modes have
their limitations. Slow thinking generally tends to explore every possible
reasoning path, which leads to heavy overthinking and wasted time. Quick
thinking usually relies on pattern matching rather than truly understanding the
query logic, and thus often misses a proper understanding of the question. To
address these issues, we
propose FReM: Flexible Reasoning Mechanism, a method that adjusts reasoning
depth according to the complexity of each question. Specifically, FReM
leverages synthetic reference QA examples to provide an explicit chain of
thought, enabling efficient handling of simple queries while allowing deeper
reasoning for more complex ones. By doing so, FReM helps quick-thinking models
move beyond superficial pattern matching and narrows the reasoning space for
slow-thinking models to avoid unnecessary exploration. Experiments on seven QA
datasets show that FReM improves reasoning accuracy and scalability,
particularly for complex multihop questions, indicating its potential to
advance LCQA methodologies.
|
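FReM, as summarized in the record above, adapts reasoning depth to question complexity and uses synthetic reference QA examples to supply an explicit chain of thought for harder questions. The sketch below shows one naive way such routing could look; the complexity estimator and the prompt templates are invented for illustration and do not reflect FReM's actual mechanism.

```python
def estimate_complexity(question: str) -> int:
    """Crude complexity proxy: count of multi-hop cue words (an assumption,
    not FReM's actual estimator)."""
    cues = ["and", "then", "before", "after", "compare", "both"]
    return sum(question.lower().count(c) for c in cues)

def answer(question: str, reference_examples: list[str]) -> str:
    """Route simple questions to a quick template and complex ones to a
    slower, chain-of-thought style prompt built from reference examples."""
    if estimate_complexity(question) <= 1:
        return f"[quick] Answer directly: {question}"
    chain = "\n".join(reference_examples)
    return f"[slow] Work step by step using these examples:\n{chain}\nQuestion: {question}"

print(answer("Who wrote the novel?", []))
print(answer("Who wrote the novel and when was it adapted?", ["Example QA with reasoning"]))
```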
2503.22989 | Gabriel Recchia | Gabriel Recchia, Chatrik Singh Mangat, Issac Li, Gayatri Krishnakumar | FindTheFlaws: Annotated Errors for Detecting Flawed Reasoning and
Scalable Oversight Research | 43 pages, 3 figures. for associated repository, see
https://github.com/modulo-research/findtheflaws | null | null | null | cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | As AI models tackle increasingly complex problems, ensuring reliable human
oversight becomes more challenging due to the difficulty of verifying
solutions. Approaches to scaling AI supervision include debate, in which two
agents engage in structured dialogue to help a judge evaluate claims; critique,
in which models identify potential flaws in proposed solutions; and
prover-verifier games, in which a capable 'prover' model generates solutions
that must be verifiable by a less capable 'verifier'. Evaluations of the
scalability of these and similar approaches to difficult problems benefit from
datasets that include (1) long-form expert-verified correct solutions and (2)
long-form flawed solutions with annotations highlighting specific errors, but
few are available.
To address this gap, we present FindTheFlaws, a group of five diverse
datasets spanning medicine, mathematics, science, coding, and the Lojban
language. Each dataset contains questions and long-form solutions with expert
annotations validating their correctness or identifying specific error(s) in
the reasoning. We evaluate frontier models' critiquing capabilities and observe
a range of performance that can be leveraged for scalable oversight
experiments: models performing more poorly on particular datasets can serve as
judges/verifiers for more capable models. Additionally, for some task/dataset
combinations, expert baselines exceed even top model performance, making them
more beneficial for scalable oversight experiments.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 06:38:30 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Recchia",
"Gabriel",
""
],
[
"Mangat",
"Chatrik Singh",
""
],
[
"Li",
"Issac",
""
],
[
"Krishnakumar",
"Gayatri",
""
]
] | TITLE: FindTheFlaws: Annotated Errors for Detecting Flawed Reasoning and
Scalable Oversight Research
ABSTRACT: As AI models tackle increasingly complex problems, ensuring reliable human
oversight becomes more challenging due to the difficulty of verifying
solutions. Approaches to scaling AI supervision include debate, in which two
agents engage in structured dialogue to help a judge evaluate claims; critique,
in which models identify potential flaws in proposed solutions; and
prover-verifier games, in which a capable 'prover' model generates solutions
that must be verifiable by a less capable 'verifier'. Evaluations of the
scalability of these and similar approaches to difficult problems benefit from
datasets that include (1) long-form expert-verified correct solutions and (2)
long-form flawed solutions with annotations highlighting specific errors, but
few are available.
To address this gap, we present FindTheFlaws, a group of five diverse
datasets spanning medicine, mathematics, science, coding, and the Lojban
language. Each dataset contains questions and long-form solutions with expert
annotations validating their correctness or identifying specific error(s) in
the reasoning. We evaluate frontier models' critiquing capabilities and observe
a range of performance that can be leveraged for scalable oversight
experiments: models performing more poorly on particular datasets can serve as
judges/verifiers for more capable models. Additionally, for some task/dataset
combinations, expert baselines exceed even top model performance, making them
more beneficial for scalable oversight experiments.
|
2503.22998 | Yuni Lai | Yuni Lai and Yulin Zhu and Yixuan Sun and Yulun Wu and Bin Xiao and
Gaolei Li and Jianhua Li and Kai Zhou | AuditVotes: A Framework Towards More Deployable Certified Robustness for
Graph Neural Networks | 20 pages | null | null | null | cs.LG cs.AI cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite advancements in Graph Neural Networks (GNNs), adaptive attacks
continue to challenge their robustness. Certified robustness based on
randomized smoothing has emerged as a promising solution, offering provable
guarantees that a model's predictions remain stable under adversarial
perturbations within a specified range. However, existing methods face a
critical trade-off between accuracy and robustness, as achieving stronger
robustness requires introducing greater noise into the input graph. This
excessive randomization degrades data quality and disrupts prediction
consistency, limiting the practical deployment of certifiably robust GNNs in
real-world scenarios where both accuracy and robustness are essential. To
address this challenge, we propose \textbf{AuditVotes}, the first framework to
achieve both high clean accuracy and certifiably robust accuracy for GNNs. It
integrates randomized smoothing with two key components,
\underline{au}gmentation and con\underline{dit}ional smoothing, aiming to
improve data quality and prediction consistency. The augmentation, acting as a
pre-processing step, de-noises the randomized graph, significantly improving
data quality and clean accuracy. The conditional smoothing, serving as a
post-processing step, employs a filtering function to selectively count votes,
thereby filtering low-quality predictions and improving voting consistency.
Extensive experimental results demonstrate that AuditVotes significantly
enhances clean accuracy, certified robustness, and empirical robustness while
maintaining high computational efficiency. Notably, compared to baseline
randomized smoothing, AuditVotes improves clean accuracy by $437.1\%$ and
certified accuracy by $409.3\%$ when the attacker can arbitrarily insert $20$
edges on the Cora-ML datasets, representing a substantial step toward deploying
certifiably robust GNNs in real-world applications.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 07:27:32 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lai",
"Yuni",
""
],
[
"Zhu",
"Yulin",
""
],
[
"Sun",
"Yixuan",
""
],
[
"Wu",
"Yulun",
""
],
[
"Xiao",
"Bin",
""
],
[
"Li",
"Gaolei",
""
],
[
"Li",
"Jianhua",
""
],
[
"Zhou",
"Kai",
""
]
] | TITLE: AuditVotes: A Framework Towards More Deployable Certified Robustness for
Graph Neural Networks
ABSTRACT: Despite advancements in Graph Neural Networks (GNNs), adaptive attacks
continue to challenge their robustness. Certified robustness based on
randomized smoothing has emerged as a promising solution, offering provable
guarantees that a model's predictions remain stable under adversarial
perturbations within a specified range. However, existing methods face a
critical trade-off between accuracy and robustness, as achieving stronger
robustness requires introducing greater noise into the input graph. This
excessive randomization degrades data quality and disrupts prediction
consistency, limiting the practical deployment of certifiably robust GNNs in
real-world scenarios where both accuracy and robustness are essential. To
address this challenge, we propose \textbf{AuditVotes}, the first framework to
achieve both high clean accuracy and certifiably robust accuracy for GNNs. It
integrates randomized smoothing with two key components,
\underline{au}gmentation and con\underline{dit}ional smoothing, aiming to
improve data quality and prediction consistency. The augmentation, acting as a
pre-processing step, de-noises the randomized graph, significantly improving
data quality and clean accuracy. The conditional smoothing, serving as a
post-processing step, employs a filtering function to selectively count votes,
thereby filtering low-quality predictions and improving voting consistency.
Extensive experimental results demonstrate that AuditVotes significantly
enhances clean accuracy, certified robustness, and empirical robustness while
maintaining high computational efficiency. Notably, compared to baseline
randomized smoothing, AuditVotes improves clean accuracy by $437.1\%$ and
certified accuracy by $409.3\%$ when the attacker can arbitrarily insert $20$
edges on the Cora-ML datasets, representing a substantial step toward deploying
certifiably robust GNNs in real-world applications.
|
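The AuditVotes record above combines randomized smoothing with a conditional (filtered) vote count. Stripped of the graph-specific parts, the voting side looks like the sketch below: predictions over many randomized copies of an input are tallied, but low-confidence votes are discarded before the majority is taken. The Gaussian perturbation, the confidence filter, and the threshold are illustrative assumptions; the paper's filtering function and the certification step are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_predict(predict, x, n_samples=200, conf_threshold=0.6, n_classes=3):
    """Majority vote over randomized copies of x, counting only confident votes."""
    votes = np.zeros(n_classes)
    for _ in range(n_samples):
        x_noisy = x + rng.normal(scale=0.5, size=x.shape)   # randomized smoothing
        probs = predict(x_noisy)
        label = int(np.argmax(probs))
        if probs[label] >= conf_threshold:                   # conditional smoothing:
            votes[label] += 1                                # filter low-quality votes
    return int(np.argmax(votes)), votes

def toy_classifier(x):
    """Stand-in base classifier: softmax over fixed linear scores."""
    W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

label, votes = smoothed_predict(toy_classifier, np.array([2.0, 0.5]))
print(label, votes)   # class 0 should win; unconfident votes are ignored
```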
2503.23011 | Hoigi Seo | Hoigi Seo, Junseo Bang, Haechang Lee, Joohoon Lee, Byung Hyun Lee, Se
Young Chun | On Geometrical Properties of Text Token Embeddings for Strong Semantic
Binding in Text-to-Image Generation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Text-to-Image (T2I) models often suffer from text-image misalignment in
complex scenes involving multiple objects and attributes. Semantic binding aims
to mitigate this issue by accurately associating the generated attributes and
objects with their corresponding noun phrases (NPs). Existing methods rely on
text or latent optimizations, yet the factors influencing semantic binding
remain underexplored. Here we investigate the geometrical properties of text
token embeddings and their cross-attention (CA) maps. We empirically and
theoretically analyze that the geometrical properties of token embeddings,
specifically both angular distances and norms, play a crucial role in CA map
differentiation. Then, we propose \textbf{TeeMo}, a training-free text
embedding-aware T2I framework with strong semantic binding. TeeMo consists of
Causality-Aware Projection-Out (CAPO) for distinct inter-NP CA maps and
Adaptive Token Mixing (ATM) with our loss to enhance inter-NP separation while
maintaining intra-NP cohesion in CA maps. Extensive experiments confirm TeeMo
consistently outperforms prior arts across diverse baselines and datasets.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 08:31:30 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Seo",
"Hoigi",
""
],
[
"Bang",
"Junseo",
""
],
[
"Lee",
"Haechang",
""
],
[
"Lee",
"Joohoon",
""
],
[
"Lee",
"Byung Hyun",
""
],
[
"Chun",
"Se Young",
""
]
] | TITLE: On Geometrical Properties of Text Token Embeddings for Strong Semantic
Binding in Text-to-Image Generation
ABSTRACT: Text-to-Image (T2I) models often suffer from text-image misalignment in
complex scenes involving multiple objects and attributes. Semantic binding aims
to mitigate this issue by accurately associating the generated attributes and
objects with their corresponding noun phrases (NPs). Existing methods rely on
text or latent optimizations, yet the factors influencing semantic binding
remain underexplored. Here we investigate the geometrical properties of text
token embeddings and their cross-attention (CA) maps. We empirically and
theoretically analyze that the geometrical properties of token embeddings,
specifically both angular distances and norms, play a crucial role in CA map
differentiation. Then, we propose \textbf{TeeMo}, a training-free text
embedding-aware T2I framework with strong semantic binding. TeeMo consists of
Causality-Aware Projection-Out (CAPO) for distinct inter-NP CA maps and
Adaptive Token Mixing (ATM) with our loss to enhance inter-NP separation while
maintaining intra-NP cohesion in CA maps. Extensive experiments confirm TeeMo
consistently outperforms prior arts across diverse baselines and datasets.
|
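The TeeMo record above attributes cross-attention map separation to two geometric quantities of text-token embeddings: their norms and their pairwise angular distances. Computing those quantities is straightforward, as the sketch below shows for a toy embedding matrix; it only makes the quantities concrete and is not the TeeMo method.

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = ["a", "red", "cube", "and", "a", "blue", "sphere"]
E = rng.normal(size=(len(tokens), 64))        # toy token-embedding matrix

# Norms of each token embedding.
norms = np.linalg.norm(E, axis=1)

# Pairwise angular distances (radians) between token embeddings.
unit = E / norms[:, None]
cos = np.clip(unit @ unit.T, -1.0, 1.0)
angles = np.arccos(cos)

print(dict(zip(tokens, norms.round(2))))
print(angles[tokens.index("red"), tokens.index("blue")].round(3))
```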
2503.23022 | Xianglong He | Xianglong He, Junyi Chen, Di Huang, Zexiang Liu, Xiaoshui Huang, Wanli
Ouyang, Chun Yuan, Yangguang Li | MeshCraft: Exploring Efficient and Controllable Mesh Generation with
Flow-based DiTs | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In the domain of 3D content creation, achieving optimal mesh topology through
AI models has long been a pursuit for 3D artists. Previous methods, such as
MeshGPT, have explored the generation of ready-to-use 3D objects via mesh
auto-regressive techniques. While these methods produce visually impressive
results, their reliance on token-by-token predictions in the auto-regressive
process leads to several significant limitations. These include extremely slow
generation speeds and an uncontrollable number of mesh faces. In this paper, we
introduce MeshCraft, a novel framework for efficient and controllable mesh
generation, which leverages continuous spatial diffusion to generate discrete
triangle faces. Specifically, MeshCraft consists of two core components: 1) a
transformer-based VAE that encodes raw meshes into continuous face-level tokens
and decodes them back to the original meshes, and 2) a flow-based diffusion
transformer conditioned on the number of faces, enabling the generation of
high-quality 3D meshes with a predefined number of faces. By utilizing the
diffusion model for the simultaneous generation of the entire mesh topology,
MeshCraft achieves high-fidelity mesh generation at significantly faster speeds
compared to auto-regressive methods. Specifically, MeshCraft can generate an
800-face mesh in just 3.2 seconds (35$\times$ faster than existing baselines).
Extensive experiments demonstrate that MeshCraft outperforms state-of-the-art
techniques in both qualitative and quantitative evaluations on ShapeNet dataset
and demonstrates superior performance on Objaverse dataset. Moreover, it
integrates seamlessly with existing conditional guidance strategies, showcasing
its potential to relieve artists from the time-consuming manual work involved
in mesh creation.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 09:21:50 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"He",
"Xianglong",
""
],
[
"Chen",
"Junyi",
""
],
[
"Huang",
"Di",
""
],
[
"Liu",
"Zexiang",
""
],
[
"Huang",
"Xiaoshui",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Yuan",
"Chun",
""
],
[
"Li",
"Yangguang",
""
]
] | TITLE: MeshCraft: Exploring Efficient and Controllable Mesh Generation with
Flow-based DiTs
ABSTRACT: In the domain of 3D content creation, achieving optimal mesh topology through
AI models has long been a pursuit for 3D artists. Previous methods, such as
MeshGPT, have explored the generation of ready-to-use 3D objects via mesh
auto-regressive techniques. While these methods produce visually impressive
results, their reliance on token-by-token predictions in the auto-regressive
process leads to several significant limitations. These include extremely slow
generation speeds and an uncontrollable number of mesh faces. In this paper, we
introduce MeshCraft, a novel framework for efficient and controllable mesh
generation, which leverages continuous spatial diffusion to generate discrete
triangle faces. Specifically, MeshCraft consists of two core components: 1) a
transformer-based VAE that encodes raw meshes into continuous face-level tokens
and decodes them back to the original meshes, and 2) a flow-based diffusion
transformer conditioned on the number of faces, enabling the generation of
high-quality 3D meshes with a predefined number of faces. By utilizing the
diffusion model for the simultaneous generation of the entire mesh topology,
MeshCraft achieves high-fidelity mesh generation at significantly faster speeds
compared to auto-regressive methods. Specifically, MeshCraft can generate an
800-face mesh in just 3.2 seconds (35$\times$ faster than existing baselines).
Extensive experiments demonstrate that MeshCraft outperforms state-of-the-art
techniques in both qualitative and quantitative evaluations on ShapeNet dataset
and demonstrates superior performance on Objaverse dataset. Moreover, it
integrates seamlessly with existing conditional guidance strategies, showcasing
its potential to relieve artists from the time-consuming manual work involved
in mesh creation.
|
2503.23024 | Zhihao Yuan | Zhihao Yuan, Yibo Peng, Jinke Ren, Yinghong Liao, Yatong Han, Chun-Mei
Feng, Hengshuang Zhao, Guanbin Li, Shuguang Cui, Zhen Li | Empowering Large Language Models with 3D Situation Awareness | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Driven by the great success of Large Language Models (LLMs) in the 2D image
domain, their applications in 3D scene understanding have emerged as a new
trend. A key difference between 3D and 2D is that the situation of an
egocentric observer in 3D scenes can change, resulting in different
descriptions (e.g., "left" or "right"). However, current LLM-based methods
overlook the egocentric perspective and simply use datasets from a global
viewpoint. To address this issue, we propose a novel approach to automatically
generate a situation-aware dataset by leveraging the scanning trajectory during
data collection and utilizing Vision-Language Models (VLMs) to produce
high-quality captions and question-answer pairs. Furthermore, we introduce a
situation grounding module to explicitly predict the position and orientation
of the observer's viewpoint, thereby enabling LLMs to ground situation descriptions
in 3D scenes. We evaluate our approach on several benchmarks, demonstrating
that our method effectively enhances the 3D situational awareness of LLMs while
significantly expanding existing datasets and reducing manual effort.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 09:34:16 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yuan",
"Zhihao",
""
],
[
"Peng",
"Yibo",
""
],
[
"Ren",
"Jinke",
""
],
[
"Liao",
"Yinghong",
""
],
[
"Han",
"Yatong",
""
],
[
"Feng",
"Chun-Mei",
""
],
[
"Zhao",
"Hengshuang",
""
],
[
"Li",
"Guanbin",
""
],
[
"Cui",
"Shuguang",
""
],
[
"Li",
"Zhen",
""
]
] | TITLE: Empowering Large Language Models with 3D Situation Awareness
ABSTRACT: Driven by the great success of Large Language Models (LLMs) in the 2D image
domain, their applications in 3D scene understanding have emerged as a new
trend. A key difference between 3D and 2D is that the situation of an
egocentric observer in 3D scenes can change, resulting in different
descriptions (e.g., "left" or "right"). However, current LLM-based methods
overlook the egocentric perspective and simply use datasets from a global
viewpoint. To address this issue, we propose a novel approach to automatically
generate a situation-aware dataset by leveraging the scanning trajectory during
data collection and utilizing Vision-Language Models (VLMs) to produce
high-quality captions and question-answer pairs. Furthermore, we introduce a
situation grounding module to explicitly predict the position and orientation
of the observer's viewpoint, thereby enabling LLMs to ground situation descriptions
in 3D scenes. We evaluate our approach on several benchmarks, demonstrating
that our method effectively enhances the 3D situational awareness of LLMs while
significantly expanding existing datasets and reducing manual effort.
|
2503.23026 | Ziang Lu | Ziang Lu, Lei Guo, Xu Yu, Zhiyong Cheng, Xiaohui Han, Lei Zhu | Federated Semantic Learning for Privacy-preserving Cross-domain
Recommendation | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the evolving landscape of recommender systems, the challenge of
effectively conducting privacy-preserving Cross-Domain Recommendation (CDR),
especially under strict non-overlapping constraints, has emerged as a key
focus. Although extensive research has made significant progress, several
limitations still exist: 1) Previous semantic-based methods fail to deeply
exploit rich textual information, since they quantize the text into codes,
losing its original rich semantics. 2) The current solution solely relies on
the text-modality, while the synergistic effects with the ID-modality are
ignored. 3) Existing studies do not consider the impact of irrelevant semantic
features, leading to inaccurate semantic representation. To address these
challenges, we introduce federated semantic learning and devise FFMSR as our
solution. For Limitation 1, we locally learn items' semantic encodings from
their original texts by a multi-layer semantic encoder, and then cluster them
on the server to facilitate the transfer of semantic knowledge between domains.
To tackle Limitation 2, we integrate both ID and Text modalities on the
clients, and utilize them to learn different aspects of items. To handle
Limitation 3, a Fast Fourier Transform (FFT)-based filter and a gating
mechanism are developed to alleviate the impact of irrelevant semantic
information in the local model. We conduct extensive experiments on two
real-world datasets, and the results demonstrate the superiority of our FFMSR
method over other SOTA methods. Our source codes are publicly available at:
https://github.com/Sapphire-star/FFMSR.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 09:37:11 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lu",
"Ziang",
""
],
[
"Guo",
"Lei",
""
],
[
"Yu",
"Xu",
""
],
[
"Cheng",
"Zhiyong",
""
],
[
"Han",
"Xiaohui",
""
],
[
"Zhu",
"Lei",
""
]
] | TITLE: Federated Semantic Learning for Privacy-preserving Cross-domain
Recommendation
ABSTRACT: In the evolving landscape of recommender systems, the challenge of
effectively conducting privacy-preserving Cross-Domain Recommendation (CDR),
especially under strict non-overlapping constraints, has emerged as a key
focus. Although extensive research has made significant progress, several
limitations still exist: 1) Previous semantic-based methods fail to deeply
exploit rich textual information, since they quantize the text into codes,
losing its original rich semantics. 2) The current solution solely relies on
the text-modality, while the synergistic effects with the ID-modality are
ignored. 3) Existing studies do not consider the impact of irrelevant semantic
features, leading to inaccurate semantic representation. To address these
challenges, we introduce federated semantic learning and devise FFMSR as our
solution. For Limitation 1, we locally learn items' semantic encodings from
their original texts by a multi-layer semantic encoder, and then cluster them
on the server to facilitate the transfer of semantic knowledge between domains.
To tackle Limitation 2, we integrate both ID and Text modalities on the
clients, and utilize them to learn different aspects of items. To handle
Limitation 3, a Fast Fourier Transform (FFT)-based filter and a gating
mechanism are developed to alleviate the impact of irrelevant semantic
information in the local model. We conduct extensive experiments on two
real-world datasets, and the results demonstrate the superiority of our FFMSR
method over other SOTA methods. Our source codes are publicly available at:
https://github.com/Sapphire-star/FFMSR.
|
2503.23029 | Yichun Feng | Yichun Feng, Jiawei Wang, Ruikun He, Lu Zhou, Yixue Li | A Retrieval-Augmented Knowledge Mining Method with Deep Thinking LLMs
for Biomedical Research and Clinical Support | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Knowledge graphs and large language models (LLMs) are key tools for
biomedical knowledge integration and reasoning, facilitating structured
organization of scientific articles and discovery of complex semantic
relationships. However, current methods face challenges: knowledge graph
construction is limited by complex terminology, data heterogeneity, and rapid
knowledge evolution, while LLMs show limitations in retrieval and reasoning,
making it difficult to uncover cross-document associations and reasoning
pathways. To address these issues, we propose a pipeline that uses LLMs to
construct a biomedical knowledge graph (BioStrataKG) from large-scale articles
and builds a cross-document question-answering dataset (BioCDQA) to evaluate
latent knowledge retrieval and multi-hop reasoning. We then introduce
Integrated and Progressive Retrieval-Augmented Reasoning (IP-RAR) to enhance
retrieval accuracy and knowledge reasoning. IP-RAR maximizes information recall
through Integrated Reasoning-based Retrieval and refines knowledge via
Progressive Reasoning-based Generation, using self-reflection to achieve deep
thinking and precise contextual understanding. Experiments show that IP-RAR
improves document retrieval F1 score by 20\% and answer generation accuracy by
25\% over existing methods. This framework helps doctors efficiently integrate
treatment evidence for personalized medication plans and enables researchers to
analyze advancements and research gaps, accelerating scientific discovery and
decision-making.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 09:56:42 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Feng",
"Yichun",
""
],
[
"Wang",
"Jiawei",
""
],
[
"He",
"Ruikun",
""
],
[
"Zhou",
"Lu",
""
],
[
"Li",
"Yixue",
""
]
] | TITLE: A Retrieval-Augmented Knowledge Mining Method with Deep Thinking LLMs
for Biomedical Research and Clinical Support
ABSTRACT: Knowledge graphs and large language models (LLMs) are key tools for
biomedical knowledge integration and reasoning, facilitating structured
organization of scientific articles and discovery of complex semantic
relationships. However, current methods face challenges: knowledge graph
construction is limited by complex terminology, data heterogeneity, and rapid
knowledge evolution, while LLMs show limitations in retrieval and reasoning,
making it difficult to uncover cross-document associations and reasoning
pathways. To address these issues, we propose a pipeline that uses LLMs to
construct a biomedical knowledge graph (BioStrataKG) from large-scale articles
and builds a cross-document question-answering dataset (BioCDQA) to evaluate
latent knowledge retrieval and multi-hop reasoning. We then introduce
Integrated and Progressive Retrieval-Augmented Reasoning (IP-RAR) to enhance
retrieval accuracy and knowledge reasoning. IP-RAR maximizes information recall
through Integrated Reasoning-based Retrieval and refines knowledge via
Progressive Reasoning-based Generation, using self-reflection to achieve deep
thinking and precise contextual understanding. Experiments show that IP-RAR
improves document retrieval F1 score by 20\% and answer generation accuracy by
25\% over existing methods. This framework helps doctors efficiently integrate
treatment evidence for personalized medication plans and enables researchers to
analyze advancements and research gaps, accelerating scientific discovery and
decision-making.
|
2503.23032 | Fang Junjie | Yuyuan Li, Junjie Fang, Chaochao Chen, Xiaolin Zheng, Yizhao Zhang,
Zhongxuan Han | Reproducibility Companion Paper: Making Users Indistinguishable:
Attribute-wise Unlearning in Recommender Systems | null | null | null | null | cs.IR cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we reproduce the experimental results presented in our
previous work titled "Making Users Indistinguishable: Attribute-wise Unlearning
in Recommender Systems," which was published in the proceedings of the 31st ACM
International Conference on Multimedia. This paper aims to validate the
effectiveness of our proposed method and help others reproduce our experimental
results. We provide detailed descriptions of our preprocessed datasets, source
code structure, configuration file settings, experimental environment, and
reproduced experimental results.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 10:25:49 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Yuyuan",
""
],
[
"Fang",
"Junjie",
""
],
[
"Chen",
"Chaochao",
""
],
[
"Zheng",
"Xiaolin",
""
],
[
"Zhang",
"Yizhao",
""
],
[
"Han",
"Zhongxuan",
""
]
] | TITLE: Reproducibility Companion Paper: Making Users Indistinguishable:
Attribute-wise Unlearning in Recommender Systems
ABSTRACT: In this paper, we reproduce the experimental results presented in our
previous work titled "Making Users Indistinguishable: Attribute-wise Unlearning
in Recommender Systems," which was published in the proceedings of the 31st ACM
International Conference on Multimedia. This paper aims to validate the
effectiveness of our proposed method and help others reproduce our experimental
results. We provide detailed descriptions of our preprocessed datasets, source
code structure, configuration file settings, experimental environment, and
reproduced experimental results.
|
2503.23033 | Sangam Lee | Sangam Lee, Ryang Heo, SeongKu Kang, Dongha Lee | Imagine All The Relevance: Scenario-Profiled Indexing with Knowledge
Expansion for Dense Retrieval | 9 pages | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Existing dense retrieval models struggle with reasoning-intensive retrieval
tasks as they fail to capture implicit relevance that requires reasoning beyond
surface-level semantic information. To address these challenges, we propose
Scenario-Profiled Indexing with Knowledge Expansion (SPIKE), a dense retrieval
framework that explicitly indexes implicit relevance by decomposing documents
into scenario-based retrieval units. SPIKE organizes documents into scenarios,
which encapsulate the reasoning process necessary to uncover implicit
relationships between hypothetical information needs and document content.
SPIKE constructs a scenario-augmented dataset using a powerful teacher large
language model (LLM), then distills these reasoning capabilities into a
smaller, efficient scenario generator. During inference, SPIKE incorporates
scenario-level relevance alongside document-level relevance, enabling
reasoning-aware retrieval. Extensive experiments demonstrate that SPIKE
consistently enhances retrieval performance across various query types and
dense retrievers. It also enhances the retrieval experience for users through
scenarios and offers valuable contextual information for LLMs in
retrieval-augmented generation (RAG).
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 10:36:54 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lee",
"Sangam",
""
],
[
"Heo",
"Ryang",
""
],
[
"Kang",
"SeongKu",
""
],
[
"Lee",
"Dongha",
""
]
] | TITLE: Imagine All The Relevance: Scenario-Profiled Indexing with Knowledge
Expansion for Dense Retrieval
ABSTRACT: Existing dense retrieval models struggle with reasoning-intensive retrieval
tasks as they fail to capture implicit relevance that requires reasoning beyond
surface-level semantic information. To address these challenges, we propose
Scenario-Profiled Indexing with Knowledge Expansion (SPIKE), a dense retrieval
framework that explicitly indexes implicit relevance by decomposing documents
into scenario-based retrieval units. SPIKE organizes documents into scenarios,
which encapsulate the reasoning process necessary to uncover implicit
relationships between hypothetical information needs and document content.
SPIKE constructs a scenario-augmented dataset using a powerful teacher large
language model (LLM), then distills these reasoning capabilities into a
smaller, efficient scenario generator. During inference, SPIKE incorporates
scenario-level relevance alongside document-level relevance, enabling
reasoning-aware retrieval. Extensive experiments demonstrate that SPIKE
consistently enhances retrieval performance across various query types and
dense retrievers. It also enhances the retrieval experience for users through
scenarios and offers valuable contextual information for LLMs in
retrieval-augmented generation (RAG).
|
2503.23035 | Yuxiang Bao | Yuxiang Bao, Huijie Liu, Xun Gao, Huan Fu, Guoliang Kang | FreeInv: Free Lunch for Improving DDIM Inversion | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The naive DDIM inversion process usually suffers from a trajectory deviation
issue, i.e., the latent trajectory during reconstruction deviates from the one
during inversion. To alleviate this issue, previous methods either learn to
mitigate the deviation or design a cumbersome compensation strategy to reduce the
mismatch error, exhibiting substantial time and computation cost. In this work,
we present a nearly free-lunch method (named FreeInv) to address the issue more
effectively and efficiently. In FreeInv, we randomly transform the latent
representation and keep the transformation the same between the corresponding
inversion and reconstruction time-step. It is motivated by the statistical
observation that an ensemble of DDIM inversion processes for multiple
trajectories yields a smaller trajectory mismatch error in expectation.
Moreover, through theoretical analysis and empirical study, we show that
FreeInv performs an efficient ensemble of multiple trajectories. FreeInv can be
freely integrated into existing inversion-based image and video editing
techniques. Especially for inverting video sequences, it brings more
significant fidelity and efficiency improvements. Comprehensive quantitative
and qualitative evaluation on the PIE benchmark and DAVIS dataset shows that
FreeInv remarkably outperforms conventional DDIM inversion, and is competitive
among previous state-of-the-art inversion methods, with superior computation
efficiency.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 10:47:43 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Bao",
"Yuxiang",
""
],
[
"Liu",
"Huijie",
""
],
[
"Gao",
"Xun",
""
],
[
"Fu",
"Huan",
""
],
[
"Kang",
"Guoliang",
""
]
] | TITLE: FreeInv: Free Lunch for Improving DDIM Inversion
ABSTRACT: The naive DDIM inversion process usually suffers from a trajectory deviation
issue, i.e., the latent trajectory during reconstruction deviates from the one
during inversion. To alleviate this issue, previous methods either learn to
mitigate the deviation or design a cumbersome compensation strategy to reduce the
mismatch error, exhibiting substantial time and computation cost. In this work,
we present a nearly free-lunch method (named FreeInv) to address the issue more
effectively and efficiently. In FreeInv, we randomly transform the latent
representation and keep the transformation the same between the corresponding
inversion and reconstruction time-step. It is motivated by the statistical
observation that an ensemble of DDIM inversion processes for multiple
trajectories yields a smaller trajectory mismatch error in expectation.
Moreover, through theoretical analysis and empirical study, we show that
FreeInv performs an efficient ensemble of multiple trajectories. FreeInv can be
freely integrated into existing inversion-based image and video editing
techniques. Especially for inverting video sequences, it brings more
significant fidelity and efficiency improvements. Comprehensive quantitative
and qualitative evaluation on the PIE benchmark and DAVIS dataset shows that
FreeInv remarkably outperforms conventional DDIM inversion, and is competitive
among previous state-of-the-art inversion methods, with superior computation
efficiency.
|
2503.23038 | Jianpeng Liu | Jianpeng Liu, Qizhi Pan | Function Fitting Based on Kolmogorov-Arnold Theorem and Kernel Functions | 19 pages, 12 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a unified theoretical framework based on the
Kolmogorov-Arnold representation theorem and kernel methods. By analyzing the
mathematical relationship among kernels, B-spline basis functions in
Kolmogorov-Arnold Networks (KANs) and the inner product operation in
self-attention mechanisms, we establish a kernel-based feature fitting
framework that unifies the two models as linear combinations of kernel
functions. Under this framework, we propose a low-rank Pseudo-Multi-Head
Self-Attention module (Pseudo-MHSA), which reduces the parameter count of
traditional MHSA by nearly 50\%. Furthermore, we design a Gaussian kernel
multi-head self-attention variant (Gaussian-MHSA) to validate the effectiveness
of nonlinear kernel functions in feature extraction. Experiments on the
CIFAR-10 dataset demonstrate that Pseudo-MHSA model achieves performance
comparable to the ViT model of the same dimensionality under the MAE framework
and visualization analysis reveals their similarity of multi-head distribution
patterns. Our code is publicly available.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 11:03:28 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Jianpeng",
""
],
[
"Pan",
"Qizhi",
""
]
] | TITLE: Function Fitting Based on Kolmogorov-Arnold Theorem and Kernel Functions
ABSTRACT: This paper proposes a unified theoretical framework based on the
Kolmogorov-Arnold representation theorem and kernel methods. By analyzing the
mathematical relationship among kernels, B-spline basis functions in
Kolmogorov-Arnold Networks (KANs) and the inner product operation in
self-attention mechanisms, we establish a kernel-based feature fitting
framework that unifies the two models as linear combinations of kernel
functions. Under this framework, we propose a low-rank Pseudo-Multi-Head
Self-Attention module (Pseudo-MHSA), which reduces the parameter count of
traditional MHSA by nearly 50\%. Furthermore, we design a Gaussian kernel
multi-head self-attention variant (Gaussian-MHSA) to validate the effectiveness
of nonlinear kernel functions in feature extraction. Experiments on the
CIFAR-10 dataset demonstrate that Pseudo-MHSA model achieves performance
comparable to the ViT model of the same dimensionality under the MAE framework
and visualization analysis reveals their similarity of multi-head distribution
patterns. Our code is publicly available.
|
2503.23040 | Zehui He | Yixiu Liu, Zehui He, Yuyuan Li, Zhongxuan Han, Chaochao Chen, Xiaolin
Zheng | Reproducibility Companion Paper: In-processing User Constrained Dominant
Sets for User-Oriented Fairness in Recommender Systems | 4 pages | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we reproduce experimental results presented in our earlier
work titled "In-processing User Constrained Dominant Sets for User-Oriented
Fairness in Recommender Systems" that was presented in the proceedings of the
31st ACM International Conference on Multimedia. This work aims to verify the
effectiveness of our previously proposed method and provide guidance for
reproducibility. We present detailed descriptions of our preprocessed datasets,
the structure of our source code, configuration file settings, experimental
environment, and the reproduced experimental results.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 11:07:33 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Yixiu",
""
],
[
"He",
"Zehui",
""
],
[
"Li",
"Yuyuan",
""
],
[
"Han",
"Zhongxuan",
""
],
[
"Chen",
"Chaochao",
""
],
[
"Zheng",
"Xiaolin",
""
]
] | TITLE: Reproducibility Companion Paper: In-processing User Constrained Dominant
Sets for User-Oriented Fairness in Recommender Systems
ABSTRACT: In this paper, we reproduce experimental results presented in our earlier
work titled "In-processing User Constrained Dominant Sets for User-Oriented
Fairness in Recommender Systems" that was presented in the proceedings of the
31st ACM International Conference on Multimedia. This work aims to verify the
effectiveness of our previously proposed method and provide guidance for
reproducibility. We present detailed descriptions of our preprocessed datasets,
the structure of our source code, configuration file settings, experimental
environment, and the reproduced experimental results.
|
2503.23042 | Catarina Barata | M Rita Verdelho and Alexandre Bernardino and Catarina Barata | MIL vs. Aggregation: Evaluating Patient-Level Survival Prediction
Strategies Using Graph-Based Learning | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Oncologists often rely on a multitude of data, including whole-slide images
(WSIs), to guide therapeutic decisions, aiming for the best patient outcome.
However, predicting the prognosis of cancer patients can be a challenging task
due to tumor heterogeneity and intra-patient variability, and the complexity of
analyzing WSIs. These images are extremely large, containing billions of
pixels, making direct processing computationally expensive and requiring
specialized methods to extract relevant information. Additionally, multiple
WSIs from the same patient may capture different tumor regions, some being more
informative than others. This raises a fundamental question: Should we use all
WSIs to characterize the patient, or should we identify the most representative
slide for prognosis? Our work seeks to answer this question by performing a
comparison of various strategies for predicting survival at the WSI and patient
level. The former treats each WSI as an independent sample, mimicking the
strategy adopted in other works, while the latter comprises methods to either
aggregate the predictions of the several WSIs or automatically identify the
most relevant slide using multiple-instance learning (MIL). Additionally, we
evaluate different Graph Neural Network architectures under these strategies.
We conduct our experiments using the MMIST-ccRCC dataset, which comprises
patients with clear cell renal cell carcinoma (ccRCC). Our results show that
MIL-based selection improves accuracy, suggesting that choosing the most
representative slide benefits survival prediction.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 11:14:02 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Verdelho",
"M Rita",
""
],
[
"Bernardino",
"Alexandre",
""
],
[
"Barata",
"Catarina",
""
]
] | TITLE: MIL vs. Aggregation: Evaluating Patient-Level Survival Prediction
Strategies Using Graph-Based Learning
ABSTRACT: Oncologists often rely on a multitude of data, including whole-slide images
(WSIs), to guide therapeutic decisions, aiming for the best patient outcome.
However, predicting the prognosis of cancer patients can be a challenging task
due to tumor heterogeneity and intra-patient variability, and the complexity of
analyzing WSIs. These images are extremely large, containing billions of
pixels, making direct processing computationally expensive and requiring
specialized methods to extract relevant information. Additionally, multiple
WSIs from the same patient may capture different tumor regions, some being more
informative than others. This raises a fundamental question: Should we use all
WSIs to characterize the patient, or should we identify the most representative
slide for prognosis? Our work seeks to answer this question by performing a
comparison of various strategies for predicting survival at the WSI and patient
level. The former treats each WSI as an independent sample, mimicking the
strategy adopted in other works, while the latter comprises methods to either
aggregate the predictions of the several WSIs or automatically identify the
most relevant slide using multiple-instance learning (MIL). Additionally, we
evaluate different Graph Neural Network architectures under these strategies.
We conduct our experiments using the MMIST-ccRCC dataset, which comprises
patients with clear cell renal cell carcinoma (ccRCC). Our results show that
MIL-based selection improves accuracy, suggesting that choosing the most
representative slide benefits survival prediction.
|
2503.23046 | Haibo Hu | Haibo Hu, Jiacheng Zuo, Yang Lou, Yufei Cui, Jianping Wang, Nan Guan,
Jin Wang, Yung-Hui Li, Chun Jason Xue | VLM-C4L: Continual Core Dataset Learning with Corner Case Optimization
via Vision-Language Models for Autonomous Driving | null | null | null | null | cs.RO cs.LG | http://creativecommons.org/licenses/by/4.0/ | With the widespread adoption and deployment of autonomous driving, handling
complex environments has become an unavoidable challenge. Due to the scarcity
and diversity of extreme scenario datasets, current autonomous driving models
struggle to effectively manage corner cases. This limitation poses a
significant safety risk: according to the National Highway Traffic Safety
Administration (NHTSA), autonomous vehicle systems have been involved in
hundreds of reported crashes annually in the United States, many of which
occurred in corner cases such as sun glare and fog, and some of which were
fatal. Furthermore,
in order to consistently maintain a robust and reliable autonomous driving
system, it is essential for models not only to perform well on routine
scenarios but also to adapt to newly emerging scenarios, especially those
corner cases that deviate from the norm. This requires a learning mechanism
that incrementally integrates new knowledge without degrading previously
acquired capabilities. However, to the best of our knowledge, no existing
continual learning methods have been proposed to ensure consistent and scalable
corner case learning in autonomous driving. To address these limitations, we
propose VLM-C4L, a continual learning framework that introduces Vision-Language
Models (VLMs) to dynamically optimize and enhance corner case datasets, and
VLM-C4L combines VLM-guided high-quality data extraction with a core data
replay strategy, enabling the model to incrementally learn from diverse corner
cases while preserving performance on previously routine scenarios, thus
ensuring long-term stability and adaptability in real-world autonomous driving.
We evaluate VLM-C4L on large-scale real-world autonomous driving datasets,
including Waymo and the corner case dataset CODA.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 11:40:34 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Hu",
"Haibo",
""
],
[
"Zuo",
"Jiacheng",
""
],
[
"Lou",
"Yang",
""
],
[
"Cui",
"Yufei",
""
],
[
"Wang",
"Jianping",
""
],
[
"Guan",
"Nan",
""
],
[
"Wang",
"Jin",
""
],
[
"Li",
"Yung-Hui",
""
],
[
"Xue",
"Chun Jason",
""
]
] | TITLE: VLM-C4L: Continual Core Dataset Learning with Corner Case Optimization
via Vision-Language Models for Autonomous Driving
ABSTRACT: With the widespread adoption and deployment of autonomous driving, handling
complex environments has become an unavoidable challenge. Due to the scarcity
and diversity of extreme scenario datasets, current autonomous driving models
struggle to effectively manage corner cases. This limitation poses a
significant safety risk: according to the National Highway Traffic Safety
Administration (NHTSA), autonomous vehicle systems have been involved in
hundreds of reported crashes annually in the United States, many of which
occurred in corner cases such as sun glare and fog, and some of which were
fatal. Furthermore,
in order to consistently maintain a robust and reliable autonomous driving
system, it is essential for models not only to perform well on routine
scenarios but also to adapt to newly emerging scenarios, especially those
corner cases that deviate from the norm. This requires a learning mechanism
that incrementally integrates new knowledge without degrading previously
acquired capabilities. However, to the best of our knowledge, no existing
continual learning methods have been proposed to ensure consistent and scalable
corner case learning in autonomous driving. To address these limitations, we
propose VLM-C4L, a continual learning framework that introduces Vision-Language
Models (VLMs) to dynamically optimize and enhance corner case datasets.
VLM-C4L combines VLM-guided high-quality data extraction with a core data
replay strategy, enabling the model to incrementally learn from diverse corner
cases while preserving performance on previously routine scenarios, thus
ensuring long-term stability and adaptability in real-world autonomous driving.
We evaluate VLM-C4L on large-scale real-world autonomous driving datasets,
including Waymo and the corner case dataset CODA.
|
2503.23051 | Zhihan Jiang | Yichen Li, Yulun Wu, Jinyang Liu, Zhihan Jiang, Zhuangbin Chen,
Guangba Yu and Michael R. Lyu | COCA: Generative Root Cause Analysis for Distributed Systems with Code
Knowledge | Accepted by the 47th IEEE/ACM International Conference on Software
Engineering (ICSE'25) | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Runtime failures are commonplace in modern distributed systems. When such
issues arise, users often turn to platforms such as Github or JIRA to report
them and request assistance. Automatically identifying the root cause of these
failures is critical for ensuring high reliability and availability. However,
prevailing automatic root cause analysis (RCA) approaches rely significantly on
comprehensive runtime monitoring data, which is often not fully available in
issue platforms. Recent methods leverage large language models (LLMs) to
analyze issue reports, but their effectiveness is limited by incomplete or
ambiguous user-provided information. To obtain more accurate and comprehensive
RCA results, the core idea of this work is to extract additional diagnostic
clues from code to supplement data-limited issue reports. Specifically, we
propose COCA, a code knowledge enhanced root cause analysis approach for issue
reports. Based on the data within issue reports, COCA intelligently extracts
relevant code snippets and reconstructs execution paths, providing a
comprehensive execution context for further RCA. Subsequently, COCA constructs
a prompt combining historical issue reports along with profiled code knowledge,
enabling the LLMs to generate detailed root cause summaries and localize
responsible components. Our evaluation on datasets from five real-world
distributed systems demonstrates that COCA significantly outperforms existing
methods, achieving a 28.3% improvement in root cause localization and a 22.0%
improvement in root cause summarization. Furthermore, COCA's performance
consistency across various LLMs underscores its robust generalizability.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 11:56:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Yichen",
""
],
[
"Wu",
"Yulun",
""
],
[
"Liu",
"Jinyang",
""
],
[
"Jiang",
"Zhihan",
""
],
[
"Chen",
"Zhuangbin",
""
],
[
"Yu",
"Guangba",
""
],
[
"Lyu",
"Michael R.",
""
]
] | TITLE: COCA: Generative Root Cause Analysis for Distributed Systems with Code
Knowledge
ABSTRACT: Runtime failures are commonplace in modern distributed systems. When such
issues arise, users often turn to platforms such as Github or JIRA to report
them and request assistance. Automatically identifying the root cause of these
failures is critical for ensuring high reliability and availability. However,
prevailing automatic root cause analysis (RCA) approaches rely significantly on
comprehensive runtime monitoring data, which is often not fully available in
issue platforms. Recent methods leverage large language models (LLMs) to
analyze issue reports, but their effectiveness is limited by incomplete or
ambiguous user-provided information. To obtain more accurate and comprehensive
RCA results, the core idea of this work is to extract additional diagnostic
clues from code to supplement data-limited issue reports. Specifically, we
propose COCA, a code knowledge enhanced root cause analysis approach for issue
reports. Based on the data within issue reports, COCA intelligently extracts
relevant code snippets and reconstructs execution paths, providing a
comprehensive execution context for further RCA. Subsequently, COCA constructs
a prompt combining historical issue reports along with profiled code knowledge,
enabling the LLMs to generate detailed root cause summaries and localize
responsible components. Our evaluation on datasets from five real-world
distributed systems demonstrates that COCA significantly outperforms existing
methods, achieving a 28.3% improvement in root cause localization and a 22.0%
improvement in root cause summarization. Furthermore, COCA's performance
consistency across various LLMs underscores its robust generalizability.
|
2503.23060 | Vincent Jacob | Vincent Jacob, Yanlei Diao | Unsupervised Anomaly Detection in Multivariate Time Series across
Heterogeneous Domains | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The widespread adoption of digital services, along with the scale and
complexity at which they operate, has made incidents in IT operations
increasingly more likely, diverse, and impactful. This has led to the rapid
development of a central aspect of "Artificial Intelligence for IT Operations"
(AIOps), focusing on detecting anomalies in vast amounts of multivariate time
series data generated by service entities. In this paper, we begin by
introducing a unifying framework for benchmarking unsupervised anomaly
detection (AD) methods, and highlight the problem of shifts in normal behaviors
that can occur in practical AIOps scenarios. To tackle anomaly detection under
domain shift, we then cast the problem in the framework of domain
generalization and propose a novel approach, Domain-Invariant VAE for Anomaly
Detection (DIVAD), to learn domain-invariant representations for unsupervised
anomaly detection. Our evaluation results using the Exathlon benchmark show
that the two main DIVAD variants significantly outperform the best unsupervised
AD method in maximum performance, with 20% and 15% improvements in maximum peak
F1-scores, respectively. Evaluation using the Application Server Dataset
further demonstrates the broader applicability of our domain generalization
methods.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 12:38:28 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Jacob",
"Vincent",
""
],
[
"Diao",
"Yanlei",
""
]
] | TITLE: Unsupervised Anomaly Detection in Multivariate Time Series across
Heterogeneous Domains
ABSTRACT: The widespread adoption of digital services, along with the scale and
complexity at which they operate, has made incidents in IT operations
increasingly more likely, diverse, and impactful. This has led to the rapid
development of a central aspect of "Artificial Intelligence for IT Operations"
(AIOps), focusing on detecting anomalies in vast amounts of multivariate time
series data generated by service entities. In this paper, we begin by
introducing a unifying framework for benchmarking unsupervised anomaly
detection (AD) methods, and highlight the problem of shifts in normal behaviors
that can occur in practical AIOps scenarios. To tackle anomaly detection under
domain shift, we then cast the problem in the framework of domain
generalization and propose a novel approach, Domain-Invariant VAE for Anomaly
Detection (DIVAD), to learn domain-invariant representations for unsupervised
anomaly detection. Our evaluation results using the Exathlon benchmark show
that the two main DIVAD variants significantly outperform the best unsupervised
AD method in maximum performance, with 20% and 15% improvements in maximum peak
F1-scores, respectively. Evaluation using the Application Server Dataset
further demonstrates the broader applicability of our domain generalization
methods.
|
2503.23062 | Sagi Eppel | Sagi Eppel, Mor Bismut, Alona Faktor | Shape and Texture Recognition in Large Vision-Language Models | null | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | Shape and texture recognition is fundamental to visual perception. The
ability to identify shapes regardless of orientation, texture, or context, and
to recognize textures independently of their associated objects, is essential
for general visual understanding of the world. We introduce the Large Shape &
Textures dataset (LAS&T), a giant collection of diverse shapes and textures
automatically extracted from real-world images. This dataset is used to
evaluate how effectively leading Large Vision-Language Models (LVLMs)
understand shapes, textures, and materials in both 2D and 3D scenes. For shape
recognition, we test models' ability to match identical shapes that differ in
orientation, texture, color, or environment. Our results show that LVLMs' shape
identification capabilities remain significantly below human performance.
Single alterations (orientation, texture) cause minor decreases in matching
accuracy, while multiple changes precipitate dramatic drops. LVLMs appear to
rely predominantly on high-level and semantic features and struggle with
abstract shapes lacking clear class associations. For texture and material
recognition, we evaluate models' ability to identify identical textures and
materials across different objects and environments. Interestingly, leading
LVLMs approach human-level performance in recognizing materials in 3D scenes,
yet substantially underperform humans when identifying simpler 2D textures. The
LAS&T dataset and benchmark, the largest and most diverse resource for shape
and texture evaluation, is freely available with generation and testing
scripts.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 12:43:29 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Eppel",
"Sagi",
""
],
[
"Bismut",
"Mor",
""
],
[
"Faktor",
"Alona",
""
]
] | TITLE: Shape and Texture Recognition in Large Vision-Language Models
ABSTRACT: Shape and texture recognition is fundamental to visual perception. The
ability to identify shapes regardless of orientation, texture, or context, and
to recognize textures independently of their associated objects, is essential
for general visual understanding of the world. We introduce the Large Shape &
Textures dataset (LAS&T), a giant collection of diverse shapes and textures
automatically extracted from real-world images. This dataset is used to
evaluate how effectively leading Large Vision-Language Models (LVLMs)
understand shapes, textures, and materials in both 2D and 3D scenes. For shape
recognition, we test models' ability to match identical shapes that differ in
orientation, texture, color, or environment. Our results show that LVLMs' shape
identification capabilities remain significantly below human performance.
Single alterations (orientation, texture) cause minor decreases in matching
accuracy, while multiple changes precipitate dramatic drops. LVLMs appear to
rely predominantly on high-level and semantic features and struggle with
abstract shapes lacking clear class associations. For texture and material
recognition, we evaluate models' ability to identify identical textures and
materials across different objects and environments. Interestingly, leading
LVLMs approach human-level performance in recognizing materials in 3D scenes,
yet substantially underperform humans when identifying simpler 2D textures. The
LAS&T dataset and benchmark, the largest and most diverse resource for shape
and texture evaluation, is freely available with generation and testing
scripts.
|
2503.23072 | Yuyang Liang | Yuyang Liang, Yankai Chen, Yixiang Fang, Laks V. S. Lakshmanan,
Chenhao Ma | TRACE: Intra-visit Clinical Event Nowcasting via Effective Patient
Trajectory Encoding | Accepted by WWW'25 short paper track | null | 10.1145/3701716.3715545 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electronic Health Records (EHR) have become a valuable resource for a wide
range of predictive tasks in healthcare. However, existing approaches have
largely focused on inter-visit event predictions, overlooking the importance of
intra-visit nowcasting, which provides prompt clinical insights during an
ongoing patient visit. To address this gap, we introduce the task of laboratory
measurement prediction within a hospital visit. We focus on laboratory data,
which has remained underexplored in previous work. We propose TRACE, a
Transformer-based model designed for clinical event nowcasting by encoding
patient trajectories. TRACE effectively handles long sequences and captures
temporal dependencies through a novel timestamp embedding that integrates decay
properties and periodic patterns of data. Additionally, we introduce a smoothed
mask for denoising, improving the robustness of the model. Experiments on two
large-scale electronic health record datasets demonstrate that the proposed
model significantly outperforms previous methods, highlighting its potential
for improving patient care through more accurate laboratory measurement
nowcasting. The code is available at https://github.com/Amehi/TRACE.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 13:08:59 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liang",
"Yuyang",
""
],
[
"Chen",
"Yankai",
""
],
[
"Fang",
"Yixiang",
""
],
[
"Lakshmanan",
"Laks V. S.",
""
],
[
"Ma",
"Chenhao",
""
]
] | TITLE: TRACE: Intra-visit Clinical Event Nowcasting via Effective Patient
Trajectory Encoding
ABSTRACT: Electronic Health Records (EHR) have become a valuable resource for a wide
range of predictive tasks in healthcare. However, existing approaches have
largely focused on inter-visit event predictions, overlooking the importance of
intra-visit nowcasting, which provides prompt clinical insights during an
ongoing patient visit. To address this gap, we introduce the task of laboratory
measurement prediction within a hospital visit. We focus on laboratory data,
which has remained underexplored in previous work. We propose TRACE, a
Transformer-based model designed for clinical event nowcasting by encoding
patient trajectories. TRACE effectively handles long sequences and captures
temporal dependencies through a novel timestamp embedding that integrates decay
properties and periodic patterns of data. Additionally, we introduce a smoothed
mask for denoising, improving the robustness of the model. Experiments on two
large-scale electronic health record datasets demonstrate that the proposed
model significantly outperforms previous methods, highlighting its potential
for improving patient care through more accurate laboratory measurement
nowcasting. The code is available at https://github.com/Amehi/TRACE.
|
2503.23078 | Zhengyi Zhao | Zhengyi Zhao, Shubo Zhang, Yiming Du, Bin Liang, Baojun Wang,
Zhongyang Li, Binyang Li, Kam-Fai Wong | EventWeave: A Dynamic Framework for Capturing Core and Supporting Events
in Dialogue Systems | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Existing large language models (LLMs) have shown remarkable progress in
dialogue systems. However, many approaches still overlook the fundamental role
of events throughout multi-turn interactions, leading to \textbf{incomplete
context tracking}. Without tracking these events, dialogue systems often lose
coherence and miss subtle shifts in user intent, causing disjointed responses.
To bridge this gap, we present \textbf{EventWeave}, an event-centric framework
that identifies and updates both core and supporting events as the conversation
unfolds. Specifically, we organize these events into a dynamic event graph,
which represents the interplay between \textbf{core events} that shape the
primary idea and \textbf{supporting events} that provide critical context
during the whole dialogue. By leveraging this dynamic graph, EventWeave helps
models focus on the most relevant events when generating responses, thus
avoiding repeated visits of the entire dialogue history. Experimental results
on two benchmark datasets show that EventWeave improves response quality and
event relevance without fine-tuning.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 13:33:42 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhao",
"Zhengyi",
""
],
[
"Zhang",
"Shubo",
""
],
[
"Du",
"Yiming",
""
],
[
"Liang",
"Bin",
""
],
[
"Wang",
"Baojun",
""
],
[
"Li",
"Zhongyang",
""
],
[
"Li",
"Binyang",
""
],
[
"Wong",
"Kam-Fai",
""
]
] | TITLE: EventWeave: A Dynamic Framework for Capturing Core and Supporting Events
in Dialogue Systems
ABSTRACT: Existing large language models (LLMs) have shown remarkable progress in
dialogue systems. However, many approaches still overlook the fundamental role
of events throughout multi-turn interactions, leading to \textbf{incomplete
context tracking}. Without tracking these events, dialogue systems often lose
coherence and miss subtle shifts in user intent, causing disjointed responses.
To bridge this gap, we present \textbf{EventWeave}, an event-centric framework
that identifies and updates both core and supporting events as the conversation
unfolds. Specifically, we organize these events into a dynamic event graph,
which represents the interplay between \textbf{core events} that shape the
primary idea and \textbf{supporting events} that provide critical context
during the whole dialogue. By leveraging this dynamic graph, EventWeave helps
models focus on the most relevant events when generating responses, thus
avoiding repeated visits of the entire dialogue history. Experimental results
on two benchmark datasets show that EventWeave improves response quality and
event relevance without fine-tuning.
|