id (stringlengths 9-16) | submitter (stringlengths 3-64, ⌀) | authors (stringlengths 5-6.63k) | title (stringlengths 7-245) | comments (stringlengths 1-482, ⌀) | journal-ref (stringlengths 4-382, ⌀) | doi (stringlengths 9-151, ⌀) | report-no (stringclasses, 984 values) | categories (stringlengths 5-108) | license (stringclasses, 9 values) | abstract (stringlengths 83-3.41k) | versions (listlengths 1-20) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (listlengths 1-427) | prompt (stringlengths 166-3.49k) | label (stringclasses, 2 values) | prob (float64, 0.5-0.98)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.02695 | Wanting Wang | Wanting Wang | Zero-Shot Complex Question-Answering on Long Scientific Documents | AAAI 2025 Workshop on Document Understanding and Intelligence | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | With the rapid development in Transformer-based language models, the reading
comprehension tasks on short documents and simple questions have been largely
addressed. Long documents, specifically the scientific documents that are
densely packed with knowledge discovered and developed by humans, remain
relatively unexplored. These documents often come with a set of complex and
more realistic questions, further adding to the challenge. We present a zero-shot
pipeline framework that enables social science researchers to perform
question-answering tasks that are complex yet of predetermined question formats
on full-length research papers without requiring machine learning expertise.
Our approach integrates pre-trained language models to handle challenging
scenarios including multi-span extraction, multi-hop reasoning, and long-answer
generation. Evaluating on MLPsych, a novel dataset of social psychology papers
with annotated complex questions, we demonstrate that our framework achieves
strong performance through a combination of extractive and generative models.
This work advances document understanding capabilities for social sciences
while providing practical tools for researchers.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 15:12:18 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wang",
"Wanting",
""
]
]
| TITLE: Zero-Shot Complex Question-Answering on Long Scientific Documents
ABSTRACT: With the rapid development in Transformer-based language models, the reading
comprehension tasks on short documents and simple questions have been largely
addressed. Long documents, specifically the scientific documents that are
densely packed with knowledge discovered and developed by humans, remain
relatively unexplored. These documents often come with a set of complex and
more realistic questions, further adding to the challenge. We present a zero-shot
pipeline framework that enables social science researchers to perform
question-answering tasks that are complex yet of predetermined question formats
on full-length research papers without requiring machine learning expertise.
Our approach integrates pre-trained language models to handle challenging
scenarios including multi-span extraction, multi-hop reasoning, and long-answer
generation. Evaluating on MLPsych, a novel dataset of social psychology papers
with annotated complex questions, we demonstrate that our framework achieves
strong performance through a combination of extractive and generative models.
This work advances document understanding capabilities for social sciences
while providing practical tools for researchers.
| new_dataset | 0.958809 |
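A minimal sketch of reading records like the one above with the Hugging Face `datasets` library; the repository id and split name below are placeholders for illustration, not this dataset's actual coordinates.

```python
# Sketch: load a dump with the schema above and inspect one record.
# "user/arxiv-new-dataset-detection" and the split name are hypothetical.
from datasets import load_dataset

ds = load_dataset("user/arxiv-new-dataset-detection", split="train")

row = ds[0]
print(row["id"], "|", row["title"])
print(row["label"], row["prob"])   # e.g. "new_dataset" 0.958809
print(row["prompt"][:120])         # "TITLE: ...\nABSTRACT: ..."
```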
2503.02701 | Shuaike Li | Shuaike Li, Kai Zhang, Qi Liu, Enhong Chen | MindBridge: Scalable and Cross-Model Knowledge Editing via
Memory-Augmented Modality | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge editing is a technique for efficiently and accurately updating the
knowledge of large language models (LLMs) to alleviate obsolescence and correct
errors. However, most existing methods overfit to specific models, causing
edited knowledge to be discarded during each LLM update and requiring frequent
re-editing, which is particularly burdensome in today's rapidly evolving
open-source community. To address this issue, we propose the problem of
cross-model knowledge editing and introduce MindBridge, a scalable solution
inspired by the low coupling between modality processing and LLMs in
multi-modal models. MindBridge introduces the novel concept of memory modality,
which encodes edited knowledge as an independent modality. It first performs
LLM-agnostic pre-training of the memory modality and then integrates it with
various LLMs. Extensive experiments on multiple LLMs and popular knowledge
editing datasets demonstrate that MindBridge achieves superior performance even
in editing tens of thousands of knowledge entries and can flexibly adapt to
different LLMs. Our code is available at
https://github.com/CrashBugger/MindBridge.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 15:17:57 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Li",
"Shuaike",
""
],
[
"Zhang",
"Kai",
""
],
[
"Liu",
"Qi",
""
],
[
"Chen",
"Enhong",
""
]
]
| TITLE: MindBridge: Scalable and Cross-Model Knowledge Editing via
Memory-Augmented Modality
ABSTRACT: Knowledge editing is a technique for efficiently and accurately updating the
knowledge of large language models (LLMs) to alleviate obsolescence and correct
errors. However, most existing methods overfit to specific models, causing
edited knowledge to be discarded during each LLM update and requiring frequent
re-editing, which is particularly burdensome in today's rapidly evolving
open-source community. To address this issue, we propose the problem of
cross-model knowledge editing and introduce MindBridge, a scalable solution
inspired by the low coupling between modality processing and LLMs in
multi-modal models. MindBridge introduces the novel concept of memory modality,
which encodes edited knowledge as an independent modality. It first performs
LLM-agnostic pre-training of the memory modality and then integrates it with
various LLMs. Extensive experiments on multiple LLMs and popular knowledge
editing datasets demonstrate that MindBridge achieves superior performance even
in editing tens of thousands of knowledge entries and can flexibly adapt to
different LLMs. Our code is available at
https://github.com/CrashBugger/MindBridge.
| no_new_dataset | 0.951774 |
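A hedged sketch of the memory-modality idea from the MindBridge record above: edited knowledge is encoded by an LLM-agnostic module and projected into a host LLM's embedding space as soft tokens. All dimensions, layer choices, and names here are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a "memory modality": encode edited knowledge as soft tokens that
# any host LLM can consume after a per-LLM projection. Sizes are assumptions.
import torch
import torch.nn as nn

class MemoryModality(nn.Module):
    def __init__(self, mem_dim=512, llm_dim=4096):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(mem_dim, mem_dim), nn.GELU())
        self.project = nn.Linear(mem_dim, llm_dim)  # re-fit per host LLM

    def forward(self, knowledge_tokens):
        # knowledge_tokens: (batch, n_tokens, mem_dim) encoded edited facts
        return self.project(self.encoder(knowledge_tokens))

mem = MemoryModality()
soft_tokens = mem(torch.randn(1, 8, 512))  # (1, 8, 4096)
# Prepend to the LLM input: torch.cat([soft_tokens, input_embeds], dim=1)
```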
2503.02714 | Melanie Schaller Dr. | Melanie Schaller and Sergej Hloch and Akash Nag and Dagmar Klichova
and Nick Janssen and Frank Pude and Michal Zelenak and Bodo Rosenhahn | S4D-Bio Audio Monitoring of Bone Cement Disintegration in Pulsating
Fluid Jet Surgery under Laboratory Conditions | submitted to Computers in Biology and Medicine Journal | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | This study investigates a pulsating fluid jet as a novel precise, minimally
invasive and cold technique for bone cement removal. We utilize the pulsating
fluid jet device to remove bone cement from samples designed to mimic clinical
conditions. The effectiveness of long nozzles was tested to enable minimally
invasive procedures. Audio signal monitoring, complemented by the State Space
Model (SSM) S4D-Bio, was employed to optimize the fluid jet parameters
dynamically, addressing challenges like visibility obstruction from splashing.
Within our experiments, we generate a comprehensive dataset correlating various
process parameters and their equivalent audio signals to material erosion. The
use of SSMs yields precise control over the predictive erosion process,
achieving 98.93% accuracy. The study demonstrates, on the one hand, that the
pulsating fluid jet device, coupled with advanced audio monitoring techniques,
is a highly effective tool for precise bone cement removal. On the other hand,
this study presents the first application of SSMs in biomedical surgery
technology, marking a significant advancement in the application. This research
significantly advances biomedical engineering by integrating machine learning
combined with pulsating fluid jet as surgical technology, offering a novel,
minimally invasive, cold and adaptive approach for bone cement removal in
orthopedic applications.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 15:30:36 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Schaller",
"Melanie",
""
],
[
"Hloch",
"Sergej",
""
],
[
"Nag",
"Akash",
""
],
[
"Klichova",
"Dagmar",
""
],
[
"Janssen",
"Nick",
""
],
[
"Pude",
"Frank",
""
],
[
"Zelenak",
"Michal",
""
],
[
"Rosenhahn",
"Bodo",
""
]
]
| TITLE: S4D-Bio Audio Monitoring of Bone Cement Disintegration in Pulsating
Fluid Jet Surgery under Laboratory Conditions
ABSTRACT: This study investigates a pulsating fluid jet as a novel precise, minimally
invasive and cold technique for bone cement removal. We utilize the pulsating
fluid jet device to remove bone cement from samples designed to mimic clinical
conditions. The effectiveness of long nozzles was tested to enable minimally
invasive procedures. Audio signal monitoring, complemented by the State Space
Model (SSM) S4D-Bio, was employed to optimize the fluid jet parameters
dynamically, addressing challenges like visibility obstruction from splashing.
Within our experiments, we generate a comprehensive dataset correlating various
process parameters and their equivalent audio signals to material erosion. The
use of SSMs yields precise control over the predictive erosion process,
achieving 98.93% accuracy. The study demonstrates, on the one hand, that the
pulsating fluid jet device, coupled with advanced audio monitoring techniques,
is a highly effective tool for precise bone cement removal. On the other hand,
this study presents the first application of SSMs in biomedical surgery
technology, marking a significant advancement in the application. This research
significantly advances biomedical engineering by integrating machine learning
combined with pulsating fluid jet as surgical technology, offering a novel,
minimally invasive, cold and adaptive approach for bone cement removal in
orthopedic applications.
| no_new_dataset | 0.643525 |
2503.02717 | Lin Xi | Lin Xi, Yingliang Ma, Ethan Koland, Sandra Howell, Aldo Rinaldi, Kawal
S. Rhode | Catheter Detection and Segmentation in X-ray Images via Multi-task
Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated detection and segmentation of surgical devices, such as catheters
or wires, in X-ray fluoroscopic images have the potential to enhance image
guidance in minimally invasive heart surgeries. In this paper, we present a
convolutional neural network model that integrates a ResNet architecture with
multiple prediction heads to achieve real-time, accurate localization of
electrodes on catheters and catheter segmentation in an end-to-end deep
learning framework. We also propose a multi-task learning strategy in which our
model is trained to perform both accurate electrode detection and catheter
segmentation simultaneously. A key challenge with this approach is achieving
optimal performance for both tasks. To address this, we introduce a novel
multi-level dynamic resource prioritization method. This method dynamically
adjusts sample and task weights during training to effectively prioritize more
challenging tasks, where task difficulty is inversely proportional to
performance and evolves throughout the training process. Experiments on both
public and private datasets have demonstrated that the accuracy of our method
surpasses existing state-of-the-art methods in both the single segmentation
task and the detection-and-segmentation multi-task. Our approach achieves a
good trade-off between accuracy and efficiency, making it well-suited for
real-time surgical guidance applications.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 15:32:32 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Xi",
"Lin",
""
],
[
"Ma",
"Yingliang",
""
],
[
"Koland",
"Ethan",
""
],
[
"Howell",
"Sandra",
""
],
[
"Rinaldi",
"Aldo",
""
],
[
"Rhode",
"Kawal S.",
""
]
]
| TITLE: Catheter Detection and Segmentation in X-ray Images via Multi-task
Learning
ABSTRACT: Automated detection and segmentation of surgical devices, such as catheters
or wires, in X-ray fluoroscopic images have the potential to enhance image
guidance in minimally invasive heart surgeries. In this paper, we present a
convolutional neural network model that integrates a ResNet architecture with
multiple prediction heads to achieve real-time, accurate localization of
electrodes on catheters and catheter segmentation in an end-to-end deep
learning framework. We also propose a multi-task learning strategy in which our
model is trained to perform both accurate electrode detection and catheter
segmentation simultaneously. A key challenge with this approach is achieving
optimal performance for both tasks. To address this, we introduce a novel
multi-level dynamic resource prioritization method. This method dynamically
adjusts sample and task weights during training to effectively prioritize more
challenging tasks, where task difficulty is inversely proportional to
performance and evolves throughout the training process. Experiments on both
public and private datasets have demonstrated that the accuracy of our method
surpasses existing state-of-the-art methods in both the single segmentation
task and the detection-and-segmentation multi-task. Our approach achieves a
good trade-off between accuracy and efficiency, making it well-suited for
real-time surgical guidance applications.
| no_new_dataset | 0.947769 |
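A small sketch of the prioritization idea in the record above, where task difficulty is taken as inversely proportional to performance; the abstract does not give the exact weighting scheme, so this renormalized inverse-score rule is an assumption.

```python
# Sketch: weight each task inversely to its running performance so the harder
# task (lower score) gets more weight; renormalize so weights sum to 1.
def dynamic_task_weights(running_scores: dict, eps: float = 1e-6) -> dict:
    inv = {task: 1.0 / (score + eps) for task, score in running_scores.items()}
    total = sum(inv.values())
    return {task: v / total for task, v in inv.items()}

scores = {"electrode_detection": 0.92, "catheter_segmentation": 0.71}  # e.g. F1 / Dice
weights = dynamic_task_weights(scores)
print(weights)
# total_loss = weights["electrode_detection"] * det_loss \
#            + weights["catheter_segmentation"] * seg_loss
```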
2503.02718 | Keti Korini | Keti Korini and Christian Bizer | Evaluating Knowledge Generation and Self-Refinement Strategies for
LLM-based Column Type Annotation | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the semantics of columns in relational tables is an important
pre-processing step for indexing data lakes in order to provide rich data
search. An approach to establishing such understanding is column type
annotation (CTA), where the goal is to annotate table columns with terms from a
given vocabulary. This paper experimentally compares different knowledge
generation and self-refinement strategies for LLM-based column type annotation.
The strategies include using LLMs to generate term definitions, error-based
refinement of term definitions, self-correction, and fine-tuning using examples
and term definitions. We evaluate these strategies along two dimensions:
effectiveness measured as F1 performance and efficiency measured in terms of
token usage and cost. Our experiments show that the best performing strategy
depends on the model/dataset combination. We find that using training data to
generate label definitions outperforms using the same data as demonstrations
for in-context learning for two out of three datasets using OpenAI models. The
experiments further show that using the LLMs to refine label definitions brings
an average increase of 3.9% F1 in 10 out of 12 setups compared to the
performance of the non-refined definitions. Combining fine-tuned models with
self-refined term definitions results in the overall highest performance,
outperforming zero-shot prompting fine-tuned models by at least 3% in F1 score.
The cost analysis shows that, while reaching a similar F1 score, self-refinement
via prompting is more cost-efficient for use cases where smaller numbers of
tables need to be annotated, while fine-tuning is more efficient for large
numbers of tables.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 15:32:59 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Korini",
"Keti",
""
],
[
"Bizer",
"Christian",
""
]
]
| TITLE: Evaluating Knowledge Generation and Self-Refinement Strategies for
LLM-based Column Type Annotation
ABSTRACT: Understanding the semantics of columns in relational tables is an important
pre-processing step for indexing data lakes in order to provide rich data
search. An approach to establishing such understanding is column type
annotation (CTA), where the goal is to annotate table columns with terms from a
given vocabulary. This paper experimentally compares different knowledge
generation and self-refinement strategies for LLM-based column type annotation.
The strategies include using LLMs to generate term definitions, error-based
refinement of term definitions, self-correction, and fine-tuning using examples
and term definitions. We evaluate these strategies along two dimensions:
effectiveness measured as F1 performance and efficiency measured in terms of
token usage and cost. Our experiments show that the best performing strategy
depends on the model/dataset combination. We find that using training data to
generate label definitions outperforms using the same data as demonstrations
for in-context learning for two out of three datasets using OpenAI models. The
experiments further show that using the LLMs to refine label definitions brings
an average increase of 3.9% F1 in 10 out of 12 setups compared to the
performance of the non-refined definitions. Combining fine-tuned models with
self-refined term definitions results in the overall highest performance,
outperforming zero-shot prompting fine-tuned models by at least 3% in F1 score.
The cost analysis shows that, while reaching a similar F1 score, self-refinement
via prompting is more cost-efficient for use cases where smaller numbers of
tables need to be annotated, while fine-tuning is more efficient for large
numbers of tables.
| no_new_dataset | 0.955319 |
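A hedged sketch of prompt-based column type annotation in the spirit of the record above; the vocabulary, definition text, prompt wording, and model name are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch: annotate a table column with one term from a small vocabulary,
# optionally supplying generated term definitions. All strings are examples.
from openai import OpenAI

client = OpenAI()
vocabulary = ["person_name", "city", "date", "price"]
definitions = {"city": "The name of a populated place such as a town."}

def annotate_column(values):
    prompt = (
        "Annotate the column with exactly one term from the vocabulary.\n"
        f"Vocabulary: {', '.join(vocabulary)}\n"
        f"Definitions: {definitions}\n"
        f"Column values: {values}\nTerm:"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

print(annotate_column(["Mannheim", "Berlin", "Paris"]))  # expected: city
```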
2503.02726 | Gokul Gowri | Gokul Gowri, Peng Yin, Allon M. Klein | Measurement noise scaling laws for cellular representation learning | null | null | null | null | q-bio.QM cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning scaling laws predict how performance improves with increased
model and dataset size. Here we identify measurement noise in data as another
performance scaling axis, governed by a distinct logarithmic law. We focus on
representation learning models of biological single cell genomic data, where a
dominant source of measurement noise is due to molecular undersampling. We
introduce an information-theoretic metric for cellular representation model
quality, and find that it scales with sampling depth. A single quantitative
relationship holds across several model types and across several datasets. We
show that the analytical form of this relationship can be derived from a simple
Gaussian noise model, which in turn provides an intuitive interpretation for
the scaling law. Finally, we show that the same relationship emerges in image
classification models with respect to two types of imaging noise, suggesting
that measurement noise scaling may be a general phenomenon. Scaling with noise
can serve as a guide in generating and curating data for deep learning models,
particularly in fields where measurement quality can vary dramatically between
datasets.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 15:44:59 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Gowri",
"Gokul",
""
],
[
"Yin",
"Peng",
""
],
[
"Klein",
"Allon M.",
""
]
]
| TITLE: Measurement noise scaling laws for cellular representation learning
ABSTRACT: Deep learning scaling laws predict how performance improves with increased
model and dataset size. Here we identify measurement noise in data as another
performance scaling axis, governed by a distinct logarithmic law. We focus on
representation learning models of biological single cell genomic data, where a
dominant source of measurement noise is due to molecular undersampling. We
introduce an information-theoretic metric for cellular representation model
quality, and find that it scales with sampling depth. A single quantitative
relationship holds across several model types and across several datasets. We
show that the analytical form of this relationship can be derived from a simple
Gaussian noise model, which in turn provides an intuitive interpretation for
the scaling law. Finally, we show that the same relationship emerges in image
classification models with respect to two types of imaging noise, suggesting
that measurement noise scaling may be a general phenomenon. Scaling with noise
can serve as a guide in generating and curating data for deep learning models,
particularly in fields where measurement quality can vary dramatically between
datasets.
| no_new_dataset | 0.947332 |
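The logarithmic law described above lends itself to a one-line fit. A small sketch with synthetic placeholder numbers, assuming quality varies as a * log(depth) + b:

```python
# Sketch: fit quality ~ a * log(depth) + b. The data points are synthetic.
import numpy as np

depth = np.array([1e3, 3e3, 1e4, 3e4, 1e5])          # sampling depth per cell
quality = np.array([0.41, 0.52, 0.63, 0.74, 0.85])   # model-quality metric

a, b = np.polyfit(np.log(depth), quality, deg=1)
print(f"quality ~= {a:.3f} * log(depth) + {b:.3f}")
```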
2503.02741 | Bernd Prostmaier | Bernd Prostmaier, Jan Vávra, Bettina Grün, Paul Hofmarcher | Seeded Poisson Factorization: Leveraging domain knowledge to fit topic
models | null | null | null | null | stat.ME cs.CL cs.LG econ.GN q-fin.EC | http://creativecommons.org/licenses/by/4.0/ | Topic models are widely used for discovering latent thematic structures in
large text corpora, yet traditional unsupervised methods often struggle to
align with predefined conceptual domains. This paper introduces Seeded Poisson
Factorization (SPF), a novel approach that extends the Poisson Factorization
framework by incorporating domain knowledge through seed words. SPF enables a
more interpretable and structured topic discovery by modifying the prior
distribution of topic-specific term intensities, assigning higher initial rates
to predefined seed words. The model is estimated using variational inference
with stochastic gradient optimization, ensuring scalability to large datasets.
We apply SPF to an Amazon customer feedback dataset, leveraging predefined
product categories as guiding structures. Our evaluation demonstrates that SPF
achieves superior classification performance compared to alternative guided
topic models, particularly in terms of computational efficiency and predictive
performance. Furthermore, robustness checks highlight SPF's ability to
adaptively balance domain knowledge and data-driven topic discovery, even in
cases of imperfect seed word selection. These results establish SPF as a
powerful and scalable alternative for integrating expert knowledge into topic
modeling, enhancing both interpretability and efficiency in real-world
applications.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 16:05:13 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Prostmaier",
"Bernd",
""
],
[
"Vávra",
"Jan",
""
],
[
"Grün",
"Bettina",
""
],
[
"Hofmarcher",
"Paul",
""
]
]
| TITLE: Seeded Poisson Factorization: Leveraging domain knowledge to fit topic
models
ABSTRACT: Topic models are widely used for discovering latent thematic structures in
large text corpora, yet traditional unsupervised methods often struggle to
align with predefined conceptual domains. This paper introduces Seeded Poisson
Factorization (SPF), a novel approach that extends the Poisson Factorization
framework by incorporating domain knowledge through seed words. SPF enables a
more interpretable and structured topic discovery by modifying the prior
distribution of topic-specific term intensities, assigning higher initial rates
to predefined seed words. The model is estimated using variational inference
with stochastic gradient optimization, ensuring scalability to large datasets.
We apply SPF to an Amazon customer feedback dataset, leveraging predefined
product categories as guiding structures. Our evaluation demonstrates that SPF
achieves superior classification performance compared to alternative guided
topic models, particularly in terms of computational efficiency and predictive
performance. Furthermore, robustness checks highlight SPF's ability to
adaptively balance domain knowledge and data-driven topic discovery, even in
cases of imperfect seed word selection. These results establish SPF as a
powerful and scalable alternative for integrating expert knowledge into topic
modeling, enhancing both interpretability and efficiency in real-world
applications.
| no_new_dataset | 0.945751 |
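A hedged sketch of the seeding mechanism described above: in Poisson Factorization the topic-term intensities carry Gamma priors, and seed words get a boosted prior for their topic so they start with higher expected intensity. The vocabulary, hyperparameters, and the choice to boost the shape parameter are illustrative assumptions.

```python
# Sketch: build seeded Gamma priors over topic-term intensities beta[k, v];
# seed words receive a larger prior shape, raising their prior mean.
import numpy as np

vocab = ["battery", "screen", "shipping", "refund"]
seeds = {0: ["battery", "screen"], 1: ["shipping", "refund"]}  # topic -> seeds
K, V = len(seeds), len(vocab)

shape = np.full((K, V), 0.3)                 # baseline prior shape
for topic, words in seeds.items():
    for w in words:
        shape[topic, vocab.index(w)] = 3.0   # boosted prior for seed words

rng = np.random.default_rng(0)
beta_init = rng.gamma(shape, scale=1.0)      # initial topic-term intensities
```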
2503.02760 | Jiacheng Tang | Jiacheng Tang, Nankai Wu, Fan Gao, Chengxiao Dai, Mengyao Zhao, Xinjie
Zhao | From Metaphor to Mechanism: How LLMs Decode Traditional Chinese Medicine
Symbolic Language for Modern Clinical Relevance | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metaphorical expressions are abundant in Traditional Chinese Medicine (TCM),
conveying complex disease mechanisms and holistic health concepts through
culturally rich and often abstract terminology. Bridging these metaphors to
anatomically driven Western medical (WM) concepts poses significant challenges
for both automated language processing and real-world clinical practice. To
address this gap, we propose a novel multi-agent and chain-of-thought (CoT)
framework designed to interpret TCM metaphors accurately and map them to WM
pathophysiology. Specifically, our approach combines domain-specialized agents
(TCM Expert, WM Expert) with a Coordinator Agent, leveraging stepwise
chain-of-thought prompts to ensure transparent reasoning and conflict
resolution. We detail a methodology for building a metaphor-rich TCM dataset,
discuss strategies for effectively integrating multi-agent collaboration and
CoT reasoning, and articulate the theoretical underpinnings that guide metaphor
interpretation across distinct medical paradigms. We present a comprehensive
system design and highlight both the potential benefits and limitations of our
approach, while leaving placeholders for future experimental validation. Our
work aims to support clinical decision-making, cross-system educational
initiatives, and integrated healthcare research, ultimately offering a robust
scaffold for reconciling TCM's symbolic language with the mechanistic focus of
Western medicine.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 16:22:49 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Tang",
"Jiacheng",
""
],
[
"Wu",
"Nankai",
""
],
[
"Gao",
"Fan",
""
],
[
"Dai",
"Chengxiao",
""
],
[
"Zhao",
"Mengyao",
""
],
[
"Zhao",
"Xinjie",
""
]
]
| TITLE: From Metaphor to Mechanism: How LLMs Decode Traditional Chinese Medicine
Symbolic Language for Modern Clinical Relevance
ABSTRACT: Metaphorical expressions are abundant in Traditional Chinese Medicine (TCM),
conveying complex disease mechanisms and holistic health concepts through
culturally rich and often abstract terminology. Bridging these metaphors to
anatomically driven Western medical (WM) concepts poses significant challenges
for both automated language processing and real-world clinical practice. To
address this gap, we propose a novel multi-agent and chain-of-thought (CoT)
framework designed to interpret TCM metaphors accurately and map them to WM
pathophysiology. Specifically, our approach combines domain-specialized agents
(TCM Expert, WM Expert) with a Coordinator Agent, leveraging stepwise
chain-of-thought prompts to ensure transparent reasoning and conflict
resolution. We detail a methodology for building a metaphor-rich TCM dataset,
discuss strategies for effectively integrating multi-agent collaboration and
CoT reasoning, and articulate the theoretical underpinnings that guide metaphor
interpretation across distinct medical paradigms. We present a comprehensive
system design and highlight both the potential benefits and limitations of our
approach, while leaving placeholders for future experimental validation. Our
work aims to support clinical decision-making, cross-system educational
initiatives, and integrated healthcare research, ultimately offering a robust
scaffold for reconciling TCM's symbolic language with the mechanistic focus of
Western medicine.
| new_dataset | 0.909265 |
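A hedged sketch of the multi-agent arrangement described above: two domain experts reason step by step and a coordinator reconciles their analyses. The `ask_llm` helper is a placeholder for any chat-completion client, and the prompts are illustrative.

```python
# Sketch: TCM expert + WM expert + coordinator, each prompted for
# step-by-step (chain-of-thought) reasoning. ask_llm is a stub to be wired
# to a real client.
def ask_llm(system: str, user: str) -> str:
    raise NotImplementedError("plug in your chat-completion client here")

def interpret_metaphor(metaphor: str) -> str:
    tcm = ask_llm("You are a TCM expert. Reason step by step.", metaphor)
    wm = ask_llm("You are a Western-medicine expert. Reason step by step.",
                 metaphor)
    return ask_llm(
        "You are a coordinator. Resolve conflicts between the two analyses "
        "and map the TCM metaphor to WM pathophysiology, step by step.",
        f"TCM analysis:\n{tcm}\n\nWM analysis:\n{wm}",
    )
```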
2503.02767 | Hiroshi Kera | Ru Ito, Supatta Viriyavisuthisakul, Kazuhiko Kawamoto, Hiroshi Kera | Undertrained Image Reconstruction for Realistic Degradation in Blind
Image Super-Resolution | 11 pages, 11 figures, 2 tables | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most super-resolution (SR) models struggle with real-world low-resolution
(LR) images. This issue arises because the degradation characteristics in the
synthetic datasets differ from those in real-world LR images. Since SR models
are trained on pairs of high-resolution (HR) and LR images generated by
downsampling, they are optimized for simple degradation. However, real-world LR
images contain complex degradation caused by factors such as the imaging
process and JPEG compression. Due to these differences in degradation
characteristics, most SR models perform poorly on real-world LR images. This
study proposes a dataset generation method using undertrained image
reconstruction models. These models have the property of reconstructing
low-quality images with diverse degradation from input images. By leveraging
this property, this study generates LR images with diverse degradation from HR
images to construct the datasets. Fine-tuning pre-trained SR models on our
generated datasets improves noise removal and blur reduction, enhancing
performance on real-world LR images. Furthermore, an analysis of the datasets
reveals that degradation diversity contributes to performance improvements,
whereas color differences between HR and LR images may degrade performance.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 16:33:58 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ito",
"Ru",
""
],
[
"Viriyavisuthisakul",
"Supatta",
""
],
[
"Kawamoto",
"Kazuhiko",
""
],
[
"Kera",
"Hiroshi",
""
]
]
| TITLE: Undertrained Image Reconstruction for Realistic Degradation in Blind
Image Super-Resolution
ABSTRACT: Most super-resolution (SR) models struggle with real-world low-resolution
(LR) images. This issue arises because the degradation characteristics in the
synthetic datasets differ from those in real-world LR images. Since SR models
are trained on pairs of high-resolution (HR) and LR images generated by
downsampling, they are optimized for simple degradation. However, real-world LR
images contain complex degradation caused by factors such as the imaging
process and JPEG compression. Due to these differences in degradation
characteristics, most SR models perform poorly on real-world LR images. This
study proposes a dataset generation method using undertrained image
reconstruction models. These models have the property of reconstructing
low-quality images with diverse degradation from input images. By leveraging
this property, this study generates LR images with diverse degradation from HR
images to construct the datasets. Fine-tuning pre-trained SR models on our
generated datasets improves noise removal and blur reduction, enhancing
performance on real-world LR images. Furthermore, an analysis of the datasets
reveals that degradation diversity contributes to performance improvements,
whereas color differences between HR and LR images may degrade performance.
| no_new_dataset | 0.950778 |
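A hedged sketch of the generation idea above: train a small image-reconstruction model for deliberately few steps, then use its imperfect reconstructions of HR images as degraded counterparts. The tiny autoencoder, step count, and random stand-in batch are illustrative assumptions, not the paper's models.

```python
# Sketch: an "undertrained" autoencoder produces diversely degraded
# reconstructions, which can be paired with the HR inputs for SR fine-tuning.
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(3, 16, 3, stride=2, padding=1)
        self.dec = nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1)

    def forward(self, x):
        return torch.sigmoid(self.dec(torch.relu(self.enc(x))))

ae = TinyAE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
hr = torch.rand(8, 3, 64, 64)              # stand-in HR batch
for _ in range(50):                        # deliberately few steps: undertrained
    loss = nn.functional.mse_loss(ae(hr), hr)
    opt.zero_grad()
    loss.backward()
    opt.step()

degraded = ae(hr).detach()                 # degraded counterparts for SR training
```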
2503.02773 | Francesco Panelli | Francesco Panelli, Doaa Almhaithawi, Tania Cerquitelli and Alessandro
Bellini | Prime Convolutional Model: Breaking the Ground for Theoretical
Explainability | null | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose a new theoretical approach to Explainable AI.
Following the Scientific Method, this approach consists in formulating, on the
basis of empirical evidence, a mathematical model to explain and predict the
behaviors of Neural Networks. We apply the method to a case study created in a
controlled environment, which we call Prime Convolutional Model (p-Conv for
short). p-Conv operates on a dataset consisting of the first one million
natural numbers and is trained to identify the congruence classes modulo a
given integer $m$. Its architecture uses a convolutional-type neural network
that contextually processes a sequence of $B$ consecutive numbers for each
input. We take an empirical approach and exploit p-Conv to identify the
congruence classes of numbers in a validation set using different values for
$m$ and $B$. The results show that the different behaviors of p-Conv (i.e.,
whether it can perform the task or not) can be modeled mathematically in terms
of $m$ and $B$. The inferred mathematical model reveals interesting patterns
able to explain when and why p-Conv succeeds in performing the task and, if not,
which error pattern it follows.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 16:42:46 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Panelli",
"Francesco",
""
],
[
"Almhaithawi",
"Doaa",
""
],
[
"Cerquitelli",
"Tania",
""
],
[
"Bellini",
"Alessandro",
""
]
]
| TITLE: Prime Convolutional Model: Breaking the Ground for Theoretical
Explainability
ABSTRACT: In this paper, we propose a new theoretical approach to Explainable AI.
Following the Scientific Method, this approach consists in formulating, on the
basis of empirical evidence, a mathematical model to explain and predict the
behaviors of Neural Networks. We apply the method to a case study created in a
controlled environment, which we call Prime Convolutional Model (p-Conv for
short). p-Conv operates on a dataset consisting of the first one million
natural numbers and is trained to identify the congruence classes modulo a
given integer $m$. Its architecture uses a convolutional-type neural network
that contextually processes a sequence of $B$ consecutive numbers for each
input. We take an empirical approach and exploit p-Conv to identify the
congruence classes of numbers in a validation set using different values for
$m$ and $B$. The results show that the different behaviors of p-Conv (i.e.,
whether it can perform the task or not) can be modeled mathematically in terms
of $m$ and $B$. The inferred mathematical model reveals interesting patterns
able to explain when and why p-Conv succeeds in performing the task and, if not,
which error pattern it follows.
| no_new_dataset | 0.94428 |
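The setup in the record above is concrete enough to sketch the data side: the first one million natural numbers, windows of $B$ consecutive numbers, and congruence-class labels modulo $m$. The paper's exact input encoding is not given here, so the window-and-label convention below is an assumption.

```python
# Sketch: build p-Conv-style data, with each sample a window of B consecutive
# numbers labeled by the congruence class of its last element modulo m.
import numpy as np

m, B, N = 7, 5, 1_000_000
numbers = np.arange(1, N + 1)

windows = np.lib.stride_tricks.sliding_window_view(numbers, B)
labels = windows[:, -1] % m

print(windows[0], labels[0])   # [1 2 3 4 5] 5  (since 5 mod 7 == 5)
```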
2503.02776 | Xinru Lin | Xinru Lin, Luyang Li | Implicit Bias in LLMs: A Survey | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Due to the implementation of guardrails by developers, large language models
(LLMs) have demonstrated exceptional performance in explicit bias tests.
However, bias in LLMs may occur not only explicitly, but also implicitly, much
like humans who consciously strive for impartiality yet still harbor implicit
bias. The unconscious and automatic nature of implicit bias makes it
particularly challenging to study. This paper provides a comprehensive review
of the existing literature on implicit bias in LLMs. We begin by introducing
key concepts, theories and methods related to implicit bias in psychology,
extending them from humans to LLMs. Drawing on the Implicit Association Test
(IAT) and other psychological frameworks, we categorize detection methods into
three primary approaches: word association, task-oriented text generation and
decision-making. We divide our taxonomy of evaluation metrics for implicit bias
into two categories: single-value-based metrics and comparison-value-based
metrics. We classify datasets into two types: sentences with masked tokens and
complete sentences, incorporating datasets from various domains to reflect the
broad application of LLMs. Although research on mitigating implicit bias in
LLMs is still limited, we summarize existing efforts and offer insights on
future challenges. We aim for this work to serve as a clear guide for
researchers and inspire innovative ideas to advance exploration in this task.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 16:49:37 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lin",
"Xinru",
""
],
[
"Li",
"Luyang",
""
]
]
| TITLE: Implicit Bias in LLMs: A Survey
ABSTRACT: Due to the implementation of guardrails by developers, large language models
(LLMs) have demonstrated exceptional performance in explicit bias tests.
However, bias in LLMs may occur not only explicitly, but also implicitly, much
like humans who consciously strive for impartiality yet still harbor implicit
bias. The unconscious and automatic nature of implicit bias makes it
particularly challenging to study. This paper provides a comprehensive review
of the existing literature on implicit bias in LLMs. We begin by introducing
key concepts, theories and methods related to implicit bias in psychology,
extending them from humans to LLMs. Drawing on the Implicit Association Test
(IAT) and other psychological frameworks, we categorize detection methods into
three primary approaches: word association, task-oriented text generation and
decision-making. We divide our taxonomy of evaluation metrics for implicit bias
into two categories: single-value-based metrics and comparison-value-based
metrics. We classify datasets into two types: sentences with masked tokens and
complete sentences, incorporating datasets from various domains to reflect the
broad application of LLMs. Although research on mitigating implicit bias in
LLMs is still limited, we summarize existing efforts and offer insights on
future challenges. We aim for this work to serve as a clear guide for
researchers and inspire innovative ideas to advance exploration in this task.
| no_new_dataset | 0.943867 |
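A hedged sketch of a word-association style probe of the kind the survey above categorizes, using a masked-LM template from the "sentences with masked tokens" dataset type; the model, template, and target words are illustrative assumptions.

```python
# Sketch: compare a masked LM's probabilities for two attribute words in the
# same template; a large gap suggests an implicit association.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def association_gap(template: str, a: str, b: str) -> float:
    scores = {r["token_str"]: r["score"]
              for r in fill(template, targets=[a, b])}
    return scores.get(a, 0.0) - scores.get(b, 0.0)

print(association_gap("The nurse said [MASK] would help.", "she", "he"))
```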
2503.02797 | Nathan Drenkow | Nathan Drenkow and Mathias Unberath | A Causal Framework for Aligning Image Quality Metrics and Deep Neural
Network Robustness | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Image quality plays an important role in the performance of deep neural
networks (DNNs) and DNNs have been widely shown to exhibit sensitivity to
changes in imaging conditions. Large-scale datasets often contain images under
a wide range of conditions prompting a need to quantify and understand their
underlying quality distribution in order to better characterize DNN performance
and robustness. Aligning the sensitivities of image quality metrics and DNNs
ensures that estimates of quality can act as proxies for image/dataset
difficulty independent of the task models trained/evaluated on the data.
Conventional image quality assessment (IQA) seeks to measure and align quality
relative to human perceptual judgments, but here we seek a quality measure that
is not only sensitive to imaging conditions but also well-aligned with DNN
sensitivities. We first ask whether conventional IQA metrics are also
informative of DNN performance. In order to answer this question, we reframe
IQA from a causal perspective and examine conditions under which quality
metrics are predictive of DNN performance. We show theoretically and
empirically that current IQA metrics are weak predictors of DNN performance in
the context of classification. We then use our causal framework to provide an
alternative formulation and a new image quality metric that is more strongly
correlated with DNN performance and can act as a prior on performance without
training new task models. Our approach provides a means to directly estimate
the quality distribution of large-scale image datasets towards characterizing
the relationship between dataset composition and DNN performance.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 17:15:31 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Drenkow",
"Nathan",
""
],
[
"Unberath",
"Mathias",
""
]
]
| TITLE: A Causal Framework for Aligning Image Quality Metrics and Deep Neural
Network Robustness
ABSTRACT: Image quality plays an important role in the performance of deep neural
networks (DNNs) and DNNs have been widely shown to exhibit sensitivity to
changes in imaging conditions. Large-scale datasets often contain images under
a wide range of conditions prompting a need to quantify and understand their
underlying quality distribution in order to better characterize DNN performance
and robustness. Aligning the sensitivities of image quality metrics and DNNs
ensures that estimates of quality can act as proxies for image/dataset
difficulty independent of the task models trained/evaluated on the data.
Conventional image quality assessment (IQA) seeks to measure and align quality
relative to human perceptual judgments, but here we seek a quality measure that
is not only sensitive to imaging conditions but also well-aligned with DNN
sensitivities. We first ask whether conventional IQA metrics are also
informative of DNN performance. In order to answer this question, we reframe
IQA from a causal perspective and examine conditions under which quality
metrics are predictive of DNN performance. We show theoretically and
empirically that current IQA metrics are weak predictors of DNN performance in
the context of classification. We then use our causal framework to provide an
alternative formulation and a new image quality metric that is more strongly
correlated with DNN performance and can act as a prior on performance without
training new task models. Our approach provides a means to directly estimate
the quality distribution of large-scale image datasets towards characterizing
the relationship between dataset composition and DNN performance.
| no_new_dataset | 0.946892 |
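One way to make the alignment question in the record above concrete is to correlate per-image quality scores with per-image model correctness; a sketch on synthetic stand-in data (both the quality scores and the correctness labels below are fabricated purely for illustration):

```python
# Sketch: rank-correlate an IQA-style score with DNN correctness to test
# whether the metric is predictive of performance.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
quality = rng.uniform(20, 40, size=500)               # e.g. PSNR-like scores
correct = (quality + rng.normal(0, 8, 500)) > 28      # correctness proxy

rho, p = spearmanr(quality, correct.astype(float))
print(f"Spearman rho={rho:.2f}, p={p:.3g}")
```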
2503.02799 | Weihang Wang | Weihang Wang, Duolin Sun, Jielei Zhang and Longwen Gao | MX-Font++: Mixture of Heterogeneous Aggregation Experts for Few-shot
Font Generation | 4 pages, 4 figures, accepted by ICASSP 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Few-shot Font Generation (FFG) aims to create new font libraries using
limited reference glyphs, with crucial applications in digital accessibility
and equity for low-resource languages, especially in multilingual artificial
intelligence systems. Although existing methods have shown promising
performance, transitioning to unseen characters in low-resource languages
remains a significant challenge, especially when font glyphs vary considerably
across training sets. MX-Font considers the content of a character from the
perspective of a local component, employing a Mixture of Experts (MoE) approach
to adaptively extract the component for better transition. However, the lack of
a robust feature extractor prevents them from adequately decoupling content and
style, leading to sub-optimal generation results. To alleviate these problems,
we propose Heterogeneous Aggregation Experts (HAE), a powerful feature
extraction expert that aggregates information across channel and spatial
dimensions, helping downstream components decouple content and style. Additionally,
we propose a novel content-style homogeneity loss to enhance the untangling.
Extensive experiments on several datasets demonstrate that our MX-Font++ yields
superior visual results in FFG and effectively outperforms state-of-the-art
methods. Code and data are available at
https://github.com/stephensun11/MXFontpp.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 17:18:43 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wang",
"Weihang",
""
],
[
"Sun",
"Duolin",
""
],
[
"Zhang",
"Jielei",
""
],
[
"Gao",
"Longwen",
""
]
]
| TITLE: MX-Font++: Mixture of Heterogeneous Aggregation Experts for Few-shot
Font Generation
ABSTRACT: Few-shot Font Generation (FFG) aims to create new font libraries using
limited reference glyphs, with crucial applications in digital accessibility
and equity for low-resource languages, especially in multilingual artificial
intelligence systems. Although existing methods have shown promising
performance, transitioning to unseen characters in low-resource languages
remains a significant challenge, especially when font glyphs vary considerably
across training sets. MX-Font considers the content of a character from the
perspective of a local component, employing a Mixture of Experts (MoE) approach
to adaptively extract the component for better transition. However, the lack of
a robust feature extractor prevents them from adequately decoupling content and
style, leading to sub-optimal generation results. To alleviate these problems,
we propose Heterogeneous Aggregation Experts (HAE), a powerful feature
extraction expert that aggregates information across channel and spatial
dimensions, helping downstream components decouple content and style. Additionally,
we propose a novel content-style homogeneity loss to enhance the untangling.
Extensive experiments on several datasets demonstrate that our MX-Font++ yields
superior visual results in FFG and effectively outperforms state-of-the-art
methods. Code and data are available at
https://github.com/stephensun11/MXFontpp.
| no_new_dataset | 0.952662 |
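A hedged sketch of feature aggregation across channel and spatial dimensions, the property the record above attributes to the Heterogeneous Aggregation Experts; this squeeze-excitation-plus-spatial-attention block is an illustrative stand-in, not the paper's HAE architecture.

```python
# Sketch: reweight features first per channel, then per spatial location.
import torch
import torch.nn as nn

class ChannelSpatialAggregate(nn.Module):
    def __init__(self, c: int, r: int = 4):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(c, c // r, 1), nn.ReLU(),
            nn.Conv2d(c // r, c, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(nn.Conv2d(c, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)       # channel-wise reweighting
        return x * self.spatial(x)    # spatial reweighting

feat = torch.randn(2, 64, 32, 32)
out = ChannelSpatialAggregate(64)(feat)   # same shape, reweighted features
```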
2503.02823 | Matteo Spanio | Matteo Spanio and Massimiliano Zampini and Antonio Rodà and Franco
Pierucci | A Multimodal Symphony: Integrating Taste and Sound through Generative AI | 17 pages, 6 figures (2 + 2 figures with 2 subfigures each) | null | null | null | cs.SD cs.AI cs.MM eess.AS | http://creativecommons.org/licenses/by/4.0/ | In recent decades, neuroscientific and psychological research has traced
direct relationships between taste and auditory perceptions. This article
explores multimodal generative models capable of converting taste information
into music, building on this foundational research. We provide a brief review
of the state of the art in this field, highlighting key findings and
methodologies. We present an experiment in which a fine-tuned version of a
generative music model (MusicGEN) is used to generate music based on detailed
taste descriptions provided for each musical piece. The results are promising:
according the participants' ($n=111$) evaluation, the fine-tuned model produces
music that more coherently reflects the input taste descriptions compared to
the non-fine-tuned model. This study represents a significant step towards
understanding and developing embodied interactions between AI, sound, and
taste, opening new possibilities in the field of generative AI. We release our
dataset, code and pre-trained model at: https://osf.io/xs5jy/.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 17:48:48 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Spanio",
"Matteo",
""
],
[
"Zampini",
"Massimiliano",
""
],
[
"Rodà",
"Antonio",
""
],
[
"Pierucci",
"Franco",
""
]
]
| TITLE: A Multimodal Symphony: Integrating Taste and Sound through Generative AI
ABSTRACT: In recent decades, neuroscientific and psychological research has traced
direct relationships between taste and auditory perceptions. This article
explores multimodal generative models capable of converting taste information
into music, building on this foundational research. We provide a brief review
of the state of the art in this field, highlighting key findings and
methodologies. We present an experiment in which a fine-tuned version of a
generative music model (MusicGEN) is used to generate music based on detailed
taste descriptions provided for each musical piece. The results are promising:
according to the participants' ($n=111$) evaluation, the fine-tuned model produces
music that more coherently reflects the input taste descriptions compared to
the non-fine-tuned model. This study represents a significant step towards
understanding and developing embodied interactions between AI, sound, and
taste, opening new possibilities in the field of generative AI. We release our
dataset, code and pre-trained model at: https://osf.io/xs5jy/.
| new_dataset | 0.959687 |
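A hedged sketch of prompting a MusicGen checkpoint with taste descriptions, in the spirit of the record above; the usage follows audiocraft's documented API, but the checkpoint, duration, and prompts are illustrative assumptions, not the authors' fine-tuned model (released at https://osf.io/xs5jy/).

```python
# Sketch: condition MusicGen on taste descriptions.
from audiocraft.models import MusicGen

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=10)  # seconds

taste_prompts = [
    "a sweet, honeyed flavor with a soft creamy finish",
    "a sharp bitter taste with metallic citrus notes",
]
wav = model.generate(taste_prompts)  # (batch, channels, samples) tensor
```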
2503.02824 | Yujin Oh | Yujin Oh, Robert Seifert, Yihan Cao, Christoph Clement, Justin
Ferdinandus, Constantin Lapa, Alessandro Liebich, Michelle Amon, Johanna
Enke, Sifan Song, Runqi Meng, Fang Zeng, Ning Guo, Xiang Li, Pedram Heidari,
Axel Rominger, Kuangyu Shi, Quanzheng Li | Developing a PET/CT Foundation Model for Cross-Modal Anatomical and
Functional Imaging | 11 pages, 2 figures, 3 tables | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In oncology, Positron Emission Tomography-Computed Tomography (PET/CT) is
widely used in cancer diagnosis, staging, and treatment monitoring, as it
combines anatomical details from CT with functional metabolic activity and
molecular marker expression information from PET. However, existing artificial
intelligence-driven PET/CT analyses rely predominantly on task-specific models
trained from scratch or on limited datasets, limiting their generalizability
and robustness. To address this, we propose a foundation model approach
specifically designed for multimodal PET/CT imaging. We introduce the
Cross-Fraternal Twin Masked Autoencoder (FratMAE), a novel framework that
effectively integrates whole-body anatomical and functional or molecular
information. FratMAE employs separate Vision Transformer (ViT) encoders for PET
and CT scans, along with cross-attention decoders that enable synergistic
interactions between modalities during masked autoencoder training.
Additionally, it incorporates textual metadata to enhance PET representation
learning. By pre-training on PET/CT datasets, FratMAE captures intricate
cross-modal relationships and global uptake patterns, achieving superior
performance on downstream tasks and demonstrating its potential as a
generalizable foundation model.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 17:49:07 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Oh",
"Yujin",
""
],
[
"Seifert",
"Robert",
""
],
[
"Cao",
"Yihan",
""
],
[
"Clement",
"Christoph",
""
],
[
"Ferdinandus",
"Justin",
""
],
[
"Lapa",
"Constantin",
""
],
[
"Liebich",
"Alessandro",
""
],
[
"Amon",
"Michelle",
""
],
[
"Enke",
"Johanna",
""
],
[
"Song",
"Sifan",
""
],
[
"Meng",
"Runqi",
""
],
[
"Zeng",
"Fang",
""
],
[
"Guo",
"Ning",
""
],
[
"Li",
"Xiang",
""
],
[
"Heidari",
"Pedram",
""
],
[
"Rominger",
"Axel",
""
],
[
"Shi",
"Kuangyu",
""
],
[
"Li",
"Quanzheng",
""
]
]
| TITLE: Developing a PET/CT Foundation Model for Cross-Modal Anatomical and
Functional Imaging
ABSTRACT: In oncology, Positron Emission Tomography-Computed Tomography (PET/CT) is
widely used in cancer diagnosis, staging, and treatment monitoring, as it
combines anatomical details from CT with functional metabolic activity and
molecular marker expression information from PET. However, existing artificial
intelligence-driven PET/CT analyses rely predominantly on task-specific models
trained from scratch or on limited datasets, limiting their generalizability
and robustness. To address this, we propose a foundation model approach
specifically designed for multimodal PET/CT imaging. We introduce the
Cross-Fraternal Twin Masked Autoencoder (FratMAE), a novel framework that
effectively integrates whole-body anatomical and functional or molecular
information. FratMAE employs separate Vision Transformer (ViT) encoders for PET
and CT scans, along with cross-attention decoders that enable synergistic
interactions between modalities during masked autoencoder training.
Additionally, it incorporates textual metadata to enhance PET representation
learning. By pre-training on PET/CT datasets, FratMAE captures intricate
cross-modal relationships and global uptake patterns, achieving superior
performance on downstream tasks and demonstrating its potential as a
generalizable foundation model.
| no_new_dataset | 0.948632 |
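A hedged sketch of the dual-encoder, cross-attention arrangement the FratMAE record above describes: one transformer encoder per modality, with PET tokens cross-attending to CT tokens. Token counts, dimensions, and layer settings are illustrative assumptions, not the paper's configuration.

```python
# Sketch: separate encoders for PET and CT, fused by cross-attention.
import torch
import torch.nn as nn

dim = 256
pet_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, 8, batch_first=True), num_layers=4)
ct_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, 8, batch_first=True), num_layers=4)
cross_attn = nn.MultiheadAttention(dim, 8, batch_first=True)

pet_tokens = pet_encoder(torch.randn(2, 196, dim))   # visible PET patches
ct_tokens = ct_encoder(torch.randn(2, 196, dim))     # visible CT patches
fused, _ = cross_attn(query=pet_tokens, key=ct_tokens, value=ct_tokens)
# `fused` would feed a masked-patch reconstruction head during pre-training.
```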
2503.02846 | Yuzhe Gu | Yuzhe Gu, Wenwei Zhang, Chengqi Lyu, Dahua Lin, Kai Chen | Mask-DPO: Generalizable Fine-grained Factuality Alignment of LLMs | Accepted by ICLR 2025. Code is available at
https://github.com/open-compass/ANAH | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) exhibit hallucinations (i.e., unfaithful or
nonsensical information) when serving as AI assistants in various domains.
Since hallucinations always come with truthful content in the LLM responses,
previous factuality alignment methods that conduct response-level preference
learning inevitably introduced noises during training. Therefore, this paper
proposes a fine-grained factuality alignment method based on Direct Preference
Optimization (DPO), called Mask-DPO. Incorporating sentence-level factuality as
mask signals, Mask-DPO only learns from factually correct sentences in the
preferred samples and avoids penalizing factual content in the non-preferred
samples, which resolves the ambiguity in preference learning.
Extensive experimental results demonstrate that Mask-DPO can significantly
improve the factuality of LLMs responses to questions from both in-domain and
out-of-domain datasets, although these questions and their corresponding topics
are unseen during training. Only trained on the ANAH train set, the score of
Llama3.1-8B-Instruct on the ANAH test set is improved from 49.19% to 77.53%,
even surpassing the score of Llama3.1-70B-Instruct (53.44%), while its
FactScore on the out-of-domain Biography dataset is also improved from 30.29%
to 39.39%. We further study the generalization property of Mask-DPO using
different training sample scaling strategies and find that scaling the number
of topics in the dataset is more effective than the number of questions. We
provide a hypothesis of what factual alignment is doing with LLMs, on the
implication of this phenomenon, and conduct proof-of-concept experiments to
verify it. We hope the method and the findings pave the way for future research
on scaling factuality alignment.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 18:20:24 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Gu",
"Yuzhe",
""
],
[
"Zhang",
"Wenwei",
""
],
[
"Lyu",
"Chengqi",
""
],
[
"Lin",
"Dahua",
""
],
[
"Chen",
"Kai",
""
]
]
| TITLE: Mask-DPO: Generalizable Fine-grained Factuality Alignment of LLMs
ABSTRACT: Large language models (LLMs) exhibit hallucinations (i.e., unfaithful or
nonsensical information) when serving as AI assistants in various domains.
Since hallucinations always come with truthful content in the LLM responses,
previous factuality alignment methods that conduct response-level preference
learning inevitably introduce noise during training. Therefore, this paper
proposes a fine-grained factuality alignment method based on Direct Preference
Optimization (DPO), called Mask-DPO. Incorporating sentence-level factuality as
mask signals, Mask-DPO only learns from factually correct sentences in the
preferred samples and avoids penalizing factual content in the non-preferred
samples, which resolves the ambiguity in preference learning.
Extensive experimental results demonstrate that Mask-DPO can significantly
improve the factuality of LLM responses to questions from both in-domain and
out-of-domain datasets, although these questions and their corresponding topics
are unseen during training. Only trained on the ANAH train set, the score of
Llama3.1-8B-Instruct on the ANAH test set is improved from 49.19% to 77.53%,
even surpassing the score of Llama3.1-70B-Instruct (53.44%), while its
FactScore on the out-of-domain Biography dataset is also improved from 30.29%
to 39.39%. We further study the generalization property of Mask-DPO using
different training sample scaling strategies and find that scaling the number
of topics in the dataset is more effective than the number of questions. We
provide a hypothesis of what factual alignment is doing with LLMs, on the
implication of this phenomenon, and conduct proof-of-concept experiments to
verify it. We hope the method and the findings pave the way for future research
on scaling factuality alignment.
| no_new_dataset | 0.950319 |
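A hedged sketch of the sentence-masking idea in the Mask-DPO record above: per-token DPO log-ratios are gated by factuality masks so that only factually correct spans of the preferred response, and only non-factual spans of the dispreferred response, contribute to the loss. Shapes, the beta value, and the exact gating are illustrative assumptions.

```python
# Sketch: DPO loss with sentence-level factuality masks (1 = keep, 0 = drop).
import torch
import torch.nn.functional as F

def mask_dpo_loss(logp_chosen, logp_ref_chosen, mask_chosen,
                  logp_rejected, logp_ref_rejected, mask_rejected, beta=0.1):
    # logp_*: (batch, seq) per-token log-probs; mask_*: (batch, seq) in {0, 1}.
    # mask_chosen keeps factual sentences of the preferred response;
    # mask_rejected keeps non-factual sentences of the dispreferred response.
    chosen = ((logp_chosen - logp_ref_chosen) * mask_chosen).sum(-1)
    rejected = ((logp_rejected - logp_ref_rejected) * mask_rejected).sum(-1)
    return -F.logsigmoid(beta * (chosen - rejected)).mean()
```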
2503.02853 | Luis Marquez-Carpintero | Luis Marquez-Carpintero, Sergio Suescun-Ferrandiz, Monica
Pina-Navarro, Miguel Cazorla, Francisco Gomez-Donoso | CADDI: An in-Class Activity Detection Dataset using IMU data from
low-cost sensors | null | null | null | null | cs.CV cs.HC | http://creativecommons.org/licenses/by/4.0/ | The monitoring and prediction of in-class student activities is of paramount
importance for the comprehension of engagement and the enhancement of
pedagogical efficacy. The accurate detection of these activities enables
educators to modify their lessons in real time, thereby reducing negative
emotional states and enhancing the overall learning experience. To this end,
the use of non-intrusive devices, such as inertial measurement units (IMUs)
embedded in smartwatches, represents a viable solution. The development of
reliable predictive systems has been limited by the lack of large, labeled
datasets in education. To bridge this gap, we present a novel dataset for
in-class activity detection using affordable IMU sensors. The dataset comprises
19 diverse activities, both instantaneous and continuous, performed by 12
participants in typical classroom scenarios. It includes accelerometer,
gyroscope, rotation vector data, and synchronized stereo images, offering a
comprehensive resource for developing multimodal algorithms using sensor and
visual data. This dataset represents a key step toward scalable solutions for
activity recognition in educational settings.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 18:29:57 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Marquez-Carpintero",
"Luis",
""
],
[
"Suescun-Ferrandiz",
"Sergio",
""
],
[
"Pina-Navarro",
"Monica",
""
],
[
"Cazorla",
"Miguel",
""
],
[
"Gomez-Donoso",
"Francisco",
""
]
]
| TITLE: CADDI: An in-Class Activity Detection Dataset using IMU data from
low-cost sensors
ABSTRACT: The monitoring and prediction of in-class student activities is of paramount
importance for the comprehension of engagement and the enhancement of
pedagogical efficacy. The accurate detection of these activities enables
educators to modify their lessons in real time, thereby reducing negative
emotional states and enhancing the overall learning experience. To this end,
the use of non-intrusive devices, such as inertial measurement units (IMUs)
embedded in smartwatches, represents a viable solution. The development of
reliable predictive systems has been limited by the lack of large, labeled
datasets in education. To bridge this gap, we present a novel dataset for
in-class activity detection using affordable IMU sensors. The dataset comprises
19 diverse activities, both instantaneous and continuous, performed by 12
participants in typical classroom scenarios. It includes accelerometer,
gyroscope, rotation vector data, and synchronized stereo images, offering a
comprehensive resource for developing multimodal algorithms using sensor and
visual data. This dataset represents a key step toward scalable solutions for
activity recognition in educational settings.
| new_dataset | 0.965053 |
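As a usage illustration, the sketch below segments a continuous IMU stream into fixed-length windows with majority-vote labels, a common preprocessing step for activity-recognition datasets of this kind. The 9-channel layout, sampling window, and stride are assumptions, not specifications of the CADDI release.

```python
# Sliding-window segmentation for IMU activity recognition (illustrative).
import numpy as np

def make_windows(signal, labels, win=100, stride=50):
    """signal: (T, C) array of accel/gyro/rotation channels; labels: (T,) ints."""
    xs, ys = [], []
    for start in range(0, len(signal) - win + 1, stride):
        xs.append(signal[start:start + win])
        # Majority label within the window, so instantaneous events still count.
        ys.append(np.bincount(labels[start:start + win]).argmax())
    return np.stack(xs), np.array(ys)

# Synthetic example: 9 channels (3-axis accel, gyro, rotation vector), 19 classes.
sig = np.random.randn(1000, 9)
lab = np.random.randint(0, 19, size=1000)
X, y = make_windows(sig, lab)
print(X.shape, y.shape)  # (19, 100, 9) (19,)
```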
2503.02862 | Hong Guan | Hong Guan, Lei Yu, Lixi Zhou, Li Xiong, Kanchan Chowdhury, Lulu Xie,
Xusheng Xiao, Jia Zou | Privacy and Accuracy-Aware AI/ML Model Deduplication | null | null | null | null | cs.CR cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the growing adoption of privacy-preserving machine learning algorithms,
such as Differentially Private Stochastic Gradient Descent (DP-SGD), training
or fine-tuning models on private datasets has become increasingly prevalent.
This shift has led to the need for models offering varying privacy guarantees
and utility levels to satisfy diverse user requirements. However, managing
numerous versions of large models introduces significant operational
challenges, including increased inference latency, higher resource consumption,
and elevated costs. Model deduplication is a technique widely used by many
model serving and database systems to support high-performance and low-cost
inference queries and model diagnosis queries. However, none of the existing
model deduplication works has considered privacy, leading to unbounded
aggregation of privacy costs for certain deduplicated models and inefficiencies
when applied to deduplicate DP-trained models. We formalize the problems of
deduplicating DP-trained models for the first time and propose a novel privacy-
and accuracy-aware deduplication mechanism to address the problems. We
developed a greedy strategy to select and assign base models to target models
to minimize storage and privacy costs. When deduplicating a target model, we
dynamically schedule accuracy validations and apply the Sparse Vector Technique
to reduce the privacy costs associated with private validation data. Compared
to baselines that do not provide privacy guarantees, our approach improved the
compression ratio by up to $35\times$ for individual models (including large
language models and vision transformers). We also observed up to $43\times$
inference speedup due to the reduction of I/O operations.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 18:40:38 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Guan",
"Hong",
""
],
[
"Yu",
"Lei",
""
],
[
"Zhou",
"Lixi",
""
],
[
"Xiong",
"Li",
""
],
[
"Chowdhury",
"Kanchan",
""
],
[
"Xie",
"Lulu",
""
],
[
"Xiao",
"Xusheng",
""
],
[
"Zou",
"Jia",
""
]
]
| TITLE: Privacy and Accuracy-Aware AI/ML Model Deduplication
ABSTRACT: With the growing adoption of privacy-preserving machine learning algorithms,
such as Differentially Private Stochastic Gradient Descent (DP-SGD), training
or fine-tuning models on private datasets has become increasingly prevalent.
This shift has led to the need for models offering varying privacy guarantees
and utility levels to satisfy diverse user requirements. However, managing
numerous versions of large models introduces significant operational
challenges, including increased inference latency, higher resource consumption,
and elevated costs. Model deduplication is a technique widely used by many
model serving and database systems to support high-performance and low-cost
inference queries and model diagnosis queries. However, none of the existing
model deduplication works has considered privacy, leading to unbounded
aggregation of privacy costs for certain deduplicated models and inefficiencies
when applied to deduplicate DP-trained models. We formalize the problems of
deduplicating DP-trained models for the first time and propose a novel privacy-
and accuracy-aware deduplication mechanism to address the problems. We
developed a greedy strategy to select and assign base models to target models
to minimize storage and privacy costs. When deduplicating a target model, we
dynamically schedule accuracy validations and apply the Sparse Vector Technique
to reduce the privacy costs associated with private validation data. Compared
to baselines that do not provide privacy guarantees, our approach improved the
compression ratio by up to $35\times$ for individual models (including large
language models and vision transformers). We also observed up to $43\times$
inference speedup due to the reduction of I/O operations.
| no_new_dataset | 0.9463 |
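The Sparse Vector Technique mentioned above can be summarized in a few lines; the sketch below implements the classic AboveThreshold variant for sensitivity-1 queries. The budget split between threshold noise and per-query noise is one standard choice, not necessarily the paper's.

```python
# Sparse Vector Technique (AboveThreshold), sketched for sensitivity-1 queries.
import numpy as np

def sparse_vector(queries, threshold, epsilon, c=1):
    """Report which queries exceed `threshold`, halting after `c` positives;
    in SVT, only positive answers consume meaningful privacy budget."""
    rho = np.random.laplace(scale=2.0 / epsilon)        # noisy threshold
    answers, positives = [], 0
    for q in queries:
        nu = np.random.laplace(scale=4.0 * c / epsilon)  # fresh per-query noise
        if q + nu >= threshold + rho:
            answers.append(True)
            positives += 1
            if positives >= c:
                break
        else:
            answers.append(False)
    return answers

# E.g., validation accuracies checked against a target of 0.8.
print(sparse_vector([0.71, 0.74, 0.82, 0.90], threshold=0.8, epsilon=1.0))
```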
1909.11957 | Haoran You | Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan
Chen, Richard G. Baraniuk, Zhangyang Wang, and Yingyan Celine Lin | Drawing Early-Bird Tickets: Towards More Efficient Training of Deep
Networks | Accepted as ICLR2020 Spotlight | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | (Frankle & Carbin, 2019) shows that there exist winning tickets (small but
critical subnetworks) for dense, randomly initialized networks, that can be
trained alone to achieve comparable accuracies to the latter in a similar
number of iterations. However, the identification of these winning tickets
still requires the costly train-prune-retrain process, limiting their practical
benefits. In this paper, we discover for the first time that the winning
tickets can be identified at the very early training stage, which we term as
early-bird (EB) tickets, via low-cost training schemes (e.g., early stopping
and low-precision training) at large learning rates. Our finding of EB tickets
is consistent with recently reported observations that the key connectivity
patterns of neural networks emerge early. Furthermore, we propose a mask
distance metric that can be used to identify EB tickets with low computational
overhead, without needing to know the true winning tickets that emerge after
the full training. Finally, we leverage the existence of EB tickets and the
proposed mask distance to develop efficient training methods, which are
achieved by first identifying EB tickets via low-cost schemes, and then
continuing to train merely the EB tickets towards the target accuracy.
Experiments based on various deep networks and datasets validate: 1) the
existence of EB tickets, and the effectiveness of mask distance in efficiently
identifying them; and 2) that the proposed efficient training via EB tickets
can achieve up to 4.7x energy savings while maintaining comparable or even
better accuracy, demonstrating a promising and easily adopted method for
tackling cost-prohibitive deep network training. Code available at
https://github.com/RICE-EIC/Early-Bird-Tickets.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2019 07:43:56 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Feb 2020 05:44:12 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Feb 2020 21:21:44 GMT"
},
{
"version": "v4",
"created": "Fri, 7 Aug 2020 06:12:58 GMT"
},
{
"version": "v5",
"created": "Wed, 16 Feb 2022 22:55:00 GMT"
},
{
"version": "v6",
"created": "Mon, 3 Mar 2025 17:04:08 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"You",
"Haoran",
""
],
[
"Li",
"Chaojian",
""
],
[
"Xu",
"Pengfei",
""
],
[
"Fu",
"Yonggan",
""
],
[
"Wang",
"Yue",
""
],
[
"Chen",
"Xiaohan",
""
],
[
"Baraniuk",
"Richard G.",
""
],
[
"Wang",
"Zhangyang",
""
],
[
"Lin",
"Yingyan Celine",
""
]
]
| TITLE: Drawing Early-Bird Tickets: Towards More Efficient Training of Deep
Networks
ABSTRACT: (Frankle & Carbin, 2019) shows that there exist winning tickets (small but
critical subnetworks) for dense, randomly initialized networks, that can be
trained alone to achieve comparable accuracies to the latter in a similar
number of iterations. However, the identification of these winning tickets
still requires the costly train-prune-retrain process, limiting their practical
benefits. In this paper, we discover for the first time that the winning
tickets can be identified at the very early training stage, which we term as
early-bird (EB) tickets, via low-cost training schemes (e.g., early stopping
and low-precision training) at large learning rates. Our finding of EB tickets
is consistent with recently reported observations that the key connectivity
patterns of neural networks emerge early. Furthermore, we propose a mask
distance metric that can be used to identify EB tickets with low computational
overhead, without needing to know the true winning tickets that emerge after
the full training. Finally, we leverage the existence of EB tickets and the
proposed mask distance to develop efficient training methods, which are
achieved by first identifying EB tickets via low-cost schemes, and then
continuing to train merely the EB tickets towards the target accuracy.
Experiments based on various deep networks and datasets validate: 1) the
existence of EB tickets, and the effectiveness of mask distance in efficiently
identifying them; and 2) that the proposed efficient training via EB tickets
can achieve up to 4.7x energy savings while maintaining comparable or even
better accuracy, demonstrating a promising and easily adopted method for
tackling cost-prohibitive deep network training. Code available at
https://github.com/RICE-EIC/Early-Bird-Tickets.
| no_new_dataset | 0.947186 |
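The mask-distance idea lends itself to a compact illustration: draw binary pruning masks at consecutive epochs and measure how much they disagree. The global magnitude pruner and the rough emergence threshold below are illustrative assumptions, not the paper's exact settings.

```python
# Mask distance between pruning masks from consecutive training epochs.
import torch

def prune_mask(weights, sparsity=0.5):
    """Binary keep-mask from global magnitude pruning."""
    flat = weights.abs().flatten()
    k = int(len(flat) * sparsity)
    thresh = flat.kthvalue(k).values
    return (weights.abs() > thresh).float()

def mask_distance(m1, m2):
    """Normalized Hamming distance between two binary masks."""
    return (m1 != m2).float().mean().item()

w_epoch3 = torch.randn(256, 256)
w_epoch4 = w_epoch3 + 0.01 * torch.randn(256, 256)  # small training update
d = mask_distance(prune_mask(w_epoch3), prune_mask(w_epoch4))
print(f"mask distance: {d:.4f}")  # an EB ticket emerges once d stays small
```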
2004.12571 | Xinjian Luo | Xinjian Luo, Xianglong Zhang | Exploiting Defenses against GAN-Based Feature Inference Attacks in
Federated Learning | null | null | 10.1145/3719350 | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated learning (FL) is a decentralized model training framework that aims
to merge isolated data islands while maintaining data privacy. However, recent
studies have revealed that Generative Adversarial Network (GAN) based attacks
can be employed in FL to learn the distribution of private datasets and
reconstruct recognizable images. In this paper, we exploit defenses against
GAN-based attacks in FL and propose a framework, Anti-GAN, to prevent attackers
from learning the real distribution of the victim's data. The core idea of
Anti-GAN is to manipulate the visual features of private training images to
make them indistinguishable to human eyes even when restored by attackers.
Specifically, Anti-GAN projects the private dataset onto a GAN's generator and
combines the generated fake images with the actual images to create the
training dataset, which is then used for federated model training. The
experimental results demonstrate that Anti-GAN is effective in preventing
attackers from learning the distribution of private images while causing
minimal harm to the accuracy of the federated model.
| [
{
"version": "v1",
"created": "Mon, 27 Apr 2020 03:45:48 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Aug 2021 09:22:30 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Aug 2024 14:11:18 GMT"
},
{
"version": "v4",
"created": "Sun, 16 Feb 2025 12:05:54 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Luo",
"Xinjian",
""
],
[
"Zhang",
"Xianglong",
""
]
]
| TITLE: Exploiting Defenses against GAN-Based Feature Inference Attacks in
Federated Learning
ABSTRACT: Federated learning (FL) is a decentralized model training framework that aims
to merge isolated data islands while maintaining data privacy. However, recent
studies have revealed that Generative Adversarial Network (GAN) based attacks
can be employed in FL to learn the distribution of private datasets and
reconstruct recognizable images. In this paper, we exploit defenses against
GAN-based attacks in FL and propose a framework, Anti-GAN, to prevent attackers
from learning the real distribution of the victim's data. The core idea of
Anti-GAN is to manipulate the visual features of private training images to
make them indistinguishable to human eyes even when restored by attackers.
Specifically, Anti-GAN projects the private dataset onto a GAN's generator and
combines the generated fake images with the actual images to create the
training dataset, which is then used for federated model training. The
experimental results demonstrate that Anti-GAN is effective in preventing
attackers from learning the distribution of private images while causing
minimal harm to the accuracy of the federated model.
| no_new_dataset | 0.941975 |
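A toy sketch of the training-set construction step is given below: blend generator outputs with real images before local training. The pixel-space blending, dummy generator, and blend ratio are simplifying assumptions rather than the paper's exact projection procedure.

```python
# Blending GAN-generated fakes with real images before federated training.
import torch
import torch.nn as nn

def build_protected_batch(real, generator, z_dim=128, alpha=0.5):
    """Mix each real image with a generated fake to obscure visual features."""
    z = torch.randn(real.size(0), z_dim, device=real.device)
    fake = generator(z).view_as(real)
    return alpha * real + (1 - alpha) * fake  # images used for local FL updates

# Dummy stand-in generator just to make the sketch executable.
gen = nn.Sequential(nn.Linear(128, 3 * 32 * 32), nn.Tanh())
batch = torch.rand(8, 3, 32, 32)
protected = build_protected_batch(batch, gen)
print(protected.shape)  # torch.Size([8, 3, 32, 32])
```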
2103.00794 | Haoran You | Haoran You, Zhihan Lu, Zijian Zhou, Yonggan Fu, Yingyan Celine Lin | Early-Bird GCNs: Graph-Network Co-Optimization Towards More Efficient
GCN Training and Inference via Drawing Early-Bird Lottery Tickets | Accepted by AAAI 2022 | null | null | null | cs.LG cs.CV cs.SI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art deep
learning model for representation learning on graphs. However, it remains
notoriously challenging to train and run inference with GCNs over large graph datasets,
limiting their application to large real-world graphs and hindering the
exploration of deeper and more sophisticated GCN graphs. This is because as the
graph size grows, the sheer number of node features and the large adjacency
matrix can easily explode the required memory and data movements. To tackle the
aforementioned challenges, we explore the possibility of drawing lottery
tickets when sparsifying GCN graphs, i.e., subgraphs that largely shrink the
adjacency matrix yet are capable of achieving accuracy comparable to or even
better than their full graphs. Specifically, we for the first time discover the
existence of graph early-bird (GEB) tickets that emerge at the very early stage
when sparsifying GCN graphs, and propose a simple yet effective detector to
automatically identify the emergence of such GEB tickets. Furthermore, we
advocate graph-model co-optimization and develop a generic efficient GCN
early-bird training framework dubbed GEBT that can significantly boost the
efficiency of GCN training by (1) drawing joint early-bird tickets between the
GCN graphs and models and (2) enabling simultaneous sparsification of both
the GCN graphs and models. Experiments on various GCN models and datasets
consistently validate our GEB finding and the effectiveness of our GEBT, e.g.,
our GEBT achieves up to 80.2% ~ 85.6% and 84.6% ~ 87.5% savings of GCN training
and inference costs while offering a comparable or even better accuracy as
compared to state-of-the-art methods. Our source code and supplementary
appendix are available at https://github.com/RICE-EIC/Early-Bird-GCN.
| [
{
"version": "v1",
"created": "Mon, 1 Mar 2021 06:36:24 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Dec 2021 05:32:06 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 17:05:17 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"You",
"Haoran",
""
],
[
"Lu",
"Zhihan",
""
],
[
"Zhou",
"Zijian",
""
],
[
"Fu",
"Yonggan",
""
],
[
"Lin",
"Yingyan Celine",
""
]
]
| TITLE: Early-Bird GCNs: Graph-Network Co-Optimization Towards More Efficient
GCN Training and Inference via Drawing Early-Bird Lottery Tickets
ABSTRACT: Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art deep
learning model for representation learning on graphs. However, it remains
notoriously challenging to train and run inference with GCNs over large graph datasets,
limiting their application to large real-world graphs and hindering the
exploration of deeper and more sophisticated GCN graphs. This is because as the
graph size grows, the sheer number of node features and the large adjacency
matrix can easily explode the required memory and data movements. To tackle the
aforementioned challenges, we explore the possibility of drawing lottery
tickets when sparsifying GCN graphs, i.e., subgraphs that largely shrink the
adjacency matrix yet are capable of achieving accuracy comparable to or even
better than their full graphs. Specifically, we for the first time discover the
existence of graph early-bird (GEB) tickets that emerge at the very early stage
when sparsifying GCN graphs, and propose a simple yet effective detector to
automatically identify the emergence of such GEB tickets. Furthermore, we
advocate graph-model co-optimization and develop a generic efficient GCN
early-bird training framework dubbed GEBT that can significantly boost the
efficiency of GCN training by (1) drawing joint early-bird tickets between the
GCN graphs and models and (2) enabling simultaneous sparsification of both
the GCN graphs and models. Experiments on various GCN models and datasets
consistently validate our GEB finding and the effectiveness of our GEBT, e.g.,
our GEBT achieves up to 80.2% ~ 85.6% and 84.6% ~ 87.5% savings of GCN training
and inference costs while offering a comparable or even better accuracy as
compared to state-of-the-art methods. Our source code and supplementary
appendix are available at https://github.com/RICE-EIC/Early-Bird-GCN.
| no_new_dataset | 0.948728 |
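The GEB detector can be pictured as the mask-distance idea applied to edges: sparsify the weighted adjacency by magnitude and declare a ticket once the kept-edge set stabilizes. The thresholds below are illustrative, and the paper's detector may differ in detail.

```python
# Edge-mask stability as a graph early-bird signal (illustrative).
import torch

def edge_mask(adj_weights, keep_ratio=0.3):
    """Keep the top fraction of edges by weight magnitude."""
    flat = adj_weights.abs().flatten()
    k = max(1, int(len(flat) * (1 - keep_ratio)))
    thresh = flat.kthvalue(k).values
    return (adj_weights.abs() > thresh).float()

def geb_emerged(masks, eps=0.05):
    """Declare a GEB ticket when consecutive masks differ by less than eps."""
    dists = [(m1 != m2).float().mean() for m1, m2 in zip(masks, masks[1:])]
    return all(d < eps for d in dists)

A = torch.rand(100, 100)
masks = [edge_mask(A + 0.01 * torch.rand(100, 100)) for _ in range(3)]
print(geb_emerged(masks))
```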
2109.12887 | Yule Wang | Yule Wang, Xin Xin, Yue Ding, Yunzhe Li, Dong Wang | ICPE: An Item Cluster-Wise Pareto-Efficient Framework for Recommendation
Debiasing | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems based on historical user-item interactions are of vital
importance for web-based services. However, the observed data used to train the
recommender model suffers from severe bias issues. Practically, the item
frequency distribution of the dataset is a highly skewed power-law
distribution. Interactions of a small fraction of head items account for almost
the whole training data. The normal training paradigm from such biased data
tends to repetitively generate recommendations from the head items, which
further exacerbates the biases and affects the exploration of potentially
interesting items from the niche set. In this work, we innovatively explore the
central theme of recommendation debiasing from an item cluster-wise
multi-objective optimization perspective. Aiming to balance the learning on
various item clusters that differ in popularity during the training process, we
propose a model-agnostic framework namely Item Cluster-Wise Pareto-Efficient
Recommendation (ICPE). In detail, we define our item cluster-wise optimization
target as requiring the recommender model to balance all item clusters that
differ in popularity; thus, we set the model learning on each item cluster as a
unique
optimization objective. To achieve this goal, we first explore items'
popularity levels from a novel causal reasoning perspective. Then, we devise
popularity discrepancy-based bisecting clustering to separate the item
clusters. Next, we adaptively find the overall harmonious gradient direction
for cluster-wise optimization objectives from a Pareto-efficient solver.
Finally, in the prediction stage, we perform counterfactual inference to
further eliminate the impact of global propensity. Extensive experimental
results verify the superiority of ICPE in overall recommendation performance
and bias elimination.
| [
{
"version": "v1",
"created": "Mon, 27 Sep 2021 09:17:53 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Dec 2021 03:39:29 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Oct 2022 03:36:56 GMT"
},
{
"version": "v4",
"created": "Sun, 23 Jul 2023 01:30:10 GMT"
},
{
"version": "v5",
"created": "Sat, 1 Mar 2025 22:46:43 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wang",
"Yule",
""
],
[
"Xin",
"Xin",
""
],
[
"Ding",
"Yue",
""
],
[
"Li",
"Yunzhe",
""
],
[
"Wang",
"Dong",
""
]
]
| TITLE: ICPE: An Item Cluster-Wise Pareto-Efficient Framework for Recommendation
Debiasing
ABSTRACT: Recommender systems based on historical user-item interactions are of vital
importance for web-based services. However, the observed data used to train the
recommender model suffers from severe bias issues. Practically, the item
frequency distribution of the dataset is a highly skewed power-law
distribution. Interactions of a small fraction of head items account for almost
the whole training data. The normal training paradigm from such biased data
tends to repetitively generate recommendations from the head items, which
further exacerbates the biases and affects the exploration of potentially
interesting items from the niche set. In this work, we innovatively explore the
central theme of recommendation debiasing from an item cluster-wise
multi-objective optimization perspective. Aiming to balance the learning on
various item clusters that differ in popularity during the training process, we
propose a model-agnostic framework namely Item Cluster-Wise Pareto-Efficient
Recommendation (ICPE). In detail, we define our item cluster-wise optimization
target as requiring the recommender model to balance all item clusters that
differ in popularity; thus, we set the model learning on each item cluster as a
unique
optimization objective. To achieve this goal, we first explore items'
popularity levels from a novel causal reasoning perspective. Then, we devise
popularity discrepancy-based bisecting clustering to separate the item
clusters. Next, we adaptively find the overall harmonious gradient direction
for cluster-wise optimization objectives from a Pareto-efficient solver.
Finally, in the prediction stage, we perform counterfactual inference to
further eliminate the impact of global propensity. Extensive experimental
results verify the superiority of ICPE in overall recommendation performance
and bias elimination.
| no_new_dataset | 0.949809 |
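For the two-cluster case, the min-norm (MGDA-style) convex combination of cluster gradients has a closed form, sketched below; ICPE's actual solver handles many clusters, which this snippet does not reproduce.

```python
# Closed-form min-norm combination of two task gradients (two-cluster case).
import torch

def pareto_combine(g1, g2):
    """Min-norm point on the segment between g1 and g2: a descent direction
    that does not sacrifice either objective."""
    diff = g2 - g1
    gamma = torch.clamp((g2 * diff).sum() / (diff * diff).sum().clamp_min(1e-12),
                        0.0, 1.0)
    return gamma * g1 + (1 - gamma) * g2

g_head = torch.randn(1000)  # gradient from a popular-item cluster loss
g_tail = torch.randn(1000)  # gradient from a niche-item cluster loss
g = pareto_combine(g_head, g_tail)
print(g.norm() <= max(g_head.norm(), g_tail.norm()))  # min-norm property
```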
2201.13164 | Mingfu Xue | Mingfu Xue, Shifeng Ni, Yinghao Wu, Yushu Zhang, Jian Wang, Weiqiang
Liu | Imperceptible and Multi-channel Backdoor Attack against Deep Neural
Networks | null | Applied Intelligence, 2023 | 10.1007/s10489-023-05228-6 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent research demonstrates that Deep Neural Network (DNN) models are
vulnerable to backdoor attacks. The backdoored DNN model will behave
maliciously when images containing backdoor triggers arrive. To date, existing
backdoor attacks are single-trigger and single-target attacks, and the triggers
of most existing backdoor attacks are obvious thus are easy to be detected or
noticed. In this paper, we propose a novel imperceptible and multi-channel
backdoor attack against Deep Neural Networks by exploiting Discrete Cosine
Transform (DCT) steganography. Based on the proposed backdoor attack method, we
implement two variants of backdoor attacks, i.e., N-to-N backdoor attack and
N-to-One backdoor attack. Specifically, for a colored image, we utilize DCT
steganography to construct the trigger on different channels of the image. As a
result, the trigger is stealthy and natural. Based on the proposed method, we
implement multi-target and multi-trigger backdoor attacks. Experimental results
demonstrate that the average attack success rate of the N-to-N backdoor attack
is 93.95% on CIFAR-10 dataset and 91.55% on TinyImageNet dataset, respectively.
The average attack success rate of N-to-One attack is 90.22% and 89.53% on
CIFAR-10 and TinyImageNet datasets, respectively. Meanwhile, the proposed
backdoor attack does not affect the classification accuracy of the DNN model.
Moreover, the proposed attack is demonstrated to be robust to the
state-of-the-art backdoor defense (Neural Cleanse).
| [
{
"version": "v1",
"created": "Mon, 31 Jan 2022 12:19:28 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Xue",
"Mingfu",
""
],
[
"Ni",
"Shifeng",
""
],
[
"Wu",
"Yinghao",
""
],
[
"Zhang",
"Yushu",
""
],
[
"Wang",
"Jian",
""
],
[
"Liu",
"Weiqiang",
""
]
]
| TITLE: Imperceptible and Multi-channel Backdoor Attack against Deep Neural
Networks
ABSTRACT: Recent research demonstrates that Deep Neural Network (DNN) models are
vulnerable to backdoor attacks. The backdoored DNN model will behave
maliciously when images containing backdoor triggers arrive. To date, existing
backdoor attacks are single-trigger and single-target attacks, and the triggers
of most existing backdoor attacks are obvious and thus easy to detect or
notice. In this paper, we propose a novel imperceptible and multi-channel
backdoor attack against Deep Neural Networks by exploiting Discrete Cosine
Transform (DCT) steganography. Based on the proposed backdoor attack method, we
implement two variants of backdoor attacks, i.e., N-to-N backdoor attack and
N-to-One backdoor attack. Specifically, for a colored image, we utilize DCT
steganography to construct the trigger on different channels of the image. As a
result, the trigger is stealthy and natural. Based on the proposed method, we
implement multi-target and multi-trigger backdoor attacks. Experimental results
demonstrate that the average attack success rate of the N-to-N backdoor attack
is 93.95% on CIFAR-10 dataset and 91.55% on TinyImageNet dataset, respectively.
The average attack success rate of N-to-One attack is 90.22% and 89.53% on
CIFAR-10 and TinyImageNet datasets, respectively. Meanwhile, the proposed
backdoor attack does not affect the classification accuracy of the DNN model.
Moreover, the proposed attack is demonstrated to be robust to the
state-of-the-art backdoor defense (Neural Cleanse).
| no_new_dataset | 0.946399 |
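To make the DCT-domain idea concrete, the toy sketch below perturbs mid-frequency DCT coefficients of one color channel so the change stays imperceptible in pixel space. The coefficient positions and amplitude are illustrative, not the paper's exact steganographic scheme.

```python
# Hiding a trigger in the DCT domain of one color channel (toy example).
import numpy as np
from scipy.fft import dctn, idctn

def embed_dct_trigger(img, channel=0, coeffs=((10, 12), (12, 10)), strength=30.0):
    """img: (H, W, 3) float array in [0, 255]. Perturb mid-frequency DCT
    coefficients of one channel, then return to pixel space."""
    out = img.copy()
    spec = dctn(out[..., channel], norm="ortho")
    for (u, v) in coeffs:
        spec[u, v] += strength
    out[..., channel] = idctn(spec, norm="ortho")
    return np.clip(out, 0, 255)

x = np.random.rand(32, 32, 3) * 255
poisoned = embed_dct_trigger(x)
print(np.abs(poisoned - x).max())  # small per-pixel change on a 0-255 scale
```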
2205.08119 | Haoran You | Haoran You, Baopu Li, Huihong Shi, Yonggan Fu, Yingyan Celine Lin | ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient
Neural Networks | Accepted by ICML 2022 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Neural networks (NNs) with intensive multiplications (e.g., convolutions and
transformers) are capable yet power hungry, impeding their more extensive
deployment into resource-constrained devices. As such, multiplication-free
networks, which follow a common practice in energy-efficient hardware
implementation to parameterize NNs with more efficient operators (e.g., bitwise
shifts and additions), have gained growing attention. However,
multiplication-free networks usually under-perform their vanilla counterparts
in terms of the achieved accuracy. To this end, this work advocates hybrid NNs
that consist of both powerful yet costly multiplications and efficient yet less
powerful operators for marrying the best of both worlds, and proposes
ShiftAddNAS, which can automatically search for more accurate and more
efficient NNs. Our ShiftAddNAS highlights two enablers. Specifically, it
integrates (1) the first hybrid search space that incorporates both
multiplication-based and multiplication-free operators for facilitating the
development of both accurate and efficient hybrid NNs; and (2) a novel weight
sharing strategy that enables effective weight sharing among different
operators that follow heterogeneous distributions (e.g., Gaussian for
convolutions vs. Laplacian for add operators) and simultaneously leads to a
largely reduced supernet size and much better searched networks. Extensive
experiments and ablation studies on various models, datasets, and tasks
consistently validate the efficacy of ShiftAddNAS, e.g., achieving up to a
+7.7% higher accuracy or a +4.9 better BLEU score compared to state-of-the-art
NN, while leading to up to 93% or 69% energy and latency savings, respectively.
Codes and pretrained models are available at
https://github.com/RICE-EIC/ShiftAddNAS.
| [
{
"version": "v1",
"created": "Tue, 17 May 2022 06:40:13 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jul 2022 07:18:29 GMT"
},
{
"version": "v3",
"created": "Thu, 18 Aug 2022 22:46:35 GMT"
},
{
"version": "v4",
"created": "Mon, 3 Mar 2025 17:00:47 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"You",
"Haoran",
""
],
[
"Li",
"Baopu",
""
],
[
"Shi",
"Huihong",
""
],
[
"Fu",
"Yonggan",
""
],
[
"Lin",
"Yingyan Celine",
""
]
]
| TITLE: ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient
Neural Networks
ABSTRACT: Neural networks (NNs) with intensive multiplications (e.g., convolutions and
transformers) are capable yet power hungry, impeding their more extensive
deployment into resource-constrained devices. As such, multiplication-free
networks, which follow a common practice in energy-efficient hardware
implementation to parameterize NNs with more efficient operators (e.g., bitwise
shifts and additions), have gained growing attention. However,
multiplication-free networks usually under-perform their vanilla counterparts
in terms of the achieved accuracy. To this end, this work advocates hybrid NNs
that consist of both powerful yet costly multiplications and efficient yet less
powerful operators for marrying the best of both worlds, and proposes
ShiftAddNAS, which can automatically search for more accurate and more
efficient NNs. Our ShiftAddNAS highlights two enablers. Specifically, it
integrates (1) the first hybrid search space that incorporates both
multiplication-based and multiplication-free operators for facilitating the
development of both accurate and efficient hybrid NNs; and (2) a novel weight
sharing strategy that enables effective weight sharing among different
operators that follow heterogeneous distributions (e.g., Gaussian for
convolutions vs. Laplacian for add operators) and simultaneously leads to a
largely reduced supernet size and much better searched networks. Extensive
experiments and ablation studies on various models, datasets, and tasks
consistently validate the efficacy of ShiftAddNAS, e.g., achieving up to a
+7.7% higher accuracy or a +4.9 better BLEU score compared to state-of-the-art
NN, while leading to up to 93% or 69% energy and latency savings, respectively.
Codes and pretrained models are available at
https://github.com/RICE-EIC/ShiftAddNAS.
| no_new_dataset | 0.951188 |
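The multiplication-free operator family is easy to demonstrate: round weights to signed powers of two so each multiply becomes a bit-shift plus add in hardware. The snippet below emulates this in floating point and only illustrates the operator, not ShiftAddNAS's searched hybrid architectures.

```python
# Emulated shift-based "multiplication-free" weights: sign * 2^integer_exponent.
import torch

def shift_quantize(w, min_exp=-8):
    """Round weights to signed powers of two; a hardware MAC with such weights
    reduces to a bit-shift and an add."""
    sign = torch.sign(w)
    exp = torch.clamp(torch.round(torch.log2(w.abs().clamp_min(2.0 ** min_exp))),
                      min=min_exp)
    return sign * (2.0 ** exp)

x = torch.randn(4, 16)
w = torch.randn(16, 8)
y = x @ shift_quantize(w)            # emulated shift-add matmul
print((x @ w - y).abs().mean())      # quantization error vs. exact multiply
```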
2207.03677 | Haoran You | Haoran You, Baopu Li, Zhanyi Sun, Xu Ouyang, Yingyan Celine Lin | SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via
Jointly Architecture Searching and Parameter Pruning | Accepted by ECCV 2022 | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Neural architecture search (NAS) has demonstrated amazing success in
searching for efficient deep neural networks (DNNs) from a given supernet. In
parallel, the lottery ticket hypothesis has shown that DNNs contain small
subnetworks that can be trained from scratch to achieve a comparable or higher
accuracy than original DNNs. As such, it is currently a common practice to
develop efficient DNNs via a pipeline of first search and then prune.
Nevertheless, doing so often requires a search-train-prune-retrain process and
thus prohibitive computational cost. In this paper, we discover for the first
time that both efficient DNNs and their lottery subnetworks (i.e., lottery
tickets) can be directly identified from a supernet, which we term as
SuperTickets, via a two-in-one training scheme with jointly architecture
searching and parameter pruning. Moreover, we develop a progressive and unified
SuperTickets identification strategy that allows the connectivity of
subnetworks to change during supernet training, achieving better accuracy and
efficiency trade-offs than conventional sparse training. Finally, we evaluate
whether such identified SuperTickets drawn from one task can transfer well to
other tasks, validating their potential of handling multiple tasks
simultaneously. Extensive experiments and ablation studies on three tasks and
four benchmark datasets validate that our proposed SuperTickets achieve better
accuracy-efficiency trade-offs than both typical NAS and pruning pipelines,
with or without retraining. Codes and pretrained models are
available at https://github.com/RICE-EIC/SuperTickets.
| [
{
"version": "v1",
"created": "Fri, 8 Jul 2022 03:44:34 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jul 2022 07:07:34 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Sep 2022 04:34:42 GMT"
},
{
"version": "v4",
"created": "Mon, 19 Dec 2022 03:06:16 GMT"
},
{
"version": "v5",
"created": "Mon, 3 Mar 2025 16:56:55 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"You",
"Haoran",
""
],
[
"Li",
"Baopu",
""
],
[
"Sun",
"Zhanyi",
""
],
[
"Ouyang",
"Xu",
""
],
[
"Lin",
"Yingyan Celine",
""
]
]
| TITLE: SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via
Jointly Architecture Searching and Parameter Pruning
ABSTRACT: Neural architecture search (NAS) has demonstrated amazing success in
searching for efficient deep neural networks (DNNs) from a given supernet. In
parallel, the lottery ticket hypothesis has shown that DNNs contain small
subnetworks that can be trained from scratch to achieve a comparable or higher
accuracy than original DNNs. As such, it is currently a common practice to
develop efficient DNNs via a pipeline of first search and then prune.
Nevertheless, doing so often requires a search-train-prune-retrain process and
thus prohibitive computational cost. In this paper, we discover for the first
time that both efficient DNNs and their lottery subnetworks (i.e., lottery
tickets) can be directly identified from a supernet, which we term as
SuperTickets, via a two-in-one training scheme with jointly architecture
searching and parameter pruning. Moreover, we develop a progressive and unified
SuperTickets identification strategy that allows the connectivity of
subnetworks to change during supernet training, achieving better accuracy and
efficiency trade-offs than conventional sparse training. Finally, we evaluate
whether such identified SuperTickets drawn from one task can transfer well to
other tasks, validating their potential of handling multiple tasks
simultaneously. Extensive experiments and ablation studies on three tasks and
four benchmark datasets validate that our proposed SuperTickets achieve better
accuracy-efficiency trade-offs than both typical NAS and pruning pipelines,
with or without retraining. Codes and pretrained models are
available at https://github.com/RICE-EIC/SuperTickets.
| no_new_dataset | 0.951908 |
2207.14776 | Farzad Khalvati | Khashayar Namdar, Matthias W. Wagner, Birgit B. Ertl-Wagner, Farzad
Khalvati | Open-radiomics: A Collection of Standardized Datasets and a Technical
Protocol for Reproducible Radiomics Machine Learning Pipelines | null | null | null | null | q-bio.QM cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: As an important branch of machine learning pipelines in medical
imaging, radiomics faces two major challenges, namely reproducibility and
accessibility. In this work, we introduce open-radiomics, a set of radiomics
datasets along with a comprehensive radiomics pipeline based on our proposed
technical protocol to improve the reproducibility of the results. Methods: We
curated large-scale radiomics datasets based on three open-source datasets;
BraTS 2020 for high-grade glioma (HGG) versus low-grade glioma (LGG)
classification and survival analysis, BraTS 2023 for O6-methylguanine-DNA
methyltransferase classification, and non-small cell lung cancer survival
analysis from the Cancer Imaging Archive. Using BraTS 2020 Magnetic Resonance
Imaging (MRI) dataset, we applied our protocol to 369 brain tumor patients (76
LGG, 293 HGG). Leveraging PyRadiomics for LGG vs. HGG classification, we
generated 288 datasets from 4 MRI sequences, 3 binWidths, 6 normalization
methods, and 4 tumor subregions. Random Forest classifiers were trained and
validated (60%,20%,20%) across 100 different data splits (28,800 test results),
evaluating Area Under the Receiver Operating Characteristic Curve (AUROC).
Results: Unlike binWidth and image normalization, tumor subregion and imaging
sequence significantly affected performance of the models. T1 contrast-enhanced
sequence and the union of Necrotic and the non-enhancing tumor core subregions
resulted in the highest AUROCs (average test AUROC 0.951, 95% confidence
interval of (0.949, 0.952)). Although several settings and data splits (28 out
of 28800) yielded test AUROC of 1, they were irreproducible. Conclusion: Our
experiments demonstrate that the sources of variability in radiomics pipelines
(e.g., tumor subregion) can have a significant impact on the results, which may
lead to superficial perfect performances that are irreproducible.
| [
{
"version": "v1",
"created": "Fri, 29 Jul 2022 16:37:46 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Oct 2023 18:41:44 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2025 19:37:42 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Namdar",
"Khashayar",
""
],
[
"Wagner",
"Matthias W.",
""
],
[
"Ertl-Wagner",
"Birgit B.",
""
],
[
"Khalvati",
"Farzad",
""
]
]
| TITLE: Open-radiomics: A Collection of Standardized Datasets and a Technical
Protocol for Reproducible Radiomics Machine Learning Pipelines
ABSTRACT: Background: As an important branch of machine learning pipelines in medical
imaging, radiomics faces two major challenges, namely reproducibility and
accessibility. In this work, we introduce open-radiomics, a set of radiomics
datasets along with a comprehensive radiomics pipeline based on our proposed
technical protocol to improve the reproducibility of the results. Methods: We
curated large-scale radiomics datasets based on three open-source datasets;
BraTS 2020 for high-grade glioma (HGG) versus low-grade glioma (LGG)
classification and survival analysis, BraTS 2023 for O6-methylguanine-DNA
methyltransferase classification, and non-small cell lung cancer survival
analysis from the Cancer Imaging Archive. Using BraTS 2020 Magnetic Resonance
Imaging (MRI) dataset, we applied our protocol to 369 brain tumor patients (76
LGG, 293 HGG). Leveraging PyRadiomics for LGG vs. HGG classification, we
generated 288 datasets from 4 MRI sequences, 3 binWidths, 6 normalization
methods, and 4 tumor subregions. Random Forest classifiers were trained and
validated (60%,20%,20%) across 100 different data splits (28,800 test results),
evaluating Area Under the Receiver Operating Characteristic Curve (AUROC).
Results: Unlike binWidth and image normalization, tumor subregion and imaging
sequence significantly affected performance of the models. T1 contrast-enhanced
sequence and the union of Necrotic and the non-enhancing tumor core subregions
resulted in the highest AUROCs (average test AUROC 0.951, 95% confidence
interval of (0.949, 0.952)). Although several settings and data splits (28 out
of 28800) yielded test AUROC of 1, they were irreproducible. Conclusion: Our
experiments demonstrate that the sources of variability in radiomics pipelines
(e.g., tumor subregion) can have a significant impact on the results, which may
lead to superficial perfect performances that are irreproducible.
| no_new_dataset | 0.953057 |
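Since the protocol is built on PyRadiomics, a minimal extraction call looks like the sketch below. The binWidth and normalization settings mirror knobs the protocol sweeps, while the file paths are placeholders for a co-registered MRI volume and a tumor-subregion mask.

```python
# Minimal PyRadiomics feature extraction (paths are placeholders).
from radiomics import featureextractor

settings = {
    "binWidth": 25,        # one of the binWidths swept in the protocol
    "normalize": True,     # one of the normalization choices
    "normalizeScale": 100,
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)

# image/mask: co-registered MRI volume and tumor-subregion segmentation.
features = extractor.execute("t1ce.nii.gz", "tumor_core_mask.nii.gz")
radiomic = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(len(radiomic), "features")
```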
2208.02820 | Yiming Li | Yiming Li, Linghui Zhu, Xiaojun Jia, Yang Bai, Yong Jiang, Shu-Tao
Xia, Xiaochun Cao, Kui Ren | MOVE: Effective and Harmless Ownership Verification via Embedded
External Features | This paper has been accepted by IEEE TPAMI 2025. It is the journal
extension of our conference paper in AAAI 2022
(https://ojs.aaai.org/index.php/AAAI/article/view/20036). 18 pages | null | null | null | cs.CR cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Currently, deep neural networks (DNNs) are widely adopted in different
applications. Despite its commercial value, training a well-performing DNN is
resource-consuming. Accordingly, the well-trained model is valuable
intellectual property for its owner. However, recent studies revealed the
threats of model stealing, where the adversaries can obtain a function-similar
copy of the victim model, even when they can only query the model. In this
paper, we propose an effective and harmless model ownership verification (MOVE)
to defend against different types of model stealing simultaneously, without
introducing new security risks. In general, we conduct the ownership
verification by verifying whether a suspicious model contains the knowledge of
defender-specified external features. Specifically, we embed the external
features by modifying a few training samples with style transfer. We then train
a meta-classifier to determine whether a model is stolen from the victim. This
approach is inspired by the understanding that the stolen models should contain
the knowledge of features learned by the victim model. In particular,
we develop our MOVE method under both white-box and black-box
settings and analyze its theoretical foundation to provide comprehensive model
protection. Extensive experiments on benchmark datasets verify the
effectiveness of our method and its resistance to potential adaptive attacks.
The codes for reproducing the main experiments of our method are available at
https://github.com/THUYimingLi/MOVE.
| [
{
"version": "v1",
"created": "Thu, 4 Aug 2022 02:22:29 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 13:14:11 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Li",
"Yiming",
""
],
[
"Zhu",
"Linghui",
""
],
[
"Jia",
"Xiaojun",
""
],
[
"Bai",
"Yang",
""
],
[
"Jiang",
"Yong",
""
],
[
"Xia",
"Shu-Tao",
""
],
[
"Cao",
"Xiaochun",
""
],
[
"Ren",
"Kui",
""
]
]
| TITLE: MOVE: Effective and Harmless Ownership Verification via Embedded
External Features
ABSTRACT: Currently, deep neural networks (DNNs) are widely adopted in different
applications. Despite its commercial value, training a well-performing DNN is
resource-consuming. Accordingly, the well-trained model is valuable
intellectual property for its owner. However, recent studies revealed the
threats of model stealing, where the adversaries can obtain a function-similar
copy of the victim model, even when they can only query the model. In this
paper, we propose an effective and harmless model ownership verification (MOVE)
to defend against different types of model stealing simultaneously, without
introducing new security risks. In general, we conduct the ownership
verification by verifying whether a suspicious model contains the knowledge of
defender-specified external features. Specifically, we embed the external
features by modifying a few training samples with style transfer. We then train
a meta-classifier to determine whether a model is stolen from the victim. This
approach is inspired by the understanding that the stolen models should contain
the knowledge of features learned by the victim model. In particular,
we develop our MOVE method under both white-box and black-box
settings and analyze its theoretical foundation to provide comprehensive model
protection. Extensive experiments on benchmark datasets verify the
effectiveness of our method and its resistance to potential adaptive attacks.
The codes for reproducing the main experiments of our method are available at
https://github.com/THUYimingLi/MOVE.
| no_new_dataset | 0.942135 |
2209.04821 | Nathanael Lemessa Baisa | Nathanael L. Baisa | Local-Aware Global Attention Network for Person Re-Identification Based
on Body and Hand Images | arXiv admin note: substantial text overlap with arXiv:2108.02234 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning representative, robust and discriminative information from images is
essential for effective person re-identification (Re-Id). In this paper, we
propose a compound approach for end-to-end discriminative deep feature learning
for person Re-Id based on both body and hand images. We carefully design the
Local-Aware Global Attention Network (LAGA-Net), a multi-branch deep network
architecture consisting of one branch for spatial attention, one branch for
channel attention, one branch for global feature representations and another
branch for local feature representations. The attention branches focus on the
relevant features of the image while suppressing the irrelevant backgrounds. In
order to overcome the weakness of the attention mechanisms, equivariant to
pixel shuffling, we integrate relative positional encodings into the spatial
attention module to capture the spatial positions of pixels. The global branch
intends to preserve the global context or structural information. For the
local branch, which intends to capture the fine-grained information, we perform
uniform partitioning to generate stripes on the conv-layer horizontally. We
retrieve the parts by conducting a soft partition without explicitly
partitioning the images or requiring external cues such as pose estimation. A
set of ablation studies shows that each component contributes to the increased
performance of the LAGA-Net. Extensive evaluations on four popular body-based
person Re-Id benchmarks and two publicly available hand datasets demonstrate
that our proposed method consistently outperforms existing state-of-the-art
methods.
| [
{
"version": "v1",
"created": "Sun, 11 Sep 2022 09:43:42 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 11:26:56 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Jul 2024 13:50:35 GMT"
},
{
"version": "v4",
"created": "Sat, 1 Mar 2025 13:11:01 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Baisa",
"Nathanael L.",
""
]
]
| TITLE: Local-Aware Global Attention Network for Person Re-Identification Based
on Body and Hand Images
ABSTRACT: Learning representative, robust and discriminative information from images is
essential for effective person re-identification (Re-Id). In this paper, we
propose a compound approach for end-to-end discriminative deep feature learning
for person Re-Id based on both body and hand images. We carefully design the
Local-Aware Global Attention Network (LAGA-Net), a multi-branch deep network
architecture consisting of one branch for spatial attention, one branch for
channel attention, one branch for global feature representations and another
branch for local feature representations. The attention branches focus on the
relevant features of the image while suppressing the irrelevant backgrounds. In
order to overcome the weakness of the attention mechanisms, equivariant to
pixel shuffling, we integrate relative positional encodings into the spatial
attention module to capture the spatial positions of pixels. The global branch
intends to preserve the global context or structural information. For the
local branch, which intends to capture the fine-grained information, we perform
uniform partitioning to generate stripes on the conv-layer horizontally. We
retrieve the parts by conducting a soft partition without explicitly
partitioning the images or requiring external cues such as pose estimation. A
set of ablation studies shows that each component contributes to the increased
performance of the LAGA-Net. Extensive evaluations on four popular body-based
person Re-Id benchmarks and two publicly available hand datasets demonstrate
that our proposed method consistently outperforms existing state-of-the-art
methods.
| no_new_dataset | 0.949669 |
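The uniform partitioning used by the local branch reduces to pooling the backbone feature map into horizontal stripes, as sketched below; the stripe count of six and the feature-map size are illustrative choices.

```python
# Uniform horizontal partitioning into part-level descriptors (no pose cues).
import torch
import torch.nn.functional as F

def stripe_features(feat_map, num_stripes=6):
    """feat_map: (B, C, H, W) -> list of per-stripe descriptors, each (B, C)."""
    pooled = F.adaptive_avg_pool2d(feat_map, (num_stripes, 1))  # (B, C, S, 1)
    return [pooled[:, :, s, 0] for s in range(num_stripes)]

parts = stripe_features(torch.randn(2, 2048, 24, 8))
print(len(parts), parts[0].shape)  # 6 torch.Size([2, 2048])
```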
2212.03399 | Md Nadim | Md Nadim, Banani Roy | Utilizing Source Code Syntax Patterns to Detect Bug Inducing Commits
using Machine Learning Models | null | null | 10.1007/s11219-022-09611-3 | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Detecting Bug Inducing Commit (BIC) or Just in Time (JIT) defect prediction
using Machine Learning (ML) based models requires tabulated feature values
extracted from the source code or historical maintenance data of a software
system. Existing studies have utilized meta-data from source code repositories
(we named them GitHub Statistics or GS), n-gram-based source code text
processing, and developer's information (e.g., the experience of a developer)
as the feature values in ML-based bug detection models. However, these feature
values do not represent the source code syntax styles or patterns that a
developer might prefer over available valid alternatives provided by
programming languages. This investigation proposes a method to extract features
from a commit's source code syntax patterns to represent software commits and
investigates whether they are helpful in detecting bug proneness in software
systems. We utilize six manually and two automatically labeled datasets from
eight open-source software projects written in Java, C++, and Python
programming languages. Our datasets contain 642 manually labeled and 4,014
automatically labeled buggy and non-buggy commits from six and two subject
systems, respectively. The subject systems contain a diverse number of
revisions, and they are from various application domains. Our investigation
shows the inclusion of the proposed features increases the performance of
detecting buggy and non-buggy software commits using five different machine
learning classification models. Our proposed features also perform better in
detecting buggy commits using the Deep Belief Network generated features and
classification model. This investigation also implemented a state-of-the-art
tool to compare the explainability of predicted buggy commits using our
proposed and traditional features and found that our proposed features provide
better reasoning about buggy.....
| [
{
"version": "v1",
"created": "Wed, 7 Dec 2022 01:46:28 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Nadim",
"Md",
""
],
[
"Roy",
"Banani",
""
]
]
| TITLE: Utilizing Source Code Syntax Patterns to Detect Bug Inducing Commits
using Machine Learning Models
ABSTRACT: Detecting Bug Inducing Commit (BIC) or Just in Time (JIT) defect prediction
using Machine Learning (ML) based models requires tabulated feature values
extracted from the source code or historical maintenance data of a software
system. Existing studies have utilized meta-data from source code repositories
(we named them GitHub Statistics or GS), n-gram-based source code text
processing, and developer's information (e.g., the experience of a developer)
as the feature values in ML-based bug detection models. However, these feature
values do not represent the source code syntax styles or patterns that a
developer might prefer over available valid alternatives provided by
programming languages. This investigation proposes a method to extract features
from a commit's source code syntax patterns to represent software commits and
investigates whether they are helpful in detecting bug proneness in software
systems. We utilize six manually and two automatically labeled datasets from
eight open-source software projects written in Java, C++, and Python
programming languages. Our datasets contain 642 manually labeled and 4,014
automatically labeled buggy and non-buggy commits from six and two subject
systems, respectively. The subject systems contain a diverse number of
revisions, and they are from various application domains. Our investigation
shows the inclusion of the proposed features increases the performance of
detecting buggy and non-buggy software commits using five different machine
learning classification models. Our proposed features also perform better in
detecting buggy commits using the Deep Belief Network generated features and
classification model. This investigation also implemented a state-of-the-art
tool to compare the explainability of predicted buggy commits using our
proposed and traditional features and found that our proposed features provide
better reasoning about buggy.....
| no_new_dataset | 0.951414 |
2212.13706 | Shiyu Wang | Shiyu Wang, Fan Zhou, Yinbo Sun, Lintao Ma, James Zhang, Yangfei Zheng | End-to-End Modeling Hierarchical Time Series Using Autoregressive
Transformer and Conditional Normalizing Flow based Reconciliation | Accepted by the 22nd IEEE International Conference on Data Mining
(ICDM2022) | null | 10.1109/ICDMW58026.2022.00141 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Multivariate time series forecasting with hierarchical structure is pervasive
in real-world applications, demanding not only predicting each level of the
hierarchy, but also reconciling all forecasts to ensure coherency, i.e., the
forecasts should satisfy the hierarchical aggregation constraints. Moreover,
the disparities of statistical characteristics between levels can be huge,
worsened by non-Gaussian distributions and non-linear correlations. To this
extent, we propose a novel end-to-end hierarchical time series forecasting
model, based on conditioned normalizing flow-based autoregressive transformer
reconciliation, to represent complex data distribution while simultaneously
reconciling the forecasts to ensure coherency. Unlike other state-of-the-art
methods, we achieve the forecasting and reconciliation simultaneously without
requiring any explicit post-processing step. In addition, by harnessing the
power of deep models, we do not rely on any assumption such as unbiased
estimates or Gaussian distribution. Our evaluation experiments are conducted on
four real-world hierarchical datasets from different industrial domains (three
public ones and a dataset from the application servers of Alipay's data center)
and the preliminary results demonstrate efficacy of our proposed method.
| [
{
"version": "v1",
"created": "Wed, 28 Dec 2022 05:43:57 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 07:39:22 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 10:52:11 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wang",
"Shiyu",
""
],
[
"Zhou",
"Fan",
""
],
[
"Sun",
"Yinbo",
""
],
[
"Ma",
"Lintao",
""
],
[
"Zhang",
"James",
""
],
[
"Zheng",
"Yangfei",
""
]
]
| TITLE: End-to-End Modeling Hierarchical Time Series Using Autoregressive
Transformer and Conditional Normalizing Flow based Reconciliation
ABSTRACT: Multivariate time series forecasting with hierarchical structure is pervasive
in real-world applications, demanding not only predicting each level of the
hierarchy, but also reconciling all forecasts to ensure coherency, i.e., the
forecasts should satisfy the hierarchical aggregation constraints. Moreover,
the disparities of statistical characteristics between levels can be huge,
worsened by non-Gaussian distributions and non-linear correlations. To this
extent, we propose a novel end-to-end hierarchical time series forecasting
model, based on conditioned normalizing flow-based autoregressive transformer
reconciliation, to represent complex data distribution while simultaneously
reconciling the forecasts to ensure coherency. Unlike other state-of-the-art
methods, we achieve the forecasting and reconciliation simultaneously without
requiring any explicit post-processing step. In addition, by harnessing the
power of deep models, we do not rely on any assumption such as unbiased
estimates or Gaussian distribution. Our evaluation experiments are conducted on
four real-world hierarchical datasets from different industrial domains (three
public ones and a dataset from the application servers of Alipay's data center)
and the preliminary results demonstrate efficacy of our proposed method.
| no_new_dataset | 0.946151 |
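The coherency constraint that reconciliation enforces is linear: with a summing matrix S mapping bottom-level series to every level, coherent forecasts satisfy y_all = S y_bottom. A tiny two-leaf hierarchy illustrates this below; the paper's flow-based model learns to produce such coherent forecasts end to end rather than applying this mapping as a post-processing step.

```python
# Hierarchical coherency via a summing matrix (two-leaf toy hierarchy).
import numpy as np

# Hierarchy: total = A + B, with leaves A and B.
S = np.array([[1, 1],   # total
              [1, 0],   # A
              [0, 1]])  # B

y_bottom = np.array([3.0, 5.0])  # forecasts for the leaf series
y_all = S @ y_bottom             # coherent forecasts at every level
print(y_all)                     # [8. 3. 5.] -- total equals A + B
```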
2303.05345 | Alberto Maria Mongardini | Massimo La Morgia, Alessandro Mei, Alberto Maria Mongardini | TGDataset: Collecting and Exploring the Largest Telegram Channels
Dataset | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Telegram is one of the most popular instant messaging apps in today's digital
age. In addition to providing a private messaging service, Telegram, with its
channels, represents a valid medium for rapidly broadcasting content to a large
audience (COVID-19 announcements), but, unfortunately, also for disseminating
radical ideologies and coordinating attacks (Capitol Hill riot). This paper
presents the TGDataset, a new dataset that includes 120,979 Telegram channels
and over 400 million messages, making it the largest collection of Telegram
channels to the best of our knowledge. After a brief introduction to the data
collection process, we analyze the languages spoken within our dataset and the
topics covered by English channels. Finally, we discuss some use cases in which
our dataset can be extremely useful for better understanding the Telegram
ecosystem, as well as to study the diffusion of questionable news. In addition
to the raw dataset, we released the scripts we used to analyze the dataset and
the list of channels belonging to the network of a new conspiracy theory called
Sabmyk.
| [
{
"version": "v1",
"created": "Thu, 9 Mar 2023 15:42:38 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Dec 2024 15:20:33 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 14:57:12 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"La Morgia",
"Massimo",
""
],
[
"Mei",
"Alessandro",
""
],
[
"Mongardini",
"Alberto Maria",
""
]
]
| TITLE: TGDataset: Collecting and Exploring the Largest Telegram Channels
Dataset
ABSTRACT: Telegram is one of the most popular instant messaging apps in today's digital
age. In addition to providing a private messaging service, Telegram, with its
channels, represents a valid medium for rapidly broadcasting content to a large
audience (COVID-19 announcements), but, unfortunately, also for disseminating
radical ideologies and coordinating attacks (Capitol Hill riot). This paper
presents the TGDataset, a new dataset that includes 120,979 Telegram channels
and over 400 million messages, making it the largest collection of Telegram
channels to the best of our knowledge. After a brief introduction to the data
collection process, we analyze the languages spoken within our dataset and the
topics covered by English channels. Finally, we discuss some use cases in which
our dataset can be extremely useful for better understanding the Telegram
ecosystem, as well as for studying the diffusion of questionable news. In addition
to the raw dataset, we released the scripts we used to analyze the dataset and
the list of channels belonging to the network of a new conspiracy theory called
Sabmyk.
| new_dataset | 0.969266 |
2303.15263 | Nathanael Lemessa Baisa | Nathanael L. Baisa | Joint Person Identity, Gender and Age Estimation from Hand Images using
Deep Multi-Task Representation Learning | arXiv admin note: text overlap with arXiv:2209.04821 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a multi-task representation learning framework to
jointly estimate the identity, gender and age of individuals from their hand
images for the purpose of criminal investigations since the hand images are
often the only available information in cases of serious crime such as sexual
abuse. We investigate different up-to-date deep learning architectures and
compare their performance for joint estimation of identity, gender and age from
hand images of perpetrators of serious crime. To simplify the age prediction,
we create age groups for age estimation. We conduct extensive evaluations and
comparisons of both convolution-based and transformer-based deep learning
architectures on a publicly available 11k hands dataset. Our experimental
analysis shows that it is possible to efficiently estimate not only identity
but also other attributes such as gender and age of suspects jointly from hand
images for criminal investigations, which is crucial in assisting international
police forces in court to identify and convict abusers.
| [
{
"version": "v1",
"created": "Mon, 27 Mar 2023 14:52:08 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 11:32:43 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Jun 2023 13:02:14 GMT"
},
{
"version": "v4",
"created": "Wed, 20 Mar 2024 12:39:28 GMT"
},
{
"version": "v5",
"created": "Sat, 1 Mar 2025 23:43:08 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Baisa",
"Nathanael L.",
""
]
]
| TITLE: Joint Person Identity, Gender and Age Estimation from Hand Images using
Deep Multi-Task Representation Learning
ABSTRACT: In this paper, we propose a multi-task representation learning framework to
jointly estimate the identity, gender and age of individuals from their hand
images for the purpose of criminal investigations since the hand images are
often the only available information in cases of serious crime such as sexual
abuse. We investigate different up-to-date deep learning architectures and
compare their performance for joint estimation of identity, gender and age from
hand images of perpetrators of serious crime. To simplify the age prediction,
we create age groups for age estimation. We conduct extensive evaluations and
comparisons of both convolution-based and transformer-based deep learning
architectures on a publicly available 11k hands dataset. Our experimental
analysis shows that it is possible to efficiently estimate not only identity
but also other attributes such as gender and age of suspects jointly from hand
images for criminal investigations, which is crucial in assisting international
police forces in court to identify and convict abusers.
| no_new_dataset | 0.947962 |
2305.00706 | Shiyu Wang | Shiyu Wang, Yinbo Sun, Xiaoming Shi, Shiyi Zhu, Lin-Tao Ma, James
Zhang, Yifei Zheng, Jian Liu | Full Scaling Automation for Sustainable Development of Green Data
Centers | Accepted by the Thirty-Second International Joint Conference on
Artificial Intelligence (IJCAI-23) | https://www.ijcai.org/proceedings/2023/0695.pdf | 10.24963/ijcai.2023/695 | null | cs.DC cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The rapid rise in cloud computing has resulted in an alarming increase in
data centers' carbon emissions, which now account for >3% of global greenhouse
gas emissions, necessitating immediate steps to combat their mounting strain on
the global climate. An important focus of this effort is to improve resource
utilization in order to save electricity usage. Our proposed Full Scaling
Automation (FSA) mechanism is an effective method of dynamically adapting
resources to accommodate changing workloads in large-scale cloud computing
clusters, enabling the clusters in data centers to maintain their desired CPU
utilization target and thus improve energy efficiency. FSA harnesses the power
of deep representation learning to accurately predict the future workload of
each service and automatically stabilize the corresponding target CPU usage
level, unlike the previous autoscaling methods, such as Autopilot or FIRM, that
need to adjust computing resources with statistical models and expert
knowledge. Our approach achieves significant performance improvement compared
to the existing work on real-world datasets. We also deployed FSA on
large-scale cloud computing clusters in industrial data centers, and according
to the certification of the China Environmental United Certification Center
(CEC), a reduction of 947 tons of carbon dioxide, equivalent to a saving of
1,538,000 kWh of electricity, was achieved during the Double 11 shopping
festival of 2022, marking a critical step for our company's strategic goal
towards carbon neutrality by 2030.
| [
{
"version": "v1",
"created": "Mon, 1 May 2023 08:11:00 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 15:57:31 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wang",
"Shiyu",
""
],
[
"Sun",
"Yinbo",
""
],
[
"Shi",
"Xiaoming",
""
],
[
"Zhu",
"Shiyi",
""
],
[
"Ma",
"Lin-Tao",
""
],
[
"Zhang",
"James",
""
],
[
"Zheng",
"Yifei",
""
],
[
"Liu",
"Jian",
""
]
]
| TITLE: Full Scaling Automation for Sustainable Development of Green Data
Centers
ABSTRACT: The rapid rise in cloud computing has resulted in an alarming increase in
data centers' carbon emissions, which now account for >3% of global greenhouse
gas emissions, necessitating immediate steps to combat their mounting strain on
the global climate. An important focus of this effort is to improve resource
utilization in order to save electricity usage. Our proposed Full Scaling
Automation (FSA) mechanism is an effective method of dynamically adapting
resources to accommodate changing workloads in large-scale cloud computing
clusters, enabling the clusters in data centers to maintain their desired CPU
utilization target and thus improve energy efficiency. FSA harnesses the power
of deep representation learning to accurately predict the future workload of
each service and automatically stabilize the corresponding target CPU usage
level, unlike the previous autoscaling methods, such as Autopilot or FIRM, that
need to adjust computing resources with statistical models and expert
knowledge. Our approach achieves significant performance improvement compared
to the existing work on real-world datasets. We also deployed FSA on
large-scale cloud computing clusters in industrial data centers, and according
to the certification of the China Environmental United Certification Center
(CEC), a reduction of 947 tons of carbon dioxide, equivalent to a saving of
1,538,000 kWh of electricity, was achieved during the Double 11 shopping
festival of 2022, marking a critical step for our company's strategic goal
towards carbon neutrality by 2030.
| no_new_dataset | 0.949902 |
2305.14445 | Jenny Chen | Jenny Chen (1), Benjamin Ades-Aron (1), Hong-Hsi Lee (2), Subah Mehrin
(1), Michelle Pang (3), Dmitry S. Novikov (1), Jelle Veraart (1), Els
Fieremans (1) | Optimization and Validation of the DESIGNER dMRI preprocessing pipeline
in white matter aging | null | Imaging Neuroscience, 2, 1-17, 2024 | 10.1162/imag_a_00125 | null | physics.med-ph physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Various diffusion MRI (dMRI) preprocessing pipelines are currently available
to yield more accurate diffusion parameters. Here, we evaluated accuracy and
robustness of the optimized Diffusion parameter EStImation with Gibbs and NoisE
Removal (DESIGNER) pipeline in a large clinical dMRI dataset and using ground
truth phantoms. DESIGNER has been modified to improve denoising and target
Gibbs ringing for partial Fourier acquisitions. We compared the revisited
DESIGNER (Dv2) (including denoising, Gibbs removal, correction for motion, EPI
distortion, and eddy currents) against the original DESIGNER (Dv1) pipeline,
minimal preprocessing (including correction for motion, EPI distortion, and
eddy currents only), and no preprocessing on a large clinical dMRI dataset of
524 control subjects with ages between 25 and 75 years old. We evaluated the
effect of specific processing steps on age correlations in white matter with
DTI and DKI metrics. We also evaluated the added effect of minimal Gaussian
smoothing to deal with noise and to reduce outliers in parameter maps compared
to DESIGNER (Dv2)'s noise removal method. Moreover, DESIGNER (Dv2)'s updated
noise and Gibbs removal methods were assessed using a ground truth dMRI phantom
to evaluate accuracy. Results show that age correlations in white matter with DTI
and DKI metrics were affected by the preprocessing pipeline, causing systematic
differences in absolute parameter values and loss or gain of statistical
significance. Both in clinical dMRI and ground truth phantoms, DESIGNER (Dv2)
pipeline resulted in the smallest number of outlier voxels and improved
accuracy in DTI and DKI metrics as noise was reduced and Gibbs removal was
improved. Thus, DESIGNER (Dv2) provides more accurate and robust DTI and DKI
parameter maps as compared to no preprocessing or minimal preprocessing.
| [
{
"version": "v1",
"created": "Tue, 23 May 2023 18:09:56 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Mar 2024 15:23:19 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chen",
"Jenny",
""
],
[
"Ades-Aron",
"Benjamin",
""
],
[
"Lee",
"Hong-Hsi",
""
],
[
"Mehrin",
"Subah",
""
],
[
"Pang",
"Michelle",
""
],
[
"Novikov",
"Dmitry S.",
""
],
[
"Veraart",
"Jelle",
""
],
[
"Fieremans",
"Els",
""
]
]
| TITLE: Optimization and Validation of the DESIGNER dMRI preprocessing pipeline
in white matter aging
ABSTRACT: Various diffusion MRI (dMRI) preprocessing pipelines are currently available
to yield more accurate diffusion parameters. Here, we evaluated accuracy and
robustness of the optimized Diffusion parameter EStImation with Gibbs and NoisE
Removal (DESIGNER) pipeline in a large clinical dMRI dataset and using ground
truth phantoms. DESIGNER has been modified to improve denoising and target
Gibbs ringing for partial Fourier acquisitions. We compared the revisited
DESIGNER (Dv2) (including denoising, Gibbs removal, correction for motion, EPI
distortion, and eddy currents) against the original DESIGNER (Dv1) pipeline,
minimal preprocessing (including correction for motion, EPI distortion, and
eddy currents only), and no preprocessing on a large clinical dMRI dataset of
524 control subjects with ages between 25 and 75 years old. We evaluated the
effect of specific processing steps on age correlations in white matter with
DTI and DKI metrics. We also evaluated the added effect of minimal Gaussian
smoothing to deal with noise and to reduce outliers in parameter maps compared
to DESIGNER (Dv2)'s noise removal method. Moreover, DESIGNER (Dv2)'s updated
noise and Gibbs removal methods were assessed using a ground truth dMRI phantom
to evaluate accuracy. Results show that age correlations in white matter with DTI
and DKI metrics were affected by the preprocessing pipeline, causing systematic
differences in absolute parameter values and loss or gain of statistical
significance. Both in clinical dMRI and ground truth phantoms, DESIGNER (Dv2)
pipeline resulted in the smallest number of outlier voxels and improved
accuracy in DTI and DKI metrics as noise was reduced and Gibbs removal was
improved. Thus, DESIGNER (Dv2) provides more accurate and robust DTI and DKI
parameter maps as compared to no preprocessing or minimal preprocessing.
| no_new_dataset | 0.944842 |
2305.18076 | Tao Feng | Tao Feng, Jie Zhang, Huashan Liu, Zhijie Wang, Shengyuan Pang | Towards Efficient Deep Hashing Retrieval: Condensing Your Data via
Feature-Embedding Matching | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep hashing retrieval has gained widespread use in big data retrieval due to
its robust feature extraction and efficient hashing process. However, training
advanced deep hashing models has become more expensive due to complex
optimizations and large datasets. Coreset selection and Dataset Condensation
lower overall training costs by reducing the volume of training data without
significantly compromising model accuracy for classification task. In this
paper, we explore the effect of mainstream dataset condensation methods for
deep hashing retrieval and propose IEM (Information-intensive feature Embedding
Matching), which is centered on distribution matching and incorporates model
and data augmentation techniques to further enhance the feature of hashing
space. Extensive experiments demonstrate the superior performance and
efficiency of our approach.
| [
{
"version": "v1",
"created": "Mon, 29 May 2023 13:23:55 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 09:26:18 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Feng",
"Tao",
""
],
[
"Zhang",
"Jie",
""
],
[
"Liu",
"Huashan",
""
],
[
"Wang",
"Zhijie",
""
],
[
"Pang",
"Shengyuan",
""
]
]
| TITLE: Towards Efficient Deep Hashing Retrieval: Condensing Your Data via
Feature-Embedding Matching
ABSTRACT: Deep hashing retrieval has gained widespread use in big data retrieval due to
its robust feature extraction and efficient hashing process. However, training
advanced deep hashing models has become more expensive due to complex
optimizations and large datasets. Coreset selection and Dataset Condensation
lower overall training costs by reducing the volume of training data without
significantly compromising model accuracy for classification task. In this
paper, we explore the effect of mainstream dataset condensation methods for
deep hashing retrieval and propose IEM (Information-intensive feature Embedding
Matching), which is centered on distribution matching and incorporates model
and data augmentation techniques to further enhance the feature of hashing
space. Extensive experiments demonstrate the superior performance and
efficiency of our approach.
| no_new_dataset | 0.950732 |
2307.06616 | Mohamed Amine Ferrag | Mohamed Amine Ferrag, Ammar Battah, Norbert Tihanyi, Ridhi Jain, Diana
Maimut, Fatima Alwahedi, Thierry Lestable, Narinderjit Singh Thandi,
Abdechakour Mechri, Merouane Debbah, Lucas C. Cordeiro | SecureFalcon: Are We There Yet in Automated Software Vulnerability
Detection with LLMs? | The paper is accepted for publication in IEEE Transactions on
Software Engineering | null | null | null | cs.CR cs.AI | http://creativecommons.org/licenses/by/4.0/ | Software vulnerabilities can cause numerous problems, including crashes, data
loss, and security breaches. These issues greatly compromise quality and can
negatively impact the market adoption of software applications and systems.
Traditional bug-fixing methods, such as static analysis, often produce false
positives. While bounded model checking, a form of Formal Verification (FV),
can provide more accurate outcomes compared to static analyzers, it demands
substantial resources and significantly hinders developer productivity. Can
Machine Learning (ML) achieve accuracy comparable to FV methods and be used in
popular instant code completion frameworks in near real-time? In this paper, we
introduce SecureFalcon, an innovative model architecture with only 121 million
parameters derived from the Falcon-40B model and explicitly tailored for
classifying software vulnerabilities. To achieve the best performance, we
trained our model using two datasets, namely the FormAI dataset and the
FalconVulnDB. The FalconVulnDB is a combination of recent public datasets,
namely the SySeVR framework, Draper VDISC, Bigvul, Diversevul, SARD Juliet, and
ReVeal datasets. These datasets contain the top 25 most dangerous software
weaknesses, such as CWE-119, CWE-120, CWE-476, CWE-122, CWE-190, CWE-121,
CWE-78, CWE-787, CWE-20, and CWE-762. SecureFalcon achieves 94% accuracy in
binary classification and up to 92% in multiclass classification, with instant CPU
inference times. It outperforms existing models such as BERT, RoBERTa,
CodeBERT, and traditional ML algorithms, promising to push the boundaries of
software vulnerability detection and instant code completion frameworks.
| [
{
"version": "v1",
"created": "Thu, 13 Jul 2023 08:34:09 GMT"
},
{
"version": "v2",
"created": "Wed, 29 May 2024 18:22:48 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 12:12:22 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ferrag",
"Mohamed Amine",
""
],
[
"Battah",
"Ammar",
""
],
[
"Tihanyi",
"Norbert",
""
],
[
"Jain",
"Ridhi",
""
],
[
"Maimut",
"Diana",
""
],
[
"Alwahedi",
"Fatima",
""
],
[
"Lestable",
"Thierry",
""
],
[
"Thandi",
"Narinderjit Singh",
""
],
[
"Mechri",
"Abdechakour",
""
],
[
"Debbah",
"Merouane",
""
],
[
"Cordeiro",
"Lucas C.",
""
]
]
| TITLE: SecureFalcon: Are We There Yet in Automated Software Vulnerability
Detection with LLMs?
ABSTRACT: Software vulnerabilities can cause numerous problems, including crashes, data
loss, and security breaches. These issues greatly compromise quality and can
negatively impact the market adoption of software applications and systems.
Traditional bug-fixing methods, such as static analysis, often produce false
positives. While bounded model checking, a form of Formal Verification (FV),
can provide more accurate outcomes compared to static analyzers, it demands
substantial resources and significantly hinders developer productivity. Can
Machine Learning (ML) achieve accuracy comparable to FV methods and be used in
popular instant code completion frameworks in near real-time? In this paper, we
introduce SecureFalcon, an innovative model architecture with only 121 million
parameters derived from the Falcon-40B model and explicitly tailored for
classifying software vulnerabilities. To achieve the best performance, we
trained our model using two datasets, namely the FormAI dataset and the
FalconVulnDB. The FalconVulnDB is a combination of recent public datasets,
namely the SySeVR framework, Draper VDISC, Bigvul, Diversevul, SARD Juliet, and
ReVeal datasets. These datasets contain the top 25 most dangerous software
weaknesses, such as CWE-119, CWE-120, CWE-476, CWE-122, CWE-190, CWE-121,
CWE-78, CWE-787, CWE-20, and CWE-762. SecureFalcon achieves 94% accuracy in
binary classification and up to 92% in multiclass classification, with instant CPU
inference times. It outperforms existing models such as BERT, RoBERTa,
CodeBERT, and traditional ML algorithms, promising to push the boundaries of
software vulnerability detection and instant code completion frameworks.
| no_new_dataset | 0.904903 |
2309.13838 | Georg Hahn | Rebecca M. Hurwitz and Georg Hahn | Penalized Principal Component Analysis Using Smoothing | null | null | null | null | stat.AP cs.LG cs.NA math.NA q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Principal components computed via PCA (principal component analysis) are
traditionally used to reduce dimensionality in genomic data or to correct for
population stratification. In this paper, we explore the penalized eigenvalue
problem (PEP) which reformulates the computation of the first eigenvector as an
optimization problem and adds an $L_1$ penalty constraint to enforce sparseness
of the solution. The contribution of our article is threefold. First, we extend
PEP by applying smoothing to the original LASSO-type $L_1$ penalty. This allows
one to compute analytical gradients which enable faster and more efficient
minimization of the objective function associated with the optimization
problem. Second, we demonstrate how higher order eigenvectors can be calculated
with PEP using established results from singular value decomposition (SVD).
Third, we present four experimental studies to demonstrate the usefulness of
the smoothed penalized eigenvectors. Using data from the 1000 Genomes Project
dataset, we empirically demonstrate that our proposed smoothed PEP allows one
to increase numerical stability and obtain meaningful eigenvectors. We also
employ the penalized eigenvector approach in two additional real data
applications (computation of a polygenic risk score and clustering),
demonstrating that exchanging the penalized eigenvectors for their smoothed
counterparts can increase prediction accuracy in polygenic risk scores and
enhance discernibility of clusterings. Moreover, we compare our proposed
smoothed PEP to seven state-of-the-art algorithms for sparse PCA and evaluate
the accuracy of the obtained eigenvectors, their support recovery, and their
runtime.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 02:50:22 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 01:47:00 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Hurwitz",
"Rebecca M.",
""
],
[
"Hahn",
"Georg",
""
]
]
| TITLE: Penalized Principal Component Analysis Using Smoothing
ABSTRACT: Principal components computed via PCA (principal component analysis) are
traditionally used to reduce dimensionality in genomic data or to correct for
population stratification. In this paper, we explore the penalized eigenvalue
problem (PEP) which reformulates the computation of the first eigenvector as an
optimization problem and adds an $L_1$ penalty constraint to enforce sparseness
of the solution. The contribution of our article is threefold. First, we extend
PEP by applying smoothing to the original LASSO-type $L_1$ penalty. This allows
one to compute analytical gradients which enable faster and more efficient
minimization of the objective function associated with the optimization
problem. Second, we demonstrate how higher order eigenvectors can be calculated
with PEP using established results from singular value decomposition (SVD).
Third, we present four experimental studies to demonstrate the usefulness of
the smoothed penalized eigenvectors. Using data from the 1000 Genomes Project
dataset, we empirically demonstrate that our proposed smoothed PEP allows one
to increase numerical stability and obtain meaningful eigenvectors. We also
employ the penalized eigenvector approach in two additional real data
applications (computation of a polygenic risk score and clustering),
demonstrating that exchanging the penalized eigenvectors for their smoothed
counterparts can increase prediction accuracy in polygenic risk scores and
enhance discernibility of clusterings. Moreover, we compare our proposed
smoothed PEP to seven state-of-the-art algorithms for sparse PCA and evaluate
the accuracy of the obtained eigenvectors, their support recovery, and their
runtime.
| no_new_dataset | 0.945801 |
2310.08537 | Yifei Zhang | Yifei Zhang, James Song, Siyi Gu, Tianxu Jiang, Bo Pan, Guangji Bai,
Liang Zhao | Saliency-Bench: A Comprehensive Benchmark for Evaluating Visual
Explanations | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Explainable AI (XAI) has gained significant attention for providing insights
into the decision-making processes of deep learning models, particularly for
image classification tasks through visual explanations visualized by saliency
maps. Despite their success, challenges remain due to the lack of annotated
datasets and standardized evaluation pipelines. In this paper, we introduce
Saliency-Bench, a novel benchmark suite designed to evaluate visual
explanations generated by saliency methods across multiple datasets. We
curated, constructed, and annotated eight datasets, each covering diverse tasks
such as scene classification, cancer diagnosis, object classification, and
action classification, with corresponding ground-truth explanations. The
benchmark includes a standardized and unified evaluation pipeline for assessing
faithfulness and alignment of the visual explanation, providing a holistic
visual explanation performance assessment. We benchmark these eight datasets
with widely used saliency methods on different image classifier architectures
to evaluate explanation quality. Additionally, we developed an easy-to-use API
for automating the evaluation pipeline, from data access and data loading
to result evaluation. The benchmark is available via our website:
https://xaidataset.github.io.
| [
{
"version": "v1",
"created": "Thu, 12 Oct 2023 17:26:16 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Nov 2023 01:35:45 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 09:26:26 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Yifei",
""
],
[
"Song",
"James",
""
],
[
"Gu",
"Siyi",
""
],
[
"Jiang",
"Tianxu",
""
],
[
"Pan",
"Bo",
""
],
[
"Bai",
"Guangji",
""
],
[
"Zhao",
"Liang",
""
]
]
| TITLE: Saliency-Bench: A Comprehensive Benchmark for Evaluating Visual
Explanations
ABSTRACT: Explainable AI (XAI) has gained significant attention for providing insights
into the decision-making processes of deep learning models, particularly for
image classification tasks through visual explanations in the form of saliency
maps. Despite their success, challenges remain due to the lack of annotated
datasets and standardized evaluation pipelines. In this paper, we introduce
Saliency-Bench, a novel benchmark suite designed to evaluate visual
explanations generated by saliency methods across multiple datasets. We
curated, constructed, and annotated eight datasets, each covering diverse tasks
such as scene classification, cancer diagnosis, object classification, and
action classification, with corresponding ground-truth explanations. The
benchmark includes a standardized and unified evaluation pipeline for assessing
faithfulness and alignment of the visual explanation, providing a holistic
visual explanation performance assessment. We benchmark these eight datasets
with widely used saliency methods on different image classifier architectures
to evaluate explanation quality. Additionally, we developed an easy-to-use API
for automating the evaluation pipeline, from data access and data loading
to result evaluation. The benchmark is available via our website:
https://xaidataset.github.io.
| no_new_dataset | 0.527134 |
2310.13104 | Zhiru Zhu | Zhiru Zhu, Raul Castro Fernandez | Within-Dataset Disclosure Risk for Differential Privacy | null | null | null | null | cs.DB cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differential privacy (DP) enables private data analysis. In a typical DP
deployment, controllers manage individuals' sensitive data and are responsible
for answering analysts' queries while protecting individuals' privacy. They do
so by choosing the privacy parameter $\epsilon$, which controls the degree of
privacy for all individuals in all possible datasets. However, it is
challenging for controllers to choose $\epsilon$ because of the difficulty of
interpreting the privacy implications of such a choice on the within-dataset
individuals.
To address this challenge, we first derive a relative disclosure risk
indicator (RDR) that indicates the impact of choosing $\epsilon$ on the
within-dataset individuals' disclosure risk. We then design an algorithm to
find $\epsilon$ based on controllers' privacy preferences expressed as a
function of the within-dataset individuals' RDRs, and an alternative algorithm
that finds and releases $\epsilon$ while satisfying DP. Lastly, we propose a
solution that bounds the total privacy leakage when using the algorithm to
answer multiple queries without requiring controllers to set the total privacy
budget. We evaluate our contributions through an IRB-approved user study that
shows the RDR is useful for helping controllers choose $\epsilon$, and
experimental evaluations showing our algorithms are efficient and scalable.
| [
{
"version": "v1",
"created": "Thu, 19 Oct 2023 19:01:27 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Mar 2024 23:21:52 GMT"
},
{
"version": "v3",
"created": "Mon, 2 Dec 2024 22:14:44 GMT"
},
{
"version": "v4",
"created": "Mon, 3 Mar 2025 03:45:05 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhu",
"Zhiru",
""
],
[
"Fernandez",
"Raul Castro",
""
]
]
| TITLE: Within-Dataset Disclosure Risk for Differential Privacy
ABSTRACT: Differential privacy (DP) enables private data analysis. In a typical DP
deployment, controllers manage individuals' sensitive data and are responsible
for answering analysts' queries while protecting individuals' privacy. They do
so by choosing the privacy parameter $\epsilon$, which controls the degree of
privacy for all individuals in all possible datasets. However, it is
challenging for controllers to choose $\epsilon$ because of the difficulty of
interpreting the privacy implications of such a choice on the within-dataset
individuals.
To address this challenge, we first derive a relative disclosure risk
indicator (RDR) that indicates the impact of choosing $\epsilon$ on the
within-dataset individuals' disclosure risk. We then design an algorithm to
find $\epsilon$ based on controllers' privacy preferences expressed as a
function of the within-dataset individuals' RDRs, and an alternative algorithm
that finds and releases $\epsilon$ while satisfying DP. Lastly, we propose a
solution that bounds the total privacy leakage when using the algorithm to
answer multiple queries without requiring controllers to set the total privacy
budget. We evaluate our contributions through an IRB-approved user study that
shows the RDR is useful for helping controllers choose $\epsilon$, and
experimental evaluations showing our algorithms are efficient and scalable.
| no_new_dataset | 0.942295 |
2310.14451 | Yasmin Moslem | Yasmin Moslem, Gianfranco Romani, Mahdi Molaei, Rejwanul Haque, John
D. Kelleher, Andy Way | Domain Terminology Integration into Machine Translation: Leveraging
Large Language Models | WMT 2023 | Proceedings of the Eighth Conference on Machine Translation
(2023), pages 902-911 | 10.18653/v1/2023.wmt-1.82 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper discusses the methods that we used for our submissions to the WMT
2023 Terminology Shared Task for German-to-English (DE-EN), English-to-Czech
(EN-CS), and Chinese-to-English (ZH-EN) language pairs. The task aims to
advance machine translation (MT) by challenging participants to develop systems
that accurately translate technical terms, ultimately enhancing communication
and understanding in specialised domains. To this end, we conduct experiments
that utilise large language models (LLMs) for two purposes: generating
synthetic bilingual terminology-based data, and post-editing translations
generated by an MT model through incorporating pre-approved terms. Our system
employs a four-step process: (i) using an LLM to generate bilingual synthetic
data based on the provided terminology, (ii) fine-tuning a generic
encoder-decoder MT model, with a mix of the terminology-based synthetic data
generated in the first step and a randomly sampled portion of the original
generic training data, (iii) generating translations with the fine-tuned MT
model, and (iv) finally, leveraging an LLM for terminology-constrained
automatic post-editing of the translations that do not include the required
terms. The results demonstrate the effectiveness of our proposed approach in
improving the integration of pre-approved terms into translations. The number
of terms incorporated into the translations of the blind dataset increases from
an average of 36.67% with the generic model to an average of 72.88% by the end
of the process. In other words, successful utilisation of terms nearly doubles
across the three language pairs.
| [
{
"version": "v1",
"created": "Sun, 22 Oct 2023 23:25:28 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Moslem",
"Yasmin",
""
],
[
"Romani",
"Gianfranco",
""
],
[
"Molaei",
"Mahdi",
""
],
[
"Haque",
"Rejwanul",
""
],
[
"Kelleher",
"John D.",
""
],
[
"Way",
"Andy",
""
]
]
| TITLE: Domain Terminology Integration into Machine Translation: Leveraging
Large Language Models
ABSTRACT: This paper discusses the methods that we used for our submissions to the WMT
2023 Terminology Shared Task for German-to-English (DE-EN), English-to-Czech
(EN-CS), and Chinese-to-English (ZH-EN) language pairs. The task aims to
advance machine translation (MT) by challenging participants to develop systems
that accurately translate technical terms, ultimately enhancing communication
and understanding in specialised domains. To this end, we conduct experiments
that utilise large language models (LLMs) for two purposes: generating
synthetic bilingual terminology-based data, and post-editing translations
generated by an MT model through incorporating pre-approved terms. Our system
employs a four-step process: (i) using an LLM to generate bilingual synthetic
data based on the provided terminology, (ii) fine-tuning a generic
encoder-decoder MT model, with a mix of the terminology-based synthetic data
generated in the first step and a randomly sampled portion of the original
generic training data, (iii) generating translations with the fine-tuned MT
model, and (iv) finally, leveraging an LLM for terminology-constrained
automatic post-editing of the translations that do not include the required
terms. The results demonstrate the effectiveness of our proposed approach in
improving the integration of pre-approved terms into translations. The number
of terms incorporated into the translations of the blind dataset increases from
an average of 36.67% with the generic model to an average of 72.88% by the end
of the process. In other words, successful utilisation of terms nearly doubles
across the three language pairs.
| no_new_dataset | 0.91957 |
2310.17631 | Lianghui Zhu | Lianghui Zhu, Xinggang Wang, Xinlong Wang | JudgeLM: Fine-tuned Large Language Models are Scalable Judges | JudgeLM is accepted at ICLR 2025. Code is available at
https://github.com/baaivision/JudgeLM | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evaluating Large Language Models (LLMs) in open-ended scenarios is
challenging because existing benchmarks and metrics can not measure them
comprehensively. To address this problem, we propose to fine-tune LLMs as
scalable judges (JudgeLM) to evaluate LLMs efficiently and effectively in
open-ended benchmarks. We first propose a comprehensive, large-scale,
high-quality dataset containing task seeds, LLMs-generated answers, and
GPT-4-generated judgments for fine-tuning high-performance judges, as well as a
new benchmark for evaluating the judges. We train JudgeLM at different scales
from 7B, 13B, to 33B parameters, and conduct a systematic analysis of its
capabilities and behaviors. We then analyze the key biases in fine-tuning LLM
as a judge and consider them as position bias, knowledge bias, and format bias.
To address these issues, JudgeLM introduces a bag of techniques including swap
augmentation, reference support, and reference drop, which clearly enhance the
judge's performance. JudgeLM obtains the state-of-the-art judge performance on
both the existing PandaLM benchmark and our proposed new benchmark. Our JudgeLM
is efficient and the JudgeLM-7B only needs 3 minutes to judge 5K samples with 8
A100 GPUs. JudgeLM obtains high agreement with the teacher judge, achieving an
agreement exceeding 90% that even surpasses human-to-human agreement. JudgeLM
also demonstrates extended capabilities in being judges of the single answer,
multimodal models, multiple answers, multi-turn chat, etc. Code is available at
https://github.com/baaivision/JudgeLM.
| [
{
"version": "v1",
"created": "Thu, 26 Oct 2023 17:48:58 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 17:06:43 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhu",
"Lianghui",
""
],
[
"Wang",
"Xinggang",
""
],
[
"Wang",
"Xinlong",
""
]
]
| TITLE: JudgeLM: Fine-tuned Large Language Models are Scalable Judges
ABSTRACT: Evaluating Large Language Models (LLMs) in open-ended scenarios is
challenging because existing benchmarks and metrics can not measure them
comprehensively. To address this problem, we propose to fine-tune LLMs as
scalable judges (JudgeLM) to evaluate LLMs efficiently and effectively in
open-ended benchmarks. We first propose a comprehensive, large-scale,
high-quality dataset containing task seeds, LLMs-generated answers, and
GPT-4-generated judgments for fine-tuning high-performance judges, as well as a
new benchmark for evaluating the judges. We train JudgeLM at different scales
from 7B, 13B, to 33B parameters, and conduct a systematic analysis of its
capabilities and behaviors. We then analyze the key biases in fine-tuning LLM
as a judge and consider them as position bias, knowledge bias, and format bias.
To address these issues, JudgeLM introduces a bag of techniques including swap
augmentation, reference support, and reference drop, which clearly enhance the
judge's performance. JudgeLM obtains the state-of-the-art judge performance on
both the existing PandaLM benchmark and our proposed new benchmark. Our JudgeLM
is efficient and the JudgeLM-7B only needs 3 minutes to judge 5K samples with 8
A100 GPUs. JudgeLM obtains high agreement with the teacher judge, achieving an
agreement exceeding 90% that even surpasses human-to-human agreement. JudgeLM
also demonstrates extended capabilities in being judges of the single answer,
multimodal models, multiple answers, multi-turn chat, etc. Code is available at
https://github.com/baaivision/JudgeLM.
| new_dataset | 0.96944 |
2310.17953 | Peng Xie | Peng Xie, Kani Chen | Developing a Multilingual Dataset and Evaluation Metrics for
Code-Switching: A Focus on Hong Kong's Polylingual Dynamics | null | null | null | null | cs.SD cs.CL eess.AS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The existing audio datasets are predominantly tailored towards single
languages, overlooking the complex linguistic behaviors of multilingual
communities that engage in code-switching. This practice, where individuals
frequently mix two or more languages in their daily interactions, is
particularly prevalent in multilingual regions such as Hong Kong, China. To
bridge this gap, we have developed a 34.8-hour dataset of Mixed Cantonese and
English (MCE) audio using our Multi-Agent Data Generation Framework (MADGF). We
fine-tuned the open-source multilingual Automatic Speech Recognition (ASR)
model, Whisper, with the MCE dataset, leading to impressive zero-shot
performance. The traditional metrics overlook important factors such as latency
in real-world applications and code-switching scenarios. We have introduced a
novel evaluation metric called Fidelity to the Original Audio, Accuracy, and
Latency (FAL). This metric aims to overcome the limitations of traditional
metrics used to assess ASR systems.
| [
{
"version": "v1",
"created": "Fri, 27 Oct 2023 08:01:55 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Feb 2024 08:24:56 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Jun 2024 12:06:43 GMT"
},
{
"version": "v4",
"created": "Sun, 2 Mar 2025 12:17:06 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Xie",
"Peng",
""
],
[
"Chen",
"Kani",
""
]
]
| TITLE: Developing a Multilingual Dataset and Evaluation Metrics for
Code-Switching: A Focus on Hong Kong's Polylingual Dynamics
ABSTRACT: The existing audio datasets are predominantly tailored towards single
languages, overlooking the complex linguistic behaviors of multilingual
communities that engage in code-switching. This practice, where individuals
frequently mix two or more languages in their daily interactions, is
particularly prevalent in multilingual regions such as Hong Kong, China. To
bridge this gap, we have developed a 34.8-hour dataset of Mixed Cantonese and
English (MCE) audio using our Multi-Agent Data Generation Framework (MADGF). We
fine-tuned the open-source multilingual Automatic Speech Recognition (ASR)
model, Whisper, with the MCE dataset, leading to impressive zero-shot
performance. The traditional metrics overlook important factors such as latency
in real-world applications and code-switching scenarios. We have introduced a
novel evaluation metric called Fidelity to the Original Audio, Accuracy, and
Latency (FAL). This metric aims to overcome the limitations of traditional
metrics used to assess ASR systems.
| new_dataset | 0.961425 |
2310.18709 | Yaru Chen | Ruohao Guo, Xianghua Ying, Yaru Chen, Dantong Niu, Guangyao Li, Liao
Qu, Yanyu Qi, Jinxing Zhou, Bowei Xing, Wenzhen Yue, Ji Shi, Qixun Wang,
Peiliang Zhang, Buwen Liang | Audio-Visual Instance Segmentation | Accepted by CVPR 2025 | null | null | null | cs.CV cs.LG cs.MM cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose a new multi-modal task, termed audio-visual
instance segmentation (AVIS), which aims to simultaneously identify, segment
and track individual sounding object instances in audible videos. To facilitate
this research, we introduce a high-quality benchmark named AVISeg, containing
over 90K instance masks from 26 semantic categories in 926 long videos.
Additionally, we propose a strong baseline model for this task. Our model first
localizes sound sources within each frame and condenses object-specific
contexts into concise tokens. Then it builds long-range audio-visual
dependencies between these tokens using window-based attention, and tracks
sounding objects among the entire video sequences. Extensive experiments reveal
that our method performs best on AVISeg, surpassing the existing methods from
related tasks. We further conduct the evaluation on several multi-modal large
models. Unfortunately, they exhibit subpar performance on instance-level sound
source localization and temporal perception. We expect that AVIS will inspire
the community towards a more comprehensive multi-modal understanding. Dataset
and code are available at https://github.com/ruohaoguo/avis.
| [
{
"version": "v1",
"created": "Sat, 28 Oct 2023 13:37:52 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Oct 2024 12:19:39 GMT"
},
{
"version": "v3",
"created": "Sat, 2 Nov 2024 11:09:37 GMT"
},
{
"version": "v4",
"created": "Sun, 2 Mar 2025 15:37:39 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Guo",
"Ruohao",
""
],
[
"Ying",
"Xianghua",
""
],
[
"Chen",
"Yaru",
""
],
[
"Niu",
"Dantong",
""
],
[
"Li",
"Guangyao",
""
],
[
"Qu",
"Liao",
""
],
[
"Qi",
"Yanyu",
""
],
[
"Zhou",
"Jinxing",
""
],
[
"Xing",
"Bowei",
""
],
[
"Yue",
"Wenzhen",
""
],
[
"Shi",
"Ji",
""
],
[
"Wang",
"Qixun",
""
],
[
"Zhang",
"Peiliang",
""
],
[
"Liang",
"Buwen",
""
]
]
| TITLE: Audio-Visual Instance Segmentation
ABSTRACT: In this paper, we propose a new multi-modal task, termed audio-visual
instance segmentation (AVIS), which aims to simultaneously identify, segment
and track individual sounding object instances in audible videos. To facilitate
this research, we introduce a high-quality benchmark named AVISeg, containing
over 90K instance masks from 26 semantic categories in 926 long videos.
Additionally, we propose a strong baseline model for this task. Our model first
localizes sound sources within each frame and condenses object-specific
contexts into concise tokens. Then it builds long-range audio-visual
dependencies between these tokens using window-based attention, and tracks
sounding objects among the entire video sequences. Extensive experiments reveal
that our method performs best on AVISeg, surpassing the existing methods from
related tasks. We further conduct the evaluation on several multi-modal large
models. Unfortunately, they exhibit subpar performance on instance-level sound
source localization and temporal perception. We expect that AVIS will inspire
the community towards a more comprehensive multi-modal understanding. Dataset
and code are available at https://github.com/ruohaoguo/avis.
| new_dataset | 0.9549 |
2310.19651 | Chiyu Song | Chiyu Song, Zhanchao Zhou, Jianhao Yan, Yuejiao Fei, Zhenzhong Lan,
Yue Zhang | Dynamics of Instruction Fine-Tuning for Chinese Large Language Models | Accepted to COLING 2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Instruction tuning is a burgeoning method to elicit the general intelligence
of Large Language Models (LLMs). While numerous studies have examined the
impact of factors such as data volume and model size on English models, the
scaling properties of instruction tuning in other languages remain largely
unexplored. In this work, we systematically investigate the effects of data
quantity, model size, and data construction methods on instruction tuning for
Chinese LLMs. We utilize a newly curated dataset, DoIT, which includes over
40,000 high-quality instruction instances covering ten underlying abilities,
such as creative writing, code generation, and logical reasoning. Our
experiments, conducted on models ranging from 7b to 33b parameters, yield three
key findings: (i) While these factors directly affect overall model
performance, some abilities are more responsive to scaling, whereas others
demonstrate significant resistance. (ii) The scaling sensitivity of different
abilities to these factors can be explained by two features: Complexity and
Transference. (iii) By tailoring training strategies to their varying
sensitivities, specific abilities can be efficiently learned, enhancing
performance on two public benchmarks.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 15:37:10 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Feb 2024 13:21:27 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 07:49:17 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Song",
"Chiyu",
""
],
[
"Zhou",
"Zhanchao",
""
],
[
"Yan",
"Jianhao",
""
],
[
"Fei",
"Yuejiao",
""
],
[
"Lan",
"Zhenzhong",
""
],
[
"Zhang",
"Yue",
""
]
]
| TITLE: Dynamics of Instruction Fine-Tuning for Chinese Large Language Models
ABSTRACT: Instruction tuning is a burgeoning method to elicit the general intelligence
of Large Language Models (LLMs). While numerous studies have examined the
impact of factors such as data volume and model size on English models, the
scaling properties of instruction tuning in other languages remain largely
unexplored. In this work, we systematically investigate the effects of data
quantity, model size, and data construction methods on instruction tuning for
Chinese LLMs. We utilize a newly curated dataset, DoIT, which includes over
40,000 high-quality instruction instances covering ten underlying abilities,
such as creative writing, code generation, and logical reasoning. Our
experiments, conducted on models ranging from 7b to 33b parameters, yield three
key findings: (i) While these factors directly affect overall model
performance, some abilities are more responsive to scaling, whereas others
demonstrate significant resistance. (ii) The scaling sensitivity of different
abilities to these factors can be explained by two features: Complexity and
Transference. (iii) By tailoring training strategies to their varying
sensitivities, specific abilities can be efficiently learned, enhancing
performance on two public benchmarks.
| new_dataset | 0.956675 |
2311.13810 | Mahdy Rahman Chowdhury Mahdy | Mohammad Junayed Hasan and M. R. C. Mahdy | Bridging Classical and Quantum Machine Learning: Knowledge Transfer From
Classical to Quantum Neural Networks Using Knowledge Distillation | 26 pages | null | null | null | quant-ph cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Quantum neural networks (QNNs), harnessing superposition and entanglement,
have shown potential to surpass classical methods in complex learning tasks but
remain limited by hardware constraints and noisy conditions. In this work, we
present a novel framework for transferring knowledge from classical
convolutional neural networks (CNNs) to QNNs via knowledge distillation,
thereby reducing the need for resource-intensive quantum training and error
mitigation. We conduct extensive experiments using two parameterized quantum
circuits (PQCs) with 4 and 8 qubits on MNIST, Fashion MNIST, and CIFAR10
datasets. The approach demonstrates consistent accuracy improvements attributed
to distilled knowledge from larger classical networks. Through ablation
studies, we systematically compare the effect of state-of-the-art
dimensionality reduction techniques (fully connected layers, center cropping,
principal component analysis, and pooling) used to compress high-dimensional image
data prior to quantum encoding. Our findings reveal that fully connected layers
retain the most salient features for QNN inference, thereby surpassing other
downsampling approaches. Additionally, we examine state-of-the-art data
encoding methods (amplitude, angle, and qubit encoding) and identify amplitude
encoding as the optimal strategy, yielding superior accuracy across all tested
datasets and qubit configurations. Through computational analyses, we show that
our distilled 4-qubit and 8-qubit QNNs achieve competitive performance while
utilizing significantly fewer parameters than their classical counterparts. Our
results establish a promising paradigm for bridging classical deep learning and
emerging quantum computing, paving the way for more powerful, resource
conscious models in quantum machine intelligence.
| [
{
"version": "v1",
"created": "Thu, 23 Nov 2023 05:06:43 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 17:21:39 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Hasan",
"Mohammad Junayed",
""
],
[
"Mahdy",
"M. R. C.",
""
]
]
| TITLE: Bridging Classical and Quantum Machine Learning: Knowledge Transfer From
Classical to Quantum Neural Networks Using Knowledge Distillation
ABSTRACT: Quantum neural networks (QNNs), harnessing superposition and entanglement,
have shown potential to surpass classical methods in complex learning tasks but
remain limited by hardware constraints and noisy conditions. In this work, we
present a novel framework for transferring knowledge from classical
convolutional neural networks (CNNs) to QNNs via knowledge distillation,
thereby reducing the need for resource intensive quantum training and error
mitigation. We conduct extensive experiments using two parameterized quantum
circuits (PQCs) with 4 and 8 qubits on MNIST, Fashion MNIST, and CIFAR10
datasets. The approach demonstrates consistent accuracy improvements attributed
to distilled knowledge from larger classical networks. Through ablation
studies, we systematically compare the effect of state-of-the-art
dimensionality reduction techniques (fully connected layers, center cropping,
principal component analysis, and pooling) used to compress high-dimensional image
data prior to quantum encoding. Our findings reveal that fully connected layers
retain the most salient features for QNN inference, thereby surpassing other
downsampling approaches. Additionally, we examine state-of-the-art data
encoding methods (amplitude, angle, and qubit encoding) and identify amplitude
encoding as the optimal strategy, yielding superior accuracy across all tested
datasets and qubit configurations. Through computational analyses, we show that
our distilled 4-qubit and 8-qubit QNNs achieve competitive performance while
utilizing significantly fewer parameters than their classical counterparts. Our
results establish a promising paradigm for bridging classical deep learning and
emerging quantum computing, paving the way for more powerful,
resource-conscious models in quantum machine intelligence.
| no_new_dataset | 0.951051 |
2311.14922 | Ge Sun | Ge Sun, Sheng Wang, Lei Zhu, Ming Liu, Jun Ma | GDTS: Goal-Guided Diffusion Model with Tree Sampling for Multi-Modal
Pedestrian Trajectory Prediction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate prediction of pedestrian trajectories is crucial for improving the
safety of autonomous driving. However, this task is generally nontrivial due to
the inherent stochasticity of human motion, which naturally requires the
predictor to generate multi-modal prediction. Previous works leverage various
generative methods, such as GAN and VAE, for pedestrian trajectory prediction.
Nevertheless, these methods may suffer from mode collapse and relatively
low-quality results. The denoising diffusion probabilistic model (DDPM) has
recently been applied to trajectory prediction due to its simple training
process and powerful reconstruction ability. However, current diffusion-based
methods do not fully utilize input information and usually require many
denoising iterations that lead to a long inference time or an additional
network for initialization. To address these challenges and facilitate the use
of diffusion models in multi-modal trajectory prediction, we propose GDTS, a
novel Goal-Guided Diffusion Model with Tree Sampling for multi-modal trajectory
prediction. Considering the "goal-driven" characteristics of human motion, GDTS
leverages goal estimation to guide the generation of the diffusion network. A
two-stage tree sampling algorithm is presented, which leverages common features
to reduce the inference time and improve accuracy for multi-modal prediction.
Experimental results demonstrate that our proposed framework achieves
comparable state-of-the-art performance with real-time inference speed on
public datasets.
| [
{
"version": "v1",
"created": "Sat, 25 Nov 2023 03:55:06 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Sep 2024 12:39:06 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 07:41:00 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Sun",
"Ge",
""
],
[
"Wang",
"Sheng",
""
],
[
"Zhu",
"Lei",
""
],
[
"Liu",
"Ming",
""
],
[
"Ma",
"Jun",
""
]
]
| TITLE: GDTS: Goal-Guided Diffusion Model with Tree Sampling for Multi-Modal
Pedestrian Trajectory Prediction
ABSTRACT: Accurate prediction of pedestrian trajectories is crucial for improving the
safety of autonomous driving. However, this task is generally nontrivial due to
the inherent stochasticity of human motion, which naturally requires the
predictor to generate multi-modal prediction. Previous works leverage various
generative methods, such as GAN and VAE, for pedestrian trajectory prediction.
Nevertheless, these methods may suffer from mode collapse and relatively
low-quality results. The denoising diffusion probabilistic model (DDPM) has
recently been applied to trajectory prediction due to its simple training
process and powerful reconstruction ability. However, current diffusion-based
methods do not fully utilize input information and usually require many
denoising iterations that lead to a long inference time or an additional
network for initialization. To address these challenges and facilitate the use
of diffusion models in multi-modal trajectory prediction, we propose GDTS, a
novel Goal-Guided Diffusion Model with Tree Sampling for multi-modal trajectory
prediction. Considering the "goal-driven" characteristics of human motion, GDTS
leverages goal estimation to guide the generation of the diffusion network. A
two-stage tree sampling algorithm is presented, which leverages common features
to reduce the inference time and improve accuracy for multi-modal prediction.
Experimental results demonstrate that our proposed framework achieves
comparable state-of-the-art performance with real-time inference speed on
public datasets.
| no_new_dataset | 0.943191 |
2312.04465 | Stathis Galanakis | Stathis Galanakis, Alexandros Lattas, Stylianos Moschoglou, Stefanos
Zafeiriou | FitDiff: Robust monocular 3D facial shape and reflectance estimation
using Diffusion Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The remarkable progress in 3D face reconstruction has resulted in high-detail
and photorealistic facial representations. Recently, Diffusion Models have
revolutionized the capabilities of generative methods by surpassing the
performance of GANs. In this work, we present FitDiff, a diffusion-based 3D
facial avatar generative model. Leveraging diffusion principles, our model
accurately generates relightable facial avatars, utilizing an identity
embedding extracted from an "in-the-wild" 2D facial image. The introduced
multi-modal diffusion model is the first to concurrently output facial
reflectance maps (diffuse and specular albedo and normals) and shapes,
showcasing great generalization capabilities. It is solely trained on an
annotated subset of a public facial dataset, paired with 3D reconstructions. We
revisit the typical 3D facial fitting approach by guiding a reverse diffusion
process using perceptual and face recognition losses. Being the first 3D LDM
conditioned on face recognition embeddings, FitDiff reconstructs relightable
human avatars, that can be used as-is in common rendering engines, starting
only from an unconstrained facial image, and achieving state-of-the-art
performance.
| [
{
"version": "v1",
"created": "Thu, 7 Dec 2023 17:35:49 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jun 2024 11:08:25 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 22:24:56 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Galanakis",
"Stathis",
""
],
[
"Lattas",
"Alexandros",
""
],
[
"Moschoglou",
"Stylianos",
""
],
[
"Zafeiriou",
"Stefanos",
""
]
]
| TITLE: FitDiff: Robust monocular 3D facial shape and reflectance estimation
using Diffusion Models
ABSTRACT: The remarkable progress in 3D face reconstruction has resulted in high-detail
and photorealistic facial representations. Recently, Diffusion Models have
revolutionized the capabilities of generative methods by surpassing the
performance of GANs. In this work, we present FitDiff, a diffusion-based 3D
facial avatar generative model. Leveraging diffusion principles, our model
accurately generates relightable facial avatars, utilizing an identity
embedding extracted from an "in-the-wild" 2D facial image. The introduced
multi-modal diffusion model is the first to concurrently output facial
reflectance maps (diffuse and specular albedo and normals) and shapes,
showcasing great generalization capabilities. It is solely trained on an
annotated subset of a public facial dataset, paired with 3D reconstructions. We
revisit the typical 3D facial fitting approach by guiding a reverse diffusion
process using perceptual and face recognition losses. Being the first 3D LDM
conditioned on face recognition embeddings, FitDiff reconstructs relightable
human avatars that can be used as-is in common rendering engines, starting
only from an unconstrained facial image, and achieving state-of-the-art
performance.
| no_new_dataset | 0.948585 |
2312.15289 | Lokesh Veeramacheneni | Lokesh Veeramacheneni (University of Bonn) and Moritz Wolter
(University of Bonn) and Hildegard Kuehne (University of Tuebingen, MIT-IBM
Watson AI Lab) and Juergen Gall (University of Bonn, Lamarr Institute for
Machine Learning and Artificial Intelligence) | Fr\'echet Wavelet Distance: A Domain-Agnostic Metric for Image
Generation | null | null | null | null | cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | Modern metrics for generative learning like Fr\'echet Inception Distance
(FID) and DINOv2-Fr\'echet Distance (FD-DINOv2) demonstrate impressive
performance. However, they suffer from various shortcomings, like a bias
towards specific generators and datasets. To address this problem, we propose
the Fr\'echet Wavelet Distance (FWD) as a domain-agnostic metric based on the
Wavelet Packet Transform ($W_p$). FWD provides a view across a broad spectrum
of frequencies in images at high resolution, preserving both spatial and
textural aspects. Specifically, we use $W_p$ to project generated and real
images to the packet coefficient space. We then compute the Fr\'echet distance
with the resultant coefficients to evaluate the quality of a generator. This
metric is general-purpose and dataset-domain agnostic, as it does not rely on
any pre-trained network, while being more interpretable due to its ability to
compute Fr\'echet distance per packet, enhancing transparency. We conclude from
an extensive evaluation of a wide variety of generators across various datasets
that the proposed FWD generalizes better and is more robust to domain shifts
and various corruptions than other metrics.
| [
{
"version": "v1",
"created": "Sat, 23 Dec 2023 16:10:53 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Jun 2024 09:45:32 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 18:36:56 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Veeramacheneni",
"Lokesh",
"",
"University of Bonn"
],
[
"Wolter",
"Moritz",
"",
"University of Bonn"
],
[
"Kuehne",
"Hildegard",
"",
"University of Tuebingen, MIT-IBM\n Watson AI Lab"
],
[
"Gall",
"Juergen",
"",
"University of Bonn, Lamarr Institute for\n Machine Learning and Artificial Intelligence"
]
]
| TITLE: Fr\'echet Wavelet Distance: A Domain-Agnostic Metric for Image
Generation
ABSTRACT: Modern metrics for generative learning like Fr\'echet Inception Distance
(FID) and DINOv2-Fr\'echet Distance (FD-DINOv2) demonstrate impressive
performance. However, they suffer from various shortcomings, like a bias
towards specific generators and datasets. To address this problem, we propose
the Fr\'echet Wavelet Distance (FWD) as a domain-agnostic metric based on the
Wavelet Packet Transform ($W_p$). FWD provides a view across a broad spectrum
of frequencies in images at high resolution, preserving both spatial and
textural aspects. Specifically, we use $W_p$ to project generated and real
images to the packet coefficient space. We then compute the Fr\'echet distance
with the resultant coefficients to evaluate the quality of a generator. This
metric is general-purpose and dataset-domain agnostic, as it does not rely on
any pre-trained network, while being more interpretable due to its ability to
compute Fr\'echet distance per packet, enhancing transparency. We conclude from
an extensive evaluation of a wide variety of generators across various datasets
that the proposed FWD generalizes better and is more robust to domain shifts
and various corruptions than other metrics.
| no_new_dataset | 0.951729 |
2401.04364 | Shahroz Tariq | Binh M. Le, Jiwon Kim, Simon S. Woo, Kristen Moore, Alsharif Abuadbba,
Shahroz Tariq | SoK: Systematization and Benchmarking of Deepfake Detectors in a Unified
Framework | 20 pages, 6 figures, 7 tables, Accepted at IEEE European Symposium on
Security and Privacy 2025 (EuroS&P '25) | null | null | null | cs.CV cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deepfakes have rapidly emerged as a serious threat to society due to their
ease of creation and dissemination, triggering the accelerated development of
detection technologies. However, many existing detectors rely on lab-generated
datasets for validation, which may not prepare them for novel, real-world
deepfakes. This paper extensively reviews and analyzes state-of-the-art
deepfake detectors, evaluating them against several critical criteria. These
criteria categorize detectors into 4 high-level groups and 13 fine-grained
sub-groups, aligned with a unified conceptual framework we propose. This
classification offers practical insights into the factors affecting detector
efficacy. We evaluate the generalizability of 16 leading detectors across
comprehensive attack scenarios, including black-box, white-box, and gray-box
settings. Our systematized analysis and experiments provide a deeper
understanding of deepfake detectors and their generalizability, paving the way
for future research and the development of more proactive defenses against
deepfakes.
| [
{
"version": "v1",
"created": "Tue, 9 Jan 2024 05:32:22 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jun 2024 09:02:42 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Feb 2025 10:52:15 GMT"
},
{
"version": "v4",
"created": "Sun, 2 Mar 2025 02:32:25 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Le",
"Binh M.",
""
],
[
"Kim",
"Jiwon",
""
],
[
"Woo",
"Simon S.",
""
],
[
"Moore",
"Kristen",
""
],
[
"Abuadbba",
"Alsharif",
""
],
[
"Tariq",
"Shahroz",
""
]
]
| TITLE: SoK: Systematization and Benchmarking of Deepfake Detectors in a Unified
Framework
ABSTRACT: Deepfakes have rapidly emerged as a serious threat to society due to their
ease of creation and dissemination, triggering the accelerated development of
detection technologies. However, many existing detectors rely on lab-generated
datasets for validation, which may not prepare them for novel, real-world
deepfakes. This paper extensively reviews and analyzes state-of-the-art
deepfake detectors, evaluating them against several critical criteria. These
criteria categorize detectors into 4 high-level groups and 13 fine-grained
sub-groups, aligned with a unified conceptual framework we propose. This
classification offers practical insights into the factors affecting detector
efficacy. We evaluate the generalizability of 16 leading detectors across
comprehensive attack scenarios, including black-box, white-box, and gray-box
settings. Our systematized analysis and experiments provide a deeper
understanding of deepfake detectors and their generalizability, paving the way
for future research and the development of more proactive defenses against
deepfakes.
| no_new_dataset | 0.950319 |
2401.08603 | Achref Jaziri | Achref Jaziri, Sina Ditzel, Iuliia Pliushch, Visvanathan Ramesh | Representation Learning in a Decomposed Encoder Design for Bio-inspired
Hebbian Learning | Published at ECCV2024 Human-Inspired Computer Vision Workshop | null | null | null | cs.NE cs.LG | http://creativecommons.org/licenses/by/4.0/ | Modern data-driven machine learning system designs exploit inductive biases
in architectural structure, invariance and equivariance requirements,
task-specific loss functions, and computational optimization tools. Previous
works have illustrated that human-specified quasi-invariant filters can serve
as a powerful inductive bias in the early layers of the encoder, enhancing
robustness and transparency in learned classifiers. This paper explores this
further within the context of representation learning with bio-inspired Hebbian
learning rules. We propose a modular framework trained with a bio-inspired
variant of contrastive predictive coding, comprising parallel encoders that
leverage different invariant visual descriptors as inductive biases. We
evaluate the representation learning capacity of our system in classification
scenarios using diverse image datasets (GTSRB, STL10, CODEBRIM) and video
datasets (UCF101). Our findings indicate that this form of inductive bias
significantly improves the robustness of learned representations and narrows
the performance gap between models using local Hebbian plasticity rules and
those using backpropagation, while also achieving superior performance compared
to non-decomposed encoders.
| [
{
"version": "v1",
"created": "Wed, 22 Nov 2023 07:58:14 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 14:17:49 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Jaziri",
"Achref",
""
],
[
"Ditzel",
"Sina",
""
],
[
"Pliushch",
"Iuliia",
""
],
[
"Ramesh",
"Visvanathan",
""
]
]
| TITLE: Representation Learning in a Decomposed Encoder Design for Bio-inspired
Hebbian Learning
ABSTRACT: Modern data-driven machine learning system designs exploit inductive biases
in architectural structure, invariance and equivariance requirements,
task-specific loss functions, and computational optimization tools. Previous
works have illustrated that human-specified quasi-invariant filters can serve
as a powerful inductive bias in the early layers of the encoder, enhancing
robustness and transparency in learned classifiers. This paper explores this
further within the context of representation learning with bio-inspired Hebbian
learning rules. We propose a modular framework trained with a bio-inspired
variant of contrastive predictive coding, comprising parallel encoders that
leverage different invariant visual descriptors as inductive biases. We
evaluate the representation learning capacity of our system in classification
scenarios using diverse image datasets (GTSRB, STL10, CODEBRIM) and video
datasets (UCF101). Our findings indicate that this form of inductive bias
significantly improves the robustness of learned representations and narrows
the performance gap between models using local Hebbian plasticity rules and
those using backpropagation, while also achieving superior performance compared
to non-decomposed encoders.
| no_new_dataset | 0.945951 |
2401.17116 | Sahil Gulania | Sahil Gulania, Yuri Alexeev, Stephen K. Gray, Bo Peng, Niranjan Govind | Quantum time dynamics mediated by the Yang-Baxter equation and
artificial neural networks | null | null | null | null | quant-ph cond-mat.soft cs.LG physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | Quantum computing shows great potential, but errors pose a significant
challenge. This study explores new strategies for mitigating quantum errors
using artificial neural networks (ANN) and the Yang-Baxter equation (YBE).
Unlike traditional error mitigation methods, which are computationally
intensive, we investigate artificial error mitigation. We developed a novel
method that combines ANN for noise mitigation combined with the YBE to generate
noisy data. This approach effectively reduces noise in quantum simulations,
enhancing the accuracy of the results. The YBE rigorously preserves quantum
correlations and symmetries in spin chain simulations in certain classes of
integrable lattice models, enabling effective compression of quantum circuits
while retaining linear scalability with the number of qubits. This compression
facilitates both full and partial implementations, allowing the generation of
noisy quantum data on hardware alongside noiseless simulations using classical
platforms. By introducing controlled noise through the YBE, we enhance the
dataset for error mitigation. We train an ANN model on partial data from
quantum simulations, demonstrating its effectiveness in mitigating errors in
time-evolving quantum states, providing a scalable framework to enhance quantum
computation fidelity, particularly in noisy intermediate-scale quantum (NISQ)
systems. We demonstrate the efficacy of this approach by performing quantum
time dynamics simulations using the Heisenberg XY Hamiltonian on real quantum
devices.
| [
{
"version": "v1",
"created": "Tue, 30 Jan 2024 15:50:06 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 23:04:57 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Gulania",
"Sahil",
""
],
[
"Alexeev",
"Yuri",
""
],
[
"Gray",
"Stephen K.",
""
],
[
"Peng",
"Bo",
""
],
[
"Govind",
"Niranjan",
""
]
]
| TITLE: Quantum time dynamics mediated by the Yang-Baxter equation and
artificial neural networks
ABSTRACT: Quantum computing shows great potential, but errors pose a significant
challenge. This study explores new strategies for mitigating quantum errors
using artificial neural networks (ANN) and the Yang-Baxter equation (YBE).
Unlike traditional error mitigation methods, which are computationally
intensive, we investigate artificial error mitigation. We developed a novel
method that combines an ANN for noise mitigation with the YBE to generate
noisy data. This approach effectively reduces noise in quantum simulations,
enhancing the accuracy of the results. The YBE rigorously preserves quantum
correlations and symmetries in spin chain simulations in certain classes of
integrable lattice models, enabling effective compression of quantum circuits
while retaining linear scalability with the number of qubits. This compression
facilitates both full and partial implementations, allowing the generation of
noisy quantum data on hardware alongside noiseless simulations using classical
platforms. By introducing controlled noise through the YBE, we enhance the
dataset for error mitigation. We train an ANN model on partial data from
quantum simulations, demonstrating its effectiveness in mitigating errors in
time-evolving quantum states, providing a scalable framework to enhance quantum
computation fidelity, particularly in noisy intermediate-scale quantum (NISQ)
systems. We demonstrate the efficacy of this approach by performing quantum
time dynamics simulations using the Heisenberg XY Hamiltonian on real quantum
devices.
| no_new_dataset | 0.950732 |
2402.02005 | Minho Lee | Yun Young Choi, Sun Woo Park, Minho Lee, Youngho Woo | Topology-Informed Graph Transformer | Proceedings of the Geometry-grounded Representation Learning and
Generative Modeling Workshop (GRaM) at ICML 2024 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Transformers have revolutionized performance in Natural Language Processing
and Vision, paving the way for their integration with Graph Neural Networks
(GNNs). One key challenge in enhancing graph transformers is strengthening the
discriminative power of distinguishing isomorphisms of graphs, which plays a
crucial role in boosting their predictive performances. To address this
challenge, we introduce 'Topology-Informed Graph Transformer (TIGT)', a novel
transformer enhancing both discriminative power in detecting graph isomorphisms
and the overall performance of Graph Transformers. TIGT consists of four
components: a topological positional embedding layer using non-isomorphic
universal covers based on cyclic subgraphs of graphs to ensure unique graph
representation; a dual-path message-passing layer to explicitly encode
topological characteristics throughout the encoder layers; a global attention
mechanism; and a graph information layer to recalibrate channel-wise graph
features for better feature representation. TIGT outperforms previous Graph
Transformers in classifying a synthetic dataset aimed at distinguishing
isomorphism classes of graphs. Additionally, mathematical analysis and
empirical evaluations highlight our model's competitive edge over
state-of-the-art Graph Transformers across various benchmark datasets.
| [
{
"version": "v1",
"created": "Sat, 3 Feb 2024 03:17:44 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 13:45:42 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Choi",
"Yun Young",
""
],
[
"Park",
"Sun Woo",
""
],
[
"Lee",
"Minho",
""
],
[
"Woo",
"Youngho",
""
]
]
| TITLE: Topology-Informed Graph Transformer
ABSTRACT: Transformers have revolutionized performance in Natural Language Processing
and Vision, paving the way for their integration with Graph Neural Networks
(GNNs). One key challenge in enhancing graph transformers is strengthening the
discriminative power of distinguishing isomorphisms of graphs, which plays a
crucial role in boosting their predictive performances. To address this
challenge, we introduce 'Topology-Informed Graph Transformer (TIGT)', a novel
transformer enhancing both discriminative power in detecting graph isomorphisms
and the overall performance of Graph Transformers. TIGT consists of four
components: a topological positional embedding layer using non-isomorphic
universal covers based on cyclic subgraphs of graphs to ensure unique graph
representation; a dual-path message-passing layer to explicitly encode
topological characteristics throughout the encoder layers; a global attention
mechanism; and a graph information layer to recalibrate channel-wise graph
features for better feature representation. TIGT outperforms previous Graph
Transformers in classifying a synthetic dataset aimed at distinguishing
isomorphism classes of graphs. Additionally, mathematical analysis and
empirical evaluations highlight our model's competitive edge over
state-of-the-art Graph Transformers across various benchmark datasets.
| no_new_dataset | 0.941922 |
2402.02112 | Yurui Chen | Yurui Chen, Junge Zhang, Ziyang Xie, Wenye Li, Feihu Zhang, Jiachen
Lu, Li Zhang | S-NeRF++: Autonomous Driving Simulation via Neural Reconstruction and
Generation | IEEE TPAMI 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous driving simulation systems play a crucial role in enhancing
self-driving data and simulating complex and rare traffic scenarios, ensuring
navigation safety. However, traditional simulation systems, which often heavily
rely on manual modeling and 2D image editing, struggle with scaling to
extensive scenes and generating realistic simulation data. In this study, we
present S-NeRF++, an innovative autonomous driving simulation system based on
neural reconstruction. Trained on widely-used self-driving datasets such as
nuScenes and Waymo, S-NeRF++ can generate a large number of realistic street
scenes and foreground objects with high rendering quality as well as offering
considerable flexibility in manipulation and simulation. Specifically, S-NeRF++
is an enhanced neural radiance field for synthesizing large-scale scenes and
moving vehicles, with improved scene parameterization and camera pose learning.
The system effectively utilizes noisy and sparse LiDAR data to refine training
and address depth outliers, ensuring high-quality reconstruction and novel-view
rendering. It also provides a diverse foreground asset bank by reconstructing
and generating different foreground vehicles to support comprehensive scenario
creation. Moreover, we have developed an advanced foreground-background fusion
pipeline that skillfully integrates illumination and shadow effects, further
enhancing the realism of our simulations. With the high-quality simulated data
provided by our S-NeRF++, we found that the perception methods enjoy performance
boosts on several autonomous driving downstream tasks, further demonstrating
our proposed simulator's effectiveness.
| [
{
"version": "v1",
"created": "Sat, 3 Feb 2024 10:35:42 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Sep 2024 02:20:05 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Jan 2025 08:23:51 GMT"
},
{
"version": "v4",
"created": "Fri, 21 Feb 2025 07:11:48 GMT"
},
{
"version": "v5",
"created": "Mon, 3 Mar 2025 04:42:15 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chen",
"Yurui",
""
],
[
"Zhang",
"Junge",
""
],
[
"Xie",
"Ziyang",
""
],
[
"Li",
"Wenye",
""
],
[
"Zhang",
"Feihu",
""
],
[
"Lu",
"Jiachen",
""
],
[
"Zhang",
"Li",
""
]
]
| TITLE: S-NeRF++: Autonomous Driving Simulation via Neural Reconstruction and
Generation
ABSTRACT: Autonomous driving simulation systems play a crucial role in enhancing
self-driving data and simulating complex and rare traffic scenarios, ensuring
navigation safety. However, traditional simulation systems, which often heavily
rely on manual modeling and 2D image editing, struggle with scaling to
extensive scenes and generating realistic simulation data. In this study, we
present S-NeRF++, an innovative autonomous driving simulation system based on
neural reconstruction. Trained on widely-used self-driving datasets such as
nuScenes and Waymo, S-NeRF++ can generate a large number of realistic street
scenes and foreground objects with high rendering quality as well as offering
considerable flexibility in manipulation and simulation. Specifically, S-NeRF++
is an enhanced neural radiance field for synthesizing large-scale scenes and
moving vehicles, with improved scene parameterization and camera pose learning.
The system effectively utilizes noisy and sparse LiDAR data to refine training
and address depth outliers, ensuring high-quality reconstruction and novel-view
rendering. It also provides a diverse foreground asset bank by reconstructing
and generating different foreground vehicles to support comprehensive scenario
creation. Moreover, we have developed an advanced foreground-background fusion
pipeline that skillfully integrates illumination and shadow effects, further
enhancing the realism of our simulations. With the high-quality simulated data
provided by our S-NeRF++, we found that the perception methods enjoy performance
boosts on several autonomous driving downstream tasks, further demonstrating
our proposed simulator's effectiveness.
| no_new_dataset | 0.951414 |
2402.02611 | Krishna Kartik | Chinmay Mittal, Krishna Kartik, Mausam, Parag Singla | FCoReBench: Can Large Language Models Solve Challenging First-Order
Combinatorial Reasoning Problems? | null | null | null | null | cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Can the large language models (LLMs) solve challenging first-order
combinatorial reasoning problems such as graph coloring, knapsack, and
cryptarithmetic? By first-order, we mean these problems can be instantiated
with potentially an infinite number of problem instances of varying sizes. They
are also challenging, being NP-hard and requiring several reasoning steps to
reach a solution. While existing work has focused on coming up with datasets
with hard benchmarks, there is limited work which exploits the first-order
nature of the problem structure. To address this challenge, we present
FCoReBench, a dataset of 40 such challenging problems, along with scripts to
generate problem instances of varying sizes and automatically verify and
generate their solutions. We first observe that LLMs, even when aided by
symbolic solvers, perform rather poorly on our dataset, being unable to
leverage the underlying structure of these problems. We specifically observe a
drop in performance with increasing problem size. In response, we propose a new
approach, SymPro-LM, which combines LLMs with both symbolic solvers and program
interpreters, along with feedback from a few solved examples, to achieve huge
performance gains. Our proposed approach is robust to changes in the problem
size, and has the unique characteristic of not requiring any LLM call during
inference time, unlike earlier approaches. As an additional experiment, we also
demonstrate SymPro-LM's effectiveness on other logical reasoning benchmarks.
| [
{
"version": "v1",
"created": "Sun, 4 Feb 2024 20:56:09 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Feb 2024 14:42:45 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 12:46:25 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Mittal",
"Chinmay",
""
],
[
"Kartik",
"Krishna",
""
],
[
"Mausam",
"",
""
],
[
"Singla",
"Parag",
""
]
]
| TITLE: FCoReBench: Can Large Language Models Solve Challenging First-Order
Combinatorial Reasoning Problems?
ABSTRACT: Can the large language models (LLMs) solve challenging first-order
combinatorial reasoning problems such as graph coloring, knapsack, and
cryptarithmetic? By first-order, we mean these problems can be instantiated
with potentially an infinite number of problem instances of varying sizes. They
are also challenging, being NP-hard and requiring several reasoning steps to
reach a solution. While existing work has focused on coming up with datasets
with hard benchmarks, there is limited work which exploits the first-order
nature of the problem structure. To address this challenge, we present
FCoReBench, a dataset of 40 such challenging problems, along with scripts to
generate problem instances of varying sizes and automatically verify and
generate their solutions. We first observe that LLMs, even when aided by
symbolic solvers, perform rather poorly on our dataset, being unable to
leverage the underlying structure of these problems. We specifically observe a
drop in performance with increasing problem size. In response, we propose a new
approach, SymPro-LM, which combines LLMs with both symbolic solvers and program
interpreters, along with feedback from a few solved examples, to achieve huge
performance gains. Our proposed approach is robust to changes in the problem
size, and has the unique characteristic of not requiring any LLM call during
inference time, unlike earlier approaches. As an additional experiment, we also
demonstrate SymPro-LM's effectiveness on other logical reasoning benchmarks.
| new_dataset | 0.957636 |
2402.13496 | Mingyu Guan | Mingyu Guan, Jack W. Stokes, Qinlong Luo, Fuchen Liu, Purvanshi Mehta,
Elnaz Nouri, Taesoo Kim | Heterogeneous Graph Neural Network on Semantic Tree | Accepted at AAAI 2025 | null | null | null | cs.LG cs.SI | http://creativecommons.org/licenses/by/4.0/ | The recent past has seen an increasing interest in Heterogeneous Graph Neural
Networks (HGNNs), since many real-world graphs are heterogeneous in nature,
from citation graphs to email graphs. However, existing methods ignore a tree
hierarchy among metapaths, naturally constituted by different node types and
relation types. In this paper, we present HetTree, a novel HGNN that models
both the graph structure and heterogeneous aspects in a scalable and effective
manner. Specifically, HetTree builds a semantic tree data structure to capture
the hierarchy among metapaths. To effectively encode the semantic tree, HetTree
uses a novel subtree attention mechanism to emphasize metapaths that are more
helpful in encoding parent-child relationships. Moreover, HetTree carefully
matches pre-computed features with their corresponding labels, constituting a
complete metapath representation. Our evaluation of HetTree on a
variety of real-world datasets demonstrates that it outperforms all existing
baselines on open benchmarks and efficiently scales to large real-world graphs
with millions of nodes and edges.
| [
{
"version": "v1",
"created": "Wed, 21 Feb 2024 03:14:45 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 22:34:01 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Guan",
"Mingyu",
""
],
[
"Stokes",
"Jack W.",
""
],
[
"Luo",
"Qinlong",
""
],
[
"Liu",
"Fuchen",
""
],
[
"Mehta",
"Purvanshi",
""
],
[
"Nouri",
"Elnaz",
""
],
[
"Kim",
"Taesoo",
""
]
]
| TITLE: Heterogeneous Graph Neural Network on Semantic Tree
ABSTRACT: The recent past has seen an increasing interest in Heterogeneous Graph Neural
Networks (HGNNs), since many real-world graphs are heterogeneous in nature,
from citation graphs to email graphs. However, existing methods ignore a tree
hierarchy among metapaths, naturally constituted by different node types and
relation types. In this paper, we present HetTree, a novel HGNN that models
both the graph structure and heterogeneous aspects in a scalable and effective
manner. Specifically, HetTree builds a semantic tree data structure to capture
the hierarchy among metapaths. To effectively encode the semantic tree, HetTree
uses a novel subtree attention mechanism to emphasize metapaths that are more
helpful in encoding parent-child relationships. Moreover, HetTree carefully
matches pre-computed features with their corresponding labels, constituting a
complete metapath representation. Our evaluation of HetTree on a
variety of real-world datasets demonstrates that it outperforms all existing
baselines on open benchmarks and efficiently scales to large real-world graphs
with millions of nodes and edges.
| no_new_dataset | 0.945801 |
2402.15724 | Yuanhanqing Huang | Yuanhanqing Huang and Jianghai Hu | Offline Learning of Decision Functions in Multiplayer Games with
Expectation Constraints | null | null | null | null | math.OC cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore a class of stochastic multiplayer games where each player in the
game aims to optimize its objective under uncertainty and adheres to some
expectation constraints. The study employs an offline learning paradigm,
leveraging a pre-existing dataset containing auxiliary features. While prior
research in deterministic and stochastic multiplayer games primarily explored
vector-valued decisions, this work departs by considering function-valued
decisions that incorporate auxiliary features as input. We leverage the law of
large deviations and degree theory to establish the almost sure convergence of
the offline learning solution to the true solution as the number of data
samples increases. Finally, we demonstrate the validity of our method via a
multi-account portfolio optimization problem.
| [
{
"version": "v1",
"created": "Sat, 24 Feb 2024 05:19:33 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 03:44:02 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Huang",
"Yuanhanqing",
""
],
[
"Hu",
"Jianghai",
""
]
]
| TITLE: Offline Learning of Decision Functions in Multiplayer Games with
Expectation Constraints
ABSTRACT: We explore a class of stochastic multiplayer games where each player in the
game aims to optimize its objective under uncertainty and adheres to some
expectation constraints. The study employs an offline learning paradigm,
leveraging a pre-existing dataset containing auxiliary features. While prior
research in deterministic and stochastic multiplayer games primarily explored
vector-valued decisions, this work departs by considering function-valued
decisions that incorporate auxiliary features as input. We leverage the law of
large deviations and degree theory to establish the almost sure convergence of
the offline learning solution to the true solution as the number of data
samples increases. Finally, we demonstrate the validity of our method via a
multi-account portfolio optimization problem.
| no_new_dataset | 0.946448 |
2402.17371 | Michael Toker | Michael Toker, Oren Mishali, Ophir M\"unz-Manor, Benny Kimelfeld,
Yonatan Belinkov | A Dataset for Metaphor Detection in Early Medieval Hebrew Poetry | EACL 2024. Project webpage: https://tokeron.github.io/metaphor/ | https://aclanthology.org/2024.eacl-short.39/ | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a large volume of late antique and medieval Hebrew texts. They
represent a crucial linguistic and cultural bridge between Biblical and modern
Hebrew. Poetry is prominent in these texts, and one of its main characteristics
is the frequent use of metaphor. Distinguishing figurative and literal language
use is a major task for scholars of the Humanities, especially in the fields of
literature, linguistics, and hermeneutics. This paper presents a new,
challenging dataset of late antique and medieval Hebrew poetry with expert
annotations of metaphor, as well as some baseline results, which we hope will
facilitate further research in this area.
| [
{
"version": "v1",
"created": "Tue, 27 Feb 2024 10:09:40 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Toker",
"Michael",
""
],
[
"Mishali",
"Oren",
""
],
[
"Münz-Manor",
"Ophir",
""
],
[
"Kimelfeld",
"Benny",
""
],
[
"Belinkov",
"Yonatan",
""
]
]
| TITLE: A Dataset for Metaphor Detection in Early Medieval Hebrew Poetry
ABSTRACT: There is a large volume of late antique and medieval Hebrew texts. They
represent a crucial linguistic and cultural bridge between Biblical and modern
Hebrew. Poetry is prominent in these texts, and one of its main characteristics
is the frequent use of metaphor. Distinguishing figurative and literal language
use is a major task for scholars of the Humanities, especially in the fields of
literature, linguistics, and hermeneutics. This paper presents a new,
challenging dataset of late antique and medieval Hebrew poetry with expert
annotations of metaphor, as well as some baseline results, which we hope will
facilitate further research in this area.
| new_dataset | 0.956796 |
2402.18180 | Qiujie Xie | Qiuejie Xie, Qiming Feng, Tianqi Zhang, Qingqiu Li, Linyi Yang, Yuejie
Zhang, Rui Feng, Liang He, Shang Gao, Yue Zhang | Human Simulacra: Benchmarking the Personification of Large Language
Models | ICLR 2025 | null | null | null | cs.CY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large language models (LLMs) are recognized as systems that closely mimic
aspects of human intelligence. This capability has attracted attention from the
social science community, who see the potential in leveraging LLMs to replace
human participants in experiments, thereby reducing research costs and
complexity. In this paper, we introduce a framework for large language model
personification, including a strategy for constructing virtual characters' life
stories from the ground up, a Multi-Agent Cognitive Mechanism capable of
simulating human cognitive processes, and a psychology-guided evaluation method
to assess human simulations from both self and observational perspectives.
Experimental results demonstrate that our constructed simulacra can produce
personified responses that align with their target characters. Our work is a
preliminary exploration which offers great potential in practical applications.
All the code and datasets will be released, with the hope of inspiring further
investigations. Our code and dataset are available at:
https://github.com/hasakiXie123/Human-Simulacra.
| [
{
"version": "v1",
"created": "Wed, 28 Feb 2024 09:11:14 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Mar 2024 08:49:08 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Mar 2024 13:03:51 GMT"
},
{
"version": "v4",
"created": "Mon, 18 Mar 2024 07:29:43 GMT"
},
{
"version": "v5",
"created": "Mon, 10 Jun 2024 02:56:59 GMT"
},
{
"version": "v6",
"created": "Sun, 2 Mar 2025 05:03:25 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Xie",
"Qiuejie",
""
],
[
"Feng",
"Qiming",
""
],
[
"Zhang",
"Tianqi",
""
],
[
"Li",
"Qingqiu",
""
],
[
"Yang",
"Linyi",
""
],
[
"Zhang",
"Yuejie",
""
],
[
"Feng",
"Rui",
""
],
[
"He",
"Liang",
""
],
[
"Gao",
"Shang",
""
],
[
"Zhang",
"Yue",
""
]
]
| TITLE: Human Simulacra: Benchmarking the Personification of Large Language
Models
ABSTRACT: Large language models (LLMs) are recognized as systems that closely mimic
aspects of human intelligence. This capability has attracted attention from the
social science community, who see the potential in leveraging LLMs to replace
human participants in experiments, thereby reducing research costs and
complexity. In this paper, we introduce a framework for large language model
personification, including a strategy for constructing virtual characters' life
stories from the ground up, a Multi-Agent Cognitive Mechanism capable of
simulating human cognitive processes, and a psychology-guided evaluation method
to assess human simulations from both self and observational perspectives.
Experimental results demonstrate that our constructed simulacra can produce
personified responses that align with their target characters. Our work is a
preliminary exploration which offers great potential in practical applications.
All the code and datasets will be released, with the hope of inspiring further
investigations. Our code and dataset are available at:
https://github.com/hasakiXie123/Human-Simulacra.
| no_new_dataset | 0.76895 |
2403.02957 | Benedikt Fesl | Benedikt Fesl and Benedikt B\"ock and Florian Strasser and Michael
Baur and Michael Joham and Wolfgang Utschick | On the Asymptotic Mean Square Error Optimality of Diffusion Models | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Diffusion models (DMs) as generative priors have recently shown great
potential for denoising tasks but lack theoretical understanding with respect
to their mean square error (MSE) optimality. This paper proposes a novel
denoising strategy inspired by the structure of the MSE-optimal conditional
mean estimator (CME). The resulting DM-based denoiser can be conveniently
employed using a pre-trained DM, being particularly fast by truncating reverse
diffusion steps and not requiring stochastic re-sampling. We present a
comprehensive (non-)asymptotic optimality analysis of the proposed
diffusion-based denoiser, demonstrating polynomial-time convergence to the CME
under mild conditions. Our analysis also derives a novel Lipschitz constant
that depends solely on the DM's hyperparameters. Further, we offer a new
perspective on DMs, showing that they inherently combine an asymptotically
optimal denoiser with a powerful generator, modifiable by switching re-sampling
in the reverse process on or off. The theoretical findings are thoroughly
validated with experiments based on various benchmark datasets.
| [
{
"version": "v1",
"created": "Tue, 5 Mar 2024 13:25:44 GMT"
},
{
"version": "v2",
"created": "Thu, 23 May 2024 09:39:31 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Feb 2025 17:16:19 GMT"
},
{
"version": "v4",
"created": "Sun, 2 Mar 2025 10:59:52 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Fesl",
"Benedikt",
""
],
[
"Böck",
"Benedikt",
""
],
[
"Strasser",
"Florian",
""
],
[
"Baur",
"Michael",
""
],
[
"Joham",
"Michael",
""
],
[
"Utschick",
"Wolfgang",
""
]
]
| TITLE: On the Asymptotic Mean Square Error Optimality of Diffusion Models
ABSTRACT: Diffusion models (DMs) as generative priors have recently shown great
potential for denoising tasks but lack theoretical understanding with respect
to their mean square error (MSE) optimality. This paper proposes a novel
denoising strategy inspired by the structure of the MSE-optimal conditional
mean estimator (CME). The resulting DM-based denoiser can be conveniently
employed using a pre-trained DM, being particularly fast by truncating reverse
diffusion steps and not requiring stochastic re-sampling. We present a
comprehensive (non-)asymptotic optimality analysis of the proposed
diffusion-based denoiser, demonstrating polynomial-time convergence to the CME
under mild conditions. Our analysis also derives a novel Lipschitz constant
that depends solely on the DM's hyperparameters. Further, we offer a new
perspective on DMs, showing that they inherently combine an asymptotically
optimal denoiser with a powerful generator, modifiable by switching re-sampling
in the reverse process on or off. The theoretical findings are thoroughly
validated with experiments based on various benchmark datasets.
| no_new_dataset | 0.942454 |
2403.03636 | Yibin Chen | Yibin Chen, Yifu Yuan, Zeyu Zhang, Yan Zheng, Jinyi Liu, Fei Ni,
Jianye Hao, Hangyu Mao, Fuzheng Zhang | SheetAgent: Towards A Generalist Agent for Spreadsheet Reasoning and
Manipulation via Large Language Models | Accepted by International World Wide Web Conference (WWW) 2025 (oral) | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spreadsheets are ubiquitous across the World Wide Web, playing a critical
role in enhancing work efficiency across various domains. Large language models
(LLMs) have recently been applied to automatic spreadsheet manipulation but
have not yet been investigated in complicated and realistic tasks where
reasoning challenges exist (e.g., long-horizon manipulation with multi-step
reasoning and ambiguous requirements). To bridge the gap with real-world
requirements, we introduce SheetRM, a benchmark featuring long-horizon and
multi-category tasks with reasoning-dependent manipulation caused by real-life
challenges. To mitigate the above challenges, we further propose SheetAgent, a
novel autonomous agent that utilizes the power of LLMs. SheetAgent consists of
three collaborative modules: Planner, Informer, and Retriever, achieving both
advanced reasoning and accurate manipulation over spreadsheets without human
interaction through iterative task reasoning and reflection. Extensive
experiments demonstrate that SheetAgent delivers 20--40\% pass rate
improvements on multiple benchmarks over baselines, achieving enhanced
precision in spreadsheet manipulation and demonstrating superior table
reasoning abilities. More details and visualizations are available at the
project website: https://sheetagent.github.io/. The datasets and source code
are available at https://anonymous.4open.science/r/SheetAgent.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2024 11:48:08 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Aug 2024 17:03:11 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 06:56:29 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chen",
"Yibin",
""
],
[
"Yuan",
"Yifu",
""
],
[
"Zhang",
"Zeyu",
""
],
[
"Zheng",
"Yan",
""
],
[
"Liu",
"Jinyi",
""
],
[
"Ni",
"Fei",
""
],
[
"Hao",
"Jianye",
""
],
[
"Mao",
"Hangyu",
""
],
[
"Zhang",
"Fuzheng",
""
]
]
| TITLE: SheetAgent: Towards A Generalist Agent for Spreadsheet Reasoning and
Manipulation via Large Language Models
ABSTRACT: Spreadsheets are ubiquitous across the World Wide Web, playing a critical
role in enhancing work efficiency across various domains. Large language models
(LLMs) have recently been applied to automatic spreadsheet manipulation but
have not yet been investigated in complicated and realistic tasks where
reasoning challenges exist (e.g., long-horizon manipulation with multi-step
reasoning and ambiguous requirements). To bridge the gap with real-world
requirements, we introduce SheetRM, a benchmark featuring long-horizon and
multi-category tasks with reasoning-dependent manipulation caused by real-life
challenges. To mitigate the above challenges, we further propose SheetAgent, a
novel autonomous agent that utilizes the power of LLMs. SheetAgent consists of
three collaborative modules: Planner, Informer, and Retriever, achieving both
advanced reasoning and accurate manipulation over spreadsheets without human
interaction through iterative task reasoning and reflection. Extensive
experiments demonstrate that SheetAgent delivers 20--40\% pass rate
improvements on multiple benchmarks over baselines, achieving enhanced
precision in spreadsheet manipulation and demonstrating superior table
reasoning abilities. More details and visualizations are available at the
project website: https://sheetagent.github.io/. The datasets and source code
are available at https://anonymous.4open.science/r/SheetAgent.
| no_new_dataset | 0.949949 |
2403.06865 | Mohamed El Louadi | Mohamed El Louadi | On the Preservation of Africa's Cultural Heritage in the Age of
Artificial Intelligence | 11 pages, 2 figures | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | In this paper we delve into the historical evolution of data as a fundamental
element in communication and knowledge transmission. The paper traces the
stages of knowledge dissemination from oral traditions to the digital era,
highlighting the significance of languages and cultural diversity in this
progression. It also explores the impact of digital technologies on memory,
communication, and cultural preservation, emphasizing the need for promoting a
culture of the digital (rather than a digital culture) in Africa and beyond.
Additionally, it discusses the challenges and opportunities presented by data
biases in AI development, underscoring the importance of creating diverse
datasets for equitable representation. We advocate for investing in data as a
crucial raw material for fostering digital literacy, economic development, and,
above all, cultural preservation in the digital age.
| [
{
"version": "v1",
"created": "Mon, 11 Mar 2024 16:18:40 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Mar 2024 15:44:23 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2025 22:38:02 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Louadi",
"Mohamed El",
""
]
]
| TITLE: On the Preservation of Africa's Cultural Heritage in the Age of
Artificial Intelligence
ABSTRACT: In this paper we delve into the historical evolution of data as a fundamental
element in communication and knowledge transmission. The paper traces the
stages of knowledge dissemination from oral traditions to the digital era,
highlighting the significance of languages and cultural diversity in this
progression. It also explores the impact of digital technologies on memory,
communication, and cultural preservation, emphasizing the need for promoting a
culture of the digital (rather than a digital culture) in Africa and beyond.
Additionally, it discusses the challenges and opportunities presented by data
biases in AI development, underscoring the importance of creating diverse
datasets for equitable representation. We advocate for investing in data as a
crucial raw material for fostering digital literacy, economic development, and,
above all, cultural preservation in the digital age.
| no_new_dataset | 0.949389 |
2403.07260 | Yumeng Fu | Yumeng Fu, Junjie Wu, Zhongjie Wang, Meishan Zhang, Lili Shan, Yulin
Wu, Bingquan Li | LaERC-S: Improving LLM-based Emotion Recognition in Conversation with
Speaker Characteristics | COLING 2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emotion recognition in conversation (ERC), the task of discerning human
emotions for each utterance within a conversation, has garnered significant
attention in human-computer interaction systems. Previous ERC studies focus on
speaker-specific information that predominantly stems from relationships among
utterances, which lacks sufficient contextual information about the conversation. Recent
research in ERC has sought to exploit pre-trained large language models (LLMs)
with speaker modelling to comprehend emotional states. Although these methods
have achieved encouraging results, the extracted speaker-specific information
struggles to indicate emotional dynamics. In this paper, motivated by the fact
that speaker characteristics play a crucial role and LLMs have rich world
knowledge, we present LaERC-S, a novel framework that stimulates LLMs to
explore speaker characteristics involving the mental state and behavior of
interlocutors, for accurate emotion predictions. To endow LLMs with this
knowledge, we adopt two-stage learning to make the models
reason about speaker characteristics and track the emotion of the speaker in complex
conversation scenarios. Extensive experiments on three benchmark datasets
demonstrate the superiority of LaERC-S, reaching the new state-of-the-art.
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2024 02:37:11 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 09:36:14 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Fu",
"Yumeng",
""
],
[
"Wu",
"Junjie",
""
],
[
"Wang",
"Zhongjie",
""
],
[
"Zhang",
"Meishan",
""
],
[
"Shan",
"Lili",
""
],
[
"Wu",
"Yulin",
""
],
[
"Li",
"Bingquan",
""
]
]
| TITLE: LaERC-S: Improving LLM-based Emotion Recognition in Conversation with
Speaker Characteristics
ABSTRACT: Emotion recognition in conversation (ERC), the task of discerning human
emotions for each utterance within a conversation, has garnered significant
attention in human-computer interaction systems. Previous ERC studies focus on
speaker-specific information that predominantly stems from relationships among
utterances, which lacks sufficient contextual information about the conversation. Recent
research in ERC has sought to exploit pre-trained large language models (LLMs)
with speaker modelling to comprehend emotional states. Although these methods
have achieved encouraging results, the extracted speaker-specific information
struggles to indicate emotional dynamics. In this paper, motivated by the fact
that speaker characteristics play a crucial role and LLMs have rich world
knowledge, we present LaERC-S, a novel framework that stimulates LLMs to
explore speaker characteristics involving the mental state and behavior of
interlocutors, for accurate emotion predictions. To endow LLMs with this
knowledge, we adopt two-stage learning to make the models
reason about speaker characteristics and track the emotion of the speaker in complex
conversation scenarios. Extensive experiments on three benchmark datasets
demonstrate the superiority of LaERC-S, reaching the new state-of-the-art.
| no_new_dataset | 0.945551 |
2403.07693 | Yanyue Zhang | Yanyue Zhang, Pengfei Li, Yilong Lai, Deyu Zhou, Yulan He | Large, Small or Both: A Novel Data Augmentation Framework Based on
Language Models for Debiasing Opinion Summarization | null | COLING2025 | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As more than 70$\%$ of reviews in the existing opinion summary data set are
positive, current opinion summarization approaches are reluctant to generate
negative summaries given the input of negative texts. To address such sentiment
bias, a direct approach without the over-reliance on a specific framework is to
generate additional data based on large language models to balance the
emotional distribution of the dataset. However, data augmentation based on
large language models faces two disadvantages: 1) the potential issues or
toxicity in the augmented data; 2) the expensive costs. Therefore, in this
paper, we propose a novel data augmentation framework based on both large and
small language models for debiasing opinion summarization. In specific, a small
size of synthesized negative reviews is obtained by rewriting the positive text
via a large language model. Then, a disentangle reconstruction model is trained
based on the generated data. After training, a large amount of synthetic data
can be obtained by decoding the new representation obtained from the
combination of different sample representations and filtering based on
confusion degree and sentiment classification. Experiments show that our
framework can alleviate emotional bias as effectively as using only large
models, but more economically.
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2024 14:37:03 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Mar 2024 19:20:05 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Yanyue",
""
],
[
"Li",
"Pengfei",
""
],
[
"Lai",
"Yilong",
""
],
[
"Zhou",
"Deyu",
""
],
[
"He",
"Yulan",
""
]
]
| TITLE: Large, Small or Both: A Novel Data Augmentation Framework Based on
Language Models for Debiasing Opinion Summarization
ABSTRACT: As more than 70$\%$ of reviews in the existing opinion summary data set are
positive, current opinion summarization approaches are reluctant to generate
negative summaries given the input of negative texts. To address such sentiment
bias, a direct approach without the over-reliance on a specific framework is to
generate additional data based on large language models to balance the
emotional distribution of the dataset. However, data augmentation based on
large language models faces two disadvantages: 1) the potential issues or
toxicity in the augmented data; 2) the expensive costs. Therefore, in this
paper, we propose a novel data augmentation framework based on both large and
small language models for debiasing opinion summarization. Specifically, a small
set of synthesized negative reviews is obtained by rewriting the positive text
via a large language model. Then, a disentangled reconstruction model is trained
based on the generated data. After training, a large amount of synthetic data
can be obtained by decoding the new representation obtained from the
combination of different sample representations and filtering based on
confusion degree and sentiment classification. Experiments show that our
framework can alleviate emotional bias as effectively as using only large
models, but more economically.
| no_new_dataset | 0.950319 |
2403.08632 | Zhuang Liu | Zhuang Liu, Kaiming He | A Decade's Battle on Dataset Bias: Are We There Yet? | Published in ICLR 2025 (Oral Presentation) | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We revisit the "dataset classification" experiment suggested by Torralba &
Efros (2011) a decade ago, in the new era with large-scale, diverse, and
hopefully less biased datasets as well as more capable neural network
architectures. Surprisingly, we observe that modern neural networks can achieve
excellent accuracy in classifying which dataset an image is from: e.g., we
report 84.7% accuracy on held-out validation data for the three-way
classification problem consisting of the YFCC, CC, and DataComp datasets. Our
further experiments show that such a dataset classifier could learn semantic
features that are generalizable and transferable, which cannot be explained by
memorization. We hope our discovery will inspire the community to rethink
issues involving dataset bias.
| [
{
"version": "v1",
"created": "Wed, 13 Mar 2024 15:46:37 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 12:01:27 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Liu",
"Zhuang",
""
],
[
"He",
"Kaiming",
""
]
]
| TITLE: A Decade's Battle on Dataset Bias: Are We There Yet?
ABSTRACT: We revisit the "dataset classification" experiment suggested by Torralba &
Efros (2011) a decade ago, in the new era with large-scale, diverse, and
hopefully less biased datasets as well as more capable neural network
architectures. Surprisingly, we observe that modern neural networks can achieve
excellent accuracy in classifying which dataset an image is from: e.g., we
report 84.7% accuracy on held-out validation data for the three-way
classification problem consisting of the YFCC, CC, and DataComp datasets. Our
further experiments show that such a dataset classifier could learn semantic
features that are generalizable and transferable, which cannot be explained by
memorization. We hope our discovery will inspire the community to rethink
issues involving dataset bias.
| no_new_dataset | 0.940079 |
2403.08694 | Shangding Gu | Shangding Gu, Alois Knoll, Ming Jin | TeaMs-RL: Teaching LLMs to Generate Better Instruction Datasets via
Reinforcement Learning | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of Large Language Models (LLMs) often confronts challenges
stemming from the heavy reliance on human annotators in the reinforcement
learning with human feedback (RLHF) framework, or the frequent and costly
external queries tied to the self-instruct paradigm. In this work, we pivot to
Reinforcement Learning (RL) -- but with a twist. Diverging from the typical
RLHF, which refines LLMs following instruction data training, we use RL to
directly generate the foundational instruction dataset that alone suffices for
fine-tuning. Our method, TeaMs-RL, uses a suite of textual operations and
rules, prioritizing the diversification of training datasets. It facilitates
the generation of high-quality data without excessive reliance on external
advanced models, paving the way for a single fine-tuning step and negating the
need for subsequent RLHF stages. Our findings highlight key advantages of our
approach: reduced need for human involvement and fewer model queries (only
5.73% of the strong baseline's total), along with enhanced capabilities of LLMs
in crafting and comprehending complex instructions compared to strong
baselines, and substantially improved model privacy protection. Code is
available at the link: https://github.com/SafeRL-Lab/TeaMs-RL
| [
{
"version": "v1",
"created": "Wed, 13 Mar 2024 16:57:57 GMT"
},
{
"version": "v2",
"created": "Fri, 3 May 2024 22:44:24 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Aug 2024 04:54:36 GMT"
},
{
"version": "v4",
"created": "Sat, 1 Mar 2025 19:25:49 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Gu",
"Shangding",
""
],
[
"Knoll",
"Alois",
""
],
[
"Jin",
"Ming",
""
]
]
| TITLE: TeaMs-RL: Teaching LLMs to Generate Better Instruction Datasets via
Reinforcement Learning
ABSTRACT: The development of Large Language Models (LLMs) often confronts challenges
stemming from the heavy reliance on human annotators in the reinforcement
learning with human feedback (RLHF) framework, or the frequent and costly
external queries tied to the self-instruct paradigm. In this work, we pivot to
Reinforcement Learning (RL) -- but with a twist. Diverging from the typical
RLHF, which refines LLMs following instruction data training, we use RL to
directly generate the foundational instruction dataset that alone suffices for
fine-tuning. Our method, TeaMs-RL, uses a suite of textual operations and
rules, prioritizing the diversification of training datasets. It facilitates
the generation of high-quality data without excessive reliance on external
advanced models, paving the way for a single fine-tuning step and negating the
need for subsequent RLHF stages. Our findings highlight key advantages of our
approach: reduced need for human involvement and fewer model queries (only
5.73% of the strong baseline's total), along with enhanced capabilities of LLMs
in crafting and comprehending complex instructions compared to strong
baselines, and substantially improved model privacy protection. Code is
available at the link: https://github.com/SafeRL-Lab/TeaMs-RL
| no_new_dataset | 0.946745 |
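One way to picture TeaMs-RL's emphasis on diversified instruction data is a reward that penalizes near-duplicate instructions. The bag-of-words embedding, reward definition, and threshold below are all assumptions; the paper's textual operations and rules are more elaborate:

```python
# Sketch of a diversity-style reward signal for generated instructions.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy hashing bag-of-words embedding."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def diversity_reward(candidate: str, dataset: list[str]) -> float:
    """Reward = 1 - max cosine similarity to instructions already kept."""
    if not dataset:
        return 1.0
    c = embed(candidate)
    sims = [float(c @ embed(d)) for d in dataset]
    return 1.0 - max(sims)

kept: list[str] = []
for cand in ["Summarize this article.", "Summarize the article.",
             "Translate the sentence into French."]:
    r = diversity_reward(cand, kept)
    if r > 0.5:          # assumed acceptance threshold
        kept.append(cand)
    print(f"{cand!r}: reward={r:.2f}")
```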
2403.08743 | Jingling Li | Jingling Li, Zeyu Tang, Xiaoyu Liu, Peter Spirtes, Kun Zhang, Liu
Leqi, Yang Liu | Prompting Fairness: Integrating Causality to Debias Large Language
Models | 24 pages, 10 figures | The 13th International Conference on Learning Representations
(ICLR 2025) | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large language models (LLMs), despite their remarkable capabilities, are
susceptible to generating biased and discriminatory responses. As LLMs
increasingly influence high-stakes decision-making (e.g., hiring and
healthcare), mitigating these biases becomes critical. In this work, we propose
a causality-guided debiasing framework to tackle social biases, aiming to
reduce the objectionable dependence between LLMs' decisions and the social
information in the input. Our framework introduces a novel perspective to
identify how social information can affect an LLM's decision through different
causal pathways. Leveraging these causal insights, we outline principled
prompting strategies that regulate these pathways through selection mechanisms.
This framework not only unifies existing prompting-based debiasing techniques,
but also opens up new directions for reducing bias by encouraging the model to
prioritize fact-based reasoning over reliance on biased social cues. We
validate our framework through extensive experiments on real-world datasets
across multiple domains, demonstrating its effectiveness in debiasing LLM
decisions, even with only black-box access to the model.
| [
{
"version": "v1",
"created": "Wed, 13 Mar 2024 17:46:28 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 17:33:03 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Li",
"Jingling",
""
],
[
"Tang",
"Zeyu",
""
],
[
"Liu",
"Xiaoyu",
""
],
[
"Spirtes",
"Peter",
""
],
[
"Zhang",
"Kun",
""
],
[
"Leqi",
"Liu",
""
],
[
"Liu",
"Yang",
""
]
]
| TITLE: Prompting Fairness: Integrating Causality to Debias Large Language
Models
ABSTRACT: Large language models (LLMs), despite their remarkable capabilities, are
susceptible to generating biased and discriminatory responses. As LLMs
increasingly influence high-stakes decision-making (e.g., hiring and
healthcare), mitigating these biases becomes critical. In this work, we propose
a causality-guided debiasing framework to tackle social biases, aiming to
reduce the objectionable dependence between LLMs' decisions and the social
information in the input. Our framework introduces a novel perspective to
identify how social information can affect an LLM's decision through different
causal pathways. Leveraging these causal insights, we outline principled
prompting strategies that regulate these pathways through selection mechanisms.
This framework not only unifies existing prompting-based debiasing techniques,
but also opens up new directions for reducing bias by encouraging the model to
prioritize fact-based reasoning over reliance on biased social cues. We
validate our framework through extensive experiments on real-world datasets
across multiple domains, demonstrating its effectiveness in debiasing LLM
decisions, even with only black-box access to the model.
| no_new_dataset | 0.945399 |
2403.09752 | Ayoub Si-Ahmed Mr | Ayoub Si-ahmed, Mohammed Ali Al-Garadi, Narhimene Boustia | Explainable Machine Learning-Based Security and Privacy Protection
Framework for Internet of Medical Things Systems | 39 pages, 14 figures, 15 tables, journal paper | null | null | null | cs.CR cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The Internet of Medical Things (IoMT) transcends traditional medical
boundaries, enabling a transition from reactive treatment to proactive
prevention. This innovative method revolutionizes healthcare by facilitating
early disease detection and tailored care, particularly in chronic disease
management, where IoMT automates treatments based on real-time health data
collection. Nonetheless, its benefits are countered by significant security
challenges that endanger the lives of its users due to the sensitivity and
value of the processed data, thereby attracting malicious interests. Moreover,
the utilization of wireless communication for data transmission exposes medical
data to interception and tampering by cybercriminals. Additionally, anomalies
may arise due to human error, network interference, or hardware malfunctions.
In this context, anomaly detection based on Machine Learning (ML) is an
interesting solution, but it comes up against obstacles in terms of
explicability and privacy protection. To address these challenges, a new
framework for Intrusion Detection Systems is introduced, leveraging Artificial
Neural Networks for intrusion detection while utilizing Federated Learning (FL)
for privacy preservation. Additionally, eXplainable Artificial Intelligence
methods are incorporated to enhance model explanation and interpretation. The
efficacy of the proposed framework is evaluated and compared with centralized
approaches using multiple datasets containing network and medical data,
simulating various attack types impacting the confidentiality, integrity, and
availability of medical and physiological data. The results obtained offer
compelling evidence that the FL method performs comparably to the centralized
method, demonstrating high performance. Additionally, it affords the dual
advantage of safeguarding privacy and providing model explanation while
adhering to ethical principles.
| [
{
"version": "v1",
"created": "Thu, 14 Mar 2024 11:57:26 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 13:42:04 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Si-ahmed",
"Ayoub",
""
],
[
"Al-Garadi",
"Mohammed Ali",
""
],
[
"Boustia",
"Narhimene",
""
]
]
| TITLE: Explainable Machine Learning-Based Security and Privacy Protection
Framework for Internet of Medical Things Systems
ABSTRACT: The Internet of Medical Things (IoMT) transcends traditional medical
boundaries, enabling a transition from reactive treatment to proactive
prevention. This innovative method revolutionizes healthcare by facilitating
early disease detection and tailored care, particularly in chronic disease
management, where IoMT automates treatments based on real-time health data
collection. Nonetheless, its benefits are countered by significant security
challenges that endanger the lives of its users due to the sensitivity and
value of the processed data, thereby attracting malicious interests. Moreover,
the utilization of wireless communication for data transmission exposes medical
data to interception and tampering by cybercriminals. Additionally, anomalies
may arise due to human error, network interference, or hardware malfunctions.
In this context, anomaly detection based on Machine Learning (ML) is an
interesting solution, but it faces obstacles in terms of explainability and
privacy protection. To address these challenges, a new
framework for Intrusion Detection Systems is introduced, leveraging Artificial
Neural Networks for intrusion detection while utilizing Federated Learning (FL)
for privacy preservation. Additionally, eXplainable Artificial Intelligence
methods are incorporated to enhance model explanation and interpretation. The
efficacy of the proposed framework is evaluated and compared with centralized
approaches using multiple datasets containing network and medical data,
simulating various attack types impacting the confidentiality, integrity, and
availability of medical and physiological data. The results obtained offer
compelling evidence that the FL method performs comparably to the centralized
method, demonstrating high performance. Additionally, it affords the dual
advantage of safeguarding privacy and providing model explanation while
adhering to ethical principles.
| no_new_dataset | 0.9463 |
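The privacy-preserving training loop described above rests on federated averaging: each site trains locally and only model weights leave the device. A minimal FedAvg sketch, with logistic regression standing in for the paper's neural network and invented client data:

```python
# Minimal FedAvg sketch; model size, client data, and round count are
# illustrative assumptions, not the paper's configuration.
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's logistic-regression update on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))       # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)   # gradient step
    return w

def fed_avg(client_weights, client_sizes):
    """Weighted average of client models, proportional to data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
d = 10
global_w = np.zeros(d)
clients = [(rng.normal(size=(50, d)), rng.integers(0, 2, 50)) for _ in range(3)]

for rnd in range(5):
    updates = [local_train(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
print("global weight norm:", np.linalg.norm(global_w).round(3))
```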
2403.16513 | Ziyou Liang | Ziyou Liang and Weifeng Liu and Run Wang and Mengjie Wu and Boheng Li
and Yuyang Zhang and Lina Wang and Xinyi Yang | Transfer Learning of Real Image Features with Soft Contrastive Loss for
Fake Image Detection | null | null | null | null | cs.CV cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the last few years, the artifact patterns in fake images synthesized by
different generative models have been inconsistent, leading to the failure of
previous research that relied on spotting subtle differences between real and
fake. In our preliminary experiments, we find that the artifacts in fake images
always change with the development of the generative model, while natural
images exhibit stable statistical properties. In this paper, we employ natural
traces shared only by real images as an additional target for a classifier.
Specifically, we introduce a self-supervised feature mapping process for
natural trace extraction and develop a transfer learning based on soft
contrastive loss to bring them closer to real images and further away from fake
ones. This motivates the detector to make decisions based on the proximity of
images to the natural traces. To conduct a comprehensive experiment, we built a
high-quality and diverse dataset that includes generative models comprising
GANs and diffusion models, to evaluate the effectiveness in generalizing
unknown forgery techniques and robustness in surviving different
transformations. Experimental results show that our proposed method achieves
96.2% mAP, significantly outperforming the baselines. Extensive experiments conducted on
popular commercial platforms reveal that our proposed method achieves an
accuracy exceeding 78.4%, underscoring its practicality for real-world
application deployment.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 07:58:58 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 16:12:09 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Liang",
"Ziyou",
""
],
[
"Liu",
"Weifeng",
""
],
[
"Wang",
"Run",
""
],
[
"Wu",
"Mengjie",
""
],
[
"Li",
"Boheng",
""
],
[
"Zhang",
"Yuyang",
""
],
[
"Wang",
"Lina",
""
],
[
"Yang",
"Xinyi",
""
]
]
| TITLE: Transfer Learning of Real Image Features with Soft Contrastive Loss for
Fake Image Detection
ABSTRACT: In the last few years, the artifact patterns in fake images synthesized by
different generative models have been inconsistent, leading to the failure of
previous research that relied on spotting subtle differences between real and
fake. In our preliminary experiments, we find that the artifacts in fake images
always change with the development of the generative model, while natural
images exhibit stable statistical properties. In this paper, we employ natural
traces shared only by real images as an additional target for a classifier.
Specifically, we introduce a self-supervised feature mapping process for
natural trace extraction and develop a transfer learning based on soft
contrastive loss to bring them closer to real images and further away from fake
ones. This motivates the detector to make decisions based on the proximity of
images to the natural traces. To conduct a comprehensive experiment, we built a
high-quality and diverse dataset that includes generative models comprising
GANs and diffusion models, to evaluate the effectiveness in generalizing
unknown forgery techniques and robustness in surviving different
transformations. Experimental results show that our proposed method achieves
96.2% mAP, significantly outperforming the baselines. Extensive experiments conducted on
popular commercial platforms reveal that our proposed method achieves an
accuracy exceeding 78.4%, underscoring its practicality for real-world
application deployment.
| new_dataset | 0.946745 |
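A soft contrastive objective of the kind named above can be sketched as pulling real-image embeddings toward a "natural trace" anchor while softly pushing fakes away. The exact loss form, margin, and temperature are assumptions, not the paper's formulation:

```python
# Sketch of a soft contrastive loss over image embeddings.
import torch
import torch.nn.functional as F

def soft_contrastive_loss(emb, labels, anchor, margin=1.0, tau=0.5):
    """emb: (B, D) embeddings; labels: 1 = real, 0 = fake; anchor: (D,)."""
    d = torch.norm(emb - anchor, dim=1)            # distance to natural trace
    pull = labels * d.pow(2)                       # real -> close to anchor
    push = (1 - labels) * F.softplus(margin - d, beta=1 / tau)  # soft hinge
    return (pull + push).mean()

emb = torch.randn(16, 128, requires_grad=True)
labels = torch.randint(0, 2, (16,)).float()
anchor = torch.zeros(128)                          # assumed natural-trace anchor
loss = soft_contrastive_loss(emb, labels, anchor)
loss.backward()
print("loss:", float(loss))
```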
2403.16829 | Tingting Ni | Titouan Renard, Andreas Schlaginhaufen, Tingting Ni, Maryam Kamgarpour | Convergence of a model-free entropy-regularized inverse reinforcement
learning algorithm | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Given a dataset of expert demonstrations, inverse reinforcement learning
(IRL) aims to recover a reward for which the expert is optimal. This work
proposes a model-free algorithm to solve the entropy-regularized IRL problem. In
particular, we employ a stochastic gradient descent update for the reward and a
stochastic soft policy iteration update for the policy. Assuming access to a
generative model, we prove that our algorithm is guaranteed to recover a reward
for which the expert is $\varepsilon$-optimal using
$\mathcal{O}(1/\varepsilon^{2})$ samples of the Markov decision process (MDP).
Furthermore, with $\mathcal{O}(1/\varepsilon^{4})$ samples we prove that the
optimal policy corresponding to the recovered reward is $\varepsilon$-close to
the expert policy in total variation distance.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 14:54:42 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Apr 2024 13:54:27 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 18:01:44 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Renard",
"Titouan",
""
],
[
"Schlaginhaufen",
"Andreas",
""
],
[
"Ni",
"Tingting",
""
],
[
"Kamgarpour",
"Maryam",
""
]
]
| TITLE: Convergence of a model-free entropy-regularized inverse reinforcement
learning algorithm
ABSTRACT: Given a dataset of expert demonstrations, inverse reinforcement learning
(IRL) aims to recover a reward for which the expert is optimal. This work
proposes a model-free algorithm to solve the entropy-regularized IRL problem. In
particular, we employ a stochastic gradient descent update for the reward and a
stochastic soft policy iteration update for the policy. Assuming access to a
generative model, we prove that our algorithm is guaranteed to recover a reward
for which the expert is $\varepsilon$-optimal using
$\mathcal{O}(1/\varepsilon^{2})$ samples of the Markov decision process (MDP).
Furthermore, with $\mathcal{O}(1/\varepsilon^{4})$ samples we prove that the
optimal policy corresponding to the recovered reward is $\varepsilon$-close to
the expert policy in total variation distance.
| no_new_dataset | 0.942612 |
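The two coupled updates named above — gradient descent on the reward and soft policy iteration — can be illustrated on a tiny tabular MDP. This sketch simplifies the reward gradient (it ignores discounted state-visitation weighting) and invents the MDP and step sizes:

```python
# Tabular sketch of entropy-regularized IRL: reward SGD + soft policy iteration.
import numpy as np

rng = np.random.default_rng(1)
S, A, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))       # random transition kernel
expert_sa = [(0, 1), (1, 0), (2, 1), (3, 0)]     # demonstrated (s, a) pairs

theta = np.zeros((S, A))                         # reward parameters r(s, a)

def soft_policy(theta, iters=50):
    """Soft policy iteration under reward theta (entropy coefficient = 1)."""
    V = np.zeros(S)
    for _ in range(iters):
        Q = theta + gamma * P @ V                # Q(s, a); P @ V sums next states
        V = np.log(np.exp(Q).sum(axis=1))        # soft Bellman backup
    return np.exp(Q - V[:, None])                # softmax policy

for step in range(200):
    pi = soft_policy(theta)
    grad = np.zeros_like(theta)
    for s, a in expert_sa:                       # expert raises r(s, a);
        grad[s, a] += 1.0                        # current policy lowers it
        grad[s] -= pi[s]                         # (visitation weighting omitted)
    theta += 0.1 * grad / len(expert_sa)         # reward SGD step

print("learned policy at state 0:", soft_policy(theta)[0].round(2))
```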
2403.17010 | Lingdong Kong | Lingdong Kong and Xiang Xu and Jun Cen and Wenwei Zhang and Liang Pan
and Kai Chen and Ziwei Liu | Calib3D: Calibrating Model Preferences for Reliable 3D Scene
Understanding | WACV 2025 Oral; 26 pages, 8 figures, 12 tables; Code at
https://github.com/ldkong1205/Calib3D | null | null | null | cs.CV cs.LG cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | Safety-critical 3D scene understanding tasks necessitate not only accurate
but also confident predictions from 3D perception models. This study introduces
Calib3D, a pioneering effort to benchmark and scrutinize the reliability of 3D
scene understanding models from an uncertainty estimation viewpoint. We
comprehensively evaluate 28 state-of-the-art models across 10 diverse 3D
datasets, uncovering insightful phenomena concerning both the aleatoric and
epistemic uncertainties in 3D scene understanding. We discover that despite
achieving impressive levels of accuracy, existing models frequently fail to
provide reliable uncertainty estimates -- a pitfall that critically undermines
their applicability in safety-sensitive contexts. Through extensive analysis of
key factors such as network capacity, LiDAR representations, rasterization
resolutions, and 3D data augmentation techniques, we correlate these aspects
directly with the model calibration efficacy. Furthermore, we introduce DeptS,
a novel depth-aware scaling approach aimed at enhancing 3D model calibration.
Extensive experiments across a wide range of configurations validate the
superiority of our method. We hope this work could serve as a cornerstone for
fostering reliable 3D scene understanding. Code and benchmark toolkit are
publicly available.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 17:59:59 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Dec 2024 15:33:29 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 04:22:19 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Kong",
"Lingdong",
""
],
[
"Xu",
"Xiang",
""
],
[
"Cen",
"Jun",
""
],
[
"Zhang",
"Wenwei",
""
],
[
"Pan",
"Liang",
""
],
[
"Chen",
"Kai",
""
],
[
"Liu",
"Ziwei",
""
]
]
| TITLE: Calib3D: Calibrating Model Preferences for Reliable 3D Scene
Understanding
ABSTRACT: Safety-critical 3D scene understanding tasks necessitate not only accurate
but also confident predictions from 3D perception models. This study introduces
Calib3D, a pioneering effort to benchmark and scrutinize the reliability of 3D
scene understanding models from an uncertainty estimation viewpoint. We
comprehensively evaluate 28 state-of-the-art models across 10 diverse 3D
datasets, uncovering insightful phenomena concerning both the aleatoric and
epistemic uncertainties in 3D scene understanding. We discover that despite
achieving impressive levels of accuracy, existing models frequently fail to
provide reliable uncertainty estimates -- a pitfall that critically undermines
their applicability in safety-sensitive contexts. Through extensive analysis of
key factors such as network capacity, LiDAR representations, rasterization
resolutions, and 3D data augmentation techniques, we correlate these aspects
directly with the model calibration efficacy. Furthermore, we introduce DeptS,
a novel depth-aware scaling approach aimed at enhancing 3D model calibration.
Extensive experiments across a wide range of configurations validate the
superiority of our method. We hope this work could serve as a cornerstone for
fostering reliable 3D scene understanding. Code and benchmark toolkit are
publicly available.
| no_new_dataset | 0.93611 |
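Calibration studies like the one above typically report Expected Calibration Error (ECE). A standard equal-width 10-bin variant is sketched below; the binning scheme and mock data are assumptions rather than the paper's exact protocol:

```python
# Sketch of Expected Calibration Error (ECE).
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: max softmax per sample; correct: boolean array."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap            # weight by bin population
    return ece

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
acc = rng.uniform(size=1000) < conf * 0.8       # simulated overconfident model
print(f"ECE = {expected_calibration_error(conf, acc):.3f}")
```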
2404.07465 | Soichiro Nishimori | Soichiro Nishimori, Xin-Qiang Cai, Johannes Ackermann, Masashi
Sugiyama | Offline Reinforcement Learning with Domain-Unlabeled Data | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Offline reinforcement learning (RL) is vital in areas where active data
collection is expensive or infeasible, such as robotics or healthcare. In the
real world, offline datasets often involve multiple domains that share the same
state and action spaces but have distinct dynamics, and only a small fraction
of samples are clearly labeled as belonging to the target domain we are
interested in. For example, in robotics, precise system identification may only
have been performed for part of the deployments. To address this challenge, we
consider Positive-Unlabeled Offline RL (PUORL), a novel offline RL setting in
which we have a small amount of labeled target-domain data and a large amount
of domain-unlabeled data from multiple domains, including the target domain.
For PUORL, we propose a plug-and-play approach that leverages
positive-unlabeled (PU) learning to train a domain classifier. The classifier
then extracts target-domain samples from the domain-unlabeled data, augmenting
the scarce target-domain data. Empirical results on a modified version of the
D4RL benchmark demonstrate the effectiveness of our method: even when only 1 to
3 percent of the dataset is domain-labeled, our approach accurately identifies
target-domain samples and achieves high performance, even under substantial
dynamics shift. Our plug-and-play algorithm seamlessly integrates PU learning
with existing offline RL pipelines, enabling effective multi-domain data
utilization in scenarios where comprehensive domain labeling is prohibitive.
| [
{
"version": "v1",
"created": "Thu, 11 Apr 2024 04:02:20 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 00:09:19 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Nishimori",
"Soichiro",
""
],
[
"Cai",
"Xin-Qiang",
""
],
[
"Ackermann",
"Johannes",
""
],
[
"Sugiyama",
"Masashi",
""
]
]
| TITLE: Offline Reinforcement Learning with Domain-Unlabeled Data
ABSTRACT: Offline reinforcement learning (RL) is vital in areas where active data
collection is expensive or infeasible, such as robotics or healthcare. In the
real world, offline datasets often involve multiple domains that share the same
state and action spaces but have distinct dynamics, and only a small fraction
of samples are clearly labeled as belonging to the target domain we are
interested in. For example, in robotics, precise system identification may only
have been performed for part of the deployments. To address this challenge, we
consider Positive-Unlabeled Offline RL (PUORL), a novel offline RL setting in
which we have a small amount of labeled target-domain data and a large amount
of domain-unlabeled data from multiple domains, including the target domain.
For PUORL, we propose a plug-and-play approach that leverages
positive-unlabeled (PU) learning to train a domain classifier. The classifier
then extracts target-domain samples from the domain-unlabeled data, augmenting
the scarce target-domain data. Empirical results on a modified version of the
D4RL benchmark demonstrate the effectiveness of our method: even when only 1 to
3 percent of the dataset is domain-labeled, our approach accurately identifies
target-domain samples and achieves high performance, even under substantial
dynamics shift. Our plug-and-play algorithm seamlessly integrates PU learning
with existing offline RL pipelines, enabling effective multi-domain data
utilization in scenarios where comprehensive domain labeling is prohibitive.
| no_new_dataset | 0.949248 |
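The plug-and-play idea above — train a domain classifier from the few labeled target-domain samples and the large unlabeled pool, then extract likely target-domain data — can be sketched with the simplest PU heuristic (unlabeled treated as provisional negatives). The paper's PU-learning procedure is more principled; the synthetic data and threshold are assumptions:

```python
# Sketch of a PU-style domain classifier for extracting target-domain samples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 6
target_labeled = rng.normal(loc=1.0, size=(30, dim))      # ~1-3% labeled data
unlabeled = np.vstack([
    rng.normal(loc=1.0, size=(300, dim)),   # hidden target-domain samples
    rng.normal(loc=-1.0, size=(700, dim)),  # other domains
])

X = np.vstack([target_labeled, unlabeled])
y = np.concatenate([np.ones(len(target_labeled)), np.zeros(len(unlabeled))])

clf = LogisticRegression(max_iter=1000).fit(X, y)
scores = clf.predict_proba(unlabeled)[:, 1]
augmented = unlabeled[scores > 0.5]           # assumed selection threshold
print(f"selected {len(augmented)} of {len(unlabeled)} unlabeled samples")
```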
2404.07533 | Raju Halder | Dipika Jha, Ankit K. Bhagat, Raju Halder, Rajendra N. Paramanik,
Chandra M. Kumar | Exploring the Decentraland Economy: Multifaceted Parcel Attributes, Key
Insights, and Benchmarking | null | null | null | null | cs.LG cs.AI cs.ET | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper presents a comprehensive Decentraland parcels dataset, called
IITP-VDLand, sourced from diverse platforms such as Decentraland, OpenSea,
Etherscan, Google BigQuery, and various Social Media Platforms. Unlike existing
datasets which have limited attributes and records, IITP-VDLand offers a rich
array of attributes, encompassing parcel characteristics, trading history, past
activities, transactions, and social media interactions. Alongside, we
introduce a key attribute in the dataset, namely Rarity score, which measures
the uniqueness of each parcel within the virtual world. Addressing the
significant challenge posed by the dispersed nature of this data across various
sources, we employ a systematic approach, utilizing both available APIs and
custom scripts, to gather it. Subsequently, we meticulously curate and organize
the information into four distinct fragments: (1) Characteristics, (2) OpenSea
Trading History, (3) Ethereum Activity Transactions, and (4) Social Media. We
envisage that this dataset would serve as a robust resource for training
machine- and deep-learning models specifically designed to address real-world
challenges within the domain of Decentraland parcels. The performance
benchmarking of more than 20 state-of-the-art price prediction models on our
dataset yields promising results, achieving a maximum R2 score of 0.8251 and an
accuracy of 74.23% in the case of the Extra Trees Regressor and Classifier. The key
findings reveal that the ensemble models perform better than both deep learning
and linear models for our dataset. We observe a significant impact of
coordinates, geographical proximity, rarity score, and a few other economic
indicators on the prediction of parcel prices.
| [
{
"version": "v1",
"created": "Thu, 11 Apr 2024 07:54:14 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2025 11:48:51 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 07:59:30 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Jha",
"Dipika",
""
],
[
"Bhagat",
"Ankit K.",
""
],
[
"Halder",
"Raju",
""
],
[
"Paramanik",
"Rajendra N.",
""
],
[
"Kumar",
"Chandra M.",
""
]
]
| TITLE: Exploring the Decentraland Economy: Multifaceted Parcel Attributes, Key
Insights, and Benchmarking
ABSTRACT: This paper presents a comprehensive Decentraland parcels dataset, called
IITP-VDLand, sourced from diverse platforms such as Decentraland, OpenSea,
Etherscan, Google BigQuery, and various Social Media Platforms. Unlike existing
datasets which have limited attributes and records, IITP-VDLand offers a rich
array of attributes, encompassing parcel characteristics, trading history, past
activities, transactions, and social media interactions. Alongside, we
introduce a key attribute in the dataset, namely Rarity score, which measures
the uniqueness of each parcel within the virtual world. Addressing the
significant challenge posed by the dispersed nature of this data across various
sources, we employ a systematic approach, utilizing both available APIs and
custom scripts, to gather it. Subsequently, we meticulously curate and organize
the information into four distinct fragments: (1) Characteristics, (2) OpenSea
Trading History, (3) Ethereum Activity Transactions, and (4) Social Media. We
envisage that this dataset would serve as a robust resource for training
machine- and deep-learning models specifically designed to address real-world
challenges within the domain of Decentraland parcels. The performance
benchmarking of more than 20 state-of-the-art price prediction models on our
dataset yields promising results, achieving a maximum R2 score of 0.8251 and an
accuracy of 74.23% in the case of the Extra Trees Regressor and Classifier. The key
findings reveal that the ensemble models perform better than both deep learning
and linear models for our dataset. We observe a significant impact of
coordinates, geographical proximity, rarity score, and a few other economic
indicators on the prediction of parcel prices.
| new_dataset | 0.916409 |
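The price-prediction benchmark reported above is ordinary tabular regression. A sketch with an Extra Trees regressor scored by R² follows; the synthetic columns merely stand in for IITP-VDLand attributes (coordinates, rarity score, trading history), and their semantics are assumptions:

```python
# Sketch of an Extra Trees benchmark on parcel-like tabular features.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.uniform(-150, 150, n),        # x coordinate
    rng.uniform(-150, 150, n),        # y coordinate
    rng.uniform(0, 1, n),             # rarity score
    rng.integers(0, 50, n),           # past trade count
])
price = (2.0 * X[:, 2] + 0.01 * X[:, 3]
         - 0.001 * np.abs(X[:, 0]) + rng.normal(0, 0.1, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)
model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```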
2404.07575 | Tien-Hong Lo | Tien-Hong Lo, Fu-An Chao, Tzu-I Wu, Yao-Ting Sung, Berlin Chen | An Effective Automated Speaking Assessment Approach to Mitigating Data
Scarcity and Imbalanced Distribution | Accepted to NAACL 2024 Findings | null | null | null | cs.SD cs.AI eess.AS | http://creativecommons.org/licenses/by/4.0/ | Automated speaking assessment (ASA) typically involves automatic speech
recognition (ASR) and hand-crafted feature extraction from the ASR transcript
of a learner's speech. Recently, self-supervised learning (SSL) has shown
stellar performance compared to traditional methods. However, SSL-based ASA
systems are faced with at least three data-related challenges: limited
annotated data, uneven distribution of learner proficiency levels and
non-uniform score intervals between different CEFR proficiency levels. To
address these challenges, we explore the use of two novel modeling strategies:
metric-based classification and loss reweighting, leveraging distinct SSL-based
embedding features. Extensive experimental results on the ICNALE benchmark
dataset suggest that our approach can outperform existing strong baselines by a
sizable margin, achieving a significant improvement of more than 10% in CEFR
prediction accuracy.
| [
{
"version": "v1",
"created": "Thu, 11 Apr 2024 09:06:49 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Apr 2024 01:22:47 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Feb 2025 07:19:22 GMT"
},
{
"version": "v4",
"created": "Sun, 2 Mar 2025 13:55:52 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Lo",
"Tien-Hong",
""
],
[
"Chao",
"Fu-An",
""
],
[
"Wu",
"Tzu-I",
""
],
[
"Sung",
"Yao-Ting",
""
],
[
"Chen",
"Berlin",
""
]
]
| TITLE: An Effective Automated Speaking Assessment Approach to Mitigating Data
Scarcity and Imbalanced Distribution
ABSTRACT: Automated speaking assessment (ASA) typically involves automatic speech
recognition (ASR) and hand-crafted feature extraction from the ASR transcript
of a learner's speech. Recently, self-supervised learning (SSL) has shown
stellar performance compared to traditional methods. However, SSL-based ASA
systems are faced with at least three data-related challenges: limited
annotated data, uneven distribution of learner proficiency levels and
non-uniform score intervals between different CEFR proficiency levels. To
address these challenges, we explore the use of two novel modeling strategies:
metric-based classification and loss reweighting, leveraging distinct SSL-based
embedding features. Extensive experimental results on the ICNALE benchmark
dataset suggest that our approach can outperform existing strong baselines by a
sizable margin, achieving a significant improvement of more than 10% in CEFR
prediction accuracy.
| no_new_dataset | 0.945298 |
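Of the two strategies named above, loss reweighting is the easier to sketch: inverse-frequency class weights plugged into cross-entropy counteract the uneven distribution of proficiency levels. The level counts below are invented, and the paper's exact reweighting scheme may differ:

```python
# Sketch of class-balanced loss reweighting for imbalanced CEFR levels.
import torch
import torch.nn as nn

level_counts = torch.tensor([400.0, 250.0, 120.0, 40.0])  # e.g. A2..C1 (assumed)
weights = level_counts.sum() / (len(level_counts) * level_counts)
loss_fn = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 4)                 # mock classifier output on SSL features
labels = torch.randint(0, 4, (8,))
print("reweighted loss:", float(loss_fn(logits, labels)))
print("class weights:", [round(w, 2) for w in weights.tolist()])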
2404.12379 | Isabella Liu | Isabella Liu, Hao Su, Xiaolong Wang | Dynamic Gaussians Mesh: Consistent Mesh Reconstruction from Dynamic
Scenes | Project page: https://www.liuisabella.com/DG-Mesh | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern 3D engines and graphics pipelines require mesh as a memory-efficient
representation, which allows efficient rendering, geometry processing, texture
editing, and many other downstream operations. However, it is still highly
difficult to obtain high-quality mesh in terms of detailed structure and time
consistency from dynamic observations. To this end, we introduce Dynamic
Gaussians Mesh (DG-Mesh), a framework to reconstruct a high-fidelity and
time-consistent mesh from dynamic input. Our work leverages the recent
advancement in 3D Gaussian Splatting to construct the mesh sequence with
temporal consistency from dynamic observations. Building on top of this
representation, DG-Mesh recovers high-quality meshes from the Gaussian points
and can track the mesh vertices over time, which enables applications such as
texture editing on dynamic objects. We introduce the Gaussian-Mesh Anchoring,
which encourages evenly distributed Gaussians, resulting in better mesh
reconstruction through mesh-guided densification and pruning on the deformed
Gaussians. By applying cycle-consistent deformation between the canonical and
the deformed space, we can project the anchored Gaussian back to the canonical
space and optimize Gaussians across all time frames. During the evaluation on
different datasets, DG-Mesh provides significantly better mesh reconstruction
and rendering than baselines. Project page: https://www.liuisabella.com/DG-Mesh
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 17:58:16 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Apr 2024 17:59:27 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 05:31:09 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Liu",
"Isabella",
""
],
[
"Su",
"Hao",
""
],
[
"Wang",
"Xiaolong",
""
]
]
| TITLE: Dynamic Gaussians Mesh: Consistent Mesh Reconstruction from Dynamic
Scenes
ABSTRACT: Modern 3D engines and graphics pipelines require mesh as a memory-efficient
representation, which allows efficient rendering, geometry processing, texture
editing, and many other downstream operations. However, it is still highly
difficult to obtain high-quality mesh in terms of detailed structure and time
consistency from dynamic observations. To this end, we introduce Dynamic
Gaussians Mesh (DG-Mesh), a framework to reconstruct a high-fidelity and
time-consistent mesh from dynamic input. Our work leverages the recent
advancement in 3D Gaussian Splatting to construct the mesh sequence with
temporal consistency from dynamic observations. Building on top of this
representation, DG-Mesh recovers high-quality meshes from the Gaussian points
and can track the mesh vertices over time, which enables applications such as
texture editing on dynamic objects. We introduce the Gaussian-Mesh Anchoring,
which encourages evenly distributed Gaussians, resulting in better mesh
reconstruction through mesh-guided densification and pruning on the deformed
Gaussians. By applying cycle-consistent deformation between the canonical and
the deformed space, we can project the anchored Gaussian back to the canonical
space and optimize Gaussians across all time frames. During the evaluation on
different datasets, DG-Mesh provides significantly better mesh reconstruction
and rendering than baselines. Project page: https://www.liuisabella.com/DG-Mesh
| no_new_dataset | 0.953319 |
2404.14396 | Yuying Ge | Yuying Ge, Sijie Zhao, Jinguo Zhu, Yixiao Ge, Kun Yi, Lin Song, Chen
Li, Xiaohan Ding, Ying Shan | SEED-X: Multimodal Models with Unified Multi-granularity Comprehension
and Generation | We added benchmark results (without updating models) and ablation
study in this version. Project released at:
https://github.com/AILab-CVC/SEED-X | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The rapid evolution of multimodal foundation models has demonstrated
significant progress in vision-language understanding and generation, e.g.,
our previous work SEED-LLaMA. However, there remains a gap between its
capability and the real-world applicability, primarily due to the model's
limited capacity to effectively respond to various user instructions and
interact with diverse visual data. In this work, we focus on bridging this gap
through integrating two enhanced features: (1) comprehending images of
arbitrary sizes and ratios, and (2) enabling multi-granularity image
generation. We present a unified and versatile foundation model, namely,
SEED-X, which is able to model multi-granularity visual semantics for
comprehension and generation tasks. Besides the competitive results on public
benchmarks, SEED-X demonstrates its effectiveness in handling real-world
applications across various domains after instruction tuning. We hope that our
work will inspire future research into what can be achieved by versatile
multimodal foundation models in real-world applications. The models, codes, and
datasets are released in https://github.com/AILab-CVC/SEED-X.
| [
{
"version": "v1",
"created": "Mon, 22 Apr 2024 17:56:09 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 07:53:44 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ge",
"Yuying",
""
],
[
"Zhao",
"Sijie",
""
],
[
"Zhu",
"Jinguo",
""
],
[
"Ge",
"Yixiao",
""
],
[
"Yi",
"Kun",
""
],
[
"Song",
"Lin",
""
],
[
"Li",
"Chen",
""
],
[
"Ding",
"Xiaohan",
""
],
[
"Shan",
"Ying",
""
]
]
| TITLE: SEED-X: Multimodal Models with Unified Multi-granularity Comprehension
and Generation
ABSTRACT: The rapid evolution of multimodal foundation models has demonstrated
significant progress in vision-language understanding and generation, e.g.,
our previous work SEED-LLaMA. However, there remains a gap between its
capability and the real-world applicability, primarily due to the model's
limited capacity to effectively respond to various user instructions and
interact with diverse visual data. In this work, we focus on bridging this gap
through integrating two enhanced features: (1) comprehending images of
arbitrary sizes and ratios, and (2) enabling multi-granularity image
generation. We present a unified and versatile foundation model, namely,
SEED-X, which is able to model multi-granularity visual semantics for
comprehension and generation tasks. Besides the competitive results on public
benchmarks, SEED-X demonstrates its effectiveness in handling real-world
applications across various domains after instruction tuning. We hope that our
work will inspire future research into what can be achieved by versatile
multimodal foundation models in real-world applications. The models, codes, and
datasets are released in https://github.com/AILab-CVC/SEED-X.
| no_new_dataset | 0.915053 |
2404.15161 | Merey Ramazanova | Merey Ramazanova and Alejandro Pardo and Bernard Ghanem and Motasem
Alfarra | Test-Time Adaptation for Combating Missing Modalities in Egocentric
Videos | null | ICLR 2025 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Understanding videos that contain multiple modalities is crucial, especially
in egocentric videos, where combining various sensory inputs significantly
improves tasks like action recognition and moment localization. However,
real-world applications often face challenges with incomplete modalities due to
privacy concerns, efficiency needs, or hardware issues. Current methods, while
effective, often necessitate retraining the model entirely to handle missing
modalities, making them computationally intensive, particularly with large
training datasets. In this study, we propose a novel approach to address this
issue at test time without requiring retraining. We frame the problem as a
test-time adaptation task, where the model adjusts to the available unlabeled
data at test time. Our method, MiDl (Mutual information with
self-Distillation), encourages the model to be insensitive to the specific
modality source present during testing by minimizing the mutual information
between the prediction and the available modality. Additionally, we incorporate
self-distillation to maintain the model's original performance when both
modalities are available. MiDl represents the first self-supervised, online
solution for handling missing modalities exclusively at test time. Through
experiments with various pretrained models and datasets, MiDl demonstrates
substantial performance improvement without the need for retraining.
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2024 16:01:33 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 13:49:21 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ramazanova",
"Merey",
""
],
[
"Pardo",
"Alejandro",
""
],
[
"Ghanem",
"Bernard",
""
],
[
"Alfarra",
"Motasem",
""
]
]
| TITLE: Test-Time Adaptation for Combating Missing Modalities in Egocentric
Videos
ABSTRACT: Understanding videos that contain multiple modalities is crucial, especially
in egocentric videos, where combining various sensory inputs significantly
improves tasks like action recognition and moment localization. However,
real-world applications often face challenges with incomplete modalities due to
privacy concerns, efficiency needs, or hardware issues. Current methods, while
effective, often necessitate retraining the model entirely to handle missing
modalities, making them computationally intensive, particularly with large
training datasets. In this study, we propose a novel approach to address this
issue at test time without requiring retraining. We frame the problem as a
test-time adaptation task, where the model adjusts to the available unlabeled
data at test time. Our method, MiDl~(Mutual information with
self-Distillation), encourages the model to be insensitive to the specific
modality source present during testing by minimizing the mutual information
between the prediction and the available modality. Additionally, we incorporate
self-distillation to maintain the model's original performance when both
modalities are available. MiDl represents the first self-supervised, online
solution for handling missing modalities exclusively at test time. Through
experiments with various pretrained models and datasets, MiDl demonstrates
substantial performance improvement without the need for retraining.
| no_new_dataset | 0.947088 |
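A MiDl-style test-time objective can be approximated as: keep predictions invariant to which modality is present, keep them confident, and distill toward a frozen copy of the pre-adaptation model. The sketch below is a loose approximation of the paper's mutual-information formulation, with invented features and an assumed modality split:

```python
# Sketch of test-time adaptation with modality-invariance + self-distillation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
teacher = copy.deepcopy(model).eval()     # frozen pre-adaptation copy
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

x_av = torch.randn(16, 32)                # full audio+visual features
x_a = x_av.clone(); x_a[:, 16:] = 0       # visual modality missing (assumed split)

for step in range(3):                     # online test-time adaptation steps
    p_full = F.softmax(model(x_av), dim=1)
    p_drop = F.softmax(model(x_a), dim=1)
    # encourage modality-invariant, confident predictions
    invariance = F.kl_div(p_drop.log(), p_full, reduction="batchmean")
    entropy = -(p_full * p_full.log()).sum(dim=1).mean()
    # self-distillation toward the frozen model
    with torch.no_grad():
        p_teacher = F.softmax(teacher(x_av), dim=1)
    distill = F.kl_div(p_full.log(), p_teacher, reduction="batchmean")
    loss = invariance + entropy + distill
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"step {step}: loss={loss.item():.3f}")
```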
2404.16880 | Yikun Zhang | Yikun Zhang, Geyan Ye, Chaohao Yuan, Bo Han, Long-Kai Huang, Jianhua
Yao, Wei Liu, Yu Rong | Atomas: Hierarchical Alignment on Molecule-Text for Unified Molecule
Understanding and Generation | null | null | null | null | q-bio.QM cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecule-and-text cross-modal representation learning has emerged as a
promising direction for enhancing the quality of molecular representation,
thereby improving performance in various scientific fields. However, most
approaches employ a global alignment approach to learn the knowledge from
different modalities that may fail to capture fine-grained information, such as
molecule-and-text fragments and stereoisomeric nuances, which is crucial for
downstream tasks. Furthermore, such information cannot be modeled with a
similar global alignment strategy, due to the lack of annotations about
fine-grained fragments in existing datasets. In this paper, we propose
Atomas, a hierarchical molecular representation learning framework that jointly
learns representations from SMILES strings and text. We design a Hierarchical
Adaptive Alignment model to automatically learn the fine-grained fragment
correspondence between two modalities and align these representations at three
semantic levels. Atomas's end-to-end training framework supports understanding
and generating molecules, enabling a wider range of downstream tasks. Atomas
achieves superior performance across 12 tasks on 11 datasets, outperforming 11
baseline models, thus highlighting the effectiveness and versatility of our
method. Scaling experiments further demonstrate Atomas's robustness and
scalability. Moreover, visualization and qualitative analysis, validated by
human experts, confirm the chemical relevance of our approach. Codes are
released on https://github.com/yikunpku/Atomas.
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2024 12:35:44 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 16:19:08 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 16:34:19 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Yikun",
""
],
[
"Ye",
"Geyan",
""
],
[
"Yuan",
"Chaohao",
""
],
[
"Han",
"Bo",
""
],
[
"Huang",
"Long-Kai",
""
],
[
"Yao",
"Jianhua",
""
],
[
"Liu",
"Wei",
""
],
[
"Rong",
"Yu",
""
]
]
| TITLE: Atomas: Hierarchical Alignment on Molecule-Text for Unified Molecule
Understanding and Generation
ABSTRACT: Molecule-and-text cross-modal representation learning has emerged as a
promising direction for enhancing the quality of molecular representation,
thereby improving performance in various scientific fields. However, most
approaches employ a global alignment approach to learn the knowledge from
different modalities that may fail to capture fine-grained information, such as
molecule-and-text fragments and stereoisomeric nuances, which is crucial for
downstream tasks. Furthermore, such information cannot be modeled with a
similar global alignment strategy, due to the lack of annotations about
fine-grained fragments in existing datasets. In this paper, we propose
Atomas, a hierarchical molecular representation learning framework that jointly
learns representations from SMILES strings and text. We design a Hierarchical
Adaptive Alignment model to automatically learn the fine-grained fragment
correspondence between two modalities and align these representations at three
semantic levels. Atomas's end-to-end training framework supports understanding
and generating molecules, enabling a wider range of downstream tasks. Atomas
achieves superior performance across 12 tasks on 11 datasets, outperforming 11
baseline models, thus highlighting the effectiveness and versatility of our
method. Scaling experiments further demonstrate Atomas's robustness and
scalability. Moreover, visualization and qualitative analysis, validated by
human experts, confirm the chemical relevance of our approach. Codes are
released on https://github.com/yikunpku/Atomas.
| no_new_dataset | 0.953579 |
2404.18479 | Daniel Nyg{\aa}rd Ege | Daniel Nyg{\aa}rd Ege, Henrik H. {\O}vreb{\o}, Vegar Stubberud, Martin
Francis Berg, Christer Elverum, Martin Steinert, H{\aa}vard Vestad | ChatGPT as an inventor: Eliciting the strengths and weaknesses of
current large language models against humans in engineering design | null | null | 10.1017/S0890060425000010 | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | This study compares the design practices and performance of ChatGPT 4.0, a
large language model (LLM), against graduate engineering students in a 48-hour
prototyping hackathon, based on a dataset comprising more than 100 prototypes.
The LLM participated by instructing two human participants, who executed its
instructions and provided objective feedback; it generated ideas autonomously
and made all design decisions without human intervention. The LLM exhibited similar
prototyping practices to human participants and finished second among six
teams, successfully designing and providing building instructions for
functional prototypes. The LLM's concept generation capabilities were
particularly strong. However, the LLM prematurely abandoned promising concepts
when facing minor difficulties, added unnecessary complexity to designs, and
experienced design fixation. Communication between the LLM and participants was
challenging due to vague or unclear descriptions, and the LLM had difficulty
maintaining continuity and relevance in answers. Based on these findings, six
recommendations for implementing an LLM like ChatGPT in the design process are
proposed, including leveraging it for ideation, ensuring human oversight for
key decisions, implementing iterative feedback loops, prompting it to consider
alternatives, and assigning specific and manageable tasks at a subsystem level.
| [
{
"version": "v1",
"created": "Mon, 29 Apr 2024 07:33:06 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ege",
"Daniel Nygård",
""
],
[
"Øvrebø",
"Henrik H.",
""
],
[
"Stubberud",
"Vegar",
""
],
[
"Berg",
"Martin Francis",
""
],
[
"Elverum",
"Christer",
""
],
[
"Steinert",
"Martin",
""
],
[
"Vestad",
"Håvard",
""
]
]
| TITLE: ChatGPT as an inventor: Eliciting the strengths and weaknesses of
current large language models against humans in engineering design
ABSTRACT: This study compares the design practices and performance of ChatGPT 4.0, a
large language model (LLM), against graduate engineering students in a 48-hour
prototyping hackathon, based on a dataset comprising more than 100 prototypes.
The LLM participated by instructing two human participants, who executed its
instructions and provided objective feedback; it generated ideas autonomously
and made all design decisions without human intervention. The LLM exhibited similar
prototyping practices to human participants and finished second among six
teams, successfully designing and providing building instructions for
functional prototypes. The LLM's concept generation capabilities were
particularly strong. However, the LLM prematurely abandoned promising concepts
when facing minor difficulties, added unnecessary complexity to designs, and
experienced design fixation. Communication between the LLM and participants was
challenging due to vague or unclear descriptions, and the LLM had difficulty
maintaining continuity and relevance in answers. Based on these findings, six
recommendations for implementing an LLM like ChatGPT in the design process are
proposed, including leveraging it for ideation, ensuring human oversight for
key decisions, implementing iterative feedback loops, prompting it to consider
alternatives, and assigning specific and manageable tasks at a subsystem level.
| no_new_dataset | 0.885928 |
2404.18501 | Ruijie Tao | Ruijie Tao, Xinyuan Qian, Yidi Jiang, Junjie Li, Jiadong Wang and
Haizhou Li | Audio-Visual Target Speaker Extraction with Reverse Selective Auditory
Attention | null | null | null | null | eess.AS cs.SD | http://creativecommons.org/licenses/by/4.0/ | Audio-visual target speaker extraction (AV-TSE) aims to extract the specific
person's speech from the audio mixture given auxiliary visual cues. Previous
methods usually search for the target voice through speech-lip synchronization.
However, this strategy mainly focuses on the existence of target speech, while
ignoring the variations of the noise characteristics, i.e., interference
speaker and the background noise. That may result in extracting noisy signals
from the incorrect sound source in challenging acoustic situations. To this
end, we propose a novel selective auditory attention mechanism, which can
suppress interference speakers and non-speech signals to avoid incorrect
speaker extraction. By estimating and utilizing the undesired noisy signal
through this mechanism, we design an AV-TSE framework named
Subtraction-and-ExtrAction network (SEANet) to suppress the noisy signals. We
conduct extensive experiments by re-implementing three popular AV-TSE methods as
the baselines and involving nine metrics for evaluation. The experimental
results show that our proposed SEANet achieves state-of-the-art results and
performs well for all five datasets. The code can be found in:
https://github.com/TaoRuijie/SEANet.git
| [
{
"version": "v1",
"created": "Mon, 29 Apr 2024 08:43:57 GMT"
},
{
"version": "v2",
"created": "Wed, 8 May 2024 08:05:22 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 13:54:41 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Tao",
"Ruijie",
""
],
[
"Qian",
"Xinyuan",
""
],
[
"Jiang",
"Yidi",
""
],
[
"Li",
"Junjie",
""
],
[
"Wang",
"Jiadong",
""
],
[
"Li",
"Haizhou",
""
]
]
| TITLE: Audio-Visual Target Speaker Extraction with Reverse Selective Auditory
Attention
ABSTRACT: Audio-visual target speaker extraction (AV-TSE) aims to extract the specific
person's speech from the audio mixture given auxiliary visual cues. Previous
methods usually search for the target voice through speech-lip synchronization.
However, this strategy mainly focuses on the existence of target speech, while
ignoring the variations of the noise characteristics, i.e., interference
speaker and the background noise. That may result in extracting noisy signals
from the incorrect sound source in challenging acoustic situations. To this
end, we propose a novel selective auditory attention mechanism, which can
suppress interference speakers and non-speech signals to avoid incorrect
speaker extraction. By estimating and utilizing the undesired noisy signal
through this mechanism, we design an AV-TSE framework named
Subtraction-and-ExtrAction network (SEANet) to suppress the noisy signals. We
conduct extensive experiments by re-implementing three popular AV-TSE methods as
the baselines and involving nine metrics for evaluation. The experimental
results show that our proposed SEANet achieves state-of-the-art results and
performs well for all five datasets. The code can be found in:
https://github.com/TaoRuijie/SEANet.git
| no_new_dataset | 0.945751 |
2404.19489 | Charlotte Frenkel | Yufeng Yang, Adrian Kneip, Charlotte Frenkel | EvGNN: An Event-driven Graph Neural Network Accelerator for Edge Vision | Accepted for publication in the IEEE Transactions on Circuits and
Systems for Artificial Intelligence, 2025. 14 pages, 14 figures | null | 10.1109/TCASAI.2024.3520905 | null | cs.CV cs.AR cs.ET cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Edge vision systems combining sensing and embedded processing promise
low-latency, decentralized, and energy-efficient solutions that forgo reliance
on the cloud. As opposed to conventional frame-based vision sensors,
event-based cameras deliver a microsecond-scale temporal resolution with sparse
information encoding, thereby outlining new opportunities for edge vision
systems. However, mainstream algorithms for frame-based vision, which mostly
rely on convolutional neural networks (CNNs), can hardly exploit the advantages
of event-based vision as they are typically optimized for dense matrix-vector
multiplications. While event-driven graph neural networks (GNNs) have recently
emerged as a promising solution for sparse event-based vision, their irregular
structure is a challenge that currently hinders the design of efficient
hardware accelerators. In this paper, we propose EvGNN, the first event-driven
GNN accelerator for low-footprint, ultra-low-latency, and high-accuracy edge
vision with event-based cameras. It relies on three central ideas: (i) directed
dynamic graphs exploiting single-hop nodes with edge-free storage, (ii) event
queues for the efficient identification of local neighbors within a
spatiotemporally decoupled search range, and (iii) a novel layer-parallel
processing scheme allowing for a low-latency execution of multi-layer GNNs. We
deployed EvGNN on a Xilinx KV260 Ultrascale+ MPSoC platform and benchmarked it
on the N-CARS dataset for car recognition, demonstrating a classification
accuracy of 87.8% and an average latency per event of 16$\mu$s, thereby
enabling real-time, microsecond-resolution event-based vision at the edge.
| [
{
"version": "v1",
"created": "Tue, 30 Apr 2024 12:18:47 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 23:55:01 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yang",
"Yufeng",
""
],
[
"Kneip",
"Adrian",
""
],
[
"Frenkel",
"Charlotte",
""
]
]
| TITLE: EvGNN: An Event-driven Graph Neural Network Accelerator for Edge Vision
ABSTRACT: Edge vision systems combining sensing and embedded processing promise
low-latency, decentralized, and energy-efficient solutions that forgo reliance
on the cloud. As opposed to conventional frame-based vision sensors,
event-based cameras deliver a microsecond-scale temporal resolution with sparse
information encoding, thereby outlining new opportunities for edge vision
systems. However, mainstream algorithms for frame-based vision, which mostly
rely on convolutional neural networks (CNNs), can hardly exploit the advantages
of event-based vision as they are typically optimized for dense matrix-vector
multiplications. While event-driven graph neural networks (GNNs) have recently
emerged as a promising solution for sparse event-based vision, their irregular
structure is a challenge that currently hinders the design of efficient
hardware accelerators. In this paper, we propose EvGNN, the first event-driven
GNN accelerator for low-footprint, ultra-low-latency, and high-accuracy edge
vision with event-based cameras. It relies on three central ideas: (i) directed
dynamic graphs exploiting single-hop nodes with edge-free storage, (ii) event
queues for the efficient identification of local neighbors within a
spatiotemporally decoupled search range, and (iii) a novel layer-parallel
processing scheme allowing for a low-latency execution of multi-layer GNNs. We
deployed EvGNN on a Xilinx KV260 Ultrascale+ MPSoC platform and benchmarked it
on the N-CARS dataset for car recognition, demonstrating a classification
accuracy of 87.8% and an average latency per event of 16$\mu$s, thereby
enabling real-time, microsecond-resolution event-based vision at the edge.
| no_new_dataset | 0.951278 |
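The event-queue idea at the heart of EvGNN — keep recent events per spatial cell and look up neighbors within a spatiotemporally decoupled window — can be sketched with plain queues. Cell size, window parameters, and the event format here are assumptions, not the accelerator's actual configuration:

```python
# Sketch of event queues for spatiotemporal neighbor lookup.
from collections import deque

CELL = 8            # pixels per spatial cell (assumed)
T_WIN = 10_000      # temporal window in microseconds (assumed)
R_WIN = 1           # neighbor search radius in cells (assumed)

queues: dict[tuple[int, int], deque] = {}

def insert_event(x: int, y: int, t: int):
    """Add an event and prune stale ones from its cell's queue."""
    key = (x // CELL, y // CELL)
    q = queues.setdefault(key, deque())
    q.append((x, y, t))
    while q and t - q[0][2] > T_WIN:
        q.popleft()

def neighbors(x: int, y: int, t: int):
    """Collect events within the spatiotemporal window around (x, y, t)."""
    cx, cy = x // CELL, y // CELL
    out = []
    for dx in range(-R_WIN, R_WIN + 1):
        for dy in range(-R_WIN, R_WIN + 1):
            for ex, ey, et in queues.get((cx + dx, cy + dy), ()):
                if 0 <= t - et <= T_WIN:
                    out.append((ex, ey, et))
    return out

for x, y, t in [(10, 10, 0), (12, 11, 3000), (80, 80, 4000)]:
    insert_event(x, y, t)
print("neighbors of (11, 10) at t=5000:", neighbors(11, 10, 5000))
```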
2405.01649 | Tianle Xia | Tianle Xia, Liang Ding, Guojia Wan, Yibing Zhan, Bo Du, Dacheng Tao | Improving Complex Reasoning over Knowledge Graph with Logic-Aware
Curriculum Tuning | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Answering complex queries over incomplete knowledge graphs (KGs) is a
challenging task. Most previous works have focused on learning entity/relation
embeddings and simulating first-order logic operators with various neural
networks. However, they are bottlenecked by the inability to share world
knowledge to improve logical reasoning, thus resulting in suboptimal
performance. In this paper, we propose a complex reasoning schema over KG upon
large language models (LLMs), containing a curriculum-based logical-aware
instruction tuning framework, named LACT. Specifically, we augment the
arbitrary first-order logical queries via binary tree decomposition, to
stimulate the reasoning capability of LLMs. To address the difficulty gap among
different types of complex queries, we design a simple and flexible logic-aware
curriculum learning framework. Experiments across widely used datasets
demonstrate that LACT yields substantial improvements (an average gain of +5.5% in MRR score) over advanced methods, achieving a new state of the art.
| [
{
"version": "v1",
"created": "Thu, 2 May 2024 18:12:08 GMT"
},
{
"version": "v2",
"created": "Tue, 7 May 2024 16:10:51 GMT"
},
{
"version": "v3",
"created": "Wed, 8 May 2024 18:21:04 GMT"
},
{
"version": "v4",
"created": "Sat, 1 Mar 2025 17:24:49 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Xia",
"Tianle",
""
],
[
"Ding",
"Liang",
""
],
[
"Wan",
"Guojia",
""
],
[
"Zhan",
"Yibing",
""
],
[
"Du",
"Bo",
""
],
[
"Tao",
"Dacheng",
""
]
]
| TITLE: Improving Complex Reasoning over Knowledge Graph with Logic-Aware
Curriculum Tuning
ABSTRACT: Answering complex queries over incomplete knowledge graphs (KGs) is a
challenging task. Most previous works have focused on learning entity/relation
embeddings and simulating first-order logic operators with various neural
networks. However, they are bottlenecked by the inability to share world
knowledge to improve logical reasoning, thus resulting in suboptimal
performance. In this paper, we propose a complex reasoning schema over KG upon
large language models (LLMs), containing a curriculum-based logical-aware
instruction tuning framework, named LACT. Specifically, we augment the
arbitrary first-order logical queries via binary tree decomposition, to
stimulate the reasoning capability of LLMs. To address the difficulty gap among
different types of complex queries, we design a simple and flexible logic-aware
curriculum learning framework. Experiments across widely used datasets
demonstrate that LACT yields substantial improvements (an average gain of +5.5% in MRR score) over advanced methods, achieving a new state of the art.
| no_new_dataset | 0.942718 |
2405.03049 | Daniele Lanzoni | Luis Mart\'in Encinar, Daniele Lanzoni, Andrea Fantasia, Fabrizio
Rovaris, Roberto Bergamaschini, Francesco Montalenti | Quantitative analysis of the prediction performance of a Convolutional
Neural Network evaluating the surface elastic energy of a strained film | 15 pages, 9 figures | null | 10.1016/j.commatsci.2024.113657 | null | physics.comp-ph cond-mat.mtrl-sci | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Deep Learning approach is devised to estimate the elastic energy density
$\rho$ at the free surface of an undulated stressed film. About 190000
arbitrary surface profiles h(x) are randomly generated by Perlin noise and
paired with the corresponding elastic energy density profiles $\rho(x)$,
computed by a semi-analytical Green's function approximation, suitable for
small-slope morphologies. The resulting dataset and smaller subsets of it are
used for the training of a Fully Convolutional Neural Network. The trained
models are shown to return quantitative predictions of $\rho$, not only in
terms of convergence of the loss function during training, but also in
validation and testing, with better results in the case of the larger dataset.
Extensive tests are performed to assess the generalization capability of the
Neural Network model when applied to profiles with localized features or
assigned geometries not included in the original dataset. Moreover, its
possible exploitation on domain sizes beyond the one used in the training is
also analyzed in-depth. The conditions providing a one-to-one reproduction of
the ground-truth $\rho(x)$ profiles computed by the Green's approximation are
highlighted along with critical cases. The accuracy and robustness of the
deep-learned $\rho(x)$ are further demonstrated in the time-integration of
surface evolution problems described by simple partial differential equations
of evaporation/condensation and surface diffusion.
| [
{
"version": "v1",
"created": "Sun, 5 May 2024 20:34:16 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Encinar",
"Luis Martín",
""
],
[
"Lanzoni",
"Daniele",
""
],
[
"Fantasia",
"Andrea",
""
],
[
"Rovaris",
"Fabrizio",
""
],
[
"Bergamaschini",
"Roberto",
""
],
[
"Montalenti",
"Francesco",
""
]
]
| TITLE: Quantitative analysis of the prediction performance of a Convolutional
Neural Network evaluating the surface elastic energy of a strained film
ABSTRACT: A Deep Learning approach is devised to estimate the elastic energy density
$\rho$ at the free surface of an undulated stressed film. About 190000
arbitrary surface profiles h(x) are randomly generated by Perlin noise and
paired with the corresponding elastic energy density profiles $\rho(x)$,
computed by a semi-analytical Green's function approximation, suitable for
small-slope morphologies. The resulting dataset and smaller subsets of it are
used for the training of a Fully Convolutional Neural Network. The trained
models are shown to return quantitative predictions of $\rho$, not only in
terms of convergence of the loss function during training, but also in
validation and testing, with better results in the case of the larger dataset.
Extensive tests are performed to assess the generalization capability of the
Neural Network model when applied to profiles with localized features or
assigned geometries not included in the original dataset. Moreover, its
possible exploitation on domain sizes beyond the one used in training is also analyzed in depth. The conditions providing a one-to-one reproduction of
the ground-truth $\rho(x)$ profiles computed by the Green's approximation are
highlighted along with critical cases. The accuracy and robustness of the
deep-learned $\rho(x)$ are further demonstrated in the time-integration of
surface evolution problems described by simple partial differential equations
of evaporation/condensation and surface diffusion.
| no_new_dataset | 0.950041 |
2405.03239 | Shuhao Mei | Shuhao Mei, Xin Li, Yuxi Zhou, Jiahao Xu, Yong Zhang, Yuxuan Wan, Shan
Cao, Qinghao Zhao, Shijia Geng, Junqing Xie, Shengyong Chen, Shenda Hong | Deep Learning for Detecting and Early Predicting Chronic Obstructive
Pulmonary Disease from Spirogram Time Series | null | npj Syst. Biol. Appl. 11, 18 (2025) | 10.1038/s41540-025-00489-y | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chronic Obstructive Pulmonary Disease (COPD) is a chronic lung condition
characterized by airflow obstruction. Current diagnostic methods primarily rely
on identifying prominent features in spirometry (Volume-Flow time series) to
detect COPD, but they are not adept at predicting future COPD risk based on
subtle data patterns. In this study, we introduce a novel deep learning-based
approach, DeepSpiro, aimed at the early prediction of future COPD risk.
DeepSpiro consists of four key components: SpiroSmoother for stabilizing the
Volume-Flow curve, SpiroEncoder for capturing volume variability-pattern
through key patches of varying lengths, SpiroExplainer for integrating
heterogeneous data and explaining predictions through volume attention, and
SpiroPredictor for predicting the disease risk of undiagnosed high-risk
patients based on key patch concavity, with prediction horizons of 1, 2, 3, 4,
5 years, or even longer. Evaluated on the UK Biobank dataset, DeepSpiro
achieved an AUC of 0.8328 for COPD detection and demonstrated strong predictive
performance for future COPD risk (p-value < 0.001). In summary, DeepSpiro can effectively predict the long-term progression of COPD.
| [
{
"version": "v1",
"created": "Mon, 6 May 2024 07:48:34 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Oct 2024 05:18:11 GMT"
},
{
"version": "v3",
"created": "Sat, 28 Dec 2024 14:18:37 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Mei",
"Shuhao",
""
],
[
"Li",
"Xin",
""
],
[
"Zhou",
"Yuxi",
""
],
[
"Xu",
"Jiahao",
""
],
[
"Zhang",
"Yong",
""
],
[
"Wan",
"Yuxuan",
""
],
[
"Cao",
"Shan",
""
],
[
"Zhao",
"Qinghao",
""
],
[
"Geng",
"Shijia",
""
],
[
"Xie",
"Junqing",
""
],
[
"Chen",
"Shengyong",
""
],
[
"Hong",
"Shenda",
""
]
]
| TITLE: Deep Learning for Detecting and Early Predicting Chronic Obstructive
Pulmonary Disease from Spirogram Time Series
ABSTRACT: Chronic Obstructive Pulmonary Disease (COPD) is a chronic lung condition
characterized by airflow obstruction. Current diagnostic methods primarily rely
on identifying prominent features in spirometry (Volume-Flow time series) to
detect COPD, but they are not adept at predicting future COPD risk based on
subtle data patterns. In this study, we introduce a novel deep learning-based
approach, DeepSpiro, aimed at the early prediction of future COPD risk.
DeepSpiro consists of four key components: SpiroSmoother for stabilizing the
Volume-Flow curve, SpiroEncoder for capturing volume variability-pattern
through key patches of varying lengths, SpiroExplainer for integrating
heterogeneous data and explaining predictions through volume attention, and
SpiroPredictor for predicting the disease risk of undiagnosed high-risk
patients based on key patch concavity, with prediction horizons of 1, 2, 3, 4,
5 years, or even longer. Evaluated on the UK Biobank dataset, DeepSpiro
achieved an AUC of 0.8328 for COPD detection and demonstrated strong predictive
performance for future COPD risk (p-value < 0.001). In summary, DeepSpiro can effectively predict the long-term progression of COPD.
| no_new_dataset | 0.945951 |
2405.04286 | Junchao Wu | Junchao Wu, Runzhe Zhan, Derek F. Wong, Shu Yang, Xuebo Liu, Lidia S.
Chao, Min Zhang | Who Wrote This? The Key to Zero-Shot LLM-Generated Text Detection Is
GECScore | COLING 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The efficacy of detectors for texts generated by large language models (LLMs)
substantially depends on the availability of large-scale training data.
However, white-box zero-shot detectors, which require no such data, are limited
by the accessibility of the source model of the LLM-generated text. In this
paper, we propose a simple yet effective black-box zero-shot detection approach
based on the observation that, from the perspective of LLMs, human-written
texts typically contain more grammatical errors than LLM-generated texts. This
approach involves calculating the Grammar Error Correction Score (GECScore) for
the given text to differentiate between human-written and LLM-generated text.
Experimental results show that our method outperforms current state-of-the-art
(SOTA) zero-shot and supervised methods, achieving an average AUROC of 98.62%
across the XSum and Writing Prompts datasets. Additionally, our approach
demonstrates strong reliability in the wild, exhibiting robust generalization
and resistance to paraphrasing attacks. Data and code are available at:
https://github.com/NLP2CT/GECScore.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 12:57:01 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 11:19:12 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wu",
"Junchao",
""
],
[
"Zhan",
"Runzhe",
""
],
[
"Wong",
"Derek F.",
""
],
[
"Yang",
"Shu",
""
],
[
"Liu",
"Xuebo",
""
],
[
"Chao",
"Lidia S.",
""
],
[
"Zhang",
"Min",
""
]
]
| TITLE: Who Wrote This? The Key to Zero-Shot LLM-Generated Text Detection Is
GECScore
ABSTRACT: The efficacy of detectors for texts generated by large language models (LLMs)
substantially depends on the availability of large-scale training data.
However, white-box zero-shot detectors, which require no such data, are limited
by the accessibility of the source model of the LLM-generated text. In this
paper, we propose a simple yet effective black-box zero-shot detection approach
based on the observation that, from the perspective of LLMs, human-written
texts typically contain more grammatical errors than LLM-generated texts. This
approach involves calculating the Grammar Error Correction Score (GECScore) for
the given text to differentiate between human-written and LLM-generated text.
Experimental results show that our method outperforms current state-of-the-art
(SOTA) zero-shot and supervised methods, achieving an average AUROC of 98.62%
across the XSum and Writing Prompts datasets. Additionally, our approach
demonstrates strong reliability in the wild, exhibiting robust generalization
and resistance to paraphrasing attacks. Data and code are available at:
https://github.com/NLP2CT/GECScore.
| no_new_dataset | 0.949389 |
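The GECScore idea above reduces to a few lines of code once a grammar corrector is available. In this hedged sketch, `correct_grammar` is a hypothetical stand-in for any grammatical-error-correction model, and the similarity measure and threshold are illustrative choices rather than the paper's exact formulation.

```python
# Hedged sketch of a GECScore-style detector. `correct_grammar` is a
# hypothetical placeholder; back it with any GEC model of your choice.
import difflib

def correct_grammar(text: str) -> str:
    # Placeholder: a real implementation would call a GEC model here.
    return text

def gec_score(text: str) -> float:
    """Similarity between a text and its grammar-corrected version.
    Human-written text tends to change more under correction (lower score);
    LLM-generated text tends to change less (higher score)."""
    corrected = correct_grammar(text)
    return difflib.SequenceMatcher(None, text, corrected).ratio()

def classify(text: str, threshold: float = 0.99) -> str:
    # The threshold is an assumption; in practice it is tuned on held-out data.
    return "llm-generated" if gec_score(text) >= threshold else "human-written"

print(classify("The cat sat on the mat."))
```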
2405.05702 | Mingrui Li | Mingrui Li, Jingwei Huang, Lei Sun, Aaron Xuxiang Tian, Tianchen Deng,
Hongyu Wang | NGM-SLAM: Gaussian Splatting SLAM with Radiance Field Submap | 9pages, 4 figures | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | SLAM systems based on Gaussian Splatting have garnered attention due to their
capabilities for rapid real-time rendering and high-fidelity mapping. However,
current Gaussian Splatting SLAM systems usually struggle with large scene
representation and lack effective loop closure detection. To address these
issues, we introduce NGM-SLAM, the first 3DGS-based SLAM system that utilizes
neural radiance field submaps for progressive scene expression, effectively
integrating the strengths of neural radiance fields and 3D Gaussian Splatting.
We utilize neural radiance field submaps as supervision and achieve
high-quality scene expression and online loop closure adjustments through
Gaussian rendering of fused submaps. Our results on multiple real-world scenes
and large-scale scene datasets demonstrate that our method can achieve accurate
hole filling and high-quality scene expression, supporting monocular, stereo,
and RGB-D inputs, and achieving state-of-the-art scene reconstruction and
tracking performance.
| [
{
"version": "v1",
"created": "Thu, 9 May 2024 11:57:42 GMT"
},
{
"version": "v2",
"created": "Sat, 18 May 2024 06:55:30 GMT"
},
{
"version": "v3",
"created": "Thu, 23 May 2024 12:25:32 GMT"
},
{
"version": "v4",
"created": "Fri, 24 May 2024 08:42:37 GMT"
},
{
"version": "v5",
"created": "Mon, 27 May 2024 10:16:49 GMT"
},
{
"version": "v6",
"created": "Fri, 28 Jun 2024 06:23:27 GMT"
},
{
"version": "v7",
"created": "Sun, 2 Mar 2025 09:06:14 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Li",
"Mingrui",
""
],
[
"Huang",
"Jingwei",
""
],
[
"Sun",
"Lei",
""
],
[
"Tian",
"Aaron Xuxiang",
""
],
[
"Deng",
"Tianchen",
""
],
[
"Wang",
"Hongyu",
""
]
]
| TITLE: NGM-SLAM: Gaussian Splatting SLAM with Radiance Field Submap
ABSTRACT: SLAM systems based on Gaussian Splatting have garnered attention due to their
capabilities for rapid real-time rendering and high-fidelity mapping. However,
current Gaussian Splatting SLAM systems usually struggle with large scene
representation and lack effective loop closure detection. To address these
issues, we introduce NGM-SLAM, the first 3DGS-based SLAM system that utilizes
neural radiance field submaps for progressive scene expression, effectively
integrating the strengths of neural radiance fields and 3D Gaussian Splatting.
We utilize neural radiance field submaps as supervision and achieve
high-quality scene expression and online loop closure adjustments through
Gaussian rendering of fused submaps. Our results on multiple real-world scenes
and large-scale scene datasets demonstrate that our method can achieve accurate
hole filling and high-quality scene expression, supporting monocular, stereo,
and RGB-D inputs, and achieving state-of-the-art scene reconstruction and
tracking performance.
| no_new_dataset | 0.948537 |
2405.09980 | Jian Chen | Jian Chen, Peilin Zhou, Yining Hua, Yingxin Loh, Kehui Chen, Ziyuan
Li, Bing Zhu, Junwei Liang | FinTextQA: A Dataset for Long-form Financial Question Answering | null | null | 10.18653/v1/2024.acl-long.328 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Accurate evaluation of financial question answering (QA) systems necessitates
a comprehensive dataset encompassing diverse question types and contexts.
However, current financial QA datasets lack scope diversity and question
complexity. This work introduces FinTextQA, a novel dataset for long-form
question answering (LFQA) in finance. FinTextQA comprises 1,262 high-quality,
source-attributed QA pairs extracted and selected from finance textbooks and
government agency websites. Moreover, we developed a Retrieval-Augmented
Generation (RAG)-based LFQA system, comprising an embedder, retriever,
reranker, and generator. A multi-faceted evaluation approach, including human
ranking, automatic metrics, and GPT-4 scoring, was employed to benchmark the
performance of different LFQA system configurations under heightened noisy
conditions. The results indicate that: (1) Among all compared generators,
Baichuan2-7B competes closely with GPT-3.5-turbo in accuracy score; (2) The
most effective system configuration on our dataset involved setting the
embedder, retriever, reranker, and generator as Ada2, Automated Merged
Retrieval, Bge-Reranker-Base, and Baichuan2-7B, respectively; (3) models are
less susceptible to noise once the context length reaches a specific threshold.
| [
{
"version": "v1",
"created": "Thu, 16 May 2024 10:53:31 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chen",
"Jian",
""
],
[
"Zhou",
"Peilin",
""
],
[
"Hua",
"Yining",
""
],
[
"Loh",
"Yingxin",
""
],
[
"Chen",
"Kehui",
""
],
[
"Li",
"Ziyuan",
""
],
[
"Zhu",
"Bing",
""
],
[
"Liang",
"Junwei",
""
]
]
| TITLE: FinTextQA: A Dataset for Long-form Financial Question Answering
ABSTRACT: Accurate evaluation of financial question answering (QA) systems necessitates
a comprehensive dataset encompassing diverse question types and contexts.
However, current financial QA datasets lack scope diversity and question
complexity. This work introduces FinTextQA, a novel dataset for long-form
question answering (LFQA) in finance. FinTextQA comprises 1,262 high-quality,
source-attributed QA pairs extracted and selected from finance textbooks and
government agency websites. Moreover, we developed a Retrieval-Augmented
Generation (RAG)-based LFQA system, comprising an embedder, retriever,
reranker, and generator. A multi-faceted evaluation approach, including human
ranking, automatic metrics, and GPT-4 scoring, was employed to benchmark the
performance of different LFQA system configurations under heightened noisy
conditions. The results indicate that: (1) Among all compared generators,
Baichuan2-7B competes closely with GPT-3.5-turbo in accuracy score; (2) The
most effective system configuration on our dataset involved setting the
embedder, retriever, reranker, and generator as Ada2, Automated Merged
Retrieval, Bge-Reranker-Base, and Baichuan2-7B, respectively; (3) models are
less susceptible to noise once the context length reaches a specific threshold.
| new_dataset | 0.967595 |
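Since the FinTextQA abstract spells out a four-stage pipeline, a minimal duck-typed sketch of it is given below. All four components are stand-ins, and the paper's best configuration (Ada2 embedder, Automated Merged Retrieval, Bge-Reranker-Base, Baichuan2-7B) would be plugged in behind these interfaces.

```python
# Hedged sketch of the embedder -> retriever -> reranker -> generator LFQA
# pipeline. Every component is a stand-in, not the paper's implementation.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RAGPipeline:
    embed: Callable[[str], List[float]]                 # e.g. an Ada2-style embedder
    retrieve: Callable[[List[float], int], List[str]]   # e.g. Automated Merged Retrieval
    rerank: Callable[[str, List[str]], List[str]]       # e.g. a Bge-Reranker-Base wrapper
    generate: Callable[[str, List[str]], str]           # e.g. a Baichuan2-7B wrapper

    def answer(self, question: str, k: int = 10, top: int = 3) -> str:
        candidates = self.retrieve(self.embed(question), k)
        context = self.rerank(question, candidates)[:top]
        return self.generate(question, context)

# Toy wiring with trivial stand-ins, just to show the data flow.
pipe = RAGPipeline(
    embed=lambda q: [float(len(q))],
    retrieve=lambda v, k: ["passage-1", "passage-2"],
    rerank=lambda q, docs: docs,
    generate=lambda q, ctx: f"answer to {q!r} using {len(ctx)} passages",
)
print(pipe.answer("What is a bond coupon?"))
```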
2405.12971 | Theodore Zhao | Theodore Zhao, Yu Gu, Jianwei Yang, Naoto Usuyama, Ho Hin Lee, Tristan
Naumann, Jianfeng Gao, Angela Crabtree, Jacob Abel, Christine Moung-Wen,
Brian Piening, Carlo Bifulco, Mu Wei, Hoifung Poon, Sheng Wang | BiomedParse: a biomedical foundation model for image parsing of
everything everywhere all at once | Project page: https://aka.ms/biomedparse-project . Nat Methods (2024) | Nat Methods 22, 166-176 (2025) | 10.1038/s41592-024-02499-w | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biomedical image analysis is fundamental for biomedical discovery in cell
biology, pathology, radiology, and many other biomedical domains. Holistic
image analysis comprises interdependent subtasks such as segmentation,
detection, and recognition of relevant objects. Here, we propose BiomedParse, a
biomedical foundation model for image parsing that can jointly conduct
segmentation, detection, and recognition for 82 object types across 9 imaging
modalities. Through joint learning, we can improve accuracy for individual
tasks and enable novel applications such as segmenting all relevant objects in
an image through a text prompt, rather than requiring users to laboriously
specify the bounding box for each object. We leveraged readily available
natural-language labels or descriptions accompanying those datasets and used
GPT-4 to harmonize the noisy, unstructured text information with established
biomedical object ontologies. We created a large dataset comprising over six
million triples of image, segmentation mask, and textual description. On image
segmentation, we showed that BiomedParse is broadly applicable, outperforming
state-of-the-art methods on 102,855 test image-mask-label triples across 9
imaging modalities (everything). On object detection, which aims to locate a
specific object of interest, BiomedParse again attained state-of-the-art
performance, especially on objects with irregular shapes (everywhere). On
object recognition, which aims to identify all objects in a given image along
with their semantic types, we showed that BiomedParse can simultaneously
segment and label all biomedical objects in an image (all at once). In summary,
BiomedParse is an all-in-one tool for biomedical image analysis by jointly
solving segmentation, detection, and recognition for all major biomedical image
modalities, paving the way for efficient and accurate image-based biomedical
discovery.
| [
{
"version": "v1",
"created": "Tue, 21 May 2024 17:54:06 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Jun 2024 00:28:58 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Jun 2024 18:16:52 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhao",
"Theodore",
""
],
[
"Gu",
"Yu",
""
],
[
"Yang",
"Jianwei",
""
],
[
"Usuyama",
"Naoto",
""
],
[
"Lee",
"Ho Hin",
""
],
[
"Naumann",
"Tristan",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Crabtree",
"Angela",
""
],
[
"Abel",
"Jacob",
""
],
[
"Moung-Wen",
"Christine",
""
],
[
"Piening",
"Brian",
""
],
[
"Bifulco",
"Carlo",
""
],
[
"Wei",
"Mu",
""
],
[
"Poon",
"Hoifung",
""
],
[
"Wang",
"Sheng",
""
]
]
| TITLE: BiomedParse: a biomedical foundation model for image parsing of
everything everywhere all at once
ABSTRACT: Biomedical image analysis is fundamental for biomedical discovery in cell
biology, pathology, radiology, and many other biomedical domains. Holistic
image analysis comprises interdependent subtasks such as segmentation,
detection, and recognition of relevant objects. Here, we propose BiomedParse, a
biomedical foundation model for image parsing that can jointly conduct
segmentation, detection, and recognition for 82 object types across 9 imaging
modalities. Through joint learning, we can improve accuracy for individual
tasks and enable novel applications such as segmenting all relevant objects in
an image through a text prompt, rather than requiring users to laboriously
specify the bounding box for each object. We leveraged readily available
natural-language labels or descriptions accompanying those datasets and used
GPT-4 to harmonize the noisy, unstructured text information with established
biomedical object ontologies. We created a large dataset comprising over six
million triples of image, segmentation mask, and textual description. On image
segmentation, we showed that BiomedParse is broadly applicable, outperforming
state-of-the-art methods on 102,855 test image-mask-label triples across 9
imaging modalities (everything). On object detection, which aims to locate a
specific object of interest, BiomedParse again attained state-of-the-art
performance, especially on objects with irregular shapes (everywhere). On
object recognition, which aims to identify all objects in a given image along
with their semantic types, we showed that BiomedParse can simultaneously
segment and label all biomedical objects in an image (all at once). In summary,
BiomedParse is an all-in-one tool for biomedical image analysis by jointly
solving segmentation, detection, and recognition for all major biomedical image
modalities, paving the way for efficient and accurate image-based biomedical
discovery.
| new_dataset | 0.960137 |
2405.13937 | Xingtong Yu | Xingtong Yu, Zhenghao Liu, Xinming Zhang, Yuan Fang | Node-Time Conditional Prompt Learning In Dynamic Graphs | Accepted by ICLR 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic graphs capture evolving interactions between entities, such as in
social networks, online learning platforms, and crowdsourcing projects. For
dynamic graph modeling, dynamic graph neural networks (DGNNs) have emerged as a
mainstream technique. However, they are generally pre-trained on the link
prediction task, leaving a significant gap from the objectives of downstream
tasks such as node classification. To bridge the gap, prompt-based learning has
gained traction on graphs, but most existing efforts focus on static graphs,
neglecting the evolution of dynamic graphs. In this paper, we propose
DYGPROMPT, a novel pre-training and prompt learning framework for dynamic graph
modeling. First, we design dual prompts to address the gap in both task
objectives and temporal variations across pre-training and downstream tasks.
Second, we recognize that node and time features mutually characterize each
other, and propose dual condition-nets to model the evolving node-time patterns
in downstream tasks. Finally, we thoroughly evaluate and analyze DYGPROMPT
through extensive experiments on four public datasets.
| [
{
"version": "v1",
"created": "Wed, 22 May 2024 19:10:24 GMT"
},
{
"version": "v2",
"created": "Sun, 26 May 2024 01:46:11 GMT"
},
{
"version": "v3",
"created": "Tue, 28 May 2024 10:07:29 GMT"
},
{
"version": "v4",
"created": "Tue, 2 Jul 2024 05:14:10 GMT"
},
{
"version": "v5",
"created": "Wed, 3 Jul 2024 02:06:07 GMT"
},
{
"version": "v6",
"created": "Thu, 3 Oct 2024 16:59:18 GMT"
},
{
"version": "v7",
"created": "Sun, 13 Oct 2024 03:40:08 GMT"
},
{
"version": "v8",
"created": "Mon, 3 Mar 2025 05:10:46 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yu",
"Xingtong",
""
],
[
"Liu",
"Zhenghao",
""
],
[
"Zhang",
"Xinming",
""
],
[
"Fang",
"Yuan",
""
]
]
| TITLE: Node-Time Conditional Prompt Learning In Dynamic Graphs
ABSTRACT: Dynamic graphs capture evolving interactions between entities, such as in
social networks, online learning platforms, and crowdsourcing projects. For
dynamic graph modeling, dynamic graph neural networks (DGNNs) have emerged as a
mainstream technique. However, they are generally pre-trained on the link
prediction task, leaving a significant gap from the objectives of downstream
tasks such as node classification. To bridge the gap, prompt-based learning has
gained traction on graphs, but most existing efforts focus on static graphs,
neglecting the evolution of dynamic graphs. In this paper, we propose
DYGPROMPT, a novel pre-training and prompt learning framework for dynamic graph
modeling. First, we design dual prompts to address the gap in both task
objectives and temporal variations across pre-training and downstream tasks.
Second, we recognize that node and time features mutually characterize each
other, and propose dual condition-nets to model the evolving node-time patterns
in downstream tasks. Finally, we thoroughly evaluate and analyze DYGPROMPT
through extensive experiments on four public datasets.
| no_new_dataset | 0.948489 |
2405.15273 | Qichao Shentu | Qichao Shentu, Beibu Li, Kai Zhao, Yang Shu, Zhongwen Rao, Lujia Pan,
Bin Yang, Chenjuan Guo | Towards a General Time Series Anomaly Detector with Adaptive Bottlenecks
and Dual Adversarial Decoders | Accepted by the 13th International Conference on Learning
Representations (ICLR 2025) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series anomaly detection plays a vital role in a wide range of
applications. Existing methods require training one specific model for each
dataset, which exhibits limited generalization capability across different
target datasets, hindering anomaly detection performance in various scenarios
with scarce training data. Aiming at this problem, we propose constructing a
general time series anomaly detection model, which is pre-trained on extensive
multi-domain datasets and can subsequently be applied to a multitude of downstream
scenarios. The significant divergence of time series data across different
domains presents two primary challenges in building such a general model: (1)
meeting the diverse requirements of appropriate information bottlenecks
tailored to different datasets in one unified model, and (2) enabling
distinction between multiple normal and abnormal patterns, both of which are crucial
for effective anomaly detection in various target scenarios. To tackle these
two challenges, we propose a General time series anomaly Detector with Adaptive
Bottlenecks and Dual Adversarial Decoders (DADA), which enables flexible
selection of bottlenecks based on different data and explicitly enhances clear
differentiation between normal and abnormal series. We conduct extensive
experiments on nine target datasets from different domains. After pre-training
on multi-domain data, DADA, serving as a zero-shot anomaly detector for these
datasets, still achieves competitive or even superior results compared to those
models tailored to each specific dataset. The code is made available at
https://github.com/decisionintelligence/DADA.
| [
{
"version": "v1",
"created": "Fri, 24 May 2024 06:59:43 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Jun 2024 06:09:19 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Oct 2024 09:28:25 GMT"
},
{
"version": "v4",
"created": "Mon, 3 Mar 2025 12:40:28 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Shentu",
"Qichao",
""
],
[
"Li",
"Beibu",
""
],
[
"Zhao",
"Kai",
""
],
[
"Shu",
"Yang",
""
],
[
"Rao",
"Zhongwen",
""
],
[
"Pan",
"Lujia",
""
],
[
"Yang",
"Bin",
""
],
[
"Guo",
"Chenjuan",
""
]
]
| TITLE: Towards a General Time Series Anomaly Detector with Adaptive Bottlenecks
and Dual Adversarial Decoders
ABSTRACT: Time series anomaly detection plays a vital role in a wide range of
applications. Existing methods require training one specific model for each
dataset, which exhibits limited generalization capability across different
target datasets, hindering anomaly detection performance in various scenarios
with scarce training data. Aiming at this problem, we propose constructing a
general time series anomaly detection model, which is pre-trained on extensive
multi-domain datasets and can subsequently be applied to a multitude of downstream
scenarios. The significant divergence of time series data across different
domains presents two primary challenges in building such a general model: (1)
meeting the diverse requirements of appropriate information bottlenecks
tailored to different datasets in one unified model, and (2) enabling
distinction between multiple normal and abnormal patterns, both of which are crucial
for effective anomaly detection in various target scenarios. To tackle these
two challenges, we propose a General time series anomaly Detector with Adaptive
Bottlenecks and Dual Adversarial Decoders (DADA), which enables flexible
selection of bottlenecks based on different data and explicitly enhances clear
differentiation between normal and abnormal series. We conduct extensive
experiments on nine target datasets from different domains. After pre-training
on multi-domain data, DADA, serving as a zero-shot anomaly detector for these
datasets, still achieves competitive or even superior results compared to those
models tailored to each specific dataset. The code is made available at
https://github.com/decisionintelligence/DADA.
| no_new_dataset | 0.949949 |
2405.18416 | Jingwei Xu | Jingwei Xu, Yikai Wang, Yiqun Zhao, Yanwei Fu, Shenghua Gao | 3D StreetUnveiler with Semantic-aware 2DGS -- a simple baseline | Project page: https://streetunveiler.github.io | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Unveiling an empty street from crowded observations captured by in-car
cameras is crucial for autonomous driving. However, removing all temporarily
static objects, such as stopped vehicles and standing pedestrians, presents a
significant challenge. Unlike object-centric 3D inpainting, which relies on
thorough observation in a small scene, street scene cases involve long
trajectories that differ from previous 3D inpainting tasks. The camera-centric
moving environment of captured videos further complicates the task due to the
limited degree and time duration of object observation. To address these
obstacles, we introduce StreetUnveiler to reconstruct an empty street.
StreetUnveiler learns a 3D representation of the empty street from crowded
observations. Our representation is based on the hard-label semantic 2D
Gaussian Splatting (2DGS) for its scalability and ability to identify Gaussians
to be removed. We inpaint the rendered image after removing unwanted Gaussians to
provide pseudo-labels and subsequently re-optimize the 2DGS. Given its temporally continuous movement, we divide the empty street scene into observed,
partial-observed, and unobserved regions, which we propose to locate through a
rendered alpha map. This decomposition helps us to minimize the regions that
need to be inpainted. To enhance the temporal consistency of the inpainting, we
introduce a novel time-reversal framework to inpaint frames in reverse order
and use later frames as references for earlier frames to fully utilize the
long-trajectory observations. Our experiments conducted on the street scene
dataset successfully reconstructed a 3D representation of the empty street. The
mesh representation of the empty street can be extracted for further
applications. The project page and more visualizations can be found at:
https://streetunveiler.github.io
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 17:57:12 GMT"
},
{
"version": "v2",
"created": "Thu, 30 May 2024 11:52:04 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2025 23:18:57 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Xu",
"Jingwei",
""
],
[
"Wang",
"Yikai",
""
],
[
"Zhao",
"Yiqun",
""
],
[
"Fu",
"Yanwei",
""
],
[
"Gao",
"Shenghua",
""
]
]
| TITLE: 3D StreetUnveiler with Semantic-aware 2DGS -- a simple baseline
ABSTRACT: Unveiling an empty street from crowded observations captured by in-car
cameras is crucial for autonomous driving. However, removing all temporarily
static objects, such as stopped vehicles and standing pedestrians, presents a
significant challenge. Unlike object-centric 3D inpainting, which relies on
thorough observation in a small scene, street scene cases involve long
trajectories that differ from previous 3D inpainting tasks. The camera-centric
moving environment of captured videos further complicates the task due to the
limited degree and time duration of object observation. To address these
obstacles, we introduce StreetUnveiler to reconstruct an empty street.
StreetUnveiler learns a 3D representation of the empty street from crowded
observations. Our representation is based on the hard-label semantic 2D
Gaussian Splatting (2DGS) for its scalability and ability to identify Gaussians
to be removed. We inpaint the rendered image after removing unwanted Gaussians to
provide pseudo-labels and subsequently re-optimize the 2DGS. Given its temporally continuous movement, we divide the empty street scene into observed,
partial-observed, and unobserved regions, which we propose to locate through a
rendered alpha map. This decomposition helps us to minimize the regions that
need to be inpainted. To enhance the temporal consistency of the inpainting, we
introduce a novel time-reversal framework to inpaint frames in reverse order
and use later frames as references for earlier frames to fully utilize the
long-trajectory observations. Our experiments conducted on the street scene
dataset successfully reconstructed a 3D representation of the empty street. The
mesh representation of the empty street can be extracted for further
applications. The project page and more visualizations can be found at:
https://streetunveiler.github.io
| no_new_dataset | 0.947186 |
2405.18448 | Thanh-Dung Le | Boammani Aser Lompo, Thanh-Dung Le | Multi-objective Representation for Numbers in Clinical Narratives: A
CamemBERT-Bio-Based Alternative to Large-Scale LLMs | Under the revision. arXiv admin note: substantial text overlap with
arXiv:2404.10171 | null | null | null | cs.CL eess.SP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The processing of numerical values is a rapidly developing area in the field
of Large Language Models (LLMs). Despite numerous advancements achieved by previous
research, significant challenges persist, particularly within the healthcare
domain. This paper investigates the limitations of Transformer models in
understanding numerical values. \textit{Objective:} this research aims to
categorize numerical values extracted from medical documents into eight
specific physiological categories using CamemBERT-bio. \textit{Methods:} In a
context where scalable methods and Large Language Models (LLMs) are emphasized,
we explore lifting the limitations of transformer-based models. We examine two
strategies: fine-tuning CamemBERT-bio on a small medical dataset while integrating
Label Embedding for Self-Attention (LESA), and combining LESA with additional
enhancement techniques such as Xval. Given that CamemBERT-bio is already
pre-trained on a large medical dataset, the first approach aims to update its
encoder with the newly added label embeddings technique. In contrast, the
second approach seeks to develop multiple representations of numbers
(contextual and magnitude-based) to achieve more robust number embeddings.
\textit{Results:} As anticipated, fine-tuning the standard CamemBERT-bio on our
small medical dataset did not improve F1 scores. However, significant
improvements were observed with CamemBERT-bio + LESA, resulting in an over 13\%
increase. Similar enhancements were noted when combining LESA with Xval,
outperforming conventional methods and giving results comparable to GPT-4.
\textit{Conclusions and Novelty:} This study introduces two innovative
techniques for handling numerical data, which are also applicable to other
modalities. We illustrate how these techniques can improve the performance of
Transformer-based models, achieving more reliable classification results even
with small datasets.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 01:15:21 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Jul 2024 08:47:52 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 09:48:15 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Lompo",
"Boammani Aser",
""
],
[
"Le",
"Thanh-Dung",
""
]
]
| TITLE: Multi-objective Representation for Numbers in Clinical Narratives: A
CamemBERT-Bio-Based Alternative to Large-Scale LLMs
ABSTRACT: The processing of numerical values is a rapidly developing area in the field
of Large Language Models (LLMs). Despite numerous advancements achieved by previous
research, significant challenges persist, particularly within the healthcare
domain. This paper investigates the limitations of Transformer models in
understanding numerical values. \textit{Objective:} This research aims to
categorize numerical values extracted from medical documents into eight
specific physiological categories using CamemBERT-bio. \textit{Methods:} In a
context where scalable methods and Large Language Models (LLMs) are emphasized,
we explore lifting the limitations of transformer-based models. We examine two
strategies: fine-tuning CamemBERT-bio on a small medical dataset while integrating
Label Embedding for Self-Attention (LESA), and combining LESA with additional
enhancement techniques such as Xval. Given that CamemBERT-bio is already
pre-trained on a large medical dataset, the first approach aims to update its
encoder with the newly added label embeddings technique. In contrast, the
second approach seeks to develop multiple representations of numbers
(contextual and magnitude-based) to achieve more robust number embeddings.
\textit{Results:} As anticipated, fine-tuning the standard CamemBERT-bio on our
small medical dataset did not improve F1 scores. However, significant
improvements were observed with CamemBERT-bio + LESA, resulting in an over 13\%
increase. Similar enhancements were noted when combining LESA with Xval,
outperforming conventional methods and giving results comparable to GPT-4.
\textit{Conclusions and Novelty:} This study introduces two innovative
techniques for handling numerical data, which are also applicable to other
modalities. We illustrate how these techniques can improve the performance of
Transformer-based models, achieving more reliable classification results even
with small datasets.
| no_new_dataset | 0.951818 |
2405.20986 | Linlin Yu | Linlin Yu, Bowen Yang, Tianhao Wang, Kangshuo Li, Feng Chen | Predictive Uncertainty Quantification for Bird's Eye View Segmentation:
A Benchmark and Novel Loss Function | ICLR 2025 | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The fusion of raw sensor data to create a Bird's Eye View (BEV)
representation is critical for autonomous vehicle planning and control. Despite
the growing interest in using deep learning models for BEV semantic
segmentation, anticipating segmentation errors and enhancing the explainability
of these models remain underexplored. This paper introduces a comprehensive
benchmark for predictive uncertainty quantification in BEV segmentation,
evaluating multiple uncertainty quantification methods across three popular
datasets with three representative network architectures. Our study focuses on
the effectiveness of quantified uncertainty in detecting misclassified and
out-of-distribution (OOD) pixels while also improving model calibration.
Through empirical analysis, we uncover challenges in existing uncertainty
quantification methods and demonstrate the potential of evidential deep
learning techniques, which capture both aleatoric and epistemic uncertainty. To
address these challenges, we propose a novel loss function,
Uncertainty-Focal-Cross-Entropy (UFCE), specifically designed for highly
imbalanced data, along with a simple uncertainty-scaling regularization term
that improves both uncertainty quantification and model calibration for BEV
segmentation.
| [
{
"version": "v1",
"created": "Fri, 31 May 2024 16:32:46 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 07:46:05 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yu",
"Linlin",
""
],
[
"Yang",
"Bowen",
""
],
[
"Wang",
"Tianhao",
""
],
[
"Li",
"Kangshuo",
""
],
[
"Chen",
"Feng",
""
]
]
| TITLE: Predictive Uncertainty Quantification for Bird's Eye View Segmentation:
A Benchmark and Novel Loss Function
ABSTRACT: The fusion of raw sensor data to create a Bird's Eye View (BEV)
representation is critical for autonomous vehicle planning and control. Despite
the growing interest in using deep learning models for BEV semantic
segmentation, anticipating segmentation errors and enhancing the explainability
of these models remain underexplored. This paper introduces a comprehensive
benchmark for predictive uncertainty quantification in BEV segmentation,
evaluating multiple uncertainty quantification methods across three popular
datasets with three representative network architectures. Our study focuses on
the effectiveness of quantified uncertainty in detecting misclassified and
out-of-distribution (OOD) pixels while also improving model calibration.
Through empirical analysis, we uncover challenges in existing uncertainty
quantification methods and demonstrate the potential of evidential deep
learning techniques, which capture both aleatoric and epistemic uncertainty. To
address these challenges, we propose a novel loss function,
Uncertainty-Focal-Cross-Entropy (UFCE), specifically designed for highly
imbalanced data, along with a simple uncertainty-scaling regularization term
that improves both uncertainty quantification and model calibration for BEV
segmentation.
| no_new_dataset | 0.943034 |
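For readers who want a feel for how a loss of this kind is assembled, here is a hedged sketch that combines the standard focal modulation with a per-pixel uncertainty weight. The exact UFCE formulation and the uncertainty-scaling term are defined in the paper, so everything below (including the `(1 + u)` weighting and the value of `gamma`) is an illustrative assumption, not the authors' code.

```python
# Hedged sketch only: a focal cross-entropy modulated by a per-pixel
# uncertainty estimate u in [0, 1]. Not the authors' UFCE definition.
import torch
import torch.nn.functional as F

def focal_ce_with_uncertainty(logits, target, u, gamma=2.0):
    # logits: (N, C); target: (N,) class indices; u: (N,) uncertainties.
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    focal = (1.0 - pt) ** gamma * (-log_pt)     # down-weight easy pixels
    return ((1.0 + u) * focal).mean()           # up-weight uncertain pixels

logits = torch.randn(4, 3)
target = torch.tensor([0, 2, 1, 1])
u = torch.rand(4)
print(focal_ce_with_uncertainty(logits, target, u))
```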
2406.00987 | Wenjing Chang | Wenjing Chang, Kay Liu, Philip S. Yu, Jianjun Yu | Enhancing Fairness in Unsupervised Graph Anomaly Detection through
Disentanglement | Accepted to TMLR. Code available at
https://github.com/AhaChang/DEFEND | null | null | null | cs.LG cs.CY cs.SI | http://creativecommons.org/licenses/by/4.0/ | Graph anomaly detection (GAD) is increasingly crucial in various
applications, ranging from financial fraud detection to fake news detection.
However, current GAD methods largely overlook the fairness problem, which might
result in discriminatory decisions skewed toward certain demographic groups
defined on sensitive attributes (e.g., gender, religion, ethnicity, etc.). This
greatly limits the applicability of these methods in real-world scenarios in
light of societal and ethical restrictions. To address this critical gap, we
make the first attempt to integrate fairness with utility in GAD
decision-making. Specifically, we devise a novel DisEntangle-based
FairnEss-aware aNomaly Detection framework on the attributed graph, named
DEFEND. DEFEND first introduces disentanglement in GNNs to capture informative
yet sensitive-irrelevant node representations, effectively reducing societal
bias inherent in graph representation learning. Besides, to alleviate
discriminatory bias in evaluating anomalous nodes, DEFEND adopts a reconstruction-based anomaly detection scheme, which concentrates solely on node
attributes without incorporating any graph structure. Additionally, given the
inherent association between input and sensitive attributes, DEFEND constrains
the correlation between the reconstruction error and the predicted sensitive
attributes. Our empirical evaluations on real-world datasets reveal that DEFEND
performs effectively in GAD and significantly enhances fairness compared to
state-of-the-art baselines. To foster reproducibility, our code is available at
https://github.com/AhaChang/DEFEND.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2024 04:48:45 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 14:14:00 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chang",
"Wenjing",
""
],
[
"Liu",
"Kay",
""
],
[
"Yu",
"Philip S.",
""
],
[
"Yu",
"Jianjun",
""
]
]
| TITLE: Enhancing Fairness in Unsupervised Graph Anomaly Detection through
Disentanglement
ABSTRACT: Graph anomaly detection (GAD) is increasingly crucial in various
applications, ranging from financial fraud detection to fake news detection.
However, current GAD methods largely overlook the fairness problem, which might
result in discriminatory decisions skewed toward certain demographic groups
defined on sensitive attributes (e.g., gender, religion, ethnicity, etc.). This
greatly limits the applicability of these methods in real-world scenarios in
light of societal and ethical restrictions. To address this critical gap, we
make the first attempt to integrate fairness with utility in GAD
decision-making. Specifically, we devise a novel DisEntangle-based
FairnEss-aware aNomaly Detection framework on the attributed graph, named
DEFEND. DEFEND first introduces disentanglement in GNNs to capture informative
yet sensitive-irrelevant node representations, effectively reducing societal
bias inherent in graph representation learning. Besides, to alleviate
discriminatory bias in evaluating anomalous nodes, DEFEND adopts a reconstruction-based anomaly detection scheme, which concentrates solely on node
attributes without incorporating any graph structure. Additionally, given the
inherent association between input and sensitive attributes, DEFEND constrains
the correlation between the reconstruction error and the predicted sensitive
attributes. Our empirical evaluations on real-world datasets reveal that DEFEND
performs effectively in GAD and significantly enhances fairness compared to
state-of-the-art baselines. To foster reproducibility, our code is available at
https://github.com/AhaChang/DEFEND.
| no_new_dataset | 0.945851 |
2406.04604 | Jiaxin Wen | Jiaxin Wen, Ruiqi Zhong, Pei Ke, Zhihong Shao, Hongning Wang, Minlie
Huang | Learning Task Decomposition to Assist Humans in Competitive Programming | ACL 2024 Main Conference | null | null | null | cs.CL cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When using language models (LMs) to solve complex problems, humans might
struggle to understand the LM-generated solutions and repair the flawed ones.
To assist humans in repairing them, we propose to automatically decompose
complex solutions into multiple simpler pieces that correspond to specific
subtasks. We introduce a novel objective for learning task decomposition,
termed assistive value (AssistV), which measures the feasibility and speed for
humans to repair the decomposed solution. We collect a dataset of human repair
experiences on different decomposed solutions. Utilizing the collected data as
in-context examples, we then learn to critique, refine, and rank decomposed
solutions to improve AssistV. We validate our method on competitive programming problems: in a 177-hour human study, our method enables
non-experts to solve 33.3\% more problems, speeds them up by 3.3x, and empowers
them to match unassisted experts.
| [
{
"version": "v1",
"created": "Fri, 7 Jun 2024 03:27:51 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Jul 2024 20:24:44 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Jul 2024 18:26:32 GMT"
},
{
"version": "v4",
"created": "Sat, 1 Mar 2025 20:47:54 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wen",
"Jiaxin",
""
],
[
"Zhong",
"Ruiqi",
""
],
[
"Ke",
"Pei",
""
],
[
"Shao",
"Zhihong",
""
],
[
"Wang",
"Hongning",
""
],
[
"Huang",
"Minlie",
""
]
]
| TITLE: Learning Task Decomposition to Assist Humans in Competitive Programming
ABSTRACT: When using language models (LMs) to solve complex problems, humans might
struggle to understand the LM-generated solutions and repair the flawed ones.
To assist humans in repairing them, we propose to automatically decompose
complex solutions into multiple simpler pieces that correspond to specific
subtasks. We introduce a novel objective for learning task decomposition,
termed assistive value (AssistV), which measures the feasibility and speed for
humans to repair the decomposed solution. We collect a dataset of human repair
experiences on different decomposed solutions. Utilizing the collected data as
in-context examples, we then learn to critique, refine, and rank decomposed
solutions to improve AssistV. We validate our method on competitive programming problems: in a 177-hour human study, our method enables
non-experts to solve 33.3\% more problems, speeds them up by 3.3x, and empowers
them to match unassisted experts.
| new_dataset | 0.967778 |
2406.05923 | Manuel Cherep | Manuel Cherep and Nikhil Singh | Contrastive Learning from Synthetic Audio Doppelg\"angers | Accepted to ICLR 2025 | null | null | null | cs.SD cs.LG eess.AS | http://creativecommons.org/licenses/by/4.0/ | Learning robust audio representations currently demands extensive datasets of
real-world sound recordings. By applying artificial transformations to these
recordings, models can learn to recognize similarities despite subtle
variations through techniques like contrastive learning. However, these
transformations are only approximations of the true diversity found in
real-world sounds, which are generated by complex interactions of physical
processes, from vocal cord vibrations to the resonance of musical instruments.
We propose a solution to both the data scale and transformation limitations,
leveraging synthetic audio. By randomly perturbing the parameters of a sound
synthesizer, we generate audio doppelg\"angers: synthetic positive pairs with
causally manipulated variations in timbre, pitch, and temporal envelopes. These
variations, difficult to achieve through augmentations of existing audio,
provide a rich source of contrastive information. Despite the shift to randomly
generated synthetic data, our method produces strong representations,
outperforming real data on several standard audio classification tasks.
Notably, our approach is lightweight, requires no data storage, and has only a
single hyperparameter, which we extensively analyze. We offer this method as a
complement to existing strategies for contrastive learning in audio, using
synthesized sounds to reduce the data burden on practitioners.
| [
{
"version": "v1",
"created": "Sun, 9 Jun 2024 21:44:06 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 02:57:06 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Cherep",
"Manuel",
""
],
[
"Singh",
"Nikhil",
""
]
]
| TITLE: Contrastive Learning from Synthetic Audio Doppelg\"angers
ABSTRACT: Learning robust audio representations currently demands extensive datasets of
real-world sound recordings. By applying artificial transformations to these
recordings, models can learn to recognize similarities despite subtle
variations through techniques like contrastive learning. However, these
transformations are only approximations of the true diversity found in
real-world sounds, which are generated by complex interactions of physical
processes, from vocal cord vibrations to the resonance of musical instruments.
We propose a solution to both the data scale and transformation limitations,
leveraging synthetic audio. By randomly perturbing the parameters of a sound
synthesizer, we generate audio doppelg\"angers: synthetic positive pairs with
causally manipulated variations in timbre, pitch, and temporal envelopes. These
variations, difficult to achieve through augmentations of existing audio,
provide a rich source of contrastive information. Despite the shift to randomly
generated synthetic data, our method produces strong representations,
outperforming real data on several standard audio classification tasks.
Notably, our approach is lightweight, requires no data storage, and has only a
single hyperparameter, which we extensively analyze. We offer this method as a
complement to existing strategies for contrastive learning in audio, using
synthesized sounds to reduce the data burden on practitioners.
| no_new_dataset | 0.951188 |
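The core data-generation trick in the record above is easy to demonstrate: render two sounds from nearby synthesizer parameters and treat them as a contrastive positive pair. The exponentially decaying sine synth and the 5% parameter jitter below are assumptions for illustration; the paper uses a richer synthesizer and a learned encoder trained with a contrastive loss such as InfoNCE.

```python
# Hedged sketch of synthetic "audio doppelgaengers": two renders from
# perturbed synth parameters form a positive pair for contrastive learning.
import numpy as np

def synth(f0, decay, sr=16000, dur=1.0):
    """Toy additive synth: an exponentially decaying sine (an assumption)."""
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    return np.exp(-decay * t) * np.sin(2 * np.pi * f0 * t)

def doppelgaenger_pair(rng, jitter=0.05):
    f0 = rng.uniform(100.0, 1000.0)      # random base parameters
    decay = rng.uniform(1.0, 10.0)
    a = synth(f0, decay)
    b = synth(f0 * (1 + rng.normal(0, jitter)),
              decay * (1 + rng.normal(0, jitter)))
    return a, b                           # feed as positives to e.g. InfoNCE

rng = np.random.default_rng(0)
x1, x2 = doppelgaenger_pair(rng)
print(x1.shape, x2.shape)                 # (16000,) (16000,)
```

Because the pairs are generated on the fly, no audio needs to be stored, which matches the "no data storage" claim in the abstract.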
2406.07413 | Ziyue Qiao | Ziyue Qiao, Junren Xiao, Qingqiang Sun, Meng Xiao, Xiao Luo, Hui Xiong | Towards Continuous Reuse of Graph Models via Holistic Memory
Diversification | Accepted by ICLR 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the challenge of incremental learning in growing graphs
with increasingly complex tasks. The goal is to continuously train a graph
model to handle new tasks while retaining proficiency in previous tasks via
memory replay. Existing methods usually overlook the importance of memory
diversity, which limits their ability to select high-quality memories from previous tasks and to retain broad prior knowledge within the scarce memory available on graphs. To
address that, we introduce a novel holistic Diversified Memory Selection and
Generation (DMSG) framework for incremental learning in graphs, which first
introduces a buffer selection strategy that considers both intra-class and
inter-class diversities, employing an efficient greedy algorithm for sampling
representative training nodes from graphs into memory buffers after learning
each new task. Then, to adequately recall the knowledge preserved in the
memory buffer when learning new tasks, a diversified memory generation replay
method is introduced. This method utilizes a variational layer to generate the
distribution of buffer node embeddings and sample synthesized ones for
replaying. Furthermore, an adversarial variational embedding learning method
and a reconstruction-based decoder are proposed to maintain the integrity and
consolidate the generalization of the synthesized node embeddings,
respectively. Extensive experimental results on publicly accessible datasets
demonstrate the superiority of DMSG over state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 11 Jun 2024 16:18:15 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 11:18:00 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Qiao",
"Ziyue",
""
],
[
"Xiao",
"Junren",
""
],
[
"Sun",
"Qingqiang",
""
],
[
"Xiao",
"Meng",
""
],
[
"Luo",
"Xiao",
""
],
[
"Xiong",
"Hui",
""
]
]
| TITLE: Towards Continuous Reuse of Graph Models via Holistic Memory
Diversification
ABSTRACT: This paper addresses the challenge of incremental learning in growing graphs
with increasingly complex tasks. The goal is to continuously train a graph
model to handle new tasks while retaining proficiency in previous tasks via
memory replay. Existing methods usually overlook the importance of memory
diversity, limiting their ability to select high-quality memories from previous
tasks and to retain broad prior knowledge within the scarce memory available on
graphs. To
address that, we introduce a novel holistic Diversified Memory Selection and
Generation (DMSG) framework for incremental learning in graphs, which first
introduces a buffer selection strategy that considers both intra-class and
inter-class diversities, employing an efficient greedy algorithm for sampling
representative training nodes from graphs into memory buffers after learning
each new task. Then, to adequately re-memorize the knowledge preserved in the
memory buffer when learning new tasks, a diversified memory generation replay
method is introduced. This method utilizes a variational layer to generate the
distribution of buffer node embeddings and sample synthesized ones for
replaying. Furthermore, an adversarial variational embedding learning method
and a reconstruction-based decoder are proposed to maintain the integrity and
consolidate the generalization of the synthesized node embeddings,
respectively. Extensive experimental results on publicly accessible datasets
demonstrate the superiority of DMSG over state-of-the-art methods.
| no_new_dataset | 0.947817 |
2406.08973 | Alexander Nikulin | Alexander Nikulin and Ilya Zisman and Alexey Zemtsov and Vladislav
Kurenkov | XLand-100B: A Large-Scale Multi-Task Dataset for In-Context
Reinforcement Learning | ICLR 2025, Poster, Source code:
https://github.com/dunnolab/xland-minigrid-datasets | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Following the success of the in-context learning paradigm in large-scale
language and computer vision models, the recently emerging field of in-context
reinforcement learning is experiencing rapid growth. However, its development
has been held back by the lack of challenging benchmarks, as all the
experiments have been carried out in simple environments and on small-scale
datasets. We present XLand-100B, a large-scale dataset for in-context
reinforcement learning based on the XLand-MiniGrid environment, as a first step
to alleviate this problem. It contains complete learning histories for nearly
$30,000$ different tasks, covering $100$B transitions and 2.5B episodes. It
took 50,000 GPU hours to collect the dataset, which is beyond the reach of most
academic labs. Along with the dataset, we provide the utilities to reproduce or
expand it even further. We also benchmark common in-context RL baselines and
show that they struggle to generalize to novel and diverse tasks. With this
substantial effort, we aim to democratize research in the rapidly growing field
of in-context reinforcement learning and provide a solid foundation for further
scaling.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2024 10:04:17 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Feb 2025 16:51:24 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 09:36:02 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Nikulin",
"Alexander",
""
],
[
"Zisman",
"Ilya",
""
],
[
"Zemtsov",
"Alexey",
""
],
[
"Kurenkov",
"Vladislav",
""
]
]
| TITLE: XLand-100B: A Large-Scale Multi-Task Dataset for In-Context
Reinforcement Learning
ABSTRACT: Following the success of the in-context learning paradigm in large-scale
language and computer vision models, the recently emerging field of in-context
reinforcement learning is experiencing rapid growth. However, its development
has been held back by the lack of challenging benchmarks, as all the
experiments have been carried out in simple environments and on small-scale
datasets. We present XLand-100B, a large-scale dataset for in-context
reinforcement learning based on the XLand-MiniGrid environment, as a first step
to alleviate this problem. It contains complete learning histories for nearly
$30,000$ different tasks, covering $100$B transitions and 2.5B episodes. It
took 50,000 GPU hours to collect the dataset, which is beyond the reach of most
academic labs. Along with the dataset, we provide the utilities to reproduce or
expand it even further. We also benchmark common in-context RL baselines and
show that they struggle to generalize to novel and diverse tasks. With this
substantial effort, we aim to democratize research in the rapidly growing field
of in-context reinforcement learning and provide a solid foundation for further
scaling.
| new_dataset | 0.965996 |
2406.09044 | Hanqing Wang | Hanqing Wang, Yixia Li, Shuo Wang, Guanhua Chen, Yun Chen | MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM
Finetuning | This paper has been accepted at NAACL 2025. Code is available at:
https://github.com/sufenlp/MiLoRA | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient finetuning of large language models (LLMs) aims to adapt the LLMs
with reduced computational and memory cost. Previous LoRA-based approaches
initialize the low-rank matrices with Gaussian distribution and zero values
while keeping the original weight matrices frozen. However, the trainable model
parameters optimized in an unguided subspace might interfere with the
well-learned subspace of the pretrained weight matrices. In this paper, we
propose MiLoRA, a simple yet effective LLM finetuning approach that only
updates the minor singular components of the weight matrix while keeping the
principal singular components frozen. It is observed that the minor matrix
corresponds to the noisy or long-tail information, while the principal matrix
contains important knowledge. MiLoRA initializes the low-rank matrices
within a subspace that is orthogonal to the principal matrix, thus the
pretrained knowledge is expected to be well preserved. During finetuning,
MiLoRA makes the most use of the less-optimized subspace for learning the
labeled dataset. Extensive experiments on commonsense reasoning, math
reasoning, instruction following, and visual instruction following benchmarks
demonstrate the superior performance of our method.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2024 12:30:02 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Sep 2024 02:57:12 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 04:45:56 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wang",
"Hanqing",
""
],
[
"Li",
"Yixia",
""
],
[
"Wang",
"Shuo",
""
],
[
"Chen",
"Guanhua",
""
],
[
"Chen",
"Yun",
""
]
]
| TITLE: MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM
Finetuning
ABSTRACT: Efficient finetuning of large language models (LLMs) aims to adapt the LLMs
with reduced computational and memory cost. Previous LoRA-based approaches
initialize the low-rank matrices with Gaussian distribution and zero values
while keeping the original weight matrices frozen. However, the trainable model
parameters optimized in an unguided subspace might interfere with the
well-learned subspace of the pretrained weight matrices. In this paper, we
propose MiLoRA, a simple yet effective LLM finetuning approach that only
updates the minor singular components of the weight matrix while keeping the
principal singular components frozen. It is observed that the minor matrix
corresponds to the noisy or long-tail information, while the principal matrix
contains important knowledge. MiLoRA initializes the low-rank matrices
within a subspace that is orthogonal to the principal matrix, thus the
pretrained knowledge is expected to be well preserved. During finetuning,
MiLoRA makes the most use of the less-optimized subspace for learning the
labeled dataset. Extensive experiments on commonsense reasoning, math
reasoning, instruction following, and visual instruction following benchmarks
demonstrate the superior performance of our method.
| no_new_dataset | 0.948442 |
2406.09870 | Haonan Yuan | Jiawen Qin, Haonan Yuan, Qingyun Sun, Lyujin Xu, Jiaqi Yuan, Pengfeng
Huang, Zhaonan Wang, Xingcheng Fu, Hao Peng, Jianxin Li, Philip S. Yu | IGL-Bench: Establishing the Comprehensive Benchmark for Imbalanced Graph
Learning | The Thirteenth International Conference on Learning Representations
(ICLR'25) | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Deep graph learning has gained grand popularity over the past years due to
its versatility and success in representing graph data across a wide range of
domains. However, the pervasive issue of imbalanced graph data distributions,
where certain parts exhibit disproportionally abundant data while others remain
sparse, undermines the efficacy of conventional graph learning algorithms,
leading to biased outcomes. To address this challenge, Imbalanced Graph
Learning (IGL) has garnered substantial attention, enabling more balanced data
distributions and better task performance. Despite the proliferation of IGL
algorithms, the absence of consistent experimental protocols and fair
performance comparisons poses a significant barrier to comprehending
advancements in this field. To bridge this gap, we introduce IGL-Bench, a
foundational comprehensive benchmark for imbalanced graph learning, spanning
16 diverse graph datasets and 24 distinct IGL algorithms with uniform data
processing and splitting strategies. Specifically, IGL-Bench systematically
investigates state-of-the-art IGL algorithms in terms of effectiveness,
robustness, and efficiency on node-level and graph-level tasks, with the scope
of class-imbalance and topology-imbalance. Extensive experiments demonstrate
the potential benefits of IGL algorithms on various imbalanced conditions,
offering insights and opportunities in the IGL field. Further, we have
developed an open-sourced and unified package to facilitate reproducible
evaluation and inspire further innovative research, which is available at
https://github.com/RingBDStack/IGL-Bench.
| [
{
"version": "v1",
"created": "Fri, 14 Jun 2024 09:30:18 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Jun 2024 07:34:40 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 14:35:37 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Qin",
"Jiawen",
""
],
[
"Yuan",
"Haonan",
""
],
[
"Sun",
"Qingyun",
""
],
[
"Xu",
"Lyujin",
""
],
[
"Yuan",
"Jiaqi",
""
],
[
"Huang",
"Pengfeng",
""
],
[
"Wang",
"Zhaonan",
""
],
[
"Fu",
"Xingcheng",
""
],
[
"Peng",
"Hao",
""
],
[
"Li",
"Jianxin",
""
],
[
"Yu",
"Philip S.",
""
]
]
| TITLE: IGL-Bench: Establishing the Comprehensive Benchmark for Imbalanced Graph
Learning
ABSTRACT: Deep graph learning has gained great popularity over the past years due to
its versatility and success in representing graph data across a wide range of
domains. However, the pervasive issue of imbalanced graph data distributions,
where certain parts exhibit disproportionally abundant data while others remain
sparse, undermines the efficacy of conventional graph learning algorithms,
leading to biased outcomes. To address this challenge, Imbalanced Graph
Learning (IGL) has garnered substantial attention, enabling more balanced data
distributions and better task performance. Despite the proliferation of IGL
algorithms, the absence of consistent experimental protocols and fair
performance comparisons poses a significant barrier to comprehending
advancements in this field. To bridge this gap, we introduce IGL-Bench, a
foundational comprehensive benchmark for imbalanced graph learning, spanning
16 diverse graph datasets and 24 distinct IGL algorithms with uniform data
processing and splitting strategies. Specifically, IGL-Bench systematically
investigates state-of-the-art IGL algorithms in terms of effectiveness,
robustness, and efficiency on node-level and graph-level tasks, with the scope
of class-imbalance and topology-imbalance. Extensive experiments demonstrate
the potential benefits of IGL algorithms on various imbalanced conditions,
offering insights and opportunities in the IGL field. Further, we have
developed an open-sourced and unified package to facilitate reproducible
evaluation and inspire further innovative research, which is available at
https://github.com/RingBDStack/IGL-Bench.
| no_new_dataset | 0.876052 |
2406.10279 | Joseph Spracklen | Joseph Spracklen, Raveen Wijewickrama, A H M Nazmus Sakib, Anindya
Maiti, Bimal Viswanath, Murtuza Jadliwala | We Have a Package for You! A Comprehensive Analysis of Package
Hallucinations by Code Generating LLMs | To appear in the 2025 USENIX Security Symposium. 22 pages, 14
figures, 8 tables. Edited from original version for submission to a different
conference. No change to original results or findings | null | null | null | cs.SE cs.AI cs.CR cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The reliance of popular programming languages such as Python and JavaScript
on centralized package repositories and open-source software, combined with the
emergence of code-generating Large Language Models (LLMs), has created a new
type of threat to the software supply chain: package hallucinations. These
hallucinations, which arise from fact-conflicting errors when generating code
using LLMs, represent a novel form of package confusion attack that poses a
critical threat to the integrity of the software supply chain. This paper
conducts a rigorous and comprehensive evaluation of package hallucinations
across different programming languages, settings, and parameters, exploring how
a diverse set of models and configurations affect the likelihood of generating
erroneous package recommendations and identifying the root causes of this
phenomenon. Using 16 popular LLMs for code generation and two unique prompt
datasets, we generate 576,000 code samples in two programming languages that we
analyze for package hallucinations. Our findings reveal that the average
percentage of hallucinated packages is at least 5.2% for commercial models and
21.7% for open-source models, including a staggering 205,474 unique examples of
hallucinated package names, further underscoring the severity and pervasiveness
of this threat. To overcome this problem, we implement several hallucination
mitigation strategies and show that they are able to significantly reduce the
number of package hallucinations while maintaining code quality. Our
experiments and findings highlight package hallucinations as a persistent and
systemic phenomenon while using state-of-the-art LLMs for code generation, and
a significant challenge which deserves the research community's urgent
attention.
| [
{
"version": "v1",
"created": "Wed, 12 Jun 2024 03:29:06 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Sep 2024 21:46:56 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 21:03:52 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Spracklen",
"Joseph",
""
],
[
"Wijewickrama",
"Raveen",
""
],
[
"Sakib",
"A H M Nazmus",
""
],
[
"Maiti",
"Anindya",
""
],
[
"Viswanath",
"Bimal",
""
],
[
"Jadliwala",
"Murtuza",
""
]
]
| TITLE: We Have a Package for You! A Comprehensive Analysis of Package
Hallucinations by Code Generating LLMs
ABSTRACT: The reliance of popular programming languages such as Python and JavaScript
on centralized package repositories and open-source software, combined with the
emergence of code-generating Large Language Models (LLMs), has created a new
type of threat to the software supply chain: package hallucinations. These
hallucinations, which arise from fact-conflicting errors when generating code
using LLMs, represent a novel form of package confusion attack that poses a
critical threat to the integrity of the software supply chain. This paper
conducts a rigorous and comprehensive evaluation of package hallucinations
across different programming languages, settings, and parameters, exploring how
a diverse set of models and configurations affect the likelihood of generating
erroneous package recommendations and identifying the root causes of this
phenomenon. Using 16 popular LLMs for code generation and two unique prompt
datasets, we generate 576,000 code samples in two programming languages that we
analyze for package hallucinations. Our findings reveal that the average
percentage of hallucinated packages is at least 5.2% for commercial models and
21.7% for open-source models, including a staggering 205,474 unique examples of
hallucinated package names, further underscoring the severity and pervasiveness
of this threat. To overcome this problem, we implement several hallucination
mitigation strategies and show that they are able to significantly reduce the
number of package hallucinations while maintaining code quality. Our
experiments and findings highlight package hallucinations as a persistent and
systemic phenomenon while using state-of-the-art LLMs for code generation, and
a significant challenge which deserves the research community's urgent
attention.
| new_dataset | 0.846578 |