Dataset schema (column, type, observed range):

| column | type | range / values |
|---|---|---|
| id | string | length 9 to 16 |
| submitter | string, nullable | length 3 to 64 |
| authors | string | length 5 to 6.63k |
| title | string | length 7 to 245 |
| comments | string, nullable | length 1 to 482 |
| journal-ref | string, nullable | length 4 to 382 |
| doi | string, nullable | length 9 to 151 |
| report-no | string | 984 distinct values |
| categories | string | length 5 to 108 |
| license | string | 9 distinct values |
| abstract | string | length 83 to 3.41k |
| versions | list | length 1 to 20 |
| update_date | timestamp[s] | 2007-05-23 to 2025-04-11 |
| authors_parsed | list | length 1 to 427 |
| prompt | string | length 166 to 3.49k |
| label | string | 2 distinct values |
| prob | float64 | 0.5 to 0.98 |

| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2502.20527
|
Jake Renzella Dr
|
Emily Ross, Yuval Kansal, Jake Renzella, Alexandra Vassar, Andrew
Taylor
|
Supervised Fine-Tuning LLMs to Behave as Pedagogical Agents in
Programming Education
| null | null | null | null |
cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) are increasingly being explored in higher
education, yet their effectiveness as teaching agents remains underexamined. In
this paper, we present the development of GuideLM, a fine-tuned LLM designed
for programming education. GuideLM has been integrated into the Debugging C
Compiler (DCC), an educational C compiler that leverages LLMs to generate
pedagogically sound error explanations. Previously, DCC relied on off-the-shelf
OpenAI models, which, while accurate, often over-assisted students by directly
providing solutions despite contrary prompting.
To address this, we employed supervised fine-tuning (SFT) on a dataset of 528
student-question/teacher-answer pairs, creating two models: GuideLM and
GuideLM-mini, fine-tuned on ChatGPT-4o and 4o-mini, respectively. We conducted
an expert analysis of 400 responses per model, comparing their pedagogical
effectiveness against base OpenAI models. Our evaluation, grounded in
constructivism and cognitive load theory, assessed factors such as conceptual
scaffolding, clarity, and Socratic guidance.
Results indicate that GuideLM and GuideLM-mini improve pedagogical
performance, with an 8% increase in Socratic guidance and a 58% improvement in
economy of words compared to GPT-4o. However, this refinement comes at the cost
of a slight reduction in general accuracy. While further work is needed, our
findings suggest that fine-tuning LLMs with targeted datasets is a promising
approach for developing models better suited to educational contexts.
|
[
{
"version": "v1",
"created": "Thu, 27 Feb 2025 21:23:56 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Ross",
"Emily",
""
],
[
"Kansal",
"Yuval",
""
],
[
"Renzella",
"Jake",
""
],
[
"Vassar",
"Alexandra",
""
],
[
"Taylor",
"Andrew",
""
]
] |
TITLE: Supervised Fine-Tuning LLMs to Behave as Pedagogical Agents in
Programming Education
ABSTRACT: Large language models (LLMs) are increasingly being explored in higher
education, yet their effectiveness as teaching agents remains underexamined. In
this paper, we present the development of GuideLM, a fine-tuned LLM designed
for programming education. GuideLM has been integrated into the Debugging C
Compiler (DCC), an educational C compiler that leverages LLMs to generate
pedagogically sound error explanations. Previously, DCC relied on off-the-shelf
OpenAI models, which, while accurate, often over-assisted students by directly
providing solutions despite contrary prompting.
To address this, we employed supervised fine-tuning (SFT) on a dataset of 528
student-question/teacher-answer pairs, creating two models: GuideLM and
GuideLM-mini, fine-tuned on ChatGPT-4o and 4o-mini, respectively. We conducted
an expert analysis of 400 responses per model, comparing their pedagogical
effectiveness against base OpenAI models. Our evaluation, grounded in
constructivism and cognitive load theory, assessed factors such as conceptual
scaffolding, clarity, and Socratic guidance.
Results indicate that GuideLM and GuideLM-mini improve pedagogical
performance, with an 8% increase in Socratic guidance and a 58% improvement in
economy of words compared to GPT-4o. However, this refinement comes at the cost
of a slight reduction in general accuracy. While further work is needed, our
findings suggest that fine-tuning LLMs with targeted datasets is a promising
approach for developing models better suited to educational contexts.
|
no_new_dataset
| 0.943815
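The GuideLM record above describes supervised fine-tuning on 528 student-question/teacher-answer pairs. A minimal sketch of what one such training example could look like in the chat-style format commonly used for OpenAI supervised fine-tuning (the concrete question, answer, and system prompt here are hypothetical, not drawn from the paper's dataset):

```python
# One hypothetical SFT training example in chat format (illustrative only).
example = {
    "messages": [
        {"role": "system",
         "content": "You are a pedagogical C tutor. Guide the student with "
                    "Socratic questions; do not give the full solution."},
        {"role": "user",
         "content": "dcc reports 'index 10 out of bounds' inside my loop. Why?"},
        {"role": "assistant",
         "content": "Your array holds 10 elements, so which indices are valid? "
                    "What is the largest value of i your loop condition allows?"},
    ]
}
```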
|
2502.20537
|
Philémon Houdaille
|
Philémon Houdaille (University of Rennes, France / CNRS, France /
Inria, France), Djamel Eddine Khelladi (CNRS, France / University of Rennes,
France), Benoit Combemale (University of Rennes, France), Gunter Mussbacher
(McGill University, Canada / Inria, France), Tijs van der Storm (CWI,
Netherlands / University of Groningen, Netherlands)
|
PolyDebug: A Framework for Polyglot Debugging
| null |
The Art, Science, and Engineering of Programming, 2025, Vol. 10,
Issue 1, Article 13
|
10.22152/programming-journal.org/2026/10/13
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As software grows increasingly complex, the quantity and diversity of
concerns to be addressed also rises. To answer this diversity of concerns,
developers may end up using multiple programming languages in a single software
project, a practice known as polyglot programming. This practice has gained
momentum with the rise of execution platforms capable of supporting polyglot
systems. However, despite this momentum, there is a notable lack of development
tooling support for developers working on polyglot programs, such as in
debugging facilities. Not all polyglot execution platforms provide debugging
capabilities, and for those that do, implementing support for new languages can
be costly.
This paper addresses this gap by introducing a novel debugger framework that
is language-agnostic yet leverages existing language-specific debuggers. The
proposed framework is dynamically extensible to accommodate the evolving
combination of languages used in polyglot software development. It utilizes the
Debug Adapter Protocol (DAP) to integrate and coordinate existing debuggers
within a debugging session. We found that using our approach, we were able to
implement polyglot debugging support for three different languages with little
development effort. We also found that our debugger did not introduce an
overhead significant enough to hinder debugging tasks in many scenarios;
however, performance did deteriorate with the number of polyglot calls, making
the approach not suitable for every polyglot program structure. The
effectiveness of this approach is demonstrated through the development of a
prototype, PolyDebug, and its application to use cases involving C, JavaScript,
and Python. We evaluated PolyDebug on a dataset of traditional benchmark
programs, modified to fit our criteria of polyglot programs. We also assessed
the development effort by measuring the source lines of code (SLOC) for the
prototype as a whole as well as its components. Debugging is a fundamental part
of developing and maintaining software. Lack of debug tools can lead to
difficulty in locating software bugs and slow down the development process. We
believe this work is relevant to help provide developers proper debugging
support regardless of the runtime environment.
|
[
{
"version": "v1",
"created": "Thu, 27 Feb 2025 21:39:05 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Houdaille",
"Philémon",
"",
"University of Rennes, France / CNRS, France /\n Inria, France"
],
[
"Khelladi",
"Djamel Eddine",
"",
"CNRS, France / University of Rennes,\n France"
],
[
"Combemale",
"Benoit",
"",
"University of Rennes, France"
],
[
"Mussbacher",
"Gunter",
"",
"McGill University, Canada / Inria, France"
],
[
"van der Storm",
"Tijs",
"",
"CWI,\n Netherlands / University of Groningen, Netherlands"
]
] |
TITLE: PolyDebug: A Framework for Polyglot Debugging
ABSTRACT: As software grows increasingly complex, the quantity and diversity of
concerns to be addressed also rises. To answer this diversity of concerns,
developers may end up using multiple programming languages in a single software
project, a practice known as polyglot programming. This practice has gained
momentum with the rise of execution platforms capable of supporting polyglot
systems. However, despite this momentum, there is a notable lack of development
tooling support for developers working on polyglot programs, such as in
debugging facilities. Not all polyglot execution platforms provide debugging
capabilities, and for those that do, implementing support for new languages can
be costly.
This paper addresses this gap by introducing a novel debugger framework that
is language-agnostic yet leverages existing language-specific debuggers. The
proposed framework is dynamically extensible to accommodate the evolving
combination of languages used in polyglot software development. It utilizes the
Debug Adapter Protocol (DAP) to integrate and coordinate existing debuggers
within a debugging session. We found that using our approach, we were able to
implement polyglot debugging support for three different languages with little
development effort. We also found that our debugger did not introduce an
overhead significant enough to hinder debugging tasks in many scenarios;
however, performance did deteriorate with the number of polyglot calls, making
the approach not suitable for every polyglot program structure. The
effectiveness of this approach is demonstrated through the development of a
prototype, PolyDebug, and its application to use cases involving C, JavaScript,
and Python. We evaluated PolyDebug on a dataset of traditional benchmark
programs, modified to fit our criteria of polyglot programs. We also assessed
the development effort by measuring the source lines of code (SLOC) for the
prototype as a whole as well as its components. Debugging is a fundamental part
of developing and maintaining software. Lack of debug tools can lead to
difficulty in locating software bugs and slow down the development process. We
believe this work is relevant to help provide developers proper debugging
support regardless of the runtime environment.
|
no_new_dataset
| 0.923454
|
2502.20545
|
Shiwei Liu
|
Kechen Li, Wenqi Zhu, Coralia Cartis, Tianbo Ji, Shiwei Liu
|
SoS1: O1 and R1-Like Reasoning LLMs are Sum-of-Square Solvers
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large Language Models (LLMs) have achieved human-level proficiency across
diverse tasks, but their ability to perform rigorous mathematical problem
solving remains an open challenge. In this work, we investigate a fundamental
yet computationally intractable problem: determining whether a given
multivariate polynomial is nonnegative. This problem, closely related to
Hilbert's Seventeenth Problem, plays a crucial role in global polynomial
optimization and has applications in various fields. First, we introduce
SoS-1K, a meticulously curated dataset of approximately 1,000 polynomials,
along with expert-designed reasoning instructions based on five progressively
challenging criteria. Evaluating multiple state-of-the-art LLMs, we find that
without structured guidance, all models perform only slightly above the random
guess baseline 50%. However, high-quality reasoning instructions significantly
improve accuracy, boosting performance up to 81%. Furthermore, our 7B model,
SoS-7B, fine-tuned on SoS-1K for just 4 hours, outperforms the 671B DeepSeek-V3
and GPT-4o-mini in accuracy while requiring only 1.8% and 5% of the computation
time needed by the latter models, respectively. Our findings highlight the potential of
LLMs to push the boundaries of mathematical reasoning and tackle NP-hard
problems.
|
[
{
"version": "v1",
"created": "Thu, 27 Feb 2025 21:41:43 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Li",
"Kechen",
""
],
[
"Zhu",
"Wenqi",
""
],
[
"Cartis",
"Coralia",
""
],
[
"Ji",
"Tianbo",
""
],
[
"Liu",
"Shiwei",
""
]
] |
TITLE: SoS1: O1 and R1-Like Reasoning LLMs are Sum-of-Square Solvers
ABSTRACT: Large Language Models (LLMs) have achieved human-level proficiency across
diverse tasks, but their ability to perform rigorous mathematical problem
solving remains an open challenge. In this work, we investigate a fundamental
yet computationally intractable problem: determining whether a given
multivariate polynomial is nonnegative. This problem, closely related to
Hilbert's Seventeenth Problem, plays a crucial role in global polynomial
optimization and has applications in various fields. First, we introduce
SoS-1K, a meticulously curated dataset of approximately 1,000 polynomials,
along with expert-designed reasoning instructions based on five progressively
challenging criteria. Evaluating multiple state-of-the-art LLMs, we find that
without structured guidance, all models perform only slightly above the random
guess baseline 50%. However, high-quality reasoning instructions significantly
improve accuracy, boosting performance up to 81%. Furthermore, our 7B model,
SoS-7B, fine-tuned on SoS-1K for just 4 hours, outperforms the 671B DeepSeek-V3
and GPT-4o-mini in accuracy while requiring only 1.8% and 5% of the computation
time needed by the latter models, respectively. Our findings highlight the potential of
LLMs to push the boundaries of mathematical reasoning and tackle NP-hard
problems.
|
new_dataset
| 0.958693
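As a worked illustration of the nonnegativity question that the SoS-1K record above targets (this example is ours, not taken from the dataset), a polynomial can be certified nonnegative by exhibiting a sum-of-squares decomposition:

```latex
% A sum-of-squares certificate of nonnegativity (illustrative example):
\[
p(x, y) \;=\; x^{2} - 2xy + 2y^{2} \;=\; (x - y)^{2} + y^{2} \;\ge\; 0
\quad \text{for all } x, y \in \mathbb{R}.
\]
```

Not every nonnegative polynomial admits such a certificate; the Motzkin polynomial $x^4 y^2 + x^2 y^4 - 3x^2 y^2 + 1$ is nonnegative but is not a sum of squares, which is part of what makes the general decision problem hard.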
|
2502.20548
|
Jin Peng Zhou
|
Jin Peng Zhou, Kaiwen Wang, Jonathan Chang, Zhaolin Gao, Nathan
Kallus, Kilian Q. Weinberger, Kianté Brantley, Wen Sun
|
$Q\sharp$: Provably Optimal Distributional RL for LLM Post-Training
| null | null | null | null |
cs.LG cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement learning (RL) post-training is crucial for LLM alignment and
reasoning, but existing policy-based methods, such as PPO and DPO, can fall
short of fixing shortcuts inherited from pre-training. In this work, we
introduce $Q\sharp$, a value-based algorithm for KL-regularized RL that guides
the reference policy using the optimal regularized $Q$ function. We propose to
learn the optimal $Q$ function using distributional RL on an aggregated online
dataset. Unlike prior value-based baselines that guide the model using
unregularized $Q$-values, our method is theoretically principled and provably
learns the optimal policy for the KL-regularized RL problem. Empirically,
$Q\sharp$ outperforms prior baselines in math reasoning benchmarks while
maintaining a smaller KL divergence to the reference policy. Theoretically, we
establish a reduction from KL-regularized RL to no-regret online learning,
providing the first bounds for deterministic MDPs under only realizability.
Thanks to distributional RL, our bounds are also variance-dependent and
converge faster when the reference policy has small variance. In sum, our
results highlight $Q\sharp$ as an effective approach for post-training LLMs,
offering both improved performance and theoretical guarantees. The code can be
found at https://github.com/jinpz/q_sharp.
|
[
{
"version": "v1",
"created": "Thu, 27 Feb 2025 21:43:00 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Zhou",
"Jin Peng",
""
],
[
"Wang",
"Kaiwen",
""
],
[
"Chang",
"Jonathan",
""
],
[
"Gao",
"Zhaolin",
""
],
[
"Kallus",
"Nathan",
""
],
[
"Weinberger",
"Kilian Q.",
""
],
[
"Brantley",
"Kianté",
""
],
[
"Sun",
"Wen",
""
]
] |
TITLE: $Q\sharp$: Provably Optimal Distributional RL for LLM Post-Training
ABSTRACT: Reinforcement learning (RL) post-training is crucial for LLM alignment and
reasoning, but existing policy-based methods, such as PPO and DPO, can fall
short of fixing shortcuts inherited from pre-training. In this work, we
introduce $Q\sharp$, a value-based algorithm for KL-regularized RL that guides
the reference policy using the optimal regularized $Q$ function. We propose to
learn the optimal $Q$ function using distributional RL on an aggregated online
dataset. Unlike prior value-based baselines that guide the model using
unregularized $Q$-values, our method is theoretically principled and provably
learns the optimal policy for the KL-regularized RL problem. Empirically,
$Q\sharp$ outperforms prior baselines in math reasoning benchmarks while
maintaining a smaller KL divergence to the reference policy. Theoretically, we
establish a reduction from KL-regularized RL to no-regret online learning,
providing the first bounds for deterministic MDPs under only realizability.
Thanks to distributional RL, our bounds are also variance-dependent and
converge faster when the reference policy has small variance. In sum, our
results highlight $Q\sharp$ as an effective approach for post-training LLMs,
offering both improved performance and theoretical guarantees. The code can be
found at https://github.com/jinpz/q_sharp.
|
no_new_dataset
| 0.942082
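For context on the KL-regularized objective that the $Q\sharp$ record above builds on, a standard closed-form result (the notation here is ours and not necessarily the paper's) expresses the optimal policy as the reference policy reweighted by the exponentiated optimal regularized action-value function:

```latex
% Optimal policy of KL-regularized RL (standard closed form; notation illustrative):
\[
\pi^{*}(a \mid s) \;=\;
\frac{\pi_{\mathrm{ref}}(a \mid s)\, \exp\!\bigl(Q^{*}(s, a) / \beta\bigr)}
     {\sum_{a'} \pi_{\mathrm{ref}}(a' \mid s)\, \exp\!\bigl(Q^{*}(s, a') / \beta\bigr)},
\]
```

where $\beta$ controls the strength of the KL penalty toward the reference policy.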
|
2502.20552
|
Botond Barta
|
Botond Barta, Endre Hamerlik, Milán Konor Nyist, Judit Ács
|
HuAMR: A Hungarian AMR Parser and Dataset
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present HuAMR, the first Abstract Meaning Representation (AMR) dataset and
a suite of large language model-based AMR parsers for Hungarian, targeting the
scarcity of semantic resources for non-English languages. To create HuAMR, we
employed Llama-3.1-70B to automatically generate silver-standard AMR
annotations, which we then refined manually to ensure quality. Building on this
dataset, we investigate how different model architectures - mT5 Large and
Llama-3.2-1B - and fine-tuning strategies affect AMR parsing performance.
While incorporating silver-standard AMRs from Llama-3.1-70B into the training
data of smaller models does not consistently boost overall scores, our results
show that these techniques effectively enhance parsing accuracy on Hungarian
news data (the domain of HuAMR). We evaluate our parsers using Smatch scores
and confirm the potential of HuAMR and our parsers for advancing semantic
parsing research.
|
[
{
"version": "v1",
"created": "Thu, 27 Feb 2025 21:48:11 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Barta",
"Botond",
""
],
[
"Hamerlik",
"Endre",
""
],
[
"Nyist",
"Milán Konor",
""
],
[
"Ács",
"Judit",
""
]
] |
TITLE: HuAMR: A Hungarian AMR Parser and Dataset
ABSTRACT: We present HuAMR, the first Abstract Meaning Representation (AMR) dataset and
a suite of large language model-based AMR parsers for Hungarian, targeting the
scarcity of semantic resources for non-English languages. To create HuAMR, we
employed Llama-3.1-70B to automatically generate silver-standard AMR
annotations, which we then refined manually to ensure quality. Building on this
dataset, we investigate how different model architectures - mT5 Large and
Llama-3.2-1B - and fine-tuning strategies affect AMR parsing performance.
While incorporating silver-standard AMRs from Llama-3.1-70B into the training
data of smaller models does not consistently boost overall scores, our results
show that these techniques effectively enhance parsing accuracy on Hungarian
news data (the domain of HuAMR). We evaluate our parsers using Smatch scores
and confirm the potential of HuAMR and our parsers for advancing semantic
parsing research.
|
new_dataset
| 0.964623
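For readers unfamiliar with the formalism behind the HuAMR record above, an Abstract Meaning Representation encodes a sentence as a rooted, labeled graph written in PENMAN notation. The classic English example "The boy wants to go" (ours for illustration, not drawn from HuAMR; the exact PropBank sense labels may differ) is:

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
```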
|
2502.20562
|
Joana C. Costa
|
Joana C. Costa and Tiago Roxo and Hugo Proença and Pedro R. M. Inácio
|
LISArD: Learning Image Similarity to Defend Against Gray-box Adversarial
Attacks
| null | null | null | null |
cs.CV cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
State-of-the-art defense mechanisms are typically evaluated in the context of
white-box attacks, which is not realistic, as it assumes the attacker can
access the gradients of the target network. To protect against this scenario,
Adversarial Training (AT) and Adversarial Distillation (AD) include adversarial
examples during the training phase, and Adversarial Purification uses a
generative model to reconstruct all the images given to the classifier. This
paper considers an even more realistic evaluation scenario: gray-box attacks,
which assume that the attacker knows the architecture and the dataset used to
train the target network, but cannot access its gradients. We provide empirical
evidence that models are vulnerable to gray-box attacks and propose LISArD, a
defense mechanism that does not increase computational and temporal costs but
provides robustness against gray-box and white-box attacks without including
AT. Our method approximates a cross-correlation matrix, created with the
embeddings of perturbed and clean images, to a diagonal matrix while
simultaneously conducting classification learning. Our results show that LISArD
can effectively protect against gray-box attacks, can be used in multiple
architectures, and carries over its resilience to the white-box scenario. Also,
state-of-the-art AD models underperform greatly when removing AT and/or moving
to gray-box settings, highlighting the lack of robustness from existing
approaches to perform in various conditions (aside from white-box settings).
All the source code is available at https://github.com/Joana-Cabral/LISArD.
|
[
{
"version": "v1",
"created": "Thu, 27 Feb 2025 22:02:06 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Costa",
"Joana C.",
""
],
[
"Roxo",
"Tiago",
""
],
[
"Proença",
"Hugo",
""
],
[
"Inácio",
"Pedro R. M.",
""
]
] |
TITLE: LISArD: Learning Image Similarity to Defend Against Gray-box Adversarial
Attacks
ABSTRACT: State-of-the-art defense mechanisms are typically evaluated in the context of
white-box attacks, which is not realistic, as it assumes the attacker can
access the gradients of the target network. To protect against this scenario,
Adversarial Training (AT) and Adversarial Distillation (AD) include adversarial
examples during the training phase, and Adversarial Purification uses a
generative model to reconstruct all the images given to the classifier. This
paper considers an even more realistic evaluation scenario: gray-box attacks,
which assume that the attacker knows the architecture and the dataset used to
train the target network, but cannot access its gradients. We provide empirical
evidence that models are vulnerable to gray-box attacks and propose LISArD, a
defense mechanism that does not increase computational and temporal costs but
provides robustness against gray-box and white-box attacks without including
AT. Our method approximates a cross-correlation matrix, created with the
embeddings of perturbed and clean images, to a diagonal matrix while
simultaneously conducting classification learning. Our results show that LISArD
can effectively protect against gray-box attacks, can be used in multiple
architectures, and carries over its resilience to the white-box scenario. Also,
state-of-the-art AD models underperform greatly when removing AT and/or moving
to gray-box settings, highlighting the lack of robustness from existing
approaches to perform in various conditions (aside from white-box settings).
All the source code is available at https://github.com/Joana-Cabral/LISArD.
|
no_new_dataset
| 0.941975
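The LISArD record above describes pushing the cross-correlation matrix of clean and perturbed embeddings toward a diagonal matrix while training the classifier. A minimal sketch of that idea in the spirit of decorrelation losses (our own illustration, not the authors' code; the weighting hyperparameter is an assumption):

```python
import torch

def cross_correlation_loss(z_clean, z_pert, off_diag_weight=0.005, eps=1e-6):
    """Push the clean/perturbed embedding cross-correlation toward the identity.

    z_clean, z_pert: (batch, dim) embeddings of clean and perturbed views.
    """
    # Standardize each embedding dimension over the batch.
    z1 = (z_clean - z_clean.mean(0)) / (z_clean.std(0) + eps)
    z2 = (z_pert - z_pert.mean(0)) / (z_pert.std(0) + eps)

    n = z1.shape[0]
    c = (z1.T @ z2) / n                               # (dim, dim) cross-correlation

    on_diag = (torch.diagonal(c) - 1).pow(2).sum()    # pull diagonal entries toward 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # push the rest toward 0
    return on_diag + off_diag_weight * off_diag

# Combined with the usual classification objective, e.g.:
#   loss = F.cross_entropy(logits_clean, labels) + cross_correlation_loss(z_clean, z_pert)
```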
|
2502.20571
|
David Anastasiu
|
Yanhong Li and David C. Anastasiu
|
PFformer: A Position-Free Transformer Variant for Extreme-Adaptive
Multivariate Time Series Forecasting
|
PAKDD 2025 special session on Data Science: Foundations and
Applications (DSFA)
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Multivariate time series (MTS) forecasting is vital in fields like weather,
energy, and finance. However, despite deep learning advancements, traditional
Transformer-based models often diminish the effect of crucial inter-variable
relationships by singular token embedding and struggle to effectively capture
complex dependencies among variables, especially in datasets with rare or
extreme events. These events create significant imbalances and lead to high
skewness, complicating accurate prediction efforts. This study introduces
PFformer, a position-free Transformer-based model designed for single-target
MTS forecasting, specifically for challenging datasets characterized by extreme
variability. PFformer integrates two novel embedding strategies: Enhanced
Feature-based Embedding (EFE) and Auto-Encoder-based Embedding (AEE). EFE
effectively encodes inter-variable dependencies by mapping related sequence
subsets to high-dimensional spaces without positional constraints, enhancing
the encoder's functionality. PFformer shows superior forecasting accuracy
without the traditional limitations of positional encoding in MTS modeling. We
evaluated PFformer across four challenging datasets, focusing on two key
forecasting scenarios: long sequence prediction for 3 days ahead and rolling
predictions every four hours to reflect real-time decision-making processes in
water management. PFformer demonstrated remarkable improvements, from 20% to
60%, compared with state-of-the-art models.
|
[
{
"version": "v1",
"created": "Thu, 27 Feb 2025 22:21:27 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Li",
"Yanhong",
""
],
[
"Anastasiu",
"David C.",
""
]
] |
TITLE: PFformer: A Position-Free Transformer Variant for Extreme-Adaptive
Multivariate Time Series Forecasting
ABSTRACT: Multivariate time series (MTS) forecasting is vital in fields like weather,
energy, and finance. However, despite deep learning advancements, traditional
Transformer-based models often diminish the effect of crucial inter-variable
relationships by singular token embedding and struggle to effectively capture
complex dependencies among variables, especially in datasets with rare or
extreme events. These events create significant imbalances and lead to high
skewness, complicating accurate prediction efforts. This study introduces
PFformer, a position-free Transformer-based model designed for single-target
MTS forecasting, specifically for challenging datasets characterized by extreme
variability. PFformer integrates two novel embedding strategies: Enhanced
Feature-based Embedding (EFE) and Auto-Encoder-based Embedding (AEE). EFE
effectively encodes inter-variable dependencies by mapping related sequence
subsets to high-dimensional spaces without positional constraints, enhancing
the encoder's functionality. PFformer shows superior forecasting accuracy
without the traditional limitations of positional encoding in MTS modeling. We
evaluated PFformer across four challenging datasets, focusing on two key
forecasting scenarios: long sequence prediction for 3 days ahead and rolling
predictions every four hours to reflect real-time decision-making processes in
water management. PFformer demonstrated remarkable improvements, from 20% to
60%, compared with state-of-the-art models.
|
no_new_dataset
| 0.946695
|
2502.20572
|
Huthaifa I. Ashqar
|
Mohammad Abu Tami, Mohammed Elhenawy, and Huthaifa I. Ashqar
|
HazardNet: A Small-Scale Vision Language Model for Real-Time Traffic
Safety Detection at Edge Devices
| null | null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Traffic safety remains a vital concern in contemporary urban settings,
intensified by the increase of vehicles and the complicated nature of road
networks. Traditional safety-critical event detection systems predominantly
rely on sensor-based approaches and conventional machine learning algorithms,
necessitating extensive data collection and complex training processes to
adhere to traffic safety regulations. This paper introduces HazardNet, a
small-scale Vision Language Model designed to enhance traffic safety by
leveraging the reasoning capabilities of advanced language and vision models.
We built HazardNet by fine-tuning the pre-trained Qwen2-VL-2B model, chosen for
its superior performance among open-source alternatives and its compact size of
two billion parameters. This helps to facilitate deployment on edge devices
with efficient inference throughput. In addition, we present HazardQA, a novel
Vision Question Answering (VQA) dataset constructed specifically for training
HazardNet on real-world scenarios involving safety-critical events. Our
experimental results show that the fine-tuned HazardNet outperformed the base
model by up to an 89% improvement in F1-Score and achieves results comparable to
larger models such as GPT-4o, with improvements in some cases reaching up to 6%.
These advancements underscore the potential of HazardNet in
providing real-time, reliable traffic safety event detection, thereby
contributing to reduced accidents and improved traffic management in urban
environments. Both HazardNet model and the HazardQA dataset are available at
https://huggingface.co/Tami3/HazardNet and
https://huggingface.co/datasets/Tami3/HazardQA, respectively.
|
[
{
"version": "v1",
"created": "Thu, 27 Feb 2025 22:21:45 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Tami",
"Mohammad Abu",
""
],
[
"Elhenawy",
"Mohammed",
""
],
[
"Ashqar",
"Huthaifa I.",
""
]
] |
TITLE: HazardNet: A Small-Scale Vision Language Model for Real-Time Traffic
Safety Detection at Edge Devices
ABSTRACT: Traffic safety remains a vital concern in contemporary urban settings,
intensified by the increase of vehicles and the complicated nature of road
networks. Traditional safety-critical event detection systems predominantly
rely on sensor-based approaches and conventional machine learning algorithms,
necessitating extensive data collection and complex training processes to
adhere to traffic safety regulations. This paper introduces HazardNet, a
small-scale Vision Language Model designed to enhance traffic safety by
leveraging the reasoning capabilities of advanced language and vision models.
We built HazardNet by fine-tuning the pre-trained Qwen2-VL-2B model, chosen for
its superior performance among open-source alternatives and its compact size of
two billion parameters. This helps to facilitate deployment on edge devices
with efficient inference throughput. In addition, we present HazardQA, a novel
Vision Question Answering (VQA) dataset constructed specifically for training
HazardNet on real-world scenarios involving safety-critical events. Our
experimental results show that the fine-tuned HazardNet outperformed the base
model by up to an 89% improvement in F1-Score and achieves results comparable to
larger models such as GPT-4o, with improvements in some cases reaching up to 6%.
These advancements underscore the potential of HazardNet in
providing real-time, reliable traffic safety event detection, thereby
contributing to reduced accidents and improved traffic management in urban
environments. Both HazardNet model and the HazardQA dataset are available at
https://huggingface.co/Tami3/HazardNet and
https://huggingface.co/datasets/Tami3/HazardQA, respectively.
|
new_dataset
| 0.965152
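Both artifacts named in the HazardNet record above are hosted on the Hugging Face Hub. A minimal sketch for inspecting the HazardQA dataset (assuming the standard `datasets` API; the split and field layout of the repository are assumptions):

```python
from datasets import load_dataset

# Load the HazardQA VQA dataset referenced in the abstract.
hazardqa = load_dataset("Tami3/HazardQA")

print(hazardqa)          # available splits and columns
first_split = next(iter(hazardqa.values()))
print(first_split[0])    # one question/answer example (field names may vary)
```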
|
2502.20582
|
Zihao He
|
Javin Liu, Aryan Vats, Zihao He
|
CS-PaperSum: A Large-Scale Dataset of AI-Generated Summaries for
Scientific Papers
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
The rapid expansion of scientific literature in computer science presents
challenges in tracking research trends and extracting key insights. Existing
datasets provide metadata but lack structured summaries that capture core
contributions and methodologies. We introduce CS-PaperSum, a large-scale
dataset of 91,919 papers from 31 top-tier computer science conferences,
enriched with AI-generated structured summaries using ChatGPT. To assess
summary quality, we conduct embedding alignment analysis and keyword overlap
analysis, demonstrating strong preservation of key concepts. We further present
a case study on AI research trends, highlighting shifts in methodologies and
interdisciplinary crossovers, including the rise of self-supervised learning,
retrieval-augmented generation, and multimodal AI. Our dataset enables
automated literature analysis, research trend forecasting, and AI-driven
scientific discovery, providing a valuable resource for researchers,
policymakers, and scientific information retrieval systems.
|
[
{
"version": "v1",
"created": "Thu, 27 Feb 2025 22:48:35 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Liu",
"Javin",
""
],
[
"Vats",
"Aryan",
""
],
[
"He",
"Zihao",
""
]
] |
TITLE: CS-PaperSum: A Large-Scale Dataset of AI-Generated Summaries for
Scientific Papers
ABSTRACT: The rapid expansion of scientific literature in computer science presents
challenges in tracking research trends and extracting key insights. Existing
datasets provide metadata but lack structured summaries that capture core
contributions and methodologies. We introduce CS-PaperSum, a large-scale
dataset of 91,919 papers from 31 top-tier computer science conferences,
enriched with AI-generated structured summaries using ChatGPT. To assess
summary quality, we conduct embedding alignment analysis and keyword overlap
analysis, demonstrating strong preservation of key concepts. We further present
a case study on AI research trends, highlighting shifts in methodologies and
interdisciplinary crossovers, including the rise of self-supervised learning,
retrieval-augmented generation, and multimodal AI. Our dataset enables
automated literature analysis, research trend forecasting, and AI-driven
scientific discovery, providing a valuable resource for researchers,
policymakers, and scientific information retrieval systems.
|
new_dataset
| 0.958963
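One of the quality checks in the CS-PaperSum record above is keyword overlap analysis between a paper's abstract and its AI-generated summary. A minimal sketch of such a check (our own illustration; the tokenization, stopword list, and Jaccard measure are assumptions, not the authors' exact procedure):

```python
import re

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "on", "with", "we", "is"}

def keywords(text: str) -> set:
    """Lowercased content words as a rough keyword proxy."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return {t for t in tokens if t not in STOPWORDS and len(t) > 2}

def keyword_overlap(abstract: str, summary: str) -> float:
    """Jaccard overlap between the abstract's and the summary's keyword sets."""
    a, s = keywords(abstract), keywords(summary)
    return len(a & s) / len(a | s) if (a | s) else 0.0

print(keyword_overlap(
    "We introduce a large-scale dataset of structured summaries for CS papers.",
    "A large-scale dataset of AI-generated structured summaries."))
```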
|
2502.20583
|
Keisuke Kamahori
|
Keisuke Kamahori, Jungo Kasai, Noriyuki Kojima, Baris Kasikci
|
LiteASR: Efficient Automatic Speech Recognition with Low-Rank
Approximation
| null | null | null | null |
cs.LG cs.AI cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Modern automatic speech recognition (ASR) models, such as OpenAI's Whisper,
rely on deep encoder-decoder architectures, and their encoders are a critical
bottleneck for efficient deployment due to high computational intensity. We
introduce LiteASR, a low-rank compression scheme for ASR encoders that
significantly reduces inference costs while maintaining transcription accuracy.
Our approach leverages the strong low-rank properties observed in intermediate
activations: by applying principal component analysis (PCA) with a small
calibration dataset, we approximate linear transformations with a chain of
low-rank matrix multiplications, and further optimize self-attention to work in
the reduced dimension. Evaluation results show that our method can compress
Whisper large-v3's encoder size by over 50%, matching Whisper medium's size
with better transcription accuracy, thereby establishing a new Pareto-optimal
frontier of efficiency and performance. The code of LiteASR is available at
https://github.com/efeslab/LiteASR.
|
[
{
"version": "v1",
"created": "Thu, 27 Feb 2025 22:52:21 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Kamahori",
"Keisuke",
""
],
[
"Kasai",
"Jungo",
""
],
[
"Kojima",
"Noriyuki",
""
],
[
"Kasikci",
"Baris",
""
]
] |
TITLE: LiteASR: Efficient Automatic Speech Recognition with Low-Rank
Approximation
ABSTRACT: Modern automatic speech recognition (ASR) models, such as OpenAI's Whisper,
rely on deep encoder-decoder architectures, and their encoders are a critical
bottleneck for efficient deployment due to high computational intensity. We
introduce LiteASR, a low-rank compression scheme for ASR encoders that
significantly reduces inference costs while maintaining transcription accuracy.
Our approach leverages the strong low-rank properties observed in intermediate
activations: by applying principal component analysis (PCA) with a small
calibration dataset, we approximate linear transformations with a chain of
low-rank matrix multiplications, and further optimize self-attention to work in
the reduced dimension. Evaluation results show that our method can compress
Whisper large-v3's encoder size by over 50%, matching Whisper medium's size
with better transcription accuracy, thereby establishing a new Pareto-optimal
frontier of efficiency and performance. The code of LiteASR is available at
https://github.com/efeslab/LiteASR.
|
no_new_dataset
| 0.94428
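The LiteASR record above compresses encoder layers by exploiting low-rank structure in intermediate activations, replacing a linear transformation with a chain of low-rank matrix multiplications fitted on a small calibration set. A minimal numpy sketch of that general idea (our own illustration, not the authors' implementation; the self-attention adaptation is omitted):

```python
import numpy as np

def low_rank_factorize(W, calib_inputs, rank):
    """Approximate y = x @ W.T with two smaller matmuls via PCA of calibration activations.

    W:            (out_dim, in_dim) weight of a linear layer.
    calib_inputs: (n_samples, in_dim) activations feeding this layer on calibration data.
    rank:         target rank r << min(in_dim, out_dim).
    """
    # Principal directions of the activations actually fed to this layer.
    _, _, Vt = np.linalg.svd(calib_inputs, full_matrices=False)
    A = Vt[:rank].T          # (in_dim, r): project inputs onto the top-r directions
    B = W @ A                # (out_dim, r): fold the projection into the weight
    return A, B

# Inference then replaces x @ W.T with (x @ A) @ B.T, costing roughly
# in_dim * r + r * out_dim multiplications per token instead of in_dim * out_dim.
```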
|
2502.20596
|
Linh Ngo
|
Nguyen Xuan Thanh, Anh Duc Le, Quyen Tran, Thanh-Thien Le, Linh Ngo
Van, Thien Huu Nguyen
|
Few-Shot, No Problem: Descriptive Continual Relation Extraction
|
Accepted to AAAI 2025
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Few-shot Continual Relation Extraction is a crucial challenge for enabling AI
systems to identify and adapt to evolving relationships in dynamic real-world
domains. Traditional memory-based approaches often overfit to limited samples,
failing to reinforce old knowledge, with the scarcity of data in few-shot
scenarios further exacerbating these issues by hindering effective data
augmentation in the latent space. In this paper, we propose a novel
retrieval-based solution, starting with a large language model to generate
descriptions for each relation. From these descriptions, we introduce a
bi-encoder retrieval training paradigm to enrich both sample and class
representation learning. Leveraging these enhanced representations, we design a
retrieval-based prediction method where each sample "retrieves" the best
fitting relation via a reciprocal rank fusion score that integrates both
relation description vectors and class prototypes. Extensive experiments on
multiple datasets demonstrate that our method significantly advances the
state-of-the-art by maintaining robust performance across sequential tasks,
effectively addressing catastrophic forgetting.
|
[
{
"version": "v1",
"created": "Thu, 27 Feb 2025 23:44:30 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Thanh",
"Nguyen Xuan",
""
],
[
"Le",
"Anh Duc",
""
],
[
"Tran",
"Quyen",
""
],
[
"Le",
"Thanh-Thien",
""
],
[
"Van",
"Linh Ngo",
""
],
[
"Nguyen",
"Thien Huu",
""
]
] |
TITLE: Few-Shot, No Problem: Descriptive Continual Relation Extraction
ABSTRACT: Few-shot Continual Relation Extraction is a crucial challenge for enabling AI
systems to identify and adapt to evolving relationships in dynamic real-world
domains. Traditional memory-based approaches often overfit to limited samples,
failing to reinforce old knowledge, with the scarcity of data in few-shot
scenarios further exacerbating these issues by hindering effective data
augmentation in the latent space. In this paper, we propose a novel
retrieval-based solution, starting with a large language model to generate
descriptions for each relation. From these descriptions, we introduce a
bi-encoder retrieval training paradigm to enrich both sample and class
representation learning. Leveraging these enhanced representations, we design a
retrieval-based prediction method where each sample "retrieves" the best
fitting relation via a reciprocal rank fusion score that integrates both
relation description vectors and class prototypes. Extensive experiments on
multiple datasets demonstrate that our method significantly advances the
state-of-the-art by maintaining robust performance across sequential tasks,
effectively addressing catastrophic forgetting.
|
no_new_dataset
| 0.947962
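The prediction step in the record above fuses two rankings (by relation-description vectors and by class prototypes) with a reciprocal rank fusion score. The standard RRF formula is shown below; the smoothing constant $k$ and the exact set of rankers are assumptions about how the paper instantiates it:

```latex
% Reciprocal rank fusion over a set of rankers R (standard formulation):
\[
\mathrm{RRF}(c) \;=\; \sum_{r \in R} \frac{1}{k + \mathrm{rank}_r(c)},
\]
```

where $\mathrm{rank}_r(c)$ is the position of candidate relation $c$ in ranking $r$ and $k$ is a small smoothing constant (commonly $k = 60$).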
|
2502.20603
|
Ira Shokar Mr
|
Ira J. S. Shokar, Peter H. Haynes, Rich R. Kerswell
|
Deep Learning of the Evolution Operator Enables Forecasting of
Out-of-Training Dynamics in Chaotic Systems
| null | null | null | null |
cs.LG math.DS nlin.CD
|
http://creativecommons.org/licenses/by/4.0/
|
We demonstrate that a deep learning emulator for chaotic systems can forecast
phenomena absent from training data. Using the Kuramoto-Sivashinsky and
beta-plane turbulence models, we evaluate the emulator through scenarios
probing the fundamental phenomena of both systems: forecasting spontaneous
relaminarisation, capturing initialisation of arbitrary chaotic states,
zero-shot prediction of dynamics with parameter values outside of the training
range, and characterisation of dynamical statistics from artificially
restricted training datasets. Our results show that deep learning emulators can
uncover emergent behaviours and rare events in complex systems by learning
underlying mathematical rules, rather than merely mimicking observed patterns.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 00:07:18 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Shokar",
"Ira J. S.",
""
],
[
"Haynes",
"Peter H.",
""
],
[
"Kerswell",
"Rich R.",
""
]
] |
TITLE: Deep Learning of the Evolution Operator Enables Forecasting of
Out-of-Training Dynamics in Chaotic Systems
ABSTRACT: We demonstrate that a deep learning emulator for chaotic systems can forecast
phenomena absent from training data. Using the Kuramoto-Sivashinsky and
beta-plane turbulence models, we evaluate the emulator through scenarios
probing the fundamental phenomena of both systems: forecasting spontaneous
relaminarisation, capturing initialisation of arbitrary chaotic states,
zero-shot prediction of dynamics with parameter values outside of the training
range, and characterisation of dynamical statistics from artificially
restricted training datasets. Our results show that deep learning emulators can
uncover emergent behaviours and rare events in complex systems by learning
underlying mathematical rules, rather than merely mimicking observed patterns.
|
no_new_dataset
| 0.949389
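For reference, the Kuramoto-Sivashinsky system used as one of the testbeds in the record above is governed by a fourth-order chaotic PDE; its standard one-dimensional form (domain and parameter choices vary by study) is:

```latex
% One-dimensional Kuramoto-Sivashinsky equation (standard form):
\[
\partial_t u \;+\; u\,\partial_x u \;+\; \partial_x^{2} u \;+\; \partial_x^{4} u \;=\; 0 .
\]
```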
|
2502.20604
|
Hao Xuan
|
Hao Xuan, Bokai Yang, Xingyu Li
|
Exploring the Impact of Temperature Scaling in Softmax for
Classification and Adversarial Robustness
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The softmax function is a fundamental component in deep learning. This study
delves into the often-overlooked parameter within the softmax function, known
as "temperature," providing novel insights into the practical and theoretical
aspects of temperature scaling for image classification. Our empirical studies,
adopting convolutional neural networks and transformers on multiple benchmark
datasets, reveal that moderate temperatures generally introduce better overall
performance. Through extensive experiments and rigorous theoretical analysis,
we explore the role of temperature scaling in model training and unveil that
temperature not only influences learning step size but also shapes the model's
optimization direction. Moreover, for the first time, we discover a surprising
benefit of elevated temperatures: enhanced model robustness against common
corruption, natural perturbation, and non-targeted adversarial attacks like
Projected Gradient Descent. We extend our discoveries to adversarial training,
demonstrating that, compared to the standard softmax function with the default
temperature value, higher temperatures have the potential to enhance
adversarial training. The insights of this work open new avenues for improving
model performance and security in deep learning applications.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 00:07:45 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Xuan",
"Hao",
""
],
[
"Yang",
"Bokai",
""
],
[
"Li",
"Xingyu",
""
]
] |
TITLE: Exploring the Impact of Temperature Scaling in Softmax for
Classification and Adversarial Robustness
ABSTRACT: The softmax function is a fundamental component in deep learning. This study
delves into the often-overlooked parameter within the softmax function, known
as "temperature," providing novel insights into the practical and theoretical
aspects of temperature scaling for image classification. Our empirical studies,
adopting convolutional neural networks and transformers on multiple benchmark
datasets, reveal that moderate temperatures generally introduce better overall
performance. Through extensive experiments and rigorous theoretical analysis,
we explore the role of temperature scaling in model training and unveil that
temperature not only influences learning step size but also shapes the model's
optimization direction. Moreover, for the first time, we discover a surprising
benefit of elevated temperatures: enhanced model robustness against common
corruption, natural perturbation, and non-targeted adversarial attacks like
Projected Gradient Descent. We extend our discoveries to adversarial training,
demonstrating that, compared to the standard softmax function with the default
temperature value, higher temperatures have the potential to enhance
adversarial training. The insights of this work open new avenues for improving
model performance and security in deep learning applications.
|
no_new_dataset
| 0.946646
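The temperature parameter studied in the record above enters the softmax as a divisor on the logits. The standard definition (notation ours) is:

```latex
% Softmax with temperature T over logits z_1, ..., z_K:
\[
\sigma_i(\mathbf{z}; T) \;=\; \frac{\exp(z_i / T)}{\sum_{j=1}^{K} \exp(z_j / T)},
\qquad i = 1, \dots, K,
\]
```

where $T \to 0$ sharpens the distribution toward an argmax and large $T$ flattens it toward uniform, which in turn rescales the gradients seen during training.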
|
2502.20607
|
Zhefan Xu
|
Zhefan Xu, Haoyu Shen, Xinming Han, Hanyu Jin, Kanlong Ye, Kenji
Shimada
|
LV-DOT: LiDAR-visual dynamic obstacle detection and tracking for
autonomous robot navigation
|
8 pages, 7 figures, 2 tables
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate perception of dynamic obstacles is essential for autonomous robot
navigation in indoor environments. Although sophisticated 3D object detection
and tracking methods have been investigated and developed thoroughly in the
fields of computer vision and autonomous driving, their demands on expensive
and high-accuracy sensor setups and substantial computational resources from
large neural networks make them unsuitable for indoor robotics. Recently, more
lightweight perception algorithms leveraging onboard cameras or LiDAR sensors
have emerged as promising alternatives. However, relying on a single sensor
poses significant limitations: cameras have limited fields of view and can
suffer from high noise, whereas LiDAR sensors operate at lower frequencies and
lack the richness of visual features. To address this limitation, we propose a
dynamic obstacle detection and tracking framework that uses both onboard camera
and LiDAR data to enable lightweight and accurate perception. Our proposed
method expands on our previous ensemble detection approach, which integrates
outputs from multiple low-accuracy but computationally efficient detectors to
ensure real-time performance on the onboard computer. In this work, we propose
a more robust fusion strategy that integrates both LiDAR and visual data to
enhance detection accuracy further. We then utilize a tracking module that
adopts feature-based object association and the Kalman filter to track and
estimate the states of detected obstacles. In addition, a dynamic obstacle
classification algorithm is designed to robustly identify moving objects. The
dataset evaluation demonstrates better perception performance than benchmark
methods, and the physical experiments on a quadcopter robot confirm the
feasibility for real-world navigation.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 00:12:35 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Xu",
"Zhefan",
""
],
[
"Shen",
"Haoyu",
""
],
[
"Han",
"Xinming",
""
],
[
"Jin",
"Hanyu",
""
],
[
"Ye",
"Kanlong",
""
],
[
"Shimada",
"Kenji",
""
]
] |
TITLE: LV-DOT: LiDAR-visual dynamic obstacle detection and tracking for
autonomous robot navigation
ABSTRACT: Accurate perception of dynamic obstacles is essential for autonomous robot
navigation in indoor environments. Although sophisticated 3D object detection
and tracking methods have been investigated and developed thoroughly in the
fields of computer vision and autonomous driving, their demands on expensive
and high-accuracy sensor setups and substantial computational resources from
large neural networks make them unsuitable for indoor robotics. Recently, more
lightweight perception algorithms leveraging onboard cameras or LiDAR sensors
have emerged as promising alternatives. However, relying on a single sensor
poses significant limitations: cameras have limited fields of view and can
suffer from high noise, whereas LiDAR sensors operate at lower frequencies and
lack the richness of visual features. To address this limitation, we propose a
dynamic obstacle detection and tracking framework that uses both onboard camera
and LiDAR data to enable lightweight and accurate perception. Our proposed
method expands on our previous ensemble detection approach, which integrates
outputs from multiple low-accuracy but computationally efficient detectors to
ensure real-time performance on the onboard computer. In this work, we propose
a more robust fusion strategy that integrates both LiDAR and visual data to
enhance detection accuracy further. We then utilize a tracking module that
adopts feature-based object association and the Kalman filter to track and
estimate the states of detected obstacles. In addition, a dynamic obstacle
classification algorithm is designed to robustly identify moving objects. The
dataset evaluation demonstrates better perception performance than benchmark
methods, and the physical experiments on a quadcopter robot confirm the
feasibility for real-world navigation.
|
no_new_dataset
| 0.945951
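The tracking module in the LV-DOT record above uses a Kalman filter to estimate the states of detected obstacles. A minimal constant-velocity Kalman filter sketch (our own illustration; the state layout, time step, and noise values are assumptions, not the paper's configuration):

```python
import numpy as np

class ConstantVelocityKF:
    """Tracks [x, y, vx, vy] of one obstacle from noisy position measurements."""

    def __init__(self, dt=0.1, q=0.05, r=0.1):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)  # constant-velocity motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # only position is observed
        self.Q = q * np.eye(4)                           # process noise (assumed)
        self.R = r * np.eye(2)                           # measurement noise (assumed)
        self.x = np.zeros(4)                             # state estimate
        self.P = np.eye(4)                               # state covariance

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        """z: measured (x, y) position of the detection associated with this track."""
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```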
|
2502.20609
|
Mateusz Lango
|
Jędrzej Warczyński, Mateusz Lango, Ondrej Dusek
|
Leveraging Large Language Models for Building Interpretable Rule-Based
Data-to-Text Systems
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a simple approach that uses a large language model (LLM) to
automatically implement a fully interpretable rule-based data-to-text system in
pure Python. Experimental evaluation on the WebNLG dataset showed that such a
constructed system produces text of better quality (according to the BLEU and
BLEURT metrics) than the same LLM prompted to directly produce outputs, and
produces fewer hallucinations than a BART language model fine-tuned on the same
data. Furthermore, at runtime, the approach generates text in a fraction of the
processing time required by neural approaches, using only a single CPU.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 00:23:55 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Warczyński",
"Jędrzej",
""
],
[
"Lango",
"Mateusz",
""
],
[
"Dusek",
"Ondrej",
""
]
] |
TITLE: Leveraging Large Language Models for Building Interpretable Rule-Based
Data-to-Text Systems
ABSTRACT: We introduce a simple approach that uses a large language model (LLM) to
automatically implement a fully interpretable rule-based data-to-text system in
pure Python. Experimental evaluation on the WebNLG dataset showed that such a
constructed system produces text of better quality (according to the BLEU and
BLEURT metrics) than the same LLM prompted to directly produce outputs, and
produces fewer hallucinations than a BART language model fine-tuned on the same
data. Furthermore, at runtime, the approach generates text in a fraction of the
processing time required by neural approaches, using only a single CPU.
|
no_new_dataset
| 0.948728
|
2502.20612
|
Vicente Balmaseda
|
Vicente Balmaseda, Bokun Wang, Ching-Long Lin, Tianbao Yang
|
Discovering Global False Negatives On the Fly for Self-supervised
Contrastive Learning
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In self-supervised contrastive learning, negative pairs are typically
constructed using an anchor image and a sample drawn from the entire dataset,
excluding the anchor. However, this approach can result in the creation of
negative pairs with similar semantics, referred to as "false negatives",
leading to their embeddings being falsely pushed apart. To address this issue,
we introduce GloFND, an optimization-based approach that automatically learns,
on the fly, a threshold for each anchor sample to identify its false negatives
during training. In contrast to previous methods for false negative discovery,
our approach globally detects false negatives across the entire dataset rather
than locally within the mini-batch. Moreover, its per-iteration computation
cost remains independent of the dataset size. Experimental results on image and
image-text data demonstrate the effectiveness of the proposed method. Our
implementation is available at https://github.com/vibalcam/GloFND .
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 00:28:25 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Balmaseda",
"Vicente",
""
],
[
"Wang",
"Bokun",
""
],
[
"Lin",
"Ching-Long",
""
],
[
"Yang",
"Tianbao",
""
]
] |
TITLE: Discovering Global False Negatives On the Fly for Self-supervised
Contrastive Learning
ABSTRACT: In self-supervised contrastive learning, negative pairs are typically
constructed using an anchor image and a sample drawn from the entire dataset,
excluding the anchor. However, this approach can result in the creation of
negative pairs with similar semantics, referred to as "false negatives",
leading to their embeddings being falsely pushed apart. To address this issue,
we introduce GloFND, an optimization-based approach that automatically learns,
on the fly, a threshold for each anchor sample to identify its false negatives
during training. In contrast to previous methods for false negative discovery,
our approach globally detects false negatives across the entire dataset rather
than locally within the mini-batch. Moreover, its per-iteration computation
cost remains independent of the dataset size. Experimental results on image and
image-text data demonstrate the effectiveness of the proposed method. Our
implementation is available at https://github.com/vibalcam/GloFND .
|
no_new_dataset
| 0.951278
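The GloFND record above filters false negatives with a per-anchor similarity threshold learned during training. A minimal sketch of only the masking step inside a contrastive loss, with the thresholds assumed to be given (our own illustration, not the authors' optimization procedure):

```python
import torch
import torch.nn.functional as F

def masked_infonce(anchor, candidates, pos_idx, thresholds, tau=0.1):
    """InfoNCE where candidates more similar than an anchor's threshold are dropped
    from its negative set (treated as likely false negatives).

    anchor:     (batch, dim)  L2-normalized anchor embeddings
    candidates: (n, dim)      L2-normalized candidate embeddings
    pos_idx:    (batch,)      index of the designated positive for each anchor
    thresholds: (batch,)      per-anchor similarity threshold (assumed already learned)
    """
    raw_sim = anchor @ candidates.T                      # (batch, n) cosine similarities
    logits = raw_sim / tau

    # Suspected false negatives: more similar than the anchor's threshold,
    # excluding the designated positive itself.
    mask = raw_sim > thresholds.unsqueeze(1)
    mask[torch.arange(anchor.shape[0]), pos_idx] = False
    logits = logits.masked_fill(mask, float("-inf"))

    return F.cross_entropy(logits, pos_idx)
```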
|
2502.20616
|
Juntao Tan
|
Juntao Tan, Liangwei Yang, Zuxin Liu, Zhiwei Liu, Rithesh Murthy,
Tulika Manoj Awalgaonkar, Jianguo Zhang, Weiran Yao, Ming Zhu, Shirley
Kokane, Silvio Savarese, Huan Wang, Caiming Xiong, Shelby Heinecke
|
PersonaBench: Evaluating AI Models on Understanding Personal Information
through Accessing (Synthetic) Private User Data
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Personalization is critical in AI assistants, particularly in the context of
private AI models that work with individual users. A key scenario in this
domain involves enabling AI models to access and interpret a user's private
data (e.g., conversation history, user-AI interactions, app usage) to
understand personal details such as biographical information, preferences, and
social connections. However, due to the sensitive nature of such data, there
are no publicly available datasets that allow us to assess an AI model's
ability to understand users through direct access to personal information.
To address this gap, we introduce a synthetic data generation pipeline that
creates diverse, realistic user profiles and private documents simulating human
activities. Leveraging this synthetic data, we present PersonaBench, a
benchmark designed to evaluate AI models' performance in understanding personal
information derived from simulated private user data.
We evaluate Retrieval-Augmented Generation (RAG) pipelines using questions
directly related to a user's personal information, supported by the relevant
private documents provided to the models. Our results reveal that current
retrieval-augmented AI models struggle to answer private questions by
extracting personal information from user documents, highlighting the need for
improved methodologies to enhance personalization capabilities in AI.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 00:43:35 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Tan",
"Juntao",
""
],
[
"Yang",
"Liangwei",
""
],
[
"Liu",
"Zuxin",
""
],
[
"Liu",
"Zhiwei",
""
],
[
"Murthy",
"Rithesh",
""
],
[
"Awalgaonkar",
"Tulika Manoj",
""
],
[
"Zhang",
"Jianguo",
""
],
[
"Yao",
"Weiran",
""
],
[
"Zhu",
"Ming",
""
],
[
"Kokane",
"Shirley",
""
],
[
"Savarese",
"Silvio",
""
],
[
"Wang",
"Huan",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Heinecke",
"Shelby",
""
]
] |
TITLE: PersonaBench: Evaluating AI Models on Understanding Personal Information
through Accessing (Synthetic) Private User Data
ABSTRACT: Personalization is critical in AI assistants, particularly in the context of
private AI models that work with individual users. A key scenario in this
domain involves enabling AI models to access and interpret a user's private
data (e.g., conversation history, user-AI interactions, app usage) to
understand personal details such as biographical information, preferences, and
social connections. However, due to the sensitive nature of such data, there
are no publicly available datasets that allow us to assess an AI model's
ability to understand users through direct access to personal information.
To address this gap, we introduce a synthetic data generation pipeline that
creates diverse, realistic user profiles and private documents simulating human
activities. Leveraging this synthetic data, we present PersonaBench, a
benchmark designed to evaluate AI models' performance in understanding personal
information derived from simulated private user data.
We evaluate Retrieval-Augmented Generation (RAG) pipelines using questions
directly related to a user's personal information, supported by the relevant
private documents provided to the models. Our results reveal that current
retrieval-augmented AI models struggle to answer private questions by
extracting personal information from user documents, highlighting the need for
improved methodologies to enhance personalization capabilities in AI.
|
no_new_dataset
| 0.576974
|
2502.20620
|
Ayana Niwa
|
Ayana Niwa and Masahiro Kaneko and Kentaro Inui
|
Rectifying Belief Space via Unlearning to Harness LLMs' Reasoning
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) can exhibit advanced reasoning yet still
generate incorrect answers. We hypothesize that such errors frequently stem
from spurious beliefs: propositions the model internally considers true but that
are incorrect. To address this, we propose a method to rectify the belief space by
suppressing these spurious beliefs while simultaneously enhancing true ones,
thereby enabling more reliable inferences. Our approach first identifies the
beliefs that lead to incorrect or correct answers by prompting the model to
generate textual explanations, using our Forward-Backward Beam Search (FBBS).
We then apply unlearning to suppress the identified spurious beliefs and
enhance the true ones, effectively rectifying the model's belief space.
Empirical results on multiple QA datasets and LLMs show that our method
corrects previously misanswered questions without harming overall model
performance. Furthermore, our approach yields improved generalization on unseen
data, suggesting that rectifying a model's belief space is a promising
direction for mitigating errors and enhancing overall reliability.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 00:57:45 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Niwa",
"Ayana",
""
],
[
"Kaneko",
"Masahiro",
""
],
[
"Inui",
"Kentaro",
""
]
] |
TITLE: Rectifying Belief Space via Unlearning to Harness LLMs' Reasoning
ABSTRACT: Large language models (LLMs) can exhibit advanced reasoning yet still
generate incorrect answers. We hypothesize that such errors frequently stem
from spurious beliefs: propositions the model internally considers true but that
are incorrect. To address this, we propose a method to rectify the belief space by
suppressing these spurious beliefs while simultaneously enhancing true ones,
thereby enabling more reliable inferences. Our approach first identifies the
beliefs that lead to incorrect or correct answers by prompting the model to
generate textual explanations, using our Forward-Backward Beam Search (FBBS).
We then apply unlearning to suppress the identified spurious beliefs and
enhance the true ones, effectively rectifying the model's belief space.
Empirical results on multiple QA datasets and LLMs show that our method
corrects previously misanswered questions without harming overall model
performance. Furthermore, our approach yields improved generalization on unseen
data, suggesting that rectifying a model's belief space is a promising
direction for mitigating errors and enhancing overall reliability.
|
no_new_dataset
| 0.952353
|
2502.20622
|
Chi Ruan
|
Chi Ruan
|
RTGen: Real-Time Generative Detection Transformer
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While open-vocabulary object detectors require predefined categories during
inference, generative object detectors overcome this limitation by endowing the
model with text generation capabilities. However, existing generative object
detection methods directly append an autoregressive language model to an object
detector to generate texts for each detected object. This straightforward
design leads to structural redundancy and increased processing time. In this
paper, we propose a Real-Time GENerative Detection Transformer (RTGen), a
real-time generative object detector with a succinct encoder-decoder
architecture. Specifically, we introduce a novel Region-Language Decoder
(RL-Decoder), which innovatively integrates a non-autoregressive language model
into the detection decoder, enabling concurrent processing of object and text
information. With these efficient designs, RTGen achieves a remarkable
inference speed of 60.41 FPS. Moreover, RTGen obtains 18.6 mAP on the LVIS
dataset, outperforming the previous SOTA method by 3.5 mAP.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 01:01:56 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Ruan",
"Chi",
""
]
] |
TITLE: RTGen: Real-Time Generative Detection Transformer
ABSTRACT: While open-vocabulary object detectors require predefined categories during
inference, generative object detectors overcome this limitation by endowing the
model with text generation capabilities. However, existing generative object
detection methods directly append an autoregressive language model to an object
detector to generate texts for each detected object. This straightforward
design leads to structural redundancy and increased processing time. In this
paper, we propose a Real-Time GENerative Detection Transformer (RTGen), a
real-time generative object detector with a succinct encoder-decoder
architecture. Specifically, we introduce a novel Region-Language Decoder
(RL-Decoder), which innovatively integrates a non-autoregressive language model
into the detection decoder, enabling concurrent processing of object and text
information. With these efficient designs, RTGen achieves a remarkable
inference speed of 60.41 FPS. Moreover, RTGen obtains 18.6 mAP on the LVIS
dataset, outperforming the previous SOTA method by 3.5 mAP.
|
no_new_dataset
| 0.946892
|
2502.20623
|
Yuepeng Hu
|
Yuepeng Hu, Zhengyuan Jiang, Neil Zhenqiang Gong
|
SafeText: Safe Text-to-image Models via Aligning the Text Encoder
| null | null | null | null |
cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-to-image models can generate harmful images when presented with unsafe
prompts, posing significant safety and societal risks. Alignment methods aim to
modify these models to ensure they generate only non-harmful images, even when
exposed to unsafe prompts. A typical text-to-image model comprises two main
components: 1) a text encoder and 2) a diffusion module. Existing alignment
methods mainly focus on modifying the diffusion module to prevent harmful image
generation. However, this often significantly impacts the model's behavior for
safe prompts, causing substantial quality degradation of generated images. In
this work, we propose SafeText, a novel alignment method that fine-tunes the
text encoder rather than the diffusion module. By adjusting the text encoder,
SafeText significantly alters the embedding vectors for unsafe prompts, while
minimally affecting those for safe prompts. As a result, the diffusion module
generates non-harmful images for unsafe prompts while preserving the quality of
images for safe prompts. We evaluate SafeText on multiple datasets of safe and
unsafe prompts, including those generated through jailbreak attacks. Our
results show that SafeText effectively prevents harmful image generation with
minor impact on the images for safe prompts, and SafeText outperforms six
existing alignment methods. We will publish our code and data after paper
acceptance.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 01:02:57 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Hu",
"Yuepeng",
""
],
[
"Jiang",
"Zhengyuan",
""
],
[
"Gong",
"Neil Zhenqiang",
""
]
] |
TITLE: SafeText: Safe Text-to-image Models via Aligning the Text Encoder
ABSTRACT: Text-to-image models can generate harmful images when presented with unsafe
prompts, posing significant safety and societal risks. Alignment methods aim to
modify these models to ensure they generate only non-harmful images, even when
exposed to unsafe prompts. A typical text-to-image model comprises two main
components: 1) a text encoder and 2) a diffusion module. Existing alignment
methods mainly focus on modifying the diffusion module to prevent harmful image
generation. However, this often significantly impacts the model's behavior for
safe prompts, causing substantial quality degradation of generated images. In
this work, we propose SafeText, a novel alignment method that fine-tunes the
text encoder rather than the diffusion module. By adjusting the text encoder,
SafeText significantly alters the embedding vectors for unsafe prompts, while
minimally affecting those for safe prompts. As a result, the diffusion module
generates non-harmful images for unsafe prompts while preserving the quality of
images for safe prompts. We evaluate SafeText on multiple datasets of safe and
unsafe prompts, including those generated through jailbreak attacks. Our
results show that SafeText effectively prevents harmful image generation with
minor impact on the images for safe prompts, and SafeText outperforms six
existing alignment methods. We will publish our code and data after paper
acceptance.
|
no_new_dataset
| 0.949856
|
2502.20639
|
Leming Shen
|
Leming Shen, Qiang Yang, Kaiyan Cui, Yuanqing Zheng, Xiao-Yong Wei,
Jianwei Liu, Jinsong Han
|
FedConv: A Learning-on-Model Paradigm for Heterogeneous Federated
Clients
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Federated Learning (FL) facilitates collaborative training of a shared global
model without exposing clients' private data. In practical FL systems, clients
(e.g., edge servers, smartphones, and wearables) typically have disparate
system resources. Conventional FL, however, adopts a one-size-fits-all
solution, where a homogeneous large global model is transmitted to and trained
on each client, resulting in an overwhelming workload for less capable clients
and starvation for other clients. To address this issue, we propose FedConv, a
client-friendly FL framework, which minimizes the computation and memory burden
on resource-constrained clients by providing heterogeneous customized
sub-models. FedConv features a novel learning-on-model paradigm that learns the
parameters of the heterogeneous sub-models via convolutional compression.
Unlike traditional compression methods, the compressed models in FedConv can be
directly trained on clients without decompression. To aggregate the
heterogeneous sub-models, we propose transposed convolutional dilation to
convert them back to large models with a unified size while retaining
personalized information from clients. The compression and dilation processes,
transparent to clients, are optimized on the server leveraging a small public
dataset. Extensive experiments on six datasets demonstrate that FedConv
outperforms state-of-the-art FL systems in terms of model accuracy (by more
than 35% on average), computation and communication overhead (with 33% and 25%
reduction, respectively).
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 01:39:53 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Shen",
"Leming",
""
],
[
"Yang",
"Qiang",
""
],
[
"Cui",
"Kaiyan",
""
],
[
"Zheng",
"Yuanqing",
""
],
[
"Wei",
"Xiao-Yong",
""
],
[
"Liu",
"Jianwei",
""
],
[
"Han",
"Jinsong",
""
]
] |
TITLE: FedConv: A Learning-on-Model Paradigm for Heterogeneous Federated
Clients
ABSTRACT: Federated Learning (FL) facilitates collaborative training of a shared global
model without exposing clients' private data. In practical FL systems, clients
(e.g., edge servers, smartphones, and wearables) typically have disparate
system resources. Conventional FL, however, adopts a one-size-fits-all
solution, where a homogeneous large global model is transmitted to and trained
on each client, resulting in an overwhelming workload for less capable clients
and starvation for other clients. To address this issue, we propose FedConv, a
client-friendly FL framework, which minimizes the computation and memory burden
on resource-constrained clients by providing heterogeneous customized
sub-models. FedConv features a novel learning-on-model paradigm that learns the
parameters of the heterogeneous sub-models via convolutional compression.
Unlike traditional compression methods, the compressed models in FedConv can be
directly trained on clients without decompression. To aggregate the
heterogeneous sub-models, we propose transposed convolutional dilation to
convert them back to large models with a unified size while retaining
personalized information from clients. The compression and dilation processes,
transparent to clients, are optimized on the server leveraging a small public
dataset. Extensive experiments on six datasets demonstrate that FedConv
outperforms state-of-the-art FL systems in terms of model accuracy (by more
than 35% on average), computation and communication overhead (with 33% and 25%
reduction, respectively).
|
no_new_dataset
| 0.951097
|
2502.20643
|
Pengyu Zhang
|
Pengyu Zhang, Xieyuanli Chen, Yuwei Chen, Beizhen Bi, Zhuo Xu, Tian
Jin, Xiaotao Huang, Liang Shen
|
EDENet: Echo Direction Encoding Network for Place Recognition Based on
Ground Penetrating Radar
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ground penetrating radar (GPR) based localization has gained significant
recognition in robotics due to its ability to detect stable subsurface
features, offering advantages in environments where traditional sensors like
cameras and LiDAR may struggle. However, existing methods are primarily focused
on small-scale place recognition (PR), leaving the challenges of PR in
large-scale maps unaddressed. These challenges include the inherent sparsity of
underground features and the variability in underground dielectric constants,
which complicate robust localization. In this work, we investigate the
geometric relationship between GPR echo sequences and underground scenes,
leveraging the robustness of directional features to inform our network design.
We introduce learnable Gabor filters for the precise extraction of directional
responses, coupled with a direction-aware attention mechanism for effective
geometric encoding. To further enhance performance, we incorporate a
shift-invariant unit and a multi-scale aggregation strategy to better
accommodate variations in dielectric constants. Experiments conducted on
public datasets demonstrate that our proposed EDENet not only surpasses
existing solutions in terms of PR performance but also offers advantages in
model size and computational efficiency.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 01:48:12 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Zhang",
"Pengyu",
""
],
[
"Chen",
"Xieyuanli",
""
],
[
"Chen",
"Yuwei",
""
],
[
"Bi",
"Beizhen",
""
],
[
"Xu",
"Zhuo",
""
],
[
"Jin",
"Tian",
""
],
[
"Huang",
"Xiaotao",
""
],
[
"Shen",
"Liang",
""
]
] |
TITLE: EDENet: Echo Direction Encoding Network for Place Recognition Based on
Ground Penetrating Radar
ABSTRACT: Ground penetrating radar (GPR) based localization has gained significant
recognition in robotics due to its ability to detect stable subsurface
features, offering advantages in environments where traditional sensors like
cameras and LiDAR may struggle. However, existing methods are primarily focused
on small-scale place recognition (PR), leaving the challenges of PR in
large-scale maps unaddressed. These challenges include the inherent sparsity of
underground features and the variability in underground dielectric constants,
which complicate robust localization. In this work, we investigate the
geometric relationship between GPR echo sequences and underground scenes,
leveraging the robustness of directional features to inform our network design.
We introduce learnable Gabor filters for the precise extraction of directional
responses, coupled with a direction-aware attention mechanism for effective
geometric encoding. To further enhance performance, we incorporate a
shift-invariant unit and a multi-scale aggregation strategy to better
accommodate variations in dielectric constants. Experiments conducted on
public datasets demonstrate that our proposed EDENet not only surpasses
existing solutions in terms of PR performance but also offers advantages in
model size and computational efficiency.
|
no_new_dataset
| 0.950778
|
2502.20647
|
Haleh Shahzad Dr
|
Colleen Gilhuly, Haleh Shahzad
|
Consistency Evaluation of News Article Summaries Generated by Large (and
Small) Language Models
|
21 pages, 6 figures, 4 tables
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Text summarization is a critical Natural Language Processing (NLP) task with
applications ranging from information retrieval to content generation. Large
Language Models (LLMs) have shown remarkable promise in generating fluent
abstractive summaries but they can produce hallucinated details not grounded in
the source text. Regardless of the method of generating a summary, high quality
automated evaluations remain an open area of investigation. This paper embarks
on an exploration of text summarization with a diverse set of techniques,
including TextRank, BART, Mistral-7B-Instruct, and OpenAI GPT-3.5-Turbo. The
generated summaries are evaluated using traditional metrics such as the
Recall-Oriented Understudy for Gisting Evaluation (ROUGE) Score and
Bidirectional Encoder Representations from Transformers (BERT) Score, as well
as LLM-powered evaluation methods that directly assess a generated summary's
consistency with the source text. We introduce a meta evaluation score which
directly assesses the performance of the LLM evaluation system (prompt +
model). We find that all summarization models produce consistent summaries
when tested on the XL-Sum dataset, exceeding the consistency of the reference
summaries.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 01:58:17 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Gilhuly",
"Colleen",
""
],
[
"Shahzad",
"Haleh",
""
]
] |
TITLE: Consistency Evaluation of News Article Summaries Generated by Large (and
Small) Language Models
ABSTRACT: Text summarization is a critical Natural Language Processing (NLP) task with
applications ranging from information retrieval to content generation. Large
Language Models (LLMs) have shown remarkable promise in generating fluent
abstractive summaries but they can produce hallucinated details not grounded in
the source text. Regardless of the method of generating a summary, high quality
automated evaluations remain an open area of investigation. This paper embarks
on an exploration of text summarization with a diverse set of techniques,
including TextRank, BART, Mistral-7B-Instruct, and OpenAI GPT-3.5-Turbo. The
generated summaries are evaluated using traditional metrics such as the
Recall-Oriented Understudy for Gisting Evaluation (ROUGE) Score and
Bidirectional Encoder Representations from Transformers (BERT) Score, as well
as LLM-powered evaluation methods that directly assess a generated summary's
consistency with the source text. We introduce a meta evaluation score which
directly assesses the performance of the LLM evaluation system (prompt +
model). We find that all summarization models produce consistent summaries
when tested on the XL-Sum dataset, exceeding the consistency of the reference
summaries.
|
no_new_dataset
| 0.935582
|
2502.20651
|
Sakshi Singh
|
Rishi Mukherjee, Sakshi Singh, Jack McWilliams, Junaed Sattar
|
The Common Objects Underwater (COU) Dataset for Robust Underwater Object
Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce COU: Common Objects Underwater, an instance-segmented image
dataset of commonly found man-made objects in multiple aquatic and marine
environments. COU contains approximately 10K segmented images, annotated from
images collected during a number of underwater robot field trials in diverse
locations. COU has been created to address the lack of datasets with robust
class coverage curated for underwater instance segmentation, which is
particularly useful for training light-weight, real-time capable detectors for
Autonomous Underwater Vehicles (AUVs). In addition, COU addresses the lack of
diversity in object classes since the commonly available underwater image
datasets focus only on marine life. Currently, COU contains images from both
closed-water (pool) and open-water (lakes and oceans) environments, of 24
different classes of objects including marine debris, dive tools, and AUVs. To
assess the efficacy of COU in training underwater object detectors, we use
three state-of-the-art models to evaluate its performance and accuracy, using a
combination of standard accuracy and efficiency metrics. The improved
performance of COU-trained detectors over those solely trained on terrestrial
data demonstrates the clear advantage of training with annotated underwater
images. We make COU available for broad use under open-source licenses.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 02:12:24 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Mukherjee",
"Rishi",
""
],
[
"Singh",
"Sakshi",
""
],
[
"McWilliams",
"Jack",
""
],
[
"Sattar",
"Junaed",
""
]
] |
TITLE: The Common Objects Underwater (COU) Dataset for Robust Underwater Object
Detection
ABSTRACT: We introduce COU: Common Objects Underwater, an instance-segmented image
dataset of commonly found man-made objects in multiple aquatic and marine
environments. COU contains approximately 10K segmented images, annotated from
images collected during a number of underwater robot field trials in diverse
locations. COU has been created to address the lack of datasets with robust
class coverage curated for underwater instance segmentation, which is
particularly useful for training light-weight, real-time capable detectors for
Autonomous Underwater Vehicles (AUVs). In addition, COU addresses the lack of
diversity in object classes since the commonly available underwater image
datasets focus only on marine life. Currently, COU contains images from both
closed-water (pool) and open-water (lakes and oceans) environments, of 24
different classes of objects including marine debris, dive tools, and AUVs. To
assess the efficacy of COU in training underwater object detectors, we use
three state-of-the-art models to evaluate its performance and accuracy, using a
combination of standard accuracy and efficiency metrics. The improved
performance of COU-trained detectors over those solely trained on terrestrial
data demonstrates the clear advantage of training with annotated underwater
images. We make COU available for broad use under open-source licenses.
|
new_dataset
| 0.97151
|
2502.20653
|
Shaobo Wang
|
Shaobo Wang and Yicun Yang and Zhiyuan Liu and Chenghao Sun and Xuming
Hu and Conghui He and Linfeng Zhang
|
Dataset Distillation with Neural Characteristic Function: A Minmax
Perspective
|
Accepted by CVPR 2025, 11 pages, 7 figures
|
Conference on Computer Vision and Pattern Recognition, 2025
| null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Dataset distillation has emerged as a powerful approach for reducing data
requirements in deep learning. Among various methods, distribution
matching-based approaches stand out for their balance of computational
efficiency and strong performance. However, existing distance metrics used in
distribution matching often fail to accurately capture distributional
differences, leading to unreliable measures of discrepancy. In this paper, we
reformulate dataset distillation as a minmax optimization problem and introduce
Neural Characteristic Function Discrepancy (NCFD), a comprehensive and
theoretically grounded metric for measuring distributional differences. NCFD
leverages the Characteristic Function (CF) to encapsulate full distributional
information, employing a neural network to optimize the sampling strategy for
the CF's frequency arguments, thereby maximizing the discrepancy to enhance
distance estimation. Simultaneously, we minimize the difference between real
and synthetic data under this optimized NCFD measure. Our approach, termed
Neural Characteristic Function Matching (\mymethod{}), inherently aligns the
phase and amplitude of neural features in the complex plane for both real and
synthetic data, achieving a balance between realism and diversity in synthetic
samples. Experiments demonstrate that our method achieves significant
performance gains over state-of-the-art methods on both low- and
high-resolution datasets. Notably, we achieve a 20.5\% accuracy boost on
ImageSquawk. Our method also reduces GPU memory usage by over 300$\times$ and
achieves 20$\times$ faster processing speeds compared to state-of-the-art
methods. To the best of our knowledge, this is the first work to achieve
lossless compression of CIFAR-100 on a single NVIDIA 2080 Ti GPU using only 2.3
GB of memory.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 02:14:55 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Wang",
"Shaobo",
""
],
[
"Yang",
"Yicun",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Sun",
"Chenghao",
""
],
[
"Hu",
"Xuming",
""
],
[
"He",
"Conghui",
""
],
[
"Zhang",
"Linfeng",
""
]
] |
TITLE: Dataset Distillation with Neural Characteristic Function: A Minmax
Perspective
ABSTRACT: Dataset distillation has emerged as a powerful approach for reducing data
requirements in deep learning. Among various methods, distribution
matching-based approaches stand out for their balance of computational
efficiency and strong performance. However, existing distance metrics used in
distribution matching often fail to accurately capture distributional
differences, leading to unreliable measures of discrepancy. In this paper, we
reformulate dataset distillation as a minmax optimization problem and introduce
Neural Characteristic Function Discrepancy (NCFD), a comprehensive and
theoretically grounded metric for measuring distributional differences. NCFD
leverages the Characteristic Function (CF) to encapsulate full distributional
information, employing a neural network to optimize the sampling strategy for
the CF's frequency arguments, thereby maximizing the discrepancy to enhance
distance estimation. Simultaneously, we minimize the difference between real
and synthetic data under this optimized NCFD measure. Our approach, termed
Neural Characteristic Function Matching (\mymethod{}), inherently aligns the
phase and amplitude of neural features in the complex plane for both real and
synthetic data, achieving a balance between realism and diversity in synthetic
samples. Experiments demonstrate that our method achieves significant
performance gains over state-of-the-art methods on both low- and
high-resolution datasets. Notably, we achieve a 20.5\% accuracy boost on
ImageSquawk. Our method also reduces GPU memory usage by over 300$\times$ and
achieves 20$\times$ faster processing speeds compared to state-of-the-art
methods. To the best of our knowledge, this is the first work to achieve
lossless compression of CIFAR-100 on a single NVIDIA 2080 Ti GPU using only 2.3
GB of memory.
|
no_new_dataset
| 0.949295
|
2502.20661
|
Chaeyun Jang
|
Hyungi Lee, Chaeyun Jang, Dongbok Lee, Juho Lee
|
Dimension Agnostic Neural Processes
|
10 pages, 5 figures, Accepted to ICLR 2025 (International Conference
on Learning Representations)
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Meta-learning aims to train models that can generalize to new tasks with
limited labeled data by extracting shared features across diverse task
datasets. Additionally, it accounts for prediction uncertainty during both
training and evaluation, a concept known as uncertainty-aware meta-learning.
Neural Process (NP) is a well-known uncertainty-aware meta-learning method that
constructs implicit stochastic processes using parametric neural networks,
enabling rapid adaptation to new tasks. However, existing NP methods face
challenges in accommodating diverse input dimensions and learned features,
limiting their broad applicability across regression tasks. To address these
limitations and advance the utility of NP models as general regressors, we
introduce Dimension Agnostic Neural Processes (DANP). DANP incorporates a
Dimension Aggregator Block (DAB) to transform input features into a
fixed-dimensional space, enhancing the model's ability to handle diverse
datasets. Furthermore, leveraging the Transformer architecture and latent
encoding layers, DANP learns a wider range of features that are generalizable
across various tasks. Through comprehensive experimentation on various
synthetic and practical regression tasks, we empirically show that DANP
outperforms previous NP variations, showcasing its effectiveness in overcoming
the limitations of traditional NP models and its potential for broader
applicability in diverse regression scenarios.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 02:40:59 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Lee",
"Hyungi",
""
],
[
"Jang",
"Chaeyun",
""
],
[
"Lee",
"Dongbok",
""
],
[
"Lee",
"Juho",
""
]
] |
TITLE: Dimension Agnostic Neural Processes
ABSTRACT: Meta-learning aims to train models that can generalize to new tasks with
limited labeled data by extracting shared features across diverse task
datasets. Additionally, it accounts for prediction uncertainty during both
training and evaluation, a concept known as uncertainty-aware meta-learning.
Neural Process (NP) is a well-known uncertainty-aware meta-learning method that
constructs implicit stochastic processes using parametric neural networks,
enabling rapid adaptation to new tasks. However, existing NP methods face
challenges in accommodating diverse input dimensions and learned features,
limiting their broad applicability across regression tasks. To address these
limitations and advance the utility of NP models as general regressors, we
introduce Dimension Agnostic Neural Processes (DANP). DANP incorporates a
Dimension Aggregator Block (DAB) to transform input features into a
fixed-dimensional space, enhancing the model's ability to handle diverse
datasets. Furthermore, leveraging the Transformer architecture and latent
encoding layers, DANP learns a wider range of features that are generalizable
across various tasks. Through comprehensive experimentation on various
synthetic and practical regression tasks, we empirically show that DANP
outperforms previous NP variations, showcasing its effectiveness in overcoming
the limitations of traditional NP models and its potential for broader
applicability in diverse regression scenarios.
|
no_new_dataset
| 0.944434
|
2502.20667
|
Ojonugwa Ejiga Peter
|
Ojonugwa Oluwafemi Ejiga Peter, Md Mahmudur Rahman, and Fahmi Khalifa
|
Advancing AI-Powered Medical Image Synthesis: Insights from MedVQA-GI
Challenge Using CLIP, Fine-Tuned Stable Diffusion, and Dream-Booth + LoRA
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The MEDVQA-GI challenge addresses the integration of AI-driven text-to-image
generative models in medical diagnostics, aiming to enhance diagnostic
capabilities through synthetic image generation. Existing methods primarily
focus on static image analysis and lack the dynamic generation of medical
imagery from textual descriptions. This study intends to partially close this
gap by introducing a novel approach based on fine-tuned generative models to
generate dynamic, scalable, and precise images from textual descriptions.
Particularly, our system integrates fine-tuned Stable Diffusion and DreamBooth
models, as well as Low-Rank Adaptation (LORA), to generate high-fidelity
medical images. The problem revolves around two sub-tasks, namely image synthesis
(IS) and optimal prompt production (OPG). The former creates medical images via
verbal prompts, whereas the latter provides prompts that produce high-quality
images in specified categories. The study emphasizes the limitations of
traditional medical image generation methods, such as hand sketching,
constrained datasets, static procedures, and generic models. Our evaluation
measures showed that Stable Diffusion surpasses CLIP and DreamBooth + LORA in
terms of producing high-quality, diversified images. Specifically, Stable
Diffusion had the lowest Fr\'echet Inception Distance (FID) scores (0.099 for
single center, 0.064 for multi-center, and 0.067 for combined), indicating
higher image quality. Furthermore, it had the highest average Inception Score
(2.327 across all datasets), indicating exceptional diversity and quality. This
advances the field of AI-powered medical diagnosis. Future research will
concentrate on model refining, dataset augmentation, and ethical considerations
for efficiently implementing these advances into clinical practice.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 02:49:45 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Peter",
"Ojonugwa Oluwafemi Ejiga",
""
],
[
"Rahman",
"Md Mahmudur",
""
],
[
"Khalifa",
"Fahmi",
""
]
] |
TITLE: Advancing AI-Powered Medical Image Synthesis: Insights from MedVQA-GI
Challenge Using CLIP, Fine-Tuned Stable Diffusion, and Dream-Booth + LoRA
ABSTRACT: The MEDVQA-GI challenge addresses the integration of AI-driven text-to-image
generative models in medical diagnostics, aiming to enhance diagnostic
capabilities through synthetic image generation. Existing methods primarily
focus on static image analysis and lack the dynamic generation of medical
imagery from textual descriptions. This study intends to partially close this
gap by introducing a novel approach based on fine-tuned generative models to
generate dynamic, scalable, and precise images from textual descriptions.
Particularly, our system integrates fine-tuned Stable Diffusion and DreamBooth
models, as well as Low-Rank Adaptation (LORA), to generate high-fidelity
medical images. The problem revolves around two sub-tasks, namely image synthesis
(IS) and optimal prompt production (OPG). The former creates medical images via
verbal prompts, whereas the latter provides prompts that produce high-quality
images in specified categories. The study emphasizes the limitations of
traditional medical image generation methods, such as hand sketching,
constrained datasets, static procedures, and generic models. Our evaluation
measures showed that Stable Diffusion surpasses CLIP and DreamBooth + LORA in
terms of producing high-quality, diversified images. Specifically, Stable
Diffusion had the lowest Fr\'echet Inception Distance (FID) scores (0.099 for
single center, 0.064 for multi-center, and 0.067 for combined), indicating
higher image quality. Furthermore, it had the highest average Inception Score
(2.327 across all datasets), indicating exceptional diversity and quality. This
advances the field of AI-powered medical diagnosis. Future research will
concentrate on model refining, dataset augmentation, and ethical considerations
for efficiently implementing these advances into clinical practice.
|
no_new_dataset
| 0.955068
|
2502.20668
|
Xiang Xiang
|
Xiang Xiang, Zhuo Xu, Yao Deng, Qinhao Zhou, Yifan Liang, Ke Chen,
Qingfang Zheng, Yaowei Wang, Xilin Chen, Wen Gao
|
OpenEarthSensing: Large-Scale Fine-Grained Benchmark for Open-World
Remote Sensing
| null | null | null | null |
cs.CV cs.AI cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In open-world remote sensing, deployed models must continuously adapt to a
steady influx of new data, which often exhibits various shifts compared to what
the model encountered during the training phase. To effectively handle the new
data, models are required to detect semantic shifts, adapt to covariate shifts,
and continuously update themselves. These challenges give rise to a variety of
open-world tasks. However, existing open-world remote sensing studies typically
train and test within a single dataset to simulate open-world conditions.
Currently, there is a lack of large-scale benchmarks capable of evaluating
multiple open-world tasks. In this paper, we introduce OpenEarthSensing, a
large-scale fine-grained benchmark for open-world remote sensing.
OpenEarthSensing includes 189 scene and object categories, covering the vast
majority of potential semantic shifts that may occur in the real world.
Additionally, OpenEarthSensing encompasses five data domains with significant
covariate shifts, including two RGB satellite domains, one RGB aerial domain,
one MS RGB domain, and one infrared domain. The various domains provide a more
comprehensive testbed for evaluating the generalization performance of
open-world models. We conduct a baseline evaluation of current mainstream
open-world tasks and methods on OpenEarthSensing, demonstrating that it serves
as a challenging benchmark for open-world remote sensing.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 02:49:52 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Xiang",
"Xiang",
""
],
[
"Xu",
"Zhuo",
""
],
[
"Deng",
"Yao",
""
],
[
"Zhou",
"Qinhao",
""
],
[
"Liang",
"Yifan",
""
],
[
"Chen",
"Ke",
""
],
[
"Zheng",
"Qingfang",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Chen",
"Xilin",
""
],
[
"Gao",
"Wen",
""
]
] |
TITLE: OpenEarthSensing: Large-Scale Fine-Grained Benchmark for Open-World
Remote Sensing
ABSTRACT: In open-world remote sensing, deployed models must continuously adapt to a
steady influx of new data, which often exhibits various shifts compared to what
the model encountered during the training phase. To effectively handle the new
data, models are required to detect semantic shifts, adapt to covariate shifts,
and continuously update themselves. These challenges give rise to a variety of
open-world tasks. However, existing open-world remote sensing studies typically
train and test within a single dataset to simulate open-world conditions.
Currently, there is a lack of large-scale benchmarks capable of evaluating
multiple open-world tasks. In this paper, we introduce OpenEarthSensing, a
large-scale fine-grained benchmark for open-world remote sensing.
OpenEarthSensing includes 189 scene and object categories, covering the vast
majority of potential semantic shifts that may occur in the real world.
Additionally, OpenEarthSensing encompasses five data domains with significant
covariate shifts, including two RGB satellite domains, one RGB aerial domain,
one MS RGB domain, and one infrared domain. The various domains provide a more
comprehensive testbed for evaluating the generalization performance of
open-world models. We conduct a baseline evaluation of current mainstream
open-world tasks and methods on OpenEarthSensing, demonstrating that it serves
as a challenging benchmark for open-world remote sensing.
|
no_new_dataset
| 0.909667
|
2502.20669
|
John Han
|
John J. Han, Jie Ying Wu
|
EndoPBR: Material and Lighting Estimation for Photorealistic Surgical
Simulations via Physically-based Rendering
|
10 pages, 3 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The lack of labeled datasets in 3D vision for surgical scenes inhibits the
development of robust 3D reconstruction algorithms in the medical domain.
Despite the popularity of Neural Radiance Fields and 3D Gaussian Splatting in
the general computer vision community, these systems have yet to find
consistent success in surgical scenes due to challenges such as non-stationary
lighting and non-Lambertian surfaces. As a result, the need for labeled
surgical datasets continues to grow. In this work, we introduce a
differentiable rendering framework for material and lighting estimation from
endoscopic images and known geometry. Compared to previous approaches that
model lighting and material jointly as radiance, we explicitly disentangle
these scene properties for robust and photorealistic novel view synthesis. To
disambiguate the training process, we formulate domain-specific properties
inherent in surgical scenes. Specifically, we model the scene lighting as a
simple spotlight and material properties as a bidirectional reflectance
distribution function, parameterized by a neural network. By grounding color
predictions in the rendering equation, we can generate photorealistic images at
arbitrary camera poses. We evaluate our method with various sequences from the
Colonoscopy 3D Video Dataset and show that our method produces competitive
novel view synthesis results compared with other approaches. Furthermore, we
demonstrate that synthetic data can be used to develop 3D vision algorithms by
finetuning a depth estimation model with our rendered outputs. Overall, we see
that the depth estimation performance is on par with fine-tuning with the
original real images.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 02:50:59 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Han",
"John J.",
""
],
[
"Wu",
"Jie Ying",
""
]
] |
TITLE: EndoPBR: Material and Lighting Estimation for Photorealistic Surgical
Simulations via Physically-based Rendering
ABSTRACT: The lack of labeled datasets in 3D vision for surgical scenes inhibits the
development of robust 3D reconstruction algorithms in the medical domain.
Despite the popularity of Neural Radiance Fields and 3D Gaussian Splatting in
the general computer vision community, these systems have yet to find
consistent success in surgical scenes due to challenges such as non-stationary
lighting and non-Lambertian surfaces. As a result, the need for labeled
surgical datasets continues to grow. In this work, we introduce a
differentiable rendering framework for material and lighting estimation from
endoscopic images and known geometry. Compared to previous approaches that
model lighting and material jointly as radiance, we explicitly disentangle
these scene properties for robust and photorealistic novel view synthesis. To
disambiguate the training process, we formulate domain-specific properties
inherent in surgical scenes. Specifically, we model the scene lighting as a
simple spotlight and material properties as a bidirectional reflectance
distribution function, parameterized by a neural network. By grounding color
predictions in the rendering equation, we can generate photorealistic images at
arbitrary camera poses. We evaluate our method with various sequences from the
Colonoscopy 3D Video Dataset and show that our method produces competitive
novel view synthesis results compared with other approaches. Furthermore, we
demonstrate that synthetic data can be used to develop 3D vision algorithms by
finetuning a depth estimation model with our rendered outputs. Overall, we see
that the depth estimation performance is on par with fine-tuning with the
original real images.
|
no_new_dataset
| 0.951953
|
2502.20676
|
Shanshan Wan
|
Shanshan Wan and Yingmei Wei and Lai Kang and Tianrui Shen and Haixuan
Wang and Yee-Hong Yang
|
SciceVPR: Stable Cross-Image Correlation Enhanced Model for Visual Place
Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual Place Recognition (VPR) is a major challenge for robotics and
autonomous systems, with the goal of predicting the location of an image based
solely on its visual features. State-of-the-art (SOTA) models extract global
descriptors using the powerful foundation model DINOv2 as backbone. These
models either explore the cross-image correlation or propose a time-consuming
two-stage re-ranking strategy to achieve better performance. However, existing
works only utilize the final output of DINOv2, and the current cross-image
correlation causes unstable retrieval results. To produce both discriminative
and constant global descriptors, this paper proposes a stable cross-image
correlation enhanced model for VPR called SciceVPR. This model explores the
full potential of DINOv2 in providing useful feature representations that
implicitly encode valuable contextual knowledge. Specifically, SciceVPR first
uses a multi-layer feature fusion module to capture increasingly detailed
task-relevant channel and spatial information from the multi-layer output of
DINOv2. Secondly, SciceVPR considers the invariant correlation between images
within a batch as valuable knowledge to be distilled into the proposed
self-enhanced encoder. In this way, SciceVPR can acquire fairly robust global
features regardless of domain shifts (e.g., changes in illumination, weather
and viewpoint between pictures taken in the same place). Experimental results
demonstrate that the base variant, SciceVPR-B, outperforms SOTA one-stage
methods with single input on multiple datasets with varying domain conditions.
The large variant, SciceVPR-L, performs on par with SOTA two-stage models,
scoring over 3% higher in Recall@1 compared to existing models on the
challenging Tokyo24/7 dataset. Our code will be released at
https://github.com/shuimushan/SciceVPR.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 03:05:30 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Wan",
"Shanshan",
""
],
[
"Wei",
"Yingmei",
""
],
[
"Kang",
"Lai",
""
],
[
"Shen",
"Tianrui",
""
],
[
"Wang",
"Haixuan",
""
],
[
"Yang",
"Yee-Hong",
""
]
] |
TITLE: SciceVPR: Stable Cross-Image Correlation Enhanced Model for Visual Place
Recognition
ABSTRACT: Visual Place Recognition (VPR) is a major challenge for robotics and
autonomous systems, with the goal of predicting the location of an image based
solely on its visual features. State-of-the-art (SOTA) models extract global
descriptors using the powerful foundation model DINOv2 as backbone. These
models either explore the cross-image correlation or propose a time-consuming
two-stage re-ranking strategy to achieve better performance. However, existing
works only utilize the final output of DINOv2, and the current cross-image
correlation causes unstable retrieval results. To produce both discriminative
and constant global descriptors, this paper proposes a stable cross-image
correlation enhanced model for VPR called SciceVPR. This model explores the
full potential of DINOv2 in providing useful feature representations that
implicitly encode valuable contextual knowledge. Specifically, SciceVPR first
uses a multi-layer feature fusion module to capture increasingly detailed
task-relevant channel and spatial information from the multi-layer output of
DINOv2. Secondly, SciceVPR considers the invariant correlation between images
within a batch as valuable knowledge to be distilled into the proposed
self-enhanced encoder. In this way, SciceVPR can acquire fairly robust global
features regardless of domain shifts (e.g., changes in illumination, weather
and viewpoint between pictures taken in the same place). Experimental results
demonstrate that the base variant, SciceVPR-B, outperforms SOTA one-stage
methods with single input on multiple datasets with varying domain conditions.
The large variant, SciceVPR-L, performs on par with SOTA two-stage models,
scoring over 3% higher in Recall@1 compared to existing models on the
challenging Tokyo24/7 dataset. Our code will be released at
https://github.com/shuimushan/SciceVPR.
|
no_new_dataset
| 0.945045
|
2502.20677
|
Youbing Hu
|
Youbing Hu, Yun Cheng, Zimu Zhou, Anqi Lu, Zhiqiang Cao, Zhijun Li
|
FoCTTA: Low-Memory Continual Test-Time Adaptation with Focus
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Continual adaptation to domain shifts at test time (CTTA) is crucial for
enhancing the intelligence of deep learning enabled IoT applications. However,
prevailing TTA methods, which typically update all batch normalization (BN)
layers, exhibit two memory inefficiencies. First, the reliance on BN layers for
adaptation necessitates large batch sizes, leading to high memory usage.
Second, updating all BN layers requires storing the activations of all BN
layers for backpropagation, exacerbating the memory demand. Both factors lead
to substantial memory costs, making existing solutions impractical for IoT
devices. In this paper, we present FoCTTA, a low-memory CTTA strategy. The key
is to automatically identify and adapt a few drift-sensitive representation
layers, rather than blindly update all BN layers. The shift from BN to
representation layers eliminates the need for large batch sizes. Also, by
updating adaptation-critical layers only, FoCTTA avoids storing excessive
activations. This focused adaptation approach ensures that FoCTTA is not only
memory-efficient but also maintains effective adaptation. Evaluations show that
FoCTTA improves the adaptation accuracy over state-of-the-art methods by 4.5%,
4.9%, and 14.8% on CIFAR10-C, CIFAR100-C, and ImageNet-C under the same memory
constraints. Across various batch sizes, FoCTTA reduces the memory usage by
3-fold on average, while improving the accuracy by 8.1%, 3.6%, and 0.2%,
respectively, on the three datasets.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 03:06:15 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Hu",
"Youbing",
""
],
[
"Cheng",
"Yun",
""
],
[
"Zhou",
"Zimu",
""
],
[
"Lu",
"Anqi",
""
],
[
"Cao",
"Zhiqiang",
""
],
[
"Li",
"Zhijun",
""
]
] |
TITLE: FoCTTA: Low-Memory Continual Test-Time Adaptation with Focus
ABSTRACT: Continual adaptation to domain shifts at test time (CTTA) is crucial for
enhancing the intelligence of deep learning enabled IoT applications. However,
prevailing TTA methods, which typically update all batch normalization (BN)
layers, exhibit two memory inefficiencies. First, the reliance on BN layers for
adaptation necessitates large batch sizes, leading to high memory usage.
Second, updating all BN layers requires storing the activations of all BN
layers for backpropagation, exacerbating the memory demand. Both factors lead
to substantial memory costs, making existing solutions impractical for IoT
devices. In this paper, we present FoCTTA, a low-memory CTTA strategy. The key
is to automatically identify and adapt a few drift-sensitive representation
layers, rather than blindly update all BN layers. The shift from BN to
representation layers eliminates the need for large batch sizes. Also, by
updating adaptation-critical layers only, FoCTTA avoids storing excessive
activations. This focused adaptation approach ensures that FoCTTA is not only
memory-efficient but also maintains effective adaptation. Evaluations show that
FoCTTA improves the adaptation accuracy over state-of-the-art methods by 4.5%,
4.9%, and 14.8% on CIFAR10-C, CIFAR100-C, and ImageNet-C under the same memory
constraints. Across various batch sizes, FoCTTA reduces the memory usage by
3-fold on average, while improving the accuracy by 8.1%, 3.6%, and 0.2%,
respectively, on the three datasets.
|
no_new_dataset
| 0.948537
|
2502.20681
|
Zixuan Gong
|
Zixuan Gong, Jiaye Teng, Yong Liu
|
Disentangling Feature Structure: A Mathematically Provable Two-Stage
Training Dynamics in Transformers
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers may exhibit two-stage training dynamics during the real-world
training process. For instance, when training GPT-2 on the Counterfact dataset,
the answers progress from syntactically incorrect to syntactically correct to
semantically correct. However, existing theoretical analyses hardly account for
this two-stage phenomenon. In this paper, we theoretically demonstrate how such
two-stage training dynamics occur in transformers. Specifically, we analyze the
dynamics of transformers using feature learning techniques under in-context
learning regimes, based on a disentangled two-type feature structure. Such
disentanglement of feature structure is general in practice, e.g., natural
languages contain syntax and semantics, and proteins contain primary and
secondary structures. To the best of our knowledge, this is the first rigorous result
regarding a two-stage optimization process in transformers. Additionally, a
corollary indicates that such a two-stage process is closely related to the
spectral properties of the attention weights, which accords well with empirical
findings.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 03:27:24 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Gong",
"Zixuan",
""
],
[
"Teng",
"Jiaye",
""
],
[
"Liu",
"Yong",
""
]
] |
TITLE: Disentangling Feature Structure: A Mathematically Provable Two-Stage
Training Dynamics in Transformers
ABSTRACT: Transformers may exhibit two-stage training dynamics during the real-world
training process. For instance, when training GPT-2 on the Counterfact dataset,
the answers progress from syntactically incorrect to syntactically correct to
semantically correct. However, existing theoretical analyses hardly account for
this two-stage phenomenon. In this paper, we theoretically demonstrate how such
two-stage training dynamics occur in transformers. Specifically, we analyze the
dynamics of transformers using feature learning techniques under in-context
learning regimes, based on a disentangled two-type feature structure. Such
disentanglement of feature structure is general in practice, e.g., natural
languages contain syntax and semantics, and proteins contain primary and
secondary structures. To the best of our knowledge, this is the first rigorous result
regarding a two-stage optimization process in transformers. Additionally, a
corollary indicates that such a two-stage process is closely related to the
spectral properties of the attention weights, which accords well with empirical
findings.
|
no_new_dataset
| 0.947478
|
2502.20682
|
Gibson Nkhata
|
Gibson Nkhata, Susan Gauch, Usman Anjum and Justin Zhan
|
Fine-tuning BERT with Bidirectional LSTM for Fine-grained Movie Reviews
Sentiment Analysis
|
14 pages, 5 figures, published in International Journal On Advances
in Systems and Measurements, volume 16, numbers 3 and 4, 2023
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Sentiment Analysis (SA) is instrumental in understanding people's viewpoints,
facilitating social media monitoring, recognizing products and brands, and
gauging customer satisfaction. Consequently, SA has evolved into an active
research domain within Natural Language Processing (NLP). Many approaches
outlined in the literature devise intricate frameworks aimed at achieving high
accuracy, focusing exclusively on either binary sentiment classification or
fine-grained sentiment classification. In this paper, our objective is to
fine-tune the pre-trained BERT model with Bidirectional LSTM (BiLSTM) to
enhance both binary and fine-grained SA, specifically for movie reviews. Our
approach involves conducting sentiment classification for each review, followed
by computing the overall sentiment polarity across all reviews. We present our
findings on binary classification as well as fine-grained classification
utilizing benchmark datasets. Additionally, we implement and assess two accuracy
improvement techniques, Synthetic Minority Oversampling Technique (SMOTE) and
NLP Augmenter (NLPAUG), to bolster the model's generalization in fine-grained
sentiment classification. Finally, a heuristic algorithm is employed to
calculate the overall polarity of predicted reviews from the BERT+BiLSTM output
vector. Our approach performs comparably with state-of-the-art (SOTA)
techniques in both classifications. For instance, in binary classification we
achieve 97.67% accuracy, surpassing the leading SOTA model
NB-weighted-BON+dv-cosine by 0.27% on the renowned IMDb dataset. Conversely, for
five-class classification on SST-5, while the top SOTA model
RoBERTa+large+Self-explaining attains 55.5% accuracy, our model achieves 59.48%
accuracy, surpassing the BERT-large baseline by 3.6%.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 03:30:48 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Nkhata",
"Gibson",
""
],
[
"Gauch",
"Susan",
""
],
[
"Anjum",
"Usman",
""
],
[
"Zhan",
"Justin",
""
]
] |
TITLE: Fine-tuning BERT with Bidirectional LSTM for Fine-grained Movie Reviews
Sentiment Analysis
ABSTRACT: Sentiment Analysis (SA) is instrumental in understanding people's viewpoints,
facilitating social media monitoring, recognizing products and brands, and
gauging customer satisfaction. Consequently, SA has evolved into an active
research domain within Natural Language Processing (NLP). Many approaches
outlined in the literature devise intricate frameworks aimed at achieving high
accuracy, focusing exclusively on either binary sentiment classification or
fine-grained sentiment classification. In this paper, our objective is to
fine-tune the pre-trained BERT model with Bidirectional LSTM (BiLSTM) to
enhance both binary and fine-grained SA, specifically for movie reviews. Our
approach involves conducting sentiment classification for each review, followed
by computing the overall sentiment polarity across all reviews. We present our
findings on binary classification as well as fine-grained classification
utilizing benchmark datasets. Additionally, we implement and assess two accuracy
improvement techniques, Synthetic Minority Oversampling Technique (SMOTE) and
NLP Augmenter (NLPAUG), to bolster the model's generalization in fine-grained
sentiment classification. Finally, a heuristic algorithm is employed to
calculate the overall polarity of predicted reviews from the BERT+BiLSTM output
vector. Our approach performs comparably with state-of-the-art (SOTA)
techniques in both classifications. For instance, in binary classification we
achieve 97.67% accuracy, surpassing the leading SOTA model
NB-weighted-BON+dv-cosine by 0.27% on the renowned IMDb dataset. Conversely, for
five-class classification on SST-5, while the top SOTA model
RoBERTa+large+Self-explaining attains 55.5% accuracy, our model achieves 59.48%
accuracy, surpassing the BERT-large baseline by 3.6%.
|
no_new_dataset
| 0.953794
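The BERT+BiLSTM pipeline described in the abstract above can be pictured with a
short sketch. The following is a minimal, hypothetical PyTorch illustration of
stacking a bidirectional LSTM on top of BERT token embeddings for sentence-level
sentiment classification; the layer sizes, the checkpoint name, and the
`num_classes` value (2 for IMDb, 5 for SST-5) are illustrative assumptions, not
the authors' exact configuration.

```python
# Minimal sketch (not the authors' code): BERT encoder + BiLSTM head for sentiment classification.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLSTMClassifier(nn.Module):
    def __init__(self, num_classes: int = 5, lstm_hidden: int = 256):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.bilstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # Token-level contextual embeddings from BERT.
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        # BiLSTM re-encodes the token sequence; concatenate final forward/backward states.
        _, (h_n, _) = self.bilstm(hidden)
        sentence_repr = torch.cat([h_n[-2], h_n[-1]], dim=-1)
        return self.classifier(sentence_repr)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertBiLSTMClassifier(num_classes=5)
batch = tokenizer(["A quietly devastating, beautifully acted film."],
                  return_tensors="pt", padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # shape: (1, 5)
```

The per-review logits produced this way would then feed whatever aggregation
rule computes the overall polarity across all reviews; the heuristic mentioned
in the abstract is not reproduced here.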
|
2502.20684
|
Abhishek Kumar Umrawal
|
Yingbing Huang, Deming Chen, and Abhishek K. Umrawal
|
JAM: Controllable and Responsible Text Generation via Causal Reasoning
and Latent Vector Manipulation
|
10 pages, 3 figures, and 6 tables
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While large language models (LLMs) have made significant strides in
generating coherent and contextually relevant text, they often function as
opaque black boxes, trained on vast unlabeled datasets with statistical
objectives, lacking an interpretable framework for responsible control. In this
paper, we introduce JAM (Just A Move), a novel framework that interprets and
controls text generation by integrating cause-effect analysis within the latent
space of LLMs. Based on our observations, we uncover the inherent causality in
LLM generation, which is critical for producing responsible and realistic
outputs. Moreover, we explore latent vectors as fundamental components in LLM
architectures, aiming to understand and manipulate them for more effective and
efficient controllable text generation. We evaluate our framework using a range
of tools, including the HHH criteria, toxicity reduction benchmarks, and GPT-4
alignment measures. Our results show that JAM achieves up to a 22% improvement
over previous Controllable Text Generation (CTG) methods across multiple
quantitative metrics and human-centric evaluations. Furthermore, JAM
demonstrates greater computational efficiency compared to other CTG methods.
These results highlight the effectiveness and efficiency of JAM for responsible
and realistic text generation, paving the way for more interpretable and
controllable models.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 03:31:48 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Huang",
"Yingbing",
""
],
[
"Chen",
"Deming",
""
],
[
"Umrawal",
"Abhishek K.",
""
]
] |
TITLE: JAM: Controllable and Responsible Text Generation via Causal Reasoning
and Latent Vector Manipulation
ABSTRACT: While large language models (LLMs) have made significant strides in
generating coherent and contextually relevant text, they often function as
opaque black boxes, trained on vast unlabeled datasets with statistical
objectives, lacking an interpretable framework for responsible control. In this
paper, we introduce JAM (Just A Move), a novel framework that interprets and
controls text generation by integrating cause-effect analysis within the latent
space of LLMs. Based on our observations, we uncover the inherent causality in
LLM generation, which is critical for producing responsible and realistic
outputs. Moreover, we explore latent vectors as fundamental components in LLM
architectures, aiming to understand and manipulate them for more effective and
efficient controllable text generation. We evaluate our framework using a range
of tools, including the HHH criteria, toxicity reduction benchmarks, and GPT-4
alignment measures. Our results show that JAM achieves up to a 22% improvement
over previous Controllable Text Generation (CTG) methods across multiple
quantitative metrics and human-centric evaluations. Furthermore, JAM
demonstrates greater computational efficiency compared to other CTG methods.
These results highlight the effectiveness and efficiency of JAM for responsible
and realistic text generation, paving the way for more interpretable and
controllable models.
|
no_new_dataset
| 0.945701
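As a rough illustration of the "latent vector manipulation" idea in the JAM
abstract above, the hypothetical sketch below nudges the hidden states of one
GPT-2 layer along a fixed direction during generation via a PyTorch forward
hook. The steering direction is a random placeholder, and the layer index and
scale are arbitrary assumptions; JAM's causal analysis and its actual control
directions are not reproduced.

```python
# Assumption-laden sketch: steer generation by shifting one layer's hidden states.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

layer_idx, alpha = 6, 4.0                      # arbitrary choices for illustration
direction = torch.randn(model.config.hidden_size)  # placeholder steering vector
direction = direction / direction.norm()

def steer(module, inputs, output):
    # A GPT-2 block returns a tuple whose first element is the hidden states.
    hidden = output[0] + alpha * direction
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(steer)
ids = tok("The new policy is", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20, do_sample=False)
handle.remove()
print(tok.decode(out[0], skip_special_tokens=True))
```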
|
2502.20685
|
Dongki Jung
|
Dongki Jung, Jaehoon Choi, Yonghan Lee, Somi Jeong, Taejae Lee, Dinesh
Manocha, Suyong Yeon
|
EDM: Equirectangular Projection-Oriented Dense Kernelized Feature
Matching
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the first learning-based dense matching algorithm, termed
Equirectangular Projection-Oriented Dense Kernelized Feature Matching (EDM),
specifically designed for omnidirectional images. Equirectangular projection
(ERP) images, with their large fields of view, are particularly suited for
dense matching techniques that aim to establish comprehensive correspondences
across images. However, ERP images are subject to significant distortions,
which we address by leveraging the spherical camera model and geodesic flow
refinement in the dense matching method. To further mitigate these distortions,
we propose spherical positional embeddings based on 3D Cartesian coordinates of
the feature grid. Additionally, our method incorporates bidirectional
transformations between spherical and Cartesian coordinate systems during
refinement, utilizing a unit sphere to improve matching performance. We
demonstrate that our proposed method achieves notable performance enhancements,
with improvements of +26.72 and +42.62 in AUC@5{\deg} on the Matterport3D and
Stanford2D3D datasets.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 03:37:01 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Jung",
"Dongki",
""
],
[
"Choi",
"Jaehoon",
""
],
[
"Lee",
"Yonghan",
""
],
[
"Jeong",
"Somi",
""
],
[
"Lee",
"Taejae",
""
],
[
"Manocha",
"Dinesh",
""
],
[
"Yeon",
"Suyong",
""
]
] |
TITLE: EDM: Equirectangular Projection-Oriented Dense Kernelized Feature
Matching
ABSTRACT: We introduce the first learning-based dense matching algorithm, termed
Equirectangular Projection-Oriented Dense Kernelized Feature Matching (EDM),
specifically designed for omnidirectional images. Equirectangular projection
(ERP) images, with their large fields of view, are particularly suited for
dense matching techniques that aim to establish comprehensive correspondences
across images. However, ERP images are subject to significant distortions,
which we address by leveraging the spherical camera model and geodesic flow
refinement in the dense matching method. To further mitigate these distortions,
we propose spherical positional embeddings based on 3D Cartesian coordinates of
the feature grid. Additionally, our method incorporates bidirectional
transformations between spherical and Cartesian coordinate systems during
refinement, utilizing a unit sphere to improve matching performance. We
demonstrate that our proposed method achieves notable performance enhancements,
with improvements of +26.72 and +42.62 in AUC@5{\deg} on the Matterport3D and
Stanford2D3D datasets.
|
no_new_dataset
| 0.951459
|
2502.20687
|
Yihan Wang
|
Yihan Wang, Fei Xiong, Zhexin Han, Qi Song, Kaiqiao Zhan, Ben Wang
|
Unleashing the Potential of Two-Tower Models: Diffusion-Based
Cross-Interaction for Large-Scale Matching
| null | null | null | null |
cs.IR cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Two-tower models are widely adopted in the industrial-scale matching stage
across a broad range of application domains, such as content recommendations,
advertisement systems, and search engines. This model efficiently handles
large-scale candidate item screening by separating user and item
representations. However, the decoupling network also leads to a neglect of
potential information interaction between the user and item representations.
Current state-of-the-art (SOTA) approaches include adding a shallow fully
connected layer (i.e., COLD), which is limited by performance and can only be
used in the ranking stage. For performance considerations, another approach
attempts to capture historical positive interaction information from the other
tower by regarding it as an input feature (i.e., DAT). Later research showed
that the gains achieved by this method are still limited because it lacks
guidance on the next user intent. To address the aforementioned challenges, we
propose a "cross-interaction decoupling architecture" within our matching
paradigm. This user-tower architecture leverages a diffusion module to
reconstruct the next positive intention representation and employs a
mixed-attention module to facilitate comprehensive cross-interaction. During
the next positive intention generation, we further enhance the accuracy of its
reconstruction by explicitly extracting the temporal drift within user behavior
sequences. Experiments on two real-world datasets and one industrial dataset
demonstrate that our method outperforms the SOTA two-tower models
significantly, and our diffusion approach outperforms other generative models
in reconstructing item representations.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 03:40:37 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Wang",
"Yihan",
""
],
[
"Xiong",
"Fei",
""
],
[
"Han",
"Zhexin",
""
],
[
"Song",
"Qi",
""
],
[
"Zhan",
"Kaiqiao",
""
],
[
"Wang",
"Ben",
""
]
] |
TITLE: Unleashing the Potential of Two-Tower Models: Diffusion-Based
Cross-Interaction for Large-Scale Matching
ABSTRACT: Two-tower models are widely adopted in the industrial-scale matching stage
across a broad range of application domains, such as content recommendations,
advertisement systems, and search engines. This model efficiently handles
large-scale candidate item screening by separating user and item
representations. However, the decoupling network also leads to a neglect of
potential information interaction between the user and item representations.
Current state-of-the-art (SOTA) approaches include adding a shallow fully
connected layer (i.e., COLD), which is limited by performance and can only be
used in the ranking stage. For performance considerations, another approach
attempts to capture historical positive interaction information from the other
tower by regarding it as an input feature (i.e., DAT). Later research showed
that the gains achieved by this method are still limited because it lacks
guidance on the next user intent. To address the aforementioned challenges, we
propose a "cross-interaction decoupling architecture" within our matching
paradigm. This user-tower architecture leverages a diffusion module to
reconstruct the next positive intention representation and employs a
mixed-attention module to facilitate comprehensive cross-interaction. During
the next positive intention generation, we further enhance the accuracy of its
reconstruction by explicitly extracting the temporal drift within user behavior
sequences. Experiments on two real-world datasets and one industrial dataset
demonstrate that our method outperforms the SOTA two-tower models
significantly, and our diffusion approach outperforms other generative models
in reconstructing item representations.
|
no_new_dataset
| 0.948251
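To make the two-tower matching setup in the abstract above concrete, here is a
minimal, generic PyTorch sketch of decoupled user and item towers scored by a
dot product. It illustrates only the baseline architecture the paper extends;
the diffusion and mixed-attention modules are not sketched, and all dimensions
and ID counts are illustrative assumptions.

```python
# Minimal sketch of a plain two-tower retrieval model (the baseline the paper builds on).
import torch
import torch.nn as nn

class Tower(nn.Module):
    def __init__(self, num_ids: int, embed_dim: int = 64, out_dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(num_ids, embed_dim)
        self.mlp = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, ids):
        return self.mlp(self.embed(ids))

class TwoTower(nn.Module):
    def __init__(self, num_users: int, num_items: int):
        super().__init__()
        self.user_tower = Tower(num_users)
        self.item_tower = Tower(num_items)

    def forward(self, user_ids, item_ids):
        u = self.user_tower(user_ids)   # (batch, d)
        v = self.item_tower(item_ids)   # (num_items, d), precomputable offline
        return u @ v.T                  # matching scores for every candidate item

model = TwoTower(num_users=1000, num_items=5000)
scores = model(torch.tensor([3, 7]), torch.arange(5000))  # (2, 5000)
topk = scores.topk(k=10, dim=-1).indices                  # top-10 candidates per user
```

The decoupling that makes this efficient (item embeddings computed offline) is
exactly what removes user-item interaction, which is the gap the proposed
cross-interaction decoupling architecture addresses.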
|
2502.20695
|
Yiping Sun
|
Yang Shi, Yiping Sun, Jiaolong Du, Xiaocheng Zhong, Zhiyong Wang, Yao
Hu
|
Scalable Overload-Aware Graph-Based Index Construction for
10-Billion-Scale Vector Similarity Search
|
Accepted by WWW'25
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Approximate Nearest Neighbor Search (ANNS) is essential for modern
data-driven applications that require efficient retrieval of top-k results from
massive vector databases. Although existing graph-based ANNS algorithms achieve
a high recall rate on billion-scale datasets, their slow construction speed and
limited scalability hinder their applicability to large-scale industrial
scenarios. In this paper, we introduce SOGAIC, the first Scalable
Overload-Aware Graph-Based ANNS Index Construction system tailored for
ultra-large-scale vector databases: 1) We propose a dynamic data partitioning
algorithm with overload constraints that adaptively introduces overlaps among
subsets; 2) To enable efficient distributed subgraph construction, we employ a
load-balancing task scheduling framework combined with an agglomerative merging
strategy; 3) Extensive experiments on various datasets demonstrate a reduction
of 47.3% in average construction time compared to existing methods. The
proposed method has also been successfully deployed in a real-world industrial
search engine, managing over 10 billion daily updated vectors and serving
hundreds of millions of users.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 04:03:23 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Shi",
"Yang",
""
],
[
"Sun",
"Yiping",
""
],
[
"Du",
"Jiaolong",
""
],
[
"Zhong",
"Xiaocheng",
""
],
[
"Wang",
"Zhiyong",
""
],
[
"Hu",
"Yao",
""
]
] |
TITLE: Scalable Overload-Aware Graph-Based Index Construction for
10-Billion-Scale Vector Similarity Search
ABSTRACT: Approximate Nearest Neighbor Search (ANNS) is essential for modern
data-driven applications that require efficient retrieval of top-k results from
massive vector databases. Although existing graph-based ANNS algorithms achieve
a high recall rate on billion-scale datasets, their slow construction speed and
limited scalability hinder their applicability to large-scale industrial
scenarios. In this paper, we introduce SOGAIC, the first Scalable
Overload-Aware Graph-Based ANNS Index Construction system tailored for
ultra-large-scale vector databases: 1) We propose a dynamic data partitioning
algorithm with overload constraints that adaptively introduces overlaps among
subsets; 2) To enable efficient distributed subgraph construction, we employ a
load-balancing task scheduling framework combined with an agglomerative merging
strategy; 3) Extensive experiments on various datasets demonstrate a reduction
of 47.3% in average construction time compared to existing methods. The
proposed method has also been successfully deployed in a real-world industrial
search engine, managing over 10 billion daily updated vectors and serving
hundreds of millions of users.
|
no_new_dataset
| 0.946399
|
2502.20708
|
Saminul Haque
|
John Duchi, Saminul Haque, Rohith Kuditipudi
|
A fast and slightly robust covariance estimator
|
39 pages
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Let $\mathcal{Z} = \{Z_1, \dots, Z_n\} \stackrel{\mathrm{i.i.d.}}{\sim} P
\subset \mathbb{R}^d$ from a distribution $P$ with mean zero and covariance
$\Sigma$. Given a dataset $\mathcal{X}$ such that
$d_{\mathrm{ham}}(\mathcal{X}, \mathcal{Z}) \leq \varepsilon n$, we are
interested in finding an efficient estimator $\widehat{\Sigma}$ that achieves
$\mathrm{err}(\widehat{\Sigma}, \Sigma) :=
\|\Sigma^{-\frac{1}{2}}\widehat{\Sigma}\Sigma^{-\frac{1}{2}} - I\|
_{\mathrm{op}} \leq 1/2$. We focus on the low contamination regime $\varepsilon
= o(1/\sqrt{d})$. In this regime, prior work required either $\Omega(d^{3/2})$
samples or runtime that is exponential in $d$. We present an algorithm that,
for subgaussian data, has near-linear sample complexity $n =
\widetilde{\Omega}(d)$ and runtime $O((n+d)^{\omega + \frac{1}{2}})$, where
$\omega$ is the matrix multiplication exponent. We also show that this
algorithm works for heavy-tailed data with near-linear sample complexity, but
in a smaller regime of $\varepsilon$. Concurrent to our work, Diakonikolas et
al. [2024] give Sum-of-Squares estimators that achieve similar sample
complexity but with large polynomial runtime.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 04:35:23 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Duchi",
"John",
""
],
[
"Haque",
"Saminul",
""
],
[
"Kuditipudi",
"Rohith",
""
]
] |
TITLE: A fast and slightly robust covariance estimator
ABSTRACT: Let $\mathcal{Z} = \{Z_1, \dots, Z_n\} \stackrel{\mathrm{i.i.d.}}{\sim} P
\subset \mathbb{R}^d$ from a distribution $P$ with mean zero and covariance
$\Sigma$. Given a dataset $\mathcal{X}$ such that
$d_{\mathrm{ham}}(\mathcal{X}, \mathcal{Z}) \leq \varepsilon n$, we are
interested in finding an efficient estimator $\widehat{\Sigma}$ that achieves
$\mathrm{err}(\widehat{\Sigma}, \Sigma) :=
\|\Sigma^{-\frac{1}{2}}\widehat{\Sigma}\Sigma^{-\frac{1}{2}} - I\|
_{\mathrm{op}} \leq 1/2$. We focus on the low contamination regime $\varepsilon
= o(1/\sqrt{d})$. In this regime, prior work required either $\Omega(d^{3/2})$
samples or runtime that is exponential in $d$. We present an algorithm that,
for subgaussian data, has near-linear sample complexity $n =
\widetilde{\Omega}(d)$ and runtime $O((n+d)^{\omega + \frac{1}{2}})$, where
$\omega$ is the matrix multiplication exponent. We also show that this
algorithm works for heavy-tailed data with near-linear sample complexity, but
in a smaller regime of $\varepsilon$. Concurrent to our work, Diakonikolas et
al. [2024] give Sum-of-Squares estimators that achieve similar sample
complexity but with large polynomial runtime.
|
no_new_dataset
| 0.934395
|
2502.20709
|
Zhengyi Zhong
|
Zhengyi Zhong, Weidong Bao, Ji Wang, Shuai Zhang, Jingxuan Zhou,
Lingjuan Lyu, Wei Yang Bryan Lim
|
Unlearning through Knowledge Overwriting: Reversible Federated
Unlearning via Selective Sparse Adapter
|
Accepted by CVPR2025
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated Learning is a promising paradigm for privacy-preserving
collaborative model training. In practice, it is essential not only to
continuously train the model to acquire new knowledge but also to guarantee old
knowledge the right to be forgotten (i.e., federated unlearning), especially
for privacy-sensitive information or harmful knowledge. However, current
federated unlearning methods face several challenges, including indiscriminate
unlearning of cross-client knowledge, irreversibility of unlearning, and
significant unlearning costs. To this end, we propose a method named FUSED,
which first identifies critical layers by analyzing each layer's sensitivity to
knowledge and constructs sparse unlearning adapters for sensitive ones. Then,
the adapters are trained without altering the original parameters, overwriting
the unlearning knowledge with the remaining knowledge. This knowledge
overwriting process enables FUSED to mitigate the effects of indiscriminate
unlearning. Moreover, the introduction of independent adapters makes unlearning
reversible and significantly reduces the unlearning costs. Finally, extensive
experiments on three datasets across various unlearning scenarios demonstrate
that FUSED's effectiveness is comparable to Retraining, surpassing all other
baselines while greatly reducing unlearning costs.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 04:35:26 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Zhong",
"Zhengyi",
""
],
[
"Bao",
"Weidong",
""
],
[
"Wang",
"Ji",
""
],
[
"Zhang",
"Shuai",
""
],
[
"Zhou",
"Jingxuan",
""
],
[
"Lyu",
"Lingjuan",
""
],
[
"Lim",
"Wei Yang Bryan",
""
]
] |
TITLE: Unlearning through Knowledge Overwriting: Reversible Federated
Unlearning via Selective Sparse Adapter
ABSTRACT: Federated Learning is a promising paradigm for privacy-preserving
collaborative model training. In practice, it is essential not only to
continuously train the model to acquire new knowledge but also to guarantee old
knowledge the right to be forgotten (i.e., federated unlearning), especially
for privacy-sensitive information or harmful knowledge. However, current
federated unlearning methods face several challenges, including indiscriminate
unlearning of cross-client knowledge, irreversibility of unlearning, and
significant unlearning costs. To this end, we propose a method named FUSED,
which first identifies critical layers by analyzing each layer's sensitivity to
knowledge and constructs sparse unlearning adapters for sensitive ones. Then,
the adapters are trained without altering the original parameters, overwriting
the unlearning knowledge with the remaining knowledge. This knowledge
overwriting process enables FUSED to mitigate the effects of indiscriminate
unlearning. Moreover, the introduction of independent adapters makes unlearning
reversible and significantly reduces the unlearning costs. Finally, extensive
experiments on three datasets across various unlearning scenarios demonstrate
that FUSED's effectiveness is comparable to Retraining, surpassing all other
baselines while greatly reducing unlearning costs.
|
no_new_dataset
| 0.948965
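The "knowledge overwriting via sparse adapters" idea in the abstract above can
be pictured with a small sketch: a frozen base linear layer plus a trainable,
sparsely masked residual delta whose updates overwrite part of the layer's
behaviour while the original weights stay intact. This is a speculative
illustration of the general mechanism, not FUSED's layer-selection or training
procedure; the mask density is an arbitrary assumption.

```python
# Speculative sketch: frozen base layer + trainable sparse adapter added on top.
import torch
import torch.nn as nn

class SparseAdapterLinear(nn.Module):
    def __init__(self, base: nn.Linear, density: float = 0.05):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # original knowledge stays untouched
            p.requires_grad_(False)
        self.delta = nn.Parameter(torch.zeros_like(base.weight))
        # Fixed random sparse mask: only a small fraction of weights can be rewritten.
        self.register_buffer("mask", (torch.rand_like(base.weight) < density).float())

    def forward(self, x):
        adapted_weight = self.base.weight + self.delta * self.mask
        return nn.functional.linear(x, adapted_weight, self.base.bias)

layer = SparseAdapterLinear(nn.Linear(128, 128))
out = layer(torch.randn(4, 128))
# Only `layer.delta` receives gradients during unlearning; zeroing or removing the
# adapter restores the original layer, mirroring the reversibility property.
```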
|
2502.20715
|
Kiranmayee Janardhan
|
Kiranmayee Janardhan, Christy Bobby Thomas
|
Glioma Classification using Multi-sequence MRI and Novel Wavelets-based
Feature Fusion
|
18 pages, 11 figures, 6 tables, journal paper
|
Journal of Computational Analysis and Applications, 2024
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Glioma, a prevalent and heterogeneous tumor originating from the glial cells,
can be differentiated as Low Grade Glioma (LGG) and High Grade Glioma (HGG)
according to World Health Organization's norms. Classifying gliomas is
essential for treatment protocols that depend extensively on subtype
differentiation. For non-invasive glioma evaluation, Magnetic Resonance Imaging
(MRI) offers vital information about the morphology and location of the
tumor. The versatility of MRI allows the classification of gliomas as LGG and
HGG based on their texture, perfusion, and diffusion characteristics, and
further for improving the diagnosis and providing tailored treatments.
Nevertheless, the precise classification is complicated by tumor heterogeneity
and overlapping radiomic characteristics. Thus, in this work, a novel
wavelet-based fusion algorithm was implemented on multi-sequence T1,
T1-contrast enhanced (T1CE), T2, and Fluid Attenuated Inversion Recovery
(FLAIR) MRI images to compute the radiomics features. Furthermore, principal
component analysis is applied to reduce the feature space, and XGBoost, Support
Vector Machine (SVM), and Random Forest classifiers are used for the
classification. The results show that the SVM algorithm performs comparatively
well, with an accuracy of 90.17%, precision of 91.04%, recall of 96.19%,
F1-score of 93.53%, and AUC of 94.60% when implemented on the BraTS 2018
dataset, and with an accuracy of 91.34%, precision of 93.05%, recall of 96.13%,
F1-score of 94.53%, and AUC of 93.71% for the BraTS 2018 dataset. Thus, the
proposed algorithm could potentially be implemented in a computer-aided
diagnosis and grading system for gliomas.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 04:58:41 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Janardhan",
"Kiranmayee",
""
],
[
"Thomas",
"Christy Bobby",
""
]
] |
TITLE: Glioma Classification using Multi-sequence MRI and Novel Wavelets-based
Feature Fusion
ABSTRACT: Glioma, a prevalent and heterogeneous tumor originating from the glial cells,
can be differentiated as Low Grade Glioma (LGG) and High Grade Glioma (HGG)
according to World Health Organization's norms. Classifying gliomas is
essential for treatment protocols that depend extensively on subtype
differentiation. For non-invasive glioma evaluation, Magnetic Resonance Imaging
(MRI) offers vital information about the morphology and location of the
tumor. The versatility of MRI allows the classification of gliomas as LGG and
HGG based on their texture, perfusion, and diffusion characteristics, and
further for improving the diagnosis and providing tailored treatments.
Nevertheless, the precise classification is complicated by tumor heterogeneity
and overlapping radiomic characteristics. Thus, in this work, a novel
wavelet-based fusion algorithm was implemented on multi-sequence T1,
T1-contrast enhanced (T1CE), T2, and Fluid Attenuated Inversion Recovery
(FLAIR) MRI images to compute the radiomics features. Furthermore, principal
component analysis is applied to reduce the feature space, and XGBoost, Support
Vector Machine (SVM), and Random Forest classifiers are used for the
classification. The results show that the SVM algorithm performs comparatively
well, with an accuracy of 90.17%, precision of 91.04%, recall of 96.19%,
F1-score of 93.53%, and AUC of 94.60% when implemented on the BraTS 2018
dataset, and with an accuracy of 91.34%, precision of 93.05%, recall of 96.13%,
F1-score of 94.53%, and AUC of 93.71% for the BraTS 2018 dataset. Thus, the
proposed algorithm could potentially be implemented in a computer-aided
diagnosis and grading system for gliomas.
|
no_new_dataset
| 0.956472
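A compact sketch of the kind of pipeline the abstract above describes:
wavelet-domain fusion of two MRI sequences, PCA for dimensionality reduction,
and an SVM classifier. The fusion rule (averaging approximation coefficients,
max-selecting details), the wavelet choice, and the synthetic data are all
assumptions for illustration and differ from the paper's actual feature set.

```python
# Illustrative sketch: wavelet-based fusion of two MRI slices, then PCA + SVM.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def fuse_slices(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Fuse two co-registered slices in the wavelet domain (toy fusion rule)."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a, "db2")
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b, "db2")
    fused = (
        (cA1 + cA2) / 2,                                            # average approximations
        (np.maximum(cH1, cH2), np.maximum(cV1, cV2), np.maximum(cD1, cD2)),
    )
    return pywt.idwt2(fused, "db2")

rng = np.random.default_rng(0)
# Synthetic stand-ins for (T1, T2) slice pairs and LGG/HGG labels.
X = np.stack([fuse_slices(rng.random((64, 64)), rng.random((64, 64))).ravel()
              for _ in range(40)])
y = rng.integers(0, 2, size=40)

clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```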
|
2502.20719
|
Guanglin Zhou
|
Guanglin Zhou and Sebastiano Barbieri
|
Generating Clinically Realistic EHR Data via a Hierarchy- and
Semantics-Guided Transformer
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Generating realistic synthetic electronic health records (EHRs) holds
tremendous promise for accelerating healthcare research, facilitating AI model
development and enhancing patient privacy. However, existing generative methods
typically treat EHRs as flat sequences of discrete medical codes. This approach
overlooks two critical aspects: the inherent hierarchical organization of
clinical coding systems and the rich semantic context provided by code
descriptions. Consequently, synthetic patient sequences often lack high
clinical fidelity and have limited utility in downstream clinical tasks. In
this paper, we propose the Hierarchy- and Semantics-Guided Transformer (HiSGT),
a novel framework that leverages both hierarchical and semantic information for
the generative process. HiSGT constructs a hierarchical graph to encode
parent-child and sibling relationships among clinical codes and employs a graph
neural network to derive hierarchy-aware embeddings. These are then fused with
semantic embeddings extracted from a pre-trained clinical language model (e.g.,
ClinicalBERT), enabling the Transformer-based generator to more accurately
model the nuanced clinical patterns inherent in real EHRs. Extensive
experiments on the MIMIC-III and MIMIC-IV datasets demonstrate that HiSGT
significantly improves the statistical alignment of synthetic data with real
patient records, as well as supports robust downstream applications such as
chronic disease classification. By addressing the limitations of conventional
raw code-based generative models, HiSGT represents a significant step toward
clinically high-fidelity synthetic data generation and a general paradigm
suitable for interpretable medical code representation, offering valuable
applications in data augmentation and privacy-preserving healthcare analytics.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 05:06:04 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Zhou",
"Guanglin",
""
],
[
"Barbieri",
"Sebastiano",
""
]
] |
TITLE: Generating Clinically Realistic EHR Data via a Hierarchy- and
Semantics-Guided Transformer
ABSTRACT: Generating realistic synthetic electronic health records (EHRs) holds
tremendous promise for accelerating healthcare research, facilitating AI model
development and enhancing patient privacy. However, existing generative methods
typically treat EHRs as flat sequences of discrete medical codes. This approach
overlooks two critical aspects: the inherent hierarchical organization of
clinical coding systems and the rich semantic context provided by code
descriptions. Consequently, synthetic patient sequences often lack high
clinical fidelity and have limited utility in downstream clinical tasks. In
this paper, we propose the Hierarchy- and Semantics-Guided Transformer (HiSGT),
a novel framework that leverages both hierarchical and semantic information for
the generative process. HiSGT constructs a hierarchical graph to encode
parent-child and sibling relationships among clinical codes and employs a graph
neural network to derive hierarchy-aware embeddings. These are then fused with
semantic embeddings extracted from a pre-trained clinical language model (e.g.,
ClinicalBERT), enabling the Transformer-based generator to more accurately
model the nuanced clinical patterns inherent in real EHRs. Extensive
experiments on the MIMIC-III and MIMIC-IV datasets demonstrate that HiSGT
significantly improves the statistical alignment of synthetic data with real
patient records, as well as supports robust downstream applications such as
chronic disease classification. By addressing the limitations of conventional
raw code-based generative models, HiSGT represents a significant step toward
clinically high-fidelity synthetic data generation and a general paradigm
suitable for interpretable medical code representation, offering valuable
applications in data augmentation and privacy-preserving healthcare analytics.
|
no_new_dataset
| 0.948585
|
2502.20729
|
Mostafa Rahimi Azghadi
|
Ben Walters, Yeshwanth Bethi, Taylor Kergan, Binh Nguyen, Amirali
Amirsoleimani, Jason K. Eshraghian, Saeed Afshar, Mostafa Rahimi Azghadi
|
NeuroMorse: A Temporally Structured Dataset For Neuromorphic Computing
| null | null | null | null |
cs.NE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Neuromorphic engineering aims to advance computing by mimicking the brain's
efficient processing, where data is encoded as asynchronous temporal events.
This eliminates the need for a synchronisation clock and minimises power
consumption when no data is present. However, many benchmarks for neuromorphic
algorithms primarily focus on spatial features, neglecting the temporal
dynamics that are inherent to most sequence-based tasks. This gap may lead to
evaluations that fail to fully capture the unique strengths and characteristics
of neuromorphic systems. In this paper, we present NeuroMorse, a temporally
structured dataset designed for benchmarking neuromorphic learning systems.
NeuroMorse converts the top 50 words in the English language into temporal
Morse code spike sequences. Despite using only two input spike channels for
Morse dots and dashes, complex information is encoded through temporal patterns
in the data. The proposed benchmark contains feature hierarchy at multiple
temporal scales that test the capacity of neuromorphic algorithms to decompose
input patterns into spatial and temporal hierarchies. We demonstrate that our
training set is challenging to categorise using a linear classifier and that
identifying keywords in the test set is difficult using conventional methods.
The NeuroMorse dataset is available at Zenodo, with our accompanying code on
GitHub at https://github.com/Ben-E-Walters/NeuroMorse.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 05:22:45 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Walters",
"Ben",
""
],
[
"Bethi",
"Yeshwanth",
""
],
[
"Kergan",
"Taylor",
""
],
[
"Nguyen",
"Binh",
""
],
[
"Amirsoleimani",
"Amirali",
""
],
[
"Eshraghian",
"Jason K.",
""
],
[
"Afshar",
"Saeed",
""
],
[
"Azghadi",
"Mostafa Rahimi",
""
]
] |
TITLE: NeuroMorse: A Temporally Structured Dataset For Neuromorphic Computing
ABSTRACT: Neuromorphic engineering aims to advance computing by mimicking the brain's
efficient processing, where data is encoded as asynchronous temporal events.
This eliminates the need for a synchronisation clock and minimises power
consumption when no data is present. However, many benchmarks for neuromorphic
algorithms primarily focus on spatial features, neglecting the temporal
dynamics that are inherent to most sequence-based tasks. This gap may lead to
evaluations that fail to fully capture the unique strengths and characteristics
of neuromorphic systems. In this paper, we present NeuroMorse, a temporally
structured dataset designed for benchmarking neuromorphic learning systems.
NeuroMorse converts the top 50 words in the English language into temporal
Morse code spike sequences. Despite using only two input spike channels for
Morse dots and dashes, complex information is encoded through temporal patterns
in the data. The proposed benchmark contains feature hierarchy at multiple
temporal scales that test the capacity of neuromorphic algorithms to decompose
input patterns into spatial and temporal hierarchies. We demonstrate that our
training set is challenging to categorise using a linear classifier and that
identifying keywords in the test set is difficult using conventional methods.
The NeuroMorse dataset is available at Zenodo, with our accompanying code on
GitHub at https://github.com/Ben-E-Walters/NeuroMorse.
|
new_dataset
| 0.957755
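The Morse-to-spike encoding described in the NeuroMorse abstract above is easy
to sketch: each word becomes event times on two channels, one for dots and one
for dashes. The timing constants and the tiny Morse table below are
illustrative assumptions, not the dataset's exact encoding.

```python
# Illustrative sketch: encode a word as Morse-timed events on a dot channel and a dash channel.
MORSE = {"t": "-", "h": "....", "e": "."}   # tiny table covering the example word only

def word_to_spikes(word, dot_ms=10.0, dash_ms=30.0, gap_ms=10.0, letter_gap_ms=30.0):
    dot_times, dash_times, t = [], [], 0.0
    for letter in word:
        for symbol in MORSE[letter]:
            if symbol == ".":
                dot_times.append(t)
                t += dot_ms + gap_ms
            else:
                dash_times.append(t)
                t += dash_ms + gap_ms
        t += letter_gap_ms                  # extra silence between letters
    return dot_times, dash_times

dots, dashes = word_to_spikes("the")
print("dot-channel event times (ms): ", dots)
print("dash-channel event times (ms):", dashes)
```

Despite using only two channels, the information that distinguishes words lives
entirely in the temporal pattern, which is what makes such data a useful probe
of temporal processing in neuromorphic systems.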
|
2502.20731
|
Sean Kouma
|
Sean Kouma, Rachel Masters
|
Indoor Localization for Autonomous Robot Navigation
|
10 pages, 6 figures
| null | null | null |
cs.RO cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Indoor positioning systems (IPSs) have gained attention as outdoor navigation
becomes prevalent in everyday life. Research is being actively conducted on how
indoor smartphone navigation can be accomplished and improved using received
signal strength indication (RSSI) and machine learning (ML). IPSs have more use
cases that need further exploration, and we aim to explore using IPSs for the
indoor navigation of an autonomous robot. We collected a dataset and trained
models to test on a robot. We also developed an A* path-planning algorithm so
that our robot could navigate itself using predicted directions. After testing
different network structures, our robot was able to successfully navigate
corners around 50 percent of the time. The findings of this paper indicate that
using IPSs for autonomous robots is a promising area of future research.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 05:25:04 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Kouma",
"Sean",
""
],
[
"Masters",
"Rachel",
""
]
] |
TITLE: Indoor Localization for Autonomous Robot Navigation
ABSTRACT: Indoor positioning systems (IPSs) have gained attention as outdoor navigation
becomes prevalent in everyday life. Research is being actively conducted on how
indoor smartphone navigation can be accomplished and improved using received
signal strength indication (RSSI) and machine learning (ML). IPSs have more use
cases that need further exploration, and we aim to explore using IPSs for the
indoor navigation of an autonomous robot. We collected a dataset and trained
models to test on a robot. We also developed an A* path-planning algorithm so
that our robot could navigate itself using predicted directions. After testing
different network structures, our robot was able to successfully navigate
corners around 50 percent of the time. The findings of this paper indicate that
using IPSs for autonomous robots is a promising area of future research.
|
new_dataset
| 0.960025
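The A* path-planning step mentioned in the abstract above can be sketched in a
few lines: given a grid of free and blocked cells, the robot searches for the
shortest 4-connected path from its estimated position to a goal. This is a
generic textbook A* with a hypothetical grid, not the authors' implementation.

```python
# Generic A* sketch on a 4-connected occupancy grid (1 = blocked).
import heapq

def astar(grid, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:                                       # reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # path routed around the blocked middle row
```

In the setting above, the start cell would come from the RSSI-based position
predictions rather than from ground truth.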
|
2502.20749
|
Yichi Zhang
|
Yichi Zhang, Bohao Lv, Le Xue, Wenbo Zhang, Yuchen Liu, Yu Fu, Yuan
Cheng, Yuan Qi
|
SemiSAM+: Rethinking Semi-Supervised Medical Image Segmentation in the
Era of Foundation Models
| null | null | null | null |
eess.IV cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning-based medical image segmentation typically requires a large
amount of labeled data for training, making it less applicable in clinical
settings due to high annotation cost. Semi-supervised learning (SSL) has
emerged as an appealing strategy because it depends less on acquiring
abundant annotations from experts compared to fully supervised methods. Beyond
existing model-centric advancements of SSL by designing novel regularization
strategies, we anticipate a paradigmatic shift due to the emergence of
promptable segmentation foundation models with universal segmentation
capabilities using positional prompts represented by Segment Anything Model
(SAM). In this paper, we present SemiSAM+, a foundation model-driven SSL
framework to efficiently learn from limited labeled data for medical image
segmentation. SemiSAM+ consists of one or multiple promptable foundation models
as generalist models, and a trainable task-specific segmentation model as
specialist model. For a given new segmentation task, the training is based on
the specialist-generalist collaborative learning procedure, where the trainable
specialist model delivers positional prompts to interact with the frozen
generalist models to acquire pseudo-labels, and then the generalist model
output provides the specialist model with informative and efficient supervision
which benefits the automatic segmentation and prompt generation in turn.
Extensive experiments on two public datasets and one in-house clinical dataset
demonstrate that SemiSAM+ achieves significant performance improvement,
especially under extremely limited annotation scenarios, and shows strong
efficiency as a plug-and-play strategy that can be easily adapted to different
specialist and generalist models.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 05:54:41 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Zhang",
"Yichi",
""
],
[
"Lv",
"Bohao",
""
],
[
"Xue",
"Le",
""
],
[
"Zhang",
"Wenbo",
""
],
[
"Liu",
"Yuchen",
""
],
[
"Fu",
"Yu",
""
],
[
"Cheng",
"Yuan",
""
],
[
"Qi",
"Yuan",
""
]
] |
TITLE: SemiSAM+: Rethinking Semi-Supervised Medical Image Segmentation in the
Era of Foundation Models
ABSTRACT: Deep learning-based medical image segmentation typically requires a
large amount of labeled data for training, making it less applicable in
clinical settings due to high annotation cost. Semi-supervised learning (SSL)
has emerged as an appealing strategy because it depends less on acquiring
abundant annotations from experts compared to fully supervised methods. Beyond
existing model-centric advancements of SSL by designing novel regularization
strategies, we anticipate a paradigmatic shift due to the emergence of
promptable segmentation foundation models with universal segmentation
capabilities using positional prompts represented by Segment Anything Model
(SAM). In this paper, we present SemiSAM+, a foundation model-driven SSL
framework to efficiently learn from limited labeled data for medical image
segmentation. SemiSAM+ consists of one or multiple promptable foundation models
as generalist models, and a trainable task-specific segmentation model as
specialist model. For a given new segmentation task, the training is based on
the specialist-generalist collaborative learning procedure, where the trainable
specialist model delivers positional prompts to interact with the frozen
generalist models to acquire pseudo-labels, and then the generalist model
output provides the specialist model with informative and efficient supervision
which benefits the automatic segmentation and prompt generation in turn.
Extensive experiments on two public datasets and one in-house clinical dataset
demonstrate that SemiSAM+ achieves significant performance improvement,
especially under extremely limited annotation scenarios, and shows strong
efficiency as a plug-and-play strategy that can be easily adapted to different
specialist and generalist models.
|
no_new_dataset
| 0.951639
|
2502.20755
|
Soumya Mukherjee
|
Soumya Mukherjee, Bharath K. Sriperumbudur
|
Minimax Optimal Kernel Two-Sample Tests with Random Features
|
82 pages, 10 figures, 5 tables
| null | null | null |
math.ST cs.LG stat.ML stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reproducing Kernel Hilbert Space (RKHS) embedding of probability
distributions has proved to be an effective approach, via MMD (maximum mean
discrepancy), for nonparametric hypothesis testing problems involving
distributions defined over general (non-Euclidean) domains. While a substantial
amount of work has been done on this topic, only recently, minimax optimal
two-sample tests have been constructed that incorporate, unlike MMD, both the
mean element and a regularized version of the covariance operator. However, as
with most kernel algorithms, the computational complexity of the optimal test
scales cubically in the sample size, limiting its applicability. In this paper,
we propose a spectral regularized two-sample test based on random Fourier
feature (RFF) approximation and investigate the trade-offs between statistical
optimality and computational efficiency. We show the proposed test to be
minimax optimal if the approximation order of RFF (which depends on the
smoothness of the likelihood ratio and the decay rate of the eigenvalues of the
integral operator) is sufficiently large. We develop a practically
implementable permutation-based version of the proposed test with a
data-adaptive strategy for selecting the regularization parameter and the
kernel. Finally, through numerical experiments on simulated and benchmark
datasets, we demonstrate that the proposed RFF-based test is computationally
efficient and performs almost similarly (with a small drop in power) to the exact
test.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 06:12:00 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Mukherjee",
"Soumya",
""
],
[
"Sriperumbudur",
"Bharath K.",
""
]
] |
TITLE: Minimax Optimal Kernel Two-Sample Tests with Random Features
ABSTRACT: Reproducing Kernel Hilbert Space (RKHS) embedding of probability
distributions has proved to be an effective approach, via MMD (maximum mean
discrepancy), for nonparametric hypothesis testing problems involving
distributions defined over general (non-Euclidean) domains. While a substantial
amount of work has been done on this topic, only recently, minimax optimal
two-sample tests have been constructed that incorporate, unlike MMD, both the
mean element and a regularized version of the covariance operator. However, as
with most kernel algorithms, the computational complexity of the optimal test
scales cubically in the sample size, limiting its applicability. In this paper,
we propose a spectral regularized two-sample test based on random Fourier
feature (RFF) approximation and investigate the trade-offs between statistical
optimality and computational efficiency. We show the proposed test to be
minimax optimal if the approximation order of RFF (which depends on the
smoothness of the likelihood ratio and the decay rate of the eigenvalues of the
integral operator) is sufficiently large. We develop a practically
implementable permutation-based version of the proposed test with a
data-adaptive strategy for selecting the regularization parameter and the
kernel. Finally, through numerical experiments on simulated and benchmark
datasets, we demonstrate that the proposed RFF-based test is computationally
efficient and performs almost similarly (with a small drop in power) to the exact
test.
|
no_new_dataset
| 0.944791
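The core computational trick in the abstract above, approximating kernel
statistics with random Fourier features, can be sketched briefly: map both
samples through a shared RFF map for a Gaussian kernel, compare the mean
feature vectors, and calibrate with permutations. This is a plain RFF-MMD
permutation test, not the spectral-regularized statistic the paper studies;
the bandwidth and feature count are arbitrary assumptions.

```python
# Sketch: two-sample permutation test using a random-Fourier-feature MMD statistic.
import numpy as np

def rff_map(X, W, b):
    """Random Fourier features approximating a Gaussian kernel."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

def rff_mmd_test(X, Y, num_features=200, bandwidth=1.0, num_perms=500, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / bandwidth, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    Z = rff_map(np.vstack([X, Y]), W, b)
    n = X.shape[0]
    stat = np.linalg.norm(Z[:n].mean(axis=0) - Z[n:].mean(axis=0))
    # Permutation calibration of the null distribution.
    perm_stats = []
    for _ in range(num_perms):
        idx = rng.permutation(Z.shape[0])
        perm_stats.append(np.linalg.norm(Z[idx[:n]].mean(axis=0) - Z[idx[n:]].mean(axis=0)))
    p_value = (1 + np.sum(np.array(perm_stats) >= stat)) / (1 + num_perms)
    return stat, p_value

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
Y = rng.normal(loc=0.5, size=(200, 5))   # shifted mean, should be detected
print(rff_mmd_test(X, Y))
```

Because the features are computed once per sample, permutations only reshuffle
precomputed feature vectors, which is the source of the computational savings
over exact kernel tests.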
|
2502.20767
|
Hui Lai
|
Hui Lai, Qi Chen, Junping Zhang, Jian Pu
|
A2DO: Adaptive Anti-Degradation Odometry with Deep Multi-Sensor Fusion
for Autonomous Navigation
|
6+1 pages, 6 figures, accepted by ICRA
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate localization is essential for the safe and effective navigation of
autonomous vehicles, and Simultaneous Localization and Mapping (SLAM) is a
cornerstone technology in this context. However, the performance of the SLAM
system can deteriorate under challenging conditions such as low light, adverse
weather, or obstructions due to sensor degradation. We present A2DO, a novel
end-to-end multi-sensor fusion odometry system that enhances robustness in
these scenarios through deep neural networks. A2DO integrates LiDAR and visual
data, employing a multi-layer, multi-scale feature encoding module augmented by
an attention mechanism to mitigate sensor degradation dynamically. The system
is pre-trained extensively on simulated datasets covering a broad range of
degradation scenarios and fine-tuned on a curated set of real-world data,
ensuring robust adaptation to complex scenarios. Our experiments demonstrate
that A2DO maintains superior localization accuracy and robustness across
various degradation conditions, showcasing its potential for practical
implementation in autonomous vehicle systems.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 06:37:51 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Lai",
"Hui",
""
],
[
"Chen",
"Qi",
""
],
[
"Zhang",
"Junping",
""
],
[
"Pu",
"Jian",
""
]
] |
TITLE: A2DO: Adaptive Anti-Degradation Odometry with Deep Multi-Sensor Fusion
for Autonomous Navigation
ABSTRACT: Accurate localization is essential for the safe and effective navigation of
autonomous vehicles, and Simultaneous Localization and Mapping (SLAM) is a
cornerstone technology in this context. However, the performance of the SLAM
system can deteriorate under challenging conditions such as low light, adverse
weather, or obstructions due to sensor degradation. We present A2DO, a novel
end-to-end multi-sensor fusion odometry system that enhances robustness in
these scenarios through deep neural networks. A2DO integrates LiDAR and visual
data, employing a multi-layer, multi-scale feature encoding module augmented by
an attention mechanism to mitigate sensor degradation dynamically. The system
is pre-trained extensively on simulated datasets covering a broad range of
degradation scenarios and fine-tuned on a curated set of real-world data,
ensuring robust adaptation to complex scenarios. Our experiments demonstrate
that A2DO maintains superior localization accuracy and robustness across
various degradation conditions, showcasing its potential for practical
implementation in autonomous vehicle systems.
|
no_new_dataset
| 0.941547
|
2502.20772
|
Tianyi Wang
|
Tianyi Zeng, Tianyi Wang, Junfeng Jiao, Xinbo Chen
|
Damper-B-PINN: Damper Characteristics-Based Bayesian Physics-Informed
Neural Network for Vehicle State Estimation
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State estimation for Multi-Input Multi-Output (MIMO) systems with noise, such
as vehicle chassis systems, presents a significant challenge due to the
imperfect and complex relationship between inputs and outputs. To solve this
problem, we design a Damper characteristics-based Bayesian Physics-Informed
Neural Network (Damper-B-PINN). First, we introduce a neuron forward process
inspired by the mechanical properties of dampers, which limits abrupt jumps in
neuron values between epochs while maintaining search capability. Additionally,
we apply an optimized Bayesian dropout layer to the MIMO system to enhance
robustness against noise and prevent non-convergence issues. Physical
information is incorporated into the loss function to serve as a physical prior
for the neural network. The effectiveness of our Damper-B-PINN architecture is
then validated across ten datasets and fourteen vehicle types, demonstrating
superior accuracy, computational efficiency, and convergence in vehicle state
estimation (i.e., dynamic wheel load) compared to other state-of-the-art
benchmarks.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 06:46:21 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Zeng",
"Tianyi",
""
],
[
"Wang",
"Tianyi",
""
],
[
"Jiao",
"Junfeng",
""
],
[
"Chen",
"Xinbo",
""
]
] |
TITLE: Damper-B-PINN: Damper Characteristics-Based Bayesian Physics-Informed
Neural Network for Vehicle State Estimation
ABSTRACT: State estimation for Multi-Input Multi-Output (MIMO) systems with noise, such
as vehicle chassis systems, presents a significant challenge due to the
imperfect and complex relationship between inputs and outputs. To solve this
problem, we design a Damper characteristics-based Bayesian Physics-Informed
Neural Network (Damper-B-PINN). First, we introduce a neuron forward process
inspired by the mechanical properties of dampers, which limits abrupt jumps in
neuron values between epochs while maintaining search capability. Additionally,
we apply an optimized Bayesian dropout layer to the MIMO system to enhance
robustness against noise and prevent non-convergence issues. Physical
information is incorporated into the loss function to serve as a physical prior
for the neural network. The effectiveness of our Damper-B-PINN architecture is
then validated across ten datasets and fourteen vehicle types, demonstrating
superior accuracy, computational efficiency, and convergence in vehicle state
estimation (i.e., dynamic wheel load) compared to other state-of-the-art
benchmarks.
|
no_new_dataset
| 0.950824
|
2502.20777
|
Jianping Yao
|
Long Huang and Jianping Yao
|
Silicon Micro-Disk Resonator Crossbar Array for High-Speed and
High-Density Photonic Convolution Processing
|
15 pages, 23 figures
| null | null | null |
physics.optics
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Advanced artificial intelligence (AI) algorithms, particularly those based on
artificial neural networks, have garnered significant attention for their
potential applications in areas such as image recognition and natural language
processing. Notably, neural networks make heavy use of matrix-vector
multiplication (MVM) operations, causing substantial computing burden on
existing electronic computing systems. Optical computing, which can perform
optical-domain MVM at ultra-high speed, has attracted considerable attention.
In this paper, we introduce a novel silicon photonic micro-disk
resonator (MDR) crossbar signal processor designed to support matrix-vector
multiplication (MVM) with both high processing speed and enhanced computational
density. The key innovation of the proposed MDR crossbar processor is the
placement of two MDRs at each crosspoint, enabling simultaneous routing and
weighting functions. This design effectively doubles the computational density,
improving overall performance. We fabricate a silicon photonic MDR crossbar
processor, which is employed to perform convolutional tasks in a convolutional
neural network (CNN). The experimental results demonstrate that the photonic
processor achieves a classification accuracy of 96% on the MNIST dataset.
Additionally, it is capable of scaling to a computational speed of up to 160
tera-operations per second (TOPS) and a computational density as high as 25.6
TOPS/mm2. Our approach holds significant promise for enabling highly efficient,
scalable on-chip optical computing, with broad potential applications in AI and
beyond.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 06:55:45 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Huang",
"Long",
""
],
[
"Yao",
"Jianping",
""
]
] |
TITLE: Silicon Micro-Disk Resonator Crossbar Array for High-Speed and
High-Density Photonic Convolution Processing
ABSTRACT: Advanced artificial intelligence (AI) algorithms, particularly those based on
artificial neural networks, have garnered significant attention for their
potential applications in areas such as image recognition and natural language
processing. Notably, neural networks make heavy use of matrix-vector
multiplication (MVM) operations, causing substantial computing burden on
existing electronic computing systems. Optical computing, which can perform
optical-domain MVM at ultra-high speed, has attracted considerable attention.
In this paper, we introduce a novel silicon photonic micro-disk
resonator (MDR) crossbar signal processor designed to support matrix-vector
multiplication (MVM) with both high processing speed and enhanced computational
density. The key innovation of the proposed MDR crossbar processor is the
placement of two MDRs at each crosspoint, enabling simultaneous routing and
weighting functions. This design effectively doubles the computational density,
improving overall performance. We fabricate a silicon photonic MDR crossbar
processor, which is employed to perform convolutional tasks in a convolutional
neural network (CNN). The experimental results demonstrate that the photonic
processor achieves a classification accuracy of 96% on the MNIST dataset.
Additionally, it is capable of scaling to a computational speed of up to 160
tera-operations per second (TOPS) and a computational density as high as 25.6
TOPS/mm2. Our approach holds significant promise for enabling highly efficient,
scalable on-chip optical computing, with broad potential applications in AI and
beyond.
|
no_new_dataset
| 0.953144
|
2502.20780
|
Qiao Yan
|
Qiao Yan, Yuchen Yuan, Xiaowei Hu, Yihan Wang, Jiaqi Xu, Jinpeng Li,
Chi-Wing Fu, Pheng-Ann Heng
|
MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical
Hallucination in Vision-Language Models
| null | null | null | null |
cs.AI cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The increasing use of vision-language models (VLMs) in healthcare
applications presents great challenges related to hallucinations, in which the
models may generate seemingly plausible results that are in fact incorrect.
Such hallucinations can jeopardize clinical decision making, potentially
harming the diagnosis and treatments. In this work, we propose MedHallTune, a
large-scale benchmark designed specifically to evaluate and mitigate
hallucinations in medical VLMs. Comprising over 100,000 images and 1,000,000
instruction pairs, MedHallTune includes both hallucination and
non-hallucination samples, each with ground-truth annotations. We conduct a
comprehensive evaluation of current medical and general VLMs using MedHallTune,
assessing their performance across key metrics, including clinical accuracy,
relevance, detail level, and risk level. The experimental results show that
fine-tuning with MedHallTune successfully improves the ability of several
existing models to manage hallucinations and boost their zero-shot performance
on downstream visual-question-answering (VQA) tasks, making them more reliable
for practical medical applications. Our work contributes to the development of
more trustworthy VLMs. Codes and dataset will be available at
\href{https://github.com/russellyq/MedHallTune}{MedHallTune}.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 06:59:49 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Yan",
"Qiao",
""
],
[
"Yuan",
"Yuchen",
""
],
[
"Hu",
"Xiaowei",
""
],
[
"Wang",
"Yihan",
""
],
[
"Xu",
"Jiaqi",
""
],
[
"Li",
"Jinpeng",
""
],
[
"Fu",
"Chi-Wing",
""
],
[
"Heng",
"Pheng-Ann",
""
]
] |
TITLE: MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical
Hallucination in Vision-Language Models
ABSTRACT: The increasing use of vision-language models (VLMs) in healthcare
applications presents great challenges related to hallucinations, in which the
models may generate seemingly plausible results that are in fact incorrect.
Such hallucinations can jeopardize clinical decision making, potentially
harming the diagnosis and treatments. In this work, we propose MedHallTune, a
large-scale benchmark designed specifically to evaluate and mitigate
hallucinations in medical VLMs. Comprising over 100,000 images and 1,000,000
instruction pairs, MedHallTune includes both hallucination and
non-hallucination samples, each with ground-truth annotations. We conduct a
comprehensive evaluation of current medical and general VLMs using MedHallTune,
assessing their performance across key metrics, including clinical accuracy,
relevance, detail level, and risk level. The experimental results show that
fine-tuning with MedHallTune successfully improves the ability of several
existing models to manage hallucinations and boost their zero-shot performance
on downstream visual-question-answering (VQA) tasks, making them more reliable
for practical medical applications. Our work contributes to the development of
more trustworthy VLMs. Codes and dataset will be available at
\href{https://github.com/russellyq/MedHallTune}{MedHallTune}.
|
new_dataset
| 0.962568
|
2502.20784
|
Tao Chen
|
Tao Chen and Chenhui Wang and Zhihao Chen and Hongming Shan
|
Autoregressive Medical Image Segmentation via Next-Scale Mask Prediction
|
10 pages, 4 figures
| null | null | null |
eess.IV cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While deep learning has significantly advanced medical image segmentation,
most existing methods still struggle with handling complex anatomical regions.
Cascaded or deep supervision-based approaches attempt to address this challenge
through multi-scale feature learning but fail to establish sufficient
inter-scale dependencies, as each scale relies solely on the features of the
immediate predecessor. To this end, we propose the AutoRegressive Segmentation
framework via next-scale mask prediction, termed AR-Seg, which progressively
predicts the next-scale mask by explicitly modeling dependencies across all
previous scales within a unified architecture. AR-Seg introduces three
innovations: (1) a multi-scale mask autoencoder that quantizes the mask into
multi-scale token maps to capture hierarchical anatomical structures, (2) a
next-scale autoregressive mechanism that progressively predicts next-scale
masks to enable sufficient inter-scale dependencies, and (3) a
consensus-aggregation strategy that combines multiple sampled results to
generate a more accurate mask, further improving segmentation robustness.
Extensive experimental results on two benchmark datasets with different
modalities demonstrate that AR-Seg outperforms state-of-the-art methods while
explicitly visualizing the intermediate coarse-to-fine segmentation process.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 07:05:58 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Chen",
"Tao",
""
],
[
"Wang",
"Chenhui",
""
],
[
"Chen",
"Zhihao",
""
],
[
"Shan",
"Hongming",
""
]
] |
TITLE: Autoregressive Medical Image Segmentation via Next-Scale Mask Prediction
ABSTRACT: While deep learning has significantly advanced medical image segmentation,
most existing methods still struggle with handling complex anatomical regions.
Cascaded or deep supervision-based approaches attempt to address this challenge
through multi-scale feature learning but fail to establish sufficient
inter-scale dependencies, as each scale relies solely on the features of the
immediate predecessor. To this end, we propose the AutoRegressive Segmentation
framework via next-scale mask prediction, termed AR-Seg, which progressively
predicts the next-scale mask by explicitly modeling dependencies across all
previous scales within a unified architecture. AR-Seg introduces three
innovations: (1) a multi-scale mask autoencoder that quantizes the mask into
multi-scale token maps to capture hierarchical anatomical structures, (2) a
next-scale autoregressive mechanism that progressively predicts next-scale
masks to enable sufficient inter-scale dependencies, and (3) a
consensus-aggregation strategy that combines multiple sampled results to
generate a more accurate mask, further improving segmentation robustness.
Extensive experimental results on two benchmark datasets with different
modalities demonstrate that AR-Seg outperforms state-of-the-art methods while
explicitly visualizing the intermediate coarse-to-fine segmentation process.
|
no_new_dataset
| 0.949669
|
2502.20785
|
Hyewon Jeon
|
Hyewon Jeon, Jay-Yoon Lee
|
GraphCheck: Multi-Path Fact-Checking with Entity-Relationship Graphs
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Automated fact-checking aims to assess the truthfulness of text based on
relevant evidence, yet verifying complex claims requiring multi-hop reasoning
remains a significant challenge. We propose GraphCheck, a novel framework that
converts claims into entity-relationship graphs for comprehensive verification.
By identifying relations between explicit and latent entities across
multiple paths, GraphCheck enhances the adaptability and robustness of
verification. Furthermore, we introduce DP-GraphCheck, a two-stage variant that
improves performance by incorporating direct prompting as an initial filtering
step. Experiments on the HOVER and EX-FEVER datasets show that our approach
outperforms existing methods, particularly in multi-hop reasoning tasks.
Moreover, our two-stage framework generalizes well to other fact-checking
pipelines, demonstrating its versatility.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 07:06:19 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Jeon",
"Hyewon",
""
],
[
"Lee",
"Jay-Yoon",
""
]
] |
TITLE: GraphCheck: Multi-Path Fact-Checking with Entity-Relationship Graphs
ABSTRACT: Automated fact-checking aims to assess the truthfulness of text based on
relevant evidence, yet verifying complex claims requiring multi-hop reasoning
remains a significant challenge. We propose GraphCheck, a novel framework that
converts claims into entity-relationship graphs for comprehensive verification.
By identifying relations between explicit and latent entities across
multiple paths, GraphCheck enhances the adaptability and robustness of
verification. Furthermore, we introduce DP-GraphCheck, a two-stage variant that
improves performance by incorporating direct prompting as an initial filtering
step. Experiments on the HOVER and EX-FEVER datasets show that our approach
outperforms existing methods, particularly in multi-hop reasoning tasks.
Moreover, our two-stage framework generalizes well to other fact-checking
pipelines, demonstrating its versatility.
|
no_new_dataset
| 0.944434
|
2502.20793
|
Amir Jahangiri
|
Amir Jahangiri, Tatiana Agback, Ulrika Brath and Vladislav Orekhov
|
Towards Ultimate NMR Resolution with Deep Learning
| null | null | null | null |
physics.bio-ph cs.LG physics.data-an
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In multidimensional NMR spectroscopy, practical resolution is defined as the
ability to distinguish and accurately determine signal positions against a
background of overlapping peaks, thermal noise, and spectral artifacts. In the
pursuit of ultimate resolution, we introduce Peak Probability Presentations
($P^3$), a statistical spectral representation that assigns a probability to
each spectral point, indicating the likelihood of a peak maximum occurring at
that location. The mapping between the spectrum and $P^3$ is achieved using
MR-Ai, a physics-inspired deep learning neural network architecture, designed
to handle multidimensional NMR spectra. Furthermore, we demonstrate that MR-Ai
enables coprocessing of multiple spectra, facilitating direct information
exchange between datasets. This feature significantly enhances spectral
quality, particularly in cases of highly sparse sampling. The performance of
MR-Ai and the high value of $P^3$ are demonstrated on synthetic data and spectra
of Tau, MATL1, Calmodulin, and several other proteins.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 07:20:25 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Jahangiri",
"Amir",
""
],
[
"Agback",
"Tatiana",
""
],
[
"Brath",
"Ulrika",
""
],
[
"Orekhov",
"Vladislav",
""
]
] |
TITLE: Towards Ultimate NMR Resolution with Deep Learning
ABSTRACT: In multidimensional NMR spectroscopy, practical resolution is defined as the
ability to distinguish and accurately determine signal positions against a
background of overlapping peaks, thermal noise, and spectral artifacts. In the
pursuit of ultimate resolution, we introduce Peak Probability Presentations
($P^3$), a statistical spectral representation that assigns a probability to
each spectral point, indicating the likelihood of a peak maximum occurring at
that location. The mapping between the spectrum and $P^3$ is achieved using
MR-Ai, a physics-inspired deep learning neural network architecture, designed
to handle multidimensional NMR spectra. Furthermore, we demonstrate that MR-Ai
enables coprocessing of multiple spectra, facilitating direct information
exchange between datasets. This feature significantly enhances spectral
quality, particularly in cases of highly sparse sampling. The performance of
MR-Ai and the high value of $P^3$ are demonstrated on synthetic data and spectra
of Tau, MATL1, Calmodulin, and several other proteins.
|
no_new_dataset
| 0.951233
|
2502.20803
|
Masoumeh Chapariniya
|
Masoumeh Chapariniya, Hossein Ranjbar, Teodora Vukovic, Sarah Ebling,
Volker Dellwo
|
Two-Stream Spatial-Temporal Transformer Framework for Person
Identification via Natural Conversational Keypoints
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In the age of AI-driven generative technologies, traditional biometric
recognition systems face unprecedented challenges, particularly from
sophisticated deepfake and face reenactment techniques. In this study, we
propose a Two-Stream Spatial-Temporal Transformer Framework for person
identification using upper body keypoints visible during online conversations,
which we term conversational keypoints. Our framework processes both spatial
relationships between keypoints and their temporal evolution through two
specialized branches: a Spatial Transformer (STR) that learns distinctive
structural patterns in keypoint configurations, and a Temporal Transformer
(TTR) that captures sequential motion patterns. Using the state-of-the-art
Sapiens pose estimator, we extract 133 keypoints (based on COCO-WholeBody
format) representing facial features, head pose, and hand positions. The
framework was evaluated on a dataset of 114 individuals engaged in natural
conversations, achieving recognition accuracies of 80.12% for the spatial
stream and 63.61% for the temporal stream. We then explored two fusion strategies:
a shared loss function approach achieving 82.22% accuracy, and a feature-level
fusion method that concatenates feature maps from both streams, significantly
improving performance to 94.86%. By jointly modeling both static anatomical
relationships and dynamic movement patterns, our approach learns comprehensive
identity signatures that are more robust to spoofing than traditional
appearance-based methods.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 07:38:48 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Chapariniya",
"Masoumeh",
""
],
[
"Ranjbar",
"Hossein",
""
],
[
"Vukovic",
"Teodora",
""
],
[
"Ebling",
"Sarah",
""
],
[
"Dellwo",
"Volker",
""
]
] |
TITLE: Two-Stream Spatial-Temporal Transformer Framework for Person
Identification via Natural Conversational Keypoints
ABSTRACT: In the age of AI-driven generative technologies, traditional biometric
recognition systems face unprecedented challenges, particularly from
sophisticated deepfake and face reenactment techniques. In this study, we
propose a Two-Stream Spatial-Temporal Transformer Framework for person
identification using upper body keypoints visible during online conversations,
which we term conversational keypoints. Our framework processes both spatial
relationships between keypoints and their temporal evolution through two
specialized branches: a Spatial Transformer (STR) that learns distinctive
structural patterns in keypoint configurations, and a Temporal Transformer
(TTR) that captures sequential motion patterns. Using the state-of-the-art
Sapiens pose estimator, we extract 133 keypoints (based on COCO-WholeBody
format) representing facial features, head pose, and hand positions. The
framework was evaluated on a dataset of 114 individuals engaged in natural
conversations, achieving recognition accuracies of 80.12% for the spatial
stream and 63.61% for the temporal stream. We then explored two fusion strategies:
a shared loss function approach achieving 82.22% accuracy, and a feature-level
fusion method that concatenates feature maps from both streams, significantly
improving performance to 94.86%. By jointly modeling both static anatomical
relationships and dynamic movement patterns, our approach learns comprehensive
identity signatures that are more robust to spoofing than traditional
appearance-based methods.
|
no_new_dataset
| 0.94868
|
2502.20806
|
Faisal Mohammad
|
Faisal Mohammad, Duksan Ryu
|
Multimodal Learning for Just-In-Time Software Defect Prediction in
Autonomous Driving Systems
|
9
| null | null | null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, the rise of autonomous driving technologies has highlighted
the critical importance of reliable software for ensuring safety and
performance. This paper proposes a novel approach for just-in-time software
defect prediction (JIT-SDP) in autonomous driving software systems using
multimodal learning. The proposed model leverages the multimodal transformers
in which the pre-trained transformers and a combining module deal with the
multiple data modalities of the software system datasets such as code features,
change metrics, and contextual information. The key point for adapting
multimodal learning is to utilize the attention mechanism between the different
data modalities, such as text, numerical, and categorical data. In the combining
module, the output of a transformer model on text data and tabular features
containing categorical and numerical data are combined to produce the
predictions using the fully connected layers. Experiments conducted on three
open-source autonomous driving system software projects collected from the
GitHub repository (Apollo, Carla, and Donkeycar) demonstrate that the proposed
approach significantly outperforms state-of-the-art deep learning and machine
learning models regarding evaluation metrics. Our findings highlight the
potential of multimodal learning to enhance the reliability and safety of
autonomous driving software through improved defect prediction.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 07:45:10 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Mohammad",
"Faisal",
""
],
[
"Ryu",
"Duksan",
""
]
] |
TITLE: Multimodal Learning for Just-In-Time Software Defect Prediction in
Autonomous Driving Systems
ABSTRACT: In recent years, the rise of autonomous driving technologies has highlighted
the critical importance of reliable software for ensuring safety and
performance. This paper proposes a novel approach for just-in-time software
defect prediction (JIT-SDP) in autonomous driving software systems using
multimodal learning. The proposed model leverages the multimodal transformers
in which the pre-trained transformers and a combining module deal with the
multiple data modalities of the software system datasets such as code features,
change metrics, and contextual information. The key point for adapting
multimodal learning is to utilize the attention mechanism between the different
data modalities, such as text, numerical, and categorical data. In the combining
module, the output of a transformer model on text data and tabular features
containing categorical and numerical data are combined to produce the
predictions using the fully connected layers. Experiments conducted on three
open-source autonomous driving system software projects collected from the
GitHub repository (Apollo, Carla, and Donkeycar) demonstrate that the proposed
approach significantly outperforms state-of-the-art deep learning and machine
learning models regarding evaluation metrics. Our findings highlight the
potential of multimodal learning to enhance the reliability and safety of
autonomous driving software through improved defect prediction.
|
no_new_dataset
| 0.949106
|
2502.20811
|
Xiao Wang
|
Xiao Wang, Jingyun Hua, Weihong Lin, Yuanxing Zhang, Fuzheng Zhang,
Jianlong Wu, Di Zhang, Liqiang Nie
|
HAIC: Improving Human Action Understanding and Generation with Better
Captions for Multi-modal Large Language Models
| null | null | null | null |
cs.CV cs.CL cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Recent Multi-modal Large Language Models (MLLMs) have made great progress in
video understanding. However, their performance on videos involving human
actions is still limited by the lack of high-quality data. To address this, we
introduce a two-stage data annotation pipeline. First, we design strategies to
accumulate videos featuring clear human actions from the Internet. Second,
videos are annotated in a standardized caption format that uses human
attributes to distinguish individuals and chronologically details their actions
and interactions. Through this pipeline, we curate two datasets, namely
HAICTrain and HAICBench. \textbf{HAICTrain} comprises 126K video-caption pairs
generated by Gemini-Pro and verified for training purposes. Meanwhile,
\textbf{HAICBench} includes 500 manually annotated video-caption pairs and
1,400 QA pairs, for a comprehensive evaluation of human action understanding.
Experimental results demonstrate that training with HAICTrain not only
significantly enhances human understanding abilities across 4 benchmarks, but
can also improve text-to-video generation results. Both the HAICTrain and
HAICBench are released at https://huggingface.co/datasets/KuaishouHAIC/HAIC.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 07:53:40 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Wang",
"Xiao",
""
],
[
"Hua",
"Jingyun",
""
],
[
"Lin",
"Weihong",
""
],
[
"Zhang",
"Yuanxing",
""
],
[
"Zhang",
"Fuzheng",
""
],
[
"Wu",
"Jianlong",
""
],
[
"Zhang",
"Di",
""
],
[
"Nie",
"Liqiang",
""
]
] |
TITLE: HAIC: Improving Human Action Understanding and Generation with Better
Captions for Multi-modal Large Language Models
ABSTRACT: Recent Multi-modal Large Language Models (MLLMs) have made great progress in
video understanding. However, their performance on videos involving human
actions is still limited by the lack of high-quality data. To address this, we
introduce a two-stage data annotation pipeline. First, we design strategies to
accumulate videos featuring clear human actions from the Internet. Second,
videos are annotated in a standardized caption format that uses human
attributes to distinguish individuals and chronologically details their actions
and interactions. Through this pipeline, we curate two datasets, namely
HAICTrain and HAICBench. \textbf{HAICTrain} comprises 126K video-caption pairs
generated by Gemini-Pro and verified for training purposes. Meanwhile,
\textbf{HAICBench} includes 500 manually annotated video-caption pairs and
1,400 QA pairs, for a comprehensive evaluation of human action understanding.
Experimental results demonstrate that training with HAICTrain not only
significantly enhances human understanding abilities across 4 benchmarks, but
can also improve text-to-video generation results. Both the HAICTrain and
HAICBench are released at https://huggingface.co/datasets/KuaishouHAIC/HAIC.
|
new_dataset
| 0.95846
|
2502.20824
|
Fadeel Sher Khan
|
Fadeel Sher Khan, Joshua Ebenezer, Hamid Sheikh, Seok-Jun Lee
|
MFSR-GAN: Multi-Frame Super-Resolution with Handheld Motion Modeling
|
8 pages, 6 figures
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Smartphone cameras have become ubiquitous imaging tools, yet their small
sensors and compact optics often limit spatial resolution and introduce
distortions. Combining information from multiple low-resolution (LR) frames to
produce a high-resolution (HR) image has been explored to overcome the inherent
limitations of smartphone cameras. Despite the promise of multi-frame
super-resolution (MFSR), current approaches are hindered by datasets that fail
to capture the characteristic noise and motion patterns found in real-world
handheld burst images. In this work, we address this gap by introducing a novel
synthetic data engine that uses multi-exposure static images to synthesize
LR-HR training pairs while preserving sensor-specific noise characteristics and
image motion found during handheld burst photography. We also propose MFSR-GAN:
a multi-scale RAW-to-RGB network for MFSR. Compared to prior approaches,
MFSR-GAN emphasizes a "base frame" throughout its architecture to mitigate
artifacts. Experimental results on both synthetic and real data demonstrate
that MFSR-GAN trained with our synthetic engine yields sharper, more realistic
reconstructions than existing methods for real-world MFSR.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 08:11:03 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Khan",
"Fadeel Sher",
""
],
[
"Ebenezer",
"Joshua",
""
],
[
"Sheikh",
"Hamid",
""
],
[
"Lee",
"Seok-Jun",
""
]
] |
TITLE: MFSR-GAN: Multi-Frame Super-Resolution with Handheld Motion Modeling
ABSTRACT: Smartphone cameras have become ubiquitous imaging tools, yet their small
sensors and compact optics often limit spatial resolution and introduce
distortions. Combining information from multiple low-resolution (LR) frames to
produce a high-resolution (HR) image has been explored to overcome the inherent
limitations of smartphone cameras. Despite the promise of multi-frame
super-resolution (MFSR), current approaches are hindered by datasets that fail
to capture the characteristic noise and motion patterns found in real-world
handheld burst images. In this work, we address this gap by introducing a novel
synthetic data engine that uses multi-exposure static images to synthesize
LR-HR training pairs while preserving sensor-specific noise characteristics and
image motion found during handheld burst photography. We also propose MFSR-GAN:
a multi-scale RAW-to-RGB network for MFSR. Compared to prior approaches,
MFSR-GAN emphasizes a "base frame" throughout its architecture to mitigate
artifacts. Experimental results on both synthetic and real data demonstrate
that MFSR-GAN trained with our synthetic engine yields sharper, more realistic
reconstructions than existing methods for real-world MFSR.
|
no_new_dataset
| 0.952397
|
2502.20850
|
Anh Tien Nguyen
|
Anh Tien Nguyen, Keunho Byeon, Kyungeun Kim, Jin Tae Kwak
|
VLEER: Vision and Language Embeddings for Explainable Whole Slide Image
Representation
|
Under review
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in vision-language models (VLMs) have shown remarkable
potential in bridging visual and textual modalities. In computational
pathology, domain-specific VLMs, which are pre-trained on extensive
histopathology image-text datasets, have succeeded in various downstream tasks.
However, existing research has primarily focused on the pre-training process
and direct applications of VLMs on the patch level, leaving their great
potential for whole slide image (WSI) applications unexplored. In this study,
we hypothesize that pre-trained VLMs inherently capture informative and
interpretable WSI representations through quantitative feature extraction. To
validate this hypothesis, we introduce Vision and Language Embeddings for
Explainable WSI Representation (VLEER), a novel method designed to leverage
VLMs for WSI representation. We systematically evaluate VLEER on three
pathological WSI datasets, demonstrating better performance in WSI analysis
than conventional vision features. More importantly, VLEER offers the
unique advantage of interpretability, enabling direct human-readable insights
into the results by leveraging the textual modality for detailed pathology
annotations, providing clear reasoning for WSI-level pathology downstream
tasks.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 08:49:03 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Nguyen",
"Anh Tien",
""
],
[
"Byeon",
"Keunho",
""
],
[
"Kim",
"Kyungeun",
""
],
[
"Kwak",
"Jin Tae",
""
]
] |
TITLE: VLEER: Vision and Language Embeddings for Explainable Whole Slide Image
Representation
ABSTRACT: Recent advances in vision-language models (VLMs) have shown remarkable
potential in bridging visual and textual modalities. In computational
pathology, domain-specific VLMs, which are pre-trained on extensive
histopathology image-text datasets, have succeeded in various downstream tasks.
However, existing research has primarily focused on the pre-training process
and direct applications of VLMs on the patch level, leaving their great
potential for whole slide image (WSI) applications unexplored. In this study,
we hypothesize that pre-trained VLMs inherently capture informative and
interpretable WSI representations through quantitative feature extraction. To
validate this hypothesis, we introduce Vision and Language Embeddings for
Explainable WSI Representation (VLEER), a novel method designed to leverage
VLMs for WSI representation. We systematically evaluate VLEER on three
pathological WSI datasets, demonstrating better performance in WSI analysis
than conventional vision features. More importantly, VLEER offers the
unique advantage of interpretability, enabling direct human-readable insights
into the results by leveraging the textual modality for detailed pathology
annotations, providing clear reasoning for WSI-level pathology downstream
tasks.
|
no_new_dataset
| 0.944125
|
2502.20852
|
Rongchang Lu
|
Rongchang Lu, Bingcheng Liao, Haowen Hou, Jiahang Lv and Xin Hai
|
Delta-WKV: A Novel Meta-in-Context Learner for MRI Super-Resolution
|
This paper has been published at MICCAI 2025. Feel free to contact
[email protected]
| null | null | null |
eess.IV cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Magnetic Resonance Imaging (MRI) Super-Resolution (SR) addresses challenges
such as long scan times and expensive equipment by enhancing image
resolution from low-quality inputs acquired in shorter scan times in clinical
settings. However, current SR techniques still have problems such as limited
ability to capture both local and global static patterns effectively and
efficiently. To address these limitations, we propose Delta-WKV, a novel MRI
super-resolution model that combines Meta-in-Context Learning (MiCL) with the
Delta rule to better recognize both local and global patterns in MRI images.
This approach allows Delta-WKV to adjust weights dynamically during inference,
improving pattern recognition with fewer parameters and less computational
effort, without using state-space modeling. Additionally, inspired by
Receptance Weighted Key Value (RWKV), Delta-WKV uses a quad-directional
scanning mechanism with time-mixing and channel-mixing structures to capture
long-range dependencies while maintaining high-frequency details. Tests on the
IXI and fastMRI datasets show that Delta-WKV outperforms existing methods,
improving PSNR by 0.06 dB and SSIM by 0.001, while reducing training and
inference times by over 15\%. These results demonstrate its efficiency and
potential for clinical use with large datasets and high-resolution imaging.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 08:49:46 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Lu",
"Rongchang",
""
],
[
"Liao",
"Bingcheng",
""
],
[
"Hou",
"Haowen",
""
],
[
"Lv",
"Jiahang",
""
],
[
"Hai",
"Xin",
""
]
] |
TITLE: Delta-WKV: A Novel Meta-in-Context Learner for MRI Super-Resolution
ABSTRACT: Magnetic Resonance Imaging (MRI) Super-Resolution (SR) addresses
challenges such as long scan times and expensive equipment by enhancing image
resolution from low-quality inputs acquired in shorter scan times in clinical
settings. However, current SR techniques still have problems such as limited
ability to capture both local and global static patterns effectively and
efficiently. To address these limitations, we propose Delta-WKV, a novel MRI
super-resolution model that combines Meta-in-Context Learning (MiCL) with the
Delta rule to better recognize both local and global patterns in MRI images.
This approach allows Delta-WKV to adjust weights dynamically during inference,
improving pattern recognition with fewer parameters and less computational
effort, without using state-space modeling. Additionally, inspired by
Receptance Weighted Key Value (RWKV), Delta-WKV uses a quad-directional
scanning mechanism with time-mixing and channel-mixing structures to capture
long-range dependencies while maintaining high-frequency details. Tests on the
IXI and fastMRI datasets show that Delta-WKV outperforms existing methods,
improving PSNR by 0.06 dB and SSIM by 0.001, while reducing training and
inference times by over 15\%. These results demonstrate its efficiency and
potential for clinical use with large datasets and high-resolution imaging.
|
no_new_dataset
| 0.950457
|
2502.20855
|
Jonathan Drechsel
|
Jonathan Drechsel, Anja Reusch, Steffen Herbold
|
MAMUT: A Novel Framework for Modifying Mathematical Formulas for the
Generation of Specialized Datasets for Language Model Training
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mathematical formulas are a fundamental and widely used component in various
scientific fields, serving as a universal language for expressing complex
concepts and relationships. While state-of-the-art transformer models excel in
processing and understanding natural language, they encounter challenges with
mathematical notation, which involves a complex structure and diverse
representations. This study focuses on the development of specialized training
datasets to enhance the encoding of mathematical content. We introduce Math
Mutator (MAMUT), a framework capable of generating equivalent and falsified
versions of a given mathematical formula in LaTeX notation, effectively
capturing the mathematical variety in notation of the same concept. Based on
MAMUT, we have generated four large mathematical datasets containing diverse
notation, which can be used to train language models with enhanced mathematical
embeddings.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 08:53:42 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Drechsel",
"Jonathan",
""
],
[
"Reusch",
"Anja",
""
],
[
"Herbold",
"Steffen",
""
]
] |
TITLE: MAMUT: A Novel Framework for Modifying Mathematical Formulas for the
Generation of Specialized Datasets for Language Model Training
ABSTRACT: Mathematical formulas are a fundamental and widely used component in various
scientific fields, serving as a universal language for expressing complex
concepts and relationships. While state-of-the-art transformer models excel in
processing and understanding natural language, they encounter challenges with
mathematical notation, which involves a complex structure and diverse
representations. This study focuses on the development of specialized training
datasets to enhance the encoding of mathematical content. We introduce Math
Mutator (MAMUT), a framework capable of generating equivalent and falsified
versions of a given mathematical formula in LaTeX notation, effectively
capturing the mathematical variety in notation of the same concept. Based on
MAMUT, we have generated four large mathematical datasets containing diverse
notation, which can be used to train language models with enhanced mathematical
embeddings.
|
new_dataset
| 0.949529
|
2502.20857
|
Hyeonuk Nam
|
Hyeonuk Nam, Yong-Hwa Park
|
JiTTER: Jigsaw Temporal Transformer for Event Reconstruction for
Self-Supervised Sound Event Detection
| null | null | null | null |
eess.AS cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sound event detection (SED) has significantly benefited from self-supervised
learning (SSL) approaches, particularly masked audio transformer for SED
(MAT-SED), which leverages masked block prediction to reconstruct missing audio
segments. However, while effective in capturing global dependencies, masked
block prediction disrupts transient sound events and lacks explicit enforcement
of temporal order, making it less suitable for fine-grained event boundary
detection. To address these limitations, we propose JiTTER (Jigsaw Temporal
Transformer for Event Reconstruction), an SSL framework designed to enhance
temporal modeling in transformer-based SED. JiTTER introduces a hierarchical
temporal shuffle reconstruction strategy, where audio sequences are randomly
shuffled at both the block-level and frame-level, forcing the model to
reconstruct the correct temporal order. This pretraining objective encourages
the model to learn both global event structures and fine-grained transient
details, improving its ability to detect events with sharp onset-offset
characteristics. Additionally, we incorporate noise injection during block
shuffle, providing a subtle perturbation mechanism that further regularizes
feature learning and enhances model robustness. Experimental results on the
DESED dataset demonstrate that JiTTER outperforms MAT-SED, achieving a 5.89%
improvement in PSDS, highlighting the effectiveness of explicit temporal
reasoning in SSL-based SED. Our findings suggest that structured temporal
reconstruction tasks, rather than simple masked prediction, offer a more
effective pretraining paradigm for sound event representation learning.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 08:55:20 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Nam",
"Hyeonuk",
""
],
[
"Park",
"Yong-Hwa",
""
]
] |
TITLE: JiTTER: Jigsaw Temporal Transformer for Event Reconstruction for
Self-Supervised Sound Event Detection
ABSTRACT: Sound event detection (SED) has significantly benefited from self-supervised
learning (SSL) approaches, particularly masked audio transformer for SED
(MAT-SED), which leverages masked block prediction to reconstruct missing audio
segments. However, while effective in capturing global dependencies, masked
block prediction disrupts transient sound events and lacks explicit enforcement
of temporal order, making it less suitable for fine-grained event boundary
detection. To address these limitations, we propose JiTTER (Jigsaw Temporal
Transformer for Event Reconstruction), an SSL framework designed to enhance
temporal modeling in transformer-based SED. JiTTER introduces a hierarchical
temporal shuffle reconstruction strategy, where audio sequences are randomly
shuffled at both the block-level and frame-level, forcing the model to
reconstruct the correct temporal order. This pretraining objective encourages
the model to learn both global event structures and fine-grained transient
details, improving its ability to detect events with sharp onset-offset
characteristics. Additionally, we incorporate noise injection during block
shuffle, providing a subtle perturbation mechanism that further regularizes
feature learning and enhances model robustness. Experimental results on the
DESED dataset demonstrate that JiTTER outperforms MAT-SED, achieving a 5.89%
improvement in PSDS, highlighting the effectiveness of explicit temporal
reasoning in SSL-based SED. Our findings suggest that structured temporal
reconstruction tasks, rather than simple masked prediction, offer a more
effective pretraining paradigm for sound event representation learning.
|
no_new_dataset
| 0.953144
|
2502.20858
|
Xiaochuan Liu
|
Xiaochuan Liu, Xin Cheng, Yuchong Sun, Xiaoxue Wu, Ruihua Song, Hao
Sun, Denghao Zhang
|
EyEar: Learning Audio Synchronized Human Gaze Trajectory Based on
Physics-Informed Dynamics
| null | null | null | null |
cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Imitating how humans move their gaze in a visual scene is a vital research
problem for both visual understanding and psychology, kindling crucial
applications such as building lifelike virtual characters. Previous studies aim to
predict gaze trajectories when humans are free-viewing an image, searching for
required targets, or looking for clues to answer questions in an image. While
these tasks focus on visual-centric scenarios, humans also move their gaze in
response to audio inputs in more common scenarios. To fill this gap, we
introduce a new task that predicts human gaze trajectories in a visual scene
with synchronized audio inputs and provide a new dataset containing 20k gaze
points from 8 subjects. To effectively integrate audio information and simulate
the dynamic process of human gaze motion, we propose a novel learning framework
called EyEar (Eye moving while Ear listening) based on physics-informed
dynamics, which considers three key factors to predict gazes: eye inherent
motion tendency, vision salient attraction, and audio semantic attraction. We
also propose a probability density score to overcome the high individual
variability of gaze trajectories, thereby improving the stabilization of
optimization and the reliability of the evaluation. Experimental results show
that EyEar outperforms all the baselines in the context of all evaluation
metrics, thanks to the proposed components in the learning model.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 09:01:30 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Liu",
"Xiaochuan",
""
],
[
"Cheng",
"Xin",
""
],
[
"Sun",
"Yuchong",
""
],
[
"Wu",
"Xiaoxue",
""
],
[
"Song",
"Ruihua",
""
],
[
"Sun",
"Hao",
""
],
[
"Zhang",
"Denghao",
""
]
] |
TITLE: EyEar: Learning Audio Synchronized Human Gaze Trajectory Based on
Physics-Informed Dynamics
ABSTRACT: Imitating how humans move their gaze in a visual scene is a vital research
problem for both visual understanding and psychology, kindling crucial
applications such as building lifelike virtual characters. Previous studies aim to
predict gaze trajectories when humans are free-viewing an image, searching for
required targets, or looking for clues to answer questions in an image. While
these tasks focus on visual-centric scenarios, humans also move their gaze in
response to audio inputs in more common scenarios. To fill this gap, we
introduce a new task that predicts human gaze trajectories in a visual scene
with synchronized audio inputs and provide a new dataset containing 20k gaze
points from 8 subjects. To effectively integrate audio information and simulate
the dynamic process of human gaze motion, we propose a novel learning framework
called EyEar (Eye moving while Ear listening) based on physics-informed
dynamics, which considers three key factors to predict gazes: eye inherent
motion tendency, vision salient attraction, and audio semantic attraction. We
also propose a probability density score to overcome the high individual
variability of gaze trajectories, thereby improving the stabilization of
optimization and the reliability of the evaluation. Experimental results show
that EyEar outperforms all the baselines in the context of all evaluation
metrics, thanks to the proposed components in the learning model.
|
new_dataset
| 0.955527
|
2502.20862
|
Ho Fai Po
|
Ho Fai Po, Akke Mats Houben, Anna-Christina Haeb, Yordan P. Raykov,
Daniel Tornero, Jordi Soriano, David Saad
|
Analysis of Evolving Cortical Neuronal Networks Using Visual Informatics
| null | null | null | null |
q-bio.NC cond-mat.dis-nn physics.data-an physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the nature of the changes exhibited by evolving neuronal
dynamics from high-dimensional activity data is essential for advancing
neuroscience, particularly in the study of neuronal network development and the
pathophysiology of neurological disorders. This work examines how advanced
dimensionality reduction techniques can efficiently summarize and enhance our
understanding of the development of neuronal networks over time and in response
to stimulation. We develop a framework based on the Minimum-Distortion
Embedding (MDE) methods and demonstrate how MDE outperforms better known
benchmarks based on Principal Component Analysis (PCA) and t-distributed
Stochastic Neighbor Embedding (t-SNE) by effectively preserving both global
structures and local relationships within complex neuronal datasets. Our
\emph{in silico} experiments reveal MDE's capability to capture the evolving
connectivity patterns of simulated neuronal networks, illustrating a clear
trajectory tracking the simulated network development. Complementary \emph{in
vitro} experiments further validate MDE's advantages, highlighting its ability
to identify behavioral differences and connectivity changes in neuronal
cultures over a 35-day observation period. Additionally, we explore the effects
of stimulation on neuronal activity, providing valuable insights into the
plasticity and learning mechanisms of neuronal networks. Our findings
underscore the importance of metric selection in dimensionality reduction,
showing that correlation metrics yield more meaningful embeddings compared to
Euclidean distance. The implications of this research extend to various areas,
including the potential development of therapeutic intervention strategies for
neurological disorders, and the identification of distinct phases of neuronal
activity for advancing cortical-based computing devices.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 09:02:23 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Po",
"Ho Fai",
""
],
[
"Houben",
"Akke Mats",
""
],
[
"Haeb",
"Anna-Christina",
""
],
[
"Raykov",
"Yordan P.",
""
],
[
"Tornero",
"Daniel",
""
],
[
"Soriano",
"Jordi",
""
],
[
"Saad",
"David",
""
]
] |
TITLE: Analysis of Evolving Cortical Neuronal Networks Using Visual Informatics
ABSTRACT: Understanding the nature of the changes exhibited by evolving neuronal
dynamics from high-dimensional activity data is essential for advancing
neuroscience, particularly in the study of neuronal network development and the
pathophysiology of neurological disorders. This work examines how advanced
dimensionality reduction techniques can efficiently summarize and enhance our
understanding of the development of neuronal networks over time and in response
to stimulation. We develop a framework based on the Minimum-Distortion
Embedding (MDE) methods and demonstrate how MDE outperforms better known
benchmarks based on Principal Component Analysis (PCA) and t-distributed
Stochastic Neighbor Embedding (t-SNE) by effectively preserving both global
structures and local relationships within complex neuronal datasets. Our
\emph{in silico} experiments reveal MDE's capability to capture the evolving
connectivity patterns of simulated neuronal networks, illustrating a clear
trajectory tracking the simulated network development. Complementary \emph{in
vitro} experiments further validate MDE's advantages, highlighting its ability
to identify behavioral differences and connectivity changes in neuronal
cultures over a 35-day observation period. Additionally, we explore the effects
of stimulation on neuronal activity, providing valuable insights into the
plasticity and learning mechanisms of neuronal networks. Our findings
underscore the importance of metric selection in dimensionality reduction,
showing that correlation metrics yield more meaningful embeddings compared to
Euclidean distance. The implications of this research extend to various areas,
including the potential development of therapeutic intervention strategies for
neurological disorders, and the identification of distinct phases of neuronal
activity for advancing cortical-based computing devices.
|
no_new_dataset
| 0.949342
|
2502.20864
|
Mohammad Farhansyah Rifqi
|
Mohammad Rifqi Farhansyah, Iwan Darmawan, Adryan Kusumawardhana, Genta
Indra Winata, Alham Fikri Aji, Derry Tanti Wijaya
|
Do Language Models Understand Honorific Systems in Javanese?
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The Javanese language features a complex system of honorifics that vary
according to the social status of the speaker, listener, and referent. Despite
its cultural and linguistic significance, there has been limited progress in
developing a comprehensive corpus to capture these variations for natural
language processing (NLP) tasks. In this paper, we present Unggah-Ungguh, a
carefully curated dataset designed to encapsulate the nuances of Unggah-Ungguh
Basa, the Javanese speech etiquette framework that dictates the choice of words
and phrases based on social hierarchy and context. Using Unggah-Ungguh, we
assess the ability of language models (LMs) to process various levels of
Javanese honorifics through classification and machine translation tasks. To
further evaluate cross-lingual LMs, we conduct machine translation experiments
between Javanese (at specific honorific levels) and Indonesian. Additionally,
we explore whether LMs can generate contextually appropriate Javanese
honorifics in conversation tasks, where the honorific usage should align with
the social role and contextual cues. Our findings indicate that current LMs
struggle with most honorific levels, exhibiting a bias toward certain honorific
tiers.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 09:05:35 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Farhansyah",
"Mohammad Rifqi",
""
],
[
"Darmawan",
"Iwan",
""
],
[
"Kusumawardhana",
"Adryan",
""
],
[
"Winata",
"Genta Indra",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Wijaya",
"Derry Tanti",
""
]
] |
TITLE: Do Language Models Understand Honorific Systems in Javanese?
ABSTRACT: The Javanese language features a complex system of honorifics that vary
according to the social status of the speaker, listener, and referent. Despite
its cultural and linguistic significance, there has been limited progress in
developing a comprehensive corpus to capture these variations for natural
language processing (NLP) tasks. In this paper, we present Unggah-Ungguh, a
carefully curated dataset designed to encapsulate the nuances of Unggah-Ungguh
Basa, the Javanese speech etiquette framework that dictates the choice of words
and phrases based on social hierarchy and context. Using Unggah-Ungguh, we
assess the ability of language models (LMs) to process various levels of
Javanese honorifics through classification and machine translation tasks. To
further evaluate cross-lingual LMs, we conduct machine translation experiments
between Javanese (at specific honorific levels) and Indonesian. Additionally,
we explore whether LMs can generate contextually appropriate Javanese
honorifics in conversation tasks, where the honorific usage should align with
the social role and contextual cues. Our findings indicate that current LMs
struggle with most honorific levels, exhibiting a bias toward certain honorific
tiers.
|
new_dataset
| 0.95846
|
2502.20869
|
Chunlin Zhong
|
Chunlin Zhong, Shuang Hao, Junhua Wu, Xiaona Chang, Jiwei Jiang, Xiu
Nie, He Tang, Xiang Bai
|
PathVG: A New Benchmark and Dataset for Pathology Visual Grounding
|
10 pages, 4 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the rapid development of computational pathology, many AI-assisted
diagnostic tasks have emerged. Cellular nuclei segmentation can segment various
types of cells for downstream analysis, but it relies on predefined categories
and lacks flexibility. Moreover, pathology visual question answering can
perform image-level understanding but lacks region-level detection capability.
To address this, we propose a new benchmark called Pathology Visual Grounding
(PathVG), which aims to detect regions based on expressions with different
attributes. To evaluate PathVG, we create a new dataset named RefPath which
contains 27,610 images with 33,500 language-grounded boxes. Compared to visual
grounding in other domains, PathVG presents pathological images at multi-scale
and contains expressions with pathological knowledge. In the experimental
study, we found that the biggest challenge was the implicit information
underlying the pathological expressions. Based on this, we proposed Pathology
Knowledge-enhanced Network (PKNet) as the baseline model for PathVG. PKNet
leverages the knowledge-enhancement capabilities of Large Language Models
(LLMs) to convert pathological terms with implicit information into explicit
visual features, and fuses knowledge features with expression features through
the designed Knowledge Fusion Module (KFM). The proposed method achieves
state-of-the-art performance on the PathVG benchmark.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 09:13:01 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Zhong",
"Chunlin",
""
],
[
"Hao",
"Shuang",
""
],
[
"Wu",
"Junhua",
""
],
[
"Chang",
"Xiaona",
""
],
[
"Jiang",
"Jiwei",
""
],
[
"Nie",
"Xiu",
""
],
[
"Tang",
"He",
""
],
[
"Bai",
"Xiang",
""
]
] |
TITLE: PathVG: A New Benchmark and Dataset for Pathology Visual Grounding
ABSTRACT: With the rapid development of computational pathology, many AI-assisted
diagnostic tasks have emerged. Cellular nuclei segmentation can segment various
types of cells for downstream analysis, but it relies on predefined categories
and lacks flexibility. Moreover, pathology visual question answering can
perform image-level understanding but lacks region-level detection capability.
To address this, we propose a new benchmark called Pathology Visual Grounding
(PathVG), which aims to detect regions based on expressions with different
attributes. To evaluate PathVG, we create a new dataset named RefPath which
contains 27,610 images with 33,500 language-grounded boxes. Compared to visual
grounding in other domains, PathVG presents pathological images at multi-scale
and contains expressions with pathological knowledge. In the experimental
study, we found that the biggest challenge was the implicit information
underlying the pathological expressions. Based on this, we proposed Pathology
Knowledge-enhanced Network (PKNet) as the baseline model for PathVG. PKNet
leverages the knowledge-enhancement capabilities of Large Language Models
(LLMs) to convert pathological terms with implicit information into explicit
visual features, and fuses knowledge features with expression features through
the designed Knowledge Fusion Module (KFM). The proposed method achieves
state-of-the-art performance on the PathVG benchmark.
|
new_dataset
| 0.95594
|
2502.20877
|
Haozhong Sun
|
Haozhong Sun, Zhongsen Li, Chenlin Du, Haokun Li, Yajie Wang, Huijun
Chen
|
Guiding Quantitative MRI Reconstruction with Phase-wise Uncertainty
|
Submitted to MICCAI2025
| null | null | null |
eess.IV cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Quantitative magnetic resonance imaging (qMRI) requires multi-phase
acquisition, often relying on reduced data sampling and reconstruction
algorithms to accelerate scans, which inherently poses an ill-posed inverse
problem. While many studies focus on measuring uncertainty during this process,
few explore how to leverage it to enhance reconstruction performance. In this
paper, we introduce PUQ, a novel approach that pioneers the use of uncertainty
information for qMRI reconstruction. PUQ employs a two-stage reconstruction
and parameter fitting framework, where phase-wise uncertainty is estimated
during reconstruction and utilized in the fitting stage. This design allows
uncertainty to reflect the reliability of different phases and guide
information integration during parameter fitting. We evaluated PUQ on in vivo
T1 and T2 mapping datasets from healthy subjects. Compared to existing qMRI
reconstruction methods, PUQ achieved state-of-the-art performance in parameter
mappings, demonstrating the effectiveness of uncertainty guidance.
Our code is available at https://anonymous.4open.science/r/PUQ-75B2/.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 09:21:01 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Sun",
"Haozhong",
""
],
[
"Li",
"Zhongsen",
""
],
[
"Du",
"Chenlin",
""
],
[
"Li",
"Haokun",
""
],
[
"Wang",
"Yajie",
""
],
[
"Chen",
"Huijun",
""
]
] |
TITLE: Guiding Quantitative MRI Reconstruction with Phase-wise Uncertainty
ABSTRACT: Quantitative magnetic resonance imaging (qMRI) requires multi-phase
acquisition, often relying on reduced data sampling and reconstruction
algorithms to accelerate scans, which inherently poses an ill-posed inverse
problem. While many studies focus on measuring uncertainty during this process,
few explore how to leverage it to enhance reconstruction performance. In this
paper, we introduce PUQ, a novel approach that pioneers the use of uncertainty
information for qMRI reconstruction. PUQ employs a two-stage reconstruction
and parameter fitting framework, where phase-wise uncertainty is estimated
during reconstruction and utilized in the fitting stage. This design allows
uncertainty to reflect the reliability of different phases and guide
information integration during parameter fitting. We evaluated PUQ on in vivo
T1 and T2 mapping datasets from healthy subjects. Compared to existing qMRI
reconstruction methods, PUQ achieved state-of-the-art performance in parameter
mappings, demonstrating the effectiveness of uncertainty guidance.
Our code is available at https://anonymous.4open.science/r/PUQ-75B2/.
|
no_new_dataset
| 0.948917
|
2502.20879
|
Bj\"orn Braun
|
Bj\"orn Braun, Rayan Armani, Manuel Meier, Max Moebus, Christian Holz
|
egoPPG: Heart Rate Estimation from Eye-Tracking Cameras in Egocentric
Systems to Benefit Downstream Vision Tasks
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Egocentric vision systems aim to understand the spatial surroundings and the
wearer's behavior inside it, including motions, activities, and interaction
with objects. Since a person's attention and situational responses are
influenced by their physiological state, egocentric systems must also detect
this state for better context awareness. In this paper, we propose egoPPG, a
novel task for egocentric vision systems to extract a person's heart rate (HR)
as a key indicator of the wearer's physiological state from the system's
built-in sensors (e.g., eye tracking videos). We then propose EgoPulseFormer, a
method that solely takes eye-tracking video as input to estimate a person's
photoplethysmogram (PPG) from areas around the eyes to track HR values, without
requiring additional or dedicated hardware. We demonstrate the downstream
benefit of EgoPulseFormer on EgoExo4D, where we find that augmenting existing
models with tracked HR values improves proficiency estimation by 14%. To train
and validate EgoPulseFormer, we collected a dataset of 13+ hours of
eye-tracking videos from Project Aria and contact-based blood volume pulse
signals as well as an electrocardiogram (ECG) for ground-truth HR values. 25
participants performed diverse everyday activities such as office work,
cooking, dancing, and exercising, which induced significant natural motion and
HR variation (44-164 bpm). Our model robustly estimates HR (MAE=8.82 bpm) and
captures patterns (r=0.81). Our results show how egocentric systems may unify
environmental and physiological tracking to better understand user actions and
internal states.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 09:23:40 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Braun",
"Björn",
""
],
[
"Armani",
"Rayan",
""
],
[
"Meier",
"Manuel",
""
],
[
"Moebus",
"Max",
""
],
[
"Holz",
"Christian",
""
]
] |
TITLE: egoPPG: Heart Rate Estimation from Eye-Tracking Cameras in Egocentric
Systems to Benefit Downstream Vision Tasks
ABSTRACT: Egocentric vision systems aim to understand the spatial surroundings and the
wearer's behavior inside it, including motions, activities, and interaction
with objects. Since a person's attention and situational responses are
influenced by their physiological state, egocentric systems must also detect
this state for better context awareness. In this paper, we propose egoPPG, a
novel task for egocentric vision systems to extract a person's heart rate (HR)
as a key indicator of the wearer's physiological state from the system's
built-in sensors (e.g., eye tracking videos). We then propose EgoPulseFormer, a
method that solely takes eye-tracking video as input to estimate a person's
photoplethysmogram (PPG) from areas around the eyes to track HR values, without
requiring additional or dedicated hardware. We demonstrate the downstream
benefit of EgoPulseFormer on EgoExo4D, where we find that augmenting existing
models with tracked HR values improves proficiency estimation by 14%. To train
and validate EgoPulseFormer, we collected a dataset of 13+ hours of
eye-tracking videos from Project Aria and contact-based blood volume pulse
signals as well as an electrocardiogram (ECG) for ground-truth HR values. 25
participants performed diverse everyday activities such as office work,
cooking, dancing, and exercising, which induced significant natural motion and
HR variation (44-164 bpm). Our model robustly estimates HR (MAE=8.82 bpm) and
captures patterns (r=0.81). Our results show how egocentric systems may unify
environmental and physiological tracking to better understand user actions and
internal states.
|
new_dataset
| 0.96128
|
2502.20885
|
Jhony Heriberto Giraldo Zuluaga
|
Amadou S. Sangare, Nicolas Dunou, Jhony H. Giraldo, Fragkiskos D.
Malliaros
|
A Fused Gromov-Wasserstein Approach to Subgraph Contrastive Learning
| null |
Transactions on Machine Learning Research, 2025
| null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning has become a key method for training deep learning
models when labeled data is scarce or unavailable. While graph machine learning
holds great promise across various domains, the design of effective pretext
tasks for self-supervised graph representation learning remains challenging.
Contrastive learning, a popular approach in graph self-supervised learning,
leverages positive and negative pairs to compute a contrastive loss function.
However, current graph contrastive learning methods often struggle to fully use
structural patterns and node similarities. To address these issues, we present
a new method called Fused Gromov Wasserstein Subgraph Contrastive Learning
(FOSSIL). Our model integrates node-level and subgraph-level contrastive
learning, seamlessly combining a standard node-level contrastive loss with the
Fused Gromov-Wasserstein distance. This combination helps our method capture
both node features and graph structure together. Importantly, our approach
works well with both homophilic and heterophilic graphs and can dynamically
create views for generating positive and negative pairs. Through extensive
experiments on benchmark graph datasets, we show that FOSSIL outperforms or
achieves competitive performance compared to current state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 09:32:07 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Sangare",
"Amadou S.",
""
],
[
"Dunou",
"Nicolas",
""
],
[
"Giraldo",
"Jhony H.",
""
],
[
"Malliaros",
"Fragkiskos D.",
""
]
] |
TITLE: A Fused Gromov-Wasserstein Approach to Subgraph Contrastive Learning
ABSTRACT: Self-supervised learning has become a key method for training deep learning
models when labeled data is scarce or unavailable. While graph machine learning
holds great promise across various domains, the design of effective pretext
tasks for self-supervised graph representation learning remains challenging.
Contrastive learning, a popular approach in graph self-supervised learning,
leverages positive and negative pairs to compute a contrastive loss function.
However, current graph contrastive learning methods often struggle to fully use
structural patterns and node similarities. To address these issues, we present
a new method called Fused Gromov Wasserstein Subgraph Contrastive Learning
(FOSSIL). Our model integrates node-level and subgraph-level contrastive
learning, seamlessly combining a standard node-level contrastive loss with the
Fused Gromov-Wasserstein distance. This combination helps our method capture
both node features and graph structure together. Importantly, our approach
works well with both homophilic and heterophilic graphs and can dynamically
create views for generating positive and negative pairs. Through extensive
experiments on benchmark graph datasets, we show that FOSSIL outperforms or
achieves competitive performance compared to current state-of-the-art methods.
|
no_new_dataset
| 0.949012
|
2502.20897
|
Matthias Orlikowski
|
Matthias Orlikowski, Jiaxin Pei, Paul R\"ottger, Philipp Cimiano,
David Jurgens, Dirk Hovy
|
Beyond Demographics: Fine-tuning Large Language Models to Predict
Individuals' Subjective Text Perceptions
|
Reviewed ARR December 2024
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
People naturally vary in their annotations for subjective questions and some
of this variation is thought to be due to the person's sociodemographic
characteristics. LLMs have also been used to label data, but recent work has
shown that models perform poorly when prompted with sociodemographic
attributes, suggesting limited inherent sociodemographic knowledge. Here, we
ask whether LLMs can be trained to be accurate sociodemographic models of
annotator variation. Using a curated dataset of five tasks with standardized
sociodemographics, we show that models do improve in sociodemographic prompting
when trained but that this performance gain is largely due to models learning
annotator-specific behaviour rather than sociodemographic patterns. Across all
tasks, our results suggest that models learn little meaningful connection
between sociodemographics and annotation, raising doubts about the current use
of LLMs for simulating sociodemographic variation and behaviour.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 09:53:42 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Orlikowski",
"Matthias",
""
],
[
"Pei",
"Jiaxin",
""
],
[
"Röttger",
"Paul",
""
],
[
"Cimiano",
"Philipp",
""
],
[
"Jurgens",
"David",
""
],
[
"Hovy",
"Dirk",
""
]
] |
TITLE: Beyond Demographics: Fine-tuning Large Language Models to Predict
Individuals' Subjective Text Perceptions
ABSTRACT: People naturally vary in their annotations for subjective questions and some
of this variation is thought to be due to the person's sociodemographic
characteristics. LLMs have also been used to label data, but recent work has
shown that models perform poorly when prompted with sociodemographic
attributes, suggesting limited inherent sociodemographic knowledge. Here, we
ask whether LLMs can be trained to be accurate sociodemographic models of
annotator variation. Using a curated dataset of five tasks with standardized
sociodemographics, we show that models do improve in sociodemographic prompting
when trained but that this performance gain is largely due to models learning
annotator-specific behaviour rather than sociodemographic patterns. Across all
tasks, our results suggest that models learn little meaningful connection
between sociodemographics and annotation, raising doubts about the current use
of LLMs for simulating sociodemographic variation and behaviour.
|
new_dataset
| 0.950732
|
2502.20925
|
Nu Hoang
|
Bao Duong, Nu Hoang, Thin Nguyen
|
Amortized Conditional Independence Testing
|
Accepted at PAKDD 2025
| null | null | null |
stat.ML cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Testing for the conditional independence structure in data is a fundamental
and critical task in statistics and machine learning, which finds natural
applications in causal discovery - a highly relevant problem to many scientific
disciplines. Existing methods seek to design explicit test statistics that
quantify the degree of conditional dependence, which is highly challenging yet
cannot capture nor utilize prior knowledge in a data-driven manner. In this
study, an entirely new approach is introduced, where we instead propose to
amortize conditional independence testing and devise ACID - a novel
transformer-based neural network architecture that learns to test for
conditional independence. ACID can be trained on synthetic data in a supervised
learning fashion, and the learned model can then be applied to any dataset of
a similar nature or adapted to new domains by fine-tuning with a negligible
computational cost. Our extensive empirical evaluations on both synthetic and
real data reveal that ACID consistently achieves state-of-the-art performance
against existing baselines under multiple metrics, and is able to generalize
robustly to unseen sample sizes, dimensionalities, as well as non-linearities
with a remarkably low inference time.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 10:29:56 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Duong",
"Bao",
""
],
[
"Hoang",
"Nu",
""
],
[
"Nguyen",
"Thin",
""
]
] |
TITLE: Amortized Conditional Independence Testing
ABSTRACT: Testing for the conditional independence structure in data is a fundamental
and critical task in statistics and machine learning, which finds natural
applications in causal discovery - a highly relevant problem to many scientific
disciplines. Existing methods seek to design explicit test statistics that
quantify the degree of conditional dependence, which is highly challenging yet
cannot capture nor utilize prior knowledge in a data-driven manner. In this
study, an entirely new approach is introduced, where we instead propose to
amortize conditional independence testing and devise ACID - a novel
transformer-based neural network architecture that learns to test for
conditional independence. ACID can be trained on synthetic data in a supervised
learning fashion, and the learned model can then be applied to any dataset of
a similar nature or adapted to new domains by fine-tuning with a negligible
computational cost. Our extensive empirical evaluations on both synthetic and
real data reveal that ACID consistently achieves state-of-the-art performance
against existing baselines under multiple metrics, and is able to generalize
robustly to unseen sample sizes, dimensionalities, as well as non-linearities
with a remarkably low inference time.
|
no_new_dataset
| 0.947962
|
2502.20931
|
Ilya Koziev
|
Ilya Koziev
|
Automated Evaluation of Meter and Rhyme in Russian Generative and
Human-Authored Poetry
|
7 pages, 1 figure
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Generative poetry systems require effective tools for data engineering and
automatic evaluation, particularly to assess how well a poem adheres to
versification rules, such as the correct alternation of stressed and unstressed
syllables and the presence of rhymes.
In this work, we introduce the Russian Poetry Scansion Tool library designed
for stress mark placement in Russian-language syllabo-tonic poetry, rhyme
detection, and identification of defects of poeticness. Additionally, we
release RIFMA -- a dataset of poem fragments spanning various genres and forms,
annotated with stress marks. This dataset can be used to evaluate the
capability of modern large language models to accurately place stress marks in
poetic texts.
The published resources provide valuable tools for researchers and
practitioners in the field of creative generative AI, facilitating advancements
in the development and evaluation of generative poetry systems.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 10:39:07 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Koziev",
"Ilya",
""
]
] |
TITLE: Automated Evaluation of Meter and Rhyme in Russian Generative and
Human-Authored Poetry
ABSTRACT: Generative poetry systems require effective tools for data engineering and
automatic evaluation, particularly to assess how well a poem adheres to
versification rules, such as the correct alternation of stressed and unstressed
syllables and the presence of rhymes.
In this work, we introduce the Russian Poetry Scansion Tool library designed
for stress mark placement in Russian-language syllabo-tonic poetry, rhyme
detection, and identification of defects of poeticness. Additionally, we
release RIFMA -- a dataset of poem fragments spanning various genres and forms,
annotated with stress marks. This dataset can be used to evaluate the
capability of modern large language models to accurately place stress marks in
poetic texts.
The published resources provide valuable tools for researchers and
practitioners in the field of creative generative AI, facilitating advancements
in the development and evaluation of generative poetry systems.
|
new_dataset
| 0.960473
|
2502.20936
|
Michael Dinzinger
|
Michael Dinzinger and Laura Caspari and Kanishka Ghosh Dastidar and
Jelena Mitrovi\'c and Michael Granitzer
|
WebFAQ: A Multilingual Collection of Natural Q&A Datasets for Dense
Retrieval
|
10 pages, 3 figures, 7 tables
| null | null | null |
cs.CL cs.AI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
We present WebFAQ, a large-scale collection of open-domain question answering
datasets derived from FAQ-style schema.org annotations. In total, the data
collection consists of 96 million natural question-answer (QA) pairs across 75
languages, including 47 million (49%) non-English samples. WebFAQ further
serves as the foundation for 20 monolingual retrieval benchmarks with a total
size of 11.2 million QA pairs (5.9 million non-English). These datasets are
carefully curated through refined filtering and near-duplicate detection,
yielding high-quality resources for training and evaluating multilingual dense
retrieval models. To empirically confirm WebFAQ's efficacy, we use the
collected QAs to fine-tune an in-domain pretrained XLM-RoBERTa model. Through
this process of dataset-specific fine-tuning, the model achieves significant
retrieval performance gains, which generalize - beyond WebFAQ - to other
multilingual retrieval benchmarks evaluated in a zero-shot setting. Last but not
least, we utilize WebFAQ to construct a set of QA-aligned bilingual corpora
spanning over 1000 language pairs using state-of-the-art bitext mining and
automated LLM-assessed translation evaluation. Due to our advanced, automated
method of bitext dataset generation, the resulting bilingual corpora
demonstrate higher translation quality compared to similar datasets. WebFAQ and
all associated resources are publicly available on GitHub and HuggingFace.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 10:46:52 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Dinzinger",
"Michael",
""
],
[
"Caspari",
"Laura",
""
],
[
"Dastidar",
"Kanishka Ghosh",
""
],
[
"Mitrović",
"Jelena",
""
],
[
"Granitzer",
"Michael",
""
]
] |
TITLE: WebFAQ: A Multilingual Collection of Natural Q&A Datasets for Dense
Retrieval
ABSTRACT: We present WebFAQ, a large-scale collection of open-domain question answering
datasets derived from FAQ-style schema.org annotations. In total, the data
collection consists of 96 million natural question-answer (QA) pairs across 75
languages, including 47 million (49%) non-English samples. WebFAQ further
serves as the foundation for 20 monolingual retrieval benchmarks with a total
size of 11.2 million QA pairs (5.9 million non-English). These datasets are
carefully curated through refined filtering and near-duplicate detection,
yielding high-quality resources for training and evaluating multilingual dense
retrieval models. To empirically confirm WebFAQ's efficacy, we use the
collected QAs to fine-tune an in-domain pretrained XLM-RoBERTa model. Through
this process of dataset-specific fine-tuning, the model achieves significant
retrieval performance gains, which generalize - beyond WebFAQ - to other
multilingual retrieval benchmarks evaluated in a zero-shot setting. Last but not
least, we utilize WebFAQ to construct a set of QA-aligned bilingual corpora
spanning over 1000 language pairs using state-of-the-art bitext mining and
automated LLM-assessed translation evaluation. Due to our advanced, automated
method of bitext dataset generation, the resulting bilingual corpora
demonstrate higher translation quality compared to similar datasets. WebFAQ and
all associated resources are publicly available on GitHub and HuggingFace.
|
new_dataset
| 0.519312
|
2502.20948
|
Petr Sokerin Mr
|
Petr Sokerin, Dmitry Anikin, Sofia Krehova, Alexey Zaytsev
|
Concealed Adversarial attacks on neural networks for sequential data
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emergence of deep learning led to the broad usage of neural networks in
the time series domain for various applications, including finance and
medicine. While powerful, these models are prone to adversarial attacks: a
benign targeted perturbation of input data leads to significant changes in a
classifier's output. However, formally small attacks in the time series domain
are easily detected by the human eye or a simple detector model.
We develop a concealed adversarial attack for different time-series models:
it provides more realistic perturbations, being hard to detect by a human or
model discriminator. To achieve this goal, the proposed adversarial attack
maximizes an aggregation of a classifier and a trained discriminator loss. To
make the attack stronger, we also propose a training procedure for a
discriminator that provides broader coverage of possible attacks. Extensive
benchmarking on six UCR time series datasets across four diverse architectures
- including recurrent, convolutional, state-space, and transformer-based models
- demonstrates the superiority of our attack for a concealability-efficiency
trade-off. Our findings highlight the growing challenge of designing robust
time series models, emphasizing the need for improved defenses against
realistic and effective attacks.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 11:03:32 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Sokerin",
"Petr",
""
],
[
"Anikin",
"Dmitry",
""
],
[
"Krehova",
"Sofia",
""
],
[
"Zaytsev",
"Alexey",
""
]
] |
TITLE: Concealed Adversarial attacks on neural networks for sequential data
ABSTRACT: The emergence of deep learning led to the broad usage of neural networks in
the time series domain for various applications, including finance and
medicine. While powerful, these models are prone to adversarial attacks: a
benign targeted perturbation of input data leads to significant changes in a
classifier's output. However, formally small attacks in the time series domain
are easily detected by the human eye or a simple detector model.
We develop a concealed adversarial attack for different time-series models:
it provides more realistic perturbations, being hard to detect by a human or
model discriminator. To achieve this goal, the proposed adversarial attack
maximizes an aggregation of a classifier and a trained discriminator loss. To
make the attack stronger, we also propose a training procedure for a
discriminator that provides broader coverage of possible attacks. Extensive
benchmarking on six UCR time series datasets across four diverse architectures
- including recurrent, convolutional, state-space, and transformer-based models
- demonstrates the superiority of our attack for a concealability-efficiency
trade-off. Our findings highlight the growing challenge of designing robust
time series models, emphasizing the need for improved defenses against
realistic and effective attacks.
|
no_new_dataset
| 0.945349
|
2502.20954
|
Jindong Li
|
Jindong Li and Tim Hamann and Jens Barth and Peter Kaempf and Dario
Zanca and Bjoern Eskofier
|
Robust and Efficient Writer-Independent IMU-Based Handwriting
Recognization
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online handwriting recognition (HWR) using data from inertial measurement
units (IMUs) remains challenging due to variations in writing styles and the
limited availability of high-quality annotated datasets. Traditional models
often struggle to recognize handwriting from unseen writers, making
writer-independent (WI) recognition a crucial but difficult problem. This paper
presents an HWR model with an encoder-decoder structure for IMU data, featuring
a CNN-based encoder for feature extraction and a BiLSTM decoder for sequence
modeling, which supports inputs of varying lengths. Our approach demonstrates
strong robustness and data efficiency, outperforming existing methods on WI
datasets, including the WI split of the OnHW dataset and our own dataset.
Extensive evaluations show that our model maintains high accuracy across
different age groups and writing conditions while effectively learning from
limited data. Through comprehensive ablation studies, we analyze key design
choices, achieving a balance between accuracy and efficiency. These findings
contribute to the development of more adaptable and scalable HWR systems for
real-world applications.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 11:09:28 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Li",
"Jindong",
""
],
[
"Hamann",
"Tim",
""
],
[
"Barth",
"Jens",
""
],
[
"Kaempf",
"Peter",
""
],
[
"Zanca",
"Dario",
""
],
[
"Eskofier",
"Bjoern",
""
]
] |
TITLE: Robust and Efficient Writer-Independent IMU-Based Handwriting
Recognization
ABSTRACT: Online handwriting recognition (HWR) using data from inertial measurement
units (IMUs) remains challenging due to variations in writing styles and the
limited availability of high-quality annotated datasets. Traditional models
often struggle to recognize handwriting from unseen writers, making
writer-independent (WI) recognition a crucial but difficult problem. This paper
presents an HWR model with an encoder-decoder structure for IMU data, featuring
a CNN-based encoder for feature extraction and a BiLSTM decoder for sequence
modeling, which supports inputs of varying lengths. Our approach demonstrates
strong robustness and data efficiency, outperforming existing methods on WI
datasets, including the WI split of the OnHW dataset and our own dataset.
Extensive evaluations show that our model maintains high accuracy across
different age groups and writing conditions while effectively learning from
limited data. Through comprehensive ablation studies, we analyze key design
choices, achieving a balance between accuracy and efficiency. These findings
contribute to the development of more adaptable and scalable HWR systems for
real-world applications.
|
new_dataset
| 0.966663
|
2502.20957
|
Giseung Park
|
Giseung Park, Youngchul Sung
|
Reward Dimension Reduction for Scalable Multi-Objective Reinforcement
Learning
|
Accepted to ICLR 2025
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a simple yet effective reward dimension reduction
method to tackle the scalability challenges of multi-objective reinforcement
learning algorithms. While most existing approaches focus on optimizing two to
four objectives, their abilities to scale to environments with more objectives
remain uncertain. Our method uses a dimension reduction approach to enhance
learning efficiency and policy performance in multi-objective settings. While
most traditional dimension reduction methods are designed for static datasets,
our approach is tailored for online learning and preserves Pareto-optimality
after transformation. We propose a new training and evaluation framework for
reward dimension reduction in multi-objective reinforcement learning and
demonstrate the superiority of our method in environments including one with
sixteen objectives, significantly outperforming existing online dimension
reduction methods.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 11:13:23 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Park",
"Giseung",
""
],
[
"Sung",
"Youngchul",
""
]
] |
TITLE: Reward Dimension Reduction for Scalable Multi-Objective Reinforcement
Learning
ABSTRACT: In this paper, we introduce a simple yet effective reward dimension reduction
method to tackle the scalability challenges of multi-objective reinforcement
learning algorithms. While most existing approaches focus on optimizing two to
four objectives, their abilities to scale to environments with more objectives
remain uncertain. Our method uses a dimension reduction approach to enhance
learning efficiency and policy performance in multi-objective settings. While
most traditional dimension reduction methods are designed for static datasets,
our approach is tailored for online learning and preserves Pareto-optimality
after transformation. We propose a new training and evaluation framework for
reward dimension reduction in multi-objective reinforcement learning and
demonstrate the superiority of our method in environments including one with
sixteen objectives, significantly outperforming existing online dimension
reduction methods.
|
no_new_dataset
| 0.946101
|
2502.20966
|
Richard Scott Bergna
|
Richard Bergna, Stefan Depeweg, Sergio Calvo Ordonez, Jonathan Plenk,
Alvaro Cartea, Jose Miguel Hernandez-Lobato
|
Post-Hoc Uncertainty Quantification in Pre-Trained Neural Networks via
Activation-Level Gaussian Processes
|
10 pages, 8 figures, 7th Symposium on Advances in Approximate
Bayesian Inference
| null | null | null |
stat.ML cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Uncertainty quantification in neural networks through methods such as
Dropout, Bayesian neural networks and Laplace approximations is either prone to
underfitting or computationally demanding, rendering these approaches
impractical for large-scale datasets. In this work, we address these
shortcomings by shifting the focus from uncertainty in the weight space to
uncertainty at the activation level, via Gaussian processes. More specifically,
we introduce the Gaussian Process Activation function (GAPA) to capture
neuron-level uncertainties. Our approach operates in a post-hoc manner,
preserving the original mean predictions of the pre-trained neural network and
thereby avoiding the underfitting issues commonly encountered in previous
methods. We propose two methods. The first, GAPA-Free, employs empirical kernel
learning from the training data for the hyperparameters and is highly efficient
during training. The second, GAPA-Variational, learns the hyperparameters via
gradient descent on the kernels, thus affording greater flexibility. Empirical
results demonstrate that GAPA-Variational outperforms the Laplace approximation
on most datasets in at least one of the uncertainty quantification metrics.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 11:29:06 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Bergna",
"Richard",
""
],
[
"Depeweg",
"Stefan",
""
],
[
"Ordonez",
"Sergio Calvo",
""
],
[
"Plenk",
"Jonathan",
""
],
[
"Cartea",
"Alvaro",
""
],
[
"Hernandez-Lobato",
"Jose Miguel",
""
]
] |
TITLE: Post-Hoc Uncertainty Quantification in Pre-Trained Neural Networks via
Activation-Level Gaussian Processes
ABSTRACT: Uncertainty quantification in neural networks through methods such as
Dropout, Bayesian neural networks and Laplace approximations is either prone to
underfitting or computationally demanding, rendering these approaches
impractical for large-scale datasets. In this work, we address these
shortcomings by shifting the focus from uncertainty in the weight space to
uncertainty at the activation level, via Gaussian processes. More specifically,
we introduce the Gaussian Process Activation function (GAPA) to capture
neuron-level uncertainties. Our approach operates in a post-hoc manner,
preserving the original mean predictions of the pre-trained neural network and
thereby avoiding the underfitting issues commonly encountered in previous
methods. We propose two methods. The first, GAPA-Free, employs empirical kernel
learning from the training data for the hyperparameters and is highly efficient
during training. The second, GAPA-Variational, learns the hyperparameters via
gradient descent on the kernels, thus affording greater flexibility. Empirical
results demonstrate that GAPA-Variational outperforms the Laplace approximation
on most datasets in at least one of the uncertainty quantification metrics.
|
no_new_dataset
| 0.952926
|
2502.20975
|
Naman Bansal
|
Naman Bansal, Yash mahajan, Sanjeev Sinha, Santu Karmaker
|
Set-Theoretic Compositionality of Sentence Embeddings
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Sentence encoders play a pivotal role in various NLP tasks; hence, an
accurate evaluation of their compositional properties is paramount. However,
existing evaluation methods predominantly focus on goal task-specific
performance. This leaves a significant gap in understanding how well sentence
embeddings demonstrate fundamental compositional properties in a
task-independent context. Leveraging classical set theory, we address this gap
by proposing six criteria based on three core "set-like"
compositions/operations: \textit{TextOverlap}, \textit{TextDifference}, and
\textit{TextUnion}. We systematically evaluate $7$ classical and $9$ Large
Language Model (LLM)-based sentence encoders to assess their alignment with
these criteria. Our findings show that SBERT consistently demonstrates set-like
compositional properties, surpassing even the latest LLMs. Additionally, we
introduce a new dataset of ~$192$K samples designed to facilitate future
benchmarking efforts on set-like compositionality of sentence embeddings.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 11:40:34 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Bansal",
"Naman",
""
],
[
"mahajan",
"Yash",
""
],
[
"Sinha",
"Sanjeev",
""
],
[
"Karmaker",
"Santu",
""
]
] |
TITLE: Set-Theoretic Compositionality of Sentence Embeddings
ABSTRACT: Sentence encoders play a pivotal role in various NLP tasks; hence, an
accurate evaluation of their compositional properties is paramount. However,
existing evaluation methods predominantly focus on goal task-specific
performance. This leaves a significant gap in understanding how well sentence
embeddings demonstrate fundamental compositional properties in a
task-independent context. Leveraging classical set theory, we address this gap
by proposing six criteria based on three core "set-like"
compositions/operations: \textit{TextOverlap}, \textit{TextDifference}, and
\textit{TextUnion}. We systematically evaluate $7$ classical and $9$ Large
Language Model (LLM)-based sentence encoders to assess their alignment with
these criteria. Our findings show that SBERT consistently demonstrates set-like
compositional properties, surpassing even the latest LLMs. Additionally, we
introduce a new dataset of ~$192$K samples designed to facilitate future
benchmarking efforts on set-like compositionality of sentence embeddings.
|
new_dataset
| 0.963265
|
2502.20981
|
Fuyun Wang
|
Fuyun Wang, Tong Zhang, Yuanzhi Wang, Yide Qiu, Xin Liu, Xu Guo, Zhen
Cui
|
Distribution Prototype Diffusion Learning for Open-set Supervised
Anomaly Detection
|
Accepted by CVPR 2025
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In Open-set Supervised Anomaly Detection (OSAD), the existing methods
typically generate pseudo anomalies to compensate for the scarcity of observed
anomaly samples, while overlooking critical priors of normal samples, leading
to less effective discriminative boundaries. To address this issue, we propose
a Distribution Prototype Diffusion Learning (DPDL) method aimed at enclosing
normal samples within a compact and discriminative distribution space.
Specifically, we construct multiple learnable Gaussian prototypes to create a
latent representation space for abundant and diverse normal samples and learn a
Schr\"odinger bridge to facilitate a diffusive transition toward these
prototypes for normal samples while steering anomaly samples away. Moreover, to
enhance inter-sample separation, we design a dispersion feature learning way in
hyperspherical space, which benefits the identification of out-of-distribution
anomalies. Experimental results demonstrate the effectiveness and superiority
of our proposed DPDL, achieving state-of-the-art performance on 9 public
datasets.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 11:50:50 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Wang",
"Fuyun",
""
],
[
"Zhang",
"Tong",
""
],
[
"Wang",
"Yuanzhi",
""
],
[
"Qiu",
"Yide",
""
],
[
"Liu",
"Xin",
""
],
[
"Guo",
"Xu",
""
],
[
"Cui",
"Zhen",
""
]
] |
TITLE: Distribution Prototype Diffusion Learning for Open-set Supervised
Anomaly Detection
ABSTRACT: In Open-set Supervised Anomaly Detection (OSAD), the existing methods
typically generate pseudo anomalies to compensate for the scarcity of observed
anomaly samples, while overlooking critical priors of normal samples, leading
to less effective discriminative boundaries. To address this issue, we propose
a Distribution Prototype Diffusion Learning (DPDL) method aimed at enclosing
normal samples within a compact and discriminative distribution space.
Specifically, we construct multiple learnable Gaussian prototypes to create a
latent representation space for abundant and diverse normal samples and learn a
Schr\"odinger bridge to facilitate a diffusive transition toward these
prototypes for normal samples while steering anomaly samples away. Moreover, to
enhance inter-sample separation, we design a dispersion feature learning way in
hyperspherical space, which benefits the identification of out-of-distribution
anomalies. Experimental results demonstrate the effectiveness and superiority
of our proposed DPDL, achieving state-of-the-art performance on 9 public
datasets.
|
no_new_dataset
| 0.954052
|
2502.20985
|
Maximilian Rouven Rokuss
|
Maximilian Rokuss, Yannick Kirchhoff, Seval Akbal, Balint Kovacs,
Saikat Roy, Constantin Ulrich, Tassilo Wald, Lukas T. Rotkopf, Heinz-Peter
Schlemmer, Klaus Maier-Hein
|
LesionLocator: Zero-Shot Universal Tumor Segmentation and Tracking in 3D
Whole-Body Imaging
|
Accepted at CVPR 2025
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we present LesionLocator, a framework for zero-shot
longitudinal lesion tracking and segmentation in 3D medical imaging,
establishing the first end-to-end model capable of 4D tracking with dense
spatial prompts. Our model leverages an extensive dataset of 23,262 annotated
medical scans, as well as synthesized longitudinal data across diverse lesion
types. The diversity and scale of our dataset significantly enhances model
generalizability to real-world medical imaging challenges and addresses key
limitations in longitudinal data availability. LesionLocator outperforms all
existing promptable models in lesion segmentation by nearly 10 dice points,
reaching human-level performance, and achieves state-of-the-art results in
lesion tracking, with superior lesion retrieval and segmentation accuracy.
LesionLocator not only sets a new benchmark in universal promptable lesion
segmentation and automated longitudinal lesion tracking but also provides the
first open-access solution of its kind, releasing our synthetic 4D dataset and
model to the community, empowering future advancements in medical imaging. Code
is available at: www.github.com/MIC-DKFZ/LesionLocator
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 11:58:33 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Rokuss",
"Maximilian",
""
],
[
"Kirchhoff",
"Yannick",
""
],
[
"Akbal",
"Seval",
""
],
[
"Kovacs",
"Balint",
""
],
[
"Roy",
"Saikat",
""
],
[
"Ulrich",
"Constantin",
""
],
[
"Wald",
"Tassilo",
""
],
[
"Rotkopf",
"Lukas T.",
""
],
[
"Schlemmer",
"Heinz-Peter",
""
],
[
"Maier-Hein",
"Klaus",
""
]
] |
TITLE: LesionLocator: Zero-Shot Universal Tumor Segmentation and Tracking in 3D
Whole-Body Imaging
ABSTRACT: In this work, we present LesionLocator, a framework for zero-shot
longitudinal lesion tracking and segmentation in 3D medical imaging,
establishing the first end-to-end model capable of 4D tracking with dense
spatial prompts. Our model leverages an extensive dataset of 23,262 annotated
medical scans, as well as synthesized longitudinal data across diverse lesion
types. The diversity and scale of our dataset significantly enhances model
generalizability to real-world medical imaging challenges and addresses key
limitations in longitudinal data availability. LesionLocator outperforms all
existing promptable models in lesion segmentation by nearly 10 dice points,
reaching human-level performance, and achieves state-of-the-art results in
lesion tracking, with superior lesion retrieval and segmentation accuracy.
LesionLocator not only sets a new benchmark in universal promptable lesion
segmentation and automated longitudinal lesion tracking but also provides the
first open-access solution of its kind, releasing our synthetic 4D dataset and
model to the community, empowering future advancements in medical imaging. Code
is available at: www.github.com/MIC-DKFZ/LesionLocator
|
new_dataset
| 0.941277
|
2502.20988
|
Jinguang Gu
|
Qiyuan Li, Haijiang Liu, Caicai Guo, Deyu Chen, Meng Wang, Feng Gao,
Jinguang Gu
|
Merging Clinical Knowledge into Large Language Models for Medical
Research and Applications: A Survey
| null | null | null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Clinical knowledge is the collection of information learned from studies on
the causes, prognosis, diagnosis, and treatment of diseases. This type of
knowledge can improve treatment outcomes and promote physical health. With
the emergence of large language models (LLMs), medical artificial intelligence
(medical AI), which aims to apply academic medical AI systems to real-world
medical scenarios, has entered a new age of development, resulting in excellent
works such as DoctorGPT and Pangu-Drug from academic and industrial research.
However, the field lacks a comprehensive compendium and comparison of building
medical AI systems from academia and industry. Therefore, this survey focuses
on the building paradigms of medical AI systems including the use of clinical
databases, datasets, training pipelines, integrating medical knowledge graphs,
system applications, and evaluation systems. We hope that this survey can help
relevant practical researchers understand the current performance of academic
models in various fields of healthcare, as well as the potential problems and
future directions for implementing these scientific achievements.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 12:00:51 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Li",
"Qiyuan",
""
],
[
"Liu",
"Haijiang",
""
],
[
"Guo",
"Caicai",
""
],
[
"Chen",
"Deyu",
""
],
[
"Wang",
"Meng",
""
],
[
"Gao",
"Feng",
""
],
[
"Gu",
"Jinguang",
""
]
] |
TITLE: Merging Clinical Knowledge into Large Language Models for Medical
Research and Applications: A Survey
ABSTRACT: Clinical knowledge is the collection of information learned from studies on
the causes, prognosis, diagnosis, and treatment of diseases. This type of
knowledge can improve treatment outcomes and promote physical health. With
the emergence of large language models (LLMs), medical artificial intelligence
(medical AI), which aims to apply academic medical AI systems to real-world
medical scenarios, has entered a new age of development, resulting in excellent
works such as DoctorGPT and Pangu-Drug from academic and industrial research.
However, the field lacks a comprehensive compendium and comparison of building
medical AI systems from academia and industry. Therefore, this survey focuses
on the building paradigms of medical AI systems including the use of clinical
databases, datasets, training pipelines, integrating medical knowledge graphs,
system applications, and evaluation systems. We hope that this survey can help
relevant practical researchers understand the current performance of academic
models in various fields of healthcare, as well as the potential problems and
future directions for implementing these scientific achievements.
|
no_new_dataset
| 0.941331
|
2502.20992
|
Jiaxiang Liu
|
Xiusheng Huang, Jiaxiang Liu, Yequan Wang, Jun Zhao, Kang Liu
|
Capability Localization: Capabilities Can be Localized rather than
Individual Knowledge
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Large-scale language models have achieved superior performance in tasks
related to natural language processing; however, it is still unclear how model
parameters affect performance improvement. Previous studies assumed that
individual knowledge is stored in local parameters, and the storage form of
individual knowledge is dispersed parameters, parameter layers, or parameter
chains, which are not unified. We found through fidelity and reliability
evaluation experiments that individual knowledge cannot be localized.
Afterwards, we constructed a dataset for decoupling experiments and discovered
the potential for localizing data commonalities. To further reveal this
phenomenon, this paper proposes a Commonality Neuron Localization (CNL) method,
which successfully locates commonality neurons and achieves a neuron overlap
rate of 96.42% on the GSM8K dataset. Finally, we have demonstrated through
cross data experiments that commonality neurons are a collection of capability
neurons that possess the capability to enhance performance. Our code is
available at https://github.com/nlpkeg/Capability-Neuron-Localization.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 12:22:13 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Huang",
"Xiusheng",
""
],
[
"Liu",
"Jiaxiang",
""
],
[
"Wang",
"Yequan",
""
],
[
"Zhao",
"Jun",
""
],
[
"Liu",
"Kang",
""
]
] |
TITLE: Capability Localization: Capabilities Can be Localized rather than
Individual Knowledge
ABSTRACT: Large-scale language models have achieved superior performance in tasks
related to natural language processing; however, it is still unclear how model
parameters affect performance improvement. Previous studies assumed that
individual knowledge is stored in local parameters, and the storage form of
individual knowledge is dispersed parameters, parameter layers, or parameter
chains, which are not unified. We found through fidelity and reliability
evaluation experiments that individual knowledge cannot be localized.
Afterwards, we constructed a dataset for decoupling experiments and discovered
the potential for localizing data commonalities. To further reveal this
phenomenon, this paper proposes a Commonality Neuron Localization (CNL) method,
which successfully locates commonality neurons and achieves a neuron overlap
rate of 96.42% on the GSM8K dataset. Finally, we have demonstrated through
cross data experiments that commonality neurons are a collection of capability
neurons that possess the capability to enhance performance. Our code is
available at https://github.com/nlpkeg/Capability-Neuron-Localization.
|
new_dataset
| 0.962673
|
2502.21011
|
Junchao Zhu
|
Junchao Zhu, Ruining Deng, Tianyuan Yao, Juming Xiong, Chongyu Qu,
Junlin Guo, Siqi Lu, Yucheng Tang, Daguang Xu, Mengmeng Yin, Yu Wang, Shilin
Zhao, Yaohong Wang, Haichun Yang, Yuankai Huo
|
MagNet: Multi-Level Attention Graph Network for Predicting
High-Resolution Spatial Transcriptomics
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid development of spatial transcriptomics (ST) offers new
opportunities to explore the gene expression patterns within the spatial
microenvironment. Current research integrates pathological images to infer gene
expression, addressing the high costs and time-consuming processes to generate
spatial transcriptomics data. However, as spatial transcriptomics resolution
continues to improve, existing methods remain primarily focused on gene
expression prediction at low-resolution spot levels. These methods face
significant challenges, especially the information bottleneck, when they are
applied to high-resolution HD data. To bridge this gap, this paper introduces
MagNet, a multi-level attention graph network designed for accurate prediction
of high-resolution HD data. MagNet employs cross-attention layers to integrate
features from multi-resolution image patches hierarchically and utilizes a
GAT-Transformer module to aggregate neighborhood information. By integrating
multilevel features, MagNet overcomes the limitations posed by low-resolution
inputs in predicting high-resolution gene expression. We systematically
evaluated MagNet and existing ST prediction models on both a private spatial
transcriptomics dataset and a public dataset at three different resolution
levels. The results demonstrate that MagNet achieves state-of-the-art
performance at both spot level and high-resolution bin levels, providing a
novel methodology and benchmark for future research and applications in
high-resolution HD-level spatial transcriptomics. Code is available at
https://github.com/Junchao-Zhu/MagNet.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 12:55:37 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Zhu",
"Junchao",
""
],
[
"Deng",
"Ruining",
""
],
[
"Yao",
"Tianyuan",
""
],
[
"Xiong",
"Juming",
""
],
[
"Qu",
"Chongyu",
""
],
[
"Guo",
"Junlin",
""
],
[
"Lu",
"Siqi",
""
],
[
"Tang",
"Yucheng",
""
],
[
"Xu",
"Daguang",
""
],
[
"Yin",
"Mengmeng",
""
],
[
"Wang",
"Yu",
""
],
[
"Zhao",
"Shilin",
""
],
[
"Wang",
"Yaohong",
""
],
[
"Yang",
"Haichun",
""
],
[
"Huo",
"Yuankai",
""
]
] |
TITLE: MagNet: Multi-Level Attention Graph Network for Predicting
High-Resolution Spatial Transcriptomics
ABSTRACT: The rapid development of spatial transcriptomics (ST) offers new
opportunities to explore the gene expression patterns within the spatial
microenvironment. Current research integrates pathological images to infer gene
expression, addressing the high costs and time-consuming processes to generate
spatial transcriptomics data. However, as spatial transcriptomics resolution
continues to improve, existing methods remain primarily focused on gene
expression prediction at low-resolution spot levels. These methods face
significant challenges, especially the information bottleneck, when they are
applied to high-resolution HD data. To bridge this gap, this paper introduces
MagNet, a multi-level attention graph network designed for accurate prediction
of high-resolution HD data. MagNet employs cross-attention layers to integrate
features from multi-resolution image patches hierarchically and utilizes a
GAT-Transformer module to aggregate neighborhood information. By integrating
multilevel features, MagNet overcomes the limitations posed by low-resolution
inputs in predicting high-resolution gene expression. We systematically
evaluated MagNet and existing ST prediction models on both a private spatial
transcriptomics dataset and a public dataset at three different resolution
levels. The results demonstrate that MagNet achieves state-of-the-art
performance at both spot level and high-resolution bin levels, providing a
novel methodology and benchmark for future research and applications in
high-resolution HD-level spatial transcriptomics. Code is available at
https://github.com/Junchao-Zhu/MagNet.
|
no_new_dataset
| 0.952882
|
2502.21012
|
Silin Chen
|
Silin Chen, Kangjian Di, Yichu Xu, Han-Jia Ye, Wenhan Luo, Ningmu Zou
|
FedDyMem: Efficient Federated Learning with Dynamic Memory and
Memory-Reduce for Unsupervised Image Anomaly Detection
| null | null | null | null |
cs.DC cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised image anomaly detection (UAD) has become a critical process in
industrial and medical applications, but it faces growing challenges due to
increasing concerns over data privacy. The limited class diversity inherent to
one-class classification tasks, combined with distribution biases caused by
variations in products across and within clients, poses significant challenges
for preserving data privacy with federated UAD. Thus, this article proposes an
efficient federated learning method with dynamic memory and memory-reduce for
unsupervised image anomaly detection, called FedDyMem. Considering all client
data belongs to a single class (i.e., normal sample) in UAD and the
distribution of intra-class features demonstrates significant skewness,
FedDyMem facilitates knowledge sharing between the client and server through
the client's dynamic memory bank instead of model parameters. In the local
clients, a memory generator and a metric loss are employed to improve the
consistency of the feature distribution for normal samples, leveraging the
local model to update the memory bank dynamically. For efficient communication,
a memory-reduce method based on weighted averages is proposed to significantly
decrease the scale of memory banks. On the server, global memory is constructed
and distributed to individual clients through k-means aggregation. Experiments
conducted on six industrial and medical datasets, comprising a mixture of six
products or health screening types derived from eleven public datasets,
demonstrate the effectiveness of FedDyMem.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 12:55:58 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Chen",
"Silin",
""
],
[
"Di",
"Kangjian",
""
],
[
"Xu",
"Yichu",
""
],
[
"Ye",
"Han-Jia",
""
],
[
"Luo",
"Wenhan",
""
],
[
"Zou",
"Ningmu",
""
]
] |
TITLE: FedDyMem: Efficient Federated Learning with Dynamic Memory and
Memory-Reduce for Unsupervised Image Anomaly Detection
ABSTRACT: Unsupervised image anomaly detection (UAD) has become a critical process in
industrial and medical applications, but it faces growing challenges due to
increasing concerns over data privacy. The limited class diversity inherent to
one-class classification tasks, combined with distribution biases caused by
variations in products across and within clients, poses significant challenges
for preserving data privacy with federated UAD. Thus, this article proposes an
efficient federated learning method with dynamic memory and memory-reduce for
unsupervised image anomaly detection, called FedDyMem. Considering that all
client data belong to a single class (i.e., normal samples) in UAD and that the
distribution of intra-class features exhibits significant skewness,
FedDyMem facilitates knowledge sharing between the client and server through
the client's dynamic memory bank instead of model parameters. In the local
clients, a memory generator and a metric loss are employed to improve the
consistency of the feature distribution for normal samples, leveraging the
local model to update the memory bank dynamically. For efficient communication,
a memory-reduce method based on weighted averages is proposed to significantly
decrease the scale of memory banks. On the server, global memory is constructed
and distributed to individual clients through k-means aggregation. Experiments
conducted on six industrial and medical datasets, comprising a mixture of six
products or health screening types derived from eleven public datasets,
demonstrate the effectiveness of FedDyMem.
|
no_new_dataset
| 0.951863
|
2502.21034
|
Youran Zhou
|
Youran Zhou, Jianzhong Qi
|
Synthesizing Tabular Data Using Selectivity Enhanced Generative
Adversarial Networks
|
This thesis submitted to the University of Melbourne for partial
fulfillment of the degree of Master of Data Science
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
As E-commerce platforms face surging transactions during major shopping
events like Black Friday, stress testing with synthesized data is crucial for
resource planning. Most recent studies use Generative Adversarial Networks
(GANs) to generate tabular data while ensuring privacy and machine learning
utility. However, these methods overlook the computational demands of
processing GAN-generated data, making them unsuitable for E-commerce stress
testing.
This thesis introduces a novel GAN-based approach incorporating query
selectivity constraints, a key factor in database transaction processing. We
integrate a pre-trained deep neural network to maintain selectivity consistency
between real and synthetic data. Our method, tested on five real-world
datasets, outperforms three state-of-the-art GANs and a VAE model, improving
selectivity estimation accuracy by up to 20% and machine learning utility by
up to 6%.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 13:26:41 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Zhou",
"Youran",
""
],
[
"Qi",
"Jianzhong",
""
]
] |
TITLE: Synthesizing Tabular Data Using Selectivity Enhanced Generative
Adversarial Networks
ABSTRACT: As E-commerce platforms face surging transactions during major shopping
events like Black Friday, stress testing with synthesized data is crucial for
resource planning. Most recent studies use Generative Adversarial Networks
(GANs) to generate tabular data while ensuring privacy and machine learning
utility. However, these methods overlook the computational demands of
processing GAN-generated data, making them unsuitable for E-commerce stress
testing.
This thesis introduces a novel GAN-based approach incorporating query
selectivity constraints, a key factor in database transaction processing. We
integrate a pre-trained deep neural network to maintain selectivity consistency
between real and synthetic data. Our method, tested on five real-world
datasets, outperforms three state-of-the-art GANs and a VAE model, improving
selectivity estimation accuracy by up to 20% and machine learning utility by
up to 6%.
|
no_new_dataset
| 0.9455
|
2502.21035
|
Melanie Schaller Dr.
|
Melanie Schaller and Bodo Rosenhahn
|
S4ConvD: Adaptive Scaling and Frequency Adjustment for Energy-Efficient
Sensor Networks in Smart Buildings
|
Submitted to TOSN Journal
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Predicting energy consumption in smart buildings is challenging due to
dependencies in sensor data and the variability of environmental conditions. We
introduce S4ConvD, a novel convolutional variant of Deep State Space Models
(Deep-SSMs) that minimizes reliance on extensive preprocessing steps. S4ConvD
is designed to optimize runtime in resource-constrained environments. By
implementing adaptive scaling and frequency adjustments, this model is shown to
capture complex temporal patterns in building energy dynamics. Experiments on
the ASHRAE Great Energy Predictor III dataset reveal that S4ConvD outperforms
current benchmarks. Additionally, S4ConvD benefits from significant
improvements in GPU runtime through the use of Block Tiling optimization
techniques. Thus, S4ConvD has the potential for practical deployment in
real-time energy modeling. Furthermore, the complete codebase and dataset are
accessible on GitHub, fostering open-source contributions and facilitating
further research. Our method also promotes resource-efficient model execution,
enhancing both energy forecasting and the potential integration of renewable
energy sources into smart grid systems.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 13:27:25 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Schaller",
"Melanie",
""
],
[
"Rosenhahn",
"Bodo",
""
]
] |
TITLE: S4ConvD: Adaptive Scaling and Frequency Adjustment for Energy-Efficient
Sensor Networks in Smart Buildings
ABSTRACT: Predicting energy consumption in smart buildings is challenging due to
dependencies in sensor data and the variability of environmental conditions. We
introduce S4ConvD, a novel convolutional variant of Deep State Space Models
(Deep-SSMs) that minimizes reliance on extensive preprocessing steps. S4ConvD
is designed to optimize runtime in resource-constrained environments. By
implementing adaptive scaling and frequency adjustments, this model is shown to
capture complex temporal patterns in building energy dynamics. Experiments on
the ASHRAE Great Energy Predictor III dataset reveal that S4ConvD outperforms
current benchmarks. Additionally, S4ConvD benefits from significant
improvements in GPU runtime through the use of Block Tiling optimization
techniques. Thus, S4ConvD has the potential for practical deployment in
real-time energy modeling. Furthermore, the complete codebase and dataset are
accessible on GitHub, fostering open-source contributions and facilitating
further research. Our method also promotes resource-efficient model execution,
enhancing both energy forecasting and the potential integration of renewable
energy sources into smart grid systems.
|
no_new_dataset
| 0.947088
|
2502.21046
|
Jonathan Will
|
Jonathan Will and Lauritz Thamsen and Jonathan Bader and Odej Kao
|
Flora: Efficient Cloud Resource Selection for Big Data Processing via
Job Classification
|
9 pages, 3 figures, 5 tables. Conference: CCGrid 2025
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Distributed dataflow systems like Spark and Flink enable data-parallel
processing of large datasets on clusters of cloud resources. Yet, selecting
appropriate computational resources for dataflow jobs is often challenging. For
efficient execution, individual resource allocations, such as memory and CPU
cores, must meet the specific resource demands of the job. Meanwhile, the
choices of cloud configurations are often plentiful, especially in public
clouds, and the current cost of the available resource options can fluctuate.
Addressing this challenge, we present Flora, a low-overhead approach to
cost-optimizing cloud cluster configurations for big data processing. Flora
lets users categorize jobs according to their data access patterns and derives
suitable cluster resource configurations from executions of test jobs of the
same category, considering current resource costs. In our evaluation on a new
dataset comprising 180 Spark job executions on Google Cloud, Flora's cluster
resource selections exhibit an average deviation below 6% from the most
cost-optimal solution, with a maximum deviation below 24%.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 13:40:44 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Will",
"Jonathan",
""
],
[
"Thamsen",
"Lauritz",
""
],
[
"Bader",
"Jonathan",
""
],
[
"Kao",
"Odej",
""
]
] |
TITLE: Flora: Efficient Cloud Resource Selection for Big Data Processing via
Job Classification
ABSTRACT: Distributed dataflow systems like Spark and Flink enable data-parallel
processing of large datasets on clusters of cloud resources. Yet, selecting
appropriate computational resources for dataflow jobs is often challenging. For
efficient execution, individual resource allocations, such as memory and CPU
cores, must meet the specific resource demands of the job. Meanwhile, the
choices of cloud configurations are often plentiful, especially in public
clouds, and the current cost of the available resource options can fluctuate.
Addressing this challenge, we present Flora, a low-overhead approach to
cost-optimizing cloud cluster configurations for big data processing. Flora
lets users categorize jobs according to their data access patterns and derives
suitable cluster resource configurations from executions of test jobs of the
same category, considering current resource costs. In our evaluation on a new
dataset comprising 180 Spark job executions on Google Cloud, Flora's cluster
resource selections exhibit an average deviation below 6% from the most
cost-optimal solution, with a maximum deviation below 24%.
|
new_dataset
| 0.959421
|
2502.21049
|
Jingru Fu
|
Jingru Fu, Yuqi Zheng, Neel Dey, Daniel Ferreira, Rodrigo Moreno
|
Synthesizing Individualized Aging Brains in Health and Disease with
Generative Models and Parallel Transport
|
20 pages, 9 figures, 6 tables, diffeomorphic registration, parallel
transport, brain aging, medical image generation, Alzheimer's disease
| null | null | null |
cs.CV cs.AI eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Simulating prospective magnetic resonance imaging (MRI) scans from a given
individual brain image is challenging, as it requires accounting for canonical
changes in aging and/or disease progression while also considering the
individual brain's current status and unique characteristics. While current
deep generative models can produce high-resolution anatomically accurate
templates for population-wide studies, their ability to predict future aging
trajectories for individuals remains limited, particularly in capturing
subject-specific neuroanatomical variations over time. In this study, we
introduce Individualized Brain Synthesis (InBrainSyn), a framework for
synthesizing high-resolution subject-specific longitudinal MRI scans that
simulate neurodegeneration in both Alzheimer's disease (AD) and normal aging.
InBrainSyn uses a parallel transport algorithm to adapt the population-level
aging trajectories learned by a generative deep template network, enabling
individualized aging synthesis. As InBrainSyn uses diffeomorphic
transformations to simulate aging, the synthesized images are topologically
consistent with the original anatomy by design. We evaluated InBrainSyn both
quantitatively and qualitatively on AD and healthy control cohorts from the
Open Access Series of Imaging Studies - version 3 dataset. Experimentally,
InBrainSyn can also model neuroanatomical transitions between normal aging and
AD. An evaluation of an external set supports its generalizability. Overall,
with only a single baseline scan, InBrainSyn synthesizes realistic 3D
spatiotemporal T1w MRI scans, producing personalized longitudinal aging
trajectories. The code for InBrainSyn is available at:
https://github.com/Fjr9516/InBrainSyn.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 13:45:09 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Fu",
"Jingru",
""
],
[
"Zheng",
"Yuqi",
""
],
[
"Dey",
"Neel",
""
],
[
"Ferreira",
"Daniel",
""
],
[
"Moreno",
"Rodrigo",
""
]
] |
TITLE: Synthesizing Individualized Aging Brains in Health and Disease with
Generative Models and Parallel Transport
ABSTRACT: Simulating prospective magnetic resonance imaging (MRI) scans from a given
individual brain image is challenging, as it requires accounting for canonical
changes in aging and/or disease progression while also considering the
individual brain's current status and unique characteristics. While current
deep generative models can produce high-resolution anatomically accurate
templates for population-wide studies, their ability to predict future aging
trajectories for individuals remains limited, particularly in capturing
subject-specific neuroanatomical variations over time. In this study, we
introduce Individualized Brain Synthesis (InBrainSyn), a framework for
synthesizing high-resolution subject-specific longitudinal MRI scans that
simulate neurodegeneration in both Alzheimer's disease (AD) and normal aging.
InBrainSyn uses a parallel transport algorithm to adapt the population-level
aging trajectories learned by a generative deep template network, enabling
individualized aging synthesis. As InBrainSyn uses diffeomorphic
transformations to simulate aging, the synthesized images are topologically
consistent with the original anatomy by design. We evaluated InBrainSyn both
quantitatively and qualitatively on AD and healthy control cohorts from the
Open Access Series of Imaging Studies - version 3 dataset. Experimentally,
InBrainSyn can also model neuroanatomical transitions between normal aging and
AD. An evaluation of an external set supports its generalizability. Overall,
with only a single baseline scan, InBrainSyn synthesizes realistic 3D
spatiotemporal T1w MRI scans, producing personalized longitudinal aging
trajectories. The code for InBrainSyn is available at:
https://github.com/Fjr9516/InBrainSyn.
|
no_new_dataset
| 0.944382
|
2502.21051
|
Valentin Guien
|
Valentin Guien, Violaine Antoine, Romain Lardy, Isabelle Veissier and
Luis E C Rocha
|
Detection of anomalies in cow activity using wavelet transform based
features
|
17 pages, 8 figures, 4 tables, 1 algorithm
| null | null | null |
cs.LG cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
In Precision Livestock Farming, detecting deviations from optimal or baseline
values - i.e. anomalies in time series - is essential to allow undertaking
corrective actions rapidly. Here we aim at detecting anomalies in 24h time
series of cow activity, with a view to detect cases of disease or oestrus.
Deviations must be distinguished from noise which can be very high in case of
biological data. It is also important to detect the anomaly early, e.g. before
a farmer would notice it visually. Here, we investigate the benefit of using
wavelet transforms to denoise data and we assess the performance of an anomaly
detection algorithm considering the timing of the detection. We developed
features based on the comparisons between the wavelet transforms of the mean of
the time series and the wavelet transforms of individual time series instances.
We hypothesized that these features contribute to the detection of anomalies in
periodic time series using a feature-based algorithm. We tested this hypothesis
with two datasets representing cow activity, which typically follows a daily
pattern but can deviate due to specific physiological or pathological
conditions. We applied features derived from wavelet transform as well as
statistical features in an Isolation Forest algorithm. We measured the distance
of detection between the days annotated abnormal by animal caretakers and
the days predicted abnormal by the algorithm. The results show that
wavelet-based features are among the features most contributing to anomaly
detection. They also show that detections are close to the annotated days, and
often precede it. In conclusion, using wavelet transforms on time series of cow
activity data helps to detect anomalies related to specific cow states. The
detection is often obtained on days that precede the day annotated by
caretakers, which offers the possibility of taking corrective actions at an early
stage.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 13:50:18 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Guien",
"Valentin",
""
],
[
"Antoine",
"Violaine",
""
],
[
"Lardy",
"Romain",
""
],
[
"Veissier",
"Isabelle",
""
],
[
"Rocha",
"Luis E C",
""
]
] |
TITLE: Detection of anomalies in cow activity using wavelet transform based
features
ABSTRACT: In Precision Livestock Farming, detecting deviations from optimal or baseline
values - i.e. anomalies in time series - is essential to allow undertaking
corrective actions rapidly. Here we aim at detecting anomalies in 24h time
series of cow activity, with a view to detect cases of disease or oestrus.
Deviations must be distinguished from noise which can be very high in case of
biological data. It is also important to detect the anomaly early, e.g. before
a farmer would notice it visually. Here, we investigate the benefit of using
wavelet transforms to denoise data and we assess the performance of an anomaly
detection algorithm considering the timing of the detection. We developed
features based on the comparisons between the wavelet transforms of the mean of
the time series and the wavelet transforms of individual time series instances.
We hypothesized that these features contribute to the detection of anomalies in
periodic time series using a feature-based algorithm. We tested this hypothesis
with two datasets representing cow activity, which typically follows a daily
pattern but can deviate due to specific physiological or pathological
conditions. We applied features derived from wavelet transform as well as
statistical features in an Isolation Forest algorithm. We measured the distance
of detection between the days annotated abnormal by animal caretakers and
the days predicted abnormal by the algorithm. The results show that
wavelet-based features are among the features most contributing to anomaly
detection. They also show that detections are close to the annotated days, and
often precede it. In conclusion, using wavelet transforms on time series of cow
activity data helps to detect anomalies related to specific cow states. The
detection is often obtained on days that precede the day annotated by
caretakers, which offers the possibility of taking corrective actions at an early
stage.
|
no_new_dataset
| 0.94801
|
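As a rough, hedged illustration of the pipeline the preceding abstract describes (wavelet-transform features compared against the mean daily activity pattern, scored with an Isolation Forest), the following Python sketch runs on synthetic activity series. The wavelet, decomposition level, and feature definition are assumptions for illustration, not the authors' exact features.

```python
# Hedged sketch: wavelet features relative to the mean daily pattern, scored
# with an Isolation Forest. Wavelet, level, and feature definition are
# illustrative assumptions, not the exact features used in the paper.
import numpy as np
import pywt
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
hours = np.arange(24)
daily_pattern = 50 + 20 * np.sin(2 * np.pi * hours / 24)

# Synthetic 24h activity series for 200 cow-days, a few with a disturbed rhythm.
series = daily_pattern + rng.normal(0, 5, size=(200, 24))
series[:10] += 30 * (hours > 12)  # simulated abnormal days

mean_series = series.mean(axis=0)
mean_coeffs = np.concatenate(pywt.wavedec(mean_series, "db2", level=2))

def wavelet_features(day: np.ndarray) -> np.ndarray:
    """Absolute deviation of a day's wavelet coefficients from the mean day's."""
    coeffs = np.concatenate(pywt.wavedec(day, "db2", level=2))
    return np.abs(coeffs - mean_coeffs)

features = np.stack([wavelet_features(day) for day in series])
scores = IsolationForest(random_state=0).fit(features).decision_function(features)
# Lower decision scores are more anomalous; the simulated abnormal days should rank first.
print("most anomalous days:", np.argsort(scores)[:10])
```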
2502.21054
|
Emanuele Vivoli
|
Emanuele Vivoli, Lorenzo Capineri, Marco Bertini
|
HoloMine: A Synthetic Dataset for Buried Landmines Recognition using
Microwave Holographic Imaging
|
under review
| null | null | null |
cs.CV eess.IV eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
The detection and removal of landmines is a complex and risky task that
requires advanced remote sensing techniques to reduce the risk for the
professionals involved in this task. In this paper, we propose a novel
synthetic dataset for buried landmine detection to provide researchers with a
valuable resource to observe, measure, locate, and address issues in landmine
detection. The dataset consists of 41,800 microwave holographic images (2D) and
their holographic inverted scans (3D) of different types of buried objects,
including landmines, clutter, and pottery objects, and is collected by means of
a microwave holography sensor.
We evaluate the performance of several state-of-the-art deep learning models
trained on our synthetic dataset for various classification tasks. While the
results do not yet yield high performance, showing the difficulty of the
proposed task, we believe that our dataset has significant potential to drive
progress in the field of landmine detection thanks to the accuracy and
resolution obtainable using holographic radars.
To the best of our knowledge, our dataset is the first of its kind and will
help drive further research on computer vision methods to automatize mine
detection, with the overall goal of reducing the risks and the costs of the
demining process.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 13:53:35 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Vivoli",
"Emanuele",
""
],
[
"Capineri",
"Lorenzo",
""
],
[
"Bertini",
"Marco",
""
]
] |
TITLE: HoloMine: A Synthetic Dataset for Buried Landmines Recognition using
Microwave Holographic Imaging
ABSTRACT: The detection and removal of landmines is a complex and risky task that
requires advanced remote sensing techniques to reduce the risk for the
professionals involved in this task. In this paper, we propose a novel
synthetic dataset for buried landmine detection to provide researchers with a
valuable resource to observe, measure, locate, and address issues in landmine
detection. The dataset consists of 41,800 microwave holographic images (2D) and
their holographic inverted scans (3D) of different types of buried objects,
including landmines, clutter, and pottery objects, and is collected by means of
a microwave holography sensor.
We evaluate the performance of several state-of-the-art deep learning models
trained on our synthetic dataset for various classification tasks. While the
results do not yet yield high performance, showing the difficulty of the
proposed task, we believe that our dataset has significant potential to drive
progress in the field of landmine detection thanks to the accuracy and
resolution obtainable using holographic radars.
To the best of our knowledge, our dataset is the first of its kind and will
help drive further research on computer vision methods to automatize mine
detection, with the overall goal of reducing the risks and the costs of the
demining process.
|
new_dataset
| 0.961642
|
2502.21055
|
Michał Romaszewski
|
Przemysław Sekuła, Michał Romaszewski, Przemysław
Głomb, Michał Cholewa, Łukasz Pawela
|
Quantum-aware Transformer model for state classification
|
13 pages, 1 figure
| null | null | null |
quant-ph cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Entanglement is a fundamental feature of quantum mechanics, playing a crucial
role in quantum information processing. However, classifying entangled states,
particularly in the mixed-state regime, remains a challenging problem,
especially as system dimensions increase. In this work, we focus on bipartite
quantum states and present a data-driven approach to entanglement
classification using transformer-based neural networks. Our dataset consists of
a diverse set of bipartite states, including pure separable states, Werner
entangled states, general entangled states, and maximally entangled states. We
pretrain the transformer in an unsupervised fashion by masking elements of
vectorized Hermitian matrix representations of quantum states, allowing the
model to learn structural properties of quantum density matrices. This approach
enables the model to generalize entanglement characteristics across different
classes of states. Once trained, our method achieves near-perfect
classification accuracy, effectively distinguishing between separable and
entangled states. Compared to previous machine learning approaches, our method
successfully adapts transformers for quantum state analysis, demonstrating
their ability to systematically identify entanglement in bipartite systems.
These results highlight the potential of modern machine learning techniques in
automating entanglement detection and classification, bridging the gap between
quantum information theory and artificial intelligence.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 13:56:48 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Sekuła",
"Przemysław",
""
],
[
"Romaszewski",
"Michał",
""
],
[
"Głomb",
"Przemysław",
""
],
[
"Cholewa",
"Michał",
""
],
[
"Pawela",
"Łukasz",
""
]
] |
TITLE: Quantum-aware Transformer model for state classification
ABSTRACT: Entanglement is a fundamental feature of quantum mechanics, playing a crucial
role in quantum information processing. However, classifying entangled states,
particularly in the mixed-state regime, remains a challenging problem,
especially as system dimensions increase. In this work, we focus on bipartite
quantum states and present a data-driven approach to entanglement
classification using transformer-based neural networks. Our dataset consists of
a diverse set of bipartite states, including pure separable states, Werner
entangled states, general entangled states, and maximally entangled states. We
pretrain the transformer in an unsupervised fashion by masking elements of
vectorized Hermitian matrix representations of quantum states, allowing the
model to learn structural properties of quantum density matrices. This approach
enables the model to generalize entanglement characteristics across different
classes of states. Once trained, our method achieves near-perfect
classification accuracy, effectively distinguishing between separable and
entangled states. Compared to previous machine learning approaches, our method
successfully adapts transformers for quantum state analysis, demonstrating
their ability to systematically identify entanglement in bipartite systems.
These results highlight the potential of modern machine learning techniques in
automating entanglement detection and classification, bridging the gap between
quantum information theory and artificial intelligence.
|
new_dataset
| 0.908374
|
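The masked pretraining step described in the preceding abstract can be sketched minimally: vectorise a density matrix into real features (real and imaginary parts of its Hermitian entries) and randomly mask a fraction of them for reconstruction. The encoding scheme and mask ratio below are illustrative assumptions, not the paper's exact recipe.

```python
# Hedged sketch: vectorise a bipartite density matrix into real features and
# randomly mask entries, as one might do for masked pretraining. The encoding
# and the 30% mask ratio are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)

def random_density_matrix(dim: int) -> np.ndarray:
    """Sample a random mixed state rho = A A^dagger / tr(A A^dagger)."""
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def vectorize(rho: np.ndarray) -> np.ndarray:
    """Stack real and imaginary parts of the density matrix into one real vector."""
    return np.concatenate([rho.real.ravel(), rho.imag.ravel()])

rho = random_density_matrix(dim=4)       # two qubits -> 4x4 density matrix
features = vectorize(rho)                # length-32 real feature vector

mask = rng.random(features.shape) < 0.3  # mask roughly 30% of entries
masked_features = np.where(mask, 0.0, features)
print("masked entries:", int(mask.sum()), "of", features.size)
```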
2502.21059
|
Zhen Sun
|
Ziyi Zhang, Zhen Sun, Zongmin Zhang, Jihui Guo, Xinlei He
|
FC-Attack: Jailbreaking Large Vision-Language Models via Auto-Generated
Flowcharts
|
13 pages, 6 figures
| null | null | null |
cs.CV cs.AI cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large Vision-Language Models (LVLMs) have become powerful and widely adopted
in some practical applications. However, recent research has revealed their
vulnerability to multimodal jailbreak attacks, whereby the model can be induced
to generate harmful content, leading to safety risks. Although most LVLMs have
undergone safety alignment, recent research shows that the visual modality is
still vulnerable to jailbreak attacks. In our work, we discover that by using
flowcharts with partially harmful information, LVLMs can be induced to provide
additional harmful details. Based on this, we propose a jailbreak attack method
based on auto-generated flowcharts, FC-Attack. Specifically, FC-Attack first
fine-tunes a pre-trained LLM to create a step-description generator based on
benign datasets. The generator is then used to produce step descriptions
corresponding to a harmful query, which are transformed into flowcharts in 3
different shapes (vertical, horizontal, and S-shaped) as visual prompts. These
flowcharts are then combined with a benign textual prompt to execute a
jailbreak attack on LVLMs. Our evaluations using the Advbench dataset show that
FC-Attack achieves over 90% attack success rates on Gemini-1.5, Llaval-Next,
Qwen2-VL, and InternVL-2.5 models, outperforming existing LVLM jailbreak
methods. Additionally, we investigate factors affecting the attack performance,
including the number of steps and the font styles in the flowcharts. Our
evaluation shows that FC-Attack can improve the jailbreak performance from 4%
to 28% in Claude-3.5 by changing the font style. To mitigate the attack, we
explore several defenses and find that AdaShield can largely reduce the
jailbreak performance but with the cost of utility drop.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 13:59:11 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Zhang",
"Ziyi",
""
],
[
"Sun",
"Zhen",
""
],
[
"Zhang",
"Zongmin",
""
],
[
"Guo",
"Jihui",
""
],
[
"He",
"Xinlei",
""
]
] |
TITLE: FC-Attack: Jailbreaking Large Vision-Language Models via Auto-Generated
Flowcharts
ABSTRACT: Large Vision-Language Models (LVLMs) have become powerful and widely adopted
in some practical applications. However, recent research has revealed their
vulnerability to multimodal jailbreak attacks, whereby the model can be induced
to generate harmful content, leading to safety risks. Although most LVLMs have
undergone safety alignment, recent research shows that the visual modality is
still vulnerable to jailbreak attacks. In our work, we discover that by using
flowcharts with partially harmful information, LVLMs can be induced to provide
additional harmful details. Based on this, we propose a jailbreak attack method
based on auto-generated flowcharts, FC-Attack. Specifically, FC-Attack first
fine-tunes a pre-trained LLM to create a step-description generator based on
benign datasets. The generator is then used to produce step descriptions
corresponding to a harmful query, which are transformed into flowcharts in 3
different shapes (vertical, horizontal, and S-shaped) as visual prompts. These
flowcharts are then combined with a benign textual prompt to execute a
jailbreak attack on LVLMs. Our evaluations using the Advbench dataset show that
FC-Attack achieves over 90% attack success rates on Gemini-1.5, Llaval-Next,
Qwen2-VL, and InternVL-2.5 models, outperforming existing LVLM jailbreak
methods. Additionally, we investigate factors affecting the attack performance,
including the number of steps and the font styles in the flowcharts. Our
evaluation shows that FC-Attack can improve the jailbreak performance from 4%
to 28% in Claude-3.5 by changing the font style. To mitigate the attack, we
explore several defenses and find that AdaShield can largely reduce the
jailbreak performance but with the cost of utility drop.
|
no_new_dataset
| 0.923868
|
2502.21074
|
Zhenyi Shen
|
Zhenyi Shen, Hanqi Yan, Linhai Zhang, Zhanghao Hu, Yali Du, Yulan He
|
CODI: Compressing Chain-of-Thought into Continuous Space via
Self-Distillation
|
15 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Chain-of-Thought (CoT) enhances Large Language Models (LLMs) by enabling
step-by-step reasoning in natural language. However, the language space may be
suboptimal for reasoning. While implicit CoT methods attempt to enable
reasoning without explicit CoT tokens, they have consistently lagged behind
explicit CoT methods in task performance. We propose CODI (Continuous
Chain-of-Thought via Self-Distillation), a novel framework that distills CoT
into a continuous space, where a shared model acts as both teacher and student,
jointly learning explicit and implicit CoT while aligning their hidden
activation on the token generating the final answer. CODI is the first implicit
CoT method to match explicit CoT's performance on GSM8k while achieving 3.1x
compression, surpassing the previous state-of-the-art by 28.2% in accuracy.
Furthermore, CODI demonstrates scalability, robustness, and generalizability to
more complex CoT datasets. Additionally, CODI retains interpretability by
decoding its continuous thoughts, making its reasoning process transparent. Our
findings establish implicit CoT as not only a more efficient but a powerful
alternative to explicit CoT.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 14:07:48 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Shen",
"Zhenyi",
""
],
[
"Yan",
"Hanqi",
""
],
[
"Zhang",
"Linhai",
""
],
[
"Hu",
"Zhanghao",
""
],
[
"Du",
"Yali",
""
],
[
"He",
"Yulan",
""
]
] |
TITLE: CODI: Compressing Chain-of-Thought into Continuous Space via
Self-Distillation
ABSTRACT: Chain-of-Thought (CoT) enhances Large Language Models (LLMs) by enabling
step-by-step reasoning in natural language. However, the language space may be
suboptimal for reasoning. While implicit CoT methods attempt to enable
reasoning without explicit CoT tokens, they have consistently lagged behind
explicit CoT methods in task performance. We propose CODI (Continuous
Chain-of-Thought via Self-Distillation), a novel framework that distills CoT
into a continuous space, where a shared model acts as both teacher and student,
jointly learning explicit and implicit CoT while aligning their hidden
activation on the token generating the final answer. CODI is the first implicit
CoT method to match explicit CoT's performance on GSM8k while achieving 3.1x
compression, surpassing the previous state-of-the-art by 28.2% in accuracy.
Furthermore, CODI demonstrates scalability, robustness, and generalizability to
more complex CoT datasets. Additionally, CODI retains interpretability by
decoding its continuous thoughts, making its reasoning process transparent. Our
findings establish implicit CoT as not only a more efficient but a powerful
alternative to explicit CoT.
|
no_new_dataset
| 0.945751
|
2502.21079
|
Yifei Xia
|
Yifei Xia, Suhan Ling, Fangcheng Fu, Yujie Wang, Huixia Li, Xuefeng
Xiao, Bin Cui
|
Training-free and Adaptive Sparse Attention for Efficient Long Video
Generation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Generating high-fidelity long videos with Diffusion Transformers (DiTs) is
often hindered by significant latency, primarily due to the computational
demands of attention mechanisms. For instance, generating an 8-second 720p
video (110K tokens) with HunyuanVideo takes about 600 PFLOPs, with around 500
PFLOPs consumed by attention computations. To address this issue, we propose
AdaSpa, the first Dynamic Pattern and Online Precise Search sparse attention
method. Firstly, to realize the Dynamic Pattern, we introduce a blockified
pattern to efficiently capture the hierarchical sparsity inherent in DiTs. This
is based on our observation that sparse characteristics of DiTs exhibit
hierarchical and blockified structures between and within different modalities.
This blockified approach significantly reduces the complexity of attention
computation while maintaining high fidelity in the generated videos. Secondly,
to enable Online Precise Search, we propose the Fused LSE-Cached Search with
Head-adaptive Hierarchical Block Sparse Attention. This method is motivated by
our finding that DiTs' sparse pattern and LSE vary w.r.t. inputs, layers, and
heads, but remain invariant across denoising steps. By leveraging this
invariance across denoising steps, it adapts to the dynamic nature of DiTs and
allows for precise, real-time identification of sparse indices with minimal
overhead. AdaSpa is implemented as an adaptive, plug-and-play solution and can
be integrated seamlessly with existing DiTs, requiring neither additional
fine-tuning nor dataset-dependent profiling. Extensive experiments validate
that AdaSpa delivers substantial acceleration across various models while
preserving video quality, establishing itself as a robust and scalable approach
to efficient video generation.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 14:11:20 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Xia",
"Yifei",
""
],
[
"Ling",
"Suhan",
""
],
[
"Fu",
"Fangcheng",
""
],
[
"Wang",
"Yujie",
""
],
[
"Li",
"Huixia",
""
],
[
"Xiao",
"Xuefeng",
""
],
[
"Cui",
"Bin",
""
]
] |
TITLE: Training-free and Adaptive Sparse Attention for Efficient Long Video
Generation
ABSTRACT: Generating high-fidelity long videos with Diffusion Transformers (DiTs) is
often hindered by significant latency, primarily due to the computational
demands of attention mechanisms. For instance, generating an 8-second 720p
video (110K tokens) with HunyuanVideo takes about 600 PFLOPs, with around 500
PFLOPs consumed by attention computations. To address this issue, we propose
AdaSpa, the first Dynamic Pattern and Online Precise Search sparse attention
method. Firstly, to realize the Dynamic Pattern, we introduce a blockified
pattern to efficiently capture the hierarchical sparsity inherent in DiTs. This
is based on our observation that sparse characteristics of DiTs exhibit
hierarchical and blockified structures between and within different modalities.
This blockified approach significantly reduces the complexity of attention
computation while maintaining high fidelity in the generated videos. Secondly,
to enable Online Precise Search, we propose the Fused LSE-Cached Search with
Head-adaptive Hierarchical Block Sparse Attention. This method is motivated by
our finding that DiTs' sparse pattern and LSE vary w.r.t. inputs, layers, and
heads, but remain invariant across denoising steps. By leveraging this
invariance across denoising steps, it adapts to the dynamic nature of DiTs and
allows for precise, real-time identification of sparse indices with minimal
overhead. AdaSpa is implemented as an adaptive, plug-and-play solution and can
be integrated seamlessly with existing DiTs, requiring neither additional
fine-tuning nor dataset-dependent profiling. Extensive experiments validate
that AdaSpa delivers substantial acceleration across various models while
preserving video quality, establishing itself as a robust and scalable approach
to efficient video generation.
|
no_new_dataset
| 0.946745
|
2502.21085
|
Jing-Yuan Chang
|
Jing-Yuan Chang
|
BST: Badminton Stroke-type Transformer for Skeleton-based Action
Recognition in Racket Sports
|
8 pages (excluding references). The code will be released in a few
months
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Badminton, known for having the fastest ball speeds among all sports,
presents significant challenges to the field of computer vision, including
player identification, court line detection, shuttlecock trajectory tracking,
and player stroke-type classification. In this paper, we introduce a novel
video segmentation strategy to extract frames of each player's racket swing in
a badminton broadcast match. These segmented frames are then processed by two
existing models: one for Human Pose Estimation to obtain player skeletal
joints, and the other for shuttlecock trajectory detection to extract
shuttlecock trajectories. Leveraging these joints, trajectories, and player
positions as inputs, we propose Badminton Stroke-type Transformer (BST) to
classify player stroke-types in singles. To the best of our knowledge,
experimental results demonstrate that our method outperforms the previous
state-of-the-art on the largest publicly available badminton video dataset,
ShuttleSet, which shows that effectively leveraging ball trajectory is likely
to be a trend for racket sports action recognition.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 14:18:39 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Chang",
"Jing-Yuan",
""
]
] |
TITLE: BST: Badminton Stroke-type Transformer for Skeleton-based Action
Recognition in Racket Sports
ABSTRACT: Badminton, known for having the fastest ball speeds among all sports,
presents significant challenges to the field of computer vision, including
player identification, court line detection, shuttlecock trajectory tracking,
and player stroke-type classification. In this paper, we introduce a novel
video segmentation strategy to extract frames of each player's racket swing in
a badminton broadcast match. These segmented frames are then processed by two
existing models: one for Human Pose Estimation to obtain player skeletal
joints, and the other for shuttlecock trajectory detection to extract
shuttlecock trajectories. Leveraging these joints, trajectories, and player
positions as inputs, we propose Badminton Stroke-type Transformer (BST) to
classify player stroke-types in singles. To the best of our knowledge,
experimental results demonstrate that our method outperforms the previous
state-of-the-art on the largest publicly available badminton video dataset,
ShuttleSet, which shows that effectively leveraging ball trajectory is likely
to be a trend for racket sports action recognition.
|
no_new_dataset
| 0.951006
|
2502.21086
|
\"Ozg\"un Turgut
|
\"Ozg\"un Turgut, Felix S. Bott, Markus Ploner, Daniel Rueckert
|
Are foundation models useful feature extractors for
electroencephalography analysis?
| null | null | null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The success of foundation models in natural language processing and computer
vision has motivated similar approaches for general time series analysis. While
these models are effective for a variety of tasks, their applicability in
medical domains with limited data remains largely unexplored. To address this,
we investigate the effectiveness of foundation models in medical time series
analysis involving electroencephalography (EEG). Through extensive experiments
on tasks such as age prediction, seizure detection, and the classification of
clinically relevant EEG events, we compare their diagnostic accuracy with that
of specialised EEG models. Our analysis shows that foundation models extract
meaningful EEG features, outperform specialised models even without domain
adaptation, and localise task-specific biomarkers. Moreover, we demonstrate
that diagnostic accuracy is substantially influenced by architectural choices
such as context length. Overall, our study reveals that foundation models with
general time series understanding eliminate the dependency on large
domain-specific datasets, making them valuable tools for clinical practice.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 14:21:34 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Turgut",
"Özgün",
""
],
[
"Bott",
"Felix S.",
""
],
[
"Ploner",
"Markus",
""
],
[
"Rueckert",
"Daniel",
""
]
] |
TITLE: Are foundation models useful feature extractors for
electroencephalography analysis?
ABSTRACT: The success of foundation models in natural language processing and computer
vision has motivated similar approaches for general time series analysis. While
these models are effective for a variety of tasks, their applicability in
medical domains with limited data remains largely unexplored. To address this,
we investigate the effectiveness of foundation models in medical time series
analysis involving electroencephalography (EEG). Through extensive experiments
on tasks such as age prediction, seizure detection, and the classification of
clinically relevant EEG events, we compare their diagnostic accuracy with that
of specialised EEG models. Our analysis shows that foundation models extract
meaningful EEG features, outperform specialised models even without domain
adaptation, and localise task-specific biomarkers. Moreover, we demonstrate
that diagnostic accuracy is substantially influenced by architectural choices
such as context length. Overall, our study reveals that foundation models with
general time series understanding eliminate the dependency on large
domain-specific datasets, making them valuable tools for clinical practice.
|
no_new_dataset
| 0.948917
|
2502.21087
|
Hansi Yang
|
Hansi Yang, Qi Zhang, Wei Jiang, Jianguo Li
|
PASemiQA: Plan-Assisted Agent for Question Answering on Semi-Structured
Data with Text and Relational Information
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Large language models (LLMs) have shown impressive abilities in answering
questions across various domains, but they often encounter hallucination issues
on questions that require professional and up-to-date knowledge. To address
this limitation, retrieval-augmented generation (RAG) techniques have been
proposed, which retrieve relevant information from external sources to inform
their responses. However, existing RAG methods typically focus on a single type
of external data, such as vectorized text databases or knowledge graphs, and
cannot well handle real-world questions on semi-structured data containing both
text and relational information. To bridge this gap, we introduce PASemiQA, a
novel approach that jointly leverages text and relational information in
semi-structured data to answer questions. PASemiQA first generates a plan to
identify relevant text and relational information to answer the question in
semi-structured data, and then uses an LLM agent to traverse the
semi-structured data and extract necessary information. Our empirical results
demonstrate the effectiveness of PASemiQA across different semi-structured
datasets from various domains, showcasing its potential to improve the accuracy
and reliability of question answering systems on semi-structured data.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 14:26:47 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Yang",
"Hansi",
""
],
[
"Zhang",
"Qi",
""
],
[
"Jiang",
"Wei",
""
],
[
"Li",
"Jianguo",
""
]
] |
TITLE: PASemiQA: Plan-Assisted Agent for Question Answering on Semi-Structured
Data with Text and Relational Information
ABSTRACT: Large language models (LLMs) have shown impressive abilities in answering
questions across various domains, but they often encounter hallucination issues
on questions that require professional and up-to-date knowledge. To address
this limitation, retrieval-augmented generation (RAG) techniques have been
proposed, which retrieve relevant information from external sources to inform
their responses. However, existing RAG methods typically focus on a single type
of external data, such as vectorized text databases or knowledge graphs, and
cannot well handle real-world questions on semi-structured data containing both
text and relational information. To bridge this gap, we introduce PASemiQA, a
novel approach that jointly leverages text and relational information in
semi-structured data to answer questions. PASemiQA first generates a plan to
identify relevant text and relational information to answer the question in
semi-structured data, and then uses an LLM agent to traverse the
semi-structured data and extract necessary information. Our empirical results
demonstrate the effectiveness of PASemiQA across different semi-structured
datasets from various domains, showcasing its potential to improve the accuracy
and reliability of question answering systems on semi-structured data.
|
no_new_dataset
| 0.94474
|
2502.21110
|
Charles Dawson
|
Charles Dawson, Van Tran, Max Z. Li, Chuchu Fan
|
Rare event modeling with self-regularized normalizing flows: what can we
learn from a single failure?
|
Published at ICLR 2025
| null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Increased deployment of autonomous systems in fields like transportation and
robotics has seen a corresponding increase in safety-critical failures. These
failures can be difficult to model and debug due to the relative lack of data:
compared to tens of thousands of examples from normal operations, we may have
only seconds of data leading up to the failure. This scarcity makes it
challenging to train generative models of rare failure events, as existing
methods risk either overfitting to noise in the limited failure dataset or
underfitting due to an overly strong prior. We address this challenge with
CalNF, or calibrated normalizing flows, a self-regularized framework for
posterior learning from limited data. CalNF achieves state-of-the-art
performance on data-limited failure modeling and inverse problems and enables a
first-of-a-kind case study into the root causes of the 2022 Southwest Airlines
scheduling crisis.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 14:47:52 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Dawson",
"Charles",
""
],
[
"Tran",
"Van",
""
],
[
"Li",
"Max Z.",
""
],
[
"Fan",
"Chuchu",
""
]
] |
TITLE: Rare event modeling with self-regularized normalizing flows: what can we
learn from a single failure?
ABSTRACT: Increased deployment of autonomous systems in fields like transportation and
robotics has seen a corresponding increase in safety-critical failures. These
failures can be difficult to model and debug due to the relative lack of data:
compared to tens of thousands of examples from normal operations, we may have
only seconds of data leading up to the failure. This scarcity makes it
challenging to train generative models of rare failure events, as existing
methods risk either overfitting to noise in the limited failure dataset or
underfitting due to an overly strong prior. We address this challenge with
CalNF, or calibrated normalizing flows, a self-regularized framework for
posterior learning from limited data. CalNF achieves state-of-the-art
performance on data-limited failure modeling and inverse problems and enables a
first-of-a-kind case study into the root causes of the 2022 Southwest Airlines
scheduling crisis.
|
no_new_dataset
| 0.949342
|
2502.21112
|
Francesco Osborne
|
Mattia Birti, Francesco Osborne, Andrea Maurino
|
Optimizing Large Language Models for ESG Activity Detection in Financial
Texts
| null | null | null | null |
cs.AI cs.CE cs.CL cs.CY cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
The integration of Environmental, Social, and Governance (ESG) factors into
corporate decision-making is a fundamental aspect of sustainable finance.
However, ensuring that business practices align with evolving regulatory
frameworks remains a persistent challenge. AI-driven solutions for
automatically assessing the alignment of sustainability reports and
non-financial disclosures with specific ESG activities could greatly support
this process. Yet, this task remains complex due to the limitations of
general-purpose Large Language Models (LLMs) in domain-specific contexts and
the scarcity of structured, high-quality datasets. In this paper, we
investigate the ability of current-generation LLMs to identify text related to
environmental activities. Furthermore, we demonstrate that their performance
can be significantly enhanced through fine-tuning on a combination of original
and synthetically generated data. To this end, we introduce ESG-Activities, a
benchmark dataset containing 1,325 labelled text segments classified according
to the EU ESG taxonomy. Our experimental results show that fine-tuning on
ESG-Activities significantly enhances classification accuracy, with open models
such as Llama 7B and Gemma 7B outperforming large proprietary solutions in
specific configurations. These findings have important implications for
financial analysts, policymakers, and AI researchers seeking to enhance ESG
transparency and compliance through advanced natural language processing
techniques.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 14:52:25 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Birti",
"Mattia",
""
],
[
"Osborne",
"Francesco",
""
],
[
"Maurino",
"Andrea",
""
]
] |
TITLE: Optimizing Large Language Models for ESG Activity Detection in Financial
Texts
ABSTRACT: The integration of Environmental, Social, and Governance (ESG) factors into
corporate decision-making is a fundamental aspect of sustainable finance.
However, ensuring that business practices align with evolving regulatory
frameworks remains a persistent challenge. AI-driven solutions for
automatically assessing the alignment of sustainability reports and
non-financial disclosures with specific ESG activities could greatly support
this process. Yet, this task remains complex due to the limitations of
general-purpose Large Language Models (LLMs) in domain-specific contexts and
the scarcity of structured, high-quality datasets. In this paper, we
investigate the ability of current-generation LLMs to identify text related to
environmental activities. Furthermore, we demonstrate that their performance
can be significantly enhanced through fine-tuning on a combination of original
and synthetically generated data. To this end, we introduce ESG-Activities, a
benchmark dataset containing 1,325 labelled text segments classified according
to the EU ESG taxonomy. Our experimental results show that fine-tuning on
ESG-Activities significantly enhances classification accuracy, with open models
such as Llama 7B and Gemma 7B outperforming large proprietary solutions in
specific configurations. These findings have important implications for
financial analysts, policymakers, and AI researchers seeking to enhance ESG
transparency and compliance through advanced natural language processing
techniques.
|
new_dataset
| 0.963643
|
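As a generic, hedged illustration of the fine-tuning setup the preceding abstract describes (classifying text segments against ESG activity labels), the sketch below uses the Hugging Face transformers Trainer with a small placeholder model. The model name, label set, and example texts are assumptions; fine-tuning Llama- or Gemma-scale models as in the paper would typically also involve parameter-efficient methods.

```python
# Hedged sketch of supervised fine-tuning for ESG activity classification.
# Model name, labels, and example texts are placeholders, not the paper's setup.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # placeholder; the paper tunes larger LLMs
labels = ["not_esg_activity", "esg_activity"]

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=len(labels))

# Tiny illustrative dataset; the real benchmark has 1,325 labelled segments.
data = Dataset.from_dict({
    "text": ["The plant switched to renewable electricity in 2023.",
             "Quarterly revenue grew by 4 percent."],
    "label": [1, 0],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length",
                     max_length=64)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="esg-clf", num_train_epochs=1,
                           per_device_train_batch_size=2, report_to=[]),
    train_dataset=data,
)
trainer.train()
```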
2502.21120
|
Yunfan Lu
|
Yunfan Lu, Xiaogang Xu, Hao Lu, Yanlin Qian, Pengteng Li, Huizai Yao,
Bin Yang, Junyi Li, Qianyi Cai, Weiyu Guo, Hui Xiong
|
SEE: See Everything Every Time -- Adaptive Brightness Adjustment for
Broad Light Range Images via Events
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event cameras, with a high dynamic range exceeding $120dB$, significantly
outperform traditional embedded cameras, robustly recording detailed changing
information under various lighting conditions, including both low- and
high-light situations. However, recent research on utilizing event data has
primarily focused on low-light image enhancement, neglecting image enhancement
and brightness adjustment across a broader range of lighting conditions, such
as normal or high illumination. Based on this, we propose a novel research
question: how to employ events to enhance and adaptively adjust the brightness
of images captured under broad lighting conditions? To investigate this
question, we first collected a new dataset, SEE-600K, consisting of 610,126
images and corresponding events across 202 scenarios, each featuring an average
of four lighting conditions with over a 1000-fold variation in illumination.
Subsequently, we propose a framework that effectively utilizes events to
smoothly adjust image brightness through the use of prompts. Our framework
captures color through sensor patterns, uses cross-attention to model events as
a brightness dictionary, and adjusts the image's dynamic range to form a broad
light-range representation (BLR), which is then decoded at the pixel level
based on the brightness prompt. Experimental results demonstrate that our
method not only performs well on the low-light enhancement dataset but also
shows robust performance on broader light-range image enhancement using the
SEE-600K dataset. Additionally, our approach enables pixel-level brightness
adjustment, providing flexibility for post-processing and inspiring more
imaging applications. The dataset and source code are publicly available
at: https://github.com/yunfanLu/SEE.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 14:55:37 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Lu",
"Yunfan",
""
],
[
"Xu",
"Xiaogang",
""
],
[
"Lu",
"Hao",
""
],
[
"Qian",
"Yanlin",
""
],
[
"Li",
"Pengteng",
""
],
[
"Yao",
"Huizai",
""
],
[
"Yang",
"Bin",
""
],
[
"Li",
"Junyi",
""
],
[
"Cai",
"Qianyi",
""
],
[
"Guo",
"Weiyu",
""
],
[
"Xiong",
"Hui",
""
]
] |
TITLE: SEE: See Everything Every Time -- Adaptive Brightness Adjustment for
Broad Light Range Images via Events
ABSTRACT: Event cameras, with a high dynamic range exceeding $120dB$, significantly
outperform traditional embedded cameras, robustly recording detailed changing
information under various lighting conditions, including both low- and
high-light situations. However, recent research on utilizing event data has
primarily focused on low-light image enhancement, neglecting image enhancement
and brightness adjustment across a broader range of lighting conditions, such
as normal or high illumination. Based on this, we propose a novel research
question: how to employ events to enhance and adaptively adjust the brightness
of images captured under broad lighting conditions? To investigate this
question, we first collected a new dataset, SEE-600K, consisting of 610,126
images and corresponding events across 202 scenarios, each featuring an average
of four lighting conditions with over a 1000-fold variation in illumination.
Subsequently, we propose a framework that effectively utilizes events to
smoothly adjust image brightness through the use of prompts. Our framework
captures color through sensor patterns, uses cross-attention to model events as
a brightness dictionary, and adjusts the image's dynamic range to form a broad
light-range representation (BLR), which is then decoded at the pixel level
based on the brightness prompt. Experimental results demonstrate that our
method not only performs well on the low-light enhancement dataset but also
shows robust performance on broader light-range image enhancement using the
SEE-600K dataset. Additionally, our approach enables pixel-level brightness
adjustment, providing flexibility for post-processing and inspiring more
imaging applications. The dataset and source code are publicly available
at: https://github.com/yunfanLu/SEE.
|
new_dataset
| 0.957991
|
2502.21143
|
Hyungi Lee
|
Hyungi Lee, Seungyoo Lee, Juho Lee
|
Variational Bayesian Pseudo-Coreset
|
The Thirteenth International Conference on Learning Representations
(ICLR2025)
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The success of deep learning requires large datasets and extensive training,
which can create significant computational challenges. To address these
challenges, pseudo-coresets, small learnable datasets that mimic the entire
data, have been proposed. Bayesian Neural Networks, which offer predictive
uncertainty and probabilistic interpretation for deep neural networks, also
face issues with large-scale datasets due to their high-dimensional parameter
space. Prior works on Bayesian Pseudo-Coresets (BPC) attempt to reduce the
computational load for computing weight posterior distribution by a small
number of pseudo-coresets but suffer from memory inefficiency during BPC
training and sub-optimal results. To overcome these limitations, we propose
Variational Bayesian Pseudo-Coreset (VBPC), a novel approach that utilizes
variational inference to efficiently approximate the posterior distribution,
reducing memory usage and computational costs while improving performance
across benchmark datasets.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 15:26:10 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Lee",
"Hyungi",
""
],
[
"Lee",
"Seungyoo",
""
],
[
"Lee",
"Juho",
""
]
] |
TITLE: Variational Bayesian Pseudo-Coreset
ABSTRACT: The success of deep learning requires large datasets and extensive training,
which can create significant computational challenges. To address these
challenges, pseudo-coresets, small learnable datasets that mimic the entire
data, have been proposed. Bayesian Neural Networks, which offer predictive
uncertainty and probabilistic interpretation for deep neural networks, also
face issues with large-scale datasets due to their high-dimensional parameter
space. Prior works on Bayesian Pseudo-Coresets (BPC) attempt to reduce the
computational load for computing weight posterior distribution by a small
number of pseudo-coresets but suffer from memory inefficiency during BPC
training and sub-optimal results. To overcome these limitations, we propose
Variational Bayesian Pseudo-Coreset (VBPC), a novel approach that utilizes
variational inference to efficiently approximate the posterior distribution,
reducing memory usage and computational costs while improving performance
across benchmark datasets.
|
no_new_dataset
| 0.952175
|
2502.21147
|
Eli Verwimp
|
Eli Verwimp, Guy Hacohen and Tinne Tuytelaars
|
Same accuracy, twice as fast: continuous training surpasses retraining
from scratch
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Continual learning aims to enable models to adapt to new datasets without
losing performance on previously learned data, often assuming that prior data
is no longer available. However, in many practical scenarios, both old and new
data are accessible. In such cases, good performance on both datasets is
typically achieved by abandoning the model trained on the previous data and
re-training a new model from scratch on both datasets. This training from
scratch is computationally expensive. In contrast, methods that leverage the
previously trained model and old data are worthy of investigation, as they
could significantly reduce computational costs. Our evaluation framework
quantifies the computational savings of such methods while maintaining or
exceeding the performance of training from scratch. We identify key
optimization aspects -- initialization, regularization, data selection, and
hyper-parameters -- that can each contribute to reducing computational costs.
For each aspect, we propose effective first-step methods that already yield
substantial computational savings. By combining these methods, we achieve up to
2.7x reductions in computation time across various computer vision tasks,
highlighting the potential for further advancements in this area.
|
[
{
"version": "v1",
"created": "Fri, 28 Feb 2025 15:28:12 GMT"
}
] | 2025-03-03T00:00:00
|
[
[
"Verwimp",
"Eli",
""
],
[
"Hacohen",
"Guy",
""
],
[
"Tuytelaars",
"Tinne",
""
]
] |
TITLE: Same accuracy, twice as fast: continuous training surpasses retraining
from scratch
ABSTRACT: Continual learning aims to enable models to adapt to new datasets without
losing performance on previously learned data, often assuming that prior data
is no longer available. However, in many practical scenarios, both old and new
data are accessible. In such cases, good performance on both datasets is
typically achieved by abandoning the model trained on the previous data and
re-training a new model from scratch on both datasets. This training from
scratch is computationally expensive. In contrast, methods that leverage the
previously trained model and old data are worthy of investigation, as they
could significantly reduce computational costs. Our evaluation framework
quantifies the computational savings of such methods while maintaining or
exceeding the performance of training from scratch. We identify key
optimization aspects -- initialization, regularization, data selection, and
hyper-parameters -- that can each contribute to reducing computational costs.
For each aspect, we propose effective first-step methods that already yield
substantial computational savings. By combining these methods, we achieve up to
2.7x reductions in computation time across various computer vision tasks,
highlighting the potential for further advancements in this area.
|
no_new_dataset
| 0.945601
|
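To illustrate the basic contrast drawn in the preceding abstract (retraining from scratch on old plus new data versus continuing from the existing checkpoint), here is a minimal PyTorch sketch with placeholder model, data, and hyper-parameters; it does not reproduce the paper's regularization or data-selection methods.

```python
# Hedged sketch: continue training an existing model on old + new data instead
# of retraining from scratch. Model, data, and hyper-parameters are placeholders.
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def make_dataset(n: int) -> TensorDataset:
    x = torch.randn(n, 16)
    y = (x.sum(dim=1) > 0).long()
    return TensorDataset(x, y)

old_data, new_data = make_dataset(512), make_dataset(256)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# Warm start: reuse the weights learned on the old data (placeholder checkpoint path).
# model.load_state_dict(torch.load("old_model.pt"))

loader = DataLoader(ConcatDataset([old_data, new_data]), batch_size=64, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):  # far fewer epochs than training from scratch would need
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
print("finished continued training")
```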