id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (list, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.00096 | Jon Laurent | Ludovico Mitchener, Jon M Laurent, Benjamin Tenmann, Siddharth
Narayanan, Geemi P Wellawatte, Andrew White, Lorenzo Sani, Samuel G Rodriques | BixBench: a Comprehensive Benchmark for LLM-based Agents in
Computational Biology | 8 main text pages, 5 main figures | null | null | null | q-bio.QM cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Large Language Models (LLMs) and LLM-based agents show great promise in
accelerating scientific research. Existing benchmarks for measuring this
potential and guiding future development continue to evolve from pure recall
and rote knowledge tasks, towards more practical work such as literature review
and experimental planning. Bioinformatics is a domain where fully autonomous
AI-driven discovery may be near, but no extensive benchmarks for measuring
progress have been introduced to date. We therefore present the Bioinformatics
Benchmark (BixBench), a dataset comprising over 50 real-world scenarios of
practical biological data analysis with nearly 300 associated open-answer
questions designed to measure the ability of LLM-based agents to explore
biological datasets, perform long, multi-step analytical trajectories, and
interpret the nuanced results of those analyses. We evaluate the performance of
two frontier LLMs (GPT-4o and Claude 3.5 Sonnet) using a custom agent framework
we open source. We find that even the latest frontier models only achieve 17%
accuracy in the open-answer regime, and no better than random in a
multiple-choice setting. By exposing the current limitations of frontier
models, we hope BixBench can spur the development of agents capable of
conducting rigorous bioinformatic analysis and accelerate scientific discovery.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 18:47:57 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 00:57:19 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Mitchener",
"Ludovico",
""
],
[
"Laurent",
"Jon M",
""
],
[
"Tenmann",
"Benjamin",
""
],
[
"Narayanan",
"Siddharth",
""
],
[
"Wellawatte",
"Geemi P",
""
],
[
"White",
"Andrew",
""
],
[
"Sani",
"Lorenzo",
""
],
[
"Rodriques",
"Samuel G",
""
]
]
| TITLE: BixBench: a Comprehensive Benchmark for LLM-based Agents in
Computational Biology
ABSTRACT: Large Language Models (LLMs) and LLM-based agents show great promise in
accelerating scientific research. Existing benchmarks for measuring this
potential and guiding future development continue to evolve from pure recall
and rote knowledge tasks, towards more practical work such as literature review
and experimental planning. Bioinformatics is a domain where fully autonomous
AI-driven discovery may be near, but no extensive benchmarks for measuring
progress have been introduced to date. We therefore present the Bioinformatics
Benchmark (BixBench), a dataset comprising over 50 real-world scenarios of
practical biological data analysis with nearly 300 associated open-answer
questions designed to measure the ability of LLM-based agents to explore
biological datasets, perform long, multi-step analytical trajectories, and
interpret the nuanced results of those analyses. We evaluate the performance of
two frontier LLMs (GPT-4o and Claude 3.5 Sonnet) using a custom agent framework
we open source. We find that even the latest frontier models only achieve 17%
accuracy in the open-answer regime, and no better than random in a
multiple-choice setting. By exposing the current limitations of frontier
models, we hope BixBench can spur the development of agents capable of
conducting rigorous bioinformatic analysis and accelerate scientific discovery.
| new_dataset | 0.981683 |
2503.00203 | William Nguyen | William Nguyen, An Phan, Konobu Kimura, Hitoshi Maeno, Mika Tanaka,
Quynh Le, William Poucher, Christopher Nguyen | Llamarine: Open-source Maritime Industry-specific Large Language Model | Work in progress | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have demonstrated substantial potential in
addressing complex reasoning tasks, yet their general-purpose nature often
limits their effectiveness in specialized domains such as maritime navigation.
To bridge this gap, we introduce Llamarine, the first open-source LLM designed
specifically for maritime navigation. Llamarine 1.0 is developed through
continued pretraining and fine-tuning on a high-quality corpus comprising
maritime textbooks, research publications, and web text from Wikipedia. This
domain-specific training enables the model to acquire expert-level knowledge in
navigational principles, collision avoidance, route optimization, and
regulatory compliance. Our key contributions include (a) the curation of a
comprehensive maritime dataset from authoritative sources, ensuring depth and
reliability in the model's knowledge base; (b) the development of a
foundational model capable of reasoning about complex navigational challenges
with greater accuracy than general-purpose LLMs; and (c) the establishment of a
benchmark to evaluate performance in maritime-specific decision-making tasks.
Experimental results demonstrate that Llamarine outperforms both
general-purpose and commercial LLMs in critical navigation-related tasks, such
as trajectory planning, risk assessment, and compliance with maritime
regulations. By providing an open-source foundation model trained exclusively
on high-quality maritime literature, Llamarine paves the way for AI-driven
advancements in maritime safety, efficiency, and operational decision-making.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 21:39:22 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 08:23:10 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 22:12:14 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Nguyen",
"William",
""
],
[
"Phan",
"An",
""
],
[
"Kimura",
"Konobu",
""
],
[
"Maeno",
"Hitoshi",
""
],
[
"Tanaka",
"Mika",
""
],
[
"Le",
"Quynh",
""
],
[
"Poucher",
"William",
""
],
[
"Nguyen",
"Christopher",
""
]
]
| TITLE: Llamarine: Open-source Maritime Industry-specific Large Language Model
ABSTRACT: Large Language Models (LLMs) have demonstrated substantial potential in
addressing complex reasoning tasks, yet their general-purpose nature often
limits their effectiveness in specialized domains such as maritime navigation.
To bridge this gap, we introduce Llamarine, the first open-source LLM designed
specifically for maritime navigation. Llamarine 1.0 is developed through
continued pretraining and fine-tuning on a high-quality corpus comprising
maritime textbooks, research publications, and web text from Wikipedia. This
domain-specific training enables the model to acquire expert-level knowledge in
navigational principles, collision avoidance, route optimization, and
regulatory compliance. Our key contributions include (a) the curation of a
comprehensive maritime dataset from authoritative sources, ensuring depth and
reliability in the model's knowledge base; (b) the development of a
foundational model capable of reasoning about complex navigational challenges
with greater accuracy than general-purpose LLMs; and (c) the establishment of a
benchmark to evaluate performance in maritime-specific decision-making tasks.
Experimental results demonstrate that Llamarine outperforms both
general-purpose and commercial LLMs in critical navigation-related tasks, such
as trajectory planning, risk assessment, and compliance with maritime
regulations. By providing an open-source foundation model trained exclusively
on high-quality maritime literature, Llamarine paves the way for AI-driven
advancements in maritime safety, efficiency, and operational decision-making.
| no_new_dataset | 0.945851 |
2503.01115 | Zhipeng Huang | Zhipeng Huang, Shaobin Zhuang, Canmiao Fu, Binxin Yang, Ying Zhang,
Chong Sun, Zhizheng Zhang, Yali Wang, Chen Li and Zheng-Jun Zha | WeGen: A Unified Model for Interactive Multimodal Generation as We Chat | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing multimodal generative models fall short as qualified design
copilots, as they often struggle to generate imaginative outputs once
instructions are less detailed or lack the ability to maintain consistency with
the provided references. In this work, we introduce WeGen, a model that unifies
multimodal generation and understanding, and promotes their interplay in
iterative generation. It can generate diverse results with high creativity for
less detailed instructions. And it can progressively refine prior generation
results or integrate specific contents from references following the
instructions in its chat with users. During this process, it is capable of
preserving consistency in the parts that the user is already satisfied with. To
this end, we curate a large-scale dataset, extracted from Internet videos,
containing rich object dynamics and auto-labeled dynamics descriptions by
advanced foundation models to date. These two types of information are interleaved
into a single sequence to enable WeGen to learn consistency-aware generation where
the specified dynamics are generated while the consistency of unspecified
content is preserved in line with the instructions. Besides, we introduce a prompt
self-rewriting mechanism to enhance generation diversity. Extensive experiments
demonstrate the effectiveness of unifying multimodal understanding and
generation in WeGen and show it achieves state-of-the-art performance across
various visual generation benchmarks. These also demonstrate the potential of
WeGen as a user-friendly design copilot as desired. The code and models will be
available at https://github.com/hzphzp/WeGen.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 02:50:07 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 02:12:53 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Huang",
"Zhipeng",
""
],
[
"Zhuang",
"Shaobin",
""
],
[
"Fu",
"Canmiao",
""
],
[
"Yang",
"Binxin",
""
],
[
"Zhang",
"Ying",
""
],
[
"Sun",
"Chong",
""
],
[
"Zhang",
"Zhizheng",
""
],
[
"Wang",
"Yali",
""
],
[
"Li",
"Chen",
""
],
[
"Zha",
"Zheng-Jun",
""
]
]
| TITLE: WeGen: A Unified Model for Interactive Multimodal Generation as We Chat
ABSTRACT: Existing multimodal generative models fall short as qualified design
copilots, as they often struggle to generate imaginative outputs once
instructions are less detailed or lack the ability to maintain consistency with
the provided references. In this work, we introduce WeGen, a model that unifies
multimodal generation and understanding, and promotes their interplay in
iterative generation. It can generate diverse results with high creativity for
less detailed instructions. And it can progressively refine prior generation
results or integrate specific contents from references following the
instructions in its chat with users. During this process, it is capable of
preserving consistency in the parts that the user is already satisfied with. To
this end, we curate a large-scale dataset, extracted from Internet videos,
containing rich object dynamics and auto-labeled dynamics descriptions by
advanced foundation models to date. These two types of information are interleaved
into a single sequence to enable WeGen to learn consistency-aware generation where
the specified dynamics are generated while the consistency of unspecified
content is preserved in line with the instructions. Besides, we introduce a prompt
self-rewriting mechanism to enhance generation diversity. Extensive experiments
demonstrate the effectiveness of unifying multimodal understanding and
generation in WeGen and show it achieves state-of-the-art performance across
various visual generation benchmarks. These also demonstrate the potential of
WeGen as a user-friendly design copilot as desired. The code and models will be
available at https://github.com/hzphzp/WeGen.
| new_dataset | 0.954095 |
2503.02459 | Dengke Zhang | Dengke Zhang, Quan Tang, Fagui Liu, Haiqing Mei, C. L. Philip Chen | Exploring Token-Level Augmentation in Vision Transformer for
Semi-Supervised Semantic Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semi-supervised semantic segmentation has witnessed remarkable advancements
in recent years. However, existing algorithms are based on convolutional neural
networks and directly applying them to Vision Transformers poses certain
limitations due to conceptual disparities. To this end, we propose TokenMix, a
data augmentation technique specifically designed for semi-supervised semantic
segmentation with Vision Transformers. TokenMix aligns well with the global
attention mechanism by mixing images at the token level, enhancing learning
capability for contextual information among image patches. We further
incorporate image augmentation and feature augmentation to promote the
diversity of augmentation. Moreover, to enhance consistency regularization, we
propose a dual-branch framework where each branch applies image and feature
augmentation to the input image. We conduct extensive experiments across
multiple benchmark datasets, including Pascal VOC 2012, Cityscapes, and COCO.
Results suggest that the proposed method outperforms state-of-the-art
algorithms with notable accuracy improvements, especially under limited
fine annotations.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 10:09:46 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 12:48:54 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Dengke",
""
],
[
"Tang",
"Quan",
""
],
[
"Liu",
"Fagui",
""
],
[
"Mei",
"Haiqing",
""
],
[
"Chen",
"C. L. Philip",
""
]
]
| TITLE: Exploring Token-Level Augmentation in Vision Transformer for
Semi-Supervised Semantic Segmentation
ABSTRACT: Semi-supervised semantic segmentation has witnessed remarkable advancements
in recent years. However, existing algorithms are based on convolutional neural
networks and directly applying them to Vision Transformers poses certain
limitations due to conceptual disparities. To this end, we propose TokenMix, a
data augmentation technique specifically designed for semi-supervised semantic
segmentation with Vision Transformers. TokenMix aligns well with the global
attention mechanism by mixing images at the token level, enhancing learning
capability for contextual information among image patches. We further
incorporate image augmentation and feature augmentation to promote the
diversity of augmentation. Moreover, to enhance consistency regularization, we
propose a dual-branch framework where each branch applies image and feature
augmentation to the input image. We conduct extensive experiments across
multiple benchmark datasets, including Pascal VOC 2012, Cityscapes, and COCO.
Results suggest that the proposed method outperforms state-of-the-art
algorithms with notable accuracy improvements, especially under limited
fine annotations.
| no_new_dataset | 0.944944 |
2503.02943 | Alexandre Alouadi | Alexandre Alouadi, Baptiste Barreau, Laurent Carlier, Huy\^en Pham | Robust time series generation via Schr\"odinger Bridge: a comprehensive
evaluation | 11 pages | null | null | null | cs.LG cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the generative capabilities of the Schr\"odinger Bridge (SB)
approach for time series. The SB framework formulates time series synthesis as
an entropic optimal interpolation transport problem between a reference
probability measure on path space and a target joint distribution. This results
in a stochastic differential equation over a finite horizon that accurately
captures the temporal dynamics of the target time series. While the SB approach
has been largely explored in fields like image generation, there is a scarcity
of studies for its application to time series. In this work, we bridge this gap
by conducting a comprehensive evaluation of the SB method's robustness and
generative performance. We benchmark it against state-of-the-art (SOTA) time
series generation methods across diverse datasets, assessing its strengths,
limitations, and capacity to model complex temporal dependencies. Our results
offer valuable insights into the SB framework's potential as a versatile and
robust tool for time series generation.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 19:01:30 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 15:12:00 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Alouadi",
"Alexandre",
""
],
[
"Barreau",
"Baptiste",
""
],
[
"Carlier",
"Laurent",
""
],
[
"Pham",
"Huyên",
""
]
]
| TITLE: Robust time series generation via Schr\"odinger Bridge: a comprehensive
evaluation
ABSTRACT: We investigate the generative capabilities of the Schr\"odinger Bridge (SB)
approach for time series. The SB framework formulates time series synthesis as
an entropic optimal interpolation transport problem between a reference
probability measure on path space and a target joint distribution. This results
in a stochastic differential equation over a finite horizon that accurately
captures the temporal dynamics of the target time series. While the SB approach
has been largely explored in fields like image generation, there is a scarcity
of studies for its application to time series. In this work, we bridge this gap
by conducting a comprehensive evaluation of the SB method's robustness and
generative performance. We benchmark it against state-of-the-art (SOTA) time
series generation methods across diverse datasets, assessing its strengths,
limitations, and capacity to model complex temporal dependencies. Our results
offer valuable insights into the SB framework's potential as a versatile and
robust tool for time series generation.
| no_new_dataset | 0.950088 |
2503.03091 | Haji Gul | Haji Gul, Ajaz Ahmad Bhat, Abdul Ghani Haji Naim | MuCo-KGC: Multi-Context-Aware Knowledge Graph Completion | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Knowledge graph completion (KGC) seeks to predict missing entities (e.g.,
heads or tails) or relationships in knowledge graphs (KGs), which often contain
incomplete data. Traditional embedding-based methods, such as TransE and
ComplEx, have improved tail entity prediction but struggle to generalize to
unseen entities during testing. Textual-based models mitigate this issue by
leveraging additional semantic context; however, their reliance on negative
triplet sampling introduces high computational overhead, semantic
inconsistencies, and data imbalance. Recent approaches, like KG-BERT, show
promise but depend heavily on entity descriptions, which are often unavailable
in KGs. Critically, existing methods overlook valuable structural information
in the KG related to the entities and relationships. To address these
challenges, we propose Multi-Context-Aware Knowledge Graph Completion
(MuCo-KGC), a novel model that utilizes contextual information from linked
entities and relations within the graph to predict tail entities. MuCo-KGC
eliminates the need for entity descriptions and negative triplet sampling,
significantly reducing computational complexity while enhancing performance.
Our experiments on standard datasets, including FB15k-237, WN18RR, CoDEx-S, and
CoDEx-M, demonstrate that MuCo-KGC outperforms state-of-the-art methods on
three datasets. Notably, MuCo-KGC improves MRR on the WN18RR, CoDEx-S, and
CoDEx-M datasets by $1.63\%$, $3.77\%$, and $20.15\%$, respectively,
demonstrating its effectiveness for KGC tasks.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 01:18:11 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 07:14:51 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Gul",
"Haji",
""
],
[
"Bhat",
"Ajaz Ahmad",
""
],
[
"Naim",
"Abdul Ghani Haji",
""
]
]
| TITLE: MuCo-KGC: Multi-Context-Aware Knowledge Graph Completion
ABSTRACT: Knowledge graph completion (KGC) seeks to predict missing entities (e.g.,
heads or tails) or relationships in knowledge graphs (KGs), which often contain
incomplete data. Traditional embedding-based methods, such as TransE and
ComplEx, have improved tail entity prediction but struggle to generalize to
unseen entities during testing. Textual-based models mitigate this issue by
leveraging additional semantic context; however, their reliance on negative
triplet sampling introduces high computational overhead, semantic
inconsistencies, and data imbalance. Recent approaches, like KG-BERT, show
promise but depend heavily on entity descriptions, which are often unavailable
in KGs. Critically, existing methods overlook valuable structural information
in the KG related to the entities and relationships. To address these
challenges, we propose Multi-Context-Aware Knowledge Graph Completion
(MuCo-KGC), a novel model that utilizes contextual information from linked
entities and relations within the graph to predict tail entities. MuCo-KGC
eliminates the need for entity descriptions and negative triplet sampling,
significantly reducing computational complexity while enhancing performance.
Our experiments on standard datasets, including FB15k-237, WN18RR, CoDEx-S, and
CoDEx-M, demonstrate that MuCo-KGC outperforms state-of-the-art methods on
three datasets. Notably, MuCo-KGC improves MRR on the WN18RR, CoDEx-S, and
CoDEx-M datasets by $1.63\%$, $3.77\%$, and $20.15\%$, respectively,
demonstrating its effectiveness for KGC tasks.
| no_new_dataset | 0.944893 |
2503.03122 | Zichao Li | Zichao Li, Xueru Wen, Jie Lou, Yuqiu Ji, Yaojie Lu, Xianpei Han,
Debing Zhang, Le Sun | The Devil Is in the Details: Tackling Unimodal Spurious Correlations for
Generalizable Multimodal Reward Models | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Multimodal Reward Models (MM-RMs) are crucial for aligning Large Language
Models (LLMs) with human preferences, particularly as LLMs increasingly
interact with multimodal data. However, we find that MM-RMs trained on existing
datasets often struggle to generalize to out-of-distribution data due to their
reliance on unimodal spurious correlations, primarily text-only shortcuts
within the training distribution, which prevents them from leveraging true
multimodal reward functions. To address this, we introduce a Shortcut-aware
MM-RM learning algorithm that mitigates this issue by dynamically reweighting
training samples, shifting the distribution toward better multimodal
understanding, and reducing dependence on unimodal spurious correlations. Our
experiments demonstrate significant improvements in generalization, downstream
task performance, and scalability, establishing a more robust framework for
multimodal reward modeling.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 02:37:41 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 02:34:53 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Zichao",
""
],
[
"Wen",
"Xueru",
""
],
[
"Lou",
"Jie",
""
],
[
"Ji",
"Yuqiu",
""
],
[
"Lu",
"Yaojie",
""
],
[
"Han",
"Xianpei",
""
],
[
"Zhang",
"Debing",
""
],
[
"Sun",
"Le",
""
]
]
| TITLE: The Devil Is in the Details: Tackling Unimodal Spurious Correlations for
Generalizable Multimodal Reward Models
ABSTRACT: Multimodal Reward Models (MM-RMs) are crucial for aligning Large Language
Models (LLMs) with human preferences, particularly as LLMs increasingly
interact with multimodal data. However, we find that MM-RMs trained on existing
datasets often struggle to generalize to out-of-distribution data due to their
reliance on unimodal spurious correlations, primarily text-only shortcuts
within the training distribution, which prevents them from leveraging true
multimodal reward functions. To address this, we introduce a Shortcut-aware
MM-RM learning algorithm that mitigates this issue by dynamically reweighting
training samples, shifting the distribution toward better multimodal
understanding, and reducing dependence on unimodal spurious correlations. Our
experiments demonstrate significant improvements in generalization, downstream
task performance, and scalability, establishing a more robust framework for
multimodal reward modeling.
| no_new_dataset | 0.944177 |
2503.03135 | Runze Wang | Runze Wang, Mingqi Yang, Yanming Shen | Bridging Molecular Graphs and Large Language Models | AAAI 2025 camera ready version | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | While Large Language Models (LLMs) have shown exceptional generalization
capabilities, their ability to process graph data, such as molecular
structures, remains limited. To bridge this gap, this paper proposes
Graph2Token, an efficient solution that aligns graph tokens to LLM tokens. The
key idea is to represent a graph token with the LLM token vocabulary, without
fine-tuning the LLM backbone. To achieve this goal, we first construct a
molecule-text paired dataset from multiple sources, including CHEBI and HMDB, to
train a graph structure encoder, which reduces the distance between graph and
text representations in the feature space. Then, we propose a novel alignment
strategy that associates a graph token with LLM tokens. To further unleash the
potential of LLMs, we collect molecular IUPAC name identifiers, which are
incorporated into the LLM prompts. By aligning molecular graphs as special
tokens, we can activate LLM generalization ability to molecular few-shot
learning. Extensive experiments on molecular classification and regression
tasks demonstrate the effectiveness of our proposed Graph2Token.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 03:15:38 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 09:51:05 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Runze",
""
],
[
"Yang",
"Mingqi",
""
],
[
"Shen",
"Yanming",
""
]
]
| TITLE: Bridging Molecular Graphs and Large Language Models
ABSTRACT: While Large Language Models (LLMs) have shown exceptional generalization
capabilities, their ability to process graph data, such as molecular
structures, remains limited. To bridge this gap, this paper proposes
Graph2Token, an efficient solution that aligns graph tokens to LLM tokens. The
key idea is to represent a graph token with the LLM token vocabulary, without
fine-tuning the LLM backbone. To achieve this goal, we first construct a
molecule-text paired dataset from multiple sources, including CHEBI and HMDB, to
train a graph structure encoder, which reduces the distance between graph and
text representations in the feature space. Then, we propose a novel alignment
strategy that associates a graph token with LLM tokens. To further unleash the
potential of LLMs, we collect molecular IUPAC name identifiers, which are
incorporated into the LLM prompts. By aligning molecular graphs as special
tokens, we can activate LLM generalization ability to molecular few-shot
learning. Extensive experiments on molecular classification and regression
tasks demonstrate the effectiveness of our proposed Graph2Token.
| no_new_dataset | 0.859782 |
2503.03205 | Ruida Wang | Ruida Wang, Rui Pan, Yuxin Li, Jipeng Zhang, Yizhen Jia, Shizhe Diao,
Renjie Pi, Junjie Hu, Tong Zhang | MA-LoT: Multi-Agent Lean-based Long Chain-of-Thought Reasoning enhances
Formal Theorem Proving | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Solving mathematical problems using computer-verifiable languages like Lean
has significantly impacted mathematical and computer science communities.
State-of-the-art methods utilize single Large Language Models (LLMs) as agents
or provers to either generate complete proofs or perform tree searches. However,
single-agent methods inherently lack a structured way to combine high-level
reasoning in Natural Language (NL) with Formal Language (FL) verification
feedback. To solve these issues, we propose MA-LoT: Multi-Agent Lean-based Long
Chain-of-Thought framework, (to the best of our knowledge), the first
multi-agent framework for Lean4 theorem proving that balances high-level NL
reasoning and FL verification in Long CoT. Using this structured interaction,
our approach enables deeper insights and long-term coherence in proof
generation, with which past methods struggle. We do this by leveraging emergent
formal reasoning ability in Long CoT using our novel LoT-Transfer Learning
training-inference pipeline. Extensive experiments show that our framework
achieves a 61.07% accuracy rate on the Lean4 version of the MiniF2F-Test
dataset, largely outperforming GPT-4 (22.95%), single-agent tree search
(InternLM-Step-Prover, 50.70%), and whole-proof generation (Godel-Prover,
55.33%) baselines. Furthermore, our findings highlight the potential of
combining Long CoT with formal verification for a more insightful generation in
a broader perspective.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 05:50:31 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 17:39:42 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Ruida",
""
],
[
"Pan",
"Rui",
""
],
[
"Li",
"Yuxin",
""
],
[
"Zhang",
"Jipeng",
""
],
[
"Jia",
"Yizhen",
""
],
[
"Diao",
"Shizhe",
""
],
[
"Pi",
"Renjie",
""
],
[
"Hu",
"Junjie",
""
],
[
"Zhang",
"Tong",
""
]
]
| TITLE: MA-LoT: Multi-Agent Lean-based Long Chain-of-Thought Reasoning enhances
Formal Theorem Proving
ABSTRACT: Solving mathematical problems using computer-verifiable languages like Lean
has significantly impacted mathematical and computer science communities.
State-of-the-art methods utilize single Large Language Models (LLMs) as agents
or provers to either generate complete proofs or perform tree searches. However,
single-agent methods inherently lack a structured way to combine high-level
reasoning in Natural Language (NL) with Formal Language (FL) verification
feedback. To solve these issues, we propose MA-LoT: Multi-Agent Lean-based Long
Chain-of-Thought framework, (to the best of our knowledge), the first
multi-agent framework for Lean4 theorem proving that balances high-level NL
reasoning and FL verification in Long CoT. Using this structured interaction,
our approach enables deeper insights and long-term coherence in proof
generation, with which past methods struggle. We do this by leveraging emergent
formal reasoning ability in Long CoT using our novel LoT-Transfer Learning
training-inference pipeline. Extensive experiments show that our framework
achieves a 61.07% accuracy rate on the Lean4 version of the MiniF2F-Test
dataset, largely outperforming GPT-4 (22.95%), single-agent tree search
(InternLM-Step-Prover, 50.70%), and whole-proof generation (Godel-Prover,
55.33%) baselines. Furthermore, our findings highlight the potential of
combining Long CoT with formal verification for a more insightful generation in
a broader perspective.
| no_new_dataset | 0.95594 |
2503.03302 | Akash Yadav | Akash Yadav and Eulalia Nualart | Differential Machine Learning for Time Series Prediction | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate time series prediction is challenging due to the inherent
nonlinearity and sensitivity to initial conditions. We propose a novel approach
that enhances neural network predictions through differential learning, which
involves training models on both the original time series and its differential
series. Specifically, we develop a differential long short-term memory
(Diff-LSTM) network that uses a shared LSTM cell to simultaneously process both
data streams, effectively capturing intrinsic patterns and temporal dynamics.
Evaluated on the Mackey-Glass, Lorenz, and R\"ossler chaotic time series, as
well as a real-world financial dataset from ACI Worldwide Inc., our results
demonstrate that the Diff-LSTM network outperforms prevalent models such as
recurrent neural networks, convolutional neural networks, and bidirectional and
encoder-decoder LSTM networks in both short-term and long-term predictions.
This framework offers a promising solution for enhancing time series
prediction, even when comprehensive knowledge of the underlying dynamics of the
time series is not fully available.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 09:36:57 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 02:42:26 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Yadav",
"Akash",
""
],
[
"Nualart",
"Eulalia",
""
]
]
| TITLE: Differential Machine Learning for Time Series Prediction
ABSTRACT: Accurate time series prediction is challenging due to the inherent
nonlinearity and sensitivity to initial conditions. We propose a novel approach
that enhances neural network predictions through differential learning, which
involves training models on both the original time series and its differential
series. Specifically, we develop a differential long short-term memory
(Diff-LSTM) network that uses a shared LSTM cell to simultaneously process both
data streams, effectively capturing intrinsic patterns and temporal dynamics.
Evaluated on the Mackey-Glass, Lorenz, and R\"ossler chaotic time series, as
well as a real-world financial dataset from ACI Worldwide Inc., our results
demonstrate that the Diff-LSTM network outperforms prevalent models such as
recurrent neural networks, convolutional neural networks, and bidirectional and
encoder-decoder LSTM networks in both short-term and long-term predictions.
This framework offers a promising solution for enhancing time series
prediction, even when comprehensive knowledge of the underlying dynamics of the
time series is not fully available.
| no_new_dataset | 0.950778 |
2503.03592 | Karl Audun Borgersen | Karl Audun Borgersen | English K_Quantization of LLMs Does Not Disproportionately Diminish
Multilingual Performance | 8 pages, 6 figures, v2 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | For consumer usage of locally deployed LLMs, the GGUF format and
k\_quantization are invaluable tools for maintaining the performance of the
original model while reducing it to sizes deployable with consumer-grade
hardware. The number of bits dedicated to each weight from the original model
is reduced based on how important they are thought to be during model
inference. This importance is arrived at through the application of an
'importance matrix'-a relatively small text document meant to be representative
of the LLM's standard use-cases. In the vast majority of quants available
online, this document is primarily written in English. It was therefore an open
question whether performance on English language tasks was preserved through
the sacrifice of multilingual performance and whether it can be preserved with
alternate importance matrices. This article investigates these hypotheses by
quantizing Llama3.3 70B on importance matrices written in three languages
(English, Norwegian, and Malayalam) and evaluating them on the MixEval dataset
in both English and Norwegian. All experiments yielded non-significant results,
indicating that current quantization practices do not
disproportionately harm multilingual performance.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 15:26:59 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 07:36:46 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Borgersen",
"Karl Audun",
""
]
]
| TITLE: English K_Quantization of LLMs Does Not Disproportionately Diminish
Multilingual Performance
ABSTRACT: For consumer usage of locally deployed LLMs, the GGUF format and
k\_quantization are invaluable tools for maintaining the performance of the
original model while reducing it to sizes deployable with consumer-grade
hardware. The number of bits dedicated to each weight from the original model
is reduced based on how important they are thought to be during model
inference. This importance is arrived at through the application of an
'importance matrix'-a relatively small text document meant to be representative
of the LLM's standard use-cases. In the vast majority of quants available
online, this document is primarily written in English. It was therefore an open
question whether performance on English language tasks was preserved through
the sacrifice of multilingual performance and whether it can be preserved with
alternate importance matrices. This article investigates these hypotheses by
quantizing Llama3.3 70B on importance matrices written in three languages
(English, Norwegian, and Malayalam) and evaluating them on the MixEval dataset
in both English and Norwegian. All experiments yielded non-significant results,
indicating that current quantization practices do not
disproportionately harm multilingual performance.
| no_new_dataset | 0.949856 |
2503.03594 | Haoran Fan | Haoran Fan, Bin Li, Yixuan Weng and Shoujun Zhou | Small but Mighty: Enhancing Time Series Forecasting with Lightweight
LLMs | 20 pages, 10 figures | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While LLMs have demonstrated remarkable potential in time series forecasting,
their practical deployment remains constrained by excessive computational
demands and memory footprints. Existing LLM-based approaches typically suffer
from three critical limitations: Inefficient parameter utilization in handling
numerical time series patterns; Modality misalignment between continuous
temporal signals and discrete text embeddings; and Inflexibility for real-time
expert knowledge integration. We present SMETimes, the first systematic
investigation of sub-3B parameter SLMs for efficient and accurate time series
forecasting. Our approach centers on three key innovations: A
statistically-enhanced prompting mechanism that bridges numerical time series
with textual semantics through descriptive statistical features; An adaptive
fusion embedding architecture that aligns temporal patterns with language model
token spaces through learnable parameters; And a dynamic mixture-of-experts
framework enabled by SLMs' computational efficiency, adaptively combining base
predictions with domain-specific models. Extensive evaluations across seven
benchmark datasets demonstrate that our 3B-parameter SLM achieves
state-of-the-art performance on five primary datasets while maintaining 3.8x
faster training and 5.2x lower memory consumption compared to 7B-parameter LLM
baselines. Notably, the proposed model exhibits better learning capabilities,
achieving 12.3% lower MSE than conventional LLMs. Ablation studies validate that
our statistical prompting and cross-modal fusion modules respectively
contribute 15.7% and 18.2% error reduction in long-horizon forecasting tasks.
By redefining the efficiency-accuracy trade-off landscape, this work
establishes SLMs as viable alternatives to resource-intensive LLMs for
practical time series forecasting. Code and models are available at
https://github.com/xiyan1234567/SMETimes.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 15:27:36 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 10:56:53 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Fan",
"Haoran",
""
],
[
"Li",
"Bin",
""
],
[
"Weng",
"Yixuan",
""
],
[
"Zhou",
"Shoujun",
""
]
]
| TITLE: Small but Mighty: Enhancing Time Series Forecasting with Lightweight
LLMs
ABSTRACT: While LLMs have demonstrated remarkable potential in time series forecasting,
their practical deployment remains constrained by excessive computational
demands and memory footprints. Existing LLM-based approaches typically suffer
from three critical limitations: Inefficient parameter utilization in handling
numerical time series patterns; Modality misalignment between continuous
temporal signals and discrete text embeddings; and Inflexibility for real-time
expert knowledge integration. We present SMETimes, the first systematic
investigation of sub-3B parameter SLMs for efficient and accurate time series
forecasting. Our approach centers on three key innovations: A
statistically-enhanced prompting mechanism that bridges numerical time series
with textual semantics through descriptive statistical features; An adaptive
fusion embedding architecture that aligns temporal patterns with language model
token spaces through learnable parameters; And a dynamic mixture-of-experts
framework enabled by SLMs' computational efficiency, adaptively combining base
predictions with domain-specific models. Extensive evaluations across seven
benchmark datasets demonstrate that our 3B-parameter SLM achieves
state-of-the-art performance on five primary datasets while maintaining 3.8x
faster training and 5.2x lower memory consumption compared to 7B-parameter LLM
baselines. Notably, the proposed model exhibits better learning capabilities,
achieving 12.3% lower MSE than conventional LLMs. Ablation studies validate that
our statistical prompting and cross-modal fusion modules respectively
contribute 15.7% and 18.2% error reduction in long-horizon forecasting tasks.
By redefining the efficiency-accuracy trade-off landscape, this work
establishes SLMs as viable alternatives to resource-intensive LLMs for
practical time series forecasting. Code and models are available at
https://github.com/xiyan1234567/SMETimes.
| no_new_dataset | 0.94366 |
2503.03874 | Hetarth Chopra | Hetarth Chopra, Vidhi Rambhia and Vikram Adve | LEWIS (LayEr WIse Sparsity) -- A Training Free Guided Model Merging
Approach | Accepted at ICLR 2025 Workshop: SLLM (Sparsity in Large Language
Models) | null | null | null | cs.LG cs.CL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As specialized large language models (LLMs) become increasingly prevalent,
model merging methods are being used to combine them to create a single
multi-task model without requiring any additional data or training. However,
these approaches fall short when the objective of merging is to increase the
downstream model's performance on a particular task-specific benchmark. In this
work, we propose LEWIS (Layer Wise Sparsity), a guided model-merging framework
that uses activation-based layer importance to dynamically adjust layer-wise
task-vector sparsity required for the merge process. LEWIS uses a calibration
dataset to prioritize critical layers during the task-vector pruning process
required for model merging. This approach guides existing merging methods by
preserving essential layer-wise task-specific knowledge while ensuring the
merged model performs the best at benchmarks resembling the calibration
dataset. Our experiments demonstrate the effectiveness of LEWIS with
performance improvements of up to 4 percent and 11.3 percent, respectively, for
code instruction-following and math-solving models created through model merging,
outperforming unguided data-less model merging approaches that use
uniform-sparsity.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 20:09:59 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 22:25:17 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Chopra",
"Hetarth",
""
],
[
"Rambhia",
"Vidhi",
""
],
[
"Adve",
"Vikram",
""
]
]
| TITLE: LEWIS (LayEr WIse Sparsity) -- A Training Free Guided Model Merging
Approach
ABSTRACT: As specialized large language models (LLMs) become increasingly prevalent,
model merging methods are being used to combine them to create a single
multi-task model without requiring any additional data or training. However,
these approaches fall short when the objective of merging is to increase the
downstream model's performance on a particular task-specific benchmark. In this
work, we propose LEWIS (Layer Wise Sparsity), a guided model-merging framework
that uses activation-based layer importance to dynamically adjust layer-wise
task-vector sparsity required for the merge process. LEWIS uses a calibration
dataset to prioritize critical layers during the task-vector pruning process
required for model merging. This approach guides existing merging methods by
preserving essential layer-wise task-specific knowledge while ensuring the
merged model performs the best at benchmarks resembling the calibration
dataset. Our experiments demonstrate the effectiveness of LEWIS with
performance improvements of up to 4 percent and 11.3 percent, respectively, for
code instruction-following and math-solving models created through model merging,
outperforming unguided data-less model merging approaches that use
uniform-sparsity.
| no_new_dataset | 0.94801 |
2503.04065 | Wenyu Lv | Feng Ni, Kui Huang, Yao Lu, Wenyu Lv, Guanzhong Wang, Zeyu Chen, Yi
Liu | PP-DocBee: Improving Multimodal Document Understanding Through a Bag of
Tricks | null | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid advancement of digitalization, various document images are
being applied more extensively in production and daily life, and there is an
increasingly urgent need for fast and accurate parsing of the content in
document images. Therefore, this report presents PP-DocBee, a novel multimodal
large language model designed for end-to-end document image understanding.
First, we develop a data synthesis strategy tailored to document scenarios in
which we build a diverse dataset to improve the model generalization. Then, we
apply a few training techniques, including dynamic proportional sampling, data
preprocessing, and OCR postprocessing strategies. Extensive evaluations
demonstrate the superior performance of PP-DocBee, achieving state-of-the-art
results on English document understanding benchmarks and even outperforming
existing open source and commercial models in Chinese document understanding.
The source code and pre-trained models are publicly available at
\href{https://github.com/PaddlePaddle/PaddleMIX}{https://github.com/PaddlePaddle/PaddleMIX}.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 03:43:21 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 03:22:24 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ni",
"Feng",
""
],
[
"Huang",
"Kui",
""
],
[
"Lu",
"Yao",
""
],
[
"Lv",
"Wenyu",
""
],
[
"Wang",
"Guanzhong",
""
],
[
"Chen",
"Zeyu",
""
],
[
"Liu",
"Yi",
""
]
]
| TITLE: PP-DocBee: Improving Multimodal Document Understanding Through a Bag of
Tricks
ABSTRACT: With the rapid advancement of digitalization, various document images are
being applied more extensively in production and daily life, and there is an
increasingly urgent need for fast and accurate parsing of the content in
document images. Therefore, this report presents PP-DocBee, a novel multimodal
large language model designed for end-to-end document image understanding.
First, we develop a data synthesis strategy tailored to document scenarios in
which we build a diverse dataset to improve the model generalization. Then, we
apply a few training techniques, including dynamic proportional sampling, data
preprocessing, and OCR postprocessing strategies. Extensive evaluations
demonstrate the superior performance of PP-DocBee, achieving state-of-the-art
results on English document understanding benchmarks and even outperforming
existing open source and commercial models in Chinese document understanding.
The source code and pre-trained models are publicly available at
\href{https://github.com/PaddlePaddle/PaddleMIX}{https://github.com/PaddlePaddle/PaddleMIX}.
| no_new_dataset | 0.94887 |
2503.04404 | Siamak Layeghy | Majed Luay, Siamak Layeghy, Seyedehfaezeh Hosseininoorbin, Mohanad
Sarhan, Nour Moustafa, Marius Portmann | Temporal Analysis of NetFlow Datasets for Network Intrusion Detection
Systems | null | null | null | null | cs.LG cs.CR cs.NI | http://creativecommons.org/licenses/by/4.0/ | This paper investigates the temporal analysis of NetFlow datasets for machine
learning (ML)-based network intrusion detection systems (NIDS). Although many
previous studies have highlighted the critical role of temporal features, such
as inter-packet arrival time and flow length/duration, in NIDS, the currently
available NetFlow datasets for NIDS lack these temporal features. This study
addresses this gap by creating and making publicly available a set of NetFlow
datasets that incorporate these temporal features [1]. With these temporal
features, we provide a comprehensive temporal analysis of NetFlow datasets by
examining the distribution of various features over time and presenting
time-series representations of NetFlow features. This temporal analysis has not
been previously provided in the existing literature. We also borrowed an idea
from signal processing, time frequency analysis, and tested it to see how
different the time frequency signal presentations (TFSPs) are for various
attacks. The results indicate that many attacks have unique patterns, which
could help ML models to identify them more easily.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 12:58:09 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 07:31:18 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Luay",
"Majed",
""
],
[
"Layeghy",
"Siamak",
""
],
[
"Hosseininoorbin",
"Seyedehfaezeh",
""
],
[
"Sarhan",
"Mohanad",
""
],
[
"Moustafa",
"Nour",
""
],
[
"Portmann",
"Marius",
""
]
]
| TITLE: Temporal Analysis of NetFlow Datasets for Network Intrusion Detection
Systems
ABSTRACT: This paper investigates the temporal analysis of NetFlow datasets for machine
learning (ML)-based network intrusion detection systems (NIDS). Although many
previous studies have highlighted the critical role of temporal features, such
as inter-packet arrival time and flow length/duration, in NIDS, the currently
available NetFlow datasets for NIDS lack these temporal features. This study
addresses this gap by creating and making publicly available a set of NetFlow
datasets that incorporate these temporal features [1]. With these temporal
features, we provide a comprehensive temporal analysis of NetFlow datasets by
examining the distribution of various features over time and presenting
time-series representations of NetFlow features. This temporal analysis has not
been previously provided in the existing literature. We also borrowed an idea
from signal processing, time frequency analysis, and tested it to see how
different the time frequency signal presentations (TFSPs) are for various
attacks. The results indicate that many attacks have unique patterns, which
could help ML models to identify them more easily.
| new_dataset | 0.96525 |
2503.04500 | Yu-Hsi Chen | Yu-Hsi Chen and Chin-Tien Wu | ReynoldsFlow: Exquisite Flow Estimation via Reynolds Transport Theorem | 10 pages, 3 figures, 3 tables | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Optical flow is a fundamental technique for motion estimation, widely applied
in video stabilization, interpolation, and object tracking. Traditional optical
flow estimation methods rely on restrictive assumptions like brightness
constancy and slow motion constraints. Recent deep learning-based flow
estimations require extensive training on large domain-specific datasets,
making them computationally demanding. Also, artificial intelligence (AI)
advances have enabled deep learning models to take advantage of optical flow as
an important feature for object tracking and motion analysis. Since optical
flow is commonly encoded in HSV for visualization, its conversion to RGB for
neural network processing is nonlinear and may introduce perceptual
distortions. These transformations amplify the sensitivity to estimation
errors, potentially affecting the predictive accuracy of the networks. To
address these challenges that are influential to the performance of downstream
network models, we propose Reynolds flow, a novel training-free flow estimation
inspired by the Reynolds transport theorem, offering a principled approach to
modeling complex motion dynamics. In addition to conventional HSV-based
visualization of Reynolds flow, we also introduce an RGB-encoded representation
of Reynolds flow designed to improve flow visualization and feature enhancement
for neural networks. We evaluated the effectiveness of Reynolds flow in
video-based tasks. Experimental results on three benchmarks, tiny object
detection on UAVDB, infrared object detection on Anti-UAV, and pose estimation
on GolfDB, demonstrate that networks trained with RGB-encoded Reynolds flow
achieve SOTA performance, exhibiting improved robustness and efficiency across
all tasks.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:49:28 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 17:47:41 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Chen",
"Yu-Hsi",
""
],
[
"Wu",
"Chin-Tien",
""
]
]
| TITLE: ReynoldsFlow: Exquisite Flow Estimation via Reynolds Transport Theorem
ABSTRACT: Optical flow is a fundamental technique for motion estimation, widely applied
in video stabilization, interpolation, and object tracking. Traditional optical
flow estimation methods rely on restrictive assumptions like brightness
constancy and slow motion constraints. Recent deep learning-based flow
estimations require extensive training on large domain-specific datasets,
making them computationally demanding. Also, artificial intelligence (AI)
advances have enabled deep learning models to take advantage of optical flow as
an important feature for object tracking and motion analysis. Since optical
flow is commonly encoded in HSV for visualization, its conversion to RGB for
neural network processing is nonlinear and may introduce perceptual
distortions. These transformations amplify the sensitivity to estimation
errors, potentially affecting the predictive accuracy of the networks. To
address these challenges, which affect the performance of downstream
network models, we propose Reynolds flow, a novel training-free flow estimation
inspired by the Reynolds transport theorem, offering a principled approach to
modeling complex motion dynamics. In addition to conventional HSV-based
visualization of Reynolds flow, we also introduce an RGB-encoded representation
of Reynolds flow designed to improve flow visualization and feature enhancement
for neural networks. We evaluated the effectiveness of Reynolds flow in
video-based tasks. Experimental results on three benchmarks, tiny object
detection on UAVDB, infrared object detection on Anti-UAV, and pose estimation
on GolfDB, demonstrate that networks trained with RGB-encoded Reynolds flow
achieve SOTA performance, exhibiting improved robustness and efficiency across
all tasks.
| no_new_dataset | 0.951369 |
2503.04626 | Yu Pan | Yu Pan, Chaozheng Wang, Zekai Wu, Qifan Wang, Min Zhang, Zenglin Xu | IDInit: A Universal and Stable Initialization Method for Neural Network
Training | Accepted in ICLR 2025 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks have achieved remarkable accomplishments in practice.
The success of these networks hinges on effective initialization methods, which
are vital for ensuring stable and rapid convergence during training. Recently,
initialization methods that maintain identity transition within layers have
shown good efficiency in network training. These techniques (e.g., Fixup) set
specific weights to zero to achieve identity control. However, the settings of
the remaining weights (e.g., Fixup uses random values to initialize non-zero
weights) will affect the inductive bias that is achieved only by a zero weight,
which may be harmful to training. Addressing this concern, we introduce fully
identical initialization (IDInit), a novel method that preserves identity in
both the main and sub-stem layers of residual networks. IDInit employs a padded
identity-like matrix to overcome rank constraints in non-square weight
matrices. Furthermore, we show the convergence problem of an identity matrix
can be solved by stochastic gradient descent. Additionally, we enhance the
universality of IDInit by processing higher-order weights and addressing dead
neuron problems. IDInit is a straightforward yet effective initialization
method, with improved convergence, stability, and performance across various
settings, including large-scale datasets and deep models.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 17:12:46 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 16:31:31 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Pan",
"Yu",
""
],
[
"Wang",
"Chaozheng",
""
],
[
"Wu",
"Zekai",
""
],
[
"Wang",
"Qifan",
""
],
[
"Zhang",
"Min",
""
],
[
"Xu",
"Zenglin",
""
]
]
| TITLE: IDInit: A Universal and Stable Initialization Method for Neural Network
Training
ABSTRACT: Deep neural networks have achieved remarkable accomplishments in practice.
The success of these networks hinges on effective initialization methods, which
are vital for ensuring stable and rapid convergence during training. Recently,
initialization methods that maintain identity transition within layers have
shown good efficiency in network training. These techniques (e.g., Fixup) set
specific weights to zero to achieve identity control. However, the settings of
the remaining weights (e.g., Fixup uses random values to initialize non-zero
weights) affect the inductive bias that is achieved only by a zero weight,
which may be harmful to training. Addressing this concern, we introduce fully
identical initialization (IDInit), a novel method that preserves identity in
both the main and sub-stem layers of residual networks. IDInit employs a padded
identity-like matrix to overcome rank constraints in non-square weight
matrices. Furthermore, we show that the convergence problem of an identity matrix
can be solved by stochastic gradient descent. Additionally, we enhance the
universality of IDInit by processing higher-order weights and addressing dead
neuron problems. IDInit is a straightforward yet effective initialization
method, with improved convergence, stability, and performance across various
settings, including large-scale datasets and deep models.
| no_new_dataset | 0.941439 |
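The IDInit record above hinges on a padded identity-like matrix that keeps identity transitions even when a weight matrix is non-square. The sketch below is one plausible reading of that idea in PyTorch, where the identity pattern is repeated across the extra rows; this tiling is an assumption made for illustration, not necessarily the exact construction used in the paper.

```python
import torch

def identity_like_init(out_features: int, in_features: int) -> torch.Tensor:
    """Weight matrix that acts as close to the identity as the shape allows.

    Square case: the exact identity. Non-square case: the identity pattern is
    repeated (an illustrative choice) so every output row passes one input through.
    """
    w = torch.zeros(out_features, in_features)
    for i in range(out_features):
        w[i, i % in_features] = 1.0
    return w

# Usage sketch: start a residual sub-stem layer as an identity-like map.
layer = torch.nn.Linear(128, 256, bias=False)   # weight shape is (out_features, in_features)
with torch.no_grad():
    layer.weight.copy_(identity_like_init(256, 128))
```

Initializing this way means the layer initially forwards its input (up to repetition or truncation of channels), which is the identity-transition property the record associates with stable, fast convergence.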
2503.04691 | Pengcheng Qiu | Pengcheng Qiu, Chaoyi Wu, Shuyu Liu, Weike Zhao, Zhuoxia Chen, Hongfei
Gu, Chuanjin Peng, Ya Zhang, Yanfeng Wang, Weidi Xie | Quantifying the Reasoning Abilities of LLMs on Real-world Clinical Cases | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in reasoning-enhanced large language models (LLMs), such
as DeepSeek-R1 and OpenAI-o3, have demonstrated significant progress. However,
their application in professional medical contexts remains underexplored,
particularly in evaluating the quality of their reasoning processes alongside
final outputs. Here, we introduce MedR-Bench, a benchmarking dataset of 1,453
structured patient cases, annotated with reasoning references derived from
clinical case reports. Spanning 13 body systems and 10 specialties, it includes
both common and rare diseases. To comprehensively evaluate LLM performance, we
propose a framework encompassing three critical tasks: examination recommendation,
diagnostic decision-making, and treatment planning, simulating the entire
patient care journey. To assess reasoning quality, we present the Reasoning
Evaluator, a novel automated system that objectively scores free-text reasoning
responses based on efficiency, factuality, and completeness using dynamic
cross-referencing and evidence checks. Using this benchmark, we evaluate five
state-of-the-art reasoning LLMs, including DeepSeek-R1, OpenAI-o3-mini, and
Gemini-2.0-Flash Thinking. Our results show that current LLMs achieve over
85% accuracy in relatively simple diagnostic tasks when provided with
sufficient examination results. However, performance declines in more complex
tasks, such as examination recommendation and treatment planning. While
reasoning outputs are generally reliable, with factuality scores exceeding 90%,
critical reasoning steps are frequently missed. These findings underscore both
the progress and limitations of clinical LLMs. Notably, open-source models like
DeepSeek-R1 are narrowing the gap with proprietary systems, highlighting their
potential to drive accessible and equitable advancements in healthcare.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 18:35:39 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 17:28:31 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Qiu",
"Pengcheng",
""
],
[
"Wu",
"Chaoyi",
""
],
[
"Liu",
"Shuyu",
""
],
[
"Zhao",
"Weike",
""
],
[
"Chen",
"Zhuoxia",
""
],
[
"Gu",
"Hongfei",
""
],
[
"Peng",
"Chuanjin",
""
],
[
"Zhang",
"Ya",
""
],
[
"Wang",
"Yanfeng",
""
],
[
"Xie",
"Weidi",
""
]
]
| TITLE: Quantifying the Reasoning Abilities of LLMs on Real-world Clinical Cases
ABSTRACT: Recent advancements in reasoning-enhanced large language models (LLMs), such
as DeepSeek-R1 and OpenAI-o3, have demonstrated significant progress. However,
their application in professional medical contexts remains underexplored,
particularly in evaluating the quality of their reasoning processes alongside
final outputs. Here, we introduce MedR-Bench, a benchmarking dataset of 1,453
structured patient cases, annotated with reasoning references derived from
clinical case reports. Spanning 13 body systems and 10 specialties, it includes
both common and rare diseases. To comprehensively evaluate LLM performance, we
propose a framework encompassing three critical tasks: examination recommendation,
diagnostic decision-making, and treatment planning, simulating the entire
patient care journey. To assess reasoning quality, we present the Reasoning
Evaluator, a novel automated system that objectively scores free-text reasoning
responses based on efficiency, factuality, and completeness using dynamic
cross-referencing and evidence checks. Using this benchmark, we evaluate five
state-of-the-art reasoning LLMs, including DeepSeek-R1, OpenAI-o3-mini, and
Gemini-2.0-Flash Thinking. Our results show that current LLMs achieve over
85% accuracy in relatively simple diagnostic tasks when provided with
sufficient examination results. However, performance declines in more complex
tasks, such as examination recommendation and treatment planning. While
reasoning outputs are generally reliable, with factuality scores exceeding 90%,
critical reasoning steps are frequently missed. These findings underscore both
the progress and limitations of clinical LLMs. Notably, open-source models like
DeepSeek-R1 are narrowing the gap with proprietary systems, highlighting their
potential to drive accessible and equitable advancements in healthcare.
| new_dataset | 0.960435 |
2503.04804 | Arturs Kanepajs | Arturs Kanepajs, Aditi Basu, Sankalpa Ghose, Constance Li, Akshat
Mehta, Ronak Mehta, Samuel David Tucker-Davis, Eric Zhou, Bob Fischer | What do Large Language Models Say About Animals? Investigating Risks of
Animal Harm in Generated Text | null | null | null | null | cs.CY cs.CL | http://creativecommons.org/licenses/by/4.0/ | As machine learning systems become increasingly embedded in human society,
their impact on the natural world continues to escalate. Technical evaluations
have addressed a variety of potential harms from large language models (LLMs)
towards humans and the environment, but there is little empirical work
regarding harms towards nonhuman animals. Following the growing recognition of
animal protection in regulatory and ethical AI frameworks, we present the
Animal Harm Assessment (AHA), a novel evaluation of risks of animal harm in
LLM-generated text. Our dataset comprises 1,850 curated questions from Reddit
post titles and 2,500 synthetic questions based on 50 animal categories (e.g.,
cats, reptiles) and 50 ethical scenarios, with a further 70-30 public-private
split. Scenarios include open-ended questions about how to treat animals,
practical scenarios with potential animal harm, and willingness-to-pay measures
for the prevention of animal harm. Using the LLM-as-a-judge framework, answers
are evaluated for their potential to increase or decrease harm, and evaluations
are debiased for the tendency to judge their own outputs more favorably. We
show that AHA produces meaningful evaluation results when applied to frontier
LLMs, revealing significant differences between models, animal categories,
scenarios, and subreddits. We conclude with future directions for technical
research and the challenges of building evaluations on complex social and moral
topics.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 15:32:18 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 03:02:59 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kanepajs",
"Arturs",
""
],
[
"Basu",
"Aditi",
""
],
[
"Ghose",
"Sankalpa",
""
],
[
"Li",
"Constance",
""
],
[
"Mehta",
"Akshat",
""
],
[
"Mehta",
"Ronak",
""
],
[
"Tucker-Davis",
"Samuel David",
""
],
[
"Zhou",
"Eric",
""
],
[
"Fischer",
"Bob",
""
]
]
| TITLE: What do Large Language Models Say About Animals? Investigating Risks of
Animal Harm in Generated Text
ABSTRACT: As machine learning systems become increasingly embedded in human society,
their impact on the natural world continues to escalate. Technical evaluations
have addressed a variety of potential harms from large language models (LLMs)
towards humans and the environment, but there is little empirical work
regarding harms towards nonhuman animals. Following the growing recognition of
animal protection in regulatory and ethical AI frameworks, we present the
Animal Harm Assessment (AHA), a novel evaluation of risks of animal harm in
LLM-generated text. Our dataset comprises 1,850 curated questions from Reddit
post titles and 2,500 synthetic questions based on 50 animal categories (e.g.,
cats, reptiles) and 50 ethical scenarios, with a further 70-30 public-private
split. Scenarios include open-ended questions about how to treat animals,
practical scenarios with potential animal harm, and willingness-to-pay measures
for the prevention of animal harm. Using the LLM-as-a-judge framework, answers
are evaluated for their potential to increase or decrease harm, and evaluations
are debiased for the tendency to judge their own outputs more favorably. We
show that AHA produces meaningful evaluation results when applied to frontier
LLMs, revealing significant differences between models, animal categories,
scenarios, and subreddits. We conclude with future directions for technical
research and the challenges of building evaluations on complex social and moral
topics.
| new_dataset | 0.955817 |
2503.04809 | Lang Mei | Lang Mei, Chong Chen, Jiaxin Mao | PanguIR Technical Report for NTCIR-18 AEOLLM Task | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | As large language models (LLMs) gain widespread attention in both academia
and industry, it becomes increasingly critical and challenging to effectively
evaluate their capabilities. Existing evaluation methods can be broadly
categorized into two types: manual evaluation and automatic evaluation. Manual
evaluation, while comprehensive, is often costly and resource-intensive.
Conversely, automatic evaluation offers greater scalability but is constrained
by the limitations of its evaluation criteria (dominated by reference-based
answers). To address these challenges, NTCIR-18 introduced the AEOLLM
(Automatic Evaluation of LLMs) task, aiming to encourage reference-free
evaluation methods that can overcome the limitations of existing approaches. In
this paper, to enhance the evaluation performance of the AEOLLM task, we
propose three key methods to improve the reference-free evaluation: 1)
Multi-model Collaboration: Leveraging multiple LLMs to approximate human
ratings across various subtasks; 2) Prompt Auto-optimization: Utilizing LLMs to
iteratively refine the initial task prompts based on evaluation feedback from
training samples; and 3) In-context Learning (ICL) Optimization: Based on the
multi-task evaluation feedback, we train a specialized in-context example
retrieval model, combined with a semantic relevance retrieval model, to jointly
identify the most effective in-context learning examples. Experiments conducted
on the final dataset demonstrate that our approach achieves superior
performance on the AEOLLM task.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 07:40:02 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 06:49:01 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Mei",
"Lang",
""
],
[
"Chen",
"Chong",
""
],
[
"Mao",
"Jiaxin",
""
]
]
| TITLE: PanguIR Technical Report for NTCIR-18 AEOLLM Task
ABSTRACT: As large language models (LLMs) gain widespread attention in both academia
and industry, it becomes increasingly critical and challenging to effectively
evaluate their capabilities. Existing evaluation methods can be broadly
categorized into two types: manual evaluation and automatic evaluation. Manual
evaluation, while comprehensive, is often costly and resource-intensive.
Conversely, automatic evaluation offers greater scalability but is constrained
by the limitations of its evaluation criteria (dominated by reference-based
answers). To address these challenges, NTCIR-18 introduced the AEOLLM
(Automatic Evaluation of LLMs) task, aiming to encourage reference-free
evaluation methods that can overcome the limitations of existing approaches. In
this paper, to enhance the evaluation performance of the AEOLLM task, we
propose three key methods to improve the reference-free evaluation: 1)
Multi-model Collaboration: Leveraging multiple LLMs to approximate human
ratings across various subtasks; 2) Prompt Auto-optimization: Utilizing LLMs to
iteratively refine the initial task prompts based on evaluation feedback from
training samples; and 3) In-context Learning (ICL) Optimization: Based on the
multi-task evaluation feedback, we train a specialized in-context example
retrieval model, combined with a semantic relevance retrieval model, to jointly
identify the most effective in-context learning examples. Experiments conducted
on the final dataset demonstrate that our approach achieves superior
performance on the AEOLLM task.
| no_new_dataset | 0.943971 |
2503.04870 | Devi Dutta Biswajeet | Devi Dutta Biswajeet and Sara Kadkhodaei | Leveraging Large Language Models to Address Data Scarcity in Machine
Learning: Applications in Graphene Synthesis | 20 pages, 10 figures, 4 tables; Supplementary Material with 13
figures and 4 tables | null | null | null | physics.comp-ph cond-mat.mtrl-sci cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Machine learning in materials science faces challenges due to limited
experimental data, as generating synthesis data is costly and time-consuming,
especially with in-house experiments. Mining data from existing literature
introduces issues like mixed data quality, inconsistent formats, and variations
in reporting experimental parameters, complicating the creation of consistent
features for the learning algorithm. Additionally, combining continuous and
discrete features can hinder the learning process with limited data. Here, we
propose strategies that utilize large language models (LLMs) to enhance machine
learning performance on a limited, heterogeneous dataset of graphene chemical
vapor deposition synthesis compiled from existing literature. These strategies
include prompting modalities for imputing missing data points and leveraging
large language model embeddings to encode the complex nomenclature of
substrates reported in chemical vapor deposition experiments. The proposed
strategies enhance graphene layer classification using a support vector machine
(SVM) model, increasing binary classification accuracy from 39% to 65% and
ternary accuracy from 52% to 72%. We compare the performance of the SVM and a
GPT-4 model, both trained and fine-tuned on the same data. Our results
demonstrate that the numerical classifier, when combined with LLM-driven data
enhancements, outperforms the standalone LLM predictor, highlighting that in
data-scarce scenarios, improving predictive learning with LLM strategies
requires more than simple fine-tuning on datasets. Instead, it necessitates
sophisticated approaches for data imputation and feature space homogenization
to achieve optimal performance. The proposed strategies emphasize data
enhancement techniques, offering a broadly applicable framework for improving
machine learning performance on scarce, inhomogeneous datasets.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 16:04:01 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 14:04:38 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Biswajeet",
"Devi Dutta",
""
],
[
"Kadkhodaei",
"Sara",
""
]
]
| TITLE: Leveraging Large Language Models to Address Data Scarcity in Machine
Learning: Applications in Graphene Synthesis
ABSTRACT: Machine learning in materials science faces challenges due to limited
experimental data, as generating synthesis data is costly and time-consuming,
especially with in-house experiments. Mining data from existing literature
introduces issues like mixed data quality, inconsistent formats, and variations
in reporting experimental parameters, complicating the creation of consistent
features for the learning algorithm. Additionally, combining continuous and
discrete features can hinder the learning process with limited data. Here, we
propose strategies that utilize large language models (LLMs) to enhance machine
learning performance on a limited, heterogeneous dataset of graphene chemical
vapor deposition synthesis compiled from existing literature. These strategies
include prompting modalities for imputing missing data points and leveraging
large language model embeddings to encode the complex nomenclature of
substrates reported in chemical vapor deposition experiments. The proposed
strategies enhance graphene layer classification using a support vector machine
(SVM) model, increasing binary classification accuracy from 39% to 65% and
ternary accuracy from 52% to 72%. We compare the performance of the SVM and a
GPT-4 model, both trained and fine-tuned on the same data. Our results
demonstrate that the numerical classifier, when combined with LLM-driven data
enhancements, outperforms the standalone LLM predictor, highlighting that in
data-scarce scenarios, improving predictive learning with LLM strategies
requires more than simple fine-tuning on datasets. Instead, it necessitates
sophisticated approaches for data imputation and feature space homogenization
to achieve optimal performance. The proposed strategies emphasize data
enhancement techniques, offering a broadly applicable framework for improving
machine learning performance on scarce, inhomogeneous datasets.
| no_new_dataset | 0.958693 |
2503.05120 | Zhao Wang | Xinghong Mai, Zhao Wang, Lijun Pan, Johannes Schorghuber, Peter
Kovacs, Jesus Carrete, Georg K. H. Madsen | Computing Anharmonic Infrared Spectra of Polycyclic Aromatic
Hydrocarbons Using Machine-Learning Molecular Dynamics | null | null | null | null | astro-ph.IM astro-ph.GA astro-ph.SR physics.chem-ph | http://creativecommons.org/licenses/by/4.0/ | Polycyclic aromatic hydrocarbons (PAHs) are key contributors to interstellar
aromatic infrared (IR) bands. However, current spectral databases for IR
emission analysis are limited by the omission of vibrational anharmonicity and
temperature effects, primarily because of the high computational cost of
conventional quantum chemical calculations (QCCs). In this work, we present a
machine learning-based molecular dynamics (MLMD) approach that efficiently
computes anharmonic IR spectra while incorporating temperature effects. MLMD
achieves predictive accuracy comparable to that of QCCs but with significantly
reduced computational cost, scaling linearly with the number of atoms in the
system. We applied MLMD to calculate the anharmonic spectra of 1704 PAHs in the
NASA Ames PAH IR Spectroscopic Database with up to 216 carbon atoms,
demonstrating its capability for high-throughput spectral calculations of large
molecular systems. Our results highlight MLMD's potential to enable the
development of extensive molecular spectral datasets, enhancing data-driven
analyses of astronomical IR spectra, particularly in anticipation of upcoming
data from the James Webb Space Telescope.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 03:46:03 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 09:05:49 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Mai",
"Xinghong",
""
],
[
"Wang",
"Zhao",
""
],
[
"Pan",
"Lijun",
""
],
[
"Schorghuber",
"Johannes",
""
],
[
"Kovacs",
"Peter",
""
],
[
"Carrete",
"Jesus",
""
],
[
"Madsen",
"Georg K. H.",
""
]
]
| TITLE: Computing Anharmonic Infrared Spectra of Polycyclic Aromatic
Hydrocarbons Using Machine-Learning Molecular Dynamics
ABSTRACT: Polycyclic aromatic hydrocarbons (PAHs) are key contributors to interstellar
aromatic infrared (IR) bands. However, current spectral databases for IR
emission analysis are limited by the omission of vibrational anharmonicity and
temperature effects, primarily because of the high computational cost of
conventional quantum chemical calculations (QCCs). In this work, we present a
machine learning-based molecular dynamics (MLMD) approach that efficiently
computes anharmonic IR spectra while incorporating temperature effects. MLMD
achieves predictive accuracy comparable to that of QCCs but with significantly
reduced computational cost, scaling linearly with the number of atoms in the
system. We applied MLMD to calculate the anharmonic spectra of 1704 PAHs in the
NASA Ames PAH IR Spectroscopic Database with up to 216 carbon atoms,
demonstrating its capability for high-throughput spectral calculations of large
molecular systems. Our results highlight MLMD's potential to enable the
development of extensive molecular spectral datasets, enhancing data-driven
analyses of astronomical IR spectra, particularly in anticipation of upcoming
data from the James Webb Space Telescope.
| no_new_dataset | 0.946892 |
2503.05132 | Hengguang Zhou | Hengguang Zhou and Xirui Li and Ruochen Wang and Minhao Cheng and
Tianyi Zhou and Cho-Jui Hsieh | R1-Zero's "Aha Moment" in Visual Reasoning on a 2B Non-SFT Model | 10 pages, 6 figures | null | null | null | cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recently DeepSeek R1 demonstrated how reinforcement learning with simple
rule-based incentives can enable autonomous development of complex reasoning in
large language models, characterized by the "aha moment", in which the model
manifests self-reflection and increased response length during training.
However, attempts to extend this success to multimodal reasoning often failed
to reproduce these key characteristics. In this report, we present the first
successful replication of these emergent characteristics for multimodal
reasoning on only a non-SFT 2B model. Starting with Qwen2-VL-2B and applying
reinforcement learning directly on the SAT dataset, our model achieves 59.47%
accuracy on CVBench, outperforming the base model by approximately ~30% and
exceeding both SFT setting by ~2%. In addition, we share our failed attempts
and insights in attempting to achieve R1-like reasoning using RL with instruct
models. aiming to shed light on the challenges involved. Our key observations
include: (1) applying RL on instruct model often results in trivial reasoning
trajectories, and (2) naive length reward are ineffective in eliciting
reasoning capabilities. The project code is available at
https://github.com/turningpoint-ai/VisualThinker-R1-Zero
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 04:21:47 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 01:52:08 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhou",
"Hengguang",
""
],
[
"Li",
"Xirui",
""
],
[
"Wang",
"Ruochen",
""
],
[
"Cheng",
"Minhao",
""
],
[
"Zhou",
"Tianyi",
""
],
[
"Hsieh",
"Cho-Jui",
""
]
]
| TITLE: R1-Zero's "Aha Moment" in Visual Reasoning on a 2B Non-SFT Model
ABSTRACT: Recently DeepSeek R1 demonstrated how reinforcement learning with simple
rule-based incentives can enable autonomous development of complex reasoning in
large language models, characterized by the "aha moment", in which the model
manifests self-reflection and increased response length during training.
However, attempts to extend this success to multimodal reasoning often failed
to reproduce these key characteristics. In this report, we present the first
successful replication of these emergent characteristics for multimodal
reasoning on only a non-SFT 2B model. Starting with Qwen2-VL-2B and applying
reinforcement learning directly on the SAT dataset, our model achieves 59.47%
accuracy on CVBench, outperforming the base model by approximately 30% and
exceeding both SFT settings by ~2%. In addition, we share our failed attempts
and insights from attempting to achieve R1-like reasoning using RL with instruct
models, aiming to shed light on the challenges involved. Our key observations
include: (1) applying RL to instruct models often results in trivial reasoning
trajectories, and (2) naive length rewards are ineffective in eliciting
reasoning capabilities. The project code is available at
https://github.com/turningpoint-ai/VisualThinker-R1-Zero
| no_new_dataset | 0.946151 |
2503.05200 | Pranshav Gajjar | Pranshav Gajjar, Vijay K. Shah | ORANSight-2.0: Foundational LLMs for O-RAN | null | null | null | null | cs.CL cs.AI cs.LG cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the transformative impact of Large Language Models (LLMs) across
critical domains such as healthcare, customer service, and business marketing,
their integration into Open Radio Access Networks (O-RAN) remains limited. This
gap is primarily due to the absence of domain-specific foundational models,
with existing solutions often relying on general-purpose LLMs that fail to
address the unique challenges and technical intricacies of O-RAN. To bridge
this gap, we introduce ORANSight-2.0 (O-RAN Insights), a pioneering initiative
aimed at developing specialized foundational LLMs tailored for O-RAN. Built on
18 LLMs spanning five open-source LLM frameworks, ORANSight-2.0 fine-tunes
models ranging from 1 to 70B parameters, significantly reducing reliance on
proprietary, closed-source models while enhancing performance for O-RAN. At the
core of ORANSight-2.0 is RANSTRUCT, a novel Retrieval-Augmented Generation
(RAG) based instruction-tuning framework that employs two LLM agents to create
high-quality instruction-tuning datasets. The generated dataset is then used to
fine-tune the 18 pre-trained open-source LLMs via QLoRA. To evaluate
ORANSight-2.0, we introduce srsRANBench, a novel benchmark designed for code
generation and codebase understanding in the context of srsRAN, a widely used
5G O-RAN stack. We also leverage ORANBench13K, an existing benchmark for
assessing O-RAN-specific knowledge. Our comprehensive evaluations demonstrate
that ORANSight-2.0 models outperform general-purpose and closed-source models,
such as ChatGPT-4o and Gemini, by 5.421% on ORANBench and 18.465% on
srsRANBench, achieving superior performance while maintaining lower
computational and energy costs. We also experiment with RAG-augmented variants
of ORANSight-2.0 LLMs and thoroughly evaluate their energy characteristics,
demonstrating costs for training, standard inference, and RAG-augmented
inference.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 07:44:31 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Gajjar",
"Pranshav",
""
],
[
"Shah",
"Vijay K.",
""
]
]
| TITLE: ORANSight-2.0: Foundational LLMs for O-RAN
ABSTRACT: Despite the transformative impact of Large Language Models (LLMs) across
critical domains such as healthcare, customer service, and business marketing,
their integration into Open Radio Access Networks (O-RAN) remains limited. This
gap is primarily due to the absence of domain-specific foundational models,
with existing solutions often relying on general-purpose LLMs that fail to
address the unique challenges and technical intricacies of O-RAN. To bridge
this gap, we introduce ORANSight-2.0 (O-RAN Insights), a pioneering initiative
aimed at developing specialized foundational LLMs tailored for O-RAN. Built on
18 LLMs spanning five open-source LLM frameworks, ORANSight-2.0 fine-tunes
models ranging from 1 to 70B parameters, significantly reducing reliance on
proprietary, closed-source models while enhancing performance for O-RAN. At the
core of ORANSight-2.0 is RANSTRUCT, a novel Retrieval-Augmented Generation
(RAG) based instruction-tuning framework that employs two LLM agents to create
high-quality instruction-tuning datasets. The generated dataset is then used to
fine-tune the 18 pre-trained open-source LLMs via QLoRA. To evaluate
ORANSight-2.0, we introduce srsRANBench, a novel benchmark designed for code
generation and codebase understanding in the context of srsRAN, a widely used
5G O-RAN stack. We also leverage ORANBench13K, an existing benchmark for
assessing O-RAN-specific knowledge. Our comprehensive evaluations demonstrate
that ORANSight-2.0 models outperform general-purpose and closed-source models,
such as ChatGPT-4o and Gemini, by 5.421% on ORANBench and 18.465% on
srsRANBench, achieving superior performance while maintaining lower
computational and energy costs. We also experiment with RAG-augmented variants
of ORANSight-2.0 LLMs and thoroughly evaluate their energy characteristics,
demonstrating costs for training, standard inference, and RAG-augmented
inference.
| no_new_dataset | 0.936634 |
2503.05379 | Jiaxing Zhao | Jiaxing Zhao, Xihan Wei, Liefeng Bo | R1-Omni: Explainable Omni-Multimodal Emotion Recognition with
Reinforcement Learning | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present the first application of Reinforcement Learning with
Verifiable Reward (RLVR) to an Omni-multimodal large language model in the
context of emotion recognition, a task where both visual and audio modalities
play crucial roles. We leverage RLVR to optimize the Omni model, significantly
enhancing its performance in three key aspects: reasoning capability, emotion
recognition accuracy, and generalization ability. The introduction of RLVR not
only improves the model's overall performance on in-distribution data but also
demonstrates superior robustness when evaluated on out-of-distribution
datasets. More importantly, the improved reasoning capability enables clear
analysis of the contributions of different modalities, particularly visual and
audio information, in the emotion recognition process. This provides valuable
insights into the optimization of multimodal large language models.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 12:46:42 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 07:11:14 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhao",
"Jiaxing",
""
],
[
"Wei",
"Xihan",
""
],
[
"Bo",
"Liefeng",
""
]
]
| TITLE: R1-Omni: Explainable Omni-Multimodal Emotion Recognition with
Reinforcement Learning
ABSTRACT: In this work, we present the first application of Reinforcement Learning with
Verifiable Reward (RLVR) to an Omni-multimodal large language model in the
context of emotion recognition, a task where both visual and audio modalities
play crucial roles. We leverage RLVR to optimize the Omni model, significantly
enhancing its performance in three key aspects: reasoning capability, emotion
recognition accuracy, and generalization ability. The introduction of RLVR not
only improves the model's overall performance on in-distribution data but also
demonstrates superior robustness when evaluated on out-of-distribution
datasets. More importantly, the improved reasoning capability enables clear
analysis of the contributions of different modalities, particularly visual and
audio information, in the emotion recognition process. This provides valuable
insights into the optimization of multimodal large language models.
| no_new_dataset | 0.948202 |
2503.05577 | Henrik Schopmans | Daniel Hollarek, Henrik Schopmans, Jona \"Ostreicher, Jonas Teufel,
Bin Cao, Adie Alwen, Simon Schweidler, Mriganka Singh, Tim Kodalle, Hanlin
Hu, Gregoire Heymans, Maged Abdelsamie, Arthur Hardiagon, Alexander
Wieczorek, Siarhei Zhuk, Ruth Schwaiger, Sebastian Siol, Fran\c{c}ois-Xavier
Coudert, Moritz Wolf, Carolin M. Sutter-Fella, Ben Breitung, Andrea M. Hodge,
Tong-yi Zhang, Pascal Friederich | opXRD: Open Experimental Powder X-ray Diffraction Database | null | null | null | null | cond-mat.mtrl-sci cs.LG | http://creativecommons.org/licenses/by/4.0/ | Powder X-ray diffraction (pXRD) experiments are a cornerstone for materials
structure characterization. Despite their widespread application, analyzing
pXRD diffractograms still presents a significant challenge to automation and a
bottleneck in high-throughput discovery in self-driving labs. Machine learning
promises to resolve this bottleneck by enabling automated powder diffraction
analysis. A notable difficulty in applying machine learning to this domain is
the lack of sufficiently sized experimental datasets, which has constrained
researchers to train primarily on simulated data. However, models trained on
simulated pXRD patterns showed limited generalization to experimental patterns,
particularly for low-quality experimental patterns with high noise levels and
elevated backgrounds. With the Open Experimental Powder X-Ray Diffraction
Database (opXRD), we provide an openly available and easily accessible dataset
of labeled and unlabeled experimental powder diffractograms. Labeled opXRD data
can be used to evaluate the performance of models on experimental data and
unlabeled opXRD data can help improve the performance of models on experimental
data, e.g. through transfer learning methods. We collected 92552
diffractograms, 2179 of them labeled, from a wide spectrum of materials
classes. We hope this ongoing effort can guide machine learning research toward
fully automated analysis of pXRD data and thus enable future self-driving
materials labs.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 16:59:18 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 07:35:46 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hollarek",
"Daniel",
""
],
[
"Schopmans",
"Henrik",
""
],
[
"Östreicher",
"Jona",
""
],
[
"Teufel",
"Jonas",
""
],
[
"Cao",
"Bin",
""
],
[
"Alwen",
"Adie",
""
],
[
"Schweidler",
"Simon",
""
],
[
"Singh",
"Mriganka",
""
],
[
"Kodalle",
"Tim",
""
],
[
"Hu",
"Hanlin",
""
],
[
"Heymans",
"Gregoire",
""
],
[
"Abdelsamie",
"Maged",
""
],
[
"Hardiagon",
"Arthur",
""
],
[
"Wieczorek",
"Alexander",
""
],
[
"Zhuk",
"Siarhei",
""
],
[
"Schwaiger",
"Ruth",
""
],
[
"Siol",
"Sebastian",
""
],
[
"Coudert",
"François-Xavier",
""
],
[
"Wolf",
"Moritz",
""
],
[
"Sutter-Fella",
"Carolin M.",
""
],
[
"Breitung",
"Ben",
""
],
[
"Hodge",
"Andrea M.",
""
],
[
"Zhang",
"Tong-yi",
""
],
[
"Friederich",
"Pascal",
""
]
]
| TITLE: opXRD: Open Experimental Powder X-ray Diffraction Database
ABSTRACT: Powder X-ray diffraction (pXRD) experiments are a cornerstone for materials
structure characterization. Despite their widespread application, analyzing
pXRD diffractograms still presents a significant challenge to automation and a
bottleneck in high-throughput discovery in self-driving labs. Machine learning
promises to resolve this bottleneck by enabling automated powder diffraction
analysis. A notable difficulty in applying machine learning to this domain is
the lack of sufficiently sized experimental datasets, which has constrained
researchers to train primarily on simulated data. However, models trained on
simulated pXRD patterns showed limited generalization to experimental patterns,
particularly for low-quality experimental patterns with high noise levels and
elevated backgrounds. With the Open Experimental Powder X-Ray Diffraction
Database (opXRD), we provide an openly available and easily accessible dataset
of labeled and unlabeled experimental powder diffractograms. Labeled opXRD data
can be used to evaluate the performance of models on experimental data and
unlabeled opXRD data can help improve the performance of models on experimental
data, e.g. through transfer learning methods. We collected 92552
diffractograms, 2179 of them labeled, from a wide spectrum of materials
classes. We hope this ongoing effort can guide machine learning research toward
fully automated analysis of pXRD data and thus enable future self-driving
materials labs.
| new_dataset | 0.87153 |
2503.05700 | William Marfo | William Marfo, Enrique A. Rico, Deepak K. Tosh, Shirley V. Moore | Network Anomaly Detection in Distributed Edge Computing Infrastructure | null | null | null | null | cs.DC cs.NI | http://creativecommons.org/licenses/by/4.0/ | As networks continue to grow in complexity and scale, detecting anomalies has
become increasingly challenging, particularly in diverse and geographically
dispersed environments. Traditional approaches often struggle with managing the
computational burden associated with analyzing large-scale network traffic to
identify anomalies. This paper introduces a distributed edge computing
framework that integrates federated learning with Apache Spark and Kubernetes
to address these challenges. We hypothesize that our approach, which enables
collaborative model training across distributed nodes, significantly enhances
the detection accuracy of network anomalies across different network types. By
leveraging distributed computing and containerization technologies, our
framework not only improves scalability and fault tolerance but also achieves
superior detection performance compared to state-of-the-art methods. Extensive
experiments on the UNSW-NB15 and ROAD datasets validate the effectiveness of
our approach, demonstrating statistically significant improvements in detection
accuracy and training efficiency over baseline models, as confirmed by
Mann-Whitney U and Kolmogorov-Smirnov tests (p < 0.05).
| [
{
"version": "v1",
"created": "Sat, 25 Jan 2025 01:34:31 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Marfo",
"William",
""
],
[
"Rico",
"Enrique A.",
""
],
[
"Tosh",
"Deepak K.",
""
],
[
"Moore",
"Shirley V.",
""
]
]
| TITLE: Network Anomaly Detection in Distributed Edge Computing Infrastructure
ABSTRACT: As networks continue to grow in complexity and scale, detecting anomalies has
become increasingly challenging, particularly in diverse and geographically
dispersed environments. Traditional approaches often struggle with managing the
computational burden associated with analyzing large-scale network traffic to
identify anomalies. This paper introduces a distributed edge computing
framework that integrates federated learning with Apache Spark and Kubernetes
to address these challenges. We hypothesize that our approach, which enables
collaborative model training across distributed nodes, significantly enhances
the detection accuracy of network anomalies across different network types. By
leveraging distributed computing and containerization technologies, our
framework not only improves scalability and fault tolerance but also achieves
superior detection performance compared to state-of-the-art methods. Extensive
experiments on the UNSW-NB15 and ROAD datasets validate the effectiveness of
our approach, demonstrating statistically significant improvements in detection
accuracy and training efficiency over baseline models, as confirmed by
Mann-Whitney U and Kolmogorov-Smirnov tests (p < 0.05).
| no_new_dataset | 0.946597 |
2503.05701 | Alberto Santamaria-Pang | Alberto Santamaria-Pang and Frank Tuan and Ross Campbell and Cindy
Zhang and Ankush Jindal and Roopa Surapur and Brad Holloman and Deanna
Hanisch and Rae Buckley and Carisa Cooney and Ivan Tarapov and Kimberly S.
Peairs and Brian Hasselfeld and Peter Greene | OPTIC: Optimizing Patient-Provider Triaging & Improving Communications
in Clinical Operations using GPT-4 Data Labeling and Model Distillation | 15 pages, 8 figures. submitted to Journal of the American Medical
Informatics Association | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | The COVID-19 pandemic has accelerated the adoption of telemedicine and
patient messaging through electronic medical portals (patient medical advice
requests, or PMARs). While these platforms enhance patient access to
healthcare, they have also increased the burden on healthcare providers due to
the surge in PMARs. This study seeks to develop an efficient tool for message
triaging to reduce physician workload and improve patient-provider
communication. We developed OPTIC (Optimizing Patient-Provider Triaging &
Improving Communications in Clinical Operations), a powerful message triaging
tool that utilizes GPT-4 for data labeling and BERT for model distillation. The
study used a dataset of 405,487 patient messaging encounters from Johns Hopkins
Medicine between January and June 2020. High-quality labeled data was generated
through GPT-4-based prompt engineering, which was then used to train a BERT
model to classify messages as "Admin" or "Clinical." The BERT model achieved
88.85% accuracy on the test set validated by GPT-4 labeling, with a sensitivity
of 88.29%, specificity of 89.38%, and an F1 score of 0.8842. BERTopic analysis
identified 81 distinct topics within the test data, with over 80% accuracy in
classifying 58 topics. The system was successfully deployed through Epic's
Nebula Cloud Platform, demonstrating its practical effectiveness in healthcare
settings.
| [
{
"version": "v1",
"created": "Wed, 5 Feb 2025 05:49:34 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Santamaria-Pang",
"Alberto",
""
],
[
"Tuan",
"Frank",
""
],
[
"Campbell",
"Ross",
""
],
[
"Zhang",
"Cindy",
""
],
[
"Jindal",
"Ankush",
""
],
[
"Surapur",
"Roopa",
""
],
[
"Holloman",
"Brad",
""
],
[
"Hanisch",
"Deanna",
""
],
[
"Buckley",
"Rae",
""
],
[
"Cooney",
"Carisa",
""
],
[
"Tarapov",
"Ivan",
""
],
[
"Peairs",
"Kimberly S.",
""
],
[
"Hasselfeld",
"Brian",
""
],
[
"Greene",
"Peter",
""
]
]
| TITLE: OPTIC: Optimizing Patient-Provider Triaging & Improving Communications
in Clinical Operations using GPT-4 Data Labeling and Model Distillation
ABSTRACT: The COVID-19 pandemic has accelerated the adoption of telemedicine and
patient messaging through electronic medical portals (patient medical advice
requests, or PMARs). While these platforms enhance patient access to
healthcare, they have also increased the burden on healthcare providers due to
the surge in PMARs. This study seeks to develop an efficient tool for message
triaging to reduce physician workload and improve patient-provider
communication. We developed OPTIC (Optimizing Patient-Provider Triaging &
Improving Communications in Clinical Operations), a powerful message triaging
tool that utilizes GPT-4 for data labeling and BERT for model distillation. The
study used a dataset of 405,487 patient messaging encounters from Johns Hopkins
Medicine between January and June 2020. High-quality labeled data was generated
through GPT-4-based prompt engineering, which was then used to train a BERT
model to classify messages as "Admin" or "Clinical." The BERT model achieved
88.85% accuracy on the test set validated by GPT-4 labeling, with a sensitivity
of 88.29%, specificity of 89.38%, and an F1 score of 0.8842. BERTopic analysis
identified 81 distinct topics within the test data, with over 80% accuracy in
classifying 58 topics. The system was successfully deployed through Epic's
Nebula Cloud Platform, demonstrating its practical effectiveness in healthcare
settings.
| no_new_dataset | 0.952794 |
2503.05703 | Jordi Armengol-Estap\'e | Jordi Armengol-Estap\'e, Quentin Carbonneaux, Tianjun Zhang, Aram H.
Markosyan, Volker Seeker, Chris Cummins, Melanie Kambadur, Michael F.P.
O'Boyle, Sida Wang, Gabriel Synnaeve, Hugh James Leather | What I cannot execute, I do not understand: Training and Evaluating LLMs
on Program Execution Traces | null | null | null | null | cs.LG cs.AI cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Code generation and understanding are critical capabilities for large
language models (LLMs). Thus, most LLMs are pretrained and fine-tuned on code
data. However, these datasets typically treat code as static strings and rarely
exploit the dynamic information about their execution. Building upon previous
work on trace modeling, we study Execution Tuning (E.T.), a training procedure
in which we explicitly model real-world program execution traces without
requiring manual test annotations. We train and evaluate models on different
execution trace granularities (line and instruction-level) and strategies on
the task of output prediction, obtaining around 80% accuracy on CruxEval and
MBPP, and showing the advantages of dynamic scratchpads (i.e., self-contained
intermediate computations updated by the model rather than accumulated as a
history of past computations) on long executions (up to 14k steps). Finally, we
discuss E.T.'s practical applications.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 14:42:13 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Armengol-Estapé",
"Jordi",
""
],
[
"Carbonneaux",
"Quentin",
""
],
[
"Zhang",
"Tianjun",
""
],
[
"Markosyan",
"Aram H.",
""
],
[
"Seeker",
"Volker",
""
],
[
"Cummins",
"Chris",
""
],
[
"Kambadur",
"Melanie",
""
],
[
"O'Boyle",
"Michael F. P.",
""
],
[
"Wang",
"Sida",
""
],
[
"Synnaeve",
"Gabriel",
""
],
[
"Leather",
"Hugh James",
""
]
]
| TITLE: What I cannot execute, I do not understand: Training and Evaluating LLMs
on Program Execution Traces
ABSTRACT: Code generation and understanding are critical capabilities for large
language models (LLMs). Thus, most LLMs are pretrained and fine-tuned on code
data. However, these datasets typically treat code as static strings and rarely
exploit the dynamic information about their execution. Building upon previous
work on trace modeling, we study Execution Tuning (E.T.), a training procedure
in which we explicitly model real-world program execution traces without
requiring manual test annotations. We train and evaluate models on different
execution trace granularities (line and instruction-level) and strategies on
the task of output prediction, obtaining around 80% accuracy on CruxEval and
MBPP, and showing the advantages of dynamic scratchpads (i.e., self-contained
intermediate computations updated by the model rather than accumulated as a
history of past computations) on long executions (up to 14k steps). Finally, we
discuss E.T.'s practical applications.
| no_new_dataset | 0.94625 |
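The record above trains and evaluates models on line-level program execution traces. As a concrete illustration of what a line-level trace contains, here is a minimal Python sketch built on `sys.settrace`; the (line number, snapshot of locals) format is a simplification assumed for illustration and is not the E.T. trace or dynamic-scratchpad format from the paper.

```python
import sys

def trace_lines(func, *args, **kwargs):
    """Run func and record (line_number, locals snapshot) for every executed line."""
    trace = []

    def tracer(frame, event, arg):
        # Only record line events from the traced function's own frame.
        if frame.f_code is func.__code__ and event == "line":
            trace.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args, **kwargs)
    finally:
        sys.settrace(None)   # always restore normal (untraced) execution
    return result, trace

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

result, steps = trace_lines(gcd, 48, 18)
print(result)                       # 6
for lineno, local_vars in steps:    # one entry per executed source line
    print(lineno, local_vars)
```

A model trained for output prediction would, in effect, have to reproduce the final `result` given the source and inputs, with or without access to intermediate states like `steps`.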
2503.05706 | Hanlin Tian | Hanlin Tian, Yuxiang Feng, Wei Zhou, Anupriya, Mohammed Quddus,
Yiannis Demiris, and Panagiotis Angeloudis | The Impact of Building-Induced Visibility Restrictions on Intersection
Accidents | TRBAM-24-02409 | null | null | null | cs.CY stat.AP | http://creativecommons.org/licenses/by/4.0/ | Traffic accidents, especially at intersections, are a major road safety
concern. Previous research has extensively studied intersection-related
accidents, but the effect of building-induced visibility restrictions at
intersections on accident rates has been under-explored, particularly in urban
contexts. Using OpenStreetMap data, the UK's geographic and accident datasets,
and the UK Traffic Count Dataset, we formulated a novel approach to estimate
accident risk at intersections. This method factors in the area visible to
drivers, accounting for views blocked by buildings - a distinctive aspect in
traffic accident analysis. Our findings reveal a notable correlation between
the road visible percentage and accident frequency. In the model, the
coefficient for "road visible percentage" is 1.7450, implying a strong positive
relationship. Incorporating this visibility factor enhances the model's
explanatory power, with increased R-square values and reduced AIC and BIC,
indicating a better data fit. This study underscores the essential role of
architectural layouts in road safety and suggests that urban planning
strategies should consider building-induced visibility restrictions. Such
consideration could be an effective approach to mitigate accident rates at
intersections. This research opens up new avenues for innovative, data-driven
urban planning and traffic management strategies, highlighting the importance
of visibility enhancements for safer roads.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2025 17:45:51 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Tian",
"Hanlin",
""
],
[
"Feng",
"Yuxiang",
""
],
[
"Zhou",
"Wei",
""
],
[
"Anupriya",
"",
""
],
[
"Quddus",
"Mohammed",
""
],
[
"Demiris",
"Yiannis",
""
],
[
"Angeloudis",
"Panagiotis",
""
]
]
| TITLE: The Impact of Building-Induced Visibility Restrictions on Intersection
Accidents
ABSTRACT: Traffic accidents, especially at intersections, are a major road safety
concern. Previous research has extensively studied intersection-related
accidents, but the effect of building-induced visibility restrictions at
intersections on accident rates has been under-explored, particularly in urban
contexts. Using OpenStreetMap data, the UK's geographic and accident datasets,
and the UK Traffic Count Dataset, we formulated a novel approach to estimate
accident risk at intersections. This method factors in the area visible to
drivers, accounting for views blocked by buildings - a distinctive aspect in
traffic accident analysis. Our findings reveal a notable correlation between
the road visible percentage and accident frequency. In the model, the
coefficient for "road visible percentage" is 1.7450, implying a strong positive
relationship. Incorporating this visibility factor enhances the model's
explanatory power, with increased R-square values and reduced AIC and BIC,
indicating a better data fit. This study underscores the essential role of
architectural layouts in road safety and suggests that urban planning
strategies should consider building-induced visibility restrictions. Such
consideration could be an effective approach to mitigate accident rates at
intersections. This research opens up new avenues for innovative, data-driven
urban planning and traffic management strategies, highlighting the importance
of visibility enhancements for safer roads.
| no_new_dataset | 0.9455 |
2503.05707 | Anton Bazdyrev | Anton Bazdyrev | Russo-Ukrainian war disinformation detection in suspicious Telegram
channels | CEUR-WS, Vol-3777 ProfIT AI 2024 4th International Workshop of
IT-professionals on Artificial Intelligence 2024 | null | null | null | cs.CY cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | The paper proposes an advanced approach for identifying disinformation on
Telegram channels related to the Russo-Ukrainian conflict, utilizing
state-of-the-art (SOTA) deep learning techniques and transfer learning.
Traditional methods of disinformation detection, often relying on manual
verification or rule-based systems, are increasingly inadequate in the face of
rapidly evolving propaganda tactics and the massive volume of data generated
daily. To address these challenges, the proposed system employs deep learning
algorithms, including LLM models, which are fine-tuned on a custom dataset
encompassing verified disinformation and legitimate content. The paper's
findings indicate that this approach significantly outperforms traditional
machine learning techniques, offering enhanced contextual understanding and
adaptability to emerging disinformation strategies.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2025 19:37:37 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Bazdyrev",
"Anton",
""
]
]
| TITLE: Russo-Ukrainian war disinformation detection in suspicious Telegram
channels
ABSTRACT: The paper proposes an advanced approach for identifying disinformation on
Telegram channels related to the Russo-Ukrainian conflict, utilizing
state-of-the-art (SOTA) deep learning techniques and transfer learning.
Traditional methods of disinformation detection, often relying on manual
verification or rule-based systems, are increasingly inadequate in the face of
rapidly evolving propaganda tactics and the massive volume of data generated
daily. To address these challenges, the proposed system employs deep learning
algorithms, including LLM models, which are fine-tuned on a custom dataset
encompassing verified disinformation and legitimate content. The paper's
findings indicate that this approach significantly outperforms traditional
machine learning techniques, offering enhanced contextual understanding and
adaptability to emerging disinformation strategies.
| new_dataset | 0.957078 |
2503.05709 | Shadeeb Hossain | Shadeeb Hossain | Using Artificial Intelligence to Improve Classroom Learning Experience | null | null | null | null | cs.CY cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper explores advancements in Artificial Intelligence technologies to
enhance classroom learning, highlighting contributions from companies like IBM,
Microsoft, Google, and ChatGPT, as well as the potential of brain signal
analysis. The focus is on improving students learning experiences by using
Machine Learning algorithms to : identify a student preferred learning style
and predict academic dropout risk. A Logistic Regression algorithm is applied
for binary classification using six predictor variables, such as assessment
scores, lesson duration, and preferred learning style, to accurately identify
learning preferences. A case study, with 76,519 candidates and 35 predictor
variables, assesses academic dropout risk using Logistic Regression, achieving
a test accuracy of 87.39%. In comparison, the Stochastic Gradient Descent
classifier achieved an accuracy of 83.1% on the same dataset.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2025 00:15:37 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hossain",
"Shadeeb",
""
]
]
| TITLE: Using Artificial Intelligence to Improve Classroom Learning Experience
ABSTRACT: This paper explores advancements in Artificial Intelligence technologies to
enhance classroom learning, highlighting contributions from companies like IBM,
Microsoft, Google, and ChatGPT, as well as the potential of brain signal
analysis. The focus is on improving students' learning experiences by using
Machine Learning algorithms to identify a student's preferred learning style
and predict academic dropout risk. A Logistic Regression algorithm is applied
for binary classification using six predictor variables, such as assessment
scores, lesson duration, and preferred learning style, to accurately identify
learning preferences. A case study, with 76,519 candidates and 35 predictor
variables, assesses academic dropout risk using Logistic Regression, achieving
a test accuracy of 87.39%. In comparison, the Stochastic Gradient Descent
classifier achieved an accuracy of 83.1% on the same dataset.
| no_new_dataset | 0.955693 |
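The record above compares logistic regression against a stochastic gradient descent classifier for dropout-risk prediction over 35 predictor variables. Below is a minimal scikit-learn sketch of that comparison; the synthetic data is a stand-in (the study's 76,519-candidate dataset is not reproduced here), so the printed accuracies will not match the reported 87.39% and 83.1%.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the 35 predictor variables and a binary dropout label.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 35))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Batch solver vs. the same linear model fitted by stochastic gradient descent.
logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)
sgd = SGDClassifier(loss="log_loss",  # "log" in older scikit-learn releases
                    max_iter=1000, random_state=0).fit(X_train, y_train)

print("LogisticRegression accuracy:", accuracy_score(y_test, logreg.predict(X_test)))
print("SGDClassifier accuracy:     ", accuracy_score(y_test, sgd.predict(X_test)))
```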
2503.05713 | Yupeng Chen | Yupeng Chen, Xiaoyu Zhang, Yixian Huang, Qian Xie | Beyond English: Unveiling Multilingual Bias in LLM Copyright Compliance | Work in progress | null | null | null | cs.CY cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have raised significant concerns regarding the
fair use of copyright-protected content. While prior studies have examined the
extent to which LLMs reproduce copyrighted materials, they have predominantly
focused on English, neglecting multilingual dimensions of copyright protection.
In this work, we investigate multilingual biases in LLM copyright protection by
addressing two key questions: (1) Do LLMs exhibit bias in protecting
copyrighted works across languages? (2) Is it easier to elicit copyrighted
content using prompts in specific languages? To explore these questions, we
construct a dataset of popular song lyrics in English, French, Chinese, and
Korean and systematically probe seven LLMs using prompts in these languages.
Our findings reveal significant imbalances in LLMs' handling of copyrighted
content, both in terms of the language of the copyrighted material and the
language of the prompt. These results highlight the need for further research
and development of more robust, language-agnostic copyright protection
mechanisms to ensure fair and consistent protection across languages.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2025 16:59:10 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Chen",
"Yupeng",
""
],
[
"Zhang",
"Xiaoyu",
""
],
[
"Huang",
"Yixian",
""
],
[
"Xie",
"Qian",
""
]
]
| TITLE: Beyond English: Unveiling Multilingual Bias in LLM Copyright Compliance
ABSTRACT: Large Language Models (LLMs) have raised significant concerns regarding the
fair use of copyright-protected content. While prior studies have examined the
extent to which LLMs reproduce copyrighted materials, they have predominantly
focused on English, neglecting multilingual dimensions of copyright protection.
In this work, we investigate multilingual biases in LLM copyright protection by
addressing two key questions: (1) Do LLMs exhibit bias in protecting
copyrighted works across languages? (2) Is it easier to elicit copyrighted
content using prompts in specific languages? To explore these questions, we
construct a dataset of popular song lyrics in English, French, Chinese, and
Korean and systematically probe seven LLMs using prompts in these languages.
Our findings reveal significant imbalances in LLMs' handling of copyrighted
content, both in terms of the language of the copyrighted material and the
language of the prompt. These results highlight the need for further research
and development of more robust, language-agnostic copyright protection
mechanisms to ensure fair and consistent protection across languages.
| new_dataset | 0.95877 |
2503.05720 | Marco Antonio Stranisci | Soda Marem Lo, Oscar Araque, Rajesh Sharma, Marco Antonio Stranisci | That is Unacceptable: the Moral Foundations of Canceling | null | null | null | null | cs.CY cs.CL | http://creativecommons.org/licenses/by/4.0/ | Canceling is a morally-driven phenomenon that hinders the development of safe
social media platforms and contributes to ideological polarization. To address
this issue we present the Canceling Attitudes Detection (CADE) dataset, an
annotated corpus of canceling incidents aimed at exploring the factors of
disagreement in evaluating people's canceling attitudes on social media.
Specifically, we study the impact of annotators' morality in their perception
of canceling, showing that morality is an independent axis for the explanation
of disagreement on this phenomenon. Annotator's judgments heavily depend on the
type of controversial events and involved celebrities. This shows the need to
develop more event-centric datasets to better understand how harms are
perpetrated in social media and to develop more aware technologies for their
detection.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 13:01:06 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Lo",
"Soda Marem",
""
],
[
"Araque",
"Oscar",
""
],
[
"Sharma",
"Rajesh",
""
],
[
"Stranisci",
"Marco Antonio",
""
]
]
| TITLE: That is Unacceptable: the Moral Foundations of Canceling
ABSTRACT: Canceling is a morally-driven phenomenon that hinders the development of safe
social media platforms and contributes to ideological polarization. To address
this issue we present the Canceling Attitudes Detection (CADE) dataset, an
annotated corpus of canceling incidents aimed at exploring the factors of
disagreement in evaluating people's canceling attitudes on social media.
Specifically, we study the impact of annotators' morality in their perception
of canceling, showing that morality is an independent axis for the explanation
of disagreement on this phenomenon. Annotator's judgments heavily depend on the
type of controversial events and involved celebrities. This shows the need to
develop more event-centric datasets to better understand how harms are
perpetrated in social media and to develop more aware technologies for their
detection.
| new_dataset | 0.954732 |
2503.05721 | Marco Antonio Stranisci | Marco Antonio Stranisci, Christian Hardmeier | What Are They Filtering Out? A Survey of Filtering Strategies for Harm
Reduction in Pretraining Datasets | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Data filtering strategies are a crucial component to develop safe Large
Language Models (LLM), since they support the removal of harmful contents from
pretraining datasets. There is a lack of research on the actual impact of these
strategies on groups vulnerable to discrimination, though, and their
effectiveness has not been yet systematically addressed. In this paper we
present a benchmark study of data filtering strategies for harm reduction aimed
at providing a systematic overview on these approaches. We survey 55 technical
reports of English LMs and LLMs to identify the existing filtering strategies
in literature and implement an experimental setting to test their impact
against vulnerable groups. Our results show that the positive impact that
strategies have in reducing harmful contents from documents has the side effect
of increasing the underrepresentation of groups vulnerable to discrimination in
datasets.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 13:10:57 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Stranisci",
"Marco Antonio",
""
],
[
"Hardmeier",
"Christian",
""
]
]
| TITLE: What Are They Filtering Out? A Survey of Filtering Strategies for Harm
Reduction in Pretraining Datasets
ABSTRACT: Data filtering strategies are a crucial component to develop safe Large
Language Models (LLM), since they support the removal of harmful contents from
pretraining datasets. There is a lack of research on the actual impact of these
strategies on groups vulnerable to discrimination, though, and their
effectiveness has not been yet systematically addressed. In this paper we
present a benchmark study of data filtering strategies for harm reduction aimed
at providing a systematic overview on these approaches. We survey 55 technical
reports of English LMs and LLMs to identify the existing filtering strategies
in literature and implement an experimental setting to test their impact
against vulnerable groups. Our results show that the positive impact that
strategies have in reducing harmful contents from documents has the side effect
of increasing the underrepresentation of groups vulnerable to discrimination in
datasets.
| no_new_dataset | 0.940898 |
2503.05729 | Alberto Nogales | Blanca Mellor-Marsa, Alfredo Guitian, Andrew Coney, Berta Padilla and
Alberto Nogales | Discovering the influence of personal features in psychological
processes using Artificial Intelligence techniques: the case of COVID19
lockdown in Spain | null | null | null | null | cs.CY cs.LG | http://creativecommons.org/licenses/by/4.0/ | At the end of 2019, an outbreak of a novel coronavirus was reported in China,
leading to the COVID-19 pandemic. In Spain, the first cases were detected in
late January 2020, and by mid-March, infections had surpassed 5,000. In mid-March,
the Spanish government started a nationwide lockdown to contain the spread of
the virus. While isolation measures were necessary, they posed significant
psychological and socioeconomic challenges, particularly for vulnerable
populations. Understanding the psychological impact of lockdown and the factors
influencing mental health is crucial for informing future public health
policies. This study analyzes the influence of personal, socioeconomic, general
health and living condition factors on psychological states during lockdown
using AI techniques. A dataset collected through an online questionnaire was
processed using two workflows, each structured into three stages. First,
individuals were categorized based on psychological assessments, either
directly or in combination with unsupervised learning techniques. Second,
various Machine Learning classifiers were trained to distinguish between the
identified groups. Finally, feature importance analysis was conducted to
identify the most influential variables related to different psychological
conditions. The evaluated models demonstrated strong performance, with accuracy
exceeding 80% and often surpassing 90%, particularly for Random Forest,
Decision Trees, and Support Vector Machines. Sensitivity and specificity
analyses revealed that models performed well across different psychological
conditions, with the health impacts subset showing the highest reliability. For
diagnosing vulnerability, models achieved over 90% accuracy, except for less
vulnerable individuals using living environment and economic status features,
where performance was slightly lower.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 19:54:26 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Mellor-Marsa",
"Blanca",
""
],
[
"Guitian",
"Alfredo",
""
],
[
"Coney",
"Andrew",
""
],
[
"Padilla",
"Berta",
""
],
[
"Nogales",
"Alberto",
""
]
]
| TITLE: Discovering the influence of personal features in psychological
processes using Artificial Intelligence techniques: the case of COVID19
lockdown in Spain
ABSTRACT: At the end of 2019, an outbreak of a novel coronavirus was reported in China,
leading to the COVID-19 pandemic. In Spain, the first cases were detected in
late January 2020, and by mid-March, infections had surpassed 5,000. In mid-March,
the Spanish government started a nationwide lockdown to contain the spread of
the virus. While isolation measures were necessary, they posed significant
psychological and socioeconomic challenges, particularly for vulnerable
populations. Understanding the psychological impact of lockdown and the factors
influencing mental health is crucial for informing future public health
policies. This study analyzes the influence of personal, socioeconomic, general
health and living condition factors on psychological states during lockdown
using AI techniques. A dataset collected through an online questionnaire was
processed using two workflows, each structured into three stages. First,
individuals were categorized based on psychological assessments, either
directly or in combination with unsupervised learning techniques. Second,
various Machine Learning classifiers were trained to distinguish between the
identified groups. Finally, feature importance analysis was conducted to
identify the most influential variables related to different psychological
conditions. The evaluated models demonstrated strong performance, with accuracy
exceeding 80% and often surpassing 90%, particularly for Random Forest,
Decision Trees, and Support Vector Machines. Sensitivity and specificity
analyses revealed that models performed well across different psychological
conditions, with the health impacts subset showing the highest reliability. For
diagnosing vulnerability, models achieved over 90% accuracy, except for less
vulnerable individuals using living environment and economic status features,
where performance was slightly lower.
| no_new_dataset | 0.933854 |
2503.05730 | Lingkai Kong | Lingkai Kong, Haichuan Wang, Yuqi Pan, Cheol Woo Kim, Mingxiao Song,
Alayna Nguyen, Tonghan Wang, Haifeng Xu, Milind Tambe | Robust Optimization with Diffusion Models for Green Security | null | null | null | null | cs.CY cs.AI | http://creativecommons.org/licenses/by/4.0/ | In green security, defenders must forecast adversarial behavior, such as
poaching, illegal logging, and illegal fishing, to plan effective patrols.
These behaviors are often highly uncertain and complex. Prior work has leveraged
game theory to design robust patrol strategies to handle uncertainty, but
existing adversarial behavior models primarily rely on Gaussian processes or
linear models, which lack the expressiveness needed to capture intricate
behavioral patterns. To address this limitation, we propose a conditional
diffusion model for adversary behavior modeling, leveraging its strong
distribution-fitting capabilities. To the best of our knowledge, this is the
first application of diffusion models in the green security domain. Integrating
diffusion models into game-theoretic optimization, however, presents new
challenges, including a constrained mixed strategy space and the need to sample
from an unnormalized distribution to estimate utilities. To tackle these
challenges, we introduce a mixed strategy of mixed strategies and employ a
twisted Sequential Monte Carlo (SMC) sampler for accurate sampling.
Theoretically, our algorithm is guaranteed to converge to an epsilon
equilibrium with high probability using a finite number of iterations and
samples. Empirically, we evaluate our approach on both synthetic and real-world
poaching datasets, demonstrating its effectiveness.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2025 05:30:46 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kong",
"Lingkai",
""
],
[
"Wang",
"Haichuan",
""
],
[
"Pan",
"Yuqi",
""
],
[
"Kim",
"Cheol Woo",
""
],
[
"Song",
"Mingxiao",
""
],
[
"Nguyen",
"Alayna",
""
],
[
"Wang",
"Tonghan",
""
],
[
"Xu",
"Haifeng",
""
],
[
"Tambe",
"Milind",
""
]
]
| TITLE: Robust Optimization with Diffusion Models for Green Security
ABSTRACT: In green security, defenders must forecast adversarial behavior, such as
poaching, illegal logging, and illegal fishing, to plan effective patrols.
These behaviors are often highly uncertain and complex. Prior work has leveraged
game theory to design robust patrol strategies to handle uncertainty, but
existing adversarial behavior models primarily rely on Gaussian processes or
linear models, which lack the expressiveness needed to capture intricate
behavioral patterns. To address this limitation, we propose a conditional
diffusion model for adversary behavior modeling, leveraging its strong
distribution-fitting capabilities. To the best of our knowledge, this is the
first application of diffusion models in the green security domain. Integrating
diffusion models into game-theoretic optimization, however, presents new
challenges, including a constrained mixed strategy space and the need to sample
from an unnormalized distribution to estimate utilities. To tackle these
challenges, we introduce a mixed strategy of mixed strategies and employ a
twisted Sequential Monte Carlo (SMC) sampler for accurate sampling.
Theoretically, our algorithm is guaranteed to converge to an epsilon
equilibrium with high probability using a finite number of iterations and
samples. Empirically, we evaluate our approach on both synthetic and real-world
poaching datasets, demonstrating its effectiveness.
| no_new_dataset | 0.945801 |
2503.05731 | Sarah Luger | Shaona Ghosh, Heather Frase, Adina Williams, Sarah Luger, Paul
R\"ottger, Fazl Barez, Sean McGregor, Kenneth Fricklas, Mala Kumar, Quentin
Feuillade--Montixi, Kurt Bollacker, Felix Friedrich, Ryan Tsang, Bertie
Vidgen, Alicia Parrish, Chris Knotz, Eleonora Presani, Jonathan Bennion,
Marisa Ferrara Boston, Mike Kuniavsky, Wiebke Hutiri, James Ezick, Malek Ben
Salem, Rajat Sahay, Sujata Goswami, Usman Gohar, Ben Huang, Supheakmungkol
Sarin, Elie Alhajjar, Canyu Chen, Roman Eng, Kashyap Ramanandula Manjusha,
Virendra Mehta, Eileen Long, Murali Emani, Natan Vidra, Benjamin Rukundo,
Abolfazl Shahbazi, Kongtao Chen, Rajat Ghosh, Vithursan Thangarasa, Pierre
Peign\'e, Abhinav Singh, Max Bartolo, Satyapriya Krishna, Mubashara Akhtar,
Rafael Gold, Cody Coleman, Luis Oala, Vassil Tashev, Joseph Marvin Imperial,
Amy Russ, Sasidhar Kunapuli, Nicolas Miailhe, Julien Delaunay, Bhaktipriya
Radharapu, Rajat Shinde, Tuesday, Debojyoti Dutta, Declan Grabb, Ananya
Gangavarapu, Saurav Sahay, Agasthya Gangavarapu, Patrick Schramowski, Stephen
Singam, Tom David, Xudong Han, Priyanka Mary Mammen, Tarunima Prabhakar,
Venelin Kovatchev, Ahmed Ahmed, Kelvin N. Manyeki, Sandeep Madireddy, Foutse
Khomh, Fedor Zhdanov, Joachim Baumann, Nina Vasan, Xianjun Yang, Carlos
Mougn, Jibin Rajan Varghese, Hussain Chinoy, Seshakrishna Jitendar, Manil
Maskey, Claire V. Hardgrove, Tianhao Li, Aakash Gupta, Emil Joswin, Yifan
Mai, Shachi H Kumar, Cigdem Patlak, Kevin Lu, Vincent Alessi, Sree Bhargavi
Balija, Chenhe Gu, Robert Sullivan, James Gealy, Matt Lavrisa, James Goel,
Peter Mattson, Percy Liang, Joaquin Vanschoren | AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark
from MLCommons | 51 pages, 8 figures and an appendix | null | null | null | cs.CY cs.AI | http://creativecommons.org/licenses/by/4.0/ | The rapid advancement and deployment of AI systems have created an urgent
need for standard safety-evaluation frameworks. This paper introduces
AILuminate v1.0, the first comprehensive industry-standard benchmark for
assessing AI-product risk and reliability. Its development employed an open
process that included participants from multiple fields. The benchmark
evaluates an AI system's resistance to prompts designed to elicit dangerous,
illegal, or undesirable behavior in 12 hazard categories, including violent
crimes, nonviolent crimes, sex-related crimes, child sexual exploitation,
indiscriminate weapons, suicide and self-harm, intellectual property, privacy,
defamation, hate, sexual content, and specialized advice (election, financial,
health, legal). Our method incorporates a complete assessment standard,
extensive prompt datasets, a novel evaluation framework, a grading and
reporting system, and the technical as well as organizational infrastructure
for long-term support and evolution. In particular, the benchmark employs an
understandable five-tier grading scale (Poor to Excellent) and incorporates an
innovative entropy-based system-response evaluation.
In addition to unveiling the benchmark, this report also identifies
limitations of our method and of building safety benchmarks generally,
including evaluator uncertainty and the constraints of single-turn
interactions. This work represents a crucial step toward establishing global
standards for AI risk and reliability evaluation while acknowledging the need
for continued development in areas such as multiturn interactions, multimodal
understanding, coverage of additional languages, and emerging hazard
categories. Our findings provide valuable insights for model developers, system
integrators, and policymakers working to promote safer AI deployment.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2025 05:58:52 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ghosh",
"Shaona",
""
],
[
"Frase",
"Heather",
""
],
[
"Williams",
"Adina",
""
],
[
"Luger",
"Sarah",
""
],
[
"Röttger",
"Paul",
""
],
[
"Barez",
"Fazl",
""
],
[
"McGregor",
"Sean",
""
],
[
"Fricklas",
"Kenneth",
""
],
[
"Kumar",
"Mala",
""
],
[
"Feuillade--Montixi",
"Quentin",
""
],
[
"Bollacker",
"Kurt",
""
],
[
"Friedrich",
"Felix",
""
],
[
"Tsang",
"Ryan",
""
],
[
"Vidgen",
"Bertie",
""
],
[
"Parrish",
"Alicia",
""
],
[
"Knotz",
"Chris",
""
],
[
"Presani",
"Eleonora",
""
],
[
"Bennion",
"Jonathan",
""
],
[
"Boston",
"Marisa Ferrara",
""
],
[
"Kuniavsky",
"Mike",
""
],
[
"Hutiri",
"Wiebke",
""
],
[
"Ezick",
"James",
""
],
[
"Salem",
"Malek Ben",
""
],
[
"Sahay",
"Rajat",
""
],
[
"Goswami",
"Sujata",
""
],
[
"Gohar",
"Usman",
""
],
[
"Huang",
"Ben",
""
],
[
"Sarin",
"Supheakmungkol",
""
],
[
"Alhajjar",
"Elie",
""
],
[
"Chen",
"Canyu",
""
],
[
"Eng",
"Roman",
""
],
[
"Manjusha",
"Kashyap Ramanandula",
""
],
[
"Mehta",
"Virendra",
""
],
[
"Long",
"Eileen",
""
],
[
"Emani",
"Murali",
""
],
[
"Vidra",
"Natan",
""
],
[
"Rukundo",
"Benjamin",
""
],
[
"Shahbazi",
"Abolfazl",
""
],
[
"Chen",
"Kongtao",
""
],
[
"Ghosh",
"Rajat",
""
],
[
"Thangarasa",
"Vithursan",
""
],
[
"Peigné",
"Pierre",
""
],
[
"Singh",
"Abhinav",
""
],
[
"Bartolo",
"Max",
""
],
[
"Krishna",
"Satyapriya",
""
],
[
"Akhtar",
"Mubashara",
""
],
[
"Gold",
"Rafael",
""
],
[
"Coleman",
"Cody",
""
],
[
"Oala",
"Luis",
""
],
[
"Tashev",
"Vassil",
""
],
[
"Imperial",
"Joseph Marvin",
""
],
[
"Russ",
"Amy",
""
],
[
"Kunapuli",
"Sasidhar",
""
],
[
"Miailhe",
"Nicolas",
""
],
[
"Delaunay",
"Julien",
""
],
[
"Radharapu",
"Bhaktipriya",
""
],
[
"Shinde",
"Rajat",
""
],
[
"Tuesday",
"",
""
],
[
"Dutta",
"Debojyoti",
""
],
[
"Grabb",
"Declan",
""
],
[
"Gangavarapu",
"Ananya",
""
],
[
"Sahay",
"Saurav",
""
],
[
"Gangavarapu",
"Agasthya",
""
],
[
"Schramowski",
"Patrick",
""
],
[
"Singam",
"Stephen",
""
],
[
"David",
"Tom",
""
],
[
"Han",
"Xudong",
""
],
[
"Mammen",
"Priyanka Mary",
""
],
[
"Prabhakar",
"Tarunima",
""
],
[
"Kovatchev",
"Venelin",
""
],
[
"Ahmed",
"Ahmed",
""
],
[
"Manyeki",
"Kelvin N.",
""
],
[
"Madireddy",
"Sandeep",
""
],
[
"Khomh",
"Foutse",
""
],
[
"Zhdanov",
"Fedor",
""
],
[
"Baumann",
"Joachim",
""
],
[
"Vasan",
"Nina",
""
],
[
"Yang",
"Xianjun",
""
],
[
"Mougn",
"Carlos",
""
],
[
"Varghese",
"Jibin Rajan",
""
],
[
"Chinoy",
"Hussain",
""
],
[
"Jitendar",
"Seshakrishna",
""
],
[
"Maskey",
"Manil",
""
],
[
"Hardgrove",
"Claire V.",
""
],
[
"Li",
"Tianhao",
""
],
[
"Gupta",
"Aakash",
""
],
[
"Joswin",
"Emil",
""
],
[
"Mai",
"Yifan",
""
],
[
"Kumar",
"Shachi H",
""
],
[
"Patlak",
"Cigdem",
""
],
[
"Lu",
"Kevin",
""
],
[
"Alessi",
"Vincent",
""
],
[
"Balija",
"Sree Bhargavi",
""
],
[
"Gu",
"Chenhe",
""
],
[
"Sullivan",
"Robert",
""
],
[
"Gealy",
"James",
""
],
[
"Lavrisa",
"Matt",
""
],
[
"Goel",
"James",
""
],
[
"Mattson",
"Peter",
""
],
[
"Liang",
"Percy",
""
],
[
"Vanschoren",
"Joaquin",
""
]
]
| TITLE: AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark
from MLCommons
ABSTRACT: The rapid advancement and deployment of AI systems have created an urgent
need for standard safety-evaluation frameworks. This paper introduces
AILuminate v1.0, the first comprehensive industry-standard benchmark for
assessing AI-product risk and reliability. Its development employed an open
process that included participants from multiple fields. The benchmark
evaluates an AI system's resistance to prompts designed to elicit dangerous,
illegal, or undesirable behavior in 12 hazard categories, including violent
crimes, nonviolent crimes, sex-related crimes, child sexual exploitation,
indiscriminate weapons, suicide and self-harm, intellectual property, privacy,
defamation, hate, sexual content, and specialized advice (election, financial,
health, legal). Our method incorporates a complete assessment standard,
extensive prompt datasets, a novel evaluation framework, a grading and
reporting system, and the technical as well as organizational infrastructure
for long-term support and evolution. In particular, the benchmark employs an
understandable five-tier grading scale (Poor to Excellent) and incorporates an
innovative entropy-based system-response evaluation.
In addition to unveiling the benchmark, this report also identifies
limitations of our method and of building safety benchmarks generally,
including evaluator uncertainty and the constraints of single-turn
interactions. This work represents a crucial step toward establishing global
standards for AI risk and reliability evaluation while acknowledging the need
for continued development in areas such as multiturn interactions, multimodal
understanding, coverage of additional languages, and emerging hazard
categories. Our findings provide valuable insights for model developers, system
integrators, and policymakers working to promote safer AI deployment.
| no_new_dataset | 0.949059 |
2503.05739 | Licia Amichi | Licia Amichi, Gautam Malviya Thakur, Carter Christopher | Understanding Individual-Space Relationships to Inform and Enhance
Location-Based Applications | null | null | 10.1145/3681773.3699694 | null | cs.CY physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | Understanding the complex dynamics of human navigation and spatial behavior
is essential for advancing location-based services, public health, and related
fields. This paper investigates the multifaceted relationship between
individuals and their environments (e.g. location and places they visit),
acknowledging the distinct influences of personal preferences, experiences, and
social connections. While certain locations hold sentimental value and are
frequently visited, others function as mere transitory points. To the best of
our knowledge, this paper is the first to exploit visitation patterns and dwell
times to characterize an individual's relationship with specific locations. We
identify seven key types of spatial relationships and analyze the discrepancies
among these visit types across semantic, spatial, and temporal dimensions. Our
analysis highlights key findings, such as the prevalence of anchored-like
visits (e.g. home, work) in both real-world Singapore and Beijing datasets,
with unique associations in each city: Singapore's anchored-like visits
include recreational spaces, while Beijing's are limited to residential,
business, and educational sites. These findings emphasize the importance of
geographic and cultural context in shaping mobility and their potential in
benefiting the precision and personalization of location-based services.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2025 20:36:06 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Amichi",
"Licia",
""
],
[
"Thakur",
"Gautam Malviya",
""
],
[
"Christopher",
"Carter",
""
]
]
| TITLE: Understanding Individual-Space Relationships to Inform and Enhance
Location-Based Applications
ABSTRACT: Understanding the complex dynamics of human navigation and spatial behavior
is essential for advancing location-based services, public health, and related
fields. This paper investigates the multifaceted relationship between
individuals and their environments (e.g. location and places they visit),
acknowledging the distinct influences of personal preferences, experiences, and
social connections. While certain locations hold sentimental value and are
frequently visited, others function as mere transitory points. To the best of
our knowledge, this paper is the first to exploit visitation patterns and dwell
times to characterize an individual's relationship with specific locations. We
identify seven key types of spatial relationships and analyze the discrepancies
among these visit types across semantic, spatial, and temporal dimensions. Our
analysis highlights key findings, such as the prevalence of anchored-like
visits (e.g. home, work) in both real-world Singapore and Beijing datasets,
with unique associations in each city: Singapore's anchored-like visits
include recreational spaces, while Beijing's are limited to residential,
business, and educational sites. These findings emphasize the importance of
geographic and cultural context in shaping mobility and their potential in
benefiting the precision and personalization of location-based services.
| no_new_dataset | 0.948822 |
2503.05745 | Maheshwari Neelam | Maheshwari Neelam, Kamaldeep Bhui, Trent Cowan, Brian Freitag | Diminishing Waters: The Great Salt Lake's Desiccation and Its Mental
Health Consequences | null | null | null | null | cs.CY physics.ao-ph physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study examines how the desiccation of Utah's Great Salt Lake (GSL),
exacerbated by anthropogenic changes, poses significant health risks,
particularly to community mental health. Reduced water inflow has exposed the
lakebed, increasing airborne particulate matter (PM2.5) and dust storms, which
impact air quality. By integrating diverse datasets spanning from 1980 to
present, including in situ measurements, satellite imagery, and reanalysis
products this study synthesizes hydrological, atmospheric, and epidemiological
variables to comprehensively track the extent of the GSL surface water, local
air quality fluctuations, and their effects on community mental health. The
findings indicate a clear relationship between higher pollution days and more
severe depressive symptoms. Specifically, individuals exposed to 22 days with
PM2.5 levels above the World Health Organization's 24-hour guideline of 15 ug
per m3 were more likely to experience severe depressive symptoms. Our results
also suggest that people experiencing more severe depression not only face a
higher number of high pollution days but also encounter such days more
frequently. The study highlights the interconnectedness of poor air quality,
environmental degradation and mental health emphasizing the need for more
sustainable economic growth in the region.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 16:49:49 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Neelam",
"Maheshwari",
""
],
[
"Bhui",
"Kamaldeep",
""
],
[
"Cowan",
"Trent",
""
],
[
"Freitag",
"Brian",
""
]
]
| TITLE: Diminishing Waters: The Great Salt Lake's Desiccation and Its Mental
Health Consequences
ABSTRACT: This study examines how the desiccation of Utah's Great Salt Lake (GSL),
exacerbated by anthropogenic changes, poses significant health risks,
particularly to community mental health. Reduced water inflow has exposed the
lakebed, increasing airborne particulate matter (PM2.5) and dust storms, which
impact air quality. By integrating diverse datasets spanning from 1980 to
present, including in situ measurements, satellite imagery, and reanalysis
products this study synthesizes hydrological, atmospheric, and epidemiological
variables to comprehensively track the extent of the GSL surface water, local
air quality fluctuations, and their effects on community mental health. The
findings indicate a clear relationship between higher pollution days and more
severe depressive symptoms. Specifically, individuals exposed to 22 days with
PM2.5 levels above the World Health Organization's 24-hour guideline of 15 ug
per m3 were more likely to experience severe depressive symptoms. Our results
also suggest that people experiencing more severe depression not only face a
higher number of high pollution days but also encounter such days more
frequently. The study highlights the interconnectedness of poor air quality,
environmental degradation and mental health emphasizing the need for more
sustainable economic growth in the region.
| no_new_dataset | 0.936576 |
2503.05746 | Nora Fink | Nora Fink | Unsupervised Clustering Approaches for Autism Screening: Achieving
95.31% Accuracy with a Gaussian Mixture Model | null | null | null | null | cs.CY cs.LG | http://creativecommons.org/licenses/by/4.0/ | Autism spectrum disorder (ASD) remains a challenging condition to diagnose
effectively and promptly, despite global efforts in public health, clinical
screening, and scientific research. Traditional diagnostic methods, primarily
reliant on supervised learning approaches, presuppose the availability of
labeled data, which can be both time-consuming and resource-intensive to
obtain. Unsupervised learning, in contrast, offers a means of gaining insights
from unlabeled datasets in a manner that can expedite or support the diagnostic
process. This paper explores the use of four distinct unsupervised clustering
algorithms K-Means, Gaussian Mixture Model (GMM), Agglomerative Clustering, and
DBSCAN to analyze a publicly available dataset of 704 adult individuals
screened for ASD. After extensive hyperparameter tuning via cross-validation,
the study documents how the Gaussian Mixture Model achieved the highest
clustering-to-label accuracy (95.31%) when mapped to the original ASD/NO
classification (4). Other key performance metrics included the Adjusted Rand
Index (ARI) and silhouette scores, which further illustrated the internal
coherence of each cluster. The dataset underwent preprocessing procedures
including data cleaning, label encoding of categorical features, and standard
scaling, followed by a thorough cross-validation approach to assess and compare
the four clustering methods (5). These results highlight the significant
potential of unsupervised methods in assisting ASD screening, especially in
contexts where labeled data may be sparse, uncertain, or prohibitively
expensive to obtain. With continued methodological refinements, unsupervised
approaches hold promise for augmenting early detection initiatives and guiding
resource allocation to individuals at high risk.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 18:12:59 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Fink",
"Nora",
""
]
]
| TITLE: Unsupervised Clustering Approaches for Autism Screening: Achieving
95.31% Accuracy with a Gaussian Mixture Model
ABSTRACT: Autism spectrum disorder (ASD) remains a challenging condition to diagnose
effectively and promptly, despite global efforts in public health, clinical
screening, and scientific research. Traditional diagnostic methods, primarily
reliant on supervised learning approaches, presuppose the availability of
labeled data, which can be both time-consuming and resource-intensive to
obtain. Unsupervised learning, in contrast, offers a means of gaining insights
from unlabeled datasets in a manner that can expedite or support the diagnostic
process. This paper explores the use of four distinct unsupervised clustering
algorithms K-Means, Gaussian Mixture Model (GMM), Agglomerative Clustering, and
DBSCAN to analyze a publicly available dataset of 704 adult individuals
screened for ASD. After extensive hyperparameter tuning via cross-validation,
the study documents how the Gaussian Mixture Model achieved the highest
clustering-to-label accuracy (95.31%) when mapped to the original ASD/NO
classification (4). Other key performance metrics included the Adjusted Rand
Index (ARI) and silhouette scores, which further illustrated the internal
coherence of each cluster. The dataset underwent preprocessing procedures
including data cleaning, label encoding of categorical features, and standard
scaling, followed by a thorough cross-validation approach to assess and compare
the four clustering methods (5). These results highlight the significant
potential of unsupervised methods in assisting ASD screening, especially in
contexts where labeled data may be sparse, uncertain, or prohibitively
expensive to obtain. With continued methodological refinements, unsupervised
approaches hold promise for augmenting early detection initiatives and guiding
resource allocation to individuals at high risk.
| no_new_dataset | 0.944638 |
2503.05750 | Md Jobayer | Mst. Fahmida Sultana Naznin, Adnan Ibney Faruq, Mostafa Rifat Tazwar,
Md Jobayer, Md. Mehedi Hasan Shawon, Md Rakibul Hasan | CSTRL: Context-Driven Sequential Transfer Learning for Abstractive
Radiology Report Summarization | 11-pages main paper with 2-pages appendices | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A radiology report comprises several sections, including the Findings and
Impression of the diagnosis. Automatically generating the Impression from the
Findings is crucial for reducing radiologists' workload and improving
diagnostic accuracy. Pretrained models that excel in common abstractive
summarization problems encounter challenges when applied to specialized medical
domains largely due to the complex terminology and the necessity for accurate
clinical context. Such tasks in medical domains demand extracting core
information, avoiding context shifts, and maintaining proper flow. Misuse of
medical terms can lead to drastic clinical errors. To address these issues, we
introduce a sequential transfer learning approach that ensures key content extraction
and coherent summarization. Sequential transfer learning often faces challenges
like initial parameter decay and knowledge loss, which we resolve with the
Fisher matrix regularization. Using MIMIC-CXR and Open-I datasets, our model,
CSTRL (Context-driven Sequential TRansfer Learning), achieved state-of-the-art
performance, showing 56.2% improvement in BLEU-1, 40.5% in BLEU-2, 84.3% in
BLEU-3, 28.9% in ROUGE-1, 41.0% in ROUGE-2, and 26.5% in ROUGE-3 score over
benchmark studies. We also analyze factual consistency scores while preserving
the medical context. Our code is publicly available at TBA.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 08:32:11 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Naznin",
"Mst. Fahmida Sultana",
""
],
[
"Faruq",
"Adnan Ibney",
""
],
[
"Tazwar",
"Mostafa Rifat",
""
],
[
"Jobayer",
"Md",
""
],
[
"Shawon",
"Md. Mehedi Hasan",
""
],
[
"Hasan",
"Md Rakibul",
""
]
]
| TITLE: CSTRL: Context-Driven Sequential Transfer Learning for Abstractive
Radiology Report Summarization
ABSTRACT: A radiology report comprises several sections, including the Findings and
Impression of the diagnosis. Automatically generating the Impression from the
Findings is crucial for reducing radiologists' workload and improving
diagnostic accuracy. Pretrained models that excel in common abstractive
summarization problems encounter challenges when applied to specialized medical
domains largely due to the complex terminology and the necessity for accurate
clinical context. Such tasks in medical domains demand extracting core
information, avoiding context shifts, and maintaining proper flow. Misuse of
medical terms can lead to drastic clinical errors. To address these issues, we
introduce a sequential transfer learning approach that ensures key content extraction
and coherent summarization. Sequential transfer learning often faces challenges
like initial parameter decay and knowledge loss, which we resolve with the
Fisher matrix regularization. Using MIMIC-CXR and Open-I datasets, our model,
CSTRL (Context-driven Sequential TRansfer Learning), achieved state-of-the-art
performance, showing 56.2% improvement in BLEU-1, 40.5% in BLEU-2, 84.3% in
BLEU-3, 28.9% in ROUGE-1, 41.0% in ROUGE-2, and 26.5% in ROUGE-3 score over
benchmark studies. We also analyze factual consistency scores while preserving
the medical context. Our code is publicly available at TBA.
| no_new_dataset | 0.948394 |
2503.05755 | Md Sirajul Islam | Md Sirajul Islam, Sanjeev Panta, Fei Xu, Xu Yuan, Li Chen, Nian-Feng
Tzeng | SEAFL: Enhancing Efficiency in Semi-Asynchronous Federated Learning
through Adaptive Aggregation and Selective Training | null | null | null | null | cs.DC cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning (FL) is a promising distributed machine learning framework
that allows collaborative learning of a global model across decentralized
devices without uploading their local data. However, in real-world FL
scenarios, the conventional synchronous FL mechanism suffers from inefficient
training caused by slow-speed devices, commonly known as stragglers, especially
in heterogeneous communication environments. Though asynchronous FL effectively
tackles the efficiency challenge, it induces substantial system overheads and
model degradation. Striking a balance, semi-asynchronous FL has gained
increasing attention, while still suffering from the open challenge of stale
models, where newly arrived updates are calculated based on outdated weights
that easily hurt the convergence of the global model. In this paper, we present
{\em SEAFL}, a novel FL framework designed to mitigate both the straggler and
the stale model challenges in semi-asynchronous FL. {\em SEAFL} dynamically
assigns weights to uploaded models during aggregation based on their staleness
and importance to the current global model. We theoretically analyze the
convergence rate of {\em SEAFL} and further enhance the training efficiency
with an extended variant that allows partial training on slower devices,
enabling them to contribute to global aggregation while reducing excessive
waiting times. We evaluate the effectiveness of {\em SEAFL} through extensive
experiments on three benchmark datasets. The experimental results demonstrate
that {\em SEAFL} outperforms its closest counterpart by up to $\sim$22\% in
terms of the wall-clock training time required to achieve target accuracy.
| [
{
"version": "v1",
"created": "Sat, 22 Feb 2025 05:13:53 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Islam",
"Md Sirajul",
""
],
[
"Panta",
"Sanjeev",
""
],
[
"Xu",
"Fei",
""
],
[
"Yuan",
"Xu",
""
],
[
"Chen",
"Li",
""
],
[
"Tzeng",
"Nian-Feng",
""
]
]
| TITLE: SEAFL: Enhancing Efficiency in Semi-Asynchronous Federated Learning
through Adaptive Aggregation and Selective Training
ABSTRACT: Federated Learning (FL) is a promising distributed machine learning framework
that allows collaborative learning of a global model across decentralized
devices without uploading their local data. However, in real-world FL
scenarios, the conventional synchronous FL mechanism suffers from inefficient
training caused by slow-speed devices, commonly known as stragglers, especially
in heterogeneous communication environments. Though asynchronous FL effectively
tackles the efficiency challenge, it induces substantial system overheads and
model degradation. Striking a balance, semi-asynchronous FL has gained
increasing attention, while still suffering from the open challenge of stale
models, where newly arrived updates are calculated based on outdated weights
that easily hurt the convergence of the global model. In this paper, we present
{\em SEAFL}, a novel FL framework designed to mitigate both the straggler and
the stale model challenges in semi-asynchronous FL. {\em SEAFL} dynamically
assigns weights to uploaded models during aggregation based on their staleness
and importance to the current global model. We theoretically analyze the
convergence rate of {\em SEAFL} and further enhance the training efficiency
with an extended variant that allows partial training on slower devices,
enabling them to contribute to global aggregation while reducing excessive
waiting times. We evaluate the effectiveness of {\em SEAFL} through extensive
experiments on three benchmark datasets. The experimental results demonstrate
that {\em SEAFL} outperforms its closest counterpart by up to $\sim$22\% in
terms of the wall-clock training time required to achieve target accuracy.
| no_new_dataset | 0.943971 |
2503.05757 | Prasenjit Dey | Prasenjit Dey, Srujana Merugu, Sivaramakrishnan Kaveri | Uncertainty-Aware Fusion: An Ensemble Framework for Mitigating
Hallucinations in Large Language Models | Proceedings of the ACM Web Conference 2025, WWW 25 | null | 10.1145/3701716.3715523 | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) are known to hallucinate and generate
non-factual outputs which can undermine user trust. Traditional methods to
directly mitigate hallucinations, such as representation editing and
contrastive decoding, often require additional training data and involve high
implementation complexity. While ensemble-based approaches harness multiple
LLMs to tap into the "wisdom of crowds", these methods overlook uncertainties
in individual model responses. Recent studies reveal that uncertainty
estimation can enable LLMs to self-assess the likelihood of generating
hallucinations. In this work, we focus on factoid question answering (QA) and
observe that LLMs' accuracy and self-assessment capabilities vary widely, with
different models excelling in different scenarios. Leveraging this insight, we
propose Uncertainty-Aware Fusion (UAF), an ensemble framework that reduces
hallucinations by strategically combining multiple LLMs based on their accuracy
and self-assessment abilities. Empirical results on several public benchmark
datasets show that UAF outperforms state-of-the-art hallucination mitigation
methods by $8\%$ in factual accuracy, while either narrowing or surpassing the
performance gap with GPT-4.
| [
{
"version": "v1",
"created": "Sat, 22 Feb 2025 10:48:18 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Dey",
"Prasenjit",
""
],
[
"Merugu",
"Srujana",
""
],
[
"Kaveri",
"Sivaramakrishnan",
""
]
]
| TITLE: Uncertainty-Aware Fusion: An Ensemble Framework for Mitigating
Hallucinations in Large Language Models
ABSTRACT: Large Language Models (LLMs) are known to hallucinate and generate
non-factual outputs which can undermine user trust. Traditional methods to
directly mitigate hallucinations, such as representation editing and
contrastive decoding, often require additional training data and involve high
implementation complexity. While ensemble-based approaches harness multiple
LLMs to tap into the "wisdom of crowds", these methods overlook uncertainties
in individual model responses. Recent studies reveal that uncertainty
estimation can enable LLMs to self-assess the likelihood of generating
hallucinations. In this work, we focus on factoid question answering (QA) and
observe that LLMs' accuracy and self-assessment capabilities vary widely, with
different models excelling in different scenarios. Leveraging this insight, we
propose Uncertainty-Aware Fusion (UAF), an ensemble framework that reduces
hallucinations by strategically combining multiple LLMs based on their accuracy
and self-assessment abilities. Empirical results on several public benchmark
datasets show that UAF outperforms state-of-the-art hallucination mitigation
methods by $8\%$ in factual accuracy, while either narrowing or surpassing the
performance gap with GPT-4.
| no_new_dataset | 0.945751 |
2503.05772 | Khalid Mahmood | Josimar Chire, Khalid Mahmood, Zhao Liang | Complex Networks for Pattern-Based Data Classification | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Data classification techniques partition the data or feature space into
smaller sub-spaces, each corresponding to a specific class. To classify into
subspaces, physical features e.g., distance and distributions are utilized.
This approach is challenging for the characterization of complex patterns that
are embedded in the dataset. However, complex networks remain a powerful
technique for capturing internal relationships and class structures, enabling
High-Level Classification. Although several complex network-based
classification techniques have been proposed, high-level classification by
leveraging pattern formation to classify data has not been utilized. In this
work, we present two network-based classification techniques utilizing unique
measures derived from the Minimum Spanning Tree and Single Source Shortest
Path. These network measures are evaluated from the data patterns represented
by the inherent network constructed from each class. We have applied our
proposed techniques to several data classification scenarios including
synthetic and real-world datasets. Compared to the existing classic high-level
and machine-learning classification techniques, we have observed promising
numerical results for our proposed approaches. Furthermore, the proposed models
demonstrate the following distinguished features in comparison to the previous
high-level classification techniques: (1) A single network measure is
introduced to characterize the data pattern, eliminating the need to determine
weight parameters among network measures. Therefore, the model is largely
simplified, while obtaining better classification results. (2) The metrics
proposed are sensitive and used for classification with competitive results.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 18:36:02 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Chire",
"Josimar",
""
],
[
"Mahmood",
"Khalid",
""
],
[
"Liang",
"Zhao",
""
]
]
| TITLE: Complex Networks for Pattern-Based Data Classification
ABSTRACT: Data classification techniques partition the data or feature space into
smaller sub-spaces, each corresponding to a specific class. To classify into
subspaces, physical features, e.g., distance and distributions, are utilized.
This approach is challenging for the characterization of complex patterns that
are embedded in the dataset. However, complex networks remain a powerful
technique for capturing internal relationships and class structures, enabling
High-Level Classification. Although several complex network-based
classification techniques have been proposed, high-level classification by
leveraging pattern formation to classify data has not been utilized. In this
work, we present two network-based classification techniques utilizing unique
measures derived from the Minimum Spanning Tree and Single Source Shortest
Path. These network measures are evaluated from the data patterns represented
by the inherent network constructed from each class. We have applied our
proposed techniques to several data classification scenarios including
synthetic and real-world datasets. Compared to the existing classic high-level
and machine-learning classification techniques, we have observed promising
numerical results for our proposed approaches. Furthermore, the proposed models
demonstrate the following distinguished features in comparison to the previous
high-level classification techniques: (1) A single network measure is
introduced to characterize the data pattern, eliminating the need to determine
weight parameters among network measures. Therefore, the model is largely
simplified, while obtaining better classification results. (2) The metrics
proposed are sensitive and used for classification with competitive results.
| no_new_dataset | 0.949106 |
2503.05774 | Theodor Lundqvist | Theodor Lundqvist and Ludvig Delvret | GeoJEPA: Towards Eliminating Augmentation- and Sampling Bias in
Multimodal Geospatial Learning | 131 pages, 49 figures, 48 tables | null | null | 1650-2884 2025-01 | cs.LG cs.DB | http://creativecommons.org/licenses/by/4.0/ | Existing methods for self-supervised representation learning of geospatial
regions and map entities rely extensively on the design of pretext tasks, often
involving augmentations or heuristic sampling of positive and negative pairs
based on spatial proximity. This reliance introduces biases and limits the
representations' expressiveness and generalisability. Consequently, the
literature has expressed a pressing need to explore different methods for
modelling geospatial data. To address the key difficulties of such methods,
namely multimodality, heterogeneity, and the choice of pretext tasks, we
present GeoJEPA, a versatile multimodal fusion model for geospatial data built
on the self-supervised Joint-Embedding Predictive Architecture. With GeoJEPA,
we aim to eliminate the widely accepted augmentation- and sampling biases found
in self-supervised geospatial representation learning. GeoJEPA uses
self-supervised pretraining on a large dataset of OpenStreetMap attributes,
geometries and aerial images. The results are multimodal semantic
representations of urban regions and map entities that we evaluate both
quantitatively and qualitatively. Through this work, we uncover several key
insights into JEPA's ability to handle multimodal data.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 22:03:28 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Lundqvist",
"Theodor",
""
],
[
"Delvret",
"Ludvig",
""
]
]
| TITLE: GeoJEPA: Towards Eliminating Augmentation- and Sampling Bias in
Multimodal Geospatial Learning
ABSTRACT: Existing methods for self-supervised representation learning of geospatial
regions and map entities rely extensively on the design of pretext tasks, often
involving augmentations or heuristic sampling of positive and negative pairs
based on spatial proximity. This reliance introduces biases and limits the
representations' expressiveness and generalisability. Consequently, the
literature has expressed a pressing need to explore different methods for
modelling geospatial data. To address the key difficulties of such methods,
namely multimodality, heterogeneity, and the choice of pretext tasks, we
present GeoJEPA, a versatile multimodal fusion model for geospatial data built
on the self-supervised Joint-Embedding Predictive Architecture. With GeoJEPA,
we aim to eliminate the widely accepted augmentation- and sampling biases found
in self-supervised geospatial representation learning. GeoJEPA uses
self-supervised pretraining on a large dataset of OpenStreetMap attributes,
geometries and aerial images. The results are multimodal semantic
representations of urban regions and map entities that we evaluate both
quantitatively and qualitatively. Through this work, we uncover several key
insights into JEPA's ability to handle multimodal data.
| no_new_dataset | 0.9434 |
2503.05776 | Yihang Wu | Yihang Wu, Ahmad Chaddad, Christian Desrosiers, Tareef Daqqaq, Reem
Kateb | FAA-CLIP: Federated Adversarial Adaptation of CLIP | Accepted in IEEE Internet of Things Journal | null | 10.1109/JIOT.2025.3545574 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the remarkable performance of vision language models (VLMs) such as
Contrastive Language Image Pre-training (CLIP), the large size of these models
is a considerable obstacle to their use in federated learning (FL) systems
where the parameters of local client models need to be transferred to a global
server for aggregation. Another challenge in FL is the heterogeneity of data
from different clients, which affects the generalization performance of the
solution. In addition, natural pre-trained VLMs exhibit poor generalization
ability on medical datasets, suggesting there exists a domain gap. To solve
these issues, we introduce a novel method for the Federated Adversarial
Adaptation (FAA) of CLIP. Our method, named FAA-CLIP, handles the large
communication costs of CLIP using a light-weight feature adaptation module
(FAM) for aggregation, effectively adapting this VLM to each client's data
while greatly reducing the number of parameters to transfer. By keeping CLIP
frozen and only updating the FAM parameters, our method is also computationally
efficient. Unlike existing approaches, our FAA-CLIP method directly addresses
the problem of domain shifts across clients via a domain adaptation (DA)
module. This module employs a domain classifier to predict if a given sample is
from the local client or the global server, allowing the model to learn
domain-invariant representations. Extensive experiments on six different
datasets containing both natural and medical images demonstrate that FAA-CLIP
can generalize well on both natural and medical datasets compared to recent FL
approaches. Our codes are available at https://github.com/AIPMLab/FAA-CLIP.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 01:51:11 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wu",
"Yihang",
""
],
[
"Chaddad",
"Ahmad",
""
],
[
"Desrosiers",
"Christian",
""
],
[
"Daqqaq",
"Tareef",
""
],
[
"Kateb",
"Reem",
""
]
]
| TITLE: FAA-CLIP: Federated Adversarial Adaptation of CLIP
ABSTRACT: Despite the remarkable performance of vision language models (VLMs) such as
Contrastive Language Image Pre-training (CLIP), the large size of these models
is a considerable obstacle to their use in federated learning (FL) systems
where the parameters of local client models need to be transferred to a global
server for aggregation. Another challenge in FL is the heterogeneity of data
from different clients, which affects the generalization performance of the
solution. In addition, natural pre-trained VLMs exhibit poor generalization
ability on medical datasets, suggesting there exists a domain gap. To solve
these issues, we introduce a novel method for the Federated Adversarial
Adaptation (FAA) of CLIP. Our method, named FAA-CLIP, handles the large
communication costs of CLIP using a light-weight feature adaptation module
(FAM) for aggregation, effectively adapting this VLM to each client's data
while greatly reducing the number of parameters to transfer. By keeping CLIP
frozen and only updating the FAM parameters, our method is also computationally
efficient. Unlike existing approaches, our FAA-CLIP method directly addresses
the problem of domain shifts across clients via a domain adaptation (DA)
module. This module employs a domain classifier to predict if a given sample is
from the local client or the global server, allowing the model to learn
domain-invariant representations. Extensive experiments on six different
datasets containing both natural and medical images demonstrate that FAA-CLIP
can generalize well on both natural and medical datasets compared to recent FL
approaches. Our codes are available at https://github.com/AIPMLab/FAA-CLIP.
| no_new_dataset | 0.949856 |
2503.05777 | Yubin Kim | Yubin Kim, Hyewon Jeong, Shan Chen, Shuyue Stella Li, Mingyu Lu,
Kumail Alhamoud, Jimin Mun, Cristina Grau, Minseok Jung, Rodrigo Gameiro,
Lizhou Fan, Eugene Park, Tristan Lin, Joonsik Yoon, Wonjin Yoon, Maarten Sap,
Yulia Tsvetkov, Paul Liang, Xuhai Xu, Xin Liu, Daniel McDuff, Hyeonhoon Lee,
Hae Won Park, Samir Tulebaev, Cynthia Breazeal | Medical Hallucinations in Foundation Models and Their Impact on
Healthcare | null | null | null | null | cs.CL cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Foundation Models that are capable of processing and generating multi-modal
data have transformed AI's role in medicine. However, a key limitation of their
reliability is hallucination, where inaccurate or fabricated information can
impact clinical decisions and patient safety. We define medical hallucination
as any instance in which a model generates misleading medical content. This
paper examines the unique characteristics, causes, and implications of medical
hallucinations, with a particular focus on how these errors manifest themselves
in real-world clinical scenarios. Our contributions include (1) a taxonomy for
understanding and addressing medical hallucinations, (2) benchmarking models
using a medical hallucination dataset and physician-annotated LLM responses to
real medical cases, providing direct insight into the clinical impact of
hallucinations, and (3) a multi-national clinician survey on their experiences
with medical hallucinations. Our results reveal that inference techniques such
as Chain-of-Thought (CoT) and Search Augmented Generation can effectively
reduce hallucination rates. However, despite these improvements, non-trivial
levels of hallucination persist. These findings underscore the ethical and
practical imperative for robust detection and mitigation strategies,
establishing a foundation for regulatory policies that prioritize patient
safety and maintain clinical integrity as AI becomes more integrated into
healthcare. The feedback from clinicians highlights the urgent need for not
only technical advances but also for clearer ethical and regulatory guidelines
to ensure patient safety. A repository organizing the paper resources,
summaries, and additional information is available at
https://github.com/mitmedialab/medical hallucination.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 02:30:44 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kim",
"Yubin",
""
],
[
"Jeong",
"Hyewon",
""
],
[
"Chen",
"Shan",
""
],
[
"Li",
"Shuyue Stella",
""
],
[
"Lu",
"Mingyu",
""
],
[
"Alhamoud",
"Kumail",
""
],
[
"Mun",
"Jimin",
""
],
[
"Grau",
"Cristina",
""
],
[
"Jung",
"Minseok",
""
],
[
"Gameiro",
"Rodrigo",
""
],
[
"Fan",
"Lizhou",
""
],
[
"Park",
"Eugene",
""
],
[
"Lin",
"Tristan",
""
],
[
"Yoon",
"Joonsik",
""
],
[
"Yoon",
"Wonjin",
""
],
[
"Sap",
"Maarten",
""
],
[
"Tsvetkov",
"Yulia",
""
],
[
"Liang",
"Paul",
""
],
[
"Xu",
"Xuhai",
""
],
[
"Liu",
"Xin",
""
],
[
"McDuff",
"Daniel",
""
],
[
"Lee",
"Hyeonhoon",
""
],
[
"Park",
"Hae Won",
""
],
[
"Tulebaev",
"Samir",
""
],
[
"Breazeal",
"Cynthia",
""
]
]
| TITLE: Medical Hallucinations in Foundation Models and Their Impact on
Healthcare
ABSTRACT: Foundation Models that are capable of processing and generating multi-modal
data have transformed AI's role in medicine. However, a key limitation of their
reliability is hallucination, where inaccurate or fabricated information can
impact clinical decisions and patient safety. We define medical hallucination
as any instance in which a model generates misleading medical content. This
paper examines the unique characteristics, causes, and implications of medical
hallucinations, with a particular focus on how these errors manifest themselves
in real-world clinical scenarios. Our contributions include (1) a taxonomy for
understanding and addressing medical hallucinations, (2) benchmarking models
using a medical hallucination dataset and physician-annotated LLM responses to
real medical cases, providing direct insight into the clinical impact of
hallucinations, and (3) a multi-national clinician survey on their experiences
with medical hallucinations. Our results reveal that inference techniques such
as Chain-of-Thought (CoT) and Search Augmented Generation can effectively
reduce hallucination rates. However, despite these improvements, non-trivial
levels of hallucination persist. These findings underscore the ethical and
practical imperative for robust detection and mitigation strategies,
establishing a foundation for regulatory policies that prioritize patient
safety and maintain clinical integrity as AI becomes more integrated into
healthcare. The feedback from clinicians highlights the urgent need for not
only technical advances but also for clearer ethical and regulatory guidelines
to ensure patient safety. A repository organizing the paper resources,
summaries, and additional information is available at
https://github.com/mitmedialab/medical hallucination.
| no_new_dataset | 0.923039 |
2503.05778 | Tapasvi Panchagnula | Tapasvi Panchagnula | DreamNet: A Multimodal Framework for Semantic and Emotional Analysis of
Sleep Narratives | 10 pages, 5 figures, new research contribution | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Dream narratives provide a unique window into human cognition and emotion,
yet their systematic analysis using artificial intelligence has been
underexplored. We introduce DreamNet, a novel deep learning framework that
decodes semantic themes and emotional states from textual dream reports,
optionally enhanced with REM-stage EEG data. Leveraging a transformer-based
architecture with multimodal attention, DreamNet achieves 92.1% accuracy and
88.4% F1-score in text-only mode (DNet-T) on a curated dataset of 1,500
anonymized dream narratives, improving to 99.0% accuracy and 95.2% F1-score
with EEG integration (DNet-M). Strong dream-emotion correlations (e.g.,
falling-anxiety, r = 0.91, p < 0.01) highlight its potential for mental health
diagnostics, cognitive science, and personalized therapy. This work provides a
scalable tool, a publicly available enriched dataset, and a rigorous
methodology, bridging AI and psychological research.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 09:10:07 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Panchagnula",
"Tapasvi",
""
]
]
| TITLE: DreamNet: A Multimodal Framework for Semantic and Emotional Analysis of
Sleep Narratives
ABSTRACT: Dream narratives provide a unique window into human cognition and emotion,
yet their systematic analysis using artificial intelligence has been
underexplored. We introduce DreamNet, a novel deep learning framework that
decodes semantic themes and emotional states from textual dream reports,
optionally enhanced with REM-stage EEG data. Leveraging a transformer-based
architecture with multimodal attention, DreamNet achieves 92.1% accuracy and
88.4% F1-score in text-only mode (DNet-T) on a curated dataset of 1,500
anonymized dream narratives, improving to 99.0% accuracy and 95.2% F1-score
with EEG integration (DNet-M). Strong dream-emotion correlations (e.g.,
falling-anxiety, r = 0.91, p < 0.01) highlight its potential for mental health
diagnostics, cognitive science, and personalized therapy. This work provides a
scalable tool, a publicly available enriched dataset, and a rigorous
methodology, bridging AI and psychological research.
| new_dataset | 0.942929 |
2503.05780 | Elizabeth Daly | Frank Bagehorn, Kristina Brimijoin, Elizabeth M. Daly, Jessica He,
Michael Hind, Luis Garces-Erice, Christopher Giblin, Ioana Giurgiu, Jacquelyn
Martino, Rahul Nair, David Piorkowski, Ambrish Rawat, John Richards, Sean
Rooney, Dhaval Salwala, Seshu Tirupathi, Peter Urbanetz, Kush R. Varshney,
Inge Vejsbjerg, Mira L. Wolf-Bauwens | AI Risk Atlas: Taxonomy and Tooling for Navigating AI Risks and
Resources | 4.5 page main text, 22 page supporting material, 2 figures | null | null | null | cs.CY cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid evolution of generative AI has expanded the breadth of risks
associated with AI systems. While various taxonomies and frameworks exist to
classify these risks, the lack of interoperability between them creates
challenges for researchers, practitioners, and policymakers seeking to
operationalise AI governance. To address this gap, we introduce the AI Risk
Atlas, a structured taxonomy that consolidates AI risks from diverse sources
and aligns them with governance frameworks. Additionally, we present the Risk
Atlas Nexus, a collection of open-source tools designed to bridge the divide
between risk definitions, benchmarks, datasets, and mitigation strategies. This
knowledge-driven approach leverages ontologies and knowledge graphs to
facilitate risk identification, prioritization, and mitigation. By integrating
AI-assisted compliance workflows and automation strategies, our framework
lowers the barrier to responsible AI adoption. We invite the broader research
and open-source community to contribute to this evolving initiative, fostering
cross-domain collaboration and ensuring AI governance keeps pace with
technological advancements.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 12:23:14 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Bagehorn",
"Frank",
""
],
[
"Brimijoin",
"Kristina",
""
],
[
"Daly",
"Elizabeth M.",
""
],
[
"He",
"Jessica",
""
],
[
"Hind",
"Michael",
""
],
[
"Garces-Erice",
"Luis",
""
],
[
"Giblin",
"Christopher",
""
],
[
"Giurgiu",
"Ioana",
""
],
[
"Martino",
"Jacquelyn",
""
],
[
"Nair",
"Rahul",
""
],
[
"Piorkowski",
"David",
""
],
[
"Rawat",
"Ambrish",
""
],
[
"Richards",
"John",
""
],
[
"Rooney",
"Sean",
""
],
[
"Salwala",
"Dhaval",
""
],
[
"Tirupathi",
"Seshu",
""
],
[
"Urbanetz",
"Peter",
""
],
[
"Varshney",
"Kush R.",
""
],
[
"Vejsbjerg",
"Inge",
""
],
[
"Wolf-Bauwens",
"Mira L.",
""
]
]
| TITLE: AI Risk Atlas: Taxonomy and Tooling for Navigating AI Risks and
Resources
ABSTRACT: The rapid evolution of generative AI has expanded the breadth of risks
associated with AI systems. While various taxonomies and frameworks exist to
classify these risks, the lack of interoperability between them creates
challenges for researchers, practitioners, and policymakers seeking to
operationalise AI governance. To address this gap, we introduce the AI Risk
Atlas, a structured taxonomy that consolidates AI risks from diverse sources
and aligns them with governance frameworks. Additionally, we present the Risk
Atlas Nexus, a collection of open-source tools designed to bridge the divide
between risk definitions, benchmarks, datasets, and mitigation strategies. This
knowledge-driven approach leverages ontologies and knowledge graphs to
facilitate risk identification, prioritization, and mitigation. By integrating
AI-assisted compliance workflows and automation strategies, our framework
lowers the barrier to responsible AI adoption. We invite the broader research
and open-source community to contribute to this evolving initiative, fostering
cross-domain collaboration and ensuring AI governance keeps pace with
technological advancements.
| no_new_dataset | 0.946498 |
2503.05808 | Shenyu Zhang | Shenyu Zhang, Jiaguo Tian, Zhengbang Zhu, Shan Huang, Jucheng Yang,
Weinan Zhang | DriveGen: Towards Infinite Diverse Traffic Scenarios with Large Models | 8 pages, 3 figures | null | null | null | cs.AI cs.LG cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Microscopic traffic simulation has become an important tool for autonomous
driving training and testing. Although recent data-driven approaches advance
realistic behavior generation, their learning still relies primarily on a
single real-world dataset, which limits their diversity and thereby hinders
downstream algorithm optimization. In this paper, we propose DriveGen, a novel
traffic simulation framework with large models for more diverse traffic
generation that supports further customized designs. DriveGen consists of two
internal stages: the initialization stage uses a large language model and
retrieval techniques to generate map and vehicle assets; the rollout stage
outputs trajectories with selected waypoint goals from a visual language model
and a specifically designed diffusion planner. Through this two-stage process,
DriveGen fully utilizes large models' high-level cognition and reasoning of
driving behavior, obtaining greater diversity beyond datasets while maintaining
high realism. To support effective downstream optimization, we additionally
develop DriveGen-CS, an automatic corner case generation pipeline that uses
failures of the driving algorithm as additional prompt knowledge for large
models without the need for retraining or fine-tuning. Experiments show that
our generated scenarios and corner cases have a superior performance compared
to state-of-the-art baselines. Downstream experiments further verify that the
synthesized traffic of DriveGen provides better optimization of the performance
of typical driving algorithms, demonstrating the effectiveness of our
framework.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 06:14:21 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Shenyu",
""
],
[
"Tian",
"Jiaguo",
""
],
[
"Zhu",
"Zhengbang",
""
],
[
"Huang",
"Shan",
""
],
[
"Yang",
"Jucheng",
""
],
[
"Zhang",
"Weinan",
""
]
]
| TITLE: DriveGen: Towards Infinite Diverse Traffic Scenarios with Large Models
ABSTRACT: Microscopic traffic simulation has become an important tool for autonomous
driving training and testing. Although recent data-driven approaches advance
realistic behavior generation, their learning still relies primarily on a
single real-world dataset, which limits their diversity and thereby hinders
downstream algorithm optimization. In this paper, we propose DriveGen, a novel
traffic simulation framework with large models for more diverse traffic
generation that supports further customized designs. DriveGen consists of two
internal stages: the initialization stage uses a large language model and
retrieval techniques to generate map and vehicle assets; the rollout stage
outputs trajectories with selected waypoint goals from a visual language model
and a specifically designed diffusion planner. Through this two-stage process,
DriveGen fully utilizes large models' high-level cognition and reasoning of
driving behavior, obtaining greater diversity beyond datasets while maintaining
high realism. To support effective downstream optimization, we additionally
develop DriveGen-CS, an automatic corner case generation pipeline that uses
failures of the driving algorithm as additional prompt knowledge for large
models without the need for retraining or fine-tuning. Experiments show that
our generated scenarios and corner cases have a superior performance compared
to state-of-the-art baselines. Downstream experiments further verify that the
synthesized traffic of DriveGen provides better optimization of the performance
of typical driving algorithms, demonstrating the effectiveness of our
framework.
| no_new_dataset | 0.943815 |
2503.05837 | M Tanveer PhD | A. Quadir and M. Tanveer | Randomized based restricted kernel machine for hyperspectral image
classification | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the random vector functional link (RVFL) network has gained
significant popularity in hyperspectral image (HSI) classification due to its
simplicity, speed, and strong generalization performance. However, despite
these advantages, RVFL models face several limitations, particularly in
handling non-linear relationships and complex data structures. The random
initialization of input-to-hidden weights can lead to instability, and the
model struggles with determining the optimal number of hidden nodes, affecting
its performance on more challenging datasets. To address these issues, we
propose a novel randomized based restricted kernel machine ($R^2KM$) model that
combines the strengths of RVFL and restricted kernel machines (RKM).
$R^2KM$ introduces a layered structure that represents kernel methods using
both visible and hidden variables, analogous to the energy function in
restricted Boltzmann machines (RBM). This structure enables $R^2KM$ to capture
complex data interactions and non-linear relationships more effectively,
improving both interpretability and model robustness. A key contribution of
$R^2KM$ is the introduction of a novel conjugate feature duality based on the
Fenchel-Young inequality, which expresses the problem in terms of conjugate
dual variables and provides an upper bound on the objective function. This
duality enhances the model's flexibility and scalability, offering a more
efficient and flexible solution for complex data analysis tasks. Extensive
experiments on hyperspectral image datasets and real-world data from the UCI
and KEEL repositories show that $R^2KM$ outperforms baseline models,
demonstrating its effectiveness in classification and regression tasks.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 17:18:39 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Quadir",
"A.",
""
],
[
"Tanveer",
"M.",
""
]
]
| TITLE: Randomized based restricted kernel machine for hyperspectral image
classification
ABSTRACT: In recent years, the random vector functional link (RVFL) network has gained
significant popularity in hyperspectral image (HSI) classification due to its
simplicity, speed, and strong generalization performance. However, despite
these advantages, RVFL models face several limitations, particularly in
handling non-linear relationships and complex data structures. The random
initialization of input-to-hidden weights can lead to instability, and the
model struggles with determining the optimal number of hidden nodes, affecting
its performance on more challenging datasets. To address these issues, we
propose a novel randomized based restricted kernel machine ($R^2KM$) model that
combines the strengths of RVFL and restricted kernel machines (RKM).
$R^2KM$ introduces a layered structure that represents kernel methods using
both visible and hidden variables, analogous to the energy function in
restricted Boltzmann machines (RBM). This structure enables $R^2KM$ to capture
complex data interactions and non-linear relationships more effectively,
improving both interpretability and model robustness. A key contribution of
$R^2KM$ is the introduction of a novel conjugate feature duality based on the
Fenchel-Young inequality, which expresses the problem in terms of conjugate
dual variables and provides an upper bound on the objective function. This
duality enhances the model's flexibility and scalability, offering a more
efficient and flexible solution for complex data analysis tasks. Extensive
experiments on hyperspectral image datasets and real-world data from the UCI
and KEEL repositories show that $R^2KM$ outperforms baseline models,
demonstrating its effectiveness in classification and regression tasks.
| no_new_dataset | 0.948585 |
2503.05850 | Sefik Ilkin Serengil | Sefik Serengil, Alper Ozpinar | Encrypted Vector Similarity Computations Using Partially Homomorphic
Encryption: Applications and Performance Analysis | null | null | null | null | cs.CR cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper explores the use of partially homomorphic encryption (PHE) for
encrypted vector similarity search, with a focus on facial recognition and
broader applications like reverse image search, recommendation engines, and
large language models (LLMs). While fully homomorphic encryption (FHE) exists,
we demonstrate that encrypted cosine similarity can be computed using PHE,
offering a more practical alternative. Since PHE does not directly support
cosine similarity, we propose a method that normalizes vectors in advance,
enabling dot product calculations as a proxy. We also apply min-max
normalization to handle negative dimension values.
Experiments on the Labeled Faces in the Wild (LFW) dataset use DeepFace's
FaceNet128d, FaceNet512d, and VGG-Face (4096d) models in a two-tower setup.
Pre-encrypted embeddings are stored in one tower, while an edge device captures
images, computes embeddings, and performs encrypted-plaintext dot products via
additively homomorphic encryption. We implement this with LightPHE, evaluating
Paillier, Damgard-Jurik, and Okamoto-Uchiyama schemes, excluding others due to
performance or decryption complexity. Tests at 80-bit and 112-bit security
(NIST-secure until 2030) compare PHE against FHE (via TenSEAL), analyzing
encryption, decryption, operation time, cosine similarity loss, key/ciphertext
sizes.
Results show PHE is less computationally intensive, faster, and produces
smaller ciphertexts/keys, making it well-suited for memory-constrained
environments and real-world privacy-preserving encrypted similarity search.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 09:52:16 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Serengil",
"Sefik",
""
],
[
"Ozpinar",
"Alper",
""
]
]
| TITLE: Encrypted Vector Similarity Computations Using Partially Homomorphic
Encryption: Applications and Performance Analysis
ABSTRACT: This paper explores the use of partially homomorphic encryption (PHE) for
encrypted vector similarity search, with a focus on facial recognition and
broader applications like reverse image search, recommendation engines, and
large language models (LLMs). While fully homomorphic encryption (FHE) exists,
we demonstrate that encrypted cosine similarity can be computed using PHE,
offering a more practical alternative. Since PHE does not directly support
cosine similarity, we propose a method that normalizes vectors in advance,
enabling dot product calculations as a proxy. We also apply min-max
normalization to handle negative dimension values.
Experiments on the Labeled Faces in the Wild (LFW) dataset use DeepFace's
FaceNet128d, FaceNet512d, and VGG-Face (4096d) models in a two-tower setup.
Pre-encrypted embeddings are stored in one tower, while an edge device captures
images, computes embeddings, and performs encrypted-plaintext dot products via
additively homomorphic encryption. We implement this with LightPHE, evaluating
Paillier, Damgard-Jurik, and Okamoto-Uchiyama schemes, excluding others due to
performance or decryption complexity. Tests at 80-bit and 112-bit security
(NIST-secure until 2030) compare PHE against FHE (via TenSEAL), analyzing
encryption, decryption, operation time, cosine similarity loss, key/ciphertext
sizes.
Results show PHE is less computationally intensive, faster, and produces
smaller ciphertexts/keys, making it well-suited for memory-constrained
environments and real-world privacy-preserving encrypted similarity search.
| no_new_dataset | 0.94625 |
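The abstract above states the core computation concretely: L2-normalize embeddings in advance so that an encrypted-plaintext dot product stands in for cosine similarity under an additively homomorphic scheme. The sketch below illustrates that step with the open-source python-paillier (`phe`) package; the toy vector values, the 2048-bit key size, and the omission of the paper's min-max normalization step are assumptions made only for illustration.

```python
# Hedged illustration of PHE-based cosine similarity: because both vectors are
# L2-normalized beforehand, their dot product equals their cosine similarity,
# and Paillier supports ciphertext addition plus ciphertext-by-plaintext scaling.
import numpy as np
from phe import paillier


def l2_normalize(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)


# Keypair held by the party that owns the pre-encrypted embedding store.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

stored = l2_normalize([0.3, -1.2, 0.8])    # database-side embedding (toy values)
query = l2_normalize([0.25, -1.0, 0.9])    # plaintext embedding from the edge device

# Encrypt the stored embedding component-wise (additively homomorphic).
enc_stored = [public_key.encrypt(x) for x in stored]

# Encrypted-plaintext dot product: scale each ciphertext, then accumulate.
enc_dot = enc_stored[0] * query[0]
for c, q in zip(enc_stored[1:], query[1:]):
    enc_dot = enc_dot + c * q

cos_sim = private_key.decrypt(enc_dot)
print(round(cos_sim, 6), round(float(np.dot(stored, query)), 6))  # values should match
```

Decryption here is only a correctness check; in the two-tower setting described above, only the key holder would ever see the plaintext similarity.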
2503.05854 | Dmitrii Pantiukhin | Dmitrii Pantiukhin, Boris Shapkin, Ivan Kuznetsov, Antonia Anna Jost,
Nikolay Koldunov | Accelerating Earth Science Discovery via Multi-Agent LLM Systems | 10 pages, 1 figure. Perspective article | null | null | null | cs.MA cs.AI | http://creativecommons.org/licenses/by/4.0/ | This Perspective explores the transformative potential of Multi-Agent Systems
(MAS) powered by Large Language Models (LLMs) in the geosciences. Users of
geoscientific data repositories face challenges due to the complexity and
diversity of data formats, inconsistent metadata practices, and a considerable
number of unprocessed datasets. MAS possesses transformative potential for
improving scientists' interaction with geoscientific data by enabling
intelligent data processing, natural language interfaces, and collaborative
problem-solving capabilities. We illustrate this approach with "PANGAEA GPT", a
specialized MAS pipeline integrated with the diverse PANGAEA database for Earth
and Environmental Science, demonstrating how MAS-driven workflows can
effectively manage complex datasets and accelerate scientific discovery. We
discuss how MAS can address current data challenges in geosciences, highlight
advancements in other scientific fields, and propose future directions for
integrating MAS into geoscientific data processing pipelines. In this
Perspective, we show how MAS can fundamentally improve data accessibility,
promote cross-disciplinary collaboration, and accelerate geoscientific
discoveries.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 13:25:56 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Pantiukhin",
"Dmitrii",
""
],
[
"Shapkin",
"Boris",
""
],
[
"Kuznetsov",
"Ivan",
""
],
[
"Jost",
"Antonia Anna",
""
],
[
"Koldunov",
"Nikolay",
""
]
]
| TITLE: Accelerating Earth Science Discovery via Multi-Agent LLM Systems
ABSTRACT: This Perspective explores the transformative potential of Multi-Agent Systems
(MAS) powered by Large Language Models (LLMs) in the geosciences. Users of
geoscientific data repositories face challenges due to the complexity and
diversity of data formats, inconsistent metadata practices, and a considerable
number of unprocessed datasets. MAS possesses transformative potential for
improving scientists' interaction with geoscientific data by enabling
intelligent data processing, natural language interfaces, and collaborative
problem-solving capabilities. We illustrate this approach with "PANGAEA GPT", a
specialized MAS pipeline integrated with the diverse PANGAEA database for Earth
and Environmental Science, demonstrating how MAS-driven workflows can
effectively manage complex datasets and accelerate scientific discovery. We
discuss how MAS can address current data challenges in geosciences, highlight
advancements in other scientific fields, and propose future directions for
integrating MAS into geoscientific data processing pipelines. In this
Perspective, we show how MAS can fundamentally improve data accessibility,
promote cross-disciplinary collaboration, and accelerate geoscientific
discoveries.
| no_new_dataset | 0.950365 |
2503.05898 | Karan Vombatkere | Karan Vombatkere, Evimaria Terzi, Aristides Gionis | Forming Coordinated Teams that Balance Task Coverage and Expert Workload | null | Data Mining and Knowledge Discovery (2025) | 10.1007/s10618-025-01090-x | null | cs.SI cs.DM | http://creativecommons.org/licenses/by/4.0/ | We study a new formulation of the team-formation problem, where the goal is
to form teams to work on a given set of tasks requiring different skills.
Deviating from the classic problem setting where one is asking to cover all
skills of each given task, we aim to cover as many skills as possible while
also trying to minimize the maximum workload among the experts. We do this by
combining penalization terms for the coverage and load constraints into one
objective. We call the corresponding assignment problem
$\texttt{Balanced-Coverage}$, and show that it is NP-hard. We also consider a
variant of this problem, where the experts are organized into a graph, which
encodes how well they work together. Utilizing such a coordination graph, we
aim to find teams to assign to tasks such that each team's radius does not
exceed a given threshold. We refer to this problem as
$\texttt{Network-Balanced-Coverage}$. We develop a generic template algorithm
for approximating both problems in polynomial time, and we show that our
template algorithm for $\texttt{Balanced-Coverage}$ has provable guarantees. We
describe a set of computational speedups that we can apply to our algorithms
and make them scale for reasonably large datasets. From the practical point of
view, we demonstrate how to efficiently tune the two parts of the objective and
tailor their importance to a particular application. Our experiments with a
variety of real-world datasets demonstrate the utility of our problem
formulation as well as the efficiency of our algorithms in practice.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 19:34:25 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Vombatkere",
"Karan",
""
],
[
"Terzi",
"Evimaria",
""
],
[
"Gionis",
"Aristides",
""
]
]
| TITLE: Forming Coordinated Teams that Balance Task Coverage and Expert Workload
ABSTRACT: We study a new formulation of the team-formation problem, where the goal is
to form teams to work on a given set of tasks requiring different skills.
Deviating from the classic problem setting where one is asking to cover all
skills of each given task, we aim to cover as many skills as possible while
also trying to minimize the maximum workload among the experts. We do this by
combining penalization terms for the coverage and load constraints into one
objective. We call the corresponding assignment problem
$\texttt{Balanced-Coverage}$, and show that it is NP-hard. We also consider a
variant of this problem, where the experts are organized into a graph, which
encodes how well they work together. Utilizing such a coordination graph, we
aim to find teams to assign to tasks such that each team's radius does not
exceed a given threshold. We refer to this problem as
$\texttt{Network-Balanced-Coverage}$. We develop a generic template algorithm
for approximating both problems in polynomial time, and we show that our
template algorithm for $\texttt{Balanced-Coverage}$ has provable guarantees. We
describe a set of computational speedups that we can apply to our algorithms
and make them scale for reasonably large datasets. From the practical point of
view, we demonstrate how to efficiently tune the two parts of the objective and
tailor their importance to a particular application. Our experiments with a
variety of real-world datasets demonstrate the utility of our problem
formulation as well as the efficiency of our algorithms in practice.
| no_new_dataset | 0.934634 |
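The abstract above combines a skill-coverage reward and a maximum-workload penalty into one objective. The snippet below is only a naive greedy heuristic under that combined objective, written to make the trade-off concrete; it is not the paper's template algorithm, and the data structures and trade-off weight `lam` are assumptions.

```python
# Naive greedy heuristic (illustrative only): repeatedly add the expert-task
# assignment whose newly covered skills outweigh any increase in the maximum load.
def greedy_balanced_coverage(tasks, experts, lam=1.0):
    """tasks: {task_id: set of required skills}; experts: {expert_id: set of skills}."""
    load = {e: 0 for e in experts}
    covered = {t: set() for t in tasks}
    assignment = {t: [] for t in tasks}
    while True:
        best = None
        current_max = max(load.values())
        for t, need in tasks.items():
            for e, have in experts.items():
                if e in assignment[t]:
                    continue
                gain = len((need & have) - covered[t])              # newly covered skills
                penalty = max(load[e] + 1, current_max) - current_max
                score = gain - lam * penalty
                if gain > 0 and (best is None or score > best[0]):
                    best = (score, t, e)
        if best is None or best[0] <= 0:
            return assignment, load
        _, t, e = best
        assignment[t].append(e)
        covered[t] |= tasks[t] & experts[e]
        load[e] += 1


tasks = {"t1": {"python", "ml"}, "t2": {"viz", "stats"}}
experts = {"a": {"python", "stats"}, "b": {"ml", "viz"}}
print(greedy_balanced_coverage(tasks, experts, lam=0.5))
```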
2503.05916 | D L Ferreira PhD | Danielle L. Ferreira, Ahana Gangopadhyay, Hsi-Ming Chang, Ravi Soni,
Gopal Avinash | SAS: Segment Anything Small for Ultrasound -- A Non-Generative Data
Augmentation Technique for Robust Deep Learning in Ultrasound Imaging | 25 pages, 8 figures | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurate segmentation of anatomical structures in ultrasound (US) images,
particularly small ones, is challenging due to noise and variability in imaging
conditions (e.g., probe position, patient anatomy, tissue characteristics and
pathology). To address this, we introduce Segment Anything Small (SAS), a
simple yet effective scale- and texture-aware data augmentation technique
designed to enhance the performance of deep learning models for segmenting
small anatomical structures in ultrasound images. SAS employs a dual
transformation strategy: (1) simulating diverse organ scales by resizing and
embedding organ thumbnails into a black background, and (2) injecting noise
into regions of interest to simulate varying tissue textures. These
transformations generate realistic and diverse training data without
introducing hallucinations or artifacts, improving the model's robustness to
noise and variability. We fine-tuned a promptable foundation model on a
controlled organ-specific medical imaging dataset and evaluated its performance
on one internal and five external datasets. Experimental results demonstrate
significant improvements in segmentation performance, with Dice score gains of
up to 0.35 and an average improvement of 0.16 [95% CI 0.132,0.188].
Additionally, our iterative point prompts provide precise control and adaptive
refinement, achieving performance comparable to bounding box prompts with just
two points. SAS enhances model robustness and generalizability across diverse
anatomical structures and imaging conditions, particularly for small
structures, without compromising the accuracy of larger ones. By offering a
computationally efficient solution that eliminates the need for extensive human
labeling efforts, SAS emerges as a powerful tool for advancing medical image
analysis, particularly in resource-constrained settings.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 20:24:35 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ferreira",
"Danielle L.",
""
],
[
"Gangopadhyay",
"Ahana",
""
],
[
"Chang",
"Hsi-Ming",
""
],
[
"Soni",
"Ravi",
""
],
[
"Avinash",
"Gopal",
""
]
]
| TITLE: SAS: Segment Anything Small for Ultrasound -- A Non-Generative Data
Augmentation Technique for Robust Deep Learning in Ultrasound Imaging
ABSTRACT: Accurate segmentation of anatomical structures in ultrasound (US) images,
particularly small ones, is challenging due to noise and variability in imaging
conditions (e.g., probe position, patient anatomy, tissue characteristics and
pathology). To address this, we introduce Segment Anything Small (SAS), a
simple yet effective scale- and texture-aware data augmentation technique
designed to enhance the performance of deep learning models for segmenting
small anatomical structures in ultrasound images. SAS employs a dual
transformation strategy: (1) simulating diverse organ scales by resizing and
embedding organ thumbnails into a black background, and (2) injecting noise
into regions of interest to simulate varying tissue textures. These
transformations generate realistic and diverse training data without
introducing hallucinations or artifacts, improving the model's robustness to
noise and variability. We fine-tuned a promptable foundation model on a
controlled organ-specific medical imaging dataset and evaluated its performance
on one internal and five external datasets. Experimental results demonstrate
significant improvements in segmentation performance, with Dice score gains of
up to 0.35 and an average improvement of 0.16 [95% CI 0.132,0.188].
Additionally, our iterative point prompts provide precise control and adaptive
refinement, achieving performance comparable to bounding box prompts with just
two points. SAS enhances model robustness and generalizability across diverse
anatomical structures and imaging conditions, particularly for small
structures, without compromising the accuracy of larger ones. By offering a
computationally efficient solution that eliminates the need for extensive human
labeling efforts, SAS emerges as a powerful tool for advancing medical image
analysis, particularly in resource-constrained settings.
| no_new_dataset | 0.953492 |
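The two transformations described above (scale simulation on a black canvas and ROI-restricted noise injection) are simple enough to sketch directly. The NumPy version below is a hedged illustration rather than the released SAS code; the canvas size, scale, noise level, and nearest-neighbour resize are all assumptions.

```python
# Hedged sketch of SAS-style augmentation on single-channel ultrasound arrays in [0, 1].
import numpy as np


def embed_thumbnail(thumb, canvas_hw=(256, 256), scale=0.4, rng=None):
    """Shrink an organ thumbnail (nearest-neighbour) and paste it onto a black canvas."""
    rng = rng or np.random.default_rng()
    h, w = thumb.shape
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    ys = (np.arange(new_h) * h / new_h).astype(int)
    xs = (np.arange(new_w) * w / new_w).astype(int)
    small = thumb[np.ix_(ys, xs)]
    canvas = np.zeros(canvas_hw, dtype=thumb.dtype)
    top = rng.integers(0, canvas_hw[0] - new_h + 1)
    left = rng.integers(0, canvas_hw[1] - new_w + 1)
    canvas[top:top + new_h, left:left + new_w] = small
    return canvas


def inject_roi_noise(img, roi_mask, sigma=0.05, rng=None):
    """Add Gaussian noise only inside the region of interest to mimic texture variation."""
    rng = rng or np.random.default_rng()
    noisy = img.astype(float).copy()
    noisy[roi_mask] += rng.normal(0.0, sigma, size=int(roi_mask.sum()))
    return np.clip(noisy, 0.0, 1.0)


thumb = np.random.rand(64, 64)                    # stand-in organ crop
aug = embed_thumbnail(thumb, scale=0.3)
aug = inject_roi_noise(aug, roi_mask=aug > 0)     # noise only where tissue was pasted
print(aug.shape, float(aug.min()), float(aug.max()))
```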
2503.05919 | Eric Zhao | Eric Zhao, Pranjal Awasthi, and Nika Haghtalab | From Style to Facts: Mapping the Boundaries of Knowledge Injection with
Finetuning | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Finetuning provides a scalable and cost-effective means of customizing
language models for specific tasks or response styles, with greater reliability
than prompting or in-context learning. In contrast, the conventional wisdom is
that injecting knowledge via finetuning results in brittle performance and poor
generalization. We argue that the dichotomy of "task customization" (e.g.,
instruction tuning) and "knowledge injection" (e.g., teaching new facts) is a
distinction without a difference. We instead identify concrete factors that
explain the heterogeneous effectiveness observed with finetuning. To this end,
we conduct a large-scale experimental study of finetuning the frontier Gemini
v1.5 model family on a spectrum of datasets that are artificially engineered to
interpolate between the strengths and failure modes of finetuning. Our findings
indicate that question-answer training data formats provide much stronger
knowledge generalization than document/article-style training data, numerical
information can be harder for finetuning to retain than categorical
information, and models struggle to apply finetuned knowledge during multi-step
reasoning even when trained on similar examples -- all factors that render
"knowledge injection" to be especially difficult, even after controlling for
considerations like data augmentation and information volume. On the other
hand, our findings also indicate that it is not fundamentally more difficult to
finetune information about a real-world event than information about what a
model's writing style should be.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 20:35:31 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhao",
"Eric",
""
],
[
"Awasthi",
"Pranjal",
""
],
[
"Haghtalab",
"Nika",
""
]
]
| TITLE: From Style to Facts: Mapping the Boundaries of Knowledge Injection with
Finetuning
ABSTRACT: Finetuning provides a scalable and cost-effective means of customizing
language models for specific tasks or response styles, with greater reliability
than prompting or in-context learning. In contrast, the conventional wisdom is
that injecting knowledge via finetuning results in brittle performance and poor
generalization. We argue that the dichotomy of "task customization" (e.g.,
instruction tuning) and "knowledge injection" (e.g., teaching new facts) is a
distinction without a difference. We instead identify concrete factors that
explain the heterogeneous effectiveness observed with finetuning. To this end,
we conduct a large-scale experimental study of finetuning the frontier Gemini
v1.5 model family on a spectrum of datasets that are artificially engineered to
interpolate between the strengths and failure modes of finetuning. Our findings
indicate that question-answer training data formats provide much stronger
knowledge generalization than document/article-style training data, numerical
information can be harder for finetuning to retain than categorical
information, and models struggle to apply finetuned knowledge during multi-step
reasoning even when trained on similar examples -- all factors that render
"knowledge injection" to be especially difficult, even after controlling for
considerations like data augmentation and information volume. On the other
hand, our findings also indicate that it is not fundamentally more difficult to
finetune information about a real-world event than information about what a
model's writing style should be.
| no_new_dataset | 0.943504 |
2503.05925 | Greg d'Eon | Greg d'Eon, Hala Murad, Kevin Leyton-Brown, James R. Wright | ElementaryNet: A Non-Strategic Neural Network for Predicting Human
Behavior in Normal-Form Games | 14 pages. Submitted to EC 2025 | null | null | null | cs.LG cs.AI cs.GT | http://creativecommons.org/licenses/by/4.0/ | Models of human behavior in game-theoretic settings often distinguish between
strategic behavior, in which a player both reasons about how others will act
and best responds to these beliefs, and "level-0" non-strategic behavior, in
which they do not respond to explicit beliefs about others. The state of the
art for predicting human behavior on unrepeated simultaneous-move games is
GameNet, a neural network that learns extremely complex level-0 specifications
from data. The current paper makes three contributions. First, it shows that
GameNet's level-0 specifications are too powerful, because they are capable of
strategic reasoning. Second, it introduces a novel neural network architecture
(dubbed ElementaryNet) and proves that it is only capable of nonstrategic
behavior. Third, it describes an extensive experimental evaluation of
ElementaryNet. Our overall findings are that (1) ElementaryNet dramatically
underperforms GameNet when neither model is allowed to explicitly model higher
level agents who best-respond to the model's predictions, indicating that good
performance on our dataset requires a model capable of strategic reasoning; (2)
that the two models achieve statistically indistinguishable performance when
such higher-level agents are introduced, meaning that ElementaryNet's
restriction to a non-strategic level-0 specification does not degrade model
performance; and (3) that this continues to hold even when ElementaryNet is
restricted to a set of level-0 building blocks previously introduced in the
literature, with only the functional form being learned by the neural network.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 20:47:16 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"d'Eon",
"Greg",
""
],
[
"Murad",
"Hala",
""
],
[
"Leyton-Brown",
"Kevin",
""
],
[
"Wright",
"James R.",
""
]
]
| TITLE: ElementaryNet: A Non-Strategic Neural Network for Predicting Human
Behavior in Normal-Form Games
ABSTRACT: Models of human behavior in game-theoretic settings often distinguish between
strategic behavior, in which a player both reasons about how others will act
and best responds to these beliefs, and "level-0" non-strategic behavior, in
which they do not respond to explicit beliefs about others. The state of the
art for predicting human behavior on unrepeated simultaneous-move games is
GameNet, a neural network that learns extremely complex level-0 specifications
from data. The current paper makes three contributions. First, it shows that
GameNet's level-0 specifications are too powerful, because they are capable of
strategic reasoning. Second, it introduces a novel neural network architecture
(dubbed ElementaryNet) and proves that it is only capable of nonstrategic
behavior. Third, it describes an extensive experimental evaluation of
ElementaryNet. Our overall findings are that (1) ElementaryNet dramatically
underperforms GameNet when neither model is allowed to explicitly model higher
level agents who best-respond to the model's predictions, indicating that good
performance on our dataset requires a model capable of strategic reasoning; (2)
that the two models achieve statistically indistinguishable performance when
such higher-level agents are introduced, meaning that ElementaryNet's
restriction to a non-strategic level-0 specification does not degrade model
performance; and (3) that this continues to hold even when ElementaryNet is
restricted to a set of level-0 building blocks previously introduced in the
literature, with only the functional form being learned by the neural network.
| no_new_dataset | 0.948537 |
2503.05933 | Yao Du | Yao Du, Jiaxin Zhuang, Xiaoyu Zheng, Jing Cong, Limei Guo, Chao He,
Lin Luo, Xiaomeng Li | Beyond H&E: Unlocking Pathological Insights with Polarization via
Self-supervised Learning | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Histopathology image analysis is fundamental to digital pathology, with
hematoxylin and eosin (H&E) staining as the gold standard for diagnostic and
prognostic assessments. While H&E imaging effectively highlights cellular and
tissue structures, it lacks sensitivity to birefringence and tissue anisotropy,
which are crucial for assessing collagen organization, fiber alignment, and
microstructural alterations--key indicators of tumor progression, fibrosis, and
other pathological conditions. To bridge this gap, we propose PolarHE, a dual
modality fusion framework that integrates H&E with polarization imaging,
leveraging the polarization ability to enhance tissue characterization. Our
approach employs a feature decomposition strategy to disentangle common and
modality specific features, ensuring effective multimodal representation
learning. Through comprehensive validation, our approach significantly
outperforms previous methods, achieving an accuracy of 86.70% on the Chaoyang
dataset and 89.06% on the MHIST dataset. Moreover, polarization property
visualization reveals distinct optical signatures of pathological tissues,
highlighting its diagnostic potential. t-SNE visualizations further confirm our
model effectively captures both shared and unique modality features,
reinforcing the complementary nature of polarization imaging. These results
demonstrate that polarization imaging is a powerful and underutilized modality
in computational pathology, enriching feature representation and improving
diagnostic accuracy. PolarHE establishes a promising direction for multimodal
learning, paving the way for more interpretable and generalizable pathology
models. Our code will be released after paper acceptance.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 05:00:19 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Du",
"Yao",
""
],
[
"Zhuang",
"Jiaxin",
""
],
[
"Zheng",
"Xiaoyu",
""
],
[
"Cong",
"Jing",
""
],
[
"Guo",
"Limei",
""
],
[
"He",
"Chao",
""
],
[
"Luo",
"Lin",
""
],
[
"Li",
"Xiaomeng",
""
]
]
| TITLE: Beyond H&E: Unlocking Pathological Insights with Polarization via
Self-supervised Learning
ABSTRACT: Histopathology image analysis is fundamental to digital pathology, with
hematoxylin and eosin (H&E) staining as the gold standard for diagnostic and
prognostic assessments. While H&E imaging effectively highlights cellular and
tissue structures, it lacks sensitivity to birefringence and tissue anisotropy,
which are crucial for assessing collagen organization, fiber alignment, and
microstructural alterations--key indicators of tumor progression, fibrosis, and
other pathological conditions. To bridge this gap, we propose PolarHE, a dual
modality fusion framework that integrates H&E with polarization imaging,
leveraging the polarization ability to enhance tissue characterization. Our
approach employs a feature decomposition strategy to disentangle common and
modality specific features, ensuring effective multimodal representation
learning. Through comprehensive validation, our approach significantly
outperforms previous methods, achieving an accuracy of 86.70% on the Chaoyang
dataset and 89.06% on the MHIST dataset. Moreover, polarization property
visualization reveals distinct optical signatures of pathological tissues,
highlighting its diagnostic potential. t-SNE visualizations further confirm our
model effectively captures both shared and unique modality features,
reinforcing the complementary nature of polarization imaging. These results
demonstrate that polarization imaging is a powerful and underutilized modality
in computational pathology, enriching feature representation and improving
diagnostic accuracy. PolarHE establishes a promising direction for multimodal
learning, paving the way for more interpretable and generalizable pathology
models. Our code will be released after paper acceptance.
| no_new_dataset | 0.953449 |
2503.05950 | Jean Louis Kedieng Ebongue Fendji | Jean Louis Fendji Kedieng Ebongue | From Community Network to Community Data: Towards Combining Data Pool
and Data Cooperative for Data Justice in Rural Areas | 11 pages, 2 Figures | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | This study explores the shift from community networks (CNs) to community data
in rural areas, focusing on combining data pools and data cooperatives to
achieve data justice and foster a just AI ecosystem. With 2.7 billion
people still offline, especially in the Global South, addressing data justice
is critical. While discussions related to data justice have evolved to include
economic dimensions, rural areas still struggle with the challenge of being
adequately represented in the datasets. This study investigates a Community
Data Model (CDM) that integrates the simplicity of data pools with the
structured organization of data cooperatives to generate local data for AI for
good. CDM leverages CNs, which have proven effective in promoting digital
inclusion, to establish a centralized data repository, ensuring accessibility
through open data principles. The model emphasizes community needs,
prioritizing local knowledge, education, and traditional practices, with an
iterative approach starting from pilot projects. Capacity building is a core
component of digital literacy training and partnership with educational
institutions and NGOs. The legal and regulatory dimension ensures compliance
with data privacy laws. By empowering rural communities to control and manage
their data, the CDM fosters equitable access and participation and sustains
local identity and knowledge. This approach can mitigate the challenges of data
creation in rural areas and enhance data justice. CDM can contribute to AI by
improving data quality and relevance, enabling rural areas to benefit from AI
advancements.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 21:41:01 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ebongue",
"Jean Louis Fendji Kedieng",
""
]
]
| TITLE: From Community Network to Community Data: Towards Combining Data Pool
and Data Cooperative for Data Justice in Rural Areas
ABSTRACT: This study explores the shift from community networks (CNs) to community data
in rural areas, focusing on combining data pools and data cooperatives to
achieve data justice and foster a just AI ecosystem. With 2.7 billion
people still offline, especially in the Global South, addressing data justice
is critical. While discussions related to data justice have evolved to include
economic dimensions, rural areas still struggle with the challenge of being
adequately represented in the datasets. This study investigates a Community
Data Model (CDM) that integrates the simplicity of data pools with the
structured organization of data cooperatives to generate local data for AI for
good. CDM leverages CNs, which have proven effective in promoting digital
inclusion, to establish a centralized data repository, ensuring accessibility
through open data principles. The model emphasizes community needs,
prioritizing local knowledge, education, and traditional practices, with an
iterative approach starting from pilot projects. Capacity building is a core
component of digital literacy training and partnership with educational
institutions and NGOs. The legal and regulatory dimension ensures compliance
with data privacy laws. By empowering rural communities to control and manage
their data, the CDM fosters equitable access and participation and sustains
local identity and knowledge. This approach can mitigate the challenges of data
creation in rural areas and enhance data justice. CDM can contribute to AI by
improving data quality and relevance, enabling rural areas to benefit from AI
advancements.
| no_new_dataset | 0.945851 |
2503.05951 | Deepak Vungarala | Deepak Vungarala, Mohammed E. Elbtity, Sumiya Syed, Sakila Alam,
Kartik Pandit, Arnob Ghosh, Ramtin Zand, Shaahin Angizi | TPU-Gen: LLM-Driven Custom Tensor Processing Unit Generator | 8 Pages, 9 Figures, 5 Tables | null | null | null | cs.AR cs.AI | http://creativecommons.org/licenses/by/4.0/ | The increasing complexity and scale of Deep Neural Networks (DNNs)
necessitate specialized tensor accelerators, such as Tensor Processing Units
(TPUs), to meet various computational and energy efficiency requirements.
Nevertheless, designing an optimal TPU remains challenging due to the high domain
expertise level, considerable manual design time, and lack of high-quality,
domain-specific datasets. This paper introduces TPU-Gen, the first Large
Language Model (LLM) based framework designed to automate the exact and
approximate TPU generation process, focusing on systolic array architectures.
TPU-Gen is supported with a meticulously curated, comprehensive, and
open-source dataset that covers a wide range of spatial array designs and
approximate multiply-and-accumulate units, enabling design reuse, adaptation,
and customization for different DNN workloads. The proposed framework leverages
Retrieval-Augmented Generation (RAG) as an effective solution for a data-scarce
hardware domain in building LLMs, addressing the most intriguing issue,
hallucinations. TPU-Gen transforms high-level architectural specifications into
optimized low-level implementations through an effective hardware generation
pipeline. Our extensive experimental evaluations demonstrate superior
performance, power, and area efficiency, with an average reduction in area and
power of 92\% and 96\% from the manual optimization reference values. These
results set new standards for driving advancements in next-generation design
automation tools powered by LLMs.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 21:41:42 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Vungarala",
"Deepak",
""
],
[
"Elbtity",
"Mohammed E.",
""
],
[
"Syed",
"Sumiya",
""
],
[
"Alam",
"Sakila",
""
],
[
"Pandit",
"Kartik",
""
],
[
"Ghosh",
"Arnob",
""
],
[
"Zand",
"Ramtin",
""
],
[
"Angizi",
"Shaahin",
""
]
]
| TITLE: TPU-Gen: LLM-Driven Custom Tensor Processing Unit Generator
ABSTRACT: The increasing complexity and scale of Deep Neural Networks (DNNs)
necessitate specialized tensor accelerators, such as Tensor Processing Units
(TPUs), to meet various computational and energy efficiency requirements.
Nevertheless, designing an optimal TPU remains challenging due to the high domain
expertise level, considerable manual design time, and lack of high-quality,
domain-specific datasets. This paper introduces TPU-Gen, the first Large
Language Model (LLM) based framework designed to automate the exact and
approximate TPU generation process, focusing on systolic array architectures.
TPU-Gen is supported with a meticulously curated, comprehensive, and
open-source dataset that covers a wide range of spatial array designs and
approximate multiply-and-accumulate units, enabling design reuse, adaptation,
and customization for different DNN workloads. The proposed framework leverages
Retrieval-Augmented Generation (RAG) as an effective solution for a data-scarce
hardware domain in building LLMs, addressing the most intriguing issue,
hallucinations. TPU-Gen transforms high-level architectural specifications into
optimized low-level implementations through an effective hardware generation
pipeline. Our extensive experimental evaluations demonstrate superior
performance, power, and area efficiency, with an average reduction in area and
power of 92\% and 96\% from the manual optimization reference values. These
results set new standards for driving advancements in next-generation design
automation tools powered by LLMs.
| no_new_dataset | 0.836287 |
2503.05962 | Franklin Mingzhe Li | Franklin Mingzhe Li, Kaitlyn Ng, Bin Zhu, Patrick Carrington | OSCAR: Object Status and Contextual Awareness for Recipes to Support
Non-Visual Cooking | CHI 2025 Late Breaking Work | null | null | null | cs.HC cs.CV | http://creativecommons.org/licenses/by/4.0/ | Following recipes while cooking is an important but difficult task for
visually impaired individuals. We developed OSCAR (Object Status Context
Awareness for Recipes), a novel approach that provides recipe progress tracking
and context-aware feedback on the completion of cooking tasks through tracking
object statuses. OSCAR leverages both Large-Language Models (LLMs) and
Vision-Language Models (VLMs) to manipulate recipe steps, extract object status
information, align visual frames with object status, and provide a cooking
progress tracking log. We evaluated OSCAR's recipe-following functionality
using 173 YouTube cooking videos and 12 real-world non-visual cooking videos to
demonstrate OSCAR's capability to track cooking steps and provide contextual
guidance. Our results highlight the effectiveness of using object status to
improve performance compared to baseline by over 20% across different VLMs, and
we present factors that impact prediction performance. Furthermore, we
contribute a dataset of real-world non-visual cooking videos with step
annotations as an evaluation benchmark.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 22:03:21 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Franklin Mingzhe",
""
],
[
"Ng",
"Kaitlyn",
""
],
[
"Zhu",
"Bin",
""
],
[
"Carrington",
"Patrick",
""
]
]
| TITLE: OSCAR: Object Status and Contextual Awareness for Recipes to Support
Non-Visual Cooking
ABSTRACT: Following recipes while cooking is an important but difficult task for
visually impaired individuals. We developed OSCAR (Object Status Context
Awareness for Recipes), a novel approach that provides recipe progress tracking
and context-aware feedback on the completion of cooking tasks through tracking
object statuses. OSCAR leverages both Large-Language Models (LLMs) and
Vision-Language Models (VLMs) to manipulate recipe steps, extract object status
information, align visual frames with object status, and provide a cooking
progress tracking log. We evaluated OSCAR's recipe-following functionality
using 173 YouTube cooking videos and 12 real-world non-visual cooking videos to
demonstrate OSCAR's capability to track cooking steps and provide contextual
guidance. Our results highlight the effectiveness of using object status to
improve performance compared to baseline by over 20% across different VLMs, and
we present factors that impact prediction performance. Furthermore, we
contribute a dataset of real-world non-visual cooking videos with step
annotations as an evaluation benchmark.
| new_dataset | 0.94428 |
2503.05969 | Beyza Kalkanli | Beyza Kalkanli, Tales Imbiriba, Stratis Ioannidis, Deniz Erdogmus,
Jennifer Dy | Dependency-aware Maximum Likelihood Estimation for Active Learning | 26 pages, 8 figures | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Active learning aims to efficiently build a labeled training set by
strategically selecting samples to query labels from annotators. In this
sequential process, each sample acquisition influences subsequent selections,
causing dependencies among samples in the labeled set. However, these
dependencies are overlooked during the model parameter estimation stage when
updating the model using Maximum Likelihood Estimation (MLE), a conventional
method that assumes independent and identically distributed (i.i.d.) data. We
propose Dependency-aware MLE (DMLE), which corrects MLE within the active
learning framework by addressing sample dependencies typically neglected due to
the i.i.d. assumption, ensuring consistency with active learning principles in
the model parameter estimation process. This improved method achieves superior
performance across multiple benchmark datasets, reaching higher performance in
earlier cycles compared to conventional MLE. Specifically, we observe average
accuracy improvements of 6\%, 8.6\%, and 10.5\% for $k=1$, $k=5$, and $k=10$
respectively, after collecting the first 100 samples, where entropy is the
acquisition function and $k$ is the query batch size acquired at every active
learning cycle.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 22:48:33 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kalkanli",
"Beyza",
""
],
[
"Imbiriba",
"Tales",
""
],
[
"Ioannidis",
"Stratis",
""
],
[
"Erdogmus",
"Deniz",
""
],
[
"Dy",
"Jennifer",
""
]
]
| TITLE: Dependency-aware Maximum Likelihood Estimation for Active Learning
ABSTRACT: Active learning aims to efficiently build a labeled training set by
strategically selecting samples to query labels from annotators. In this
sequential process, each sample acquisition influences subsequent selections,
causing dependencies among samples in the labeled set. However, these
dependencies are overlooked during the model parameter estimation stage when
updating the model using Maximum Likelihood Estimation (MLE), a conventional
method that assumes independent and identically distributed (i.i.d.) data. We
propose Dependency-aware MLE (DMLE), which corrects MLE within the active
learning framework by addressing sample dependencies typically neglected due to
the i.i.d. assumption, ensuring consistency with active learning principles in
the model parameter estimation process. This improved method achieves superior
performance across multiple benchmark datasets, reaching higher performance in
earlier cycles compared to conventional MLE. Specifically, we observe average
accuracy improvements of 6\%, 8.6\%, and 10.5\% for $k=1$, $k=5$, and $k=10$
respectively, after collecting the first 100 samples, where entropy is the
acquisition function and $k$ is the query batch size acquired at every active
learning cycle.
| no_new_dataset | 0.948346 |
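For context on the acquisition step quoted above (entropy with query batch size k), a hedged sketch follows; it shows only the conventional entropy-based selection shared by DMLE and the MLE baseline, since the abstract does not spell out the dependency-aware correction itself.

```python
# Entropy acquisition for one active-learning cycle: pick the k pool samples
# whose predictive distributions are most uncertain under the current model.
import numpy as np


def entropy_acquisition(probs, k):
    """probs: (n_pool, n_classes) class probabilities; returns indices of the k most uncertain samples."""
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(-entropy)[:k]


probs = np.array([
    [0.90, 0.05, 0.05],   # confident prediction -> low entropy
    [0.34, 0.33, 0.33],   # near-uniform -> high entropy
    [0.70, 0.20, 0.10],
    [0.40, 0.35, 0.25],
])
print(entropy_acquisition(probs, k=2))   # prints [1 3]: the two most uncertain samples
```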
2503.05974 | Ishaan Gakhar | Krish Didwania, Ishaan Gakhar, Prakhar Arya, Sanskriti Labroo | LapLoss: Laplacian Pyramid-based Multiscale loss for Image Translation | Accepted at the DeLTa Workshop, ICLR 2025 | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Contrast enhancement, a key aspect of image-to-image translation (I2IT),
improves visual quality by adjusting intensity differences between pixels.
However, many existing methods struggle to preserve fine-grained details, often
leading to the loss of low-level features. This paper introduces LapLoss, a
novel approach designed for I2IT contrast enhancement, based on the Laplacian
pyramid-centric networks, forming the core of our proposed methodology. The
proposed approach employs a multiple discriminator architecture, each operating
at a different resolution to capture high-level features, in addition to
maintaining low-level details and textures under mixed lighting conditions. The
proposed methodology computes the loss at multiple scales, balancing
reconstruction accuracy and perceptual quality to enhance overall image
generation. The distinct blend of the loss calculation at each level of the
pyramid, combined with the architecture of the Laplacian pyramid, enables
LapLoss to exceed contemporary contrast enhancement techniques. This framework
achieves state-of-the-art results, consistently performing well across
different lighting conditions in the SICE dataset.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 23:05:47 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Didwania",
"Krish",
""
],
[
"Gakhar",
"Ishaan",
""
],
[
"Arya",
"Prakhar",
""
],
[
"Labroo",
"Sanskriti",
""
]
]
| TITLE: LapLoss: Laplacian Pyramid-based Multiscale loss for Image Translation
ABSTRACT: Contrast enhancement, a key aspect of image-to-image translation (I2IT),
improves visual quality by adjusting intensity differences between pixels.
However, many existing methods struggle to preserve fine-grained details, often
leading to the loss of low-level features. This paper introduces LapLoss, a
novel approach designed for I2IT contrast enhancement, based on the Laplacian
pyramid-centric networks, forming the core of our proposed methodology. The
proposed approach employs a multiple discriminator architecture, each operating
at a different resolution to capture high-level features, in addition to
maintaining low-level details and textures under mixed lighting conditions. The
proposed methodology computes the loss at multiple scales, balancing
reconstruction accuracy and perceptual quality to enhance overall image
generation. The distinct blend of the loss calculation at each level of the
pyramid, combined with the architecture of the Laplacian pyramid, enables
LapLoss to exceed contemporary contrast enhancement techniques. This framework
achieves state-of-the-art results, consistently performing well across
different lighting conditions in the SICE dataset.
| no_new_dataset | 0.949295 |
2503.05980 | Samir Abdaljalil | Samir Abdaljalil, Hasan Kurban, Parichit Sharma, Erchin Serpedin,
Rachad Atat | SINdex: Semantic INconsistency Index for Hallucination Detection in LLMs | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) are increasingly deployed across diverse
domains, yet they are prone to generating factually incorrect outputs -
commonly known as "hallucinations." Among existing mitigation strategies,
uncertainty-based methods are particularly attractive due to their ease of
implementation, independence from external data, and compatibility with
standard LLMs. In this work, we introduce a novel and scalable
uncertainty-based semantic clustering framework for automated hallucination
detection. Our approach leverages sentence embeddings and hierarchical
clustering alongside a newly proposed inconsistency measure, SINdex, to yield
more homogeneous clusters and more accurate detection of hallucination
phenomena across various LLMs. Evaluations on prominent open- and closed-book
QA datasets demonstrate that our method achieves AUROC improvements of up to
9.3% over state-of-the-art techniques. Extensive ablation studies further
validate the effectiveness of each component in our framework.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 23:25:19 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Abdaljalil",
"Samir",
""
],
[
"Kurban",
"Hasan",
""
],
[
"Sharma",
"Parichit",
""
],
[
"Serpedin",
"Erchin",
""
],
[
"Atat",
"Rachad",
""
]
]
| TITLE: SINdex: Semantic INconsistency Index for Hallucination Detection in LLMs
ABSTRACT: Large language models (LLMs) are increasingly deployed across diverse
domains, yet they are prone to generating factually incorrect outputs -
commonly known as "hallucinations." Among existing mitigation strategies,
uncertainty-based methods are particularly attractive due to their ease of
implementation, independence from external data, and compatibility with
standard LLMs. In this work, we introduce a novel and scalable
uncertainty-based semantic clustering framework for automated hallucination
detection. Our approach leverages sentence embeddings and hierarchical
clustering alongside a newly proposed inconsistency measure, SINdex, to yield
more homogeneous clusters and more accurate detection of hallucination
phenomena across various LLMs. Evaluations on prominent open- and closed-book
QA datasets demonstrate that our method achieves AUROC improvements of up to
9.3% over state-of-the-art techniques. Extensive ablation studies further
validate the effectiveness of each component in our framework.
| no_new_dataset | 0.949295 |
2503.05985 | Lucius Bynum | Lucius E.J. Bynum, Aahlad Manas Puli, Diego Herrero-Quevedo, Nhi
Nguyen, Carlos Fernandez-Granda, Kyunghyun Cho, Rajesh Ranganath | Black Box Causal Inference: Effect Estimation via Meta Prediction | null | null | null | null | cs.LG cs.AI stat.CO stat.ME stat.ML | http://creativecommons.org/licenses/by/4.0/ | Causal inference and the estimation of causal effects play a central role in
decision-making across many areas, including healthcare and economics.
Estimating causal effects typically requires an estimator that is tailored to
each problem of interest. But developing estimators can take significant effort
for even a single causal inference setting. For example, algorithms for
regression-based estimators, propensity score methods, and doubly robust
methods were designed across several decades to handle causal estimation with
observed confounders. Similarly, several estimators have been developed to
exploit instrumental variables (IVs), including two-stage least-squares (TSLS),
control functions, and the method-of-moments. In this work, we instead frame
causal inference as a dataset-level prediction problem, offloading algorithm
design to the learning process. The approach we introduce, called black box
causal inference (BBCI), builds estimators in a black-box manner by learning to
predict causal effects from sampled dataset-effect pairs. We demonstrate
accurate estimation of average treatment effects (ATEs) and conditional average
treatment effects (CATEs) with BBCI across several causal inference problems
with known identification, including problems with less developed estimators.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 23:43:19 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Bynum",
"Lucius E. J.",
""
],
[
"Puli",
"Aahlad Manas",
""
],
[
"Herrero-Quevedo",
"Diego",
""
],
[
"Nguyen",
"Nhi",
""
],
[
"Fernandez-Granda",
"Carlos",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Ranganath",
"Rajesh",
""
]
]
| TITLE: Black Box Causal Inference: Effect Estimation via Meta Prediction
ABSTRACT: Causal inference and the estimation of causal effects play a central role in
decision-making across many areas, including healthcare and economics.
Estimating causal effects typically requires an estimator that is tailored to
each problem of interest. But developing estimators can take significant effort
for even a single causal inference setting. For example, algorithms for
regression-based estimators, propensity score methods, and doubly robust
methods were designed across several decades to handle causal estimation with
observed confounders. Similarly, several estimators have been developed to
exploit instrumental variables (IVs), including two-stage least-squares (TSLS),
control functions, and the method-of-moments. In this work, we instead frame
causal inference as a dataset-level prediction problem, offloading algorithm
design to the learning process. The approach we introduce, called black box
causal inference (BBCI), builds estimators in a black-box manner by learning to
predict causal effects from sampled dataset-effect pairs. We demonstrate
accurate estimation of average treatment effects (ATEs) and conditional average
treatment effects (CATEs) with BBCI across several causal inference problems
with known identification, including problems with less developed estimators.
| no_new_dataset | 0.945751 |
2503.05990 | Qi Zhang | Qi Zhang, Shunan Zhang, Ziqi Zhao, Kun Wang, Jun Xu, and Jianqi Sun | HealthiVert-GAN: A Novel Framework of Pseudo-Healthy Vertebral Image
Synthesis for Interpretable Compression Fracture Grading | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Osteoporotic vertebral compression fractures (VCFs) are prevalent in the
elderly population, typically assessed on computed tomography (CT) scans by
evaluating vertebral height loss. This assessment helps determine the
fracture's impact on spinal stability and the need for surgical intervention.
However, clinical data indicate that many VCFs exhibit irregular compression,
complicating accurate diagnosis. While deep learning methods have shown promise
in aiding VCFs screening, they often lack interpretability and sufficient
sensitivity, limiting their clinical applicability. To address these
challenges, we introduce a novel vertebra synthesis-height loss
quantification-VCFs grading framework. Our proposed model, HealthiVert-GAN,
utilizes a coarse-to-fine synthesis network designed to generate pseudo-healthy
vertebral images that simulate the pre-fracture state of fractured vertebrae.
This model integrates three auxiliary modules that leverage the morphology and
height information of adjacent healthy vertebrae to ensure anatomical
consistency. Additionally, we introduce the Relative Height Loss of Vertebrae
(RHLV) as a quantification metric, which divides each vertebra into three
sections to measure height loss between pre-fracture and post-fracture states,
followed by fracture severity classification using a Support Vector Machine
(SVM). Our approach achieves state-of-the-art classification performance on
both the Verse2019 dataset and our private dataset, and it provides
cross-sectional distribution maps of vertebral height loss. This practical tool
enhances diagnostic sensitivity in clinical settings and assists in surgical
decision-making. Our code is available:
https://github.com/zhibaishouheilab/HealthiVert-GAN.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 00:05:39 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Qi",
""
],
[
"Zhang",
"Shunan",
""
],
[
"Zhao",
"Ziqi",
""
],
[
"Wang",
"Kun",
""
],
[
"Xu",
"Jun",
""
],
[
"Sun",
"Jianqi",
""
]
]
| TITLE: HealthiVert-GAN: A Novel Framework of Pseudo-Healthy Vertebral Image
Synthesis for Interpretable Compression Fracture Grading
ABSTRACT: Osteoporotic vertebral compression fractures (VCFs) are prevalent in the
elderly population, typically assessed on computed tomography (CT) scans by
evaluating vertebral height loss. This assessment helps determine the
fracture's impact on spinal stability and the need for surgical intervention.
However, clinical data indicate that many VCFs exhibit irregular compression,
complicating accurate diagnosis. While deep learning methods have shown promise
in aiding VCFs screening, they often lack interpretability and sufficient
sensitivity, limiting their clinical applicability. To address these
challenges, we introduce a novel vertebra synthesis-height loss
quantification-VCFs grading framework. Our proposed model, HealthiVert-GAN,
utilizes a coarse-to-fine synthesis network designed to generate pseudo-healthy
vertebral images that simulate the pre-fracture state of fractured vertebrae.
This model integrates three auxiliary modules that leverage the morphology and
height information of adjacent healthy vertebrae to ensure anatomical
consistency. Additionally, we introduce the Relative Height Loss of Vertebrae
(RHLV) as a quantification metric, which divides each vertebra into three
sections to measure height loss between pre-fracture and post-fracture states,
followed by fracture severity classification using a Support Vector Machine
(SVM). Our approach achieves state-of-the-art classification performance on
both the Verse2019 dataset and our private dataset, and it provides
cross-sectional distribution maps of vertebral height loss. This practical tool
enhances diagnostic sensitivity in clinical settings and assists in surgical
decision-making. Our code is available:
https://github.com/zhibaishouheilab/HealthiVert-GAN.
| new_dataset | 0.965446 |
2503.05991 | Zixuan Liu | Zixuan Liu, Aaron Honjaya, Yuekai Xu, Yi Zhang, Hefu Pan, Xin Wang,
Linda G Shapiro, Sheng Wang, Ruikang K Wang | GrInAdapt: Scaling Retinal Vessel Structural Map Segmentation Through
Grounding, Integrating and Adapting Multi-device, Multi-site, and Multi-modal
Fundus Domains | null | null | null | null | eess.IV cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Retinal vessel segmentation is critical for diagnosing ocular conditions, yet
current deep learning methods are limited by modality-specific challenges and
significant distribution shifts across imaging devices, resolutions, and
anatomical regions. In this paper, we propose GrInAdapt, a novel framework for
source-free multi-target domain adaptation that leverages multi-view images to
refine segmentation labels and enhance model generalizability for optical
coherence tomography angiography (OCTA) of the fundus of the eye. GrInAdapt
follows an intuitive three-step approach: (i) grounding images to a common
anchor space via registration, (ii) integrating predictions from multiple views
to achieve improved label consensus, and (iii) adapting the source model to
diverse target domains. Furthermore, GrInAdapt is flexible enough to
incorporate auxiliary modalities such as color fundus photography, to provide
complementary cues for robust vessel segmentation. Extensive experiments on a
multi-device, multi-site, and multi-modal retinal dataset demonstrate that
GrInAdapt significantly outperforms existing domain adaptation methods,
achieving higher segmentation accuracy and robustness across multiple domains.
These results highlight the potential of GrInAdapt to advance automated retinal
vessel analysis and support robust clinical decision-making.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 00:15:21 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liu",
"Zixuan",
""
],
[
"Honjaya",
"Aaron",
""
],
[
"Xu",
"Yuekai",
""
],
[
"Zhang",
"Yi",
""
],
[
"Pan",
"Hefu",
""
],
[
"Wang",
"Xin",
""
],
[
"Shapiro",
"Linda G",
""
],
[
"Wang",
"Sheng",
""
],
[
"Wang",
"Ruikang K",
""
]
]
| TITLE: GrInAdapt: Scaling Retinal Vessel Structural Map Segmentation Through
Grounding, Integrating and Adapting Multi-device, Multi-site, and Multi-modal
Fundus Domains
ABSTRACT: Retinal vessel segmentation is critical for diagnosing ocular conditions, yet
current deep learning methods are limited by modality-specific challenges and
significant distribution shifts across imaging devices, resolutions, and
anatomical regions. In this paper, we propose GrInAdapt, a novel framework for
source-free multi-target domain adaptation that leverages multi-view images to
refine segmentation labels and enhance model generalizability for optical
coherence tomography angiography (OCTA) of the fundus of the eye. GrInAdapt
follows an intuitive three-step approach: (i) grounding images to a common
anchor space via registration, (ii) integrating predictions from multiple views
to achieve improved label consensus, and (iii) adapting the source model to
diverse target domains. Furthermore, GrInAdapt is flexible enough to
incorporate auxiliary modalities such as color fundus photography, to provide
complementary cues for robust vessel segmentation. Extensive experiments on a
multi-device, multi-site, and multi-modal retinal dataset demonstrate that
GrInAdapt significantly outperforms existing domain adaptation methods,
achieving higher segmentation accuracy and robustness across multiple domains.
These results highlight the potential of GrInAdapt to advance automated retinal
vessel analysis and support robust clinical decision-making.
| no_new_dataset | 0.949763 |
2503.05995 | Shan An | Shan An, Shipeng Dai, Mahrukh Ansari, Yu Liang, Ming Zeng,
Konstantinos A. Tsintotas, Changhong Fu, Hong Zhang | ReJSHand: Efficient Real-Time Hand Pose Estimation and Mesh
Reconstruction Using Refined Joint and Skeleton Features | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate hand pose estimation is vital in robotics, advancing dexterous
manipulation in human-computer interaction. Toward this goal, this paper
presents ReJSHand (which stands for Refined Joint and Skeleton Features), a
cutting-edge network formulated for real-time hand pose estimation and mesh
reconstruction. The proposed framework is designed to accurately predict 3D
hand gestures under real-time constraints, which is essential for systems that
demand agile and responsive hand motion tracking. The network's design
prioritizes computational efficiency without compromising accuracy, a
prerequisite for instantaneous robotic interactions. Specifically, ReJSHand
comprises a 2D keypoint generator, a 3D keypoint generator, an expansion block,
and a feature interaction block for meticulously reconstructing 3D hand poses
from 2D imagery. In addition, the multi-head self-attention mechanism and a
coordinate attention layer enhance feature representation, streamlining the
creation of hand mesh vertices through sophisticated feature mapping and linear
transformation. Regarding performance, comprehensive evaluations on the
FreiHand dataset demonstrate ReJSHand's computational prowess. It achieves a
frame rate of 72 frames per second while maintaining a PA-MPJPE
(Position-Accurate Mean Per Joint Position Error) of 6.3 mm and a PA-MPVPE
(Position-Accurate Mean Per Vertex Position Error) of 6.4 mm. Moreover, our
model reaches scores of 0.756 for F@05 and 0.984 for F@15, surpassing modern
pipelines and solidifying its position at the forefront of robotic hand pose
estimators. To facilitate future studies, we provide our source code at
~\url{https://github.com/daishipeng/ReJSHand}.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 00:33:41 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"An",
"Shan",
""
],
[
"Dai",
"Shipeng",
""
],
[
"Ansari",
"Mahrukh",
""
],
[
"Liang",
"Yu",
""
],
[
"Zeng",
"Ming",
""
],
[
"Tsintotas",
"Konstantinos A.",
""
],
[
"Fu",
"Changhong",
""
],
[
"Zhang",
"Hong",
""
]
]
| TITLE: ReJSHand: Efficient Real-Time Hand Pose Estimation and Mesh
Reconstruction Using Refined Joint and Skeleton Features
ABSTRACT: Accurate hand pose estimation is vital in robotics, advancing dexterous
manipulation in human-computer interaction. Toward this goal, this paper
presents ReJSHand (which stands for Refined Joint and Skeleton Features), a
cutting-edge network formulated for real-time hand pose estimation and mesh
reconstruction. The proposed framework is designed to accurately predict 3D
hand gestures under real-time constraints, which is essential for systems that
demand agile and responsive hand motion tracking. The network's design
prioritizes computational efficiency without compromising accuracy, a
prerequisite for instantaneous robotic interactions. Specifically, ReJSHand
comprises a 2D keypoint generator, a 3D keypoint generator, an expansion block,
and a feature interaction block for meticulously reconstructing 3D hand poses
from 2D imagery. In addition, the multi-head self-attention mechanism and a
coordinate attention layer enhance feature representation, streamlining the
creation of hand mesh vertices through sophisticated feature mapping and linear
transformation. Regarding performance, comprehensive evaluations on the
FreiHand dataset demonstrate ReJSHand's computational prowess. It achieves a
frame rate of 72 frames per second while maintaining a PA-MPJPE
(Position-Accurate Mean Per Joint Position Error) of 6.3 mm and a PA-MPVPE
(Position-Accurate Mean Per Vertex Position Error) of 6.4 mm. Moreover, our
model reaches scores of 0.756 for F@05 and 0.984 for F@15, surpassing modern
pipelines and solidifying its position at the forefront of robotic hand pose
estimators. To facilitate future studies, we provide our source code at
~\url{https://github.com/daishipeng/ReJSHand}.
| no_new_dataset | 0.944536 |
2503.05997 | Yasin Sonmez | Yasin Sonmez, Hanna Krasowski, Murat Arcak | Learning to Drive by Imitating Surrounding Vehicles | null | null | null | null | cs.RO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Imitation learning is a promising approach for training autonomous vehicles
(AV) to navigate complex traffic environments by mimicking expert driver
behaviors. However, a major challenge in this paradigm lies in effectively
utilizing available driving data, as collecting new data is resource-intensive
and often limited in its ability to cover diverse driving scenarios. While
existing imitation learning frameworks focus on leveraging expert
demonstrations, they often overlook the potential of additional complex driving
data from surrounding traffic participants. In this paper, we propose a data
augmentation strategy that enhances imitation learning by leveraging the
observed trajectories of nearby vehicles, captured through the AV's sensors, as
additional expert demonstrations. We introduce a vehicle selection sampling
strategy that prioritizes informative and diverse driving behaviors,
contributing to a richer and more diverse dataset for training. We evaluate our
approach using the state-of-the-art learning-based planning method PLUTO on the
nuPlan dataset and demonstrate that our augmentation method leads to improved
performance in complex driving scenarios. Specifically, our method reduces
collision rates and improves safety metrics compared to the baseline. Notably,
even when using only 10% of the original dataset, our method achieves
performance comparable to that of the full dataset, with improved collision
rates. Our findings highlight the importance of leveraging diverse real-world
trajectory data in imitation learning and provide insights into data
augmentation strategies for autonomous driving.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 00:40:47 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Sonmez",
"Yasin",
""
],
[
"Krasowski",
"Hanna",
""
],
[
"Arcak",
"Murat",
""
]
]
| TITLE: Learning to Drive by Imitating Surrounding Vehicles
ABSTRACT: Imitation learning is a promising approach for training autonomous vehicles
(AV) to navigate complex traffic environments by mimicking expert driver
behaviors. However, a major challenge in this paradigm lies in effectively
utilizing available driving data, as collecting new data is resource-intensive
and often limited in its ability to cover diverse driving scenarios. While
existing imitation learning frameworks focus on leveraging expert
demonstrations, they often overlook the potential of additional complex driving
data from surrounding traffic participants. In this paper, we propose a data
augmentation strategy that enhances imitation learning by leveraging the
observed trajectories of nearby vehicles, captured through the AV's sensors, as
additional expert demonstrations. We introduce a vehicle selection sampling
strategy that prioritizes informative and diverse driving behaviors,
contributing to a richer and more diverse dataset for training. We evaluate our
approach using the state-of-the-art learning-based planning method PLUTO on the
nuPlan dataset and demonstrate that our augmentation method leads to improved
performance in complex driving scenarios. Specifically, our method reduces
collision rates and improves safety metrics compared to the baseline. Notably,
even when using only 10% of the original dataset, our method achieves
performance comparable to that of the full dataset, with improved collision
rates. Our findings highlight the importance of leveraging diverse real-world
trajectory data in imitation learning and provide insights into data
augmentation strategies for autonomous driving.
| no_new_dataset | 0.942718 |
2503.06003 | Md Azim Khan | Md Azim Khan, Aryya Gangopadhyay, Jianwu Wang, Robert F. Erbacher | Integrating Frequency-Domain Representations with Low-Rank Adaptation in
Vision-Language Models | 8 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Situational awareness applications rely heavily on real-time processing of
visual and textual data to provide actionable insights. Vision language models
(VLMs) have become essential tools for interpreting complex environments by
connecting visual inputs with natural language descriptions. However, these
models often face computational challenges, especially when required to perform
efficiently in real environments. This research presents a novel vision
language model (VLM) framework that leverages frequency domain transformations
and low-rank adaptation (LoRA) to enhance feature extraction, scalability, and
efficiency. Unlike traditional VLMs, which rely solely on spatial-domain
representations, our approach incorporates Discrete Fourier Transform (DFT)
based low-rank features while retaining pretrained spatial weights, enabling
robust performance in noisy or low visibility scenarios. We evaluated the
proposed model on caption generation and Visual Question Answering (VQA) tasks
using benchmark datasets with varying levels of Gaussian noise. Quantitative
results demonstrate that our model achieves evaluation metrics comparable to
state-of-the-art VLMs, such as CLIP ViT-L/14 and SigLIP. Qualitative analysis
further reveals that our model provides more detailed and contextually relevant
responses, particularly for real-world images captured by a RealSense camera
mounted on an Unmanned Ground Vehicle (UGV).
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 01:22:10 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Khan",
"Md Azim",
""
],
[
"Gangopadhyay",
"Aryya",
""
],
[
"Wang",
"Jianwu",
""
],
[
"Erbacher",
"Robert F.",
""
]
]
| TITLE: Integrating Frequency-Domain Representations with Low-Rank Adaptation in
Vision-Language Models
ABSTRACT: Situational awareness applications rely heavily on real-time processing of
visual and textual data to provide actionable insights. Vision language models
(VLMs) have become essential tools for interpreting complex environments by
connecting visual inputs with natural language descriptions. However, these
models often face computational challenges, especially when required to perform
efficiently in real environments. This research presents a novel vision
language model (VLM) framework that leverages frequency domain transformations
and low-rank adaptation (LoRA) to enhance feature extraction, scalability, and
efficiency. Unlike traditional VLMs, which rely solely on spatial-domain
representations, our approach incorporates Discrete Fourier Transform (DFT)
based low-rank features while retaining pretrained spatial weights, enabling
robust performance in noisy or low visibility scenarios. We evaluated the
proposed model on caption generation and Visual Question Answering (VQA) tasks
using benchmark datasets with varying levels of Gaussian noise. Quantitative
results demonstrate that our model achieves evaluation metrics comparable to
state-of-the-art VLMs, such as CLIP ViT-L/14 and SigLIP. Qualitative analysis
further reveals that our model provides more detailed and contextually relevant
responses, particularly for real-world images captured by a RealSense camera
mounted on an Unmanned Ground Vehicle (UGV).
| no_new_dataset | 0.953622 |
2503.06012 | Zhenrong Wang | Zhenrong Wang, Qi Zheng, Sihan Ma, Maosheng Ye, Yibing Zhan, Dongjiang
Li | End-to-End HOI Reconstruction Transformer with Graph-based Encoding | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the diversification of human-object interaction (HOI) applications and
the success of capturing human meshes, HOI reconstruction has gained widespread
attention. Existing mainstream HOI reconstruction methods often rely on
explicitly modeling interactions between humans and objects. However, such a
way leads to a natural conflict between 3D mesh reconstruction, which
emphasizes global structure, and fine-grained contact reconstruction, which
focuses on local details. To address the limitations of explicit modeling, we
propose the End-to-End HOI Reconstruction Transformer with Graph-based Encoding
(HOI-TG). It implicitly learns the interaction between humans and objects by
leveraging self-attention mechanisms. Within the transformer architecture, we
devise graph residual blocks to aggregate the topology among vertices of
different spatial structures. This dual focus effectively balances global and
local representations. Without bells and whistles, HOI-TG achieves
state-of-the-art performance on BEHAVE and InterCap datasets. Particularly on
the challenging InterCap dataset, our method improves the reconstruction
results for human and object meshes by 8.9% and 8.6%, respectively.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 02:21:40 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Zhenrong",
""
],
[
"Zheng",
"Qi",
""
],
[
"Ma",
"Sihan",
""
],
[
"Ye",
"Maosheng",
""
],
[
"Zhan",
"Yibing",
""
],
[
"Li",
"Dongjiang",
""
]
]
| TITLE: End-to-End HOI Reconstruction Transformer with Graph-based Encoding
ABSTRACT: With the diversification of human-object interaction (HOI) applications and
the success of capturing human meshes, HOI reconstruction has gained widespread
attention. Existing mainstream HOI reconstruction methods often rely on
explicitly modeling interactions between humans and objects. However, such a
way leads to a natural conflict between 3D mesh reconstruction, which
emphasizes global structure, and fine-grained contact reconstruction, which
focuses on local details. To address the limitations of explicit modeling, we
propose the End-to-End HOI Reconstruction Transformer with Graph-based Encoding
(HOI-TG). It implicitly learns the interaction between humans and objects by
leveraging self-attention mechanisms. Within the transformer architecture, we
devise graph residual blocks to aggregate the topology among vertices of
different spatial structures. This dual focus effectively balances global and
local representations. Without bells and whistles, HOI-TG achieves
state-of-the-art performance on BEHAVE and InterCap datasets. Particularly on
the challenging InterCap dataset, our method improves the reconstruction
results for human and object meshes by 8.9% and 8.6%, respectively.
| no_new_dataset | 0.946646 |
2503.06021 | Mingcong Xu | Mingcong Xu, Xiaojin Zhang, Wei Chen, Hai Jin | FedEM: A Privacy-Preserving Framework for Concurrent Utility
Preservation in Federated Learning | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Federated Learning (FL) enables collaborative training of models across
distributed clients without sharing local data, addressing privacy concerns in
decentralized systems. However, the gradient-sharing process exposes private
data to potential leakage, compromising FL's privacy guarantees in real-world
applications. To address this issue, we propose Federated Error Minimization
(FedEM), a novel algorithm that incorporates controlled perturbations through
adaptive noise injection. This mechanism effectively mitigates gradient leakage
attacks while maintaining model performance. Experimental results on benchmark
datasets demonstrate that FedEM significantly reduces privacy risks and
preserves model accuracy, achieving a robust balance between privacy protection
and utility preservation.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 02:48:00 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Xu",
"Mingcong",
""
],
[
"Zhang",
"Xiaojin",
""
],
[
"Chen",
"Wei",
""
],
[
"Jin",
"Hai",
""
]
]
| TITLE: FedEM: A Privacy-Preserving Framework for Concurrent Utility
Preservation in Federated Learning
ABSTRACT: Federated Learning (FL) enables collaborative training of models across
distributed clients without sharing local data, addressing privacy concerns in
decentralized systems. However, the gradient-sharing process exposes private
data to potential leakage, compromising FL's privacy guarantees in real-world
applications. To address this issue, we propose Federated Error Minimization
(FedEM), a novel algorithm that incorporates controlled perturbations through
adaptive noise injection. This mechanism effectively mitigates gradient leakage
attacks while maintaining model performance. Experimental results on benchmark
datasets demonstrate that FedEM significantly reduces privacy risks and
preserves model accuracy, achieving a robust balance between privacy protection
and utility preservation.
| no_new_dataset | 0.947039 |
2503.06026 | Kei Ota | Masaru Yajima, Kei Ota, Asako Kanezaki, Rei Kawakami | Zero-Shot Peg Insertion: Identifying Mating Holes and Estimating SE(2)
Poses with Vision-Language Models | Under submission | null | null | null | cs.RO cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Achieving zero-shot peg insertion, where an arbitrary peg must be inserted into
an unseen hole without task-specific training, remains a fundamental challenge in
robotics. This task demands a highly generalizable perception system capable of
detecting potential holes, selecting the correct mating hole from multiple
candidates, estimating its precise pose, and executing insertion despite
uncertainties. While learning-based methods have been applied to peg insertion,
they often fail to generalize beyond the specific peg-hole pairs encountered
during training. Recent advancements in Vision-Language Models (VLMs) offer a
promising alternative, leveraging large-scale datasets to enable robust
generalization across diverse tasks. Inspired by their success, we introduce a
novel zero-shot peg insertion framework that utilizes a VLM to identify mating
holes and estimate their poses without prior knowledge of their geometry.
Extensive experiments demonstrate that our method achieves 90.2% accuracy,
significantly outperforming baselines in identifying the correct mating hole
across a wide range of previously unseen peg-hole pairs, including 3D-printed
objects, toy puzzles, and industrial connectors. Furthermore, we validate the
effectiveness of our approach in a real-world connector insertion task on a
backpanel of a PC, where our system successfully detects holes, identifies the
correct mating hole, estimates its pose, and completes the insertion with a
success rate of 88.3%. These results highlight the potential of VLM-driven
zero-shot reasoning for enabling robust and generalizable robotic assembly.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 02:59:21 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Yajima",
"Masaru",
""
],
[
"Ota",
"Kei",
""
],
[
"Kanezaki",
"Asako",
""
],
[
"Kawakami",
"Rei",
""
]
]
| TITLE: Zero-Shot Peg Insertion: Identifying Mating Holes and Estimating SE(2)
Poses with Vision-Language Models
ABSTRACT: Achieving zero-shot peg insertion, where an arbitrary peg must be inserted into
an unseen hole without task-specific training, remains a fundamental challenge in
robotics. This task demands a highly generalizable perception system capable of
detecting potential holes, selecting the correct mating hole from multiple
candidates, estimating its precise pose, and executing insertion despite
uncertainties. While learning-based methods have been applied to peg insertion,
they often fail to generalize beyond the specific peg-hole pairs encountered
during training. Recent advancements in Vision-Language Models (VLMs) offer a
promising alternative, leveraging large-scale datasets to enable robust
generalization across diverse tasks. Inspired by their success, we introduce a
novel zero-shot peg insertion framework that utilizes a VLM to identify mating
holes and estimate their poses without prior knowledge of their geometry.
Extensive experiments demonstrate that our method achieves 90.2% accuracy,
significantly outperforming baselines in identifying the correct mating hole
across a wide range of previously unseen peg-hole pairs, including 3D-printed
objects, toy puzzles, and industrial connectors. Furthermore, we validate the
effectiveness of our approach in a real-world connector insertion task on a
backpanel of a PC, where our system successfully detects holes, identifies the
correct mating hole, estimates its pose, and completes the insertion with a
success rate of 88.3%. These results highlight the potential of VLM-driven
zero-shot reasoning for enabling robust and generalizable robotic assembly.
| no_new_dataset | 0.94743 |
2503.06028 | Xinge Ma | Xinge Ma, Jin Wang, Xuejie Zhang | Data-Free Black-Box Federated Learning via Zeroth-Order Gradient
Estimation | Accepted by AAAI 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Federated learning (FL) enables decentralized clients to collaboratively
train a global model under the orchestration of a central server without
exposing their individual data. However, the iterative exchange of model
parameters between the server and clients imposes heavy communication burdens,
risks potential privacy leakage, and even precludes collaboration among
heterogeneous clients. Distillation-based FL tackles these challenges by
exchanging low-dimensional model outputs rather than model parameters, yet it
highly relies on a task-relevant auxiliary dataset that is often not available
in practice. Data-free FL attempts to overcome this limitation by training a
server-side generator to directly synthesize task-specific data samples for
knowledge transfer. However, the update rule of the generator requires clients
to share on-device models for white-box access, which greatly compromises the
advantages of distillation-based FL. This motivates us to explore a data-free
and black-box FL framework via Zeroth-order Gradient Estimation (FedZGE), which
estimates the gradients after flowing through on-device models in a black-box
optimization manner to complete the training of the generator in terms of
fidelity, transferability, diversity, and equilibrium, without involving any
auxiliary data or sharing any model parameters, thus combining the advantages
of both distillation-based FL and data-free FL. Experiments on large-scale
image classification datasets and network architectures demonstrate the
superiority of FedZGE in terms of data heterogeneity, model heterogeneity,
communication efficiency, and privacy protection.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 03:00:01 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ma",
"Xinge",
""
],
[
"Wang",
"Jin",
""
],
[
"Zhang",
"Xuejie",
""
]
]
| TITLE: Data-Free Black-Box Federated Learning via Zeroth-Order Gradient
Estimation
ABSTRACT: Federated learning (FL) enables decentralized clients to collaboratively
train a global model under the orchestration of a central server without
exposing their individual data. However, the iterative exchange of model
parameters between the server and clients imposes heavy communication burdens,
risks potential privacy leakage, and even precludes collaboration among
heterogeneous clients. Distillation-based FL tackles these challenges by
exchanging low-dimensional model outputs rather than model parameters, yet it
highly relies on a task-relevant auxiliary dataset that is often not available
in practice. Data-free FL attempts to overcome this limitation by training a
server-side generator to directly synthesize task-specific data samples for
knowledge transfer. However, the update rule of the generator requires clients
to share on-device models for white-box access, which greatly compromises the
advantages of distillation-based FL. This motivates us to explore a data-free
and black-box FL framework via Zeroth-order Gradient Estimation (FedZGE), which
estimates the gradients after flowing through on-device models in a black-box
optimization manner to complete the training of the generator in terms of
fidelity, transferability, diversity, and equilibrium, without involving any
auxiliary data or sharing any model parameters, thus combining the advantages
of both distillation-based FL and data-free FL. Experiments on large-scale
image classification datasets and network architectures demonstrate the
superiority of FedZGE in terms of data heterogeneity, model heterogeneity,
communication efficiency, and privacy protection.
| no_new_dataset | 0.948632 |
2503.06029 | Xudong Lu | Xudong Lu, Haohao Gao, Renshou Wu, Shuai Ren, Xiaoxin Chen, Hongsheng
Li, Fangyuan Li | SmartBench: Is Your LLM Truly a Good Chinese Smartphone Assistant? | 23 pages | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have become integral to daily life, especially
advancing as intelligent assistants through on-device deployment on
smartphones. However, existing LLM evaluation benchmarks predominantly focus on
objective tasks like mathematics and coding in English, which do not
necessarily reflect the practical use cases of on-device LLMs in real-world
mobile scenarios, especially for Chinese users. To address these gaps, we
introduce SmartBench, the first benchmark designed to evaluate the capabilities
of on-device LLMs in Chinese mobile contexts. We analyze functionalities
provided by representative smartphone manufacturers and divide them into five
categories: text summarization, text Q\&A, information extraction, content
creation, and notification management, further detailed into 20 specific tasks.
For each task, we construct high-quality datasets comprising 50 to 200
question-answer pairs that reflect everyday mobile interactions, and we develop
automated evaluation criteria tailored for these tasks. We conduct
comprehensive evaluations of on-device LLMs and MLLMs using SmartBench and also
assess their performance after quantized deployment on real smartphone NPUs.
Our contributions provide a standardized framework for evaluating on-device
LLMs in Chinese, promoting further development and optimization in this
critical area. Code and data will be available at
https://github.com/Lucky-Lance/SmartBench.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 03:02:21 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Lu",
"Xudong",
""
],
[
"Gao",
"Haohao",
""
],
[
"Wu",
"Renshou",
""
],
[
"Ren",
"Shuai",
""
],
[
"Chen",
"Xiaoxin",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Li",
"Fangyuan",
""
]
]
| TITLE: SmartBench: Is Your LLM Truly a Good Chinese Smartphone Assistant?
ABSTRACT: Large Language Models (LLMs) have become integral to daily life, especially
advancing as intelligent assistants through on-device deployment on
smartphones. However, existing LLM evaluation benchmarks predominantly focus on
objective tasks like mathematics and coding in English, which do not
necessarily reflect the practical use cases of on-device LLMs in real-world
mobile scenarios, especially for Chinese users. To address these gaps, we
introduce SmartBench, the first benchmark designed to evaluate the capabilities
of on-device LLMs in Chinese mobile contexts. We analyze functionalities
provided by representative smartphone manufacturers and divide them into five
categories: text summarization, text Q\&A, information extraction, content
creation, and notification management, further detailed into 20 specific tasks.
For each task, we construct high-quality datasets comprising 50 to 200
question-answer pairs that reflect everyday mobile interactions, and we develop
automated evaluation criteria tailored for these tasks. We conduct
comprehensive evaluations of on-device LLMs and MLLMs using SmartBench and also
assess their performance after quantized deployment on real smartphone NPUs.
Our contributions provide a standardized framework for evaluating on-device
LLMs in Chinese, promoting further development and optimization in this
critical area. Code and data will be available at
https://github.com/Lucky-Lance/SmartBench.
| new_dataset | 0.974677 |
2503.06030 | Yuxiang Lai | Yuheng Li, Yuxiang Lai, Maria Thor, Deborah Marshall, Zachary
Buchwald, David S. Yu, Xiaofeng Yang | Towards Universal Text-driven CT Image Segmentation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Computed tomography (CT) is extensively used for accurate visualization and
segmentation of organs and lesions. While deep learning models such as
convolutional neural networks (CNNs) and vision transformers (ViTs) have
significantly improved CT image analysis, their performance often declines when
applied to diverse, real-world clinical data. Although foundation models offer
a broader and more adaptable solution, their potential is limited due to the
challenge of obtaining large-scale, voxel-level annotations for medical images.
In response to these challenges, prompting-based models using visual or text
prompts have emerged. Visual-prompting methods, such as the Segment Anything
Model (SAM), still require significant manual input and can introduce ambiguity
when applied to clinical scenarios. Instead, foundation models that use text
prompts offer a more versatile and clinically relevant approach. Notably,
current text-prompt models, such as the CLIP-Driven Universal Model, are
limited to text prompts already encountered during training and struggle to
process the complex and diverse scenarios of real-world clinical applications.
Instead of fine-tuning models trained on natural images, we propose
OpenVocabCT, a vision-language model pretrained on large-scale 3D CT images for
universal text-driven segmentation. Using the large-scale CT-RATE dataset, we
decompose the diagnostic reports into fine-grained, organ-level descriptions
using large language models for multi-granular contrastive learning. We
evaluate our OpenVocabCT on downstream segmentation tasks across nine public
datasets for organ and tumor segmentation, demonstrating the superior
performance of our model compared to existing methods. All code, datasets, and
models will be publicly released at https://github.com/ricklisz/OpenVocabCT.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 03:02:57 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Yuheng",
""
],
[
"Lai",
"Yuxiang",
""
],
[
"Thor",
"Maria",
""
],
[
"Marshall",
"Deborah",
""
],
[
"Buchwald",
"Zachary",
""
],
[
"Yu",
"David S.",
""
],
[
"Yang",
"Xiaofeng",
""
]
]
| TITLE: Towards Universal Text-driven CT Image Segmentation
ABSTRACT: Computed tomography (CT) is extensively used for accurate visualization and
segmentation of organs and lesions. While deep learning models such as
convolutional neural networks (CNNs) and vision transformers (ViTs) have
significantly improved CT image analysis, their performance often declines when
applied to diverse, real-world clinical data. Although foundation models offer
a broader and more adaptable solution, their potential is limited due to the
challenge of obtaining large-scale, voxel-level annotations for medical images.
In response to these challenges, prompting-based models using visual or text
prompts have emerged. Visual-prompting methods, such as the Segment Anything
Model (SAM), still require significant manual input and can introduce ambiguity
when applied to clinical scenarios. Instead, foundation models that use text
prompts offer a more versatile and clinically relevant approach. Notably,
current text-prompt models, such as the CLIP-Driven Universal Model, are
limited to text prompts already encountered during training and struggle to
process the complex and diverse scenarios of real-world clinical applications.
Instead of fine-tuning models trained on natural images, we propose
OpenVocabCT, a vision-language model pretrained on large-scale 3D CT images for
universal text-driven segmentation. Using the large-scale CT-RATE dataset, we
decompose the diagnostic reports into fine-grained, organ-level descriptions
using large language models for multi-granular contrastive learning. We
evaluate our OpenVocabCT on downstream segmentation tasks across nine public
datasets for organ and tumor segmentation, demonstrating the superior
performance of our model compared to existing methods. All code, datasets, and
models will be publicly released at https://github.com/ricklisz/OpenVocabCT.
| no_new_dataset | 0.948585 |
2503.06034 | Shengyao Zhuang | Shengyao Zhuang, Xueguang Ma, Bevan Koopman, Jimmy Lin, Guido Zuccon | Rank-R1: Enhancing Reasoning in LLM-based Document Rerankers via
Reinforcement Learning | null | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce Rank-R1, a novel LLM-based reranker that performs
reasoning over both the user query and candidate documents before performing
the ranking task. Existing document reranking methods based on large language
models (LLMs) typically rely on prompting or fine-tuning LLMs to order or label
candidate documents according to their relevance to a query. For Rank-R1, we
use a reinforcement learning algorithm along with only a small set of relevance
labels (without any reasoning supervision) to enhance the reasoning ability of
LLM-based rerankers. Our hypothesis is that adding reasoning capabilities to
the rerankers can improve their relevance assessment and ranking capabilities.
Our experiments on the TREC DL and BRIGHT datasets show that Rank-R1 is highly
effective, especially for complex queries. In particular, we find that Rank-R1
achieves effectiveness on in-domain datasets at par with that of supervised
fine-tuning methods, but utilizing only 18\% of the training data used by the
fine-tuning methods. We also find that the model largely outperforms zero-shot
and supervised fine-tuning when applied to out-of-domain datasets featuring
complex queries, especially when a 14B-size model is used. Finally, we
qualitatively observe that Rank-R1's reasoning process improves the
explainability of the ranking results, opening new opportunities for search
engine results presentation and fruition.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 03:14:26 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhuang",
"Shengyao",
""
],
[
"Ma",
"Xueguang",
""
],
[
"Koopman",
"Bevan",
""
],
[
"Lin",
"Jimmy",
""
],
[
"Zuccon",
"Guido",
""
]
]
| TITLE: Rank-R1: Enhancing Reasoning in LLM-based Document Rerankers via
Reinforcement Learning
ABSTRACT: In this paper, we introduce Rank-R1, a novel LLM-based reranker that performs
reasoning over both the user query and candidate documents before performing
the ranking task. Existing document reranking methods based on large language
models (LLMs) typically rely on prompting or fine-tuning LLMs to order or label
candidate documents according to their relevance to a query. For Rank-R1, we
use a reinforcement learning algorithm along with only a small set of relevance
labels (without any reasoning supervision) to enhance the reasoning ability of
LLM-based rerankers. Our hypothesis is that adding reasoning capabilities to
the rerankers can improve their relevance assessment and ranking capabilities.
Our experiments on the TREC DL and BRIGHT datasets show that Rank-R1 is highly
effective, especially for complex queries. In particular, we find that Rank-R1
achieves effectiveness on in-domain datasets at par with that of supervised
fine-tuning methods, but utilizing only 18\% of the training data used by the
fine-tuning methods. We also find that the model largely outperforms zero-shot
and supervised fine-tuning when applied to out-of-domain datasets featuring
complex queries, especially when a 14B-size model is used. Finally, we
qualitatively observe that Rank-R1's reasoning process improves the
explainability of the ranking results, opening new opportunities for search
engine results presentation and fruition.
| no_new_dataset | 0.950411 |
2503.06035 | Chien-Yi Chang | Chien-yi Chang and Xin He | The Liabilities of Robots.txt | 28 pages | null | null | null | cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The robots.txt file, introduced as part of the Robots Exclusion Protocol in
1994, provides webmasters with a mechanism to communicate access permissions to
automated bots. While broadly adopted as a community standard, the legal
liabilities associated with violating robots.txt remain ambiguous. The rapid
rise of large language models, which depend on extensive datasets for training,
has amplified these challenges, prompting webmasters to increasingly use
robots.txt to restrict the activities of bots engaged in large-scale data
collection. This paper clarifies the liabilities associated with robots.txt
within the contexts of contract, copyright, and tort law. Drawing on key cases,
legal principles, and scholarly discourse, it proposes a legal framework for
web scraping disputes. It also addresses the growing fragmentation of the
internet, as restrictive practices by webmasters threaten the principles of
openness and collaboration. Through balancing innovation with accountability,
this paper offers insights to ensure that robots.txt remains an equitable
protocol for the internet and thus contributes to digital governance in the age
of AI.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 03:16:17 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Chang",
"Chien-yi",
""
],
[
"He",
"Xin",
""
]
]
| TITLE: The Liabilities of Robots.txt
ABSTRACT: The robots.txt file, introduced as part of the Robots Exclusion Protocol in
1994, provides webmasters with a mechanism to communicate access permissions to
automated bots. While broadly adopted as a community standard, the legal
liabilities associated with violating robots.txt remain ambiguous. The rapid
rise of large language models, which depend on extensive datasets for training,
has amplified these challenges, prompting webmasters to increasingly use
robots.txt to restrict the activities of bots engaged in large-scale data
collection. This paper clarifies the liabilities associated with robots.txt
within the contexts of contract, copyright, and tort law. Drawing on key cases,
legal principles, and scholarly discourse, it proposes a legal framework for
web scraping disputes. It also addresses the growing fragmentation of the
internet, as restrictive practices by webmasters threaten the principles of
openness and collaboration. Through balancing innovation with accountability,
this paper offers insights to ensure that robots.txt remains an equitable
protocol for the internet and thus contributes to digital governance in the age
of AI.
| no_new_dataset | 0.954009 |
2503.06038 | Hongtao Wang | Hongtao Wang and Jiandong Liang and Lei Wang and Shuaizhe Liang and
Jinping Zhu and Chunxia Zhang and Jiangshe Zhang | A Label-Free High-Precision Residual Moveout Picking Method for Travel
Time Tomography based on Deep Learning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Residual moveout (RMO) provides critical information for travel time
tomography. The current industry-standard method for fitting RMO involves
scanning high-order polynomial equations. However, this analytical approach
does not accurately capture local saltation, leading to low iteration
efficiency in tomographic inversion. Supervised learning-based image
segmentation methods for picking can effectively capture local variations;
however, they encounter challenges such as a scarcity of reliable training
samples and the high complexity of post-processing. To address these issues,
this study proposes a deep learning-based cascade picking method. It
distinguishes accurate and robust RMOs using a segmentation network and a
post-processing technique based on trend regression. Additionally, a data
synthesis method is introduced, enabling the segmentation network to be trained
on synthetic datasets for effective picking in field data. Furthermore, a set
of metrics is proposed to quantify the quality of automatically picked RMOs.
Experimental results based on both model and real data demonstrate that,
compared to semblance-based methods, our approach achieves greater picking
density and accuracy.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 03:27:55 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Hongtao",
""
],
[
"Liang",
"Jiandong",
""
],
[
"Wang",
"Lei",
""
],
[
"Liang",
"Shuaizhe",
""
],
[
"Zhu",
"Jinping",
""
],
[
"Zhang",
"Chunxia",
""
],
[
"Zhang",
"Jiangshe",
""
]
]
| TITLE: A Label-Free High-Precision Residual Moveout Picking Method for Travel
Time Tomography based on Deep Learning
ABSTRACT: Residual moveout (RMO) provides critical information for travel time
tomography. The current industry-standard method for fitting RMO involves
scanning high-order polynomial equations. However, this analytical approach
does not accurately capture local saltation, leading to low iteration
efficiency in tomographic inversion. Supervised learning-based image
segmentation methods for picking can effectively capture local variations;
however, they encounter challenges such as a scarcity of reliable training
samples and the high complexity of post-processing. To address these issues,
this study proposes a deep learning-based cascade picking method. It
distinguishes accurate and robust RMOs using a segmentation network and a
post-processing technique based on trend regression. Additionally, a data
synthesis method is introduced, enabling the segmentation network to be trained
on synthetic datasets for effective picking in field data. Furthermore, a set
of metrics is proposed to quantify the quality of automatically picked RMOs.
Experimental results based on both model and real data demonstrate that,
compared to semblance-based methods, our approach achieves greater picking
density and accuracy.
| no_new_dataset | 0.950365 |
2503.06053 | Baoyu Fan | Runze Zhang, Guoguang Du, Xiaochuan Li, Qi Jia, Liang Jin, Lu Liu,
Jingjing Wang, Cong Xu, Zhenhua Guo, Yaqian Zhao, Xiaoli Gong, Rengang Li,
Baoyu Fan | DropletVideo: A Dataset and Approach to Explore Integral Spatio-Temporal
Consistent Video Generation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Spatio-temporal consistency is a critical research topic in video generation.
A qualified generated video segment must ensure plot plausibility and coherence
while maintaining visual consistency of objects and scenes across varying
viewpoints. Prior research, especially in open-source projects, primarily
focuses on either temporal or spatial consistency, or their basic combination,
such as appending a description of a camera movement after a prompt without
constraining the outcomes of this movement. However, camera movement may
introduce new objects to the scene or eliminate existing ones, thereby
overlaying and affecting the preceding narrative. Especially in videos with
numerous camera movements, the interplay between multiple plots becomes
increasingly complex. This paper introduces and examines integral
spatio-temporal consistency, considering the synergy between plot progression
and camera techniques, and the long-term impact of prior content on subsequent
generation. Our research encompasses dataset construction through to the
development of the model. Initially, we constructed a DropletVideo-10M dataset,
which comprises 10 million videos featuring dynamic camera motion and object
actions. Each video is annotated with an average caption of 206 words,
detailing various camera movements and plot developments. Following this, we
developed and trained the DropletVideo model, which excels in preserving
spatio-temporal coherence during video generation. The DropletVideo dataset and
model are accessible at https://dropletx.github.io.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 04:37:38 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Runze",
""
],
[
"Du",
"Guoguang",
""
],
[
"Li",
"Xiaochuan",
""
],
[
"Jia",
"Qi",
""
],
[
"Jin",
"Liang",
""
],
[
"Liu",
"Lu",
""
],
[
"Wang",
"Jingjing",
""
],
[
"Xu",
"Cong",
""
],
[
"Guo",
"Zhenhua",
""
],
[
"Zhao",
"Yaqian",
""
],
[
"Gong",
"Xiaoli",
""
],
[
"Li",
"Rengang",
""
],
[
"Fan",
"Baoyu",
""
]
]
| TITLE: DropletVideo: A Dataset and Approach to Explore Integral Spatio-Temporal
Consistent Video Generation
ABSTRACT: Spatio-temporal consistency is a critical research topic in video generation.
A qualified generated video segment must ensure plot plausibility and coherence
while maintaining visual consistency of objects and scenes across varying
viewpoints. Prior research, especially in open-source projects, primarily
focuses on either temporal or spatial consistency, or their basic combination,
such as appending a description of a camera movement after a prompt without
constraining the outcomes of this movement. However, camera movement may
introduce new objects to the scene or eliminate existing ones, thereby
overlaying and affecting the preceding narrative. Especially in videos with
numerous camera movements, the interplay between multiple plots becomes
increasingly complex. This paper introduces and examines integral
spatio-temporal consistency, considering the synergy between plot progression
and camera techniques, and the long-term impact of prior content on subsequent
generation. Our research encompasses dataset construction through to the
development of the model. Initially, we constructed a DropletVideo-10M dataset,
which comprises 10 million videos featuring dynamic camera motion and object
actions. Each video is annotated with an average caption of 206 words,
detailing various camera movements and plot developments. Following this, we
developed and trained the DropletVideo model, which excels in preserving
spatio-temporal coherence during video generation. The DropletVideo dataset and
model are accessible at https://dropletx.github.io.
| new_dataset | 0.958069 |
2503.06054 | Suvendu Mohanty | Suvendu Mohanty | Fine-Grained Bias Detection in LLM: Enhancing detection mechanisms for
nuanced biases | Bias detection, Large Language Models, nuanced biases, fine-grained
mechanisms, model transparency, ethical AI | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in Artificial Intelligence, particularly in Large
Language Models (LLMs), have transformed natural language processing by
improving generative capabilities. However, detecting biases embedded within
these models remains a challenge. Subtle biases can propagate misinformation,
influence decision-making, and reinforce stereotypes, raising ethical concerns.
This study presents a detection framework to identify nuanced biases in LLMs.
The approach integrates contextual analysis, interpretability via attention
mechanisms, and counterfactual data augmentation to capture hidden biases
across linguistic contexts. The methodology employs contrastive prompts and
synthetic datasets to analyze model behaviour across cultural, ideological, and
demographic scenarios.
Quantitative analysis using benchmark datasets and qualitative assessments
through expert reviews validate the effectiveness of the framework. Results
show improvements in detecting subtle biases compared to conventional methods,
which often fail to highlight disparities in model responses to race, gender,
and socio-political contexts. The framework also identifies biases arising from
imbalances in training data and model architectures. Continuous user feedback
ensures adaptability and refinement. This research underscores the importance
of proactive bias mitigation strategies and calls for collaboration between
policymakers, AI developers, and regulators. The proposed detection mechanisms
enhance model transparency and support responsible LLM deployment in sensitive
applications such as education, legal systems, and healthcare. Future work will
focus on real-time bias monitoring and cross-linguistic generalization to
improve fairness and inclusivity in AI-driven communication tools.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 04:43:01 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Mohanty",
"Suvendu",
""
]
]
| TITLE: Fine-Grained Bias Detection in LLM: Enhancing detection mechanisms for
nuanced biases
ABSTRACT: Recent advancements in Artificial Intelligence, particularly in Large
Language Models (LLMs), have transformed natural language processing by
improving generative capabilities. However, detecting biases embedded within
these models remains a challenge. Subtle biases can propagate misinformation,
influence decision-making, and reinforce stereotypes, raising ethical concerns.
This study presents a detection framework to identify nuanced biases in LLMs.
The approach integrates contextual analysis, interpretability via attention
mechanisms, and counterfactual data augmentation to capture hidden biases
across linguistic contexts. The methodology employs contrastive prompts and
synthetic datasets to analyze model behaviour across cultural, ideological, and
demographic scenarios.
Quantitative analysis using benchmark datasets and qualitative assessments
through expert reviews validate the effectiveness of the framework. Results
show improvements in detecting subtle biases compared to conventional methods,
which often fail to highlight disparities in model responses to race, gender,
and socio-political contexts. The framework also identifies biases arising from
imbalances in training data and model architectures. Continuous user feedback
ensures adaptability and refinement. This research underscores the importance
of proactive bias mitigation strategies and calls for collaboration between
policymakers, AI developers, and regulators. The proposed detection mechanisms
enhance model transparency and support responsible LLM deployment in sensitive
applications such as education, legal systems, and healthcare. Future work will
focus on real-time bias monitoring and cross-linguistic generalization to
improve fairness and inclusivity in AI-driven communication tools.
| no_new_dataset | 0.94428 |
2503.06059 | Miguel Contreras | Miguel Contreras, Jessica Sena, Andrea Davidson, Jiaqing Zhang, Tezcan
Ozrazgat-Baslanti, Yuanfang Ren, Ziyuan Guan, Jeremy Balch, Tyler Loftus,
Subhash Nerella, Azra Bihorac, Parisa Rashidi | MANDARIN: Mixture-of-Experts Framework for Dynamic Delirium and Coma
Prediction in ICU Patients: Development and Validation of an Acute Brain
Dysfunction Prediction Model | null | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Acute brain dysfunction (ABD) is a common, severe ICU complication,
presenting as delirium or coma and leading to prolonged stays, increased
mortality, and cognitive decline. Traditional screening tools like the Glasgow
Coma Scale (GCS), Confusion Assessment Method (CAM), and Richmond
Agitation-Sedation Scale (RASS) rely on intermittent assessments, causing
delays and inconsistencies. In this study, we propose MANDARIN
(Mixture-of-Experts Framework for Dynamic Delirium and Coma Prediction in ICU
Patients), a 1.5M-parameter mixture-of-experts neural network to predict ABD in
real-time among ICU patients. The model integrates temporal and static data
from the ICU to predict the brain status in the next 12 to 72 hours, using a
multi-branch approach to account for current brain status. The MANDARIN model
was trained on data from 92,734 patients (132,997 ICU admissions) from 2
hospitals between 2008-2019 and validated externally on data from 11,719
patients (14,519 ICU admissions) from 15 hospitals and prospectively on data
from 304 patients (503 ICU admissions) from one hospital in 2021-2024. Three
datasets were used: the University of Florida Health (UFH) dataset, the
electronic ICU Collaborative Research Database (eICU), and the Medical
Information Mart for Intensive Care (MIMIC)-IV dataset. MANDARIN significantly
outperforms the baseline neurological assessment scores (GCS, CAM, and RASS)
for delirium prediction in both external (AUROC 75.5% CI: 74.2%-76.8% vs 68.3%
CI: 66.9%-69.5%) and prospective (AUROC 82.0% CI: 74.8%-89.2% vs 72.7% CI:
65.5%-81.0%) cohorts, as well as for coma prediction (external AUROC 87.3% CI:
85.9%-89.0% vs 72.8% CI: 70.6%-74.9%, and prospective AUROC 93.4% CI:
88.5%-97.9% vs 67.7% CI: 57.7%-76.8%) with a 12-hour lead time. This tool has
the potential to assist clinicians in decision-making by continuously
monitoring the brain status of patients in the ICU.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 04:56:41 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Contreras",
"Miguel",
""
],
[
"Sena",
"Jessica",
""
],
[
"Davidson",
"Andrea",
""
],
[
"Zhang",
"Jiaqing",
""
],
[
"Ozrazgat-Baslanti",
"Tezcan",
""
],
[
"Ren",
"Yuanfang",
""
],
[
"Guan",
"Ziyuan",
""
],
[
"Balch",
"Jeremy",
""
],
[
"Loftus",
"Tyler",
""
],
[
"Nerella",
"Subhash",
""
],
[
"Bihorac",
"Azra",
""
],
[
"Rashidi",
"Parisa",
""
]
]
| TITLE: MANDARIN: Mixture-of-Experts Framework for Dynamic Delirium and Coma
Prediction in ICU Patients: Development and Validation of an Acute Brain
Dysfunction Prediction Model
ABSTRACT: Acute brain dysfunction (ABD) is a common, severe ICU complication,
presenting as delirium or coma and leading to prolonged stays, increased
mortality, and cognitive decline. Traditional screening tools like the Glasgow
Coma Scale (GCS), Confusion Assessment Method (CAM), and Richmond
Agitation-Sedation Scale (RASS) rely on intermittent assessments, causing
delays and inconsistencies. In this study, we propose MANDARIN
(Mixture-of-Experts Framework for Dynamic Delirium and Coma Prediction in ICU
Patients), a 1.5M-parameter mixture-of-experts neural network to predict ABD in
real-time among ICU patients. The model integrates temporal and static data
from the ICU to predict the brain status in the next 12 to 72 hours, using a
multi-branch approach to account for current brain status. The MANDARIN model
was trained on data from 92,734 patients (132,997 ICU admissions) from 2
hospitals between 2008-2019 and validated externally on data from 11,719
patients (14,519 ICU admissions) from 15 hospitals and prospectively on data
from 304 patients (503 ICU admissions) from one hospital in 2021-2024. Three
datasets were used: the University of Florida Health (UFH) dataset, the
electronic ICU Collaborative Research Database (eICU), and the Medical
Information Mart for Intensive Care (MIMIC)-IV dataset. MANDARIN significantly
outperforms the baseline neurological assessment scores (GCS, CAM, and RASS)
for delirium prediction in both external (AUROC 75.5% CI: 74.2%-76.8% vs 68.3%
CI: 66.9%-69.5%) and prospective (AUROC 82.0% CI: 74.8%-89.2% vs 72.7% CI:
65.5%-81.0%) cohorts, as well as for coma prediction (external AUROC 87.3% CI:
85.9%-89.0% vs 72.8% CI: 70.6%-74.9%, and prospective AUROC 93.4% CI:
88.5%-97.9% vs 67.7% CI: 57.7%-76.8%) with a 12-hour lead time. This tool has
the potential to assist clinicians in decision-making by continuously
monitoring the brain status of patients in the ICU.
| no_new_dataset | 0.946101 |
2503.06060 | Md Sadman Sakib | Md Sadman Sakib and Yu Sun | STAR: A Foundation Model-driven Framework for Robust Task Planning and
Failure Recovery in Robotic Systems | null | null | null | null | cs.RO cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Modern robotic systems, deployed across domains from industrial automation to
domestic assistance, face a critical challenge: executing tasks with precision
and adaptability in dynamic, unpredictable environments. To address this, we
propose STAR (Smart Task Adaptation and Recovery), a novel framework that
synergizes Foundation Models (FMs) with dynamically expanding Knowledge Graphs
(KGs) to enable resilient task planning and autonomous failure recovery. While
FMs offer remarkable generalization and contextual reasoning, their
limitations, including computational inefficiency, hallucinations, and output
inconsistencies hinder reliable deployment. STAR mitigates these issues by
embedding learned knowledge into structured, reusable KGs, which streamline
information retrieval, reduce redundant FM computations, and provide precise,
scenario-specific insights. The framework leverages FM-driven reasoning to
diagnose failures, generate context-aware recovery strategies, and execute
corrective actions without human intervention or system restarts. Unlike
conventional approaches that rely on rigid protocols, STAR dynamically expands
its KG with experiential knowledge, ensuring continuous adaptation to novel
scenarios. To evaluate the effectiveness of this approach, we developed a
comprehensive dataset that includes various robotic tasks and failure
scenarios. Through extensive experimentation, STAR demonstrated an 86% task
planning accuracy and 78% recovery success rate, showing significant
improvements over baseline methods. The framework's ability to continuously
learn from experience while maintaining structured knowledge representation
makes it particularly suitable for long-term deployment in real-world
applications.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 05:05:21 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Sakib",
"Md Sadman",
""
],
[
"Sun",
"Yu",
""
]
]
| TITLE: STAR: A Foundation Model-driven Framework for Robust Task Planning and
Failure Recovery in Robotic Systems
ABSTRACT: Modern robotic systems, deployed across domains from industrial automation to
domestic assistance, face a critical challenge: executing tasks with precision
and adaptability in dynamic, unpredictable environments. To address this, we
propose STAR (Smart Task Adaptation and Recovery), a novel framework that
synergizes Foundation Models (FMs) with dynamically expanding Knowledge Graphs
(KGs) to enable resilient task planning and autonomous failure recovery. While
FMs offer remarkable generalization and contextual reasoning, their
limitations, including computational inefficiency, hallucinations, and output
inconsistencies hinder reliable deployment. STAR mitigates these issues by
embedding learned knowledge into structured, reusable KGs, which streamline
information retrieval, reduce redundant FM computations, and provide precise,
scenario-specific insights. The framework leverages FM-driven reasoning to
diagnose failures, generate context-aware recovery strategies, and execute
corrective actions without human intervention or system restarts. Unlike
conventional approaches that rely on rigid protocols, STAR dynamically expands
its KG with experiential knowledge, ensuring continuous adaptation to novel
scenarios. To evaluate the effectiveness of this approach, we developed a
comprehensive dataset that includes various robotic tasks and failure
scenarios. Through extensive experimentation, STAR demonstrated an 86% task
planning accuracy and 78% recovery success rate, showing significant
improvements over baseline methods. The framework's ability to continuously
learn from experience while maintaining structured knowledge representation
makes it particularly suitable for long-term deployment in real-world
applications.
| new_dataset | 0.954265 |
2503.06064 | Wenzhuo Du | Wenzhuo Du, Gerun Wang, Guancheng Chen, Hang Zhao, Xin Li, Jian Gao | A Novel Trustworthy Video Summarization Algorithm Through a Mixture of
LoRA Experts | null | null | null | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | With the exponential growth of user-generated content on video-sharing
platforms, the challenge of facilitating efficient searching and browsing of
videos has garnered significant attention. To enhance users' ability to swiftly
locate and review pertinent videos, the creation of concise and informative
video summaries has become increasingly important. Video-llama is an effective
tool for generating video summarization, but it cannot effectively unify and
optimize the modeling of temporal and spatial features and requires a lot of
computational resources and time. Therefore, we propose MiLoRA-ViSum to more
efficiently capture complex temporal dynamics and spatial relationships
inherent in video data and to control the number of parameters for training. By
extending traditional Low-Rank Adaptation (LoRA) into a sophisticated
mixture-of-experts paradigm, MiLoRA-ViSum incorporates a dual temporal-spatial
adaptation mechanism tailored specifically for video summarization tasks. This
approach dynamically integrates specialized LoRA experts, each fine-tuned to
address distinct temporal or spatial dimensions. Extensive evaluations of the
VideoXum and ActivityNet datasets demonstrate that MiLoRA-ViSum achieves the
best summarization performance compared to state-of-the-art models, while
maintaining significantly lower computational costs. The proposed
mixture-of-experts strategy, combined with the dual adaptation mechanism,
highlights the model's potential to enhance video summarization capabilities,
particularly in large-scale applications requiring both efficiency and
precision.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 05:20:52 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Du",
"Wenzhuo",
""
],
[
"Wang",
"Gerun",
""
],
[
"Chen",
"Guancheng",
""
],
[
"Zhao",
"Hang",
""
],
[
"Li",
"Xin",
""
],
[
"Gao",
"Jian",
""
]
]
| TITLE: A Novel Trustworthy Video Summarization Algorithm Through a Mixture of
LoRA Experts
ABSTRACT: With the exponential growth of user-generated content on video-sharing
platforms, the challenge of facilitating efficient searching and browsing of
videos has garnered significant attention. To enhance users' ability to swiftly
locate and review pertinent videos, the creation of concise and informative
video summaries has become increasingly important. Video-llama is an effective
tool for generating video summarization, but it cannot effectively unify and
optimize the modeling of temporal and spatial features and requires a lot of
computational resources and time. Therefore, we propose MiLoRA-ViSum to more
efficiently capture complex temporal dynamics and spatial relationships
inherent in video data and to control the number of parameters for training. By
extending traditional Low-Rank Adaptation (LoRA) into a sophisticated
mixture-of-experts paradigm, MiLoRA-ViSum incorporates a dual temporal-spatial
adaptation mechanism tailored specifically for video summarization tasks. This
approach dynamically integrates specialized LoRA experts, each fine-tuned to
address distinct temporal or spatial dimensions. Extensive evaluations of the
VideoXum and ActivityNet datasets demonstrate that MiLoRA-ViSum achieves the
best summarization performance compared to state-of-the-art models, while
maintaining significantly lower computational costs. The proposed
mixture-of-experts strategy, combined with the dual adaptation mechanism,
highlights the model's potential to enhance video summarization capabilities,
particularly in large-scale applications requiring both efficiency and
precision.
| no_new_dataset | 0.947137 |
2503.06066 | Xin-Jian Xu | Murong Yang, Shihui Ying, Xin-Jian Xu, Yue Gao | Multi-view Spectral Clustering on the Grassmannian Manifold With
Hypergraph Representation | 14 pages, 6 figures, 4 tables | null | null | null | cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph-based multi-view spectral clustering methods have achieved notable
progress recently, yet they often fall short in either oversimplifying pairwise
relationships or struggling with inefficient spectral decompositions in
high-dimensional Euclidean spaces. In this paper, we introduce a novel approach
that begins to generate hypergraphs by leveraging sparse representation
learning from data points. Based on the generated hypergraph, we propose an
optimization function with orthogonality constraints for multi-view hypergraph
spectral clustering, which incorporates spectral clustering for each view and
ensures consistency across different views. In Euclidean space, solving the
orthogonality-constrained optimization problem may yield local maxima and
approximation errors. Innovatively, we transform this problem into an
unconstrained form on the Grassmannian manifold. Finally, we devise an
alternating iterative Riemannian optimization algorithm to solve the problem.
To validate the effectiveness of the proposed algorithm, we test it on four
real-world multi-view datasets and compare its performance with seven
state-of-the-art multi-view clustering algorithms. The experimental results
demonstrate that our method outperforms the baselines in terms of clustering
performance due to its superior low-dimensional and resilient feature
representation.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 05:26:53 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Yang",
"Murong",
""
],
[
"Ying",
"Shihui",
""
],
[
"Xu",
"Xin-Jian",
""
],
[
"Gao",
"Yue",
""
]
]
| TITLE: Multi-view Spectral Clustering on the Grassmannian Manifold With
Hypergraph Representation
ABSTRACT: Graph-based multi-view spectral clustering methods have achieved notable
progress recently, yet they often fall short in either oversimplifying pairwise
relationships or struggling with inefficient spectral decompositions in
high-dimensional Euclidean spaces. In this paper, we introduce a novel approach
that begins to generate hypergraphs by leveraging sparse representation
learning from data points. Based on the generated hypergraph, we propose an
optimization function with orthogonality constraints for multi-view hypergraph
spectral clustering, which incorporates spectral clustering for each view and
ensures consistency across different views. In Euclidean space, solving the
orthogonality-constrained optimization problem may yield local maxima and
approximation errors. Innovatively, we transform this problem into an
unconstrained form on the Grassmannian manifold. Finally, we devise an
alternating iterative Riemannian optimization algorithm to solve the problem.
To validate the effectiveness of the proposed algorithm, we test it on four
real-world multi-view datasets and compare its performance with seven
state-of-the-art multi-view clustering algorithms. The experimental results
demonstrate that our method outperforms the baselines in terms of clustering
performance due to its superior low-dimensional and resilient feature
representation.
| no_new_dataset | 0.94545 |
2503.06072 | Guiyao Tie | Guiyao Tie, Zeli Zhao, Dingjie Song, Fuyang Wei, Rong Zhou, Yurou Dai,
Wen Yin, Zhejian Yang, Jiangyue Yan, Yao Su, Zhenhan Dai, Yifeng Xie, Yihan
Cao, Lichao Sun, Pan Zhou, Lifang He, Hechang Chen, Yu Zhang, Qingsong Wen,
Tianming Liu, Neil Zhenqiang Gong, Jiliang Tang, Caiming Xiong, Heng Ji,
Philip S. Yu, Jianfeng Gao | A Survey on Post-training of Large Language Models | 87 pages, 21 figures, 9 tables | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The emergence of Large Language Models (LLMs) has fundamentally transformed
natural language processing, making them indispensable across domains ranging
from conversational systems to scientific exploration. However, their
pre-trained architectures often reveal limitations in specialized contexts,
including restricted reasoning capacities, ethical uncertainties, and
suboptimal domain-specific performance. These challenges necessitate advanced
post-training language models (PoLMs) to address these shortcomings, such as
OpenAI-o1/o3 and DeepSeek-R1 (collectively known as Large Reasoning Models, or
LRMs). This paper presents the first comprehensive survey of PoLMs,
systematically tracing their evolution across five core paradigms: Fine-tuning,
which enhances task-specific accuracy; Alignment, which ensures alignment with
human preferences; Reasoning, which advances multi-step inference despite
challenges in reward design; Efficiency, which optimizes resource utilization
amidst increasing complexity; and Integration and Adaptation, which extend
capabilities across diverse modalities while addressing coherence issues.
Charting progress from ChatGPT's foundational alignment strategies to
DeepSeek-R1's innovative reasoning advancements, we illustrate how PoLMs
leverage datasets to mitigate biases, deepen reasoning capabilities, and
enhance domain adaptability. Our contributions include a pioneering synthesis
of PoLM evolution, a structured taxonomy categorizing techniques and datasets,
and a strategic agenda emphasizing the role of LRMs in improving reasoning
proficiency and domain flexibility. As the first survey of its scope, this work
consolidates recent PoLM advancements and establishes a rigorous intellectual
framework for future research, fostering the development of LLMs that excel in
precision, ethical robustness, and versatility across scientific and societal
applications.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 05:41:42 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Tie",
"Guiyao",
""
],
[
"Zhao",
"Zeli",
""
],
[
"Song",
"Dingjie",
""
],
[
"Wei",
"Fuyang",
""
],
[
"Zhou",
"Rong",
""
],
[
"Dai",
"Yurou",
""
],
[
"Yin",
"Wen",
""
],
[
"Yang",
"Zhejian",
""
],
[
"Yan",
"Jiangyue",
""
],
[
"Su",
"Yao",
""
],
[
"Dai",
"Zhenhan",
""
],
[
"Xie",
"Yifeng",
""
],
[
"Cao",
"Yihan",
""
],
[
"Sun",
"Lichao",
""
],
[
"Zhou",
"Pan",
""
],
[
"He",
"Lifang",
""
],
[
"Chen",
"Hechang",
""
],
[
"Zhang",
"Yu",
""
],
[
"Wen",
"Qingsong",
""
],
[
"Liu",
"Tianming",
""
],
[
"Gong",
"Neil Zhenqiang",
""
],
[
"Tang",
"Jiliang",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Ji",
"Heng",
""
],
[
"Yu",
"Philip S.",
""
],
[
"Gao",
"Jianfeng",
""
]
]
| TITLE: A Survey on Post-training of Large Language Models
ABSTRACT: The emergence of Large Language Models (LLMs) has fundamentally transformed
natural language processing, making them indispensable across domains ranging
from conversational systems to scientific exploration. However, their
pre-trained architectures often reveal limitations in specialized contexts,
including restricted reasoning capacities, ethical uncertainties, and
suboptimal domain-specific performance. These challenges necessitate advanced
post-training language models (PoLMs) to address these shortcomings, such as
OpenAI-o1/o3 and DeepSeek-R1 (collectively known as Large Reasoning Models, or
LRMs). This paper presents the first comprehensive survey of PoLMs,
systematically tracing their evolution across five core paradigms: Fine-tuning,
which enhances task-specific accuracy; Alignment, which ensures alignment with
human preferences; Reasoning, which advances multi-step inference despite
challenges in reward design; Efficiency, which optimizes resource utilization
amidst increasing complexity; and Integration and Adaptation, which extend
capabilities across diverse modalities while addressing coherence issues.
Charting progress from ChatGPT's foundational alignment strategies to
DeepSeek-R1's innovative reasoning advancements, we illustrate how PoLMs
leverage datasets to mitigate biases, deepen reasoning capabilities, and
enhance domain adaptability. Our contributions include a pioneering synthesis
of PoLM evolution, a structured taxonomy categorizing techniques and datasets,
and a strategic agenda emphasizing the role of LRMs in improving reasoning
proficiency and domain flexibility. As the first survey of its scope, this work
consolidates recent PoLM advancements and establishes a rigorous intellectual
framework for future research, fostering the development of LLMs that excel in
precision, ethical robustness, and versatility across scientific and societal
applications.
| no_new_dataset | 0.943556 |
2503.06085 | You Zhang | You Zhang, Jin Wang, Liang-Chih Yu, Dan Xu, Xuejie Zhang | Multi-Attribute Multi-Grained Adaptation of Pre-Trained Language Models
for Text Understanding from Bayesian Perspective | Extended version accepted by AAAI 2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current neural networks often employ multi-domain-learning or
attribute-injecting mechanisms to incorporate non-independent and identically
distributed (non-IID) information for text understanding tasks by capturing
individual characteristics and the relationships among samples. However, the
extent of the impact of non-IID information and how these methods affect
pre-trained language models (PLMs) remains unclear. This study revisits the
assumption that non-IID information enhances PLMs to achieve performance
improvements from a Bayesian perspective, which unearths and integrates non-IID
and IID features. Furthermore, we proposed a multi-attribute multi-grained
framework for PLM adaptations (M2A), which combines multi-attribute and
multi-grained views to mitigate uncertainty in a lightweight manner. We
evaluate M2A through prevalent text-understanding datasets and demonstrate its
superior performance, mainly when data are implicitly non-IID, and PLMs scale
larger.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 06:17:07 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"You",
""
],
[
"Wang",
"Jin",
""
],
[
"Yu",
"Liang-Chih",
""
],
[
"Xu",
"Dan",
""
],
[
"Zhang",
"Xuejie",
""
]
]
| TITLE: Multi-Attribute Multi-Grained Adaptation of Pre-Trained Language Models
for Text Understanding from Bayesian Perspective
ABSTRACT: Current neural networks often employ multi-domain-learning or
attribute-injecting mechanisms to incorporate non-independent and identically
distributed (non-IID) information for text understanding tasks by capturing
individual characteristics and the relationships among samples. However, the
extent of the impact of non-IID information and how these methods affect
pre-trained language models (PLMs) remains unclear. This study revisits the
assumption that non-IID information enhances PLMs to achieve performance
improvements from a Bayesian perspective, which unearths and integrates non-IID
and IID features. Furthermore, we proposed a multi-attribute multi-grained
framework for PLM adaptations (M2A), which combines multi-attribute and
multi-grained views to mitigate uncertainty in a lightweight manner. We
evaluate M2A through prevalent text-understanding datasets and demonstrate its
superior performance, mainly when data are implicitly non-IID, and PLMs scale
larger.
| no_new_dataset | 0.942718 |
2503.06089 | David Jeong | David C. Jeong, Aditya Puranik, James Vong, Vrushabh Abhijit
Deogirikar, Ryan Fell, Julianna Dietrich, Maria Kyrarini, Christopher Kitts | Fish2Mesh Transformer: 3D Human Mesh Recovery from Egocentric Vision | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Egocentric human body estimation allows for the inference of user body pose
and shape from a wearable camera's first-person perspective. Although research
has used pose estimation techniques to overcome self-occlusions and image
distortions caused by head-mounted fisheye images, similar advances in 3D human
mesh recovery (HMR) techniques have been limited. We introduce Fish2Mesh, a
fisheye-aware transformer-based model designed for 3D egocentric human mesh
recovery. We propose an egocentric position embedding block to generate an
ego-specific position table for the Swin Transformer to reduce fisheye image
distortion. Our model utilizes multi-task heads for SMPL parametric regression
and camera translations, estimating 3D and 2D joints as auxiliary loss to
support model training. To address the scarcity of egocentric camera data, we
create a training dataset by employing the pre-trained 4D-Human model and
third-person cameras for weak supervision. Our experiments demonstrate that
Fish2Mesh outperforms previous state-of-the-art 3D HMR models.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 06:34:49 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Jeong",
"David C.",
""
],
[
"Puranik",
"Aditya",
""
],
[
"Vong",
"James",
""
],
[
"Deogirikar",
"Vrushabh Abhijit",
""
],
[
"Fell",
"Ryan",
""
],
[
"Dietrich",
"Julianna",
""
],
[
"Kyrarini",
"Maria",
""
],
[
"Kitts",
"Christopher",
""
]
]
| TITLE: Fish2Mesh Transformer: 3D Human Mesh Recovery from Egocentric Vision
ABSTRACT: Egocentric human body estimation allows for the inference of user body pose
and shape from a wearable camera's first-person perspective. Although research
has used pose estimation techniques to overcome self-occlusions and image
distortions caused by head-mounted fisheye images, similar advances in 3D human
mesh recovery (HMR) techniques have been limited. We introduce Fish2Mesh, a
fisheye-aware transformer-based model designed for 3D egocentric human mesh
recovery. We propose an egocentric position embedding block to generate an
ego-specific position table for the Swin Transformer to reduce fisheye image
distortion. Our model utilizes multi-task heads for SMPL parametric regression
and camera translations, estimating 3D and 2D joints as auxiliary loss to
support model training. To address the scarcity of egocentric camera data, we
create a training dataset by employing the pre-trained 4D-Human model and
third-person cameras for weak supervision. Our experiments demonstrate that
Fish2Mesh outperforms previous state-of-the-art 3D HMR models.
| no_new_dataset | 0.911574 |
2503.06092 | Lunchen Xie | Lunchen Xie, Eugenio Lomurno, Matteo Gambella, Danilo Ardagna, Manual
Roveri, Matteo Matteucci, Qingjiang Shi | ZO-DARTS++: An Efficient and Size-Variable Zeroth-Order Neural
Architecture Search Algorithm | 14 pages, 8 figures | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differentiable Neural Architecture Search (NAS) provides a promising avenue
for automating the complex design of deep learning (DL) models. However,
current differentiable NAS methods often face constraints in efficiency,
operation selection, and adaptability under varying resource limitations. We
introduce ZO-DARTS++, a novel NAS method that effectively balances performance
and resource constraints. By integrating a zeroth-order approximation for
efficient gradient handling, employing a sparsemax function with temperature
annealing for clearer and more interpretable architecture distributions, and
adopting a size-variable search scheme for generating compact yet accurate
architectures, ZO-DARTS++ establishes a new balance between model complexity
and performance. In extensive tests on medical imaging datasets, ZO-DARTS++
improves the average accuracy by up to 1.8\% over standard DARTS-based methods
and shortens search time by approximately 38.6\%. Additionally, its
resource-constrained variants can reduce the number of parameters by more than
35\% while maintaining competitive accuracy levels. Thus, ZO-DARTS++ offers a
versatile and efficient framework for generating high-quality, resource-aware
DL models suitable for real-world medical applications.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 06:43:33 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Xie",
"Lunchen",
""
],
[
"Lomurno",
"Eugenio",
""
],
[
"Gambella",
"Matteo",
""
],
[
"Ardagna",
"Danilo",
""
],
[
"Roveri",
"Manual",
""
],
[
"Matteucci",
"Matteo",
""
],
[
"Shi",
"Qingjiang",
""
]
]
| TITLE: ZO-DARTS++: An Efficient and Size-Variable Zeroth-Order Neural
Architecture Search Algorithm
ABSTRACT: Differentiable Neural Architecture Search (NAS) provides a promising avenue
for automating the complex design of deep learning (DL) models. However,
current differentiable NAS methods often face constraints in efficiency,
operation selection, and adaptability under varying resource limitations. We
introduce ZO-DARTS++, a novel NAS method that effectively balances performance
and resource constraints. By integrating a zeroth-order approximation for
efficient gradient handling, employing a sparsemax function with temperature
annealing for clearer and more interpretable architecture distributions, and
adopting a size-variable search scheme for generating compact yet accurate
architectures, ZO-DARTS++ establishes a new balance between model complexity
and performance. In extensive tests on medical imaging datasets, ZO-DARTS++
improves the average accuracy by up to 1.8\% over standard DARTS-based methods
and shortens search time by approximately 38.6\%. Additionally, its
resource-constrained variants can reduce the number of parameters by more than
35\% while maintaining competitive accuracy levels. Thus, ZO-DARTS++ offers a
versatile and efficient framework for generating high-quality, resource-aware
DL models suitable for real-world medical applications.
| no_new_dataset | 0.947284 |
2503.06096 | Nicholas Kuo | Nicholas I-Hsien Kuo, Blanca Gallego, Louisa Jorm | Attention-Based Synthetic Data Generation for Calibration-Enhanced
Survival Analysis: A Case Study for Chronic Kidney Disease Using Electronic
Health Records | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Access to real-world healthcare data is limited by stringent privacy
regulations and data imbalances, hindering advancements in research and
clinical applications. Synthetic data presents a promising solution, yet
existing methods often fail to ensure the realism, utility, and calibration
essential for robust survival analysis. Here, we introduce Masked Clinical
Modelling (MCM), an attention-based framework capable of generating
high-fidelity synthetic datasets that preserve critical clinical insights, such
as hazard ratios, while enhancing survival model calibration. Unlike
traditional statistical methods like SMOTE and machine learning models such as
VAEs, MCM supports both standalone dataset synthesis for reproducibility and
conditional simulation for targeted augmentation, addressing diverse research
needs. Validated on a chronic kidney disease electronic health records dataset,
MCM reduced the general calibration loss over the entire dataset by 15%; and
MCM reduced a mean calibration loss by 9% across 10 clinically stratified
subgroups, outperforming 15 alternative methods. By bridging data accessibility
with translational utility, MCM advances the precision of healthcare models,
promoting more efficient use of scarce healthcare resources.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 06:58:33 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kuo",
"Nicholas I-Hsien",
""
],
[
"Gallego",
"Blanca",
""
],
[
"Jorm",
"Louisa",
""
]
]
| TITLE: Attention-Based Synthetic Data Generation for Calibration-Enhanced
Survival Analysis: A Case Study for Chronic Kidney Disease Using Electronic
Health Records
ABSTRACT: Access to real-world healthcare data is limited by stringent privacy
regulations and data imbalances, hindering advancements in research and
clinical applications. Synthetic data presents a promising solution, yet
existing methods often fail to ensure the realism, utility, and calibration
essential for robust survival analysis. Here, we introduce Masked Clinical
Modelling (MCM), an attention-based framework capable of generating
high-fidelity synthetic datasets that preserve critical clinical insights, such
as hazard ratios, while enhancing survival model calibration. Unlike
traditional statistical methods like SMOTE and machine learning models such as
VAEs, MCM supports both standalone dataset synthesis for reproducibility and
conditional simulation for targeted augmentation, addressing diverse research
needs. Validated on a chronic kidney disease electronic health records dataset,
MCM reduced the general calibration loss over the entire dataset by 15%; and
MCM reduced a mean calibration loss by 9% across 10 clinically stratified
subgroups, outperforming 15 alternative methods. By bridging data accessibility
with translational utility, MCM advances the precision of healthcare models,
promoting more efficient use of scarce healthcare resources.
| no_new_dataset | 0.9357 |
2503.06104 | Syed Sajid Ullah | Syed Sajid Ullah, Li Gang, Mudassir Riaz, Ahsan Ashfaq, Salman Khan,
Sajawal Khan | Handwritten Digit Recognition: An Ensemble-Based Approach for Superior
Performance | 11 pages, 6 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Handwritten digit recognition remains a fundamental challenge in computer
vision, with applications ranging from postal code reading to document
digitization. This paper presents an ensemble-based approach that combines
Convolutional Neural Networks (CNNs) with traditional machine learning
techniques to improve recognition accuracy and robustness. We evaluate our
method on the MNIST dataset, comprising 70,000 handwritten digit images. Our
hybrid model, which uses CNNs for feature extraction and Support Vector
Machines (SVMs) for classification, achieves an accuracy of 99.30%. We also
explore the effectiveness of data augmentation and various ensemble techniques
in enhancing model performance. Our results demonstrate that this approach not
only achieves high accuracy but also shows improved generalization across
diverse handwriting styles. The findings contribute to the development of more
reliable handwritten digit recognition systems and highlight the potential of
combining deep learning with traditional machine learning methods in pattern
recognition tasks.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 07:09:49 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ullah",
"Syed Sajid",
""
],
[
"Gang",
"Li",
""
],
[
"Riaz",
"Mudassir",
""
],
[
"Ashfaq",
"Ahsan",
""
],
[
"Khan",
"Salman",
""
],
[
"Khan",
"Sajawal",
""
]
]
| TITLE: Handwritten Digit Recognition: An Ensemble-Based Approach for Superior
Performance
ABSTRACT: Handwritten digit recognition remains a fundamental challenge in computer
vision, with applications ranging from postal code reading to document
digitization. This paper presents an ensemble-based approach that combines
Convolutional Neural Networks (CNNs) with traditional machine learning
techniques to improve recognition accuracy and robustness. We evaluate our
method on the MNIST dataset, comprising 70,000 handwritten digit images. Our
hybrid model, which uses CNNs for feature extraction and Support Vector
Machines (SVMs) for classification, achieves an accuracy of 99.30%. We also
explore the effectiveness of data augmentation and various ensemble techniques
in enhancing model performance. Our results demonstrate that this approach not
only achieves high accuracy but also shows improved generalization across
diverse handwriting styles. The findings contribute to the development of more
reliable handwritten digit recognition systems and highlight the potential of
combining deep learning with traditional machine learning methods in pattern
recognition tasks.
| no_new_dataset | 0.9455 |
2503.06106 | Kuanghong Liu | Kuanghong Liu, Jin Wang, Kangjian He, Dan Xu, Xuejie Zhang | Vision-aware Multimodal Prompt Tuning for Uploadable Multi-source
Few-shot Domain Adaptation | Accepted by AAAI 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conventional multi-source domain few-shot adaptation (MFDA) faces the
challenge of further reducing the load on edge-side devices in low-resource
scenarios. Considering the native language-supervised advantage of CLIP and the
plug-and-play nature of prompt to transfer CLIP efficiently, this paper
introduces an uploadable multi-source few-shot domain adaptation (UMFDA)
schema. It belongs to a decentralized edge collaborative learning in the
edge-side models that must maintain a low computational load. And only a
limited amount of annotations in source domain data is provided, with most of
the data being unannotated. Further, this paper proposes a vision-aware
multimodal prompt tuning framework (VAMP) under the decentralized schema, where
the vision-aware prompt guides the text domain-specific prompt to maintain
semantic discriminability and perceive the domain information. The cross-modal
semantic and domain distribution alignment losses optimize each edge-side
model, while text classifier consistency and semantic diversity losses promote
collaborative learning among edge-side models. Extensive experiments were
conducted on OfficeHome and DomainNet datasets to demonstrate the effectiveness
of the proposed VAMP in the UMFDA, which outperformed the previous prompt
tuning methods.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 07:17:06 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liu",
"Kuanghong",
""
],
[
"Wang",
"Jin",
""
],
[
"He",
"Kangjian",
""
],
[
"Xu",
"Dan",
""
],
[
"Zhang",
"Xuejie",
""
]
]
| TITLE: Vision-aware Multimodal Prompt Tuning for Uploadable Multi-source
Few-shot Domain Adaptation
ABSTRACT: Conventional multi-source domain few-shot adaptation (MFDA) faces the
challenge of further reducing the load on edge-side devices in low-resource
scenarios. Considering the native language-supervised advantage of CLIP and the
plug-and-play nature of prompt to transfer CLIP efficiently, this paper
introduces an uploadable multi-source few-shot domain adaptation (UMFDA)
schema. It belongs to a decentralized edge collaborative learning in the
edge-side models that must maintain a low computational load. And only a
limited amount of annotations in source domain data is provided, with most of
the data being unannotated. Further, this paper proposes a vision-aware
multimodal prompt tuning framework (VAMP) under the decentralized schema, where
the vision-aware prompt guides the text domain-specific prompt to maintain
semantic discriminability and perceive the domain information. The cross-modal
semantic and domain distribution alignment losses optimize each edge-side
model, while text classifier consistency and semantic diversity losses promote
collaborative learning among edge-side models. Extensive experiments were
conducted on OfficeHome and DomainNet datasets to demonstrate the effectiveness
of the proposed VAMP in the UMFDA, which outperformed the previous prompt
tuning methods.
| no_new_dataset | 0.949902 |
2503.06107 | Akshat Jain | Akshat Jain | Feature Fusion Attention Network with CycleGAN for Image Dehazing,
De-Snowing and De-Raining | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper presents a novel approach to image dehazing by combining Feature
Fusion Attention (FFA) networks with CycleGAN architecture. Our method
leverages both supervised and unsupervised learning techniques to effectively
remove haze from images while preserving crucial image details. The proposed
hybrid architecture demonstrates significant improvements in image quality
metrics, achieving superior PSNR and SSIM scores compared to traditional
dehazing methods. Through extensive experimentation on the RESIDE and DenseHaze
CVPR 2019 dataset, we show that our approach effectively handles both synthetic
and real-world hazy images. CycleGAN handles the unpaired nature of hazy and
clean images effectively, enabling the model to learn mappings even without
paired data.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 07:18:42 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Jain",
"Akshat",
""
]
]
| TITLE: Feature Fusion Attention Network with CycleGAN for Image Dehazing,
De-Snowing and De-Raining
ABSTRACT: This paper presents a novel approach to image dehazing by combining Feature
Fusion Attention (FFA) networks with CycleGAN architecture. Our method
leverages both supervised and unsupervised learning techniques to effectively
remove haze from images while preserving crucial image details. The proposed
hybrid architecture demonstrates significant improvements in image quality
metrics, achieving superior PSNR and SSIM scores compared to traditional
dehazing methods. Through extensive experimentation on the RESIDE and DenseHaze
CVPR 2019 dataset, we show that our approach effectively handles both synthetic
and real-world hazy images. CycleGAN handles the unpaired nature of hazy and
clean images effectively, enabling the model to learn mappings even without
paired data.
| no_new_dataset | 0.950319 |
2503.06108 | Weixuan Kong | Weixuan Kong, Jinpeng Yu, Zijun Li, Hanwei Liu, Jiqing Qu, Hui Xiao,
Xuefeng Li | Multi-modal expressive personality recognition in non-ideal
audiovisual data based on multi-scale feature enhancement and modal augment | null | null | null | null | cs.SD cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic personality recognition is a research hotspot in the intersection
of computer science and psychology, and in human-computer interaction,
personalised services have a wide range of applications and other scenarios. In
this paper, an end-to-end multimodal performance personality recognition
network is established for both visual and auditory modal data, and
feature-level fusion of the two modalities is carried out through the
cross-attention mechanism, which effectively fuses the features of the two
modal data; and a multiscale feature enhancement module is proposed, which
enhances the expression of the effective information of the features for both
visual and auditory modalities and suppresses the interference of the redundant
information. In addition, during the training process, this paper proposes a
modal enhancement training strategy to simulate non-ideal data situations such
as modal loss and noise interference, which enhances the adaptability of the
model to non-ideal data scenarios and improves the robustness of the model.
Experimental results show that the method proposed in this paper is able to
achieve an average Big Five personality accuracy of 0.916 on the personality
analysis dataset ChaLearn First Impression, which outperforms other existing
audio-visual methods based on both modalities. The ablation experiments also
validate the contributions of the proposed multi-scale feature enhancement
module and the modal enhancement strategy, respectively, to the model
performance. Finally, we simulate six non-ideal data scenarios in the inference
phase to verify the modal enhancement strategy's improvement in model
robustness.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 07:20:44 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kong",
"Weixuan",
""
],
[
"Yu",
"Jinpeng",
""
],
[
"Li",
"Zijun",
""
],
[
"Liu",
"Hanwei",
""
],
[
"Qu",
"Jiqing",
""
],
[
"Xiao",
"Hui",
""
],
[
"Li",
"Xuefeng",
""
]
]
| TITLE: Multi-modal expressive personality recognition in non-ideal
audiovisual data based on multi-scale feature enhancement and modal augment
ABSTRACT: Automatic personality recognition is a research hotspot in the intersection
of computer science and psychology, and in human-computer interaction,
personalised services have a wide range of applications and other scenarios. In
this paper, an end-to-end multimodal performance personality recognition
network is established for both visual and auditory modal data, and
feature-level fusion of the two modalities is carried out through the
cross-attention mechanism, which effectively fuses the features of the two
modal data; and a multiscale feature enhancement module is proposed, which
enhances the expression of the effective information of the features for both
visual and auditory modalities and suppresses the interference of the redundant
information. In addition, during the training process, this paper proposes a
modal enhancement training strategy to simulate non-ideal data situations such
as modal loss and noise interference, which enhances the adaptability of the
model to non-ideal data scenarios and improves the robustness of the model.
Experimental results show that the method proposed in this paper is able to
achieve an average Big Five personality accuracy of 0.916 on the personality
analysis dataset ChaLearn First Impression, which outperforms other existing
audio-visual methods based on both modalities. The ablation experiments also
validate the contributions of the proposed multi-scale feature enhancement
module and the modal enhancement strategy, respectively, to the model
performance. Finally, we simulate six non-ideal data scenarios in the inference
phase to verify the modal enhancement strategy's improvement in model
robustness.
| no_new_dataset | 0.951908 |
2503.06112 | Hoang Thang Ta Dr. | Hoang-Thang Ta, Anh Tran | AF-KAN: Activation Function-Based Kolmogorov-Arnold Networks for
Efficient Representation Learning | 25 pages | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Kolmogorov-Arnold Networks (KANs) have inspired numerous works exploring
their applications across a wide range of scientific problems, with the
potential to replace Multilayer Perceptrons (MLPs). While many KANs are
designed using basis and polynomial functions, such as B-splines, ReLU-KAN
utilizes a combination of ReLU functions to mimic the structure of B-splines
and take advantage of ReLU's speed. However, ReLU-KAN is not built for multiple
inputs, and its limitations stem from ReLU's handling of negative values, which
can restrict feature extraction. To address these issues, we introduce
Activation Function-Based Kolmogorov-Arnold Networks (AF-KAN), expanding
ReLU-KAN with various activations and their function combinations. This novel
KAN also incorporates parameter reduction methods, primarily attention
mechanisms and data normalization, to enhance performance on image
classification datasets. We explore different activation functions, function
combinations, grid sizes, and spline orders to validate the effectiveness of
AF-KAN and determine its optimal configuration. In the experiments, AF-KAN
significantly outperforms MLP, ReLU-KAN, and other KANs with the same parameter
count. It also remains competitive even when using 6 to 10 times fewer
parameters while maintaining the same network structure. However, AF-KAN
requires a longer training time and consumes more FLOPs. The repository for
this work is available at https://github.com/hoangthangta/All-KAN.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 07:38:51 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ta",
"Hoang-Thang",
""
],
[
"Tran",
"Anh",
""
]
]
| TITLE: AF-KAN: Activation Function-Based Kolmogorov-Arnold Networks for
Efficient Representation Learning
ABSTRACT: Kolmogorov-Arnold Networks (KANs) have inspired numerous works exploring
their applications across a wide range of scientific problems, with the
potential to replace Multilayer Perceptrons (MLPs). While many KANs are
designed using basis and polynomial functions, such as B-splines, ReLU-KAN
utilizes a combination of ReLU functions to mimic the structure of B-splines
and take advantage of ReLU's speed. However, ReLU-KAN is not built for multiple
inputs, and its limitations stem from ReLU's handling of negative values, which
can restrict feature extraction. To address these issues, we introduce
Activation Function-Based Kolmogorov-Arnold Networks (AF-KAN), expanding
ReLU-KAN with various activations and their function combinations. This novel
KAN also incorporates parameter reduction methods, primarily attention
mechanisms and data normalization, to enhance performance on image
classification datasets. We explore different activation functions, function
combinations, grid sizes, and spline orders to validate the effectiveness of
AF-KAN and determine its optimal configuration. In the experiments, AF-KAN
significantly outperforms MLP, ReLU-KAN, and other KANs with the same parameter
count. It also remains competitive even when using 6 to 10 times fewer
parameters while maintaining the same network structure. However, AF-KAN
requires a longer training time and consumes more FLOPs. The repository for
this work is available at https://github.com/hoangthangta/All-KAN.
| no_new_dataset | 0.949902 |
2503.06114 | Qi Zhang | Qi Zhang, Xiuyuan Chen, Ziyi He, Lianming Wu, Kun Wang, Jianqi Sun,
and Hongxing Shen | Pathology-Guided AI System for Accurate Segmentation and Diagnosis of
Cervical Spondylosis | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Cervical spondylosis, a complex and prevalent condition, demands precise and
efficient diagnostic techniques for accurate assessment. While MRI offers
detailed visualization of cervical spine anatomy, manual interpretation remains
labor-intensive and prone to error. To address this, we developed an innovative
AI-assisted Expert-based Diagnosis System that automates both segmentation and
diagnosis of cervical spondylosis using MRI. Leveraging a dataset of 960
cervical MRI images from patients with cervical disc herniation, our system
features a pathology-guided segmentation model capable of accurately segmenting
key cervical anatomical structures. The segmentation is followed by an
expert-based diagnostic framework that automates the calculation of critical
clinical indicators. Our segmentation model achieved an impressive average Dice
coefficient exceeding 0.90 across four cervical spinal anatomies and
demonstrated enhanced accuracy in herniation areas. Diagnostic evaluation
further showcased the system's precision, with a mean absolute error (MAE) of
2.44 degrees for the C2-C7 Cobb angle and 3.60 percent for the Maximum Spinal
Cord Compression (MSCC) coefficient. In addition, our method delivered high
accuracy, precision, recall, and F1 scores in herniation localization, K-line
status assessment, and T2 hyperintensity detection. Comparative analysis
demonstrates that our system outperforms existing methods, establishing a new
benchmark for segmentation and diagnostic tasks for cervical spondylosis.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 07:55:33 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Qi",
""
],
[
"Chen",
"Xiuyuan",
""
],
[
"He",
"Ziyi",
""
],
[
"Wu",
"Lianming",
""
],
[
"Wang",
"Kun",
""
],
[
"Sun",
"Jianqi",
""
],
[
"Shen",
"Hongxing",
""
]
]
| TITLE: Pathology-Guided AI System for Accurate Segmentation and Diagnosis of
Cervical Spondylosis
ABSTRACT: Cervical spondylosis, a complex and prevalent condition, demands precise and
efficient diagnostic techniques for accurate assessment. While MRI offers
detailed visualization of cervical spine anatomy, manual interpretation remains
labor-intensive and prone to error. To address this, we developed an innovative
AI-assisted Expert-based Diagnosis System that automates both segmentation and
diagnosis of cervical spondylosis using MRI. Leveraging a dataset of 960
cervical MRI images from patients with cervical disc herniation, our system
features a pathology-guided segmentation model capable of accurately segmenting
key cervical anatomical structures. The segmentation is followed by an
expert-based diagnostic framework that automates the calculation of critical
clinical indicators. Our segmentation model achieved an impressive average Dice
coefficient exceeding 0.90 across four cervical spinal anatomies and
demonstrated enhanced accuracy in herniation areas. Diagnostic evaluation
further showcased the system's precision, with a mean absolute error (MAE) of
2.44 degrees for the C2-C7 Cobb angle and 3.60 percent for the Maximum Spinal
Cord Compression (MSCC) coefficient. In addition, our method delivered high
accuracy, precision, recall, and F1 scores in herniation localization, K-line
status assessment, and T2 hyperintensity detection. Comparative analysis
demonstrates that our system outperforms existing methods, establishing a new
benchmark for segmentation and diagnostic tasks for cervical spondylosis.
| no_new_dataset | 0.949389 |
2503.06117 | Hongjia Zhai | Hongjia Zhai, Boming Zhao, Hai Li, Xiaokun Pan, Yijia He, Zhaopeng
Cui, Hujun Bao, Guofeng Zhang | NeuraLoc: Visual Localization in Neural Implicit Map with Dual
Complementary Features | ICRA 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recently, neural radiance fields (NeRF) have gained significant attention in
the field of visual localization. However, existing NeRF-based approaches
either lack geometric constraints or require extensive storage for feature
matching, limiting their practical applications. To address these challenges,
we propose an efficient and novel visual localization approach based on the
neural implicit map with complementary features. Specifically, to enforce
geometric constraints and reduce storage requirements, we implicitly learn a 3D
keypoint descriptor field, avoiding the need to explicitly store point-wise
features. To further address the semantic ambiguity of descriptors, we
introduce additional semantic contextual feature fields, which enhance the
quality and reliability of 2D-3D correspondences. Besides, we propose
descriptor similarity distribution alignment to minimize the domain gap between
2D and 3D feature spaces during matching. Finally, we construct the matching
graph using both complementary descriptors and contextual features to establish
accurate 2D-3D correspondences for 6-DoF pose estimation. Compared with the
recent NeRF-based approaches, our method achieves a 3$\times$ faster training
speed and a 45$\times$ reduction in model storage. Extensive experiments on two
widely used datasets demonstrate that our approach outperforms or is highly
competitive with other state-of-the-art NeRF-based visual localization methods.
Project page:
\href{https://zju3dv.github.io/neuraloc}{https://zju3dv.github.io/neuraloc}
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 08:04:27 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhai",
"Hongjia",
""
],
[
"Zhao",
"Boming",
""
],
[
"Li",
"Hai",
""
],
[
"Pan",
"Xiaokun",
""
],
[
"He",
"Yijia",
""
],
[
"Cui",
"Zhaopeng",
""
],
[
"Bao",
"Hujun",
""
],
[
"Zhang",
"Guofeng",
""
]
]
| TITLE: NeuraLoc: Visual Localization in Neural Implicit Map with Dual
Complementary Features
ABSTRACT: Recently, neural radiance fields (NeRF) have gained significant attention in
the field of visual localization. However, existing NeRF-based approaches
either lack geometric constraints or require extensive storage for feature
matching, limiting their practical applications. To address these challenges,
we propose an efficient and novel visual localization approach based on the
neural implicit map with complementary features. Specifically, to enforce
geometric constraints and reduce storage requirements, we implicitly learn a 3D
keypoint descriptor field, avoiding the need to explicitly store point-wise
features. To further address the semantic ambiguity of descriptors, we
introduce additional semantic contextual feature fields, which enhance the
quality and reliability of 2D-3D correspondences. Besides, we propose
descriptor similarity distribution alignment to minimize the domain gap between
2D and 3D feature spaces during matching. Finally, we construct the matching
graph using both complementary descriptors and contextual features to establish
accurate 2D-3D correspondences for 6-DoF pose estimation. Compared with the
recent NeRF-based approaches, our method achieves a 3$\times$ faster training
speed and a 45$\times$ reduction in model storage. Extensive experiments on two
widely used datasets demonstrate that our approach outperforms or is highly
competitive with other state-of-the-art NeRF-based visual localization methods.
Project page:
\href{https://zju3dv.github.io/neuraloc}{https://zju3dv.github.io/neuraloc}
| no_new_dataset | 0.946941 |
2503.06121 | Xiao Liu | Li weile, Liu Xiao | BlackGoose Rimer: Harnessing RWKV-7 as a Simple yet Superior Replacement
for Transformers in Large-Scale Time Series Modeling | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Time series models face significant challenges in scaling to handle large and
complex datasets, akin to the scaling achieved by large language models (LLMs).
The unique characteristics of time series data and the computational demands of
model scaling necessitate innovative approaches. While researchers have
explored various architectures such as Transformers, LSTMs, and GRUs to address
these challenges, we propose a novel solution using RWKV-7, which incorporates
meta-learning into its state update mechanism. By integrating RWKV-7's time mix
and channel mix components into the transformer-based time series model Timer,
we achieve a substantial performance improvement of approximately 1.13 to 43.3x
and a 4.5x reduction in training time with 1/23 parameters, all while utilizing
fewer parameters. Our code and model weights are publicly available for further
research and development at https://github.com/Alic-Li/BlackGoose_Rimer.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 08:31:18 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"weile",
"Li",
""
],
[
"Xiao",
"Liu",
""
]
]
| TITLE: BlackGoose Rimer: Harnessing RWKV-7 as a Simple yet Superior Replacement
for Transformers in Large-Scale Time Series Modeling
ABSTRACT: Time series models face significant challenges in scaling to handle large and
complex datasets, akin to the scaling achieved by large language models (LLMs).
The unique characteristics of time series data and the computational demands of
model scaling necessitate innovative approaches. While researchers have
explored various architectures such as Transformers, LSTMs, and GRUs to address
these challenges, we propose a novel solution using RWKV-7, which incorporates
meta-learning into its state update mechanism. By integrating RWKV-7's time mix
and channel mix components into the transformer-based time series model Timer,
we achieve a substantial performance improvement of approximately 1.13 to 43.3x
and a 4.5x reduction in training time with 1/23 parameters, all while utilizing
fewer parameters. Our code and model weights are publicly available for further
research and development at https://github.com/Alic-Li/BlackGoose_Rimer.
| no_new_dataset | 0.945951 |
2503.06125 | Xiaohan Shi | Kai Yang, Zijian Bai, Yang Xiao, Xinyu Li, Xiaohan Shi | RGB-Phase Speckle: Cross-Scene Stereo 3D Reconstruction via Wrapped
Pre-Normalization | Submitted to ICCV 2025 | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | 3D reconstruction garners increasing attention alongside the advancement of
high-level image applications, where dense stereo matching (DSM) serves as a
pivotal technique. Previous studies often rely on publicly available datasets
for training, focusing on modifying network architectures or incorporating
specialized modules to extract domain-invariant features and thus improve model
robustness. In contrast, inspired by single-frame structured-light
phase-shifting encoding, this study introduces RGB-Speckle, a cross-scene 3D
reconstruction framework based on an active stereo camera system, designed to
enhance robustness. Specifically, we propose a novel phase pre-normalization
encoding-decoding method: first, we randomly perturb phase-shift maps and embed
them into the three RGB channels to generate color speckle patterns;
subsequently, the camera captures phase-encoded images modulated by objects as
input to a stereo matching network. This technique effectively mitigates
external interference and ensures consistent input data for RGB-Speckle,
thereby bolstering cross-domain 3D reconstruction stability. To validate the
proposed method, we conduct complex experiments: (1) construct a color speckle
dataset for complex scenarios based on the proposed encoding scheme; (2)
evaluate the impact of the phase pre-normalization encoding-decoding technique
on 3D reconstruction accuracy; and (3) further investigate its robustness
across diverse conditions. Experimental results demonstrate that the proposed
RGB-Speckle model offers significant advantages in cross-domain and cross-scene
3D reconstruction tasks, enhancing model generalization and reinforcing
robustness in challenging environments, thus providing a novel solution for
robust 3D reconstruction research.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 08:37:20 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Yang",
"Kai",
""
],
[
"Bai",
"Zijian",
""
],
[
"Xiao",
"Yang",
""
],
[
"Li",
"Xinyu",
""
],
[
"Shi",
"Xiaohan",
""
]
]
| TITLE: RGB-Phase Speckle: Cross-Scene Stereo 3D Reconstruction via Wrapped
Pre-Normalization
ABSTRACT: 3D reconstruction garners increasing attention alongside the advancement of
high-level image applications, where dense stereo matching (DSM) serves as a
pivotal technique. Previous studies often rely on publicly available datasets
for training, focusing on modifying network architectures or incorporating
specialized modules to extract domain-invariant features and thus improve model
robustness. In contrast, inspired by single-frame structured-light
phase-shifting encoding, this study introduces RGB-Speckle, a cross-scene 3D
reconstruction framework based on an active stereo camera system, designed to
enhance robustness. Specifically, we propose a novel phase pre-normalization
encoding-decoding method: first, we randomly perturb phase-shift maps and embed
them into the three RGB channels to generate color speckle patterns;
subsequently, the camera captures phase-encoded images modulated by objects as
input to a stereo matching network. This technique effectively mitigates
external interference and ensures consistent input data for RGB-Speckle,
thereby bolstering cross-domain 3D reconstruction stability. To validate the
proposed method, we conduct complex experiments: (1) construct a color speckle
dataset for complex scenarios based on the proposed encoding scheme; (2)
evaluate the impact of the phase pre-normalization encoding-decoding technique
on 3D reconstruction accuracy; and (3) further investigate its robustness
across diverse conditions. Experimental results demonstrate that the proposed
RGB-Speckle model offers significant advantages in cross-domain and cross-scene
3D reconstruction tasks, enhancing model generalization and reinforcing
robustness in challenging environments, thus providing a novel solution for
robust 3D reconstruction research.
| no_new_dataset | 0.937726 |
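As a minimal, hedged sketch of how records like the ones above might be
inspected programmatically — assuming, purely for illustration, that the dump
has been exported as a newline-delimited JSON file named arxiv_records.jsonl
with fields such as id, title, label, and prob — the snippet below tallies
records per label value and prints the lowest-confidence entries. The file
name, the export format, and the exact field names are assumptions rather than
anything stated in this listing.

import json
from collections import Counter

# Hedged sketch: "arxiv_records.jsonl" is a hypothetical export of this dump,
# one JSON object per line with (assumed) keys "id", "title", "label", "prob".
def load_records(path):
    records = []
    with open(path, "r", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

records = load_records("arxiv_records.jsonl")

# Count how many records carry each label value (e.g. "no_new_dataset").
label_counts = Counter(r.get("label") for r in records)
print(label_counts)

# List the ten records with the lowest label probability, i.e. the entries a
# reviewer might want to double-check by hand.
for r in sorted(records, key=lambda r: r.get("prob", 0.0))[:10]:
    print(r.get("id"), r.get("prob"), r.get("title"))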