id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2501.18423 | Tom Dooney | Tom Dooney, Harsh Narola, Stefano Bromuri, R. Lyana Curier, Chris Van
Den Broeck, Sarah Caudill, Daniel Stanley Tan | DeepExtractor: Time-domain reconstruction of signals and glitches in
gravitational wave data with deep learning | 22 pages, 16 figures, 4 tables | null | null | null | gr-qc astro-ph.IM cs.LG physics.data-an physics.ins-det | http://creativecommons.org/licenses/by/4.0/ | Gravitational wave (GW) interferometers detect faint signals from distant
astrophysical events, such as binary black hole mergers. However, their high
sensitivity also makes them susceptible to background noise, which can obscure
these signals. This noise often includes transient artifacts called "glitches"
that can mimic astrophysical signals or mask their characteristics. Fast and
accurate reconstruction of both signals and glitches is crucial for reliable
scientific inference. In this study, we present DeepExtractor, a deep learning
framework designed to reconstruct signals and glitches with power exceeding
interferometer noise, regardless of their source. We design DeepExtractor to
model the inherent noise distribution of GW interferometers, following
conventional assumptions that the noise is Gaussian and stationary over short
time scales. It operates by predicting and subtracting the noise component of
the data, retaining only the clean reconstruction. Our approach achieves
superior generalization capabilities for arbitrary signals and glitches
compared to methods that directly map inputs to the clean training waveforms.
We validate DeepExtractor's effectiveness through three experiments: (1)
reconstructing simulated glitches injected into simulated detector noise, (2)
comparing performance with the state-of-the-art BayesWave algorithm, and (3)
analyzing real data from the Gravity Spy dataset to demonstrate effective
glitch subtraction from LIGO strain data. DeepExtractor achieves a median
mismatch of only 0.9% for simulated glitches, outperforming several deep
learning baselines. Additionally, DeepExtractor surpasses BayesWave in glitch
recovery, offering a dramatic computational speedup by reconstructing one
glitch sample in approx. 0.1 seconds on a CPU, compared to BayesWave's
processing time of approx. one hour per glitch.
| [
{
"version": "v1",
"created": "Thu, 30 Jan 2025 15:25:30 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 13:45:31 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Dooney",
"Tom",
""
],
[
"Narola",
"Harsh",
""
],
[
"Bromuri",
"Stefano",
""
],
[
"Curier",
"R. Lyana",
""
],
[
"Broeck",
"Chris Van Den",
""
],
[
"Caudill",
"Sarah",
""
],
[
"Tan",
"Daniel Stanley",
""
]
] | TITLE: DeepExtractor: Time-domain reconstruction of signals and glitches in
gravitational wave data with deep learning
ABSTRACT: Gravitational wave (GW) interferometers detect faint signals from distant
astrophysical events, such as binary black hole mergers. However, their high
sensitivity also makes them susceptible to background noise, which can obscure
these signals. This noise often includes transient artifacts called "glitches"
that can mimic astrophysical signals or mask their characteristics. Fast and
accurate reconstruction of both signals and glitches is crucial for reliable
scientific inference. In this study, we present DeepExtractor, a deep learning
framework designed to reconstruct signals and glitches with power exceeding
interferometer noise, regardless of their source. We design DeepExtractor to
model the inherent noise distribution of GW interferometers, following
conventional assumptions that the noise is Gaussian and stationary over short
time scales. It operates by predicting and subtracting the noise component of
the data, retaining only the clean reconstruction. Our approach achieves
superior generalization capabilities for arbitrary signals and glitches
compared to methods that directly map inputs to the clean training waveforms.
We validate DeepExtractor's effectiveness through three experiments: (1)
reconstructing simulated glitches injected into simulated detector noise, (2)
comparing performance with the state-of-the-art BayesWave algorithm, and (3)
analyzing real data from the Gravity Spy dataset to demonstrate effective
glitch subtraction from LIGO strain data. DeepExtractor achieves a median
mismatch of only 0.9% for simulated glitches, outperforming several deep
learning baselines. Additionally, DeepExtractor surpasses BayesWave in glitch
recovery, offering a dramatic computational speedup by reconstructing one
glitch sample in approx. 0.1 seconds on a CPU, compared to BayesWave's
processing time of approx. one hour per glitch.
|
2501.18810 | Jeremy Clark | Joseph Al-Chami and Jeremy Clark | Quest Love: A First Look at Blockchain Loyalty Programs | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Blockchain ecosystems -- such as those built around chains, layers, and
services -- try to engage users for a variety of reasons: user education,
growing and protecting their market share, climbing metric-measuring
leaderboards with competing systems, demonstrating usage to investors, and
identifying worthy recipients for newly created tokens (airdrops). A popular
approach is offering user quests: small tasks that can be completed by a user,
exposing them to a common task they might want to do in the future, and
rewarding them for completion. In this paper, we analyze a proprietary dataset
from one deployed quest system that offered 43 unique quests over 10 months
with 80M completions. We offer insights about the factors that correlate with
task completion: amount of reward, monetary value of reward, difficulty, and
cost. We also discuss the role of farming and bots, and the factors that
complicate distinguishing real users from automated scripts.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2025 00:05:43 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 13:45:27 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Al-Chami",
"Joseph",
""
],
[
"Clark",
"Jeremy",
""
]
] | TITLE: Quest Love: A First Look at Blockchain Loyalty Programs
ABSTRACT: Blockchain ecosystems -- such as those built around chains, layers, and
services -- try to engage users for a variety of reasons: user education,
growing and protecting their market share, climbing metric-measuring
leaderboards with competing systems, demonstrating usage to investors, and
identifying worthy recipients for newly created tokens (airdrops). A popular
approach is offering user quests: small tasks that can be completed by a user,
exposing them to a common task they might want to do in the future, and
rewarding them for completion. In this paper, we analyze a proprietary dataset
from one deployed quest system that offered 43 unique quests over 10 months
with 80M completions. We offer insights about the factors that correlate with
task completion: amount of reward, monetary value of reward, difficulty, and
cost. We also discuss the role of farming and bots, and the factors that
complicate distinguishing real users from automated scripts.
|
2502.03272 | Matthias Schwab | Matthias Schwab, Mathias Pamminger, Christian Kremser, Markus
Haltmeier, Agnes Mayr | Deep Learning Pipeline for Fully Automated Myocardial Infarct
Segmentation from Clinical Cardiac MR Scans | null | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Purpose: To develop and evaluate a deep learning-based method that performs
myocardial infarct segmentation in a fully automated way.
Materials and Methods: For this retrospective study, a cascaded framework of
two- and three-dimensional convolutional neural networks (CNNs), specialized in
identifying ischemic myocardial scars on late gadolinium enhancement (LGE)
cardiac magnetic resonance (CMR) images, was trained on an in-house training
dataset consisting of 144 examinations. On a separate test dataset from the
same institution, including images from 152 examinations obtained between 2021
and 2023, a quantitative comparison between artificial intelligence (AI)-based
segmentations and manual segmentations was performed. Further, qualitative
assessment of segmentation accuracy was evaluated for both human and
AI-generated contours by two CMR experts in a blinded experiment.
Results: Excellent agreement could be found between manually and
automatically calculated infarct volumes ($\rho_c$ = 0.9). The qualitative
evaluation showed that, compared with human-based measurements, the experts rated
the AI-based segmentations as better representing the actual extent of infarction
significantly more often (p < 0.001; 33.4% AI, 25.1% human, 41.5% equal). By
contrast, for segmentation of microvascular obstruction (MVO), manual
measurements were still preferred (11.3% AI, 55.6% human, 33.1% equal).
Conclusion: This fully-automated segmentation pipeline enables CMR infarct
size to be calculated in a very short time and without requiring any
pre-processing of the input images while matching the segmentation quality of
trained human observers. In a blinded experiment, experts preferred automated
infarct segmentations more often than manual segmentations, paving the way for
a potential clinical application.
| [
{
"version": "v1",
"created": "Wed, 5 Feb 2025 15:29:28 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 10:42:32 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Schwab",
"Matthias",
""
],
[
"Pamminger",
"Mathias",
""
],
[
"Kremser",
"Christian",
""
],
[
"Haltmeier",
"Markus",
""
],
[
"Mayr",
"Agnes",
""
]
] | TITLE: Deep Learning Pipeline for Fully Automated Myocardial Infarct
Segmentation from Clinical Cardiac MR Scans
ABSTRACT: Purpose: To develop and evaluate a deep learning-based method that performs
myocardial infarct segmentation in a fully automated way.
Materials and Methods: For this retrospective study, a cascaded framework of
two- and three-dimensional convolutional neural networks (CNNs), specialized in
identifying ischemic myocardial scars on late gadolinium enhancement (LGE)
cardiac magnetic resonance (CMR) images, was trained on an in-house training
dataset consisting of 144 examinations. On a separate test dataset from the
same institution, including images from 152 examinations obtained between 2021
and 2023, a quantitative comparison between artificial intelligence (AI)-based
segmentations and manual segmentations was performed. Further, qualitative
assessment of segmentation accuracy was evaluated for both human and
AI-generated contours by two CMR experts in a blinded experiment.
Results: Excellent agreement could be found between manually and
automatically calculated infarct volumes ($\rho_c$ = 0.9). The qualitative
evaluation showed that, compared with human-based measurements, the experts rated
the AI-based segmentations as better representing the actual extent of infarction
significantly more often (p < 0.001; 33.4% AI, 25.1% human, 41.5% equal). By
contrast, for segmentation of microvascular obstruction (MVO), manual
measurements were still preferred (11.3% AI, 55.6% human, 33.1% equal).
Conclusion: This fully-automated segmentation pipeline enables CMR infarct
size to be calculated in a very short time and without requiring any
pre-processing of the input images while matching the segmentation quality of
trained human observers. In a blinded experiment, experts preferred automated
infarct segmentations more often than manual segmentations, paving the way for
a potential clinical application.
|
2502.05206 | Xingjun Ma | Xingjun Ma, Yifeng Gao, Yixu Wang, Ruofan Wang, Xin Wang, Ye Sun,
Yifan Ding, Hengyuan Xu, Yunhao Chen, Yunhan Zhao, Hanxun Huang, Yige Li,
Jiaming Zhang, Xiang Zheng, Yang Bai, Zuxuan Wu, Xipeng Qiu, Jingfeng Zhang,
Yiming Li, Xudong Han, Haonan Li, Jun Sun, Cong Wang, Jindong Gu, Baoyuan Wu,
Siheng Chen, Tianwei Zhang, Yang Liu, Mingming Gong, Tongliang Liu, Shirui
Pan, Cihang Xie, Tianyu Pang, Yinpeng Dong, Ruoxi Jia, Yang Zhang, Shiqing
Ma, Xiangyu Zhang, Neil Gong, Chaowei Xiao, Sarah Erfani, Tim Baldwin, Bo Li,
Masashi Sugiyama, Dacheng Tao, James Bailey, Yu-Gang Jiang | Safety at Scale: A Comprehensive Survey of Large Model Safety | 47 pages, 3 figures, 11 tables; GitHub:
https://github.com/xingjunm/Awesome-Large-Model-Safety | null | null | null | cs.CR cs.AI cs.CL cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The rapid advancement of large models, driven by their exceptional abilities
in learning and generalization through large-scale pre-training, has reshaped
the landscape of Artificial Intelligence (AI). These models are now
foundational to a wide range of applications, including conversational AI,
recommendation systems, autonomous driving, content generation, medical
diagnostics, and scientific discovery. However, their widespread deployment
also exposes them to significant safety risks, raising concerns about
robustness, reliability, and ethical implications. This survey provides a
systematic review of current safety research on large models, covering Vision
Foundation Models (VFMs), Large Language Models (LLMs), Vision-Language
Pre-training (VLP) models, Vision-Language Models (VLMs), Diffusion Models
(DMs), and large-model-based Agents. Our contributions are summarized as
follows: (1) We present a comprehensive taxonomy of safety threats to these
models, including adversarial attacks, data poisoning, backdoor attacks,
jailbreak and prompt injection attacks, energy-latency attacks, data and model
extraction attacks, and emerging agent-specific threats. (2) We review defense
strategies proposed for each type of attack, where available, and summarize the
commonly used datasets and benchmarks for safety research. (3) Building on
this, we identify and discuss the open challenges in large model safety,
emphasizing the need for comprehensive safety evaluations, scalable and
effective defense mechanisms, and sustainable data practices. More importantly,
we highlight the necessity of collective efforts from the research community
and international collaboration. Our work can serve as a useful reference for
researchers and practitioners, fostering the ongoing development of
comprehensive defense systems and platforms to safeguard AI models.
| [
{
"version": "v1",
"created": "Sun, 2 Feb 2025 05:14:22 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Feb 2025 06:16:00 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 16:10:18 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ma",
"Xingjun",
""
],
[
"Gao",
"Yifeng",
""
],
[
"Wang",
"Yixu",
""
],
[
"Wang",
"Ruofan",
""
],
[
"Wang",
"Xin",
""
],
[
"Sun",
"Ye",
""
],
[
"Ding",
"Yifan",
""
],
[
"Xu",
"Hengyuan",
""
],
[
"Chen",
"Yunhao",
""
],
[
"Zhao",
"Yunhan",
""
],
[
"Huang",
"Hanxun",
""
],
[
"Li",
"Yige",
""
],
[
"Zhang",
"Jiaming",
""
],
[
"Zheng",
"Xiang",
""
],
[
"Bai",
"Yang",
""
],
[
"Wu",
"Zuxuan",
""
],
[
"Qiu",
"Xipeng",
""
],
[
"Zhang",
"Jingfeng",
""
],
[
"Li",
"Yiming",
""
],
[
"Han",
"Xudong",
""
],
[
"Li",
"Haonan",
""
],
[
"Sun",
"Jun",
""
],
[
"Wang",
"Cong",
""
],
[
"Gu",
"Jindong",
""
],
[
"Wu",
"Baoyuan",
""
],
[
"Chen",
"Siheng",
""
],
[
"Zhang",
"Tianwei",
""
],
[
"Liu",
"Yang",
""
],
[
"Gong",
"Mingming",
""
],
[
"Liu",
"Tongliang",
""
],
[
"Pan",
"Shirui",
""
],
[
"Xie",
"Cihang",
""
],
[
"Pang",
"Tianyu",
""
],
[
"Dong",
"Yinpeng",
""
],
[
"Jia",
"Ruoxi",
""
],
[
"Zhang",
"Yang",
""
],
[
"Ma",
"Shiqing",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Gong",
"Neil",
""
],
[
"Xiao",
"Chaowei",
""
],
[
"Erfani",
"Sarah",
""
],
[
"Baldwin",
"Tim",
""
],
[
"Li",
"Bo",
""
],
[
"Sugiyama",
"Masashi",
""
],
[
"Tao",
"Dacheng",
""
],
[
"Bailey",
"James",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] | TITLE: Safety at Scale: A Comprehensive Survey of Large Model Safety
ABSTRACT: The rapid advancement of large models, driven by their exceptional abilities
in learning and generalization through large-scale pre-training, has reshaped
the landscape of Artificial Intelligence (AI). These models are now
foundational to a wide range of applications, including conversational AI,
recommendation systems, autonomous driving, content generation, medical
diagnostics, and scientific discovery. However, their widespread deployment
also exposes them to significant safety risks, raising concerns about
robustness, reliability, and ethical implications. This survey provides a
systematic review of current safety research on large models, covering Vision
Foundation Models (VFMs), Large Language Models (LLMs), Vision-Language
Pre-training (VLP) models, Vision-Language Models (VLMs), Diffusion Models
(DMs), and large-model-based Agents. Our contributions are summarized as
follows: (1) We present a comprehensive taxonomy of safety threats to these
models, including adversarial attacks, data poisoning, backdoor attacks,
jailbreak and prompt injection attacks, energy-latency attacks, data and model
extraction attacks, and emerging agent-specific threats. (2) We review defense
strategies proposed for each type of attack, where available, and summarize the
commonly used datasets and benchmarks for safety research. (3) Building on
this, we identify and discuss the open challenges in large model safety,
emphasizing the need for comprehensive safety evaluations, scalable and
effective defense mechanisms, and sustainable data practices. More importantly,
we highlight the necessity of collective efforts from the research community
and international collaboration. Our work can serve as a useful reference for
researchers and practitioners, fostering the ongoing development of
comprehensive defense systems and platforms to safeguard AI models.
|
2502.12896 | Eva Sánchez Salido | Eva Sánchez Salido, Julio Gonzalo, Guillermo Marco | None of the Others: a General Technique to Distinguish Reasoning from
Memorization in Multiple-Choice LLM Evaluation Benchmarks | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In LLM evaluations, reasoning is often distinguished from recall/memorization
by applying numerical variations to math-oriented questions. Here we
introduce a general variation method for multiple-choice questions that
completely dissociates the correct answer from previously seen tokens or
concepts, requiring LLMs to understand and reason (rather than memorizing) in
order to answer correctly. Using this method, we evaluate state-of-the-art
proprietary and open-source LLMs on two datasets available in English and
Spanish: the public MMLU benchmark and the private UNED-Access 2024 dataset.
Results show that all models experience remarkable accuracy drops under our
proposed variation, with an average loss of 57% on MMLU and 50% on UNED-Access
2024, ranging from 10% to 93% across models. Notably, the most accurate model
in our experimentation (OpenAI-o3-mini) is not the most robust
(DeepSeek-R1-70B), suggesting that the best models in standard evaluations may
not be the ones with better reasoning capabilities. Also, we see larger
accuracy drops in public (vs private) datasets and questions posed in their
original language (vs a manual translation), which are signs of contamination
and also point to a relevant role of recall/memorization in current LLMs'
answers.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 14:32:44 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 14:15:12 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Salido",
"Eva Sánchez",
""
],
[
"Gonzalo",
"Julio",
""
],
[
"Marco",
"Guillermo",
""
]
] | TITLE: None of the Others: a General Technique to Distinguish Reasoning from
Memorization in Multiple-Choice LLM Evaluation Benchmarks
ABSTRACT: In LLM evaluations, reasoning is often distinguished from recall/memorization
by applying numerical variations to math-oriented questions. Here we
introduce a general variation method for multiple-choice questions that
completely dissociates the correct answer from previously seen tokens or
concepts, requiring LLMs to understand and reason (rather than memorizing) in
order to answer correctly. Using this method, we evaluate state-of-the-art
proprietary and open-source LLMs on two datasets available in English and
Spanish: the public MMLU benchmark and the private UNED-Access 2024 dataset.
Results show that all models experience remarkable accuracy drops under our
proposed variation, with an average loss of 57% on MMLU and 50% on UNED-Access
2024, ranging from 10% to 93% across models. Notably, the most accurate model
in our experimentation (OpenAI-o3-mini) is not the most robust
(DeepSeek-R1-70B), suggesting that the best models in standard evaluations may
not be the ones with better reasoning capabilities. Also, we see larger
accuracy drops in public (vs private) datasets and questions posed in their
original language (vs a manual translation), which are signs of contamination
and also point to a relevant role of recall/memorization in current LLMs'
answers.
|
2502.15013 | Arun Sharma | Majid Farhadloo, Arun Sharma, Mingzhou Yang, Bharat Jayaprakash,
William Northrop, Shashi Shekhar | Towards Physics-Guided Foundation Models | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Traditional foundation models are pre-trained on broad datasets to reduce the
training resources (e.g., time, energy, labeled samples) needed for fine-tuning
a wide range of downstream tasks. However, traditional foundation models
struggle with out-of-distribution prediction and can produce outputs that are
unrealistic and physically infeasible. We propose the notion of
physics-guided foundation models (PGFM), that is, foundation models integrated
with broad or general domain (e.g., scientific) physical knowledge applicable
to a wide range of downstream tasks.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 20:10:22 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 20:51:46 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Farhadloo",
"Majid",
""
],
[
"Sharma",
"Arun",
""
],
[
"Yang",
"Mingzhou",
""
],
[
"Jayaprakash",
"Bharat",
""
],
[
"Northrop",
"William",
""
],
[
"Shekhar",
"Shashi",
""
]
] | TITLE: Towards Physics-Guided Foundation Models
ABSTRACT: Traditional foundation models are pre-trained on broad datasets to reduce the
training resources (e.g., time, energy, labeled samples) needed for fine-tuning
a wide range of downstream tasks. However, traditional foundation models
struggle with out-of-distribution prediction and can produce outputs that are
unrealistic and physically infeasible. We propose the notion of
physics-guided foundation models (PGFM), that is, foundation models integrated
with broad or general domain (e.g., scientific) physical knowledge applicable
to a wide range of downstream tasks.
|
2502.16457 | Heegyu Kim | Heegyu Kim, Taeyang Jeon, Seungtaek Choi, Ji Hoon Hong, Dong Won Jeon,
Ga-Yeon Baek, Gyeong-Won Kwak, Dong-Hee Lee, Jisu Bae, Chihoon Lee, Yunseo
Kim, Seon-Jin Choi, Jin-Seong Park, Sung Beom Cho, Hyunsouk Cho | Towards Fully-Automated Materials Discovery via Large-Scale Synthesis
Dataset and Expert-Level LLM-as-a-Judge | under review | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Materials synthesis is vital for innovations such as energy storage,
catalysis, electronics, and biomedical devices. Yet, the process relies heavily
on empirical, trial-and-error methods guided by expert intuition. Our work aims
to support the materials science community by providing a practical,
data-driven resource. We have curated a comprehensive dataset of 17K
expert-verified synthesis recipes from open-access literature, which forms the
basis of our newly developed benchmark, AlchemyBench. AlchemyBench offers an
end-to-end framework that supports research in large language models applied to
synthesis prediction. It encompasses key tasks, including raw materials and
equipment prediction, synthesis procedure generation, and characterization
outcome forecasting. We propose an LLM-as-a-Judge framework that leverages
large language models for automated evaluation, demonstrating strong
statistical agreement with expert assessments. Overall, our contributions offer
a supportive foundation for exploring the capabilities of LLMs in predicting
and guiding materials synthesis, ultimately paving the way for more efficient
experimental design and accelerated innovation in materials science.
| [
{
"version": "v1",
"created": "Sun, 23 Feb 2025 06:16:23 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 00:40:18 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 14:00:39 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Mar 2025 11:37:27 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Kim",
"Heegyu",
""
],
[
"Jeon",
"Taeyang",
""
],
[
"Choi",
"Seungtaek",
""
],
[
"Hong",
"Ji Hoon",
""
],
[
"Jeon",
"Dong Won",
""
],
[
"Baek",
"Ga-Yeon",
""
],
[
"Kwak",
"Gyeong-Won",
""
],
[
"Lee",
"Dong-Hee",
""
],
[
"Bae",
"Jisu",
""
],
[
"Lee",
"Chihoon",
""
],
[
"Kim",
"Yunseo",
""
],
[
"Choi",
"Seon-Jin",
""
],
[
"Park",
"Jin-Seong",
""
],
[
"Cho",
"Sung Beom",
""
],
[
"Cho",
"Hyunsouk",
""
]
] | TITLE: Towards Fully-Automated Materials Discovery via Large-Scale Synthesis
Dataset and Expert-Level LLM-as-a-Judge
ABSTRACT: Materials synthesis is vital for innovations such as energy storage,
catalysis, electronics, and biomedical devices. Yet, the process relies heavily
on empirical, trial-and-error methods guided by expert intuition. Our work aims
to support the materials science community by providing a practical,
data-driven resource. We have curated a comprehensive dataset of 17K
expert-verified synthesis recipes from open-access literature, which forms the
basis of our newly developed benchmark, AlchemyBench. AlchemyBench offers an
end-to-end framework that supports research in large language models applied to
synthesis prediction. It encompasses key tasks, including raw materials and
equipment prediction, synthesis procedure generation, and characterization
outcome forecasting. We propose an LLM-as-a-Judge framework that leverages
large language models for automated evaluation, demonstrating strong
statistical agreement with expert assessments. Overall, our contributions offer
a supportive foundation for exploring the capabilities of LLMs in predicting
and guiding materials synthesis, ultimately paving the way for more efficient
experimental design and accelerated innovation in materials science.
|
2502.19459 | Yu Liu | Yu Liu, Baoxiong Jia, Ruijie Lu, Junfeng Ni, Song-Chun Zhu, Siyuan
Huang | ArtGS: Building Interactable Replicas of Complex Articulated Objects via
Gaussian Splatting | null | null | null | null | cs.GR cs.LG cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Building articulated objects is a key challenge in computer vision. Existing
methods often fail to effectively integrate information across different object
states, limiting the accuracy of part-mesh reconstruction and part dynamics
modeling, particularly for complex multi-part articulated objects. We introduce
ArtGS, a novel approach that leverages 3D Gaussians as a flexible and efficient
representation to address these issues. Our method incorporates canonical
Gaussians with coarse-to-fine initialization and updates for aligning
articulated part information across different object states, and employs a
skinning-inspired part dynamics modeling module to improve both part-mesh
reconstruction and articulation learning. Extensive experiments on both
synthetic and real-world datasets, including a new benchmark for complex
multi-part objects, demonstrate that ArtGS achieves state-of-the-art
performance in joint parameter estimation and part mesh reconstruction. Our
approach significantly improves reconstruction quality and efficiency,
especially for multi-part articulated objects. Additionally, we provide
comprehensive analyses of our design choices, validating the effectiveness of
each component to highlight potential areas for future improvement. Our work is
made publicly available at: https://articulate-gs.github.io.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 10:25:32 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 08:43:16 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Liu",
"Yu",
""
],
[
"Jia",
"Baoxiong",
""
],
[
"Lu",
"Ruijie",
""
],
[
"Ni",
"Junfeng",
""
],
[
"Zhu",
"Song-Chun",
""
],
[
"Huang",
"Siyuan",
""
]
] | TITLE: ArtGS: Building Interactable Replicas of Complex Articulated Objects via
Gaussian Splatting
ABSTRACT: Building articulated objects is a key challenge in computer vision. Existing
methods often fail to effectively integrate information across different object
states, limiting the accuracy of part-mesh reconstruction and part dynamics
modeling, particularly for complex multi-part articulated objects. We introduce
ArtGS, a novel approach that leverages 3D Gaussians as a flexible and efficient
representation to address these issues. Our method incorporates canonical
Gaussians with coarse-to-fine initialization and updates for aligning
articulated part information across different object states, and employs a
skinning-inspired part dynamics modeling module to improve both part-mesh
reconstruction and articulation learning. Extensive experiments on both
synthetic and real-world datasets, including a new benchmark for complex
multi-part objects, demonstrate that ArtGS achieves state-of-the-art
performance in joint parameter estimation and part mesh reconstruction. Our
approach significantly improves reconstruction quality and efficiency,
especially for multi-part articulated objects. Additionally, we provide
comprehensive analyses of our design choices, validating the effectiveness of
each component to highlight potential areas for future improvement. Our work is
made publicly available at: https://articulate-gs.github.io.
|
2502.21201 | Otto Brookes | Otto Brookes, Maksim Kukushkin, Majid Mirmehdi, Colleen Stephens,
Paula Dieguez, Thurston C. Hicks, Sorrel Jones, Kevin Lee, Maureen S.
McCarthy, Amelia Meier, Emmanuelle Normand, Erin G. Wessling, Roman M. Wittig,
Kevin Langergraber, Klaus Zuberbühler, Lukas Boesch, Thomas Schmid, Mimi
Arandjelovic, Hjalmar Kühl, Tilo Burghardt | The PanAf-FGBG Dataset: Understanding the Impact of Backgrounds in
Wildlife Behaviour Recognition | 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR) | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Computer vision analysis of camera trap video footage is essential for
wildlife conservation, as captured behaviours offer some of the earliest
indicators of changes in population health. Recently, several high-impact
animal behaviour datasets and methods have been introduced to encourage their
use; however, the role of behaviour-correlated background information and its
significant effect on out-of-distribution generalisation remain unexplored. In
response, we present the PanAf-FGBG dataset, featuring 20 hours of wild
chimpanzee behaviours, recorded at over 350 individual camera locations.
Uniquely, it pairs every video containing a chimpanzee (referred to as a foreground
video) with a corresponding background video (with no chimpanzee) from the same
camera location. We present two views of the dataset: one with overlapping
camera locations and one with disjoint locations. This setup enables, for the
first time, direct evaluation of in-distribution and out-of-distribution
conditions, and allows the impact of backgrounds on behaviour recognition models
to be quantified. All clips come with rich behavioural annotations and metadata
including unique camera IDs and detailed textual scene descriptions.
Additionally, we establish several baselines and present a highly effective
latent-space normalisation technique that boosts out-of-distribution
performance by +5.42% mAP for convolutional and +3.75% mAP for
transformer-based models. Finally, we provide an in-depth analysis on the role
of backgrounds in out-of-distribution behaviour recognition, including the so
far unexplored impact of background durations (i.e., the count of background
frames within foreground videos).
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 16:18:57 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 10:32:20 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 15:11:51 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Brookes",
"Otto",
""
],
[
"Kukushkin",
"Maksim",
""
],
[
"Mirmehdi",
"Majid",
""
],
[
"Stephens",
"Colleen",
""
],
[
"Dieguez",
"Paula",
""
],
[
"Hicks",
"Thurston C.",
""
],
[
"Jones",
"Sorrel",
""
],
[
"Lee",
"Kevin",
""
],
[
"McCarthy",
"Maureen S.",
""
],
[
"Meier",
"Amelia",
""
],
[
"Normand",
"Emmanuelle",
""
],
[
"Wessling",
"Erin G.",
""
],
[
"Wittig",
"Roman M.",
""
],
[
"Langergraber",
"Kevin",
""
],
[
"Zuberbühler",
"Klaus",
""
],
[
"Boesch",
"Lukas",
""
],
[
"Schmid",
"Thomas",
""
],
[
"Arandjelovic",
"Mimi",
""
],
[
"Kühl",
"Hjalmar",
""
],
[
"Burghardt",
"Tilo",
""
]
] | TITLE: The PanAf-FGBG Dataset: Understanding the Impact of Backgrounds in
Wildlife Behaviour Recognition
ABSTRACT: Computer vision analysis of camera trap video footage is essential for
wildlife conservation, as captured behaviours offer some of the earliest
indicators of changes in population health. Recently, several high-impact
animal behaviour datasets and methods have been introduced to encourage their
use; however, the role of behaviour-correlated background information and its
significant effect on out-of-distribution generalisation remain unexplored. In
response, we present the PanAf-FGBG dataset, featuring 20 hours of wild
chimpanzee behaviours, recorded at over 350 individual camera locations.
Uniquely, it pairs every video containing a chimpanzee (referred to as a
foreground video) with a corresponding background video (with no chimpanzee) from the same
camera location. We present two views of the dataset: one with overlapping
camera locations and one with disjoint locations. This setup enables, for the
first time, direct evaluation of in-distribution and out-of-distribution
conditions, and allows the impact of backgrounds on behaviour recognition models
to be quantified. All clips come with rich behavioural annotations and metadata
including unique camera IDs and detailed textual scene descriptions.
Additionally, we establish several baselines and present a highly effective
latent-space normalisation technique that boosts out-of-distribution
performance by +5.42% mAP for convolutional and +3.75% mAP for
transformer-based models. Finally, we provide an in-depth analysis on the role
of backgrounds in out-of-distribution behaviour recognition, including the so
far unexplored impact of background durations (i.e., the count of background
frames within foreground videos).
|
2503.00713 | Yinqian Sun | Yinqian Sun, Feifei Zhao, Mingyang Lv and Yi Zeng | Spiking World Model with Multi-Compartment Neurons for Model-based
Reinforcement Learning | null | null | null | null | cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Brain-inspired spiking neural networks (SNNs) have garnered significant
research attention in algorithm design and perception applications. However,
their potential in the decision-making domain, particularly in model-based
reinforcement learning, remains underexplored. The difficulty lies in the need
for spiking neurons with long-term temporal memory capabilities, as well as
network optimization that can integrate and learn information for accurate
predictions. The dynamic dendritic information integration mechanism of
biological neurons brings us valuable insights for addressing these challenges.
In this study, we propose a multi-compartment neuron model capable of
nonlinearly integrating information from multiple dendritic sources to
dynamically process long sequential inputs. Based on this model, we construct a
Spiking World Model (Spiking-WM) to enable model-based deep reinforcement
learning (DRL) with SNNs. We evaluated our model using the DeepMind Control
Suite, demonstrating that Spiking-WM outperforms existing SNN-based models and
achieves performance comparable to artificial neural network (ANN)-based world
models employing Gated Recurrent Units (GRUs). Furthermore, we assess the
long-term memory capabilities of the proposed model in speech datasets,
including SHD, TIMIT, and LibriSpeech 100h, showing that our multi-compartment
neuron model surpasses other SNN-based architectures in processing long
sequences. Our findings underscore the critical role of dendritic information
integration in shaping neuronal function, emphasizing the importance of
cooperative dendritic processing in enhancing neural computation.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 03:40:10 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 02:49:46 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 09:47:14 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Sun",
"Yinqian",
""
],
[
"Zhao",
"Feifei",
""
],
[
"Lv",
"Mingyang",
""
],
[
"Zeng",
"Yi",
""
]
] | TITLE: Spiking World Model with Multi-Compartment Neurons for Model-based
Reinforcement Learning
ABSTRACT: Brain-inspired spiking neural networks (SNNs) have garnered significant
research attention in algorithm design and perception applications. However,
their potential in the decision-making domain, particularly in model-based
reinforcement learning, remains underexplored. The difficulty lies in the need
for spiking neurons with long-term temporal memory capabilities, as well as
network optimization that can integrate and learn information for accurate
predictions. The dynamic dendritic information integration mechanism of
biological neurons brings us valuable insights for addressing these challenges.
In this study, we propose a multi-compartment neuron model capable of
nonlinearly integrating information from multiple dendritic sources to
dynamically process long sequential inputs. Based on this model, we construct a
Spiking World Model (Spiking-WM) to enable model-based deep reinforcement
learning (DRL) with SNNs. We evaluated our model using the DeepMind Control
Suite, demonstrating that Spiking-WM outperforms existing SNN-based models and
achieves performance comparable to artificial neural network (ANN)-based world
models employing Gated Recurrent Units (GRUs). Furthermore, we assess the
long-term memory capabilities of the proposed model in speech datasets,
including SHD, TIMIT, and LibriSpeech 100h, showing that our multi-compartment
neuron model surpasses other SNN-based architectures in processing long
sequences. Our findings underscore the critical role of dendritic information
integration in shaping neuronal function, emphasizing the importance of
cooperative dendritic processing in enhancing neural computation.
|
2503.02191 | Mia Mohammad Imran | Mia Mohammad Imran, Robert Zita, Rebekah Copeland, Preetha Chatterjee,
Rahat Rizvi Rahman, and Kostadin Damevski | Understanding and Predicting Derailment in Toxic Conversations on GitHub | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Software projects thrive on the involvement and contributions of individuals
from different backgrounds. However, toxic language and negative interactions
can hinder the participation and retention of contributors and alienate
newcomers. Proactive moderation strategies aim to prevent toxicity from
occurring by addressing conversations that have derailed from their intended
purpose. This study aims to understand and predict conversational derailment
leading to toxicity on GitHub.
To facilitate this research, we curate a novel dataset comprising 202 toxic
conversations from GitHub with annotated derailment points, along with 696
non-toxic conversations as a baseline. Based on this dataset, we identify
unique characteristics of toxic conversations and derailment points, including
linguistic markers such as second-person pronouns, negation terms, and tones of
Bitter Frustration and Impatience, as well as patterns in conversational
dynamics between project contributors and external participants.
Leveraging these empirical observations, we propose a proactive moderation
approach to automatically detect and address potentially harmful conversations
before escalation. By utilizing modern LLMs, we develop a conversation
trajectory summary technique that captures the evolution of discussions and
identifies early signs of derailment. Our experiments demonstrate that LLM
prompts tailored to provide summaries of GitHub conversations achieve 70%
F1-Score in predicting conversational derailment, strongly improving over a set
of baseline approaches.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 02:01:37 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 03:25:44 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 14:54:16 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Imran",
"Mia Mohammad",
""
],
[
"Zita",
"Robert",
""
],
[
"Copeland",
"Rebekah",
""
],
[
"Chatterjee",
"Preetha",
""
],
[
"Rahman",
"Rahat Rizvi",
""
],
[
"Damevski",
"Kostadin",
""
]
] | TITLE: Understanding and Predicting Derailment in Toxic Conversations on GitHub
ABSTRACT: Software projects thrive on the involvement and contributions of individuals
from different backgrounds. However, toxic language and negative interactions
can hinder the participation and retention of contributors and alienate
newcomers. Proactive moderation strategies aim to prevent toxicity from
occurring by addressing conversations that have derailed from their intended
purpose. This study aims to understand and predict conversational derailment
leading to toxicity on GitHub.
To facilitate this research, we curate a novel dataset comprising 202 toxic
conversations from GitHub with annotated derailment points, along with 696
non-toxic conversations as a baseline. Based on this dataset, we identify
unique characteristics of toxic conversations and derailment points, including
linguistic markers such as second-person pronouns, negation terms, and tones of
Bitter Frustration and Impatience, as well as patterns in conversational
dynamics between project contributors and external participants.
Leveraging these empirical observations, we propose a proactive moderation
approach to automatically detect and address potentially harmful conversations
before escalation. By utilizing modern LLMs, we develop a conversation
trajectory summary technique that captures the evolution of discussions and
identifies early signs of derailment. Our experiments demonstrate that LLM
prompts tailored to provide summaries of GitHub conversations achieve 70%
F1-Score in predicting conversational derailment, strongly improving over a set
of baseline approaches.
|
2503.06894 | Xiaoqian Hu | Xiaoqian Hu | A Deep Learning Approach for Augmenting Perceptional Understanding of
Histopathology Images | Accepted by International Conference on Semantic & Natural Language
Processing (SNLP 2025) | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | In recent years, digital technologies have made significant strides in
augmenting human health, cognition, and perception, particularly within the
field of computational pathology. This paper presents a novel approach to
enhancing the analysis of histopathology images by leveraging a multi-modal
model that combines Vision Transformers (ViT) with GPT-2 for image captioning.
The model is fine-tuned on the specialized ARCH dataset, which includes dense
image captions derived from clinical and academic resources, to capture the
complexities of pathology images such as tissue morphologies, staining
variations, and pathological conditions. By generating accurate, contextually
relevant captions, the model augments the cognitive capabilities of healthcare
professionals, enabling more efficient disease classification, segmentation,
and detection. The model enhances the perception of subtle pathological
features in images that might otherwise go unnoticed, thereby improving
diagnostic accuracy. Our approach demonstrates the potential for digital
technologies to augment human cognitive abilities in medical image analysis,
providing steps toward more personalized and accurate healthcare outcomes.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 03:50:25 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 08:18:22 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Hu",
"Xiaoqian",
""
]
] | TITLE: A Deep Learning Approach for Augmenting Perceptional Understanding of
Histopathology Images
ABSTRACT: In recent years, digital technologies have made significant strides in
augmenting human health, cognition, and perception, particularly within the
field of computational pathology. This paper presents a novel approach to
enhancing the analysis of histopathology images by leveraging a multi-modal
model that combines Vision Transformers (ViT) with GPT-2 for image captioning.
The model is fine-tuned on the specialized ARCH dataset, which includes dense
image captions derived from clinical and academic resources, to capture the
complexities of pathology images such as tissue morphologies, staining
variations, and pathological conditions. By generating accurate, contextually
relevant captions, the model augments the cognitive capabilities of healthcare
professionals, enabling more efficient disease classification, segmentation,
and detection. The model enhances the perception of subtle pathological
features in images that might otherwise go unnoticed, thereby improving
diagnostic accuracy. Our approach demonstrates the potential for digital
technologies to augment human cognitive abilities in medical image analysis,
providing steps toward more personalized and accurate healthcare outcomes.
|
2503.07978 | Jiahao Xu | Jiahao Xu, Zikai Zhang and Rui Hu | Detecting Backdoor Attacks in Federated Learning via Direction Alignment
Inspection | null | null | null | null | cs.LG cs.CR cs.DC | http://creativecommons.org/licenses/by/4.0/ | The distributed nature of training makes Federated Learning (FL) vulnerable
to backdoor attacks, where malicious model updates aim to compromise the global
model's performance on specific tasks. Existing defense methods show limited
efficacy as they overlook the inconsistency between benign and malicious model
updates regarding both general and fine-grained directions. To fill this gap,
we introduce AlignIns, a novel defense method designed to safeguard FL systems
against backdoor attacks. AlignIns looks into the direction of each model
update through a direction alignment inspection process. Specifically, it
examines the alignment of model updates with the overall update direction and
analyzes the distribution of the signs of their significant parameters,
comparing them with the principle sign across all model updates. Model updates
that exhibit an unusual degree of alignment are considered malicious and thus
be filtered out. We provide the theoretical analysis of the robustness of
AlignIns and its propagation error in FL. Our empirical results on both
independent and identically distributed (IID) and non-IID datasets demonstrate
that AlignIns achieves higher robustness compared to the state-of-the-art
defense methods. The code is available at
https://github.com/JiiahaoXU/AlignIns.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 02:24:53 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 22:09:36 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Xu",
"Jiahao",
""
],
[
"Zhang",
"Zikai",
""
],
[
"Hu",
"Rui",
""
]
] | TITLE: Detecting Backdoor Attacks in Federated Learning via Direction Alignment
Inspection
ABSTRACT: The distributed nature of training makes Federated Learning (FL) vulnerable
to backdoor attacks, where malicious model updates aim to compromise the global
model's performance on specific tasks. Existing defense methods show limited
efficacy as they overlook the inconsistency between benign and malicious model
updates regarding both general and fine-grained directions. To fill this gap,
we introduce AlignIns, a novel defense method designed to safeguard FL systems
against backdoor attacks. AlignIns looks into the direction of each model
update through a direction alignment inspection process. Specifically, it
examines the alignment of model updates with the overall update direction and
analyzes the distribution of the signs of their significant parameters,
comparing them with the principal sign across all model updates. Model updates
that exhibit an unusual degree of alignment are considered malicious and are
thus filtered out. We provide a theoretical analysis of the robustness of
AlignIns and its propagation error in FL. Our empirical results on both
independent and identically distributed (IID) and non-IID datasets demonstrate
that AlignIns achieves higher robustness compared to the state-of-the-art
defense methods. The code is available at
https://github.com/JiiahaoXU/AlignIns.
|
2503.08352 | Ruiqi Zhang | Ruiqi Zhang, Hao Zhu, Jingyi Zhao, Qi Zhang, Xun Cao, Zhan Ma | Mitigating Ambiguities in 3D Classification with Gaussian Splatting | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D classification with point cloud input is a fundamental problem in 3D
vision. However, due to the discrete nature and the insufficient material
description of point cloud representations, there are ambiguities in
distinguishing wire-like and flat surfaces, as well as transparent or
reflective objects. To address these issues, we propose Gaussian Splatting (GS)
point cloud-based 3D classification. We find that the scale and rotation
coefficients in the GS point cloud help characterize surface types.
Specifically, wire-like surfaces consist of multiple slender Gaussian
ellipsoids, while flat surfaces are composed of a few flat Gaussian ellipsoids.
Additionally, the opacity in the GS point cloud represents the transparency
characteristics of objects. As a result, ambiguities in point cloud-based 3D
classification can be mitigated by utilizing the GS point cloud as input. To verify
the effectiveness of GS point cloud input, we construct the first real-world GS
point cloud dataset in the community, which includes 20 categories with 200
objects in each category. Experiments not only validate the superiority of GS
point cloud input, especially in distinguishing ambiguous objects, but also
demonstrate the generalization ability across different classification methods.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 12:06:57 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 14:18:14 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhang",
"Ruiqi",
""
],
[
"Zhu",
"Hao",
""
],
[
"Zhao",
"Jingyi",
""
],
[
"Zhang",
"Qi",
""
],
[
"Cao",
"Xun",
""
],
[
"Ma",
"Zhan",
""
]
] | TITLE: Mitigating Ambiguities in 3D Classification with Gaussian Splatting
ABSTRACT: 3D classification with point cloud input is a fundamental problem in 3D
vision. However, due to the discrete nature and the insufficient material
description of point cloud representations, there are ambiguities in
distinguishing wire-like and flat surfaces, as well as transparent or
reflective objects. To address these issues, we propose Gaussian Splatting (GS)
point cloud-based 3D classification. We find that the scale and rotation
coefficients in the GS point cloud help characterize surface types.
Specifically, wire-like surfaces consist of multiple slender Gaussian
ellipsoids, while flat surfaces are composed of a few flat Gaussian ellipsoids.
Additionally, the opacity in the GS point cloud represents the transparency
characteristics of objects. As a result, ambiguities in point cloud-based 3D
classification can be mitigated by utilizing the GS point cloud as input. To verify
the effectiveness of GS point cloud input, we construct the first real-world GS
point cloud dataset in the community, which includes 20 categories with 200
objects in each category. Experiments not only validate the superiority of GS
point cloud input, especially in distinguishing ambiguous objects, but also
demonstrate the generalization ability across different classification methods.
|
2503.08415 | Feiyang Wu | Feiyang Wu, Zhuohang Bian, Guoyang Duan, Tianle Xu, Junchi Wu, Teng
Ma, Yongqiang Yao, Ruihao Gong, Youwei Zhuo | TokenSim: Enabling Hardware and Software Exploration for Large Language
Model Inference Systems | 9 pages, 15 figures | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing demand for large language model (LLM) serving has necessitated
significant advancements in the optimization and profiling of LLM inference
systems. As these models become integral to a wide range of applications, the
need for efficient and scalable serving solutions has grown exponentially. This
work introduces TokenSim, a comprehensive hardware and software exploration
system designed specifically for LLM inference. TokenSim is characterized by
its support for extensible system optimizations including scheduling and memory
management. We validate the results with systems running on real-world
datasets, achieving an error rate of less than 1%. Furthermore, TokenSim
facilitates various insightful explorations into the performance and
optimization of LLM serving systems.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 13:24:39 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 15:40:41 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wu",
"Feiyang",
""
],
[
"Bian",
"Zhuohang",
""
],
[
"Duan",
"Guoyang",
""
],
[
"Xu",
"Tianle",
""
],
[
"Wu",
"Junchi",
""
],
[
"Ma",
"Teng",
""
],
[
"Yao",
"Yongqiang",
""
],
[
"Gong",
"Ruihao",
""
],
[
"Zhuo",
"Youwei",
""
]
] | TITLE: TokenSim: Enabling Hardware and Software Exploration for Large Language
Model Inference Systems
ABSTRACT: The increasing demand for large language model (LLM) serving has necessitated
significant advancements in the optimization and profiling of LLM inference
systems. As these models become integral to a wide range of applications, the
need for efficient and scalable serving solutions has grown exponentially. This
work introduces TokenSim, a comprehensive hardware and software exploration
system designed specifically for LLM inference. TokenSim is characterized by
its support for extensible system optimizations including scheduling and memory
management. We validate the results with systems running on real-world
datasets, achieving an error rate of less than 1%. Furthermore, TokenSim
facilitates various insightful explorations into the performance and
optimization of LLM serving systems.
|
2503.08696 | Kasymkhan Khubiev | Kasymkhan Khubiev and Mikhail Semenov | Multimodal Stock Price Prediction: A Case Study of the Russian
Securities Market | NSCF-2024, PROGRAM SYSTEMS: THEORY AND APPLICATIONS | http://psta.psiras.ru:8081/ru/2025/1_83-130 | 10.25209/2079-3316-2025-16-1-83-130 | null | q-fin.ST cs.LG q-fin.CP | http://creativecommons.org/licenses/by-sa/4.0/ | Classical asset price forecasting methods primarily rely on numerical data,
such as price time series, trading volumes, limit order book data, and
technical analysis indicators. However, the news flow plays a significant role
in price formation, making the development of multimodal approaches that
combine textual and numerical data for improved prediction accuracy highly
relevant. This paper addresses the problem of forecasting financial asset
prices using a multimodal approach that combines candlestick time series and
textual news flow data. A unique dataset was collected for the study, which
includes time series for 176 Russian stocks traded on the Moscow Exchange and
79,555 financial news articles in Russian. For processing textual data,
pre-trained models RuBERT and Vikhr-Qwen2.5-0.5b-Instruct (a large language
model) were used, while time series and vectorized text data were processed
using an LSTM recurrent neural network. The experiments compared models based
on a single modality (time series only) and two modalities, as well as various
methods for aggregating text vector representations. Prediction quality was
estimated using two key metrics: Accuracy (direction of price movement
prediction: up or down) and Mean Absolute Percentage Error (MAPE), which
measures the deviation of the predicted price from the true price. The
experiments showed that incorporating textual modality reduced the MAPE value
by 55%. The resulting multimodal dataset holds value for the further adaptation
of language models in the financial sector. Future research directions include
optimizing textual modality parameters, such as the time window, sentiment, and
chronological order of news messages.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 21:20:32 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Khubiev",
"Kasymkhan",
""
],
[
"Semenov",
"Mikhail",
""
]
] | TITLE: Multimodal Stock Price Prediction: A Case Study of the Russian
Securities Market
ABSTRACT: Classical asset price forecasting methods primarily rely on numerical data,
such as price time series, trading volumes, limit order book data, and
technical analysis indicators. However, the news flow plays a significant role
in price formation, making the development of multimodal approaches that
combine textual and numerical data for improved prediction accuracy highly
relevant. This paper addresses the problem of forecasting financial asset
prices using a multimodal approach that combines candlestick time series and
textual news flow data. A unique dataset was collected for the study, which
includes time series for 176 Russian stocks traded on the Moscow Exchange and
79,555 financial news articles in Russian. For processing textual data,
pre-trained models RuBERT and Vikhr-Qwen2.5-0.5b-Instruct (a large language
model) were used, while time series and vectorized text data were processed
using an LSTM recurrent neural network. The experiments compared models based
on a single modality (time series only) and two modalities, as well as various
methods for aggregating text vector representations. Prediction quality was
estimated using two key metrics: Accuracy (direction of price movement
prediction: up or down) and Mean Absolute Percentage Error (MAPE), which
measures the deviation of the predicted price from the true price. The
experiments showed that incorporating textual modality reduced the MAPE value
by 55%. The resulting multimodal dataset holds value for the further adaptation
of language models in the financial sector. Future research directions include
optimizing textual modality parameters, such as the time window, sentiment, and
chronological order of news messages.
|
2503.10538 | Teresa Head-Gordon | Eric C.-Y. Yuan, Yunsheng Liu, Junmin Chen, Peichen Zhong, Sanjeev
Raja, Tobias Kreiman, Santiago Vargas, Wenbin Xu, Martin Head-Gordon, Chao
Yang, Samuel M. Blau, Bingqing Cheng, Aditi Krishnapriyan, Teresa Head-Gordon | Foundation Models for Atomistic Simulation of Chemistry and Materials | null | null | null | null | physics.chem-ph | http://creativecommons.org/licenses/by-sa/4.0/ | Given the power of large language and large vision models, it is of profound
and fundamental interest to ask if a foundational model based on data and
parameter scaling laws and pre-training strategies is possible for learned
simulations of chemistry and materials. The scaling of large and diverse
datasets and highly expressive architectures for chemical and materials
sciences should result in a foundation model that is more efficient and broadly
transferable, robust to out-of-distribution challenges, and easily fine-tuned
to a variety of downstream observables, when compared to specific training from
scratch on targeted applications in atomistic simulation. In this Perspective
we aim to cover the rapidly advancing field of machine learned interatomic
potentials (MLIP), and to illustrate a path to create chemistry and materials
MLIP foundation models at larger scale.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 16:52:12 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 07:16:25 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Yuan",
"Eric C. -Y.",
""
],
[
"Liu",
"Yunsheng",
""
],
[
"Chen",
"Junmin",
""
],
[
"Zhong",
"Peichen",
""
],
[
"Raja",
"Sanjeev",
""
],
[
"Kreiman",
"Tobias",
""
],
[
"Vargas",
"Santiago",
""
],
[
"Xu",
"Wenbin",
""
],
[
"Head-Gordon",
"Martin",
""
],
[
"Yang",
"Chao",
""
],
[
"Blau",
"Samuel M.",
""
],
[
"Cheng",
"Bingqing",
""
],
[
"Krishnapriyan",
"Aditi",
""
],
[
"Head-Gordon",
"Teresa",
""
]
] | TITLE: Foundation Models for Atomistic Simulation of Chemistry and Materials
ABSTRACT: Given the power of large language and large vision models, it is of profound
and fundamental interest to ask if a foundational model based on data and
parameter scaling laws and pre-training strategies is possible for learned
simulations of chemistry and materials. The scaling of large and diverse
datasets and highly expressive architectures for chemical and materials
sciences should result in a foundation model that is more efficient and broadly
transferable, robust to out-of-distribution challenges, and easily fine-tuned
to a variety of downstream observables, when compared to specific training from
scratch on targeted applications in atomistic simulation. In this Perspective
we aim to cover the rapidly advancing field of machine learned interatomic
potentials (MLIP), and to illustrate a path to create chemistry and materials
MLIP foundation models at larger scale.
|
2503.10695 | Mooho Song | Mooho Song, Hyeryung Son, Jay-Yoon Lee | Introducing Verification Task of Set Consistency with Set-Consistency
Energy Networks | null | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Examining logical inconsistencies among multiple statements (such as
collections of sentences or question-answer pairs) is a crucial challenge in
machine learning, particularly for ensuring the safety and reliability of
models. Traditional methods that rely on pairwise comparisons often fail to
capture inconsistencies that only emerge when more than two statements are
evaluated collectively. To address this gap, we introduce the task of
set-consistency verification, an extension of natural language inference (NLI)
that assesses the logical coherence of entire sets rather than isolated pairs.
Building on this task, we present the Set-Consistency Energy Network
(SC-Energy), a novel model that employs a contrastive loss framework to learn
the compatibility among a collection of statements. Our approach not only
efficiently verifies inconsistencies and pinpoints the specific statements
responsible for logical contradictions, but also significantly outperforms
existing methods, including prompting-based LLMs. Furthermore, we release
two new datasets, Set-LConVQA and Set-SNLI, for the set-consistency
verification task.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 05:11:11 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 04:07:06 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Song",
"Mooho",
""
],
[
"Son",
"Hyeryung",
""
],
[
"Lee",
"Jay-Yoon",
""
]
] | TITLE: Introducing Verification Task of Set Consistency with Set-Consistency
Energy Networks
ABSTRACT: Examining logical inconsistencies among multiple statements (such as
collections of sentences or question-answer pairs) is a crucial challenge in
machine learning, particularly for ensuring the safety and reliability of
models. Traditional methods that rely on pairwise comparisons often fail to
capture inconsistencies that only emerge when more than two statements are
evaluated collectively. To address this gap, we introduce the task of
set-consistency verification, an extension of natural language inference (NLI)
that assesses the logical coherence of entire sets rather than isolated pairs.
Building on this task, we present the Set-Consistency Energy Network
(SC-Energy), a novel model that employs a contrastive loss framework to learn
the compatibility among a collection of statements. Our approach not only
efficiently verifies inconsistencies and pinpoints the specific statements
responsible for logical contradictions, but also significantly outperforms
existing methods, including prompting-based LLMs. Furthermore, we release
two new datasets, Set-LConVQA and Set-SNLI, for the set-consistency
verification task.
|
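The SC-Energy abstract above describes a contrastive loss that separates consistent from inconsistent statement sets by their energy. A minimal, hypothetical sketch of such an objective (the paper's actual architecture and loss are not given here; the margin value and function name are illustrative assumptions):

```python
# Hypothetical sketch of a contrastive objective in the spirit of SC-Energy:
# an energy function assigns low energy to logically consistent statement
# sets and high energy to inconsistent ones; training pushes the two apart
# by at least a fixed margin.

def hinge_contrastive_loss(e_consistent, e_inconsistent, margin=1.0):
    """Margin loss: penalize cases where the consistent set's energy
    is not at least `margin` below the inconsistent set's energy."""
    return max(0.0, margin + e_consistent - e_inconsistent)

# toy energies (lower = more consistent)
loss_separated = hinge_contrastive_loss(0.2, 2.0)  # well separated -> zero loss
loss_close = hinge_contrastive_loss(1.5, 1.6)      # barely separated -> positive loss
```

With a loss of this shape, pinpointing the statements responsible for a contradiction could proceed by scoring subsets and looking for the ones whose removal drops the energy, though the abstract does not specify that procedure.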
2503.11117 | Kaixuan Jiang | Kaixuan Jiang, Yang Liu, Weixing Chen, Jingzhou Luo, Ziliang Chen,
Ling Pan, Guanbin Li, Liang Lin | Beyond the Destination: A Novel Benchmark for Exploration-Aware Embodied
Question Answering | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Embodied Question Answering (EQA) is a challenging task in embodied
intelligence that requires agents to dynamically explore 3D environments,
actively gather visual information, and perform multi-step reasoning to answer
questions. However, current EQA approaches suffer from critical limitations in
exploration efficiency, dataset design, and evaluation metrics. Moreover,
existing datasets often introduce biases or prior knowledge, leading to
disembodied reasoning, while frontier-based exploration strategies struggle in
cluttered environments and fail to ensure fine-grained exploration of
task-relevant areas. To address these challenges, we construct the
EXPloration-awaRe Embodied queStion anSwering Benchmark (EXPRESS-Bench), the
largest dataset designed specifically to evaluate both exploration and
reasoning capabilities. EXPRESS-Bench consists of 777 exploration trajectories
and 2,044 question-trajectory pairs. To improve exploration efficiency, we
propose Fine-EQA, a hybrid exploration model that integrates frontier-based and
goal-oriented navigation to guide agents toward task-relevant regions more
effectively. Additionally, we introduce a novel evaluation metric,
Exploration-Answer Consistency (EAC), which ensures faithful assessment by
measuring the alignment between answer grounding and exploration reliability.
Extensive experimental comparisons with state-of-the-art EQA models demonstrate
the effectiveness of our EXPRESS-Bench in advancing embodied exploration and
question reasoning.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 06:29:47 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 06:56:19 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Jiang",
"Kaixuan",
""
],
[
"Liu",
"Yang",
""
],
[
"Chen",
"Weixing",
""
],
[
"Luo",
"Jingzhou",
""
],
[
"Chen",
"Ziliang",
""
],
[
"Pan",
"Ling",
""
],
[
"Li",
"Guanbin",
""
],
[
"Lin",
"Liang",
""
]
] | TITLE: Beyond the Destination: A Novel Benchmark for Exploration-Aware Embodied
Question Answering
ABSTRACT: Embodied Question Answering (EQA) is a challenging task in embodied
intelligence that requires agents to dynamically explore 3D environments,
actively gather visual information, and perform multi-step reasoning to answer
questions. However, current EQA approaches suffer from critical limitations in
exploration efficiency, dataset design, and evaluation metrics. Moreover,
existing datasets often introduce biases or prior knowledge, leading to
disembodied reasoning, while frontier-based exploration strategies struggle in
cluttered environments and fail to ensure fine-grained exploration of
task-relevant areas. To address these challenges, we construct the
EXPloration-awaRe Embodied queStion anSwering Benchmark (EXPRESS-Bench), the
largest dataset designed specifically to evaluate both exploration and
reasoning capabilities. EXPRESS-Bench consists of 777 exploration trajectories
and 2,044 question-trajectory pairs. To improve exploration efficiency, we
propose Fine-EQA, a hybrid exploration model that integrates frontier-based and
goal-oriented navigation to guide agents toward task-relevant regions more
effectively. Additionally, we introduce a novel evaluation metric,
Exploration-Answer Consistency (EAC), which ensures faithful assessment by
measuring the alignment between answer grounding and exploration reliability.
Extensive experimental comparisons with state-of-the-art EQA models demonstrate
the effectiveness of our EXPRESS-Bench in advancing embodied exploration and
question reasoning.
|
2503.11197 | Gang Li | Gang Li, Jizhong Liu, Heinrich Dinkel, Yadong Niu, Junbo Zhang, Jian
Luan | Reinforcement Learning Outperforms Supervised Fine-Tuning: A Case Study
on Audio Question Answering | null | null | null | null | cs.SD cs.AI cs.CL eess.AS | http://creativecommons.org/licenses/by/4.0/ | Recently, reinforcement learning (RL) has been shown to greatly enhance the
reasoning capabilities of large language models (LLMs), and RL-based approaches
have been progressively applied to visual multimodal tasks. However, the audio
modality has largely been overlooked in these developments. Thus, we conduct a
series of RL explorations in audio understanding and reasoning, specifically
focusing on the audio question answering (AQA) task. We apply the group
relative policy optimization (GRPO) algorithm to Qwen2-Audio-7B-Instruct, and
our experiments demonstrate state-of-the-art performance on the MMAU Test-mini
benchmark, achieving an accuracy rate of 64.5%. The main findings in this
technical report are as follows: 1) The GRPO algorithm can be effectively
applied to large audio language models (LALMs), even when the model has only
8.2B parameters; 2) With only 38k post-training samples, RL significantly
outperforms supervised fine-tuning (SFT), indicating that RL-based approaches
can be effective without large datasets; 3) The explicit reasoning process has
not shown significant benefits for AQA tasks, and how to efficiently utilize
deep thinking remains an open question for further research; 4) LALMs still lag
far behind humans in auditory-language reasoning, suggesting that RL-based
approaches warrant further exploration. Our project is available at
https://github.com/xiaomi-research/r1-aqa and
https://huggingface.co/mispeech/r1-aqa.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 08:43:53 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 04:20:29 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 16:33:16 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Li",
"Gang",
""
],
[
"Liu",
"Jizhong",
""
],
[
"Dinkel",
"Heinrich",
""
],
[
"Niu",
"Yadong",
""
],
[
"Zhang",
"Junbo",
""
],
[
"Luan",
"Jian",
""
]
] | TITLE: Reinforcement Learning Outperforms Supervised Fine-Tuning: A Case Study
on Audio Question Answering
ABSTRACT: Recently, reinforcement learning (RL) has been shown to greatly enhance the
reasoning capabilities of large language models (LLMs), and RL-based approaches
have been progressively applied to visual multimodal tasks. However, the audio
modality has largely been overlooked in these developments. Thus, we conduct a
series of RL explorations in audio understanding and reasoning, specifically
focusing on the audio question answering (AQA) task. We apply the group
relative policy optimization (GRPO) algorithm to Qwen2-Audio-7B-Instruct, and
our experiments demonstrate state-of-the-art performance on the MMAU Test-mini
benchmark, achieving an accuracy rate of 64.5%. The main findings in this
technical report are as follows: 1) The GRPO algorithm can be effectively
applied to large audio language models (LALMs), even when the model has only
8.2B parameters; 2) With only 38k post-training samples, RL significantly
outperforms supervised fine-tuning (SFT), indicating that RL-based approaches
can be effective without large datasets; 3) The explicit reasoning process has
not shown significant benefits for AQA tasks, and how to efficiently utilize
deep thinking remains an open question for further research; 4) LALMs still lag
far behind humans in auditory-language reasoning, suggesting that RL-based
approaches warrant further exploration. Our project is available at
https://github.com/xiaomi-research/r1-aqa and
https://huggingface.co/mispeech/r1-aqa.
|
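GRPO, named in the abstract above, replaces a learned value baseline with group-relative reward normalization: each sampled response's reward is standardized against the other responses drawn for the same prompt. A minimal sketch of just that advantage computation (not the full RL loop, and not the paper's code):

```python
# Sketch of GRPO-style advantage estimation: standardize each response's
# reward within its group of samples for the same prompt, so responses
# better than the group mean get positive advantage and worse ones negative.

def group_relative_advantages(rewards):
    """(r - group_mean) / group_std for each reward; zeros if the group
    has no variance (all responses equally good)."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    if std == 0.0:
        return [0.0] * n
    return [(r - mean) / std for r in rewards]

# two correct (reward 1) and two incorrect (reward 0) sampled answers
advs = group_relative_advantages([1.0, 0.0, 1.0, 0.0])  # -> [1.0, -1.0, 1.0, -1.0]
```

These advantages then weight the policy-gradient update in place of a critic's value estimates, which is part of why GRPO is attractive for post-training large models.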
2503.11221 | Du Chen | Du Chen, Tianhe Wu, Kede Ma, Lei Zhang | Toward Generalized Image Quality Assessment: Relaxing the Perfect
Reference Quality Assumption | Accepted by CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Full-reference image quality assessment (FR-IQA) generally assumes that
reference images are of perfect quality. However, this assumption is flawed due
to the sensor and optical limitations of modern imaging systems. Moreover,
recent generative enhancement methods are capable of producing images of higher
quality than their originals. All of these developments challenge the effectiveness and
applicability of current FR-IQA models. To relax the assumption of perfect
reference image quality, we build a large-scale IQA database, namely DiffIQA,
containing approximately 180,000 images generated by a diffusion-based image
enhancer with adjustable hyper-parameters. Each image is annotated by human
subjects as either worse, similar, or better quality compared to its reference.
Building on this, we present a generalized FR-IQA model, namely Adaptive
Fidelity-Naturalness Evaluator (A-FINE), to accurately assess and adaptively
combine the fidelity and naturalness of a test image. A-FINE aligns well with
standard FR-IQA when the reference image is much more natural than the test
image. We demonstrate by extensive experiments that A-FINE surpasses standard
FR-IQA models on well-established IQA datasets and our newly created DiffIQA.
To further validate A-FINE, we additionally construct a super-resolution IQA
benchmark (SRIQA-Bench), encompassing test images derived from ten
state-of-the-art SR methods with reliable human quality annotations. Tests on
SRIQA-Bench re-affirm the advantages of A-FINE. The code and dataset are
available at https://tianhewu.github.io/A-FINE-page.github.io/.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 09:12:03 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 07:26:14 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Chen",
"Du",
""
],
[
"Wu",
"Tianhe",
""
],
[
"Ma",
"Kede",
""
],
[
"Zhang",
"Lei",
""
]
] | TITLE: Toward Generalized Image Quality Assessment: Relaxing the Perfect
Reference Quality Assumption
ABSTRACT: Full-reference image quality assessment (FR-IQA) generally assumes that
reference images are of perfect quality. However, this assumption is flawed due
to the sensor and optical limitations of modern imaging systems. Moreover,
recent generative enhancement methods are capable of producing images of higher
quality than their originals. All of these developments challenge the effectiveness and
applicability of current FR-IQA models. To relax the assumption of perfect
reference image quality, we build a large-scale IQA database, namely DiffIQA,
containing approximately 180,000 images generated by a diffusion-based image
enhancer with adjustable hyper-parameters. Each image is annotated by human
subjects as either worse, similar, or better quality compared to its reference.
Building on this, we present a generalized FR-IQA model, namely Adaptive
Fidelity-Naturalness Evaluator (A-FINE), to accurately assess and adaptively
combine the fidelity and naturalness of a test image. A-FINE aligns well with
standard FR-IQA when the reference image is much more natural than the test
image. We demonstrate by extensive experiments that A-FINE surpasses standard
FR-IQA models on well-established IQA datasets and our newly created DiffIQA.
To further validate A-FINE, we additionally construct a super-resolution IQA
benchmark (SRIQA-Bench), encompassing test images derived from ten
state-of-the-art SR methods with reliable human quality annotations. Tests on
SRIQA-Bench re-affirm the advantages of A-FINE. The code and dataset are
available at https://tianhewu.github.io/A-FINE-page.github.io/.
|
2503.11280 | Bryan Wilie | Bryan Wilie, Samuel Cahyawijaya, Junxian He, Pascale Fung | High-Dimensional Interlingual Representations of Large Language Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Large language models (LLMs) trained on massive multilingual datasets hint at
the formation of interlingual constructs--a shared subspace in the
representation space. However, evidence regarding this phenomenon is mixed,
leaving it unclear whether these models truly develop unified interlingual
representations, or merely partially aligned constructs. We explore 31
diverse languages varying in resource level, typology, and
geographical region, and find that multilingual LLMs exhibit inconsistent
cross-lingual alignments. To address this, we propose an interlingual
representation framework identifying both the shared interlingual semantic
subspace and fragmented components that exist due to representational
limitations. We introduce the Interlingual Local Overlap (ILO) score to quantify
interlingual alignment by comparing the local neighborhood structures of
high-dimensional representations. We utilize ILO to investigate the impact of
single-language fine-tuning on the interlingual representations in multilingual
LLMs. Our results indicate that training exclusively on a single language
disrupts the alignment in early layers, while freezing these layers preserves
the alignment of interlingual representations, leading to improved
cross-lingual generalization. These results validate our framework and metric
for evaluating interlingual representation, and further underscore that
interlingual alignment is crucial for scalable multilingual learning.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 10:39:27 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 12:16:42 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wilie",
"Bryan",
""
],
[
"Cahyawijaya",
"Samuel",
""
],
[
"He",
"Junxian",
""
],
[
"Fung",
"Pascale",
""
]
] | TITLE: High-Dimensional Interlingual Representations of Large Language Models
ABSTRACT: Large language models (LLMs) trained on massive multilingual datasets hint at
the formation of interlingual constructs--a shared subspace in the
representation space. However, evidence regarding this phenomenon is mixed,
leaving it unclear whether these models truly develop unified interlingual
representations, or merely partially aligned constructs. We explore 31
diverse languages varying in resource level, typology, and
geographical region, and find that multilingual LLMs exhibit inconsistent
cross-lingual alignments. To address this, we propose an interlingual
representation framework identifying both the shared interlingual semantic
subspace and fragmented components that exist due to representational
limitations. We introduce the Interlingual Local Overlap (ILO) score to quantify
interlingual alignment by comparing the local neighborhood structures of
high-dimensional representations. We utilize ILO to investigate the impact of
single-language fine-tuning on the interlingual representations in multilingual
LLMs. Our results indicate that training exclusively on a single language
disrupts the alignment in early layers, while freezing these layers preserves
the alignment of interlingual representations, leading to improved
cross-lingual generalization. These results validate our framework and metric
for evaluating interlingual representation, and further underscore that
interlingual alignment is crucial for scalable multilingual learning.
|
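The ILO score above compares local neighborhood structures across representation spaces. The abstract does not give the exact formulation, but a score in that spirit can be sketched as the average fraction of k-nearest neighbors shared between two spaces (all names and the neighbor rule below are illustrative assumptions):

```python
# Hedged sketch of a local-overlap score: for each item, compute its
# k-nearest-neighbor set in each of two representation spaces and average
# the fraction of shared neighbors. Perfectly aligned spaces score 1.0.

def knn_indices(points, i, k):
    """Indices of the k nearest neighbors of points[i] (squared Euclidean,
    excluding i itself; ties broken by index)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(points[i], points[j])), j)
        for j in range(len(points)) if j != i
    )
    return {j for _, j in dists[:k]}

def local_overlap(space_a, space_b, k=2):
    n = len(space_a)
    shared = [
        len(knn_indices(space_a, i, k) & knn_indices(space_b, i, k)) / k
        for i in range(n)
    ]
    return sum(shared) / n

# identical spaces overlap perfectly
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
score = local_overlap(pts, pts, k=2)  # -> 1.0
```

A neighborhood-based comparison like this is invariant to rotations and rescalings of either space, which is one reason local structure is a natural probe for cross-lingual alignment.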
2503.11281 | Anandakumar D | Praveen Shastry, Bhawana Sonawane, Kavya Mohan, Naveen Kumarasami,
Raghotham Sripadraj, Anandakumar D, Keerthana R, Mounigasri M, Kaviya SP,
Kishore Prasath Venkatesh, Bargava Subramanian, Kalyan Sivasailam | AI and Deep Learning for Automated Segmentation and Quantitative
Measurement of Spinal Structures in MRI | 16 pages, 2 figures | null | null | null | eess.IV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Accurate spinal structure measurement is crucial for assessing
spine health and diagnosing conditions like spondylosis, disc herniation, and
stenosis. Manual methods for measuring intervertebral disc height and spinal
canal diameter are subjective and time-consuming. Automated solutions are
needed to improve accuracy, efficiency, and reproducibility in clinical
practice.
Purpose: This study develops an autonomous AI system for segmenting and
measuring key spinal structures in MRI scans, focusing on intervertebral disc
height and spinal canal anteroposterior (AP) diameter in the cervical, lumbar,
and thoracic regions. The goal is to reduce clinician workload, enhance
diagnostic consistency, and improve assessments.
Methods: The AI model leverages deep learning architectures, including UNet,
nnU-Net, and CNNs. Trained on a large proprietary MRI dataset, it was validated
against expert annotations. Performance was evaluated using Dice coefficients
and segmentation accuracy.
Results: The AI model achieved Dice coefficients of 0.94 for lumbar, 0.91 for
cervical, and 0.90 for dorsal spine segmentation (D1-D12). It precisely
measured spinal parameters like disc height and canal diameter, demonstrating
robustness and clinical applicability.
Conclusion: The AI system effectively automates MRI-based spinal
measurements, improving accuracy and reducing clinician workload. Its
consistent performance across spinal regions supports clinical decision-making,
particularly in high-demand settings, enhancing spinal assessments and patient
outcomes.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 10:39:52 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 07:43:55 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 06:18:20 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Shastry",
"Praveen",
""
],
[
"Sonawane",
"Bhawana",
""
],
[
"Mohan",
"Kavya",
""
],
[
"Kumarasami",
"Naveen",
""
],
[
"Sripadraj",
"Raghotham",
""
],
[
"D",
"Anandakumar",
""
],
[
"R",
"Keerthana",
""
],
[
"M",
"Mounigasri",
""
],
[
"SP",
"Kaviya",
""
],
[
"Venkatesh",
"Kishore Prasath",
""
],
[
"Subramanian",
"Bargava",
""
],
[
"Sivasailam",
"Kalyan",
""
]
] | TITLE: AI and Deep Learning for Automated Segmentation and Quantitative
Measurement of Spinal Structures in MRI
ABSTRACT: Background: Accurate spinal structure measurement is crucial for assessing
spine health and diagnosing conditions like spondylosis, disc herniation, and
stenosis. Manual methods for measuring intervertebral disc height and spinal
canal diameter are subjective and time-consuming. Automated solutions are
needed to improve accuracy, efficiency, and reproducibility in clinical
practice.
Purpose: This study develops an autonomous AI system for segmenting and
measuring key spinal structures in MRI scans, focusing on intervertebral disc
height and spinal canal anteroposterior (AP) diameter in the cervical, lumbar,
and thoracic regions. The goal is to reduce clinician workload, enhance
diagnostic consistency, and improve assessments.
Methods: The AI model leverages deep learning architectures, including UNet,
nnU-Net, and CNNs. Trained on a large proprietary MRI dataset, it was validated
against expert annotations. Performance was evaluated using Dice coefficients
and segmentation accuracy.
Results: The AI model achieved Dice coefficients of 0.94 for lumbar, 0.91 for
cervical, and 0.90 for dorsal spine segmentation (D1-D12). It precisely
measured spinal parameters like disc height and canal diameter, demonstrating
robustness and clinical applicability.
Conclusion: The AI system effectively automates MRI-based spinal
measurements, improving accuracy and reducing clinician workload. Its
consistent performance across spinal regions supports clinical decision-making,
particularly in high-demand settings, enhancing spinal assessments and patient
outcomes.
|
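The Dice coefficients reported above (0.94 lumbar, 0.91 cervical, 0.90 dorsal) measure overlap between a predicted segmentation mask and the expert annotation. A minimal sketch of the metric over binary masks (toy data, not the study's pipeline):

```python
# Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks of equal length.
# 1.0 means perfect overlap with the expert annotation, 0.0 means none.

def dice(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both empty -> perfect match

pred  = [1, 1, 1, 0, 0, 0]   # predicted mask (flattened)
truth = [1, 1, 0, 0, 0, 0]   # expert annotation
dice(pred, truth)  # -> 0.8
```

In practice the masks are 2D or 3D arrays per vertebral level, flattened before this computation; the convention that two empty masks score 1.0 varies between implementations.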
2503.11498 | Slavek Zbirovsky | Sl\'avek Zbirovsk\'y, V\'aclav Ne\v{z}erka | Cloud2BIM: An open-source automatic pipeline for efficient conversion of
large-scale point clouds into IFC format | 53 pages, 23 figures | null | null | null | cs.CV cs.SE | http://creativecommons.org/licenses/by/4.0/ | Building Information Modeling (BIM) is an essential component in the
sustainable reconstruction and revitalization of ageing structures. However,
model creation usually relies on laborious manual transformation of the
unstructured point cloud data provided by laser scans or photogrammetry. This
paper presents Cloud2BIM, an open-source software tool designed to automate the
conversion of point clouds into BIM models compliant with the Industry
Foundation Classes (IFC) standard. Cloud2BIM integrates advanced algorithms for
wall and slab segmentation, opening detection, and room zoning based on real
wall surfaces, resulting in a comprehensive and fully automated workflow.
Unlike existing tools, it avoids computationally- and calibration-intensive
techniques such as RANSAC, supports non-orthogonal geometries, and provides
unprecedented processing speed-achieving results up to seven times faster than
fastest competing solutions. Systematic validation using benchmark datasets
confirms that Cloud2BIM is an easy-to-use, efficient, and scalable solution for
generating accurate BIM models, capable of converting extensive point cloud
datasets for entire buildings into IFC format with minimal user input.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 15:26:02 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 21:53:55 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zbirovský",
"Slávek",
""
],
[
"Nežerka",
"Václav",
""
]
] | TITLE: Cloud2BIM: An open-source automatic pipeline for efficient conversion of
large-scale point clouds into IFC format
ABSTRACT: Building Information Modeling (BIM) is an essential component in the
sustainable reconstruction and revitalization of ageing structures. However,
model creation usually relies on laborious manual transformation of the
unstructured point cloud data provided by laser scans or photogrammetry. This
paper presents Cloud2BIM, an open-source software tool designed to automate the
conversion of point clouds into BIM models compliant with the Industry
Foundation Classes (IFC) standard. Cloud2BIM integrates advanced algorithms for
wall and slab segmentation, opening detection, and room zoning based on real
wall surfaces, resulting in a comprehensive and fully automated workflow.
Unlike existing tools, it avoids computationally- and calibration-intensive
techniques such as RANSAC, supports non-orthogonal geometries, and provides
unprecedented processing speed, achieving results up to seven times faster than
the fastest competing solutions. Systematic validation using benchmark datasets
confirms that Cloud2BIM is an easy-to-use, efficient, and scalable solution for
generating accurate BIM models, capable of converting extensive point cloud
datasets for entire buildings into IFC format with minimal user input.
|
2503.11509 | Jonas Belouadi | Jonas Belouadi, Eddy Ilg, Margret Keuper, Hideki Tanaka, Masao
Utiyama, Raj Dabre, Steffen Eger, Simone Paolo Ponzetto | TikZero: Zero-Shot Text-Guided Graphics Program Synthesis | Project page: https://github.com/potamides/DeTikZify | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | With the rise of generative AI, synthesizing figures from text captions
becomes a compelling application. However, achieving high geometric precision
and editability requires representing figures as graphics programs in languages
like TikZ, and aligned training data (i.e., graphics programs with captions)
remains scarce. Meanwhile, large amounts of unaligned graphics programs and
captioned raster images are more readily available. We reconcile these
disparate data sources by presenting TikZero, which decouples graphics program
generation from text understanding by using image representations as an
intermediary bridge. It enables independent training on graphics programs and
captioned images and allows for zero-shot text-guided graphics program
synthesis during inference. We show that our method substantially outperforms
baselines that can only operate with caption-aligned graphics programs.
Furthermore, when leveraging caption-aligned graphics programs as a
complementary training signal, TikZero matches or exceeds the performance of
much larger models, including commercial systems like GPT-4o. Our code,
datasets, and select models are publicly available.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 15:29:58 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 12:42:41 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Belouadi",
"Jonas",
""
],
[
"Ilg",
"Eddy",
""
],
[
"Keuper",
"Margret",
""
],
[
"Tanaka",
"Hideki",
""
],
[
"Utiyama",
"Masao",
""
],
[
"Dabre",
"Raj",
""
],
[
"Eger",
"Steffen",
""
],
[
"Ponzetto",
"Simone Paolo",
""
]
] | TITLE: TikZero: Zero-Shot Text-Guided Graphics Program Synthesis
ABSTRACT: With the rise of generative AI, synthesizing figures from text captions
becomes a compelling application. However, achieving high geometric precision
and editability requires representing figures as graphics programs in languages
like TikZ, and aligned training data (i.e., graphics programs with captions)
remains scarce. Meanwhile, large amounts of unaligned graphics programs and
captioned raster images are more readily available. We reconcile these
disparate data sources by presenting TikZero, which decouples graphics program
generation from text understanding by using image representations as an
intermediary bridge. It enables independent training on graphics programs and
captioned images and allows for zero-shot text-guided graphics program
synthesis during inference. We show that our method substantially outperforms
baselines that can only operate with caption-aligned graphics programs.
Furthermore, when leveraging caption-aligned graphics programs as a
complementary training signal, TikZero matches or exceeds the performance of
much larger models, including commercial systems like GPT-4o. Our code,
datasets, and select models are publicly available.
|
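The PLM abstract above credits squared ReLU with encouraging activation sparsity, which lowers peak memory during inference. The function itself is simply relu(x) squared; a toy illustration of the sparsity it produces (values are made up):

```python
# Squared ReLU: max(x, 0)^2 — zero for every non-positive input, smooth and
# positive elsewhere. Exact zeros are what make downstream sparsity usable.

def squared_relu(x):
    return max(x, 0.0) ** 2

pre_activations = [-1.2, 0.0, 0.5, 2.0, -0.3]
post = [squared_relu(x) for x in pre_activations]  # -> [0.0, 0.0, 0.25, 4.0, 0.0]
sparsity = post.count(0.0) / len(post)             # 3 of 5 outputs exactly zero
```

Because the zeros are exact (not merely small), a runtime can skip the corresponding rows of the next weight matrix entirely, which is the memory/bandwidth benefit the co-design targets.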
2503.12167 | Cheng Deng | Cheng Deng, Luoyang Sun, Jiwen Jiang, Yongcheng Zeng, Xinjian Wu,
Wenxin Zhao, Qingfa Xiao, Jiachuan Wang, Haoyang Li, Lei Chen, Lionel M. Ni,
Haifeng Zhang, Jun Wang | PLM: Efficient Peripheral Language Models Hardware-Co-Designed for
Ubiquitous Computing | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While scaling laws have been continuously validated in large language models
(LLMs) with increasing model parameters, the inherent tension between the
inference demands of LLMs and the limited resources of edge devices poses a
critical challenge to the development of edge intelligence. Recently, numerous
small language models have emerged, aiming to distill the capabilities of LLMs
into smaller footprints. However, these models often retain the fundamental
architectural principles of their larger counterparts, still imposing
considerable strain on the storage and bandwidth capacities of edge devices. In
this paper, we introduce the PLM, a Peripheral Language Model, developed
through a co-design process that jointly optimizes model architecture and edge
system constraints. The PLM utilizes a Multi-head Latent Attention mechanism
and employs the squared ReLU activation function to encourage sparsity, thereby
reducing peak memory footprint during inference. During training, we collect
and reorganize open-source datasets, implement a multi-phase training strategy,
and empirically investigate the Warmup-Stable-Decay-Constant (WSDC) learning
rate scheduler. Additionally, we incorporate Reinforcement Learning from Human
Feedback (RLHF) by adopting the ARIES preference learning approach. Following a
two-phase SFT process, this method yields performance gains of 2% in general
tasks, 9% in the GSM8K task, and 11% in coding tasks. In addition to its novel
architecture, evaluation results demonstrate that PLM outperforms existing
small language models trained on publicly available data while maintaining the
lowest number of activated parameters. Furthermore, deployment across various
edge devices, including consumer-grade GPUs, mobile phones, and Raspberry Pis,
validates PLM's suitability for peripheral applications. The PLM series models
are publicly available at https://github.com/plm-team/PLM.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 15:11:17 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 15:23:29 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Deng",
"Cheng",
""
],
[
"Sun",
"Luoyang",
""
],
[
"Jiang",
"Jiwen",
""
],
[
"Zeng",
"Yongcheng",
""
],
[
"Wu",
"Xinjian",
""
],
[
"Zhao",
"Wenxin",
""
],
[
"Xiao",
"Qingfa",
""
],
[
"Wang",
"Jiachuan",
""
],
[
"Li",
"Haoyang",
""
],
[
"Chen",
"Lei",
""
],
[
"Ni",
"Lionel M.",
""
],
[
"Zhang",
"Haifeng",
""
],
[
"Wang",
"Jun",
""
]
] | TITLE: PLM: Efficient Peripheral Language Models Hardware-Co-Designed for
Ubiquitous Computing
ABSTRACT: While scaling laws have been continuously validated in large language models
(LLMs) with increasing model parameters, the inherent tension between the
inference demands of LLMs and the limited resources of edge devices poses a
critical challenge to the development of edge intelligence. Recently, numerous
small language models have emerged, aiming to distill the capabilities of LLMs
into smaller footprints. However, these models often retain the fundamental
architectural principles of their larger counterparts, still imposing
considerable strain on the storage and bandwidth capacities of edge devices. In
this paper, we introduce the PLM, a Peripheral Language Model, developed
through a co-design process that jointly optimizes model architecture and edge
system constraints. The PLM utilizes a Multi-head Latent Attention mechanism
and employs the squared ReLU activation function to encourage sparsity, thereby
reducing peak memory footprint during inference. During training, we collect
and reorganize open-source datasets, implement a multi-phase training strategy,
and empirically investigate the Warmup-Stable-Decay-Constant (WSDC) learning
rate scheduler. Additionally, we incorporate Reinforcement Learning from Human
Feedback (RLHF) by adopting the ARIES preference learning approach. Following a
two-phase SFT process, this method yields performance gains of 2% in general
tasks, 9% in the GSM8K task, and 11% in coding tasks. In addition to its novel
architecture, evaluation results demonstrate that PLM outperforms existing
small language models trained on publicly available data while maintaining the
lowest number of activated parameters. Furthermore, deployment across various
edge devices, including consumer-grade GPUs, mobile phones, and Raspberry Pis,
validates PLM's suitability for peripheral applications. The PLM series models
are publicly available at https://github.com/plm-team/PLM.
|
2503.12374 | Zhi Chen | Zhi Chen, Wei Ma, Lingxiao Jiang | Unveiling Pitfalls: Understanding Why AI-driven Code Agents Fail at
GitHub Issue Resolution | null | null | null | null | cs.SE cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI-driven software development has rapidly advanced with the emergence of
software development agents that leverage large language models (LLMs) to
tackle complex, repository-level software engineering tasks. These agents go
beyond just generation of final code; they engage in multi-step reasoning,
utilize various tools for code modification and debugging, and interact with
execution environments to diagnose and iteratively resolve issues. However,
most existing evaluations focus primarily on static analyses of final code
outputs, yielding limited insights into the agents' dynamic problem-solving
processes. To fill this gap, we conduct an in-depth empirical study on 3,977
solving-phase trajectories and 3,931 testing-phase logs from 8 top-ranked
agents evaluated on 500 GitHub issues in the SWE-Bench benchmark. Our
exploratory analysis shows that Python execution errors during the issue
resolution phase correlate with lower resolution rates and increased reasoning
overheads. We have identified the most prevalent errors -- such as
ModuleNotFoundError and TypeError -- and highlighted particularly challenging
errors like OSError and database-related issues (e.g., IntegrityError) that
demand significantly more debugging effort. Furthermore, we have discovered 3
bugs in the SWE-Bench platform that affect benchmark fairness and accuracy;
these issues have been reported to and confirmed by the maintainers. To promote
transparency and foster future research, we publicly share our datasets and
analysis scripts.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 06:24:51 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 10:08:16 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Chen",
"Zhi",
""
],
[
"Ma",
"Wei",
""
],
[
"Jiang",
"Lingxiao",
""
]
] | TITLE: Unveiling Pitfalls: Understanding Why AI-driven Code Agents Fail at
GitHub Issue Resolution
ABSTRACT: AI-driven software development has rapidly advanced with the emergence of
software development agents that leverage large language models (LLMs) to
tackle complex, repository-level software engineering tasks. These agents go
beyond merely generating final code; they engage in multi-step reasoning,
utilize various tools for code modification and debugging, and interact with
execution environments to diagnose and iteratively resolve issues. However,
most existing evaluations focus primarily on static analyses of final code
outputs, yielding limited insights into the agents' dynamic problem-solving
processes. To fill this gap, we conduct an in-depth empirical study on 3,977
solving-phase trajectories and 3,931 testing-phase logs from 8 top-ranked
agents evaluated on 500 GitHub issues in the SWE-Bench benchmark. Our
exploratory analysis shows that Python execution errors during the issue
resolution phase correlate with lower resolution rates and increased reasoning
overheads. We have identified the most prevalent errors -- such as
ModuleNotFoundError and TypeError -- and highlighted particularly challenging
errors like OSError and database-related issues (e.g., IntegrityError) that
demand significantly more debugging effort. Furthermore, we have discovered 3
bugs in the SWE-Bench platform that affect benchmark fairness and accuracy;
these issues have been reported to and confirmed by the maintainers. To promote
transparency and foster future research, we publicly share our datasets and
analysis scripts.
|
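The study above tallies Python execution errors (ModuleNotFoundError, TypeError, OSError, ...) across agent testing-phase logs. A toy version of that kind of tally, with a made-up log format rather than the authors' actual data:

```python
# Count Python exception types appearing in (hypothetical) agent logs.
from collections import Counter

logs = [
    "Traceback ... ModuleNotFoundError: No module named 'requests'",
    "Traceback ... TypeError: unsupported operand type(s)",
    "Traceback ... ModuleNotFoundError: No module named 'yaml'",
]

def tally_errors(lines, errors=("ModuleNotFoundError", "TypeError", "OSError")):
    c = Counter()
    for line in lines:
        for err in errors:
            if err in line:
                c[err] += 1
    return c

counts = tally_errors(logs)
assert counts["ModuleNotFoundError"] == 2 and counts["TypeError"] == 1
```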
2503.12524 | Jinsik Lee | LG AI Research, Kyunghoon Bae, Eunbi Choi, Kibong Choi, Stanley
Jungkyu Choi, Yemuk Choi, Seokhee Hong, Junwon Hwang, Hyojin Jeon, Kijeong
Jeon, Gerrard Jeongwon Jo, Hyunjik Jo, Jiyeon Jung, Hyosang Kim, Joonkee Kim,
Seonghwan Kim, Soyeon Kim, Sunkyoung Kim, Yireun Kim, Yongil Kim, Youchul
Kim, Edward Hwayoung Lee, Haeju Lee, Honglak Lee, Jinsik Lee, Kyungmin Lee,
Sangha Park, Yongmin Park, Sihoon Yang, Heuiyeen Yeen, Sihyuk Yi, Hyeongu Yun | EXAONE Deep: Reasoning Enhanced Language Models | arXiv admin note: substantial text overlap with arXiv:2412.04862,
arXiv:2408.03541 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present the EXAONE Deep series, which exhibits superior capabilities in
various reasoning tasks, including math and coding benchmarks. We train our
models mainly on the reasoning-specialized dataset that incorporates long
streams of thought processes. Evaluation results show that our smaller models,
EXAONE Deep 2.4B and 7.8B, outperform other models of comparable size, while
the largest model, EXAONE Deep 32B, demonstrates competitive performance
against leading open-weight models. All EXAONE Deep models are openly available
for research purposes and can be downloaded from
https://huggingface.co/LGAI-EXAONE
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 14:39:33 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 07:09:24 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Research",
"LG AI",
""
],
[
"Bae",
"Kyunghoon",
""
],
[
"Choi",
"Eunbi",
""
],
[
"Choi",
"Kibong",
""
],
[
"Choi",
"Stanley Jungkyu",
""
],
[
"Choi",
"Yemuk",
""
],
[
"Hong",
"Seokhee",
""
],
[
"Hwang",
"Junwon",
""
],
[
"Jeon",
"Hyojin",
""
],
[
"Jeon",
"Kijeong",
""
],
[
"Jo",
"Gerrard Jeongwon",
""
],
[
"Jo",
"Hyunjik",
""
],
[
"Jung",
"Jiyeon",
""
],
[
"Kim",
"Hyosang",
""
],
[
"Kim",
"Joonkee",
""
],
[
"Kim",
"Seonghwan",
""
],
[
"Kim",
"Soyeon",
""
],
[
"Kim",
"Sunkyoung",
""
],
[
"Kim",
"Yireun",
""
],
[
"Kim",
"Yongil",
""
],
[
"Kim",
"Youchul",
""
],
[
"Lee",
"Edward Hwayoung",
""
],
[
"Lee",
"Haeju",
""
],
[
"Lee",
"Honglak",
""
],
[
"Lee",
"Jinsik",
""
],
[
"Lee",
"Kyungmin",
""
],
[
"Park",
"Sangha",
""
],
[
"Park",
"Yongmin",
""
],
[
"Yang",
"Sihoon",
""
],
[
"Yeen",
"Heuiyeen",
""
],
[
"Yi",
"Sihyuk",
""
],
[
"Yun",
"Hyeongu",
""
]
] | TITLE: EXAONE Deep: Reasoning Enhanced Language Models
ABSTRACT: We present the EXAONE Deep series, which exhibits superior capabilities in
various reasoning tasks, including math and coding benchmarks. We train our
models mainly on the reasoning-specialized dataset that incorporates long
streams of thought processes. Evaluation results show that our smaller models,
EXAONE Deep 2.4B and 7.8B, outperform other models of comparable size, while
the largest model, EXAONE Deep 32B, demonstrates competitive performance
against leading open-weight models. All EXAONE Deep models are openly available
for research purposes and can be downloaded from
https://huggingface.co/LGAI-EXAONE
|
2503.12793 | Yingzhe Xu | Yechao Zhang, Yingzhe Xu, Junyu Shi, Leo Yu Zhang, Shengshan Hu,
Minghui Li, Yanjun Zhang | Improving Generalization of Universal Adversarial Perturbation via
Dynamic Maximin Optimization | Accepted in AAAI 2025 | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks (DNNs) are susceptible to universal adversarial
perturbations (UAPs). These perturbations are meticulously designed to fool the
target model universally across all sample classes. Unlike instance-specific
adversarial examples (AEs), generating UAPs is more complex because they must
be generalized across a wide range of data samples and models. Our research
reveals that existing universal attack methods, which optimize UAPs using DNNs
with static model parameter snapshots, do not fully leverage the potential of
DNNs to generate more effective UAPs. Rather than optimizing UAPs against
static DNN models with a fixed training set, we suggest using dynamic
model-data pairs to generate UAPs. In particular, we introduce a dynamic
maximin optimization strategy, aiming to optimize the UAP across a variety of
optimal model-data pairs. We term this approach DM-UAP. DM-UAP utilizes an
iterative max-min-min optimization framework that refines the model-data pairs,
coupled with a curriculum UAP learning algorithm to examine the combined space
of model parameters and data thoroughly. Comprehensive experiments on the
ImageNet dataset demonstrate that the proposed DM-UAP markedly enhances both
cross-sample universality and cross-model transferability of UAPs. Using only
500 samples for UAP generation, DM-UAP outperforms the state-of-the-art
approach with an average increase in fooling ratio of 12.108%.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 04:01:37 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 09:12:16 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhang",
"Yechao",
""
],
[
"Xu",
"Yingzhe",
""
],
[
"Shi",
"Junyu",
""
],
[
"Zhang",
"Leo Yu",
""
],
[
"Hu",
"Shengshan",
""
],
[
"Li",
"Minghui",
""
],
[
"Zhang",
"Yanjun",
""
]
] | TITLE: Improving Generalization of Universal Adversarial Perturbation via
Dynamic Maximin Optimization
ABSTRACT: Deep neural networks (DNNs) are susceptible to universal adversarial
perturbations (UAPs). These perturbations are meticulously designed to fool the
target model universally across all sample classes. Unlike instance-specific
adversarial examples (AEs), generating UAPs is more complex because they must
be generalized across a wide range of data samples and models. Our research
reveals that existing universal attack methods, which optimize UAPs using DNNs
with static model parameter snapshots, do not fully leverage the potential of
DNNs to generate more effective UAPs. Rather than optimizing UAPs against
static DNN models with a fixed training set, we suggest using dynamic
model-data pairs to generate UAPs. In particular, we introduce a dynamic
maximin optimization strategy, aiming to optimize the UAP across a variety of
optimal model-data pairs. We term this approach DM-UAP. DM-UAP utilizes an
iterative max-min-min optimization framework that refines the model-data pairs,
coupled with a curriculum UAP learning algorithm to examine the combined space
of model parameters and data thoroughly. Comprehensive experiments on the
ImageNet dataset demonstrate that the proposed DM-UAP markedly enhances both
cross-sample universality and cross-model transferability of UAPs. Using only
500 samples for UAP generation, DM-UAP outperforms the state-of-the-art
approach with an average increase in fooling ratio of 12.108%.
|
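The DM-UAP abstract above describes optimizing a universal perturbation against dynamically refined model-data pairs via maximin optimization. A toy sketch of that idea on a logistic model (everything here, the model, step sizes, and loop structure, is our illustrative assumption, not the authors' DM-UAP code):

```python
import numpy as np

# Inner loop: minimize loss over model weights on perturbed data.
# Outer loop: maximize loss over one shared (universal) perturbation.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))          # toy "images"
y = rng.integers(0, 2, size=64)        # binary labels
w = np.zeros(10)                        # toy linear model
delta = np.zeros(10)                    # universal perturbation
eps = 0.5                               # L-inf budget

def loss_grad(w, X, y):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))  # sigmoid predictions
    g_w = X.T @ (p - y) / len(y)        # gradient w.r.t. weights
    g_x = np.outer(p - y, w).mean(0)    # gradient w.r.t. a shared input shift
    return g_w, g_x

for _ in range(100):
    g_w, _ = loss_grad(w, X + delta, y)
    w -= 0.5 * g_w                      # min step: refit model on perturbed data
    _, g_x = loss_grad(w, X + delta, y)
    delta = np.clip(delta + 0.1 * np.sign(g_x), -eps, eps)  # max step: UAP ascent

assert np.all(np.abs(delta) <= eps)     # perturbation respects the budget
```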
2503.13265 | Luxi Chen | Luxi Chen, Zihan Zhou, Min Zhao, Yikai Wang, Ge Zhang, Wenhao Huang,
Hao Sun, Ji-Rong Wen, Chongxuan Li | FlexWorld: Progressively Expanding 3D Scenes for Flexiable-View
Synthesis | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating flexible-view 3D scenes, including 360{\deg} rotation and zooming,
from single images is challenging due to a lack of 3D data. To this end, we
introduce FlexWorld, a novel framework consisting of two key components: (1) a
strong video-to-video (V2V) diffusion model to generate high-quality novel view
images from incomplete input rendered from a coarse scene, and (2) a
progressive expansion process to construct a complete 3D scene. In particular,
leveraging an advanced pre-trained video model and accurate depth-estimated
training pairs, our V2V model can generate novel views under large camera pose
variations. Building upon it, FlexWorld progressively generates new 3D content
and integrates it into the global scene through geometry-aware scene fusion.
Extensive experiments demonstrate the effectiveness of FlexWorld in generating
high-quality novel view videos and flexible-view 3D scenes from single images,
achieving superior visual quality under multiple popular metrics and datasets
compared to existing state-of-the-art methods. Qualitatively, we highlight that
FlexWorld can generate high-fidelity scenes with flexible views like 360{\deg}
rotations and zooming. Project page: https://ml-gsai.github.io/FlexWorld.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 15:18:38 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 08:26:31 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Chen",
"Luxi",
""
],
[
"Zhou",
"Zihan",
""
],
[
"Zhao",
"Min",
""
],
[
"Wang",
"Yikai",
""
],
[
"Zhang",
"Ge",
""
],
[
"Huang",
"Wenhao",
""
],
[
"Sun",
"Hao",
""
],
[
"Wen",
"Ji-Rong",
""
],
[
"Li",
"Chongxuan",
""
]
] | TITLE: FlexWorld: Progressively Expanding 3D Scenes for Flexiable-View
Synthesis
ABSTRACT: Generating flexible-view 3D scenes, including 360{\deg} rotation and zooming,
from single images is challenging due to a lack of 3D data. To this end, we
introduce FlexWorld, a novel framework consisting of two key components: (1) a
strong video-to-video (V2V) diffusion model to generate high-quality novel view
images from incomplete input rendered from a coarse scene, and (2) a
progressive expansion process to construct a complete 3D scene. In particular,
leveraging an advanced pre-trained video model and accurate depth-estimated
training pairs, our V2V model can generate novel views under large camera pose
variations. Building upon it, FlexWorld progressively generates new 3D content
and integrates it into the global scene through geometry-aware scene fusion.
Extensive experiments demonstrate the effectiveness of FlexWorld in generating
high-quality novel view videos and flexible-view 3D scenes from single images,
achieving superior visual quality under multiple popular metrics and datasets
compared to existing state-of-the-art methods. Qualitatively, we highlight that
FlexWorld can generate high-fidelity scenes with flexible views like 360{\deg}
rotations and zooming. Project page: https://ml-gsai.github.io/FlexWorld.
|
2503.13491 | Andreas Patakis | George S. Theodoropoulos, Andreas Patakis, Andreas Tritsarolis, Yannis
Theodoridis | FLP-XR: Future Location Prediction on Extreme Scale Maritime Data in
Real-time | null | null | null | null | eess.SP cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Movements of maritime vessels are inherently complex and challenging to model
due to the dynamic and often unpredictable nature of maritime operations. Even
within structured maritime environments, such as shipping lanes and port
approaches, where vessels adhere to navigational rules and predefined sea
routes, uncovering underlying patterns is far from trivial. The necessity for
accurate modeling of the mobility of maritime vessels arises from the numerous
applications it serves, including risk assessment for collision avoidance,
optimization of shipping routes, and efficient port management. This paper
introduces FLP-XR, a model that leverages maritime mobility data to construct a
robust framework that offers precise predictions while ensuring extremely fast
training and inference capabilities. We demonstrate the efficiency of our
approach through an extensive experimental study using three real-world AIS
datasets. According to the experimental results, FLP-XR outperforms the current
state-of-the-art in many cases while performing 2-3 orders of magnitude
faster in both training and inference.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 13:31:42 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 07:34:50 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Theodoropoulos",
"George S.",
""
],
[
"Patakis",
"Andreas",
""
],
[
"Tritsarolis",
"Andreas",
""
],
[
"Theodoridis",
"Yannis",
""
]
] | TITLE: FLP-XR: Future Location Prediction on Extreme Scale Maritime Data in
Real-time
ABSTRACT: Movements of maritime vessels are inherently complex and challenging to model
due to the dynamic and often unpredictable nature of maritime operations. Even
within structured maritime environments, such as shipping lanes and port
approaches, where vessels adhere to navigational rules and predefined sea
routes, uncovering underlying patterns is far from trivial. The necessity for
accurate modeling of the mobility of maritime vessels arises from the numerous
applications it serves, including risk assessment for collision avoidance,
optimization of shipping routes, and efficient port management. This paper
introduces FLP-XR, a model that leverages maritime mobility data to construct a
robust framework that offers precise predictions while ensuring extremely fast
training and inference capabilities. We demonstrate the efficiency of our
approach through an extensive experimental study using three real-world AIS
datasets. According to the experimental results, FLP-XR outperforms the current
state-of-the-art in many cases while performing 2-3 orders of magnitude
faster in both training and inference.
|
2503.13551 | Teng Wang | Teng Wang, Zhangyi Jiang, Zhenqi He, Wenhan Yang, Yanan Zheng, Zeyu
Li, Zifan He, Shenyang Tong, Hailei Gong | Towards Hierarchical Multi-Step Reward Models for Enhanced Reasoning in
Large Language Models | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies show that Large Language Models (LLMs) achieve strong
reasoning capabilities through supervised fine-tuning or reinforcement
learning. However, a key approach, the Process Reward Model (PRM), suffers from
reward hacking, making it unreliable in identifying the best intermediate
steps. In this paper, we propose a novel reward model approach, Hierarchical
Reward Model (HRM), which evaluates both individual and consecutive reasoning
steps at both fine-grained and coarse-grained levels. HRM performs better in
assessing reasoning coherence and self-reflection, particularly when the
previous reasoning step is incorrect. Furthermore, to address the inefficiency
of autonomously generating PRM training data via Monte Carlo Tree Search (MCTS),
we introduce a lightweight and effective data augmentation strategy called
Hierarchical Node Compression (HNC) based on node merging (combining two
consecutive reasoning steps into one step) in the tree structure. This approach
diversifies MCTS results for HRM with negligible computational overhead,
enhancing label robustness by introducing noise. Empirical results on the
PRM800K dataset demonstrate that HRM, in conjunction with HNC, achieves
superior stability and reliability in evaluation compared to PRM. Furthermore,
cross-domain evaluations on MATH500 and GSM8K confirm HRM's superior
generalization and robustness across diverse reasoning tasks. The code for all
experiments will be released at
https://github.com/tengwang0318/hierarchial_reward_model.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 15:18:40 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 15:43:56 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wang",
"Teng",
""
],
[
"Jiang",
"Zhangyi",
""
],
[
"He",
"Zhenqi",
""
],
[
"Yang",
"Wenhan",
""
],
[
"Zheng",
"Yanan",
""
],
[
"Li",
"Zeyu",
""
],
[
"He",
"Zifan",
""
],
[
"Tong",
"Shenyang",
""
],
[
"Gong",
"Hailei",
""
]
] | TITLE: Towards Hierarchical Multi-Step Reward Models for Enhanced Reasoning in
Large Language Models
ABSTRACT: Recent studies show that Large Language Models (LLMs) achieve strong
reasoning capabilities through supervised fine-tuning or reinforcement
learning. However, a key approach, the Process Reward Model (PRM), suffers from
reward hacking, making it unreliable in identifying the best intermediate
steps. In this paper, we propose a novel reward model approach, Hierarchical
Reward Model (HRM), which evaluates both individual and consecutive reasoning
steps at both fine-grained and coarse-grained levels. HRM performs better in
assessing reasoning coherence and self-reflection, particularly when the
previous reasoning step is incorrect. Furthermore, to address the inefficiency
of autonomously generating PRM training data via Monte Carlo Tree Search (MCTS),
we introduce a lightweight and effective data augmentation strategy called
Hierarchical Node Compression (HNC) based on node merging (combining two
consecutive reasoning steps into one step) in the tree structure. This approach
diversifies MCTS results for HRM with negligible computational overhead,
enhancing label robustness by introducing noise. Empirical results on the
PRM800K dataset demonstrate that HRM, in conjunction with HNC, achieves
superior stability and reliability in evaluation compared to PRM. Furthermore,
cross-domain evaluations on MATH500 and GSM8K confirm HRM's superior
generalization and robustness across diverse reasoning tasks. The code for all
experiments will be released at
https://github.com/tengwang0318/hierarchial_reward_model.
|
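The HRM abstract above defines Hierarchical Node Compression (HNC) as merging two consecutive reasoning steps into one step in the MCTS tree. A minimal sketch of that merge on a linear trajectory (the function and data are hypothetical, not the paper's implementation):

```python
# Merge one random pair of consecutive reasoning steps into a single step,
# yielding an extra, coarser-grained sample from the same rollout.
import random

def hnc_merge(steps, rng=random):
    """Return a copy of `steps` with one random consecutive pair merged."""
    if len(steps) < 2:
        return list(steps)
    i = rng.randrange(len(steps) - 1)
    return steps[:i] + [steps[i] + " " + steps[i + 1]] + steps[i + 2:]

trajectory = ["step1: set up equation", "step2: isolate x", "step3: check answer"]
random.seed(0)
augmented = hnc_merge(trajectory)
assert len(augmented) == len(trajectory) - 1   # exactly one pair was merged
```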
2503.13677 | Mehrnoush Ghazanfariharandi | Mehrnoush Ghazanfariharandi, Robert Mieth | Value-Oriented Forecast Combinations for Unit Commitment | null | null | null | null | math.OC cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Value-oriented forecasts for two-stage power system operational problems have
been demonstrated to reduce cost, but prove to be computationally challenging
for large-scale systems because the underlying optimization problem must be
internalized into the forecast model training. Therefore, existing approaches
typically scale poorly in the usable training data or require relaxations of
the underlying optimization. This paper presents a method for value-oriented
forecast combinations using progressive hedging, which unlocks high-fidelity,
at-scale models and large-scale datasets in training. We also derive a direct
one-shot training model for reference and study how different modifications of
the training model impact the solution quality. Our method reduces operation
cost by 1.8% on average and trains forecast combinations for a 2736-bus test
system with one year of data within 20 hours.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 19:31:13 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ghazanfariharandi",
"Mehrnoush",
""
],
[
"Mieth",
"Robert",
""
]
] | TITLE: Value-Oriented Forecast Combinations for Unit Commitment
ABSTRACT: Value-oriented forecasts for two-stage power system operational problems have
been demonstrated to reduce cost, but prove to be computationally challenging
for large-scale systems because the underlying optimization problem must be
internalized into the forecast model training. Therefore, existing approaches
typically scale poorly in the usable training data or require relaxations of
the underlying optimization. This paper presents a method for value-oriented
forecast combinations using progressive hedging, which unlocks high-fidelity,
at-scale models and large-scale datasets in training. We also derive a direct
one-shot training model for reference and study how different modifications of
the training model impact the solution quality. Our method reduces operation
cost by 1.8% on average and trains forecast combinations for a 2736-bus test
system with one year of data within 20 hours.
|
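The abstract above trains forecast combinations with progressive hedging. As a reminder of the bare algorithm (a deliberately tiny convex toy, far simpler than the unit-commitment training problem in the paper), per-scenario subproblems are solved with a penalty pulling them toward consensus, and multipliers are updated until the scenario decisions agree:

```python
# Toy progressive hedging: scenario subproblems min 0.5*(x - s)^2, coupled
# by a nonanticipativity (consensus) constraint enforced via multipliers w
# and a quadratic proximal penalty with weight rho.
import statistics

scenarios = [1.0, 2.0, 6.0]      # per-scenario "ideal" decisions
rho = 1.0
w = [0.0] * len(scenarios)       # multipliers
xbar = 0.0                       # consensus value

for _ in range(50):
    # Each subproblem min_x 0.5*(x - s)^2 + w*x + 0.5*rho*(x - xbar)^2
    # has the closed-form minimizer below.
    x = [(s - wi + rho * xbar) / (1 + rho) for s, wi in zip(scenarios, w)]
    xbar = statistics.fmean(x)
    w = [wi + rho * (xi - xbar) for wi, xi in zip(w, x)]

assert max(abs(xi - xbar) for xi in x) < 1e-6   # consensus reached
assert abs(xbar - 3.0) < 1e-6                   # = mean of scenario targets
```

For this convex toy the consensus solution is just the scenario mean; the point of the decomposition is that each per-scenario solve stays small and parallelizable.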
2503.13954 | Ni Tianhao | Tianhao Ni, Bingjie Li and Zhigang Yao | Enhanced High-Dimensional Data Visualization through Adaptive
Multi-Scale Manifold Embedding | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | To address the dual challenges of the curse of dimensionality and the
difficulty in separating intra-cluster and inter-cluster structures in
high-dimensional manifold embedding, we propose an Adaptive Multi-Scale
Manifold Embedding (AMSME) algorithm. By introducing ordinal distance to
replace traditional Euclidean distances, we theoretically demonstrate that
ordinal distance overcomes the constraints of the curse of dimensionality in
high-dimensional spaces, effectively distinguishing heterogeneous samples. We
design an adaptive neighborhood adjustment method to construct similarity
graphs that simultaneously balance intra-cluster compactness and inter-cluster
separability. Furthermore, we develop a two-stage embedding framework: the
first stage achieves preliminary cluster separation while preserving
connectivity between structurally similar clusters via the similarity graph,
and the second stage enhances inter-cluster separation through a label-driven
distance reweighting. Experimental results demonstrate that AMSME significantly
preserves intra-cluster topological structures and improves inter-cluster
separation on real-world datasets. Additionally, leveraging its
multi-resolution analysis capability, AMSME discovers novel neuronal subtypes
in the mouse lumbar dorsal root ganglion scRNA-seq dataset, with marker gene
analysis revealing their distinct biological roles.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 06:46:53 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 05:21:06 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ni",
"Tianhao",
""
],
[
"Li",
"Bingjie",
""
],
[
"Yao",
"Zhigang",
""
]
] | TITLE: Enhanced High-Dimensional Data Visualization through Adaptive
Multi-Scale Manifold Embedding
ABSTRACT: To address the dual challenges of the curse of dimensionality and the
difficulty in separating intra-cluster and inter-cluster structures in
high-dimensional manifold embedding, we propose an Adaptive Multi-Scale
Manifold Embedding (AMSME) algorithm. By introducing ordinal distance to
replace traditional Euclidean distances, we theoretically demonstrate that
ordinal distance overcomes the constraints of the curse of dimensionality in
high-dimensional spaces, effectively distinguishing heterogeneous samples. We
design an adaptive neighborhood adjustment method to construct similarity
graphs that simultaneously balance intra-cluster compactness and inter-cluster
separability. Furthermore, we develop a two-stage embedding framework: the
first stage achieves preliminary cluster separation while preserving
connectivity between structurally similar clusters via the similarity graph,
and the second stage enhances inter-cluster separation through a label-driven
distance reweighting. Experimental results demonstrate that AMSME significantly
preserves intra-cluster topological structures and improves inter-cluster
separation on real-world datasets. Additionally, leveraging its
multi-resolution analysis capability, AMSME discovers novel neuronal subtypes
in the mouse lumbar dorsal root ganglion scRNA-seq dataset, with marker gene
analysis revealing their distinct biological roles.
|
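The AMSME abstract above replaces Euclidean distances with an ordinal distance to sidestep distance concentration in high dimensions. One plausible reading of that idea (our reconstruction, not the authors' code) is to rank each point's neighbors and use the symmetrized ranks as the distance:

```python
import numpy as np

# Few points in a high-dimensional space, where raw Euclidean distances
# concentrate; neighbor *ranks* remain well separated.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 100))

D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # Euclidean
ranks = D.argsort(axis=1).argsort(axis=1)  # row-wise neighbor ranks
ordinal = (ranks + ranks.T) / 2.0          # symmetrized ordinal distance

assert np.all(np.diag(ranks) == 0)         # each point is its own 0th neighbor
```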
2503.14234 | Ruiyi Yang | Ruiyi Yang, Hao Xue, Imran Razzak, Hakim Hacid, Flora D. Salim | KG-IRAG: A Knowledge Graph-Based Iterative Retrieval-Augmented
Generation Framework for Temporal Reasoning | 14 pages, 4 figures | null | null | null | cs.AI cs.MA | http://creativecommons.org/licenses/by/4.0/ | Graph Retrieval-Augmented Generation (GraphRAG) has proven highly effective
in enhancing the performance of Large Language Models (LLMs) on tasks that
require external knowledge. By leveraging Knowledge Graphs (KGs), GraphRAG
improves information retrieval for complex reasoning tasks, providing more
precise and comprehensive retrieval and generating more accurate responses to
QAs. However, most RAG methods fall short in addressing multi-step reasoning,
particularly when both information extraction and inference are necessary. To
address this limitation, this paper presents Knowledge Graph-Based Iterative
Retrieval-Augmented Generation (KG-IRAG), a novel framework that integrates KGs
with iterative reasoning to improve LLMs' ability to handle queries involving
temporal and logical dependencies. Through iterative retrieval steps, KG-IRAG
incrementally gathers relevant data from external KGs, enabling step-by-step
reasoning. The proposed approach is particularly suited for scenarios where
reasoning is required alongside dynamic temporal data extraction, such as
determining optimal travel times based on weather conditions or traffic
patterns. Experimental results show that KG-IRAG improves accuracy in complex
reasoning tasks by effectively integrating external knowledge with iterative,
logic-based retrieval. Additionally, three new datasets: weatherQA-Irish,
weatherQA-Sydney, and trafficQA-TFNSW, are formed to evaluate KG-IRAG's
performance, demonstrating its potential beyond traditional RAG applications.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 13:11:43 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 04:49:29 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Yang",
"Ruiyi",
""
],
[
"Xue",
"Hao",
""
],
[
"Razzak",
"Imran",
""
],
[
"Hacid",
"Hakim",
""
],
[
"Salim",
"Flora D.",
""
]
] | TITLE: KG-IRAG: A Knowledge Graph-Based Iterative Retrieval-Augmented
Generation Framework for Temporal Reasoning
ABSTRACT: Graph Retrieval-Augmented Generation (GraphRAG) has proven highly effective
in enhancing the performance of Large Language Models (LLMs) on tasks that
require external knowledge. By leveraging Knowledge Graphs (KGs), GraphRAG
improves information retrieval for complex reasoning tasks, providing more
precise and comprehensive retrieval and generating more accurate responses to
QAs. However, most RAG methods fall short in addressing multi-step reasoning,
particularly when both information extraction and inference are necessary. To
address this limitation, this paper presents Knowledge Graph-Based Iterative
Retrieval-Augmented Generation (KG-IRAG), a novel framework that integrates KGs
with iterative reasoning to improve LLMs' ability to handle queries involving
temporal and logical dependencies. Through iterative retrieval steps, KG-IRAG
incrementally gathers relevant data from external KGs, enabling step-by-step
reasoning. The proposed approach is particularly suited for scenarios where
reasoning is required alongside dynamic temporal data extraction, such as
determining optimal travel times based on weather conditions or traffic
patterns. Experimental results show that KG-IRAG improves accuracy in complex
reasoning tasks by effectively integrating external knowledge with iterative,
logic-based retrieval. Additionally, three new datasets: weatherQA-Irish,
weatherQA-Sydney, and trafficQA-TFNSW, are formed to evaluate KG-IRAG's
performance, demonstrating its potential beyond traditional RAG applications.
|
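The KG-IRAG abstract above describes iterative retrieval: each round pulls facts from the knowledge graph, a reasoner checks whether the accumulated evidence suffices, and otherwise the query frontier expands. A hedged toy sketch of that loop (the KG contents, relation names, and stopping rule are all invented for illustration):

```python
# Toy knowledge graph: entity -> list of (relation, object) facts.
KG = {
    "Sydney": [("weather_on", "2024-05-01:rain")],
    "2024-05-01:rain": [("advice", "delay_trip")],
}

def retrieve(entity):
    return KG.get(entity, [])

def kg_irag(seed, max_steps=5):
    """Iteratively gather evidence, expanding the frontier each round."""
    evidence, frontier = [], [seed]
    for _ in range(max_steps):
        facts = [f for e in frontier for f in retrieve(e)]
        evidence.extend(facts)
        frontier = [obj for _, obj in facts]        # follow retrieved objects
        if not frontier or any(rel == "advice" for rel, _ in facts):
            break                                   # "reasoner" says: enough
    return evidence

ev = kg_irag("Sydney")
assert ("advice", "delay_trip") in ev   # found via a second retrieval round
```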
2503.14286 | Nicolas Le Roux | Nicolas Le Roux, Marc G. Bellemare, Jonathan Lebensold, Arnaud
Bergeron, Joshua Greaves, Alex Fr\'echette, Carolyne Pelletier, Eric
Thibodeau-Laufer, S\'andor Toth, Sam Work | Tapered Off-Policy REINFORCE: Stable and efficient reinforcement
learning for LLMs | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose a new algorithm for fine-tuning large language models using
reinforcement learning. Tapered Off-Policy REINFORCE (TOPR) uses an asymmetric,
tapered variant of importance sampling to speed up learning while maintaining
stable learning dynamics, even without the use of KL regularization. TOPR can
be applied in a fully offline fashion, allows the handling of positive and
negative examples in a unified framework, and benefits from the
implementational simplicity that is typical of Monte Carlo algorithms. We
demonstrate the effectiveness of our approach with a series of experiments on
the GSM8K and MATH reasoning benchmarks, finding performance gains for training
both a model for solution generation and as a generative verifier. We show that
properly leveraging positive and negative examples alike in the off-policy
regime simultaneously increases test-time accuracy and training data
efficiency, all the while avoiding the ``wasted inference'' that comes with
discarding negative examples. We find that this advantage persists over
multiple iterations of training and can be amplified by dataset curation
techniques, enabling us to match 70B-parameter model performance with 8B
language models. As a corollary to this work, we find that REINFORCE's baseline
parameter plays an important and unexpected role in defining dataset
composition in the presence of negative examples, and is consequently critical
in driving off-policy performance.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 14:23:37 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 14:25:30 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Roux",
"Nicolas Le",
""
],
[
"Bellemare",
"Marc G.",
""
],
[
"Lebensold",
"Jonathan",
""
],
[
"Bergeron",
"Arnaud",
""
],
[
"Greaves",
"Joshua",
""
],
[
"Fréchette",
"Alex",
""
],
[
"Pelletier",
"Carolyne",
""
],
[
"Thibodeau-Laufer",
"Eric",
""
],
[
"Toth",
"Sándor",
""
],
[
"Work",
"Sam",
""
]
] | TITLE: Tapered Off-Policy REINFORCE: Stable and efficient reinforcement
learning for LLMs
ABSTRACT: We propose a new algorithm for fine-tuning large language models using
reinforcement learning. Tapered Off-Policy REINFORCE (TOPR) uses an asymmetric,
tapered variant of importance sampling to speed up learning while maintaining
stable learning dynamics, even without the use of KL regularization. TOPR can
be applied in a fully offline fashion, allows the handling of positive and
negative examples in a unified framework, and benefits from the
implementational simplicity that is typical of Monte Carlo algorithms. We
demonstrate the effectiveness of our approach with a series of experiments on
the GSM8K and MATH reasoning benchmarks, finding performance gains for training
both a model for solution generation and as a generative verifier. We show that
properly leveraging positive and negative examples alike in the off-policy
regime simultaneously increases test-time accuracy and training data
efficiency, all the while avoiding the ``wasted inference'' that comes with
discarding negative examples. We find that this advantage persists over
multiple iterations of training and can be amplified by dataset curation
techniques, enabling us to match 70B-parameter model performance with 8B
language models. As a corollary to this work, we find that REINFORCE's baseline
parameter plays an important and unexpected role in defining dataset
composition in the presence of negative examples, and is consequently critical
in driving off-policy performance.
|
2503.14293 | Sakib Matin | Sakib Matin, Emily Shinkle, Yulia Pimonova, Galen T. Craven,
Aleksandra Pachalieva, Ying Wai Li, Kipton Barros, Nicholas Lubbers | Ensemble Knowledge Distillation for Machine Learning Interatomic
Potentials | null | null | null | null | physics.chem-ph cs.AI | http://creativecommons.org/licenses/by/4.0/ | Machine learning interatomic potentials (MLIPs) are a promising tool to
accelerate atomistic simulations and molecular property prediction. The quality
of MLIPs strongly depends on the quantity of available training data as well as
the quantum chemistry (QC) level of theory used to generate that data. Datasets
generated with high-fidelity QC methods, such as coupled cluster, are typically
restricted to small molecules and may be missing energy gradients. With this
limited quantity of data, it is often difficult to train good MLIP models. We
present an ensemble knowledge distillation (EKD) method to improve MLIP
accuracy when trained to energy-only datasets. In our EKD approach, first,
multiple teacher models are trained to QC energies and then used to generate
atomic forces for all configurations in the dataset. Next, a student MLIP is
trained to both QC energies and to ensemble-averaged forces generated by the
teacher models. We apply this workflow on the ANI-1ccx dataset which consists
of organic molecules with configuration energies computed at the coupled
cluster level of theory. The resulting student MLIPs achieve new
state-of-the-art accuracy on the out-of-sample COMP6 benchmark and improved
stability for molecular dynamics simulations. The EKD approach for MLIP is
broadly applicable for chemical, biomolecular and materials science
simulations.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 14:32:51 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 15:03:39 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Matin",
"Sakib",
""
],
[
"Shinkle",
"Emily",
""
],
[
"Pimonova",
"Yulia",
""
],
[
"Craven",
"Galen T.",
""
],
[
"Pachalieva",
"Aleksandra",
""
],
[
"Li",
"Ying Wai",
""
],
[
"Barros",
"Kipton",
""
],
[
"Lubbers",
"Nicholas",
""
]
] | TITLE: Ensemble Knowledge Distillation for Machine Learning Interatomic
Potentials
ABSTRACT: Machine learning interatomic potentials (MLIPs) are a promising tool to
accelerate atomistic simulations and molecular property prediction. The quality
of MLIPs strongly depends on the quantity of available training data as well as
the quantum chemistry (QC) level of theory used to generate that data. Datasets
generated with high-fidelity QC methods, such as coupled cluster, are typically
restricted to small molecules and may be missing energy gradients. With this
limited quantity of data, it is often difficult to train good MLIP models. We
present an ensemble knowledge distillation (EKD) method to improve MLIP
accuracy when trained to energy-only datasets. In our EKD approach, first,
multiple teacher models are trained to QC energies and then used to generate
atomic forces for all configurations in the dataset. Next, a student MLIP is
trained to both QC energies and to ensemble-averaged forces generated by the
teacher models. We apply this workflow on the ANI-1ccx dataset which consists
of organic molecules with configuration energies computed at the coupled
cluster level of theory. The resulting student MLIPs achieve new
state-of-the-art accuracy on the out-of-sample COMP6 benchmark and improved
stability for molecular dynamics simulations. The EKD approach for MLIP is
broadly applicable for chemical, biomolecular and materials science
simulations.
|
2503.14329 | Yufei Zhu | Yufei Zhu, Yiming Zhong, Zemin Yang, Peishan Cong, Jingyi Yu, Xinge
Zhu, Yuexin Ma | EvolvingGrasp: Evolutionary Grasp Generation via Efficient Preference
Alignment | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dexterous robotic hands often struggle to generalize effectively in complex
environments due to the limitations of models trained on low-diversity data.
However, the real world presents an inherently unbounded range of scenarios,
making it impractical to account for every possible variation. A natural
solution is to enable robots to learn from experience in complex environments,
an approach akin to evolution, where systems improve through continuous
feedback, learning from both failures and successes, and iterating toward
optimal performance. Motivated by this, we propose EvolvingGrasp, an
evolutionary grasp generation method that continuously enhances grasping
performance through efficient preference alignment. Specifically, we introduce
Handpose-wise Preference Optimization (HPO), which allows the model to
continuously align with preferences from both positive and negative feedback
while progressively refining its grasping strategies. To further enhance
efficiency and reliability during online adjustments, we incorporate a
Physics-aware Consistency Model within HPO, which accelerates inference,
reduces the number of timesteps needed for preference finetuning, and ensures
physical plausibility throughout the process. Extensive experiments across four
benchmark datasets demonstrate state-of-the-art performance of our method in
grasp success rate and sampling efficiency. Our results validate that
EvolvingGrasp enables evolutionary grasp generation, ensuring robust,
physically feasible, and preference-aligned grasping in both simulation and
real scenarios.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 15:01:47 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 08:55:21 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhu",
"Yufei",
""
],
[
"Zhong",
"Yiming",
""
],
[
"Yang",
"Zemin",
""
],
[
"Cong",
"Peishan",
""
],
[
"Yu",
"Jingyi",
""
],
[
"Zhu",
"Xinge",
""
],
[
"Ma",
"Yuexin",
""
]
] | TITLE: EvolvingGrasp: Evolutionary Grasp Generation via Efficient Preference
Alignment
ABSTRACT: Dexterous robotic hands often struggle to generalize effectively in complex
environments due to the limitations of models trained on low-diversity data.
However, the real world presents an inherently unbounded range of scenarios,
making it impractical to account for every possible variation. A natural
solution is to enable robots to learn from experience in complex environments,
an approach akin to evolution, where systems improve through continuous
feedback, learning from both failures and successes, and iterating toward
optimal performance. Motivated by this, we propose EvolvingGrasp, an
evolutionary grasp generation method that continuously enhances grasping
performance through efficient preference alignment. Specifically, we introduce
Handpose-wise Preference Optimization (HPO), which allows the model to
continuously align with preferences from both positive and negative feedback
while progressively refining its grasping strategies. To further enhance
efficiency and reliability during online adjustments, we incorporate a
Physics-aware Consistency Model within HPO, which accelerates inference,
reduces the number of timesteps needed for preference finetuning, and ensures
physical plausibility throughout the process. Extensive experiments across four
benchmark datasets demonstrate state-of-the-art performance of our method in
grasp success rate and sampling efficiency. Our results validate that
EvolvingGrasp enables evolutionary grasp generation, ensuring robust,
physically feasible, and preference-aligned grasping in both simulation and
real scenarios.
|
2503.14493 | Chuxin Wang | Chuxin Wang, Wenfei Yang, Xiang Liu, Tianzhu Zhang | State Space Model Meets Transformer: A New Paradigm for 3D Object
Detection | Accepted by ICLR 2025. Project url:
https://chuxwa.github.io/project_DEST/ | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DETR-based methods, which use multi-layer transformer decoders to refine
object queries iteratively, have shown promising performance in 3D indoor
object detection. However, the scene point features in the transformer decoder
remain fixed, leading to minimal contributions from later decoder layers,
thereby limiting performance improvement. Recently, State Space Models (SSM)
have shown efficient context modeling ability with linear complexity through
iterative interactions between system states and inputs. Inspired by SSMs, we
propose a new 3D object DEtection paradigm with an interactive STate space
model (DEST). In the interactive SSM, we design a novel state-dependent SSM
parameterization method that enables system states to effectively serve as
queries in 3D indoor detection tasks. In addition, we introduce four key
designs tailored to the characteristics of point cloud and SSM: The
serialization and bidirectional scanning strategies enable bidirectional
feature interaction among scene points within the SSM. The inter-state
attention mechanism models the relationships between state points, while the
gated feed-forward network enhances inter-channel correlations. To the best of
our knowledge, this is the first method to model queries as system states and
scene points as system inputs, which can simultaneously update scene point
features and query features with linear complexity. Extensive experiments on
two challenging datasets demonstrate the effectiveness of our DEST-based
method. Our method improves the GroupFree baseline in terms of AP50 on ScanNet
V2 (+5.3) and SUN RGB-D (+3.2) datasets. Based on the VDETR baseline, our
method sets a new SOTA on the ScanNetV2 and SUN RGB-D datasets.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 17:58:03 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 14:10:18 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wang",
"Chuxin",
""
],
[
"Yang",
"Wenfei",
""
],
[
"Liu",
"Xiang",
""
],
[
"Zhang",
"Tianzhu",
""
]
] | TITLE: State Space Model Meets Transformer: A New Paradigm for 3D Object
Detection
ABSTRACT: DETR-based methods, which use multi-layer transformer decoders to refine
object queries iteratively, have shown promising performance in 3D indoor
object detection. However, the scene point features in the transformer decoder
remain fixed, leading to minimal contributions from later decoder layers,
thereby limiting performance improvement. Recently, State Space Models (SSM)
have shown efficient context modeling ability with linear complexity through
iterative interactions between system states and inputs. Inspired by SSMs, we
propose a new 3D object DEtection paradigm with an interactive STate space
model (DEST). In the interactive SSM, we design a novel state-dependent SSM
parameterization method that enables system states to effectively serve as
queries in 3D indoor detection tasks. In addition, we introduce four key
designs tailored to the characteristics of point cloud and SSM: The
serialization and bidirectional scanning strategies enable bidirectional
feature interaction among scene points within the SSM. The inter-state
attention mechanism models the relationships between state points, while the
gated feed-forward network enhances inter-channel correlations. To the best of
our knowledge, this is the first method to model queries as system states and
scene points as system inputs, which can simultaneously update scene point
features and query features with linear complexity. Extensive experiments on
two challenging datasets demonstrate the effectiveness of our DEST-based
method. Our method improves the GroupFree baseline in terms of AP50 on ScanNet
V2 (+5.3) and SUN RGB-D (+3.2) datasets. Based on the VDETR baseline, our
method sets a new SOTA on the ScanNetV2 and SUN RGB-D datasets.
|
2503.14513 | S Muhammad Hossein Mousavi | Seyed Muhammad Hossein Mousavi | Synthetic Data Generation of Body Motion Data by Neural Gas Network for
Emotion Recognition | 18 pages | null | null | null | cs.CV cs.AI eess.IV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In the domain of emotion recognition using body motion, the primary challenge
lies in the scarcity of diverse and generalizable datasets. Automatic emotion
recognition uses machine learning and artificial intelligence techniques to
recognize a person's emotional state from various data types, such as text,
images, sound, and body motion. Body motion poses unique challenges as many
factors, such as age, gender, ethnicity, personality, and illness, affect its
appearance, leading to a lack of diverse and robust datasets specifically for
emotion recognition. To address this, employing Synthetic Data Generation (SDG)
methods, such as Generative Adversarial Networks (GANs) and Variational Auto
Encoders (VAEs), offers potential solutions, though these methods are often
complex. This research introduces a novel application of the Neural Gas Network
(NGN) algorithm for synthesizing body motion data and optimizing diversity and
generation speed. By learning skeletal structure topology, the NGN fits the
neurons or gas particles on body joints. Generated gas particles, which form
the skeletal structure later on, will be used to synthesize the new body
posture. By attaching body postures over frames, the final synthetic body
motion appears. We compared our generated dataset against others generated by
GANs, VAEs, and another benchmark algorithm, using benchmark metrics such as
Fr\'echet Inception Distance (FID), Diversity, and a few more. Furthermore, we
continued evaluation using classification metrics such as accuracy, precision,
recall, and a few others. Joint-related features or kinematic parameters were
extracted, and the system assessed model performance against unseen data. Our
findings demonstrate that the NGN algorithm produces more realistic and
emotionally distinct body motion data and does so with more synthesizing speed
than existing methods.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 13:16:30 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Mousavi",
"Seyed Muhammad Hossein",
""
]
] | TITLE: Synthetic Data Generation of Body Motion Data by Neural Gas Network for
Emotion Recognition
ABSTRACT: In the domain of emotion recognition using body motion, the primary challenge
lies in the scarcity of diverse and generalizable datasets. Automatic emotion
recognition uses machine learning and artificial intelligence techniques to
recognize a person's emotional state from various data types, such as text,
images, sound, and body motion. Body motion poses unique challenges as many
factors, such as age, gender, ethnicity, personality, and illness, affect its
appearance, leading to a lack of diverse and robust datasets specifically for
emotion recognition. To address this, employing Synthetic Data Generation (SDG)
methods, such as Generative Adversarial Networks (GANs) and Variational Auto
Encoders (VAEs), offers potential solutions, though these methods are often
complex. This research introduces a novel application of the Neural Gas Network
(NGN) algorithm for synthesizing body motion data and optimizing diversity and
generation speed. By learning skeletal structure topology, the NGN fits the
neurons or gas particles on body joints. Generated gas particles, which form
the skeletal structure later on, will be used to synthesize the new body
posture. By attaching body postures over frames, the final synthetic body
motion appears. We compared our generated dataset against others generated by
GANs, VAEs, and another benchmark algorithm, using benchmark metrics such as
Fr\'echet Inception Distance (FID), Diversity, and a few more. Furthermore, we
continued evaluation using classification metrics such as accuracy, precision,
recall, and a few others. Joint-related features or kinematic parameters were
extracted, and the system assessed model performance against unseen data. Our
findings demonstrate that the NGN algorithm produces more realistic and
emotionally distinct body motion data and does so with more synthesizing speed
than existing methods.
|
2503.14519 | Kar Balan | Kar Balan and Andrew Gilbert and John Collomosse | Content ARCs: Decentralized Content Rights in the Age of Generative AI | null | null | null | null | cs.CY cs.AI cs.DL eess.IV | http://creativecommons.org/licenses/by/4.0/ | The rise of Generative AI (GenAI) has sparked significant debate over
balancing the interests of creative rightsholders and AI developers. As GenAI
models are trained on vast datasets that often include copyrighted material,
questions around fair compensation and proper attribution have become
increasingly urgent. To address these challenges, this paper proposes a
framework called \emph{Content ARCs} (Authenticity, Rights, Compensation). By
combining open standards for provenance and dynamic licensing with data
attribution, and decentralized technologies, Content ARCs create a mechanism
for managing rights and compensating creators for using their work in AI
training. We characterize several nascent works in the AI data licensing space
within Content ARCs and identify where challenges remain to fully implement the
end-to-end framework.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 11:57:08 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Balan",
"Kar",
""
],
[
"Gilbert",
"Andrew",
""
],
[
"Collomosse",
"John",
""
]
] | TITLE: Content ARCs: Decentralized Content Rights in the Age of Generative AI
ABSTRACT: The rise of Generative AI (GenAI) has sparked significant debate over
balancing the interests of creative rightsholders and AI developers. As GenAI
models are trained on vast datasets that often include copyrighted material,
questions around fair compensation and proper attribution have become
increasingly urgent. To address these challenges, this paper proposes a
framework called \emph{Content ARCs} (Authenticity, Rights, Compensation). By
combining open standards for provenance and dynamic licensing with data
attribution, and decentralized technologies, Content ARCs create a mechanism
for managing rights and compensating creators for using their work in AI
training. We characterize several nascent works in the AI data licensing space
within Content ARCs and identify where challenges remain to fully implement the
end-to-end framework.
|
2503.14524 | Zhihao Zhu | Zhihao Zhu | Salient Temporal Encoding for Dynamic Scene Graph Generation | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Representing a dynamic scene using a structured spatial-temporal scene graph
is a novel and particularly challenging task. To tackle this task, it is
crucial to learn the temporal interactions between objects in addition to their
spatial relations. Due to the lack of explicitly annotated temporal relations
in current benchmark datasets, most of the existing spatial-temporal scene
graph generation methods build dense and abstract temporal connections among
all objects across frames. However, not all temporal connections are encoding
meaningful temporal dynamics. We propose a novel spatial-temporal scene graph
generation method that selectively builds temporal connections only between
temporal-relevant objects pairs and represents the temporal relations as
explicit edges in the scene graph. The resulting sparse and explicit temporal
representation allows us to improve upon strong scene graph generation
baselines by up to $4.4\%$ in Scene Graph Detection. In addition, we show that
our approach can be leveraged to improve downstream vision tasks. Particularly,
applying our approach to action recognition shows a 0.6\% gain in mAP in
comparison to the state-of-the-art.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 08:01:36 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhu",
"Zhihao",
""
]
] | TITLE: Salient Temporal Encoding for Dynamic Scene Graph Generation
ABSTRACT: Representing a dynamic scene using a structured spatial-temporal scene graph
is a novel and particularly challenging task. To tackle this task, it is
crucial to learn the temporal interactions between objects in addition to their
spatial relations. Due to the lack of explicitly annotated temporal relations
in current benchmark datasets, most of the existing spatial-temporal scene
graph generation methods build dense and abstract temporal connections among
all objects across frames. However, not all temporal connections are encoding
meaningful temporal dynamics. We propose a novel spatial-temporal scene graph
generation method that selectively builds temporal connections only between
temporal-relevant object pairs and represents the temporal relations as
explicit edges in the scene graph. The resulting sparse and explicit temporal
representation allows us to improve upon strong scene graph generation
baselines by up to $4.4\%$ in Scene Graph Detection. In addition, we show that
our approach can be leveraged to improve downstream vision tasks. Particularly,
applying our approach to action recognition shows a 0.6\% gain in mAP in
comparison to the state-of-the-art.
|
2503.14526 | Yu Fang | Yu Fang, Yue Yang, Xinghao Zhu, Kaiyuan Zheng, Gedas Bertasius, Daniel
Szafir, Mingyu Ding | ReBot: Scaling Robot Learning with Real-to-Sim-to-Real Robotic Video
Synthesis | Website: https://yuffish.github.io/rebot/ | null | null | null | cs.CV cs.GR cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-language-action (VLA) models present a promising paradigm by training
policies directly on real robot datasets like Open X-Embodiment. However, the
high cost of real-world data collection hinders further data scaling, thereby
restricting the generalizability of VLAs. In this paper, we introduce ReBot, a
novel real-to-sim-to-real approach for scaling real robot datasets and adapting
VLA models to target domains, which is the last-mile deployment challenge in
robot manipulation. Specifically, ReBot replays real-world robot trajectories
in simulation to diversify manipulated objects (real-to-sim), and integrates
the simulated movements with inpainted real-world background to synthesize
physically realistic and temporally consistent robot videos (sim-to-real). Our
approach has several advantages: 1) it enjoys the benefit of real data to
minimize the sim-to-real gap; 2) it leverages the scalability of simulation;
and 3) it can generalize a pretrained VLA to a target domain with fully
automated data pipelines. Extensive experiments in both simulation and
real-world environments show that ReBot significantly enhances the performance
and robustness of VLAs. For example, in SimplerEnv with the WidowX robot, ReBot
improved the in-domain performance of Octo by 7.2% and OpenVLA by 21.8%, and
out-of-domain generalization by 19.9% and 9.4%, respectively. For real-world
evaluation with a Franka robot, ReBot increased the success rates of Octo by
17% and OpenVLA by 20%. More information can be found at:
https://yuffish.github.io/rebot/
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 16:47:25 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Fang",
"Yu",
""
],
[
"Yang",
"Yue",
""
],
[
"Zhu",
"Xinghao",
""
],
[
"Zheng",
"Kaiyuan",
""
],
[
"Bertasius",
"Gedas",
""
],
[
"Szafir",
"Daniel",
""
],
[
"Ding",
"Mingyu",
""
]
] | TITLE: ReBot: Scaling Robot Learning with Real-to-Sim-to-Real Robotic Video
Synthesis
ABSTRACT: Vision-language-action (VLA) models present a promising paradigm by training
policies directly on real robot datasets like Open X-Embodiment. However, the
high cost of real-world data collection hinders further data scaling, thereby
restricting the generalizability of VLAs. In this paper, we introduce ReBot, a
novel real-to-sim-to-real approach for scaling real robot datasets and adapting
VLA models to target domains, which is the last-mile deployment challenge in
robot manipulation. Specifically, ReBot replays real-world robot trajectories
in simulation to diversify manipulated objects (real-to-sim), and integrates
the simulated movements with inpainted real-world background to synthesize
physically realistic and temporally consistent robot videos (sim-to-real). Our
approach has several advantages: 1) it enjoys the benefit of real data to
minimize the sim-to-real gap; 2) it leverages the scalability of simulation;
and 3) it can generalize a pretrained VLA to a target domain with fully
automated data pipelines. Extensive experiments in both simulation and
real-world environments show that ReBot significantly enhances the performance
and robustness of VLAs. For example, in SimplerEnv with the WidowX robot, ReBot
improved the in-domain performance of Octo by 7.2% and OpenVLA by 21.8%, and
out-of-domain generalization by 19.9% and 9.4%, respectively. For real-world
evaluation with a Franka robot, ReBot increased the success rates of Octo by
17% and OpenVLA by 20%. More information can be found at:
https://yuffish.github.io/rebot/
|
2503.14529 | Valeriy Buryachenko | Valeriy A. Buryachenko | Unified Micromechanics Theory of Composites | 89 pages, 514 refs | null | null | null | physics.class-ph cond-mat.mtrl-sci | http://creativecommons.org/licenses/by/4.0/ | We consider the matrix composite materials (CM) of either random
(statistically homogeneous or inhomogeneous), periodic, or deterministic
(neither random nor periodic) structures. CMs exhibit linear or nonlinear
behavior, coupled or uncoupled multi-physical phenomena, locally elastic,
weakly nonlocal (strain gradient and stress gradient), or strongly nonlocal
(strain-type and displacement-type, peridynamics) phase properties. A modified
Computational Analytical Micromechanics (CAM) approach introduces an exact
Additive General Integral Equation (AGIE) for CMs of any structure and phase
properties mentioned above. The unified iteration solution of static AGIEs is
adapted to the body force with compact support serving as a fundamentally new
universal training parameter. The approach also establishes a critical
threshold for filtering out unsuitable sub-datasets of effective parameters
through a novel Representative Volume Element (RVE) concept, which extends
Hill's classical framework. This RVE concept eliminates sample size, boundary
layer, and edge effects, making it applicable to CMs of any structure and phase
properties, regardless of local or nonlocal, linear or nonlinear. Incorporating
this new RVE concept into machine learning and neural network techniques
enables the construction of any unpredefined surrogate nonlocal operators. The
methodology is structured as a modular, block-based framework, allowing
independent development and refinement of software components. This flexible,
robust AGIE-CAM framework integrates data-driven, multi-scale, and
multi-physics modeling, accelerating research in CM of any microtopology and
phase properties considered. The AGIE-CAM framework represents a groundbreaking
paradigm shift in the micromechanics of composites, redefining the very
philosophy that underpins our understanding of their behavior at the
microscopic level.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 02:39:07 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Buryachenko",
"Valeriy A.",
""
]
] | TITLE: Unified Micromechanics Theory of Composites
ABSTRACT: We consider the matrix composite materials (CM) of either random
(statistically homogeneous or inhomogeneous), periodic, or deterministic
(neither random nor periodic) structures. CMs exhibit linear or nonlinear
behavior, coupled or uncoupled multi-physical phenomena, locally elastic,
weakly nonlocal (strain gradient and stress gradient), or strongly nonlocal
(strain-type and displacement-type, peridynamics) phase properties. A modified
Computational Analytical Micromechanics (CAM) approach introduces an exact
Additive General Integral Equation (AGIE) for CMs of any structure and phase
properties mentioned above. The unified iteration solution of static AGIEs is
adapted to the body force with compact support serving as a fundamentally new
universal training parameter. The approach also establishes a critical
threshold for filtering out unsuitable sub-datasets of effective parameters
through a novel Representative Volume Element (RVE) concept, which extends
Hill's classical framework. This RVE concept eliminates sample size, boundary
layer, and edge effects, making it applicable to CMs of any structure and phase
properties, regardless of local or nonlocal, linear or nonlinear. Incorporating
this new RVE concept into machine learning and neural network techniques
enables the construction of any unpredefined surrogate nonlocal operators. The
methodology is structured as a modular, block-based framework, allowing
independent development and refinement of software components. This flexible,
robust AGIE-CAM framework integrates data-driven, multi-scale, and
multi-physics modeling, accelerating research in CM of any microtopology and
phase properties considered. The AGIE-CAM framework represents a groundbreaking
paradigm shift in the micromechanics of composites, redefining the very
philosophy that underpins our understanding of their behavior at the
microscopic level.
|
2503.14534 | Satyanarayana Murthy | Bibi Erum Ayesha, T. Satyanarayana Murthy, Palamakula Ramesh Babu, and
Ramu Kuchipudi | Ship Detection in Remote Sensing Imagery for Arbitrarily Oriented Object
Detection | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | This research paper presents an innovative ship detection system tailored for
applications like maritime surveillance and ecological monitoring. The study
employs YOLOv8 and repurposed U-Net, two advanced deep learning models, to
significantly enhance ship detection accuracy. Evaluation metrics include Mean
Average Precision (mAP), processing speed, and overall accuracy. The research
utilizes the "Airbus Ship Detection" dataset, featuring diverse remote sensing
images, to assess the models' versatility in detecting ships with varying
orientations and environmental contexts. Conventional ship detection faces
challenges with arbitrary orientations, complex backgrounds, and obscured
perspectives. Our approach incorporates YOLOv8 for real-time processing and
U-Net for ship instance segmentation. Evaluation focuses on mAP, processing
speed, and overall accuracy. The dataset is chosen for its diverse images,
making it an ideal benchmark. Results demonstrate significant progress in ship
detection. YOLOv8 achieves an 88% mAP, excelling in accurate and rapid ship
detection. U-Net, adapted for ship instance segmentation, attains an 89% mAP,
improving boundary delineation and handling occlusions. This research enhances
maritime surveillance, disaster response, and ecological monitoring,
exemplifying the potential of deep learning models in ship detection.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 10:49:41 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ayesha",
"Bibi Erum",
""
],
[
"Murthy",
"T. Satyanarayana",
""
],
[
"Babu",
"Palamakula Ramesh",
""
],
[
"Kuchipudi",
"Ramu",
""
]
] | TITLE: Ship Detection in Remote Sensing Imagery for Arbitrarily Oriented Object
Detection
ABSTRACT: This research paper presents an innovative ship detection system tailored for
applications like maritime surveillance and ecological monitoring. The study
employs YOLOv8 and repurposed U-Net, two advanced deep learning models, to
significantly enhance ship detection accuracy. Evaluation metrics include Mean
Average Precision (mAP), processing speed, and overall accuracy. The research
utilizes the "Airbus Ship Detection" dataset, featuring diverse remote sensing
images, to assess the models' versatility in detecting ships with varying
orientations and environmental contexts. Conventional ship detection faces
challenges with arbitrary orientations, complex backgrounds, and obscured
perspectives. Our approach incorporates YOLOv8 for real-time processing and
U-Net for ship instance segmentation. Evaluation focuses on mAP, processing
speed, and overall accuracy. The dataset is chosen for its diverse images,
making it an ideal benchmark. Results demonstrate significant progress in ship
detection. YOLOv8 achieves an 88% mAP, excelling in accurate and rapid ship
detection. U-Net, adapted for ship instance segmentation, attains an 89% mAP,
improving boundary delineation and handling occlusions. This research enhances
maritime surveillance, disaster response, and ecological monitoring,
exemplifying the potential of deep learning models in ship detection.
|
2503.14547 | Shuheng Li | Shuheng Li, Jiayun Zhang, Xiaohan Fu, Xiyuan Zhang, Jingbo Shang,
Rajesh K. Gupta | Matching Skeleton-based Activity Representations with Heterogeneous
Signals for HAR | This paper is accepted by SenSys 2025 | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | In human activity recognition (HAR), activity labels have typically been
encoded in one-hot format, though there has been a recent shift towards textual
representations to provide contextual knowledge. Here, we argue that HAR should
be anchored to physical motion data, as motion forms the basis of activity and
applies effectively across sensing systems, whereas text is inherently limited.
We propose SKELAR, a novel HAR framework that pretrains activity
representations from skeleton data and matches them with heterogeneous HAR
signals. Our method addresses two major challenges: (1) capturing core motion
knowledge without context-specific details. We achieve this through a
self-supervised coarse angle reconstruction task that recovers joint rotation
angles, invariant to both users and deployments; (2) adapting the
representations to downstream tasks with varying modalities and focuses. To
address this, we introduce a self-attention matching module that dynamically
prioritizes relevant body parts in a data-driven manner. Given the lack of
corresponding labels in existing skeleton data, we establish MASD, a new HAR
dataset with IMU, WiFi, and skeleton, collected from 20 subjects performing 27
activities. This is the first broadly applicable HAR dataset with
time-synchronized data across three modalities. Experiments show that SKELAR
achieves state-of-the-art performance in both full-shot and few-shot
settings. We also demonstrate that SKELAR can effectively leverage synthetic
skeleton data to extend its use in scenarios without skeleton collections.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 18:43:06 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Li",
"Shuheng",
""
],
[
"Zhang",
"Jiayun",
""
],
[
"Fu",
"Xiaohan",
""
],
[
"Zhang",
"Xiyuan",
""
],
[
"Shang",
"Jingbo",
""
],
[
"Gupta",
"Rajesh K.",
""
]
] | TITLE: Matching Skeleton-based Activity Representations with Heterogeneous
Signals for HAR
ABSTRACT: In human activity recognition (HAR), activity labels have typically been
encoded in one-hot format, though there has been a recent shift towards textual
representations to provide contextual knowledge. Here, we argue that HAR should
be anchored to physical motion data, as motion forms the basis of activity and
applies effectively across sensing systems, whereas text is inherently limited.
We propose SKELAR, a novel HAR framework that pretrains activity
representations from skeleton data and matches them with heterogeneous HAR
signals. Our method addresses two major challenges: (1) capturing core motion
knowledge without context-specific details. We achieve this through a
self-supervised coarse angle reconstruction task that recovers joint rotation
angles, invariant to both users and deployments; (2) adapting the
representations to downstream tasks with varying modalities and focuses. To
address this, we introduce a self-attention matching module that dynamically
prioritizes relevant body parts in a data-driven manner. Given the lack of
corresponding labels in existing skeleton data, we establish MASD, a new HAR
dataset with IMU, WiFi, and skeleton, collected from 20 subjects performing 27
activities. This is the first broadly applicable HAR dataset with
time-synchronized data across three modalities. Experiments show that SKELAR
achieves state-of-the-art performance in both full-shot and few-shot
settings. We also demonstrate that SKELAR can effectively leverage synthetic
skeleton data to extend its use in scenarios without skeleton collections.
|
2503.14550 | Theo Dapamede | Theodorus Dapamede, Aisha Urooj, Vedant Joshi, Gabrielle Gershon,
Frank Li, Mohammadreza Chavoshi, Beatrice Brown-Mulry, Rohan Satya Isaac,
Aawez Mansuri, Chad Robichaux, Chadi Ayoub, Reza Arsanjani, Laurence
Sperling, Judy Gichoya, Marly van Assen, Charles W. ONeill, Imon Banerjee,
Hari Trivedi | Novel AI-Based Quantification of Breast Arterial Calcification to
Predict Cardiovascular Risk | null | null | null | null | eess.IV cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Women are underdiagnosed and undertreated for cardiovascular disease.
Automatic quantification of breast arterial calcification (BAC) on screening
mammography can identify women at risk for cardiovascular disease and enable
earlier treatment and management of disease. In this retrospective study of
116,135 women from two healthcare systems, a transformer-based neural network
quantified BAC severity (no BAC, mild, moderate, and severe) on screening
mammograms. Outcomes included major adverse cardiovascular events (MACE) and
all-cause mortality. BAC severity was independently associated with MACE after
adjusting for cardiovascular risk factors, with increasing hazard ratios from
mild (HR 1.18-1.22), moderate (HR 1.38-1.47), to severe BAC (HR 2.03-2.22)
across datasets (all p<0.001). This association remained significant across all
age groups, with even mild BAC indicating increased risk in women under 50. BAC
remained an independent predictor when analyzed alongside ASCVD risk scores,
showing significant associations with myocardial infarction, stroke, heart
failure, and mortality (all p<0.005). Automated BAC quantification enables
opportunistic cardiovascular risk assessment during routine mammography without
additional radiation or cost. This approach provides value beyond traditional
risk factors, particularly in younger women, offering potential for early CVD
risk stratification in the millions of women undergoing annual mammography.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 19:38:17 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Dapamede",
"Theodorus",
""
],
[
"Urooj",
"Aisha",
""
],
[
"Joshi",
"Vedant",
""
],
[
"Gershon",
"Gabrielle",
""
],
[
"Li",
"Frank",
""
],
[
"Chavoshi",
"Mohammadreza",
""
],
[
"Brown-Mulry",
"Beatrice",
""
],
[
"Isaac",
"Rohan Satya",
""
],
[
"Mansuri",
"Aawez",
""
],
[
"Robichaux",
"Chad",
""
],
[
"Ayoub",
"Chadi",
""
],
[
"Arsanjani",
"Reza",
""
],
[
"Sperling",
"Laurence",
""
],
[
"Gichoya",
"Judy",
""
],
[
"van Assen",
"Marly",
""
],
[
"ONeill",
"Charles W.",
""
],
[
"Banerjee",
"Imon",
""
],
[
"Trivedi",
"Hari",
""
]
] | TITLE: Novel AI-Based Quantification of Breast Arterial Calcification to
Predict Cardiovascular Risk
ABSTRACT: Women are underdiagnosed and undertreated for cardiovascular disease.
Automatic quantification of breast arterial calcification (BAC) on screening
mammography can identify women at risk for cardiovascular disease and enable
earlier treatment and management of disease. In this retrospective study of
116,135 women from two healthcare systems, a transformer-based neural network
quantified BAC severity (no BAC, mild, moderate, and severe) on screening
mammograms. Outcomes included major adverse cardiovascular events (MACE) and
all-cause mortality. BAC severity was independently associated with MACE after
adjusting for cardiovascular risk factors, with increasing hazard ratios from
mild (HR 1.18-1.22), moderate (HR 1.38-1.47), to severe BAC (HR 2.03-2.22)
across datasets (all p<0.001). This association remained significant across all
age groups, with even mild BAC indicating increased risk in women under 50. BAC
remained an independent predictor when analyzed alongside ASCVD risk scores,
showing significant associations with myocardial infarction, stroke, heart
failure, and mortality (all p<0.005). Automated BAC quantification enables
opportunistic cardiovascular risk assessment during routine mammography without
additional radiation or cost. This approach provides value beyond traditional
risk factors, particularly in younger women, offering potential for early CVD
risk stratification in the millions of women undergoing annual mammography.
|
2503.14552 | Sayed Pedram Haeri Boroujeni | Sayed Pedram Haeri Boroujeni, Niloufar Mehrabi, Fatemeh Afghah, Connor
Peter McGrath, Danish Bhatkar, Mithilesh Anil Biradar, Abolfazl Razi | Fire and Smoke Datasets in 20 Years: An In-depth Review | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fire and smoke phenomena pose a significant threat to the natural
environment, ecosystems, and global economy, as well as human lives and
wildlife. In these circumstances, there is a demand for more
sophisticated and advanced technologies to implement an effective strategy for
early detection, real-time monitoring, and minimizing the overall impacts of
fires on ecological balance and public safety. Recently, the rapid advancement
of Artificial Intelligence (AI) and Computer Vision (CV) frameworks has
substantially revolutionized the momentum for developing efficient fire
management systems. However, these systems extensively rely on the availability
of adequate and high-quality fire and smoke data to create proficient Machine
Learning (ML) methods for various tasks, such as detection and monitoring.
Although fire and smoke datasets play a critical role in training, evaluating,
and testing advanced Deep Learning (DL) models, a comprehensive review of the
existing datasets is still lacking. To fill this gap, we provide an in-depth
review to systematically analyze and evaluate fire and smoke datasets collected
over the past 20 years. We investigate the characteristics of each dataset,
including type, size, format, collection methods, and geographical diversities.
We also review and highlight the unique features of each dataset, such as
imaging modalities (RGB, thermal, infrared) and their applicability for
different fire management tasks (classification, segmentation, detection).
Furthermore, we summarize the strengths and weaknesses of each dataset and
discuss their potential for advancing research and technology in fire
management. Ultimately, we conduct extensive experimental analyses across
different datasets using several state-of-the-art algorithms, such as
ResNet-50, DeepLab-V3, and YOLOv8.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 22:08:02 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Boroujeni",
"Sayed Pedram Haeri",
""
],
[
"Mehrabi",
"Niloufar",
""
],
[
"Afghah",
"Fatemeh",
""
],
[
"McGrath",
"Connor Peter",
""
],
[
"Bhatkar",
"Danish",
""
],
[
"Biradar",
"Mithilesh Anil",
""
],
[
"Razi",
"Abolfazl",
""
]
] | TITLE: Fire and Smoke Datasets in 20 Years: An In-depth Review
ABSTRACT: Fire and smoke phenomena pose a significant threat to the natural
environment, ecosystems, and global economy, as well as human lives and
wildlife. In these circumstances, there is a demand for more
sophisticated and advanced technologies to implement an effective strategy for
early detection, real-time monitoring, and minimizing the overall impacts of
fires on ecological balance and public safety. Recently, the rapid advancement
of Artificial Intelligence (AI) and Computer Vision (CV) frameworks has
substantially revolutionized the momentum for developing efficient fire
management systems. However, these systems extensively rely on the availability
of adequate and high-quality fire and smoke data to create proficient Machine
Learning (ML) methods for various tasks, such as detection and monitoring.
Although fire and smoke datasets play a critical role in training, evaluating,
and testing advanced Deep Learning (DL) models, a comprehensive review of the
existing datasets is still lacking. To fill this gap, we provide an in-depth
review to systematically analyze and evaluate fire and smoke datasets collected
over the past 20 years. We investigate the characteristics of each dataset,
including type, size, format, collection methods, and geographical diversities.
We also review and highlight the unique features of each dataset, such as
imaging modalities (RGB, thermal, infrared) and their applicability for
different fire management tasks (classification, segmentation, detection).
Furthermore, we summarize the strengths and weaknesses of each dataset and
discuss their potential for advancing research and technology in fire
management. Ultimately, we conduct extensive experimental analyses across
different datasets using several state-of-the-art algorithms, such as
ResNet-50, DeepLab-V3, and YOLOv8.
|
2503.14556 | Md Rokibul Hasan | Reza E Rabbi Shawon, MD Rokibul Hasan, Md Anisur Rahman, Mohamed
Ghandri, Iman Ahmed Lamari, Mohammed Kawsar, Rubi Akter | Designing and Deploying AI Models for Sustainable Logistics
Optimization: A Case Study on Eco-Efficient Supply Chains in the USA | null | null | 10.62754/joe.v4i2.6610 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | The rapid evolution of Artificial Intelligence (AI) and Machine Learning (ML)
has significantly transformed logistics and supply chain management,
particularly in the pursuit of sustainability and eco-efficiency. This study
explores AI-based methodologies for optimizing logistics operations in the USA,
focusing on reducing environmental impact, improving fuel efficiency, and
minimizing costs. Key AI applications include predictive analytics for demand
forecasting, route optimization through machine learning, and AI-powered fuel
efficiency strategies. Various models, such as Linear Regression, XGBoost,
Support Vector Machine, and Neural Networks, are applied to real-world
logistics datasets to reduce carbon emissions based on logistics operations,
optimize travel routes to minimize distance and travel time, and predict future
deliveries to plan optimal routes. Clustering models such as K-Means and DBSCAN
are likewise applied to route optimization for logistics operations. This study
utilizes datasets from logistics companies'
databases. The study also assesses model performance using metrics such as mean
absolute error (MAE), mean squared error (MSE), and R2 score. This study also
explores how these models can be deployed to various platforms for real-time
logistics and supply chain use. The models are also examined through a thorough
case study, highlighting best practices and regulatory frameworks that promote
sustainability. The findings demonstrate AI's potential to enhance logistics
efficiency, reduce carbon footprints, and contribute to a more resilient and
adaptive supply chain ecosystem.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 00:46:35 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Shawon",
"Reza E Rabbi",
""
],
[
"Hasan",
"MD Rokibul",
""
],
[
"Rahman",
"Md Anisur",
""
],
[
"Ghandri",
"Mohamed",
""
],
[
"Lamari",
"Iman Ahmed",
""
],
[
"Kawsar",
"Mohammed",
""
],
[
"Akter",
"Rubi",
""
]
] | TITLE: Designing and Deploying AI Models for Sustainable Logistics
Optimization: A Case Study on Eco-Efficient Supply Chains in the USA
ABSTRACT: The rapid evolution of Artificial Intelligence (AI) and Machine Learning (ML)
has significantly transformed logistics and supply chain management,
particularly in the pursuit of sustainability and eco-efficiency. This study
explores AI-based methodologies for optimizing logistics operations in the USA,
focusing on reducing environmental impact, improving fuel efficiency, and
minimizing costs. Key AI applications include predictive analytics for demand
forecasting, route optimization through machine learning, and AI-powered fuel
efficiency strategies. Various models, such as Linear Regression, XGBoost,
Support Vector Machine, and Neural Networks, are applied to real-world
logistics datasets to reduce carbon emissions based on logistics operations,
optimize travel routes to minimize distance and travel time, and predict future
deliveries to plan optimal routes. Clustering models such as K-Means and DBSCAN
are likewise applied to route optimization for logistics operations. This study
utilizes datasets from logistics companies'
databases. The study also assesses model performance using metrics such as mean
absolute error (MAE), mean squared error (MSE), and R2 score. This study also
explores how these models can be deployed to various platforms for real-time
logistics and supply chain use. The models are also examined through a thorough
case study, highlighting best practices and regulatory frameworks that promote
sustainability. The findings demonstrate AI's potential to enhance logistics
efficiency, reduce carbon footprints, and contribute to a more resilient and
adaptive supply chain ecosystem.
|
2503.14557 | Rhys Howard | Rhys Howard, Nick Hawes, Lars Kunze | Generating Causal Explanations of Vehicular Agent Behavioural
Interactions with Learnt Reward Profiles | 8 Pages, 5 Figures, To be published in the Proceedings of the 2025
IEEE International Conference on Robotics & Automation, Initial upload of
accepted paper | null | null | null | cs.AI cs.MA cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transparency and explainability are important features that responsible
autonomous vehicles should possess, particularly when interacting with humans,
and causal reasoning offers a strong basis to provide these qualities. However,
even if one assumes agents act to maximise some concept of reward, it is
difficult to make accurate causal inferences of agent planning without
capturing what is of importance to the agent. Thus our work aims to learn a
weighting of reward metrics for agents such that explanations for agent
interactions can be causally inferred. We validate our approach quantitatively
and qualitatively across three real-world driving datasets, demonstrating a
functional improvement over previous methods and competitive performance across
evaluation metrics.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 01:53:59 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Howard",
"Rhys",
""
],
[
"Hawes",
"Nick",
""
],
[
"Kunze",
"Lars",
""
]
] | TITLE: Generating Causal Explanations of Vehicular Agent Behavioural
Interactions with Learnt Reward Profiles
ABSTRACT: Transparency and explainability are important features that responsible
autonomous vehicles should possess, particularly when interacting with humans,
and causal reasoning offers a strong basis to provide these qualities. However,
even if one assumes agents act to maximise some concept of reward, it is
difficult to make accurate causal inferences of agent planning without
capturing what is of importance to the agent. Thus our work aims to learn a
weighting of reward metrics for agents such that explanations for agent
interactions can be causally inferred. We validate our approach quantitatively
and qualitatively across three real-world driving datasets, demonstrating a
functional improvement over previous methods and competitive performance across
evaluation metrics.
|
2503.14559 | Weixiong Lin | Weixiong Lin, Chen Ju, Haicheng Wang, Shengchao Hu, Shuai Xiao,
Mengting Chen, Yuheng Jiao, Mingshuai Yao, Jinsong Lan, Qingwen Liu, Ying
Chen | Squeeze Out Tokens from Sample for Finer-Grained Data Governance | null | null | null | null | cs.LG cs.AI cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | Widely observed data scaling laws, in which error falls off as a power of the
training size, demonstrate the diminishing returns of unselective data
expansion. Hence, data governance is proposed to downsize datasets through
pruning non-informative samples. Yet, isolating the impact of a specific sample
on overall model performance is challenging, due to the vast computation
required for tryout all sample combinations. Current data governors circumvent
this complexity by estimating sample contributions through heuristic-derived
scalar scores, thereby discarding low-value ones. Despite thorough sample
sieving, retained samples contain substantial undesired tokens intrinsically,
underscoring the potential for further compression and purification. In this
work, we upgrade data governance from a 'sieving' approach to a 'juicing' one.
Instead of scanning for least-flawed samples, our dual-branch DataJuicer
applies finer-grained intra-sample governance. It squeezes out informative
tokens and boosts image-text alignments. Specifically, the vision branch
retains salient image patches and extracts relevant object classes, while the
text branch incorporates these classes to enhance captions. Consequently,
DataJuicer yields more refined datasets through finer-grained governance.
Extensive experiments across datasets demonstrate that DataJuicer significantly
outperforms existing DataSieve in image-text retrieval, classification, and
dense visual reasoning.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 04:06:50 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Lin",
"Weixiong",
""
],
[
"Ju",
"Chen",
""
],
[
"Wang",
"Haicheng",
""
],
[
"Hu",
"Shengchao",
""
],
[
"Xiao",
"Shuai",
""
],
[
"Chen",
"Mengting",
""
],
[
"Jiao",
"Yuheng",
""
],
[
"Yao",
"Mingshuai",
""
],
[
"Lan",
"Jinsong",
""
],
[
"Liu",
"Qingwen",
""
],
[
"Chen",
"Ying",
""
]
] | TITLE: Squeeze Out Tokens from Sample for Finer-Grained Data Governance
ABSTRACT: Widely observed data scaling laws, in which error falls off as a power of the
training size, demonstrate the diminishing returns of unselective data
expansion. Hence, data governance is proposed to downsize datasets through
pruning non-informative samples. Yet, isolating the impact of a specific sample
on overall model performance is challenging, due to the vast computation
required to try out all sample combinations. Current data governors circumvent
this complexity by estimating sample contributions through heuristic-derived
scalar scores, thereby discarding low-value ones. Despite thorough sample
sieving, retained samples contain substantial undesired tokens intrinsically,
underscoring the potential for further compression and purification. In this
work, we upgrade data governance from a 'sieving' approach to a 'juicing' one.
Instead of scanning for least-flawed samples, our dual-branch DataJuicer
applies finer-grained intra-sample governance. It squeezes out informative
tokens and boosts image-text alignments. Specifically, the vision branch
retains salient image patches and extracts relevant object classes, while the
text branch incorporates these classes to enhance captions. Consequently,
DataJuicer yields more refined datasets through finer-grained governance.
Extensive experiments across datasets demonstrate that DataJuicer significantly
outperforms existing DataSieve in image-text retrieval, classification, and
dense visual reasoning.
|
2503.14562 | A. I. Medvedeva | A.I. Medvedeva, V.V. Bakutkin | Analysis of human visual field information using machine learning
methods and assessment of their accuracy | in Russian language | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Subject of research: the study of methods for analyzing perimetric images
for the diagnosis and control of glaucoma. Objects of research: a dataset
collected on an ophthalmological perimeter with the results of various patient
pathologies, since the ophthalmological community is acutely aware of the issue
of disease control and import substitution [5]. Purpose of research: to
consider various machine learning methods that can classify glaucoma. This is
possible thanks to the classifier built after labeling the dataset. It is able
to determine from an image whether the visual fields depicted in it are the
result of the impact of glaucoma on the eyes or of other visual diseases.
Earlier, in [3], a dataset collected on the Tomey perimeter was described. The
average age of the examined patients ranged from 30 to 85 years. Methods of
research: machine learning methods for classifying image results (stochastic
gradient descent, logistic regression, random forest, naive Bayes). Main
results of research: a computer model that can determine from an image whether
the result is glaucoma or another disease (binary classification).
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 07:39:41 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Medvedeva",
"A. I.",
""
],
[
"Bakutkin",
"V. V.",
""
]
] | TITLE: Analysis of human visual field information using machine learning
methods and assessment of their accuracy
ABSTRACT: Subject of research: the study of methods for analyzing perimetric images
for the diagnosis and control of glaucoma. Objects of research: a dataset
collected on an ophthalmological perimeter with the results of various patient
pathologies, since the ophthalmological community is acutely aware of the issue
of disease control and import substitution [5]. Purpose of research: to
consider various machine learning methods that can classify glaucoma. This is
possible thanks to the classifier built after labeling the dataset. It is able
to determine from an image whether the visual fields depicted in it are the
result of the impact of glaucoma on the eyes or of other visual diseases.
Earlier, in [3], a dataset collected on the Tomey perimeter was described. The
average age of the examined patients ranged from 30 to 85 years. Methods of
research: machine learning methods for classifying image results (stochastic
gradient descent, logistic regression, random forest, naive Bayes). Main
results of research: a computer model that can determine from an image whether
the result is glaucoma or another disease (binary classification).
|
2503.14568 | Iman Peivaste | Iman Peivaste, Ahmed Makradi, Salim Belouettar | Teaching Artificial Intelligence to Perform Rapid, Resolution-Invariant
Grain Growth Modeling via Fourier Neural Operator | null | null | null | null | cond-mat.mtrl-sci cs.AI cs.CE cs.LG | http://creativecommons.org/licenses/by/4.0/ | Microstructural evolution, particularly grain growth, plays a critical role
in shaping the physical, optical, and electronic properties of materials.
Traditional phase-field modeling accurately simulates these phenomena but is
computationally intensive, especially for large systems and fine spatial
resolutions. While machine learning approaches have been employed to accelerate
simulations, they often struggle with resolution dependence and generalization
across different grain scales. This study introduces a novel approach utilizing
Fourier Neural Operator (FNO) to achieve resolution-invariant modeling of
microstructure evolution in multi-grain systems. FNO operates in the Fourier
space and can inherently handle varying resolutions by learning mappings
between function spaces. By integrating FNO with the phase field method, we
developed a surrogate model that significantly reduces computational costs
while maintaining high accuracy across different spatial scales. We generated a
comprehensive dataset from phase-field simulations using the Fan Chen model,
capturing grain evolution over time. Data preparation involved creating
input-output pairs with a time shift, allowing the model to predict future
microstructures based on current and past states. The FNO-based neural network
was trained using sequences of microstructures and demonstrated remarkable
accuracy in predicting long-term evolution, even for unseen configurations and
higher-resolution grids not encountered during training.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 11:19:08 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Peivaste",
"Iman",
""
],
[
"Makradi",
"Ahmed",
""
],
[
"Belouettar",
"Salim",
""
]
] | TITLE: Teaching Artificial Intelligence to Perform Rapid, Resolution-Invariant
Grain Growth Modeling via Fourier Neural Operator
ABSTRACT: Microstructural evolution, particularly grain growth, plays a critical role
in shaping the physical, optical, and electronic properties of materials.
Traditional phase-field modeling accurately simulates these phenomena but is
computationally intensive, especially for large systems and fine spatial
resolutions. While machine learning approaches have been employed to accelerate
simulations, they often struggle with resolution dependence and generalization
across different grain scales. This study introduces a novel approach utilizing
Fourier Neural Operator (FNO) to achieve resolution-invariant modeling of
microstructure evolution in multi-grain systems. FNO operates in the Fourier
space and can inherently handle varying resolutions by learning mappings
between function spaces. By integrating FNO with the phase field method, we
developed a surrogate model that significantly reduces computational costs
while maintaining high accuracy across different spatial scales. We generated a
comprehensive dataset from phase-field simulations using the Fan Chen model,
capturing grain evolution over time. Data preparation involved creating
input-output pairs with a time shift, allowing the model to predict future
microstructures based on current and past states. The FNO-based neural network
was trained using sequences of microstructures and demonstrated remarkable
accuracy in predicting long-term evolution, even for unseen configurations and
higher-resolution grids not encountered during training.
|
2503.14569 | Liya Guo | Liya Guo, Zun Wang, Chang Liu, Junzhe Li, Pipi Hu, Yi Zhu | Potential Score Matching: Debiasing Molecular Structure Sampling with
Potential Energy Guidance | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ensemble average of physical properties of molecules is closely related
to the distribution of molecular conformations, and sampling such distributions
is a fundamental challenge in physics and chemistry. Traditional methods like
molecular dynamics (MD) simulations and Markov chain Monte Carlo (MCMC)
sampling are commonly used but can be time-consuming and costly. Recently,
diffusion models have emerged as efficient alternatives by learning the
distribution of training data. Obtaining an unbiased target distribution is
still an expensive task, primarily because it requires satisfying ergodicity.
To tackle these challenges, we propose Potential Score Matching (PSM), an
approach that utilizes the potential energy gradient to guide generative
models. PSM does not require exact energy functions and can debias sample
distributions even when trained on limited and biased data. Our method
outperforms existing state-of-the-art (SOTA) models on the Lennard-Jones (LJ)
potential, a commonly used toy model. Furthermore, we extend the evaluation of
PSM to high-dimensional problems using the MD17 and MD22 datasets. The results
demonstrate that molecular distributions generated by PSM more closely
approximate the Boltzmann distribution compared to traditional diffusion
models.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 11:27:28 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Guo",
"Liya",
""
],
[
"Wang",
"Zun",
""
],
[
"Liu",
"Chang",
""
],
[
"Li",
"Junzhe",
""
],
[
"Hu",
"Pipi",
""
],
[
"Zhu",
"Yi",
""
]
] | TITLE: Potential Score Matching: Debiasing Molecular Structure Sampling with
Potential Energy Guidance
ABSTRACT: The ensemble average of physical properties of molecules is closely related
to the distribution of molecular conformations, and sampling such distributions
is a fundamental challenge in physics and chemistry. Traditional methods like
molecular dynamics (MD) simulations and Markov chain Monte Carlo (MCMC)
sampling are commonly used but can be time-consuming and costly. Recently,
diffusion models have emerged as efficient alternatives by learning the
distribution of training data. Obtaining an unbiased target distribution is
still an expensive task, primarily because it requires satisfying ergodicity.
To tackle these challenges, we propose Potential Score Matching (PSM), an
approach that utilizes the potential energy gradient to guide generative
models. PSM does not require exact energy functions and can debias sample
distributions even when trained on limited and biased data. Our method
outperforms existing state-of-the-art (SOTA) models on the Lennard-Jones (LJ)
potential, a commonly used toy model. Furthermore, we extend the evaluation of
PSM to high-dimensional problems using the MD17 and MD22 datasets. The results
demonstrate that molecular distributions generated by PSM more closely
approximate the Boltzmann distribution compared to traditional diffusion
models.
|
2503.14574 | Sarwan Ali | Taslim Murad, Sarwan Ali, Murray Patterson | Sequence Analysis Using the Bezier Curve | null | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The analysis of sequences (e.g., protein, DNA, and SMILES string) is
essential for disease diagnosis, biomaterial engineering, genetic engineering,
and drug discovery domains. Conventional analytical methods focus on
transforming sequences into numerical representations for applying machine
learning/deep learning-based sequence characterization. However, their efficacy
is constrained by the intrinsic nature of deep learning (DL) models, which tend
to exhibit suboptimal performance when applied to tabular data. An alternative
group of methodologies endeavors to convert biological sequences into image
forms by applying the concept of Chaos Game Representation (CGR). However, a
noteworthy drawback of these methods lies in their tendency to map individual
elements of the sequence onto a relatively small subset of designated pixels
within the generated image. The resulting sparse image representation may not
adequately encapsulate the comprehensive sequence information, potentially
resulting in suboptimal predictions. In this study, we introduce a novel
approach to transform sequences into images using the B\'ezier curve concept
for element mapping. Mapping the elements onto a curve enhances the sequence
information representation in the respective images, hence yielding better
DL-based classification performance. We employed different sequence datasets to
validate our system by using different classification tasks, and the results
illustrate that our B\'ezier curve method is able to achieve good performance
for all the tasks.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 15:40:46 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Murad",
"Taslim",
""
],
[
"Ali",
"Sarwan",
""
],
[
"Patterson",
"Murray",
""
]
] | TITLE: Sequence Analysis Using the Bezier Curve
ABSTRACT: The analysis of sequences (e.g., protein, DNA, and SMILES string) is
essential for disease diagnosis, biomaterial engineering, genetic engineering,
and drug discovery domains. Conventional analytical methods focus on
transforming sequences into numerical representations for applying machine
learning/deep learning-based sequence characterization. However, their efficacy
is constrained by the intrinsic nature of deep learning (DL) models, which tend
to exhibit suboptimal performance when applied to tabular data. An alternative
group of methodologies endeavors to convert biological sequences into image
forms by applying the concept of Chaos Game Representation (CGR). However, a
noteworthy drawback of these methods lies in their tendency to map individual
elements of the sequence onto a relatively small subset of designated pixels
within the generated image. The resulting sparse image representation may not
adequately encapsulate the comprehensive sequence information, potentially
resulting in suboptimal predictions. In this study, we introduce a novel
approach to transform sequences into images using the B\'ezier curve concept
for element mapping. Mapping the elements onto a curve enhances the sequence
information representation in the respective images, hence yielding better
DL-based classification performance. We employed different sequence datasets to
validate our system by using different classification tasks, and the results
illustrate that our B\'ezier curve method is able to achieve good performance
for all the tasks.
|
2503.14577 | Chenyu Liu | Chenyu Liu and Luca Rossi | PHGNN: A Novel Prompted Hypergraph Neural Network to Diagnose
Alzheimer's Disease | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | The accurate diagnosis of Alzheimer's disease (AD) and prognosis of mild
cognitive impairment (MCI) conversion are crucial for early intervention.
However, existing multimodal methods face several challenges, from the
heterogeneity of input data, to underexplored modality interactions, missing
data due to patient dropouts, and limited data caused by the time-consuming and
costly data collection process. In this paper, we propose a novel Prompted
Hypergraph Neural Network (PHGNN) framework that addresses these limitations by
integrating hypergraph based learning with prompt learning. Hypergraphs capture
higher-order relationships between different modalities, while our prompt
learning approach for hypergraphs, adapted from NLP, enables efficient training
with limited data. Our model is validated through extensive experiments on the
ADNI dataset, outperforming SOTA methods in both AD diagnosis and the
prediction of MCI conversion.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 16:10:43 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Liu",
"Chenyu",
""
],
[
"Rossi",
"Luca",
""
]
] | TITLE: PHGNN: A Novel Prompted Hypergraph Neural Network to Diagnose
Alzheimer's Disease
ABSTRACT: The accurate diagnosis of Alzheimer's disease (AD) and prognosis of mild
cognitive impairment (MCI) conversion are crucial for early intervention.
However, existing multimodal methods face several challenges, from the
heterogeneity of input data, to underexplored modality interactions, missing
data due to patient dropouts, and limited data caused by the time-consuming and
costly data collection process. In this paper, we propose a novel Prompted
Hypergraph Neural Network (PHGNN) framework that addresses these limitations by
integrating hypergraph based learning with prompt learning. Hypergraphs capture
higher-order relationships between different modalities, while our prompt
learning approach for hypergraphs, adapted from NLP, enables efficient training
with limited data. Our model is validated through extensive experiments on the
ADNI dataset, outperforming SOTA methods in both AD diagnosis and the
prediction of MCI conversion.
|
2503.14607 | Shuo Xing | Shuo Xing, Zezhou Sun, Shuangyu Xie, Kaiyuan Chen, Yanjia Huang,
Yuping Wang, Jiachen Li, Dezhen Song, Zhengzhong Tu | Can Large Vision Language Models Read Maps Like a Human? | 35 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we introduce MapBench, the first dataset specifically designed
for human-readable, pixel-based map-based outdoor navigation, curated from
complex path finding scenarios. MapBench comprises over 1600 pixel space map
path finding problems from 100 diverse maps. In MapBench, LVLMs generate
language-based navigation instructions given a map image and a query with
beginning and end landmarks. For each map, MapBench provides Map Space Scene
Graph (MSSG) as an indexing data structure to convert between natural language
and evaluate LVLM-generated results. We demonstrate that MapBench significantly
challenges state-of-the-art LVLMs with both zero-shot prompting and a
Chain-of-Thought (CoT) augmented reasoning framework that decomposes map
navigation into sequential cognitive processes. Our evaluation of both
open-source and closed-source LVLMs underscores the substantial difficulty
posed by MapBench, revealing critical limitations in their spatial reasoning
and structured decision-making capabilities. We release all the code and
dataset in https://github.com/taco-group/MapBench.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 18:05:38 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Xing",
"Shuo",
""
],
[
"Sun",
"Zezhou",
""
],
[
"Xie",
"Shuangyu",
""
],
[
"Chen",
"Kaiyuan",
""
],
[
"Huang",
"Yanjia",
""
],
[
"Wang",
"Yuping",
""
],
[
"Li",
"Jiachen",
""
],
[
"Song",
"Dezhen",
""
],
[
"Tu",
"Zhengzhong",
""
]
] | TITLE: Can Large Vision Language Models Read Maps Like a Human?
ABSTRACT: In this paper, we introduce MapBench-the first dataset specifically designed
for human-readable, pixel-based map-based outdoor navigation, curated from
complex path finding scenarios. MapBench comprises over 1600 pixel space map
path finding problems from 100 diverse maps. In MapBench, LVLMs generate
language-based navigation instructions given a map image and a query with
beginning and end landmarks. For each map, MapBench provides Map Space Scene
Graph (MSSG) as an indexing data structure to convert between natural language
and evaluate LVLM-generated results. We demonstrate that MapBench significantly
challenges state-of-the-art LVLMs with both zero-shot prompting and a
Chain-of-Thought (CoT) augmented reasoning framework that decomposes map
navigation into sequential cognitive processes. Our evaluation of both
open-source and closed-source LVLMs underscores the substantial difficulty
posed by MapBench, revealing critical limitations in their spatial reasoning
and structured decision-making capabilities. We release all the code and
dataset in https://github.com/taco-group/MapBench.
|
2503.14618 | Gustavo De Carvalho Bertoli | Leonardo Henrique de Melo, Gustavo de Carvalho Bertoli, Michele
Nogueira, Aldri Luiz dos Santos, Louren\c{c}o Alves Pereira Junior | Anomaly-Flow: A Multi-domain Federated Generative Adversarial Network
for Distributed Denial-of-Service Detection | 8 pages, 4 figures | null | null | null | cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Distributed denial-of-service (DDoS) attacks remain a critical threat to
Internet services, causing costly disruptions. While machine learning (ML) has
shown promise in DDoS detection, current solutions struggle with multi-domain
environments where attacks must be detected across heterogeneous networks and
organizational boundaries. This limitation severely impacts the practical
deployment of ML-based defenses in real-world settings.
This paper introduces Anomaly-Flow, a novel framework that addresses this
critical gap by combining Federated Learning (FL) with Generative Adversarial
Networks (GANs) for privacy-preserving, multi-domain DDoS detection. Our
proposal enables collaborative learning across diverse network domains while
preserving data privacy through synthetic flow generation. Through extensive
evaluation across three distinct network datasets, Anomaly-Flow achieves an
average F1-score of $0.747$, outperforming baseline models. Importantly, our
framework enables organizations to share attack detection capabilities without
exposing sensitive network data, making it particularly valuable for critical
infrastructure and privacy-sensitive sectors.
Beyond immediate technical contributions, this work provides insights into
the challenges and opportunities in multi-domain DDoS detection, establishing a
foundation for future research in collaborative network defense systems. Our
findings have important implications for academic research and industry
practitioners working to deploy practical ML-based security solutions.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 18:13:51 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"de Melo",
"Leonardo Henrique",
""
],
[
"Bertoli",
"Gustavo de Carvalho",
""
],
[
"Nogueira",
"Michele",
""
],
[
"Santos",
"Aldri Luiz dos",
""
],
[
"Junior",
"Lourenço Alves Pereira",
""
]
] | TITLE: Anomaly-Flow: A Multi-domain Federated Generative Adversarial Network
for Distributed Denial-of-Service Detection
ABSTRACT: Distributed denial-of-service (DDoS) attacks remain a critical threat to
Internet services, causing costly disruptions. While machine learning (ML) has
shown promise in DDoS detection, current solutions struggle with multi-domain
environments where attacks must be detected across heterogeneous networks and
organizational boundaries. This limitation severely impacts the practical
deployment of ML-based defenses in real-world settings.
This paper introduces Anomaly-Flow, a novel framework that addresses this
critical gap by combining Federated Learning (FL) with Generative Adversarial
Networks (GANs) for privacy-preserving, multi-domain DDoS detection. Our
proposal enables collaborative learning across diverse network domains while
preserving data privacy through synthetic flow generation. Through extensive
evaluation across three distinct network datasets, Anomaly-Flow achieves an
average F1-score of $0.747$, outperforming baseline models. Importantly, our
framework enables organizations to share attack detection capabilities without
exposing sensitive network data, making it particularly valuable for critical
infrastructure and privacy-sensitive sectors.
Beyond immediate technical contributions, this work provides insights into
the challenges and opportunities in multi-domain DDoS detection, establishing a
foundation for future research in collaborative network defense systems. Our
findings have important implications for academic research and industry
practitioners working to deploy practical ML-based security solutions.
|
2503.14621 | Akinyemi Sadeeq Akintola | Grace Funmilayo Farayola (University of Buckingham, Buckingham, UK),
Akinyemi Sadeeq Akintola (Universidade NOVA de Lisboa, Lisbon, Portugal),
Oluwole Fagbohun (Readrly Limited, London, UK), Chukwuka Michael Oforgu
(Readrly Limited, London, UK), Bisola Faith Kayode (Independent Researcher,
London, UK), Christian Chimezie (Independent Researcher, Bristol, UK),
Temitope Kadri (Readrly Limited, London, UK), Abiola Oludotun (Readrly
Limited, London, UK), Nelson Ogbeide (Independent Researcher, London, UK),
Mgbame Michael (Hankali Intel, Lagos, Nigeria), Adeseye Ifaturoti (University
of Greenwich, London, UK), and Toyese Oloyede (Independent Researcher,
Northampton, UK) | Reducing False Ventricular Tachycardia Alarms in ICU Settings: A Machine
Learning Approach | Preprint, Accepted to the International Conference on Machine
Learning Technologies (ICMLT 2025), Helsinki, Finland | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | False arrhythmia alarms in intensive care units (ICUs) are a significant
challenge, contributing to alarm fatigue and potentially compromising patient
safety. Ventricular tachycardia (VT) alarms are particularly difficult to
detect accurately due to their complex nature. This paper presents a machine
learning approach to reduce false VT alarms using the VTaC dataset, a benchmark
dataset of annotated VT alarms from ICU monitors. We extract time-domain and
frequency-domain features from waveform data, preprocess the data, and train
deep learning models to classify true and false VT alarms. Our results
demonstrate high performance, with ROC-AUC scores exceeding 0.96 across various
training configurations. This work highlights the potential of machine learning
to improve the accuracy of VT alarm detection in clinical settings.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 18:18:38 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Farayola",
"Grace Funmilayo",
"",
"University of Buckingham, Buckingham, UK"
],
[
"Akintola",
"Akinyemi Sadeeq",
"",
"Universidade NOVA de Lisboa, Lisbon, Portugal"
],
[
"Fagbohun",
"Oluwole",
"",
"Readrly Limited, London, UK"
],
[
"Oforgu",
"Chukwuka Michael",
"",
"Readrly Limited, London, UK"
],
[
"Kayode",
"Bisola Faith",
"",
"Independent Researcher,\n London, UK"
],
[
"Chimezie",
"Christian",
"",
"Independent Researcher, Bristol, UK"
],
[
"Kadri",
"Temitope",
"",
"Readrly Limited, London, UK"
],
[
"Oludotun",
"Abiola",
"",
"Readrly\n Limited, London, UK"
],
[
"Ogbeide",
"Nelson",
"",
"Independent Researcher, London, UK"
],
[
"Michael",
"Mgbame",
"",
"Hankali Intel, Lagos, Nigeria"
],
[
"Ifaturoti",
"Adeseye",
"",
"University\n of Greenwich, London, UK"
],
[
"Oloyede",
"Toyese",
"",
"Independent Researcher,\n Northampton, UK"
]
] | TITLE: Reducing False Ventricular Tachycardia Alarms in ICU Settings: A Machine
Learning Approach
ABSTRACT: False arrhythmia alarms in intensive care units (ICUs) are a significant
challenge, contributing to alarm fatigue and potentially compromising patient
safety. Ventricular tachycardia (VT) alarms are particularly difficult to
detect accurately due to their complex nature. This paper presents a machine
learning approach to reduce false VT alarms using the VTaC dataset, a benchmark
dataset of annotated VT alarms from ICU monitors. We extract time-domain and
frequency-domain features from waveform data, preprocess the data, and train
deep learning models to classify true and false VT alarms. Our results
demonstrate high performance, with ROC-AUC scores exceeding 0.96 across various
training configurations. This work highlights the potential of machine learning
to improve the accuracy of VT alarm detection in clinical settings.
|
2503.14630 | Priscylla Silva | Priscylla Silva and Evandro Costa | Assessing Large Language Models for Automated Feedback Generation in
Learning Programming Problem Solving | null | null | null | null | cs.SE cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Providing effective feedback is important for student learning in programming
problem-solving. In this context, Large Language Models (LLMs) have emerged as
potential tools to automate feedback generation. However, their reliability and
ability to identify reasoning errors in student code remain not well
understood. This study evaluates the performance of four LLMs (GPT-4o, GPT-4o
mini, GPT-4-Turbo, and Gemini-1.5-pro) on a benchmark dataset of 45 student
solutions. We assessed the models' capacity to provide accurate and insightful
feedback, particularly in identifying reasoning mistakes. Our analysis reveals
that 63\% of feedback hints were accurate and complete, while 37\% contained
mistakes, including incorrect line identification, flawed explanations, or
hallucinated issues. These findings highlight the potential and limitations of
LLMs in programming education and underscore the need for improvements to
enhance reliability and minimize risks in educational applications.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 18:31:36 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Silva",
"Priscylla",
""
],
[
"Costa",
"Evandro",
""
]
] | TITLE: Assessing Large Language Models for Automated Feedback Generation in
Learning Programming Problem Solving
ABSTRACT: Providing effective feedback is important for student learning in programming
problem-solving. In this context, Large Language Models (LLMs) have emerged as
potential tools to automate feedback generation. However, their reliability and
ability to identify reasoning errors in student code remain not well
understood. This study evaluates the performance of four LLMs (GPT-4o, GPT-4o
mini, GPT-4-Turbo, and Gemini-1.5-pro) on a benchmark dataset of 45 student
solutions. We assessed the models' capacity to provide accurate and insightful
feedback, particularly in identifying reasoning mistakes. Our analysis reveals
that 63\% of feedback hints were accurate and complete, while 37\% contained
mistakes, including incorrect line identification, flawed explanations, or
hallucinated issues. These findings highlight the potential and limitations of
LLMs in programming education and underscore the need for improvements to
enhance reliability and minimize risks in educational applications.
|
2503.14632 | Martin Matys | Martin Matys, James P. Thistlewood, Mariana Kecov\'a, Petr Valenta,
Martina Greplov\'a \v{Z}\'akov\'a, Martin Jirka, Prokopis Hadjisolomou,
Al\v{z}b\v{e}ta \v{S}p\'adov\'a, Marcel Lama\v{c} and Sergei V. Bulanov | Virtual reality and web browser visualization of high-intensity
laser-matter interactions | 20 pages 8 figures | null | null | null | physics.plasm-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the Virtual Beamline (VBL) application, an interactive web-based
platform for visualizing high-intensity laser-matter simulations (and
experimental data in the future). Developed at the ELI Beamlines facility, VBL
integrates a custom-built WebGL engine with WebXR-based Virtual Reality (VR)
support, allowing users to explore complex plasma dynamics in non-VR mode on a
computer screen or in fully immersive VR mode using a head-mounted display. The
application runs directly in a standard web browser, ensuring broad
accessibility. VBL enhances the visualization of particle-in-cell simulations
by efficiently processing and rendering four main data types: point particles,
1D lines, 2D textures, and 3D volumes. By utilizing interactive 3D
visualization, it overcomes the limitations of traditional 2D representations,
offering enhanced spatial understanding and real-time manipulation of
visualization parameters such as time steps, data layers, and colormaps. The user
can interactively explore the visualized data by moving their body or using a
controller for navigation, zooming, and rotation. These interactive
capabilities improve data exploration and interpretation, making the platform
valuable for both scientific analysis and educational outreach. We demonstrate
the application of VBL in visualizing various high-intensity laser-matter
interaction scenarios, including ion acceleration, electron acceleration,
$\gamma$-flash generation, electron-positron pair production, attosecond and
spiral pulse generation. The visualizations are hosted online and freely
accessible on our server. These studies highlight VBL's ability to provide an
intuitive and dynamic approach to exploring large-scale simulation datasets,
enhancing research capabilities and knowledge dissemination in high-intensity
laser-matter physics.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 18:33:09 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Matys",
"Martin",
""
],
[
"Thistlewood",
"James P.",
""
],
[
"Kecová",
"Mariana",
""
],
[
"Valenta",
"Petr",
""
],
[
"Žáková",
"Martina Greplová",
""
],
[
"Jirka",
"Martin",
""
],
[
"Hadjisolomou",
"Prokopis",
""
],
[
"Špádová",
"Alžběta",
""
],
[
"Lamač",
"Marcel",
""
],
[
"Bulanov",
"Sergei V.",
""
]
] | TITLE: Virtual reality and web browser visualization of high-intensity
laser-matter interactions
ABSTRACT: We present the Virtual Beamline (VBL) application, an interactive web-based
platform for visualizing high-intensity laser-matter simulations (and
experimental data in the future). Developed at the ELI Beamlines facility, VBL
integrates a custom-built WebGL engine with WebXR-based Virtual Reality (VR)
support, allowing users to explore complex plasma dynamics in non-VR mode on a
computer screen or in fully immersive VR mode using a head-mounted display. The
application runs directly in a standard web browser, ensuring broad
accessibility. VBL enhances the visualization of particle-in-cell simulations
by efficiently processing and rendering four main data types: point particles,
1D lines, 2D textures, and 3D volumes. By utilizing interactive 3D
visualization, it overcomes the limitations of traditional 2D representations,
offering enhanced spatial understanding and real-time manipulation of
visualization parameters such as time steps, data layers, and colormaps. The user
can interactively explore the visualized data by moving their body or using a
controller for navigation, zooming, and rotation. These interactive
capabilities improve data exploration and interpretation, making the platform
valuable for both scientific analysis and educational outreach. We demonstrate
the application of VBL in visualizing various high-intensity laser-matter
interaction scenarios, including ion acceleration, electron acceleration,
$\gamma$-flash generation, electron-positron pair production, attosecond and
spiral pulse generation. The visualizations are hosted online and freely
accessible on our server. These studies highlight VBL's ability to provide an
intuitive and dynamic approach to exploring large-scale simulation datasets,
enhancing research capabilities and knowledge dissemination in high-intensity
laser-matter physics.
|
2503.14655 | Minheng Chen | Minheng Chen, Xiaowei Yu, Jing Zhang, Tong Chen, Chao Cao, Yan Zhuang,
Yanjun Lyu, Lu Zhang, Tianming Liu, Dajiang Zhu | Core-Periphery Principle Guided State Space Model for Functional
Connectome Classification | null | null | null | null | q-bio.NC cs.AI cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the organization of human brain networks has become a central
focus in neuroscience, particularly in the study of functional connectivity,
which plays a crucial role in diagnosing neurological disorders. Advances in
functional magnetic resonance imaging and machine learning techniques have
significantly improved brain network analysis. However, traditional machine
learning approaches struggle to capture the complex relationships between brain
regions, while deep learning methods, particularly Transformer-based models,
face computational challenges due to their quadratic complexity in
long-sequence modeling. To address these limitations, we propose a
Core-Periphery State-Space Model (CP-SSM), an innovative framework for
functional connectome classification. Specifically, we introduce Mamba, a
selective state-space model with linear complexity, to effectively capture
long-range dependencies in functional brain networks. Furthermore, inspired by
the core-periphery (CP) organization, a fundamental characteristic of brain
networks that enhances efficient information transmission, we design CP-MoE, a
CP-guided Mixture-of-Experts that improves the representation learning of brain
connectivity patterns. We evaluate CP-SSM on two benchmark fMRI datasets: ABIDE
and ADNI. Experimental results demonstrate that CP-SSM surpasses
Transformer-based models in classification performance while significantly
reducing computational complexity. These findings highlight the effectiveness
and efficiency of CP-SSM in modeling brain functional connectivity, offering a
promising direction for neuroimaging-based neurological disease diagnosis.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 19:03:27 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Chen",
"Minheng",
""
],
[
"Yu",
"Xiaowei",
""
],
[
"Zhang",
"Jing",
""
],
[
"Chen",
"Tong",
""
],
[
"Cao",
"Chao",
""
],
[
"Zhuang",
"Yan",
""
],
[
"Lyu",
"Yanjun",
""
],
[
"Zhang",
"Lu",
""
],
[
"Liu",
"Tianming",
""
],
[
"Zhu",
"Dajiang",
""
]
] | TITLE: Core-Periphery Principle Guided State Space Model for Functional
Connectome Classification
ABSTRACT: Understanding the organization of human brain networks has become a central
focus in neuroscience, particularly in the study of functional connectivity,
which plays a crucial role in diagnosing neurological disorders. Advances in
functional magnetic resonance imaging and machine learning techniques have
significantly improved brain network analysis. However, traditional machine
learning approaches struggle to capture the complex relationships between brain
regions, while deep learning methods, particularly Transformer-based models,
face computational challenges due to their quadratic complexity in
long-sequence modeling. To address these limitations, we propose a
Core-Periphery State-Space Model (CP-SSM), an innovative framework for
functional connectome classification. Specifically, we introduce Mamba, a
selective state-space model with linear complexity, to effectively capture
long-range dependencies in functional brain networks. Furthermore, inspired by
the core-periphery (CP) organization, a fundamental characteristic of brain
networks that enhances efficient information transmission, we design CP-MoE, a
CP-guided Mixture-of-Experts that improves the representation learning of brain
connectivity patterns. We evaluate CP-SSM on two benchmark fMRI datasets: ABIDE
and ADNI. Experimental results demonstrate that CP-SSM surpasses
Transformer-based models in classification performance while significantly
reducing computational complexity. These findings highlight the effectiveness
and efficiency of CP-SSM in modeling brain functional connectivity, offering a
promising direction for neuroimaging-based neurological disease diagnosis.
|
2503.14671 | Xiangyong Chen | Xiangyong Chen, Xiaochuan Lin | Generating Medically-Informed Explanations for Depression Detection
using LLMs | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Early detection of depression from social media data offers a valuable
opportunity for timely intervention. However, this task poses significant
challenges, requiring both professional medical knowledge and the development
of accurate and explainable models. In this paper, we propose LLM-MTD (Large
Language Model for Multi-Task Depression Detection), a novel approach that
leverages a pre-trained large language model to simultaneously classify social
media posts for depression and generate textual explanations grounded in
medical diagnostic criteria. We train our model using a multi-task learning
framework with a combined loss function that optimizes both classification
accuracy and explanation quality. We evaluate LLM-MTD on the benchmark Reddit
Self-Reported Depression Dataset (RSDD) and compare its performance against
several competitive baseline methods, including traditional machine learning
and fine-tuned BERT. Our experimental results demonstrate that LLM-MTD achieves
state-of-the-art performance in depression detection, showing significant
improvements in AUPRC and other key metrics. Furthermore, human evaluation of
the generated explanations reveals their relevance, completeness, and medical
accuracy, highlighting the enhanced interpretability of our approach. This work
contributes a novel methodology for depression detection that combines the
power of large language models with the crucial aspect of explainability.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 19:23:22 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Chen",
"Xiangyong",
""
],
[
"Lin",
"Xiaochuan",
""
]
] | TITLE: Generating Medically-Informed Explanations for Depression Detection
using LLMs
ABSTRACT: Early detection of depression from social media data offers a valuable
opportunity for timely intervention. However, this task poses significant
challenges, requiring both professional medical knowledge and the development
of accurate and explainable models. In this paper, we propose LLM-MTD (Large
Language Model for Multi-Task Depression Detection), a novel approach that
leverages a pre-trained large language model to simultaneously classify social
media posts for depression and generate textual explanations grounded in
medical diagnostic criteria. We train our model using a multi-task learning
framework with a combined loss function that optimizes both classification
accuracy and explanation quality. We evaluate LLM-MTD on the benchmark Reddit
Self-Reported Depression Dataset (RSDD) and compare its performance against
several competitive baseline methods, including traditional machine learning
and fine-tuned BERT. Our experimental results demonstrate that LLM-MTD achieves
state-of-the-art performance in depression detection, showing significant
improvements in AUPRC and other key metrics. Furthermore, human evaluation of
the generated explanations reveals their relevance, completeness, and medical
accuracy, highlighting the enhanced interpretability of our approach. This work
contributes a novel methodology for depression detection that combines the
power of large language models with the crucial aspect of explainability.
|
2503.14674 | Amirul Rahman | Liu Jing, Amirul Rahman | Elevating Visual Question Answering through Implicitly Learned Reasoning
Pathways in LVLMs | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Vision-Language Models (LVLMs) have shown remarkable progress in
various multimodal tasks, yet they often struggle with complex visual reasoning
that requires multi-step inference. To address this limitation, we propose
MF-SQ-LLaVA, a novel approach that enhances LVLMs by enabling implicit
self-questioning through end-to-end training. Our method involves augmenting
visual question answering datasets with reasoning chains consisting of
sub-question and answer pairs, and training the LVLM with a multi-task loss
that encourages the generation and answering of these intermediate steps, as
well as the prediction of the final answer. We conduct extensive experiments on
the ScienceQA and VQAv2 datasets, demonstrating that MF-SQ-LLaVA significantly
outperforms existing state-of-the-art models, including the base LLaVA and the
original SQ-LLaVA. Ablation studies further validate the contribution of each
component of our approach, and human evaluation confirms the improved accuracy
and coherence of the reasoning process enabled by our method.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 19:29:07 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Jing",
"Liu",
""
],
[
"Rahman",
"Amirul",
""
]
] | TITLE: Elevating Visual Question Answering through Implicitly Learned Reasoning
Pathways in LVLMs
ABSTRACT: Large Vision-Language Models (LVLMs) have shown remarkable progress in
various multimodal tasks, yet they often struggle with complex visual reasoning
that requires multi-step inference. To address this limitation, we propose
MF-SQ-LLaVA, a novel approach that enhances LVLMs by enabling implicit
self-questioning through end-to-end training. Our method involves augmenting
visual question answering datasets with reasoning chains consisting of
sub-question and answer pairs, and training the LVLM with a multi-task loss
that encourages the generation and answering of these intermediate steps, as
well as the prediction of the final answer. We conduct extensive experiments on
the ScienceQA and VQAv2 datasets, demonstrating that MF-SQ-LLaVA significantly
outperforms existing state-of-the-art models, including the base LLaVA and the
original SQ-LLaVA. Ablation studies further validate the contribution of each
component of our approach, and human evaluation confirms the improved accuracy
and coherence of the reasoning process enabled by our method.
|
2503.14681 | Chen Gong | Chen Gong, Kecen Li, Zinan Lin, Tianhao Wang | DPImageBench: A Unified Benchmark for Differentially Private Image
Synthesis | The first two authors contributed equally; code available at
https://github.com/2019ChenGong/DPImageBench | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differentially private (DP) image synthesis aims to generate artificial
images that retain the properties of sensitive images while protecting the
privacy of individual images within the dataset. Despite recent advancements,
we find that inconsistent--and sometimes flawed--evaluation protocols have been
applied across studies. This not only impedes the understanding of current
methods but also hinders future advancements.
To address the issue, this paper introduces DPImageBench for DP image
synthesis, with thoughtful design across several dimensions: (1) Methods. We
study eleven prominent methods and systematically characterize each based on
model architecture, pretraining strategy, and privacy mechanism. (2)
Evaluation. We include nine datasets and seven fidelity and utility metrics to
thoroughly assess them. Notably, we find that a common practice of selecting
downstream classifiers based on the highest accuracy on the sensitive test set
not only violates DP but also overestimates the utility scores. DPImageBench
corrects for these mistakes. (3) Platform. Beyond the methods and evaluation
protocols, DPImageBench provides a standardized interface that accommodates
current and future implementations within a unified framework. With
DPImageBench, we have several noteworthy findings. For example, contrary to the
common wisdom that pretraining on public image datasets is usually beneficial,
we find that the distributional similarity between pretraining and sensitive
images significantly impacts the performance of the synthetic images and does
not always yield improvements. In addition, adding noise to low-dimensional
features, such as the high-level characteristics of sensitive images, is less
affected by the privacy budget compared to adding noise to high-dimensional
features, like weight gradients. The former methods perform better than the
latter under a low privacy budget.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 19:37:35 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Gong",
"Chen",
""
],
[
"Li",
"Kecen",
""
],
[
"Lin",
"Zinan",
""
],
[
"Wang",
"Tianhao",
""
]
] | TITLE: DPImageBench: A Unified Benchmark for Differentially Private Image
Synthesis
ABSTRACT: Differentially private (DP) image synthesis aims to generate artificial
images that retain the properties of sensitive images while protecting the
privacy of individual images within the dataset. Despite recent advancements,
we find that inconsistent--and sometimes flawed--evaluation protocols have been
applied across studies. This not only impedes the understanding of current
methods but also hinders future advancements.
To address the issue, this paper introduces DPImageBench for DP image
synthesis, with thoughtful design across several dimensions: (1) Methods. We
study eleven prominent methods and systematically characterize each based on
model architecture, pretraining strategy, and privacy mechanism. (2)
Evaluation. We include nine datasets and seven fidelity and utility metrics to
thoroughly assess them. Notably, we find that a common practice of selecting
downstream classifiers based on the highest accuracy on the sensitive test set
not only violates DP but also overestimates the utility scores. DPImageBench
corrects for these mistakes. (3) Platform. Beyond the methods and evaluation
protocols, DPImageBench provides a standardized interface that accommodates
current and future implementations within a unified framework. With
DPImageBench, we have several noteworthy findings. For example, contrary to the
common wisdom that pretraining on public image datasets is usually beneficial,
we find that the distributional similarity between pretraining and sensitive
images significantly impacts the performance of the synthetic images and does
not always yield improvements. In addition, adding noise to low-dimensional
features, such as the high-level characteristics of sensitive images, is less
affected by the privacy budget compared to adding noise to high-dimensional
features, like weight gradients. The former methods perform better than the
latter under a low privacy budget.
|
2503.14698 | Yiming Wang | Yiming Wang, Lucy Chai, Xuan Luo, Michael Niemeyer, Manuel Lagunas,
Stephen Lombardi, Siyu Tang, Tiancheng Sun | SplatVoxel: History-Aware Novel View Streaming without Temporal Training | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of novel view streaming from sparse-view videos, which
aims to generate a continuous sequence of high-quality, temporally consistent
novel views as new input frames arrive. However, existing novel view synthesis
methods struggle with temporal coherence and visual fidelity, leading to
flickering and inconsistency. To address these challenges, we introduce
history-awareness, leveraging previous frames to reconstruct the scene and
improve quality and stability. We propose a hybrid splat-voxel feed-forward
scene reconstruction approach that combines Gaussian Splatting to propagate
information over time, with a hierarchical voxel grid for temporal fusion.
Gaussian primitives are efficiently warped over time using a motion graph that
extends 2D tracking models to 3D motion, while a sparse voxel transformer
integrates new temporal observations in an error-aware manner. Crucially, our
method does not require training on multi-view video datasets, which are
currently limited in size and diversity, and can be directly applied to
sparse-view video streams in a history-aware manner at inference time. Our
approach achieves state-of-the-art performance in both static and streaming
scene reconstruction, effectively reducing temporal and visual artifacts
while running at interactive rates (15 fps with 350ms delay) on a
single H100 GPU. Project Page: https://19reborn.github.io/SplatVoxel/
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 20:00:47 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wang",
"Yiming",
""
],
[
"Chai",
"Lucy",
""
],
[
"Luo",
"Xuan",
""
],
[
"Niemeyer",
"Michael",
""
],
[
"Lagunas",
"Manuel",
""
],
[
"Lombardi",
"Stephen",
""
],
[
"Tang",
"Siyu",
""
],
[
"Sun",
"Tiancheng",
""
]
] | TITLE: SplatVoxel: History-Aware Novel View Streaming without Temporal Training
ABSTRACT: We study the problem of novel view streaming from sparse-view videos, which
aims to generate a continuous sequence of high-quality, temporally consistent
novel views as new input frames arrive. However, existing novel view synthesis
methods struggle with temporal coherence and visual fidelity, leading to
flickering and inconsistency. To address these challenges, we introduce
history-awareness, leveraging previous frames to reconstruct the scene and
improve quality and stability. We propose a hybrid splat-voxel feed-forward
scene reconstruction approach that combines Gaussian Splatting to propagate
information over time, with a hierarchical voxel grid for temporal fusion.
Gaussian primitives are efficiently warped over time using a motion graph that
extends 2D tracking models to 3D motion, while a sparse voxel transformer
integrates new temporal observations in an error-aware manner. Crucially, our
method does not require training on multi-view video datasets, which are
currently limited in size and diversity, and can be directly applied to
sparse-view video streams in a history-aware manner at inference time. Our
approach achieves state-of-the-art performance in both static and streaming
scene reconstruction, effectively reducing temporal and visual artifacts
while running at interactive rates (15 fps with 350ms delay) on a
single H100 GPU. Project Page: https://19reborn.github.io/SplatVoxel/
|
2503.14710 | Zhenhua Wang | Zhenhua Wang, Paul A. Parker, Scott H. Holan | Variational Autoencoded Multivariate Spatial Fay-Herriot Models | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Small area estimation models are essential for estimating population
characteristics in regions with limited sample sizes, thereby supporting policy
decisions, demographic studies, and resource allocation, among other use cases.
The spatial Fay-Herriot model is one such approach that incorporates spatial
dependence to improve estimation by borrowing strength from neighboring
regions. However, this approach often requires substantial computational
resources, limiting its scalability for high-dimensional datasets, especially
when considering multiple (multivariate) responses. This paper proposes two
methods that integrate the multivariate spatial Fay-Herriot model with spatial
random effects, learned through variational autoencoders, to efficiently
leverage spatial structure. Importantly, after training the variational
autoencoder to represent spatial dependence for a given set of geographies, it
may be used again in future modeling efforts, without the need for retraining.
Additionally, the use of the variational autoencoder to represent spatial
dependence results in extreme improvements in computational efficiency, even
for massive datasets. We demonstrate the effectiveness of our approach using
5-year period estimates from the American Community Survey over all census
tracts in California.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 20:19:09 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wang",
"Zhenhua",
""
],
[
"Parker",
"Paul A.",
""
],
[
"Holan",
"Scott H.",
""
]
] | TITLE: Variational Autoencoded Multivariate Spatial Fay-Herriot Models
ABSTRACT: Small area estimation models are essential for estimating population
characteristics in regions with limited sample sizes, thereby supporting policy
decisions, demographic studies, and resource allocation, among other use cases.
The spatial Fay-Herriot model is one such approach that incorporates spatial
dependence to improve estimation by borrowing strength from neighboring
regions. However, this approach often requires substantial computational
resources, limiting its scalability for high-dimensional datasets, especially
when considering multiple (multivariate) responses. This paper proposes two
methods that integrate the multivariate spatial Fay-Herriot model with spatial
random effects, learned through variational autoencoders, to efficiently
leverage spatial structure. Importantly, after training the variational
autoencoder to represent spatial dependence for a given set of geographies, it
may be used again in future modeling efforts, without the need for retraining.
Additionally, the use of the variational autoencoder to represent spatial
dependence results in extreme improvements in computational efficiency, even
for massive datasets. We demonstrate the effectiveness of our approach using
5-year period estimates from the American Community Survey over all census
tracts in California.
|
2503.14716 | Pei-Hsin Lin | Pei-Hsin Lin, Jacob J. Lin, Shang-Hsien Hsieh | Construction Site Scaffolding Completeness Detection Based on Mask R-CNN
and Hough Transform | The 30th EG-ICE: International Conference on Intelligent Computing in
Engineering | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Construction site scaffolding is essential for many building projects, and
ensuring its safety is crucial to prevent accidents. The safety inspector must
check the scaffolding's completeness and integrity, where most violations
occur. The inspection process includes ensuring all the components are in the
right place since workers often compromise safety for convenience and
disassemble parts such as cross braces. This paper proposes a deep
learning-based approach to detect the scaffolding and its cross braces using
computer vision. A scaffold image dataset with annotated labels is used to
train a convolutional neural network (CNN) model. With the proposed approach,
we can automatically detect the completeness of cross braces from images taken
at construction sites, without the need for manual inspection, saving a
significant amount of time and labor costs. This non-invasive and efficient
solution for detecting scaffolding completeness can help improve safety in
construction sites.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 20:27:22 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Lin",
"Pei-Hsin",
""
],
[
"Lin",
"Jacob J.",
""
],
[
"Hsieh",
"Shang-Hsien",
""
]
] | TITLE: Construction Site Scaffolding Completeness Detection Based on Mask R-CNN
and Hough Transform
ABSTRACT: Construction site scaffolding is essential for many building projects, and
ensuring its safety is crucial to prevent accidents. The safety inspector must
check the scaffolding's completeness and integrity, where most violations
occur. The inspection process includes ensuring all the components are in the
right place since workers often compromise safety for convenience and
disassemble parts such as cross braces. This paper proposes a deep
learning-based approach to detect the scaffolding and its cross braces using
computer vision. A scaffold image dataset with annotated labels is used to
train a convolutional neural network (CNN) model. With the proposed approach,
we can automatically detect the completeness of cross braces from images taken
at construction sites, without the need for manual inspection, saving a
significant amount of time and labor costs. This non-invasive and efficient
solution for detecting scaffolding completeness can help improve safety in
construction sites.
|
2503.14718 | Hakyung Sung | Hakyung Sung, Gyu-Ho Shin | Second language Korean Universal Dependency treebank v1.2: Focus on data
augmentation and annotation scheme refinement | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We expand the second language (L2) Korean Universal Dependencies (UD)
treebank with 5,454 manually annotated sentences. The annotation guidelines are
also revised to better align with the UD framework. Using this enhanced
treebank, we fine-tune three Korean language models and evaluate their
performance on in-domain and out-of-domain L2-Korean datasets. The results show
that fine-tuning significantly improves their performance across various
metrics, thus highlighting the importance of using well-tailored L2 datasets
for fine-tuning first-language-based, general-purpose language models for the
morphosyntactic analysis of L2 data.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 20:42:42 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Sung",
"Hakyung",
""
],
[
"Shin",
"Gyu-Ho",
""
]
] | TITLE: Second language Korean Universal Dependency treebank v1.2: Focus on data
augmentation and annotation scheme refinement
ABSTRACT: We expand the second language (L2) Korean Universal Dependencies (UD)
treebank with 5,454 manually annotated sentences. The annotation guidelines are
also revised to better align with the UD framework. Using this enhanced
treebank, we fine-tune three Korean language models and evaluate their
performance on in-domain and out-of-domain L2-Korean datasets. The results show
that fine-tuning significantly improves their performance across various
metrics, thus highlighting the importance of using well-tailored L2 datasets
for fine-tuning first-language-based, general-purpose language models for the
morphosyntactic analysis of L2 data.
|
2503.14719 | Diego Alberto Mercado-Ravell Dr. | Miguel S. Soriano-Garc\'ia and Diego A. Mercado-Ravell | ViVa-SAFELAND: a New Freeware for Safe Validation of Vision-based
Navigation in Aerial Vehicles | paper under review for publication | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | ViVa-SAFELAND is an open-source software library aimed at testing and evaluating
vision-based navigation strategies for aerial vehicles, with special interest
in autonomous landing, while complying with legal regulations and people's
safety. It consists of a collection of high definition aerial videos, focusing
on real unstructured urban scenarios, recording moving obstacles of interest,
such as cars and people. Then, an Emulated Aerial Vehicle (EAV) with a virtual
moving camera is implemented in order to ``navigate" inside the video,
according to high-order commands. ViVa-SAFELAND provides a new, safe, simple
and fair comparison baseline to evaluate and compare different visual
navigation solutions under the same conditions, and to randomize variables
along several trials. It also facilitates the development of autonomous landing
and navigation strategies, as well as the generation of image datasets for
different training tasks. Moreover, it is useful for training either human or
autonomous pilots using deep learning. The effectiveness of the framework for
validating vision algorithms is demonstrated through two case studies,
detection of moving objects and risk assessment segmentation. To our knowledge,
this is the first safe validation framework of its kind, to test and compare
visual navigation solutions for aerial vehicles, which is a crucial aspect for
urban deployment in complex real scenarios.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 20:48:50 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Soriano-García",
"Miguel S.",
""
],
[
"Mercado-Ravell",
"Diego A.",
""
]
] | TITLE: ViVa-SAFELAND: a New Freeware for Safe Validation of Vision-based
Navigation in Aerial Vehicles
ABSTRACT: ViVa-SAFELAND is an open-source software library aimed at testing and evaluating
vision-based navigation strategies for aerial vehicles, with special interest
in autonomous landing, while complying with legal regulations and people's
safety. It consists of a collection of high definition aerial videos, focusing
on real unstructured urban scenarios, recording moving obstacles of interest,
such as cars and people. Then, an Emulated Aerial Vehicle (EAV) with a virtual
moving camera is implemented in order to ``navigate" inside the video,
according to high-order commands. ViVa-SAFELAND provides a new, safe, simple
and fair comparison baseline to evaluate and compare different visual
navigation solutions under the same conditions, and to randomize variables
along several trials. It also facilitates the development of autonomous landing
and navigation strategies, as well as the generation of image datasets for
different training tasks. Moreover, it is useful for training either human or
autonomous pilots using deep learning. The effectiveness of the framework for
validating vision algorithms is demonstrated through two case studies,
detection of moving objects and risk assessment segmentation. To our knowledge,
this is the first safe validation framework of its kind, to test and compare
visual navigation solutions for aerial vehicles, which is a crucial aspect for
urban deployment in complex real scenarios.
|
2503.14751 | Nicola Franco | Rohan Menon, Nicola Franco, Stephan G\"unnemann | LipShiFT: A Certifiably Robust Shift-based Vision Transformer | ICLR 2025 Workshop: VerifAI: AI Verification in the Wild | ICLR 2025 Workshop: VerifAI: AI Verification in the Wild | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deriving tight Lipschitz bounds for transformer-based architectures presents
a significant challenge. The large input sizes and high-dimensional attention
modules typically prove to be crucial bottlenecks during the training process
and lead to sub-optimal results. Our research highlights practical constraints
of these methods in vision tasks. We find that Lipschitz-based margin training
acts as a strong regularizer while restricting weights in successive layers of
the model. Focusing on a Lipschitz continuous variant of the ShiftViT model, we
address significant training challenges for transformer-based architectures
under a norm-constrained input setting. We provide an upper bound estimate for
the Lipschitz constants of this model using the $l_2$ norm on common image
classification datasets. Ultimately, we demonstrate that our method scales to
larger models and advances the state-of-the-art in certified robustness for
transformer-based architectures.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 21:38:18 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Menon",
"Rohan",
""
],
[
"Franco",
"Nicola",
""
],
[
"Günnemann",
"Stephan",
""
]
] | TITLE: LipShiFT: A Certifiably Robust Shift-based Vision Transformer
ABSTRACT: Deriving tight Lipschitz bounds for transformer-based architectures presents
a significant challenge. The large input sizes and high-dimensional attention
modules typically prove to be crucial bottlenecks during the training process
and lead to sub-optimal results. Our research highlights practical constraints
of these methods in vision tasks. We find that Lipschitz-based margin training
acts as a strong regularizer while restricting weights in successive layers of
the model. Focusing on a Lipschitz continuous variant of the ShiftViT model, we
address significant training challenges for transformer-based architectures
under a norm-constrained input setting. We provide an upper bound estimate for
the Lipschitz constants of this model using the $l_2$ norm on common image
classification datasets. Ultimately, we demonstrate that our method scales to
larger models and advances the state-of-the-art in certified robustness for
transformer-based architectures.
|
2503.14755 | Omar Rakha | Omar E. Rakha, Hazem M. Abbas | Language Independent Named Entity Recognition via Orthogonal
Transformation of Word Vectors | Paper was initially released in 2017 but was never published | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Word embeddings have been a key building block for NLP, with models relying
heavily on them across many different tasks. In this paper, a model is
proposed based on using Bidirectional LSTM/CRF with word embeddings to perform
named entity recognition for any language. This is done by training a model on
a source language (English) and transforming word embeddings from the target
language into word embeddings of the source language by using an orthogonal
linear transformation matrix. Evaluation of the model shows that by training a
model on an English dataset the model was capable of detecting named entities
in an Arabic dataset without training or fine-tuning the model on an
Arabic language dataset.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 21:57:58 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Rakha",
"Omar E.",
""
],
[
"Abbas",
"Hazem M.",
""
]
] | TITLE: Language Independent Named Entity Recognition via Orthogonal
Transformation of Word Vectors
ABSTRACT: Word embeddings have been a key building block for NLP, with models relying
heavily on them across many different tasks. In this paper, a model is
proposed based on using Bidirectional LSTM/CRF with word embeddings to perform
named entity recognition for any language. This is done by training a model on
a source language (English) and transforming word embeddings from the target
language into word embeddings of the source language by using an orthogonal
linear transformation matrix. Evaluation of the model shows that by training a
model on an English dataset the model was capable of detecting named entities
in an Arabic dataset without training or fine-tuning the model on an
Arabic language dataset.
|
2503.14756 | Hou In Ivan Tam | Hou In Ivan Tam, Hou In Derek Pun, Austin T. Wang, Angel X. Chang,
Manolis Savva | SceneEval: Evaluating Semantic Coherence in Text-Conditioned 3D Indoor
Scene Synthesis | 20 pages, 6 figures, 6 tables | null | null | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite recent advances in text-conditioned 3D indoor scene generation, there
remain gaps in the evaluation of these methods. Existing metrics primarily
assess the realism of generated scenes by comparing them to a set of
ground-truth scenes, often overlooking alignment with the input text - a
critical factor in determining how effectively a method meets user
requirements. We present SceneEval, an evaluation framework designed to address
this limitation. SceneEval includes metrics for both explicit user
requirements, such as the presence of specific objects and their attributes
described in the input text, and implicit expectations, like the absence of
object collisions, providing a comprehensive assessment of scene quality. To
facilitate evaluation, we introduce SceneEval-100, a dataset of scene
descriptions with annotated ground-truth scene properties. We evaluate recent
scene generation methods using SceneEval and demonstrate its ability to provide
detailed assessments of the generated scenes, highlighting strengths and areas
for improvement across multiple dimensions. Our results show that current
methods struggle to generate scenes that meet user requirements, underscoring
the need for further research in this direction.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 22:02:35 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Tam",
"Hou In Ivan",
""
],
[
"Pun",
"Hou In Derek",
""
],
[
"Wang",
"Austin T.",
""
],
[
"Chang",
"Angel X.",
""
],
[
"Savva",
"Manolis",
""
]
] | TITLE: SceneEval: Evaluating Semantic Coherence in Text-Conditioned 3D Indoor
Scene Synthesis
ABSTRACT: Despite recent advances in text-conditioned 3D indoor scene generation, there
remain gaps in the evaluation of these methods. Existing metrics primarily
assess the realism of generated scenes by comparing them to a set of
ground-truth scenes, often overlooking alignment with the input text - a
critical factor in determining how effectively a method meets user
requirements. We present SceneEval, an evaluation framework designed to address
this limitation. SceneEval includes metrics for both explicit user
requirements, such as the presence of specific objects and their attributes
described in the input text, and implicit expectations, like the absence of
object collisions, providing a comprehensive assessment of scene quality. To
facilitate evaluation, we introduce SceneEval-100, a dataset of scene
descriptions with annotated ground-truth scene properties. We evaluate recent
scene generation methods using SceneEval and demonstrate its ability to provide
detailed assessments of the generated scenes, highlighting strengths and areas
for improvement across multiple dimensions. Our results show that current
methods struggle to generate scenes that meet user requirements, underscoring
the need for further research in this direction.
|
2503.14757 | Marcelo S\'anchez | Marcelo Sanchez, Gil Triginer, Ignacio Sarasua, Lara Raad, Coloma
Ballester | RETHINED: A New Benchmark and Baseline for Real-Time High-Resolution
Image Inpainting On Edge Devices | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Existing image inpainting methods have shown impressive completion results
for low-resolution images. However, most of these algorithms fail at high
resolutions and require powerful hardware, limiting their deployment on edge
devices. Motivated by this, we propose the first baseline for REal-Time
High-resolution image INpainting on Edge Devices (RETHINED) that is able to
inpaint at ultra-high-resolution and can run in real-time ($\leq$ 30ms) in a
wide variety of mobile devices. It is a simple yet effective novel method, formed by
a lightweight Convolutional Neural Network (CNN) to recover structure, followed
by a resolution-agnostic patch replacement mechanism to provide detailed
texture. Specifically, our pipeline leverages the structural capacity of CNNs and
the high-level detail of patch-based methods, which is a key component for
high-resolution image inpainting. To demonstrate the real application of our
method, we conduct an extensive analysis on various mobile-friendly devices and
demonstrate similar inpainting performance while being $\mathrm{100 \times
faster}$ than existing state-of-the-art methods. Furthermore, we release
DF8K-Inpainting, the first free-form mask UHD inpainting dataset.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 22:02:40 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Sanchez",
"Marcelo",
""
],
[
"Triginer",
"Gil",
""
],
[
"Sarasua",
"Ignacio",
""
],
[
"Raad",
"Lara",
""
],
[
"Ballester",
"Coloma",
""
]
] | TITLE: RETHINED: A New Benchmark and Baseline for Real-Time High-Resolution
Image Inpainting On Edge Devices
ABSTRACT: Existing image inpainting methods have shown impressive completion results
for low-resolution images. However, most of these algorithms fail at high
resolutions and require powerful hardware, limiting their deployment on edge
devices. Motivated by this, we propose the first baseline for REal-Time
High-resolution image INpainting on Edge Devices (RETHINED) that is able to
inpaint at ultra-high-resolution and can run in real-time ($\leq$ 30ms) in a
wide variety of mobile devices. It is a simple yet effective novel method, formed by
a lightweight Convolutional Neural Network (CNN) to recover structure, followed
by a resolution-agnostic patch replacement mechanism to provide detailed
texture. Specially our pipeline leverages the structural capacity of CNN and
the high-level detail of patch-based methods, which is a key component for
high-resolution image inpainting. To demonstrate the real application of our
method, we conduct an extensive analysis on various mobile-friendly devices and
demonstrate similar inpainting performance while being $\mathrm{100 \times
faster}$ than existing state-of-the-art methods. Furthermore, we release
DF8K-Inpainting, the first free-form mask UHD inpainting dataset.
|
2503.14765 | Nirmalya Thakur | Nirmalya Thakur, Mingchen Shao, Victoria Knieling, Vanessa Su, Andrew
Bian, and Hongseok Jeong | Dynamics of COVID-19 Misinformation: An Analysis of Conspiracy Theories,
Fake Remedies, and False Reports | null | null | null | null | cs.SI physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | This paper makes four scientific contributions to the area of misinformation
detection and analysis on digital platforms, with a specific focus on
investigating how conspiracy theories, fake remedies, and false reports emerge,
propagate, and shape public perceptions in the context of COVID-19. A dataset
of 5,614 posts on the internet that contained misinformation about COVID-19 was
used for this study. These posts were published in 2020 on 427 online sources
(such as social media platforms, news channels, and online blogs) from 193
countries and in 49 languages. First, this paper presents a structured,
three-tier analytical framework that investigates how multiple motives -
including fear, politics, and profit - can lead to a misleading claim. Second,
it emphasizes the importance of narrative structures, systematically
identifying and quantifying the thematic elements that drive conspiracy
theories, fake remedies, and false reports. Third, it presents a comprehensive
analysis of different sources of misinformation, highlighting the varied roles
played by individuals, state-based organizations, media outlets, and other
sources. Finally, it discusses multiple potential implications of these
findings for public policy and health communication, illustrating how insights
gained from motive, narrative, and source analyses can guide more targeted
interventions in the context of misinformation detection on digital platforms.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 22:28:39 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Thakur",
"Nirmalya",
""
],
[
"Shao",
"Mingchen",
""
],
[
"Knieling",
"Victoria",
""
],
[
"Su",
"Vanessa",
""
],
[
"Bian",
"Andrew",
""
],
[
"Jeong",
"Hongseok",
""
]
] | TITLE: Dynamics of COVID-19 Misinformation: An Analysis of Conspiracy Theories,
Fake Remedies, and False Reports
ABSTRACT: This paper makes four scientific contributions to the area of misinformation
detection and analysis on digital platforms, with a specific focus on
investigating how conspiracy theories, fake remedies, and false reports emerge,
propagate, and shape public perceptions in the context of COVID-19. A dataset
of 5,614 posts on the internet that contained misinformation about COVID-19 was
used for this study. These posts were published in 2020 on 427 online sources
(such as social media platforms, news channels, and online blogs) from 193
countries and in 49 languages. First, this paper presents a structured,
three-tier analytical framework that investigates how multiple motives -
including fear, politics, and profit - can lead to a misleading claim. Second,
it emphasizes the importance of narrative structures, systematically
identifying and quantifying the thematic elements that drive conspiracy
theories, fake remedies, and false reports. Third, it presents a comprehensive
analysis of different sources of misinformation, highlighting the varied roles
played by individuals, state-based organizations, media outlets, and other
sources. Finally, it discusses multiple potential implications of these
findings for public policy and health communication, illustrating how insights
gained from motive, narrative, and source analyses can guide more targeted
interventions in the context of misinformation detection on digital platforms.
|
2503.14772 | Emiliano De Cristofaro | Ben Treves, Emiliano De Cristofaro, Yue Dong, Michalis Faloutsos | VIKI: Systematic Cross-Platform Profile Inference of Online Users | Published in the Proceedings of the 17th ACM Web Science Conference
(WebSci 2025). Please cite the WebSci version | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | What can we learn about online users by comparing their profiles across
different platforms? We use the term profile to represent displayed personality
traits, interests, and behavioral patterns (e.g., offensiveness). We also use
the term {\it displayed personas} to refer to the personas that users manifest
on a platform. Though individuals have a single real persona, it is not
difficult to imagine that people can behave differently in different
``contexts'' as it happens in real life (e.g., behavior in office, bar,
football game). The vast majority of previous studies have focused on profiling
users on a single platform. Here, we propose VIKI, a systematic methodology for
extracting and integrating the displayed personas of users across different
social platforms. First, we extract multiple types of information, including
displayed personality traits, interests, and offensiveness. Second, we
evaluate, combine, and introduce methods to summarize and visualize
cross-platform profiles. Finally, we evaluate VIKI on a dataset that spans
three platforms -- GitHub, LinkedIn, and X. Our experiments show that displayed
personas change significantly across platforms, with over 78% of users
exhibiting a significant change. For instance, we find that neuroticism
exhibits the largest absolute change. We also identify significant correlations
between offensive behavior and displayed personality traits. Overall, we
consider VIKI as an essential building block for systematic and nuanced
profiling of users across platforms.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 22:49:16 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Treves",
"Ben",
""
],
[
"De Cristofaro",
"Emiliano",
""
],
[
"Dong",
"Yue",
""
],
[
"Faloutsos",
"Michalis",
""
]
] | TITLE: VIKI: Systematic Cross-Platform Profile Inference of Online Users
ABSTRACT: What can we learn about online users by comparing their profiles across
different platforms? We use the term profile to represent displayed personality
traits, interests, and behavioral patterns (e.g., offensiveness). We also use
the term {\it displayed personas} to refer to the personas that users manifest
on a platform. Though individuals have a single real persona, it is not
difficult to imagine that people can behave differently in different
``contexts'' as it happens in real life (e.g., behavior in office, bar,
football game). The vast majority of previous studies have focused on profiling
users on a single platform. Here, we propose VIKI, a systematic methodology for
extracting and integrating the displayed personas of users across different
social platforms. First, we extract multiple types of information, including
displayed personality traits, interests, and offensiveness. Second, we
evaluate, combine, and introduce methods to summarize and visualize
cross-platform profiles. Finally, we evaluate VIKI on a dataset that spans
three platforms -- GitHub, LinkedIn, and X. Our experiments show that displayed
personas change significantly across platforms, with over 78% of users
exhibiting a significant change. For instance, we find that neuroticism
exhibits the largest absolute change. We also identify significant correlations
between offensive behavior and displayed personality traits. Overall, we
consider VIKI as an essential building block for systematic and nuanced
profiling of users across platforms.
|
2503.14774 | David Serrano-Lozano | David Serrano-Lozano and Aditya Arora and Luis Herranz and
Konstantinos G. Derpanis and Michael S. Brown and Javier Vazquez-Corral | Revisiting Image Fusion for Multi-Illuminant White-Balance Correction | 10 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | White balance (WB) correction in scenes with multiple illuminants remains a
persistent challenge in computer vision. Recent methods explored fusion-based
approaches, where a neural network linearly blends multiple sRGB versions of an
input image, each processed with predefined WB presets. However, we demonstrate
that these methods are suboptimal for common multi-illuminant scenarios.
Additionally, existing fusion-based methods rely on sRGB WB datasets lacking
dedicated multi-illuminant images, limiting both training and evaluation. To
address these challenges, we introduce two key contributions. First, we propose
an efficient transformer-based model that effectively captures spatial
dependencies across sRGB WB presets, substantially improving upon linear fusion
techniques. Second, we introduce a large-scale multi-illuminant dataset
comprising over 16,000 sRGB images rendered with five different WB settings,
along with WB-corrected images. Our method achieves up to 100\% improvement
over existing techniques on our new multi-illuminant image fusion dataset.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 23:01:22 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Serrano-Lozano",
"David",
""
],
[
"Arora",
"Aditya",
""
],
[
"Herranz",
"Luis",
""
],
[
"Derpanis",
"Konstantinos G.",
""
],
[
"Brown",
"Michael S.",
""
],
[
"Vazquez-Corral",
"Javier",
""
]
] | TITLE: Revisiting Image Fusion for Multi-Illuminant White-Balance Correction
ABSTRACT: White balance (WB) correction in scenes with multiple illuminants remains a
persistent challenge in computer vision. Recent methods explored fusion-based
approaches, where a neural network linearly blends multiple sRGB versions of an
input image, each processed with predefined WB presets. However, we demonstrate
that these methods are suboptimal for common multi-illuminant scenarios.
Additionally, existing fusion-based methods rely on sRGB WB datasets lacking
dedicated multi-illuminant images, limiting both training and evaluation. To
address these challenges, we introduce two key contributions. First, we propose
an efficient transformer-based model that effectively captures spatial
dependencies across sRGB WB presets, substantially improving upon linear fusion
techniques. Second, we introduce a large-scale multi-illuminant dataset
comprising over 16,000 sRGB images rendered with five different WB settings,
along with WB-corrected images. Our method achieves up to 100\% improvement
over existing techniques on our new multi-illuminant image fusion dataset.
|
2503.14786 | Haiyang Ying | Haiyang Ying, Matthias Zwicker | SketchSplat: 3D Edge Reconstruction via Differentiable Multi-view Sketch
Splatting | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Edges are one of the most basic parametric primitives to describe structural
information in 3D. In this paper, we study parametric 3D edge reconstruction
from calibrated multi-view images. Previous methods usually reconstruct a 3D
edge point set from multi-view 2D edge images, and then fit 3D edges to the
point set. However, noise in the point set may cause gaps among fitted edges,
and the recovered edges may not align with input multi-view images since the
edge fitting depends only on the reconstructed 3D point set. To mitigate these
problems, we propose SketchSplat, a method to reconstruct accurate, complete,
and compact 3D edges via differentiable multi-view sketch splatting. We
represent 3D edges as sketches, which are parametric lines and curves defined
by attributes including control points, scales, and opacity. During edge
reconstruction, we iteratively sample Gaussian points from a set of sketches
and rasterize the Gaussians onto 2D edge images. The gradient of the error
between the rasterized and input 2D edge images can then be back-propagated to
optimize the sketch attributes. Our method bridges 2D edge images and 3D edges
in a differentiable manner, which ensures that 3D edges align well with 2D
images and leads to accurate and complete results. We also propose a series of
adaptive topological operations and apply them along with the sketch
optimization. The topological operations help reduce the number of sketches
required while ensuring high accuracy, yielding a more compact reconstruction.
Finally, we contribute an accurate 2D edge detector that improves the
performance of both ours and existing methods. Experiments show that our method
achieves state-of-the-art accuracy, completeness, and compactness on a
benchmark CAD dataset.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 23:30:03 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ying",
"Haiyang",
""
],
[
"Zwicker",
"Matthias",
""
]
] | TITLE: SketchSplat: 3D Edge Reconstruction via Differentiable Multi-view Sketch
Splatting
ABSTRACT: Edges are one of the most basic parametric primitives to describe structural
information in 3D. In this paper, we study parametric 3D edge reconstruction
from calibrated multi-view images. Previous methods usually reconstruct a 3D
edge point set from multi-view 2D edge images, and then fit 3D edges to the
point set. However, noise in the point set may cause gaps among fitted edges,
and the recovered edges may not align with input multi-view images since the
edge fitting depends only on the reconstructed 3D point set. To mitigate these
problems, we propose SketchSplat, a method to reconstruct accurate, complete,
and compact 3D edges via differentiable multi-view sketch splatting. We
represent 3D edges as sketches, which are parametric lines and curves defined
by attributes including control points, scales, and opacity. During edge
reconstruction, we iteratively sample Gaussian points from a set of sketches
and rasterize the Gaussians onto 2D edge images. The gradient of the error
between the rasterized and input 2D edge images can then be back-propagated to
optimize the sketch attributes. Our method bridges 2D edge images and 3D edges
in a differentiable manner, which ensures that 3D edges align well with 2D
images and leads to accurate and complete results. We also propose a series of
adaptive topological operations and apply them along with the sketch
optimization. The topological operations help reduce the number of sketches
required while ensuring high accuracy, yielding a more compact reconstruction.
Finally, we contribute an accurate 2D edge detector that improves the
performance of both ours and existing methods. Experiments show that our method
achieves state-of-the-art accuracy, completeness, and compactness on a
benchmark CAD dataset.
|
2503.14795 | Jake Fawkes | Jake Fawkes, Michael O'Riordan, Athanasios Vlontzos, Oriol Corcoll,
Ciar\'an Mark Gilligan-Lee | The Hardness of Validating Observational Studies with Experimental Data | Published at AISTATS 2025 | null | null | null | stat.ML cs.LG stat.ME | http://creativecommons.org/licenses/by/4.0/ | Observational data is often readily available in large quantities, but can
lead to biased causal effect estimates due to the presence of unobserved
confounding. Recent works attempt to remove this bias by supplementing
observational data with experimental data, which, when available, is typically
on a smaller scale due to the time and cost involved in running a randomised
controlled trial. In this work, we prove a theorem that places fundamental
limits on this ``best of both worlds'' approach. Using the framework of
impossible inference, we show that although it is possible to use experimental
data to \emph{falsify} causal effect estimates from observational data, in
general it is not possible to \emph{validate} such estimates. Our theorem
proves that while experimental data can be used to detect bias in observational
studies, without additional assumptions on the smoothness of the correction
function, it can not be used to remove it. We provide a practical example of
such an assumption, developing a novel Gaussian Process based approach to
construct intervals which contain the true treatment effect with high
probability, both inside and outside of the support of the experimental data.
We demonstrate our methodology on both simulated and semi-synthetic datasets
and make the \href{https://github.com/Jakefawkes/Obs_and_exp_data}{code
available}.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 00:06:23 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Fawkes",
"Jake",
""
],
[
"O'Riordan",
"Michael",
""
],
[
"Vlontzos",
"Athanasios",
""
],
[
"Corcoll",
"Oriol",
""
],
[
"Gilligan-Lee",
"Ciarán Mark",
""
]
] | TITLE: The Hardness of Validating Observational Studies with Experimental Data
ABSTRACT: Observational data is often readily available in large quantities, but can
lead to biased causal effect estimates due to the presence of unobserved
confounding. Recent works attempt to remove this bias by supplementing
observational data with experimental data, which, when available, is typically
on a smaller scale due to the time and cost involved in running a randomised
controlled trial. In this work, we prove a theorem that places fundamental
limits on this ``best of both worlds'' approach. Using the framework of
impossible inference, we show that although it is possible to use experimental
data to \emph{falsify} causal effect estimates from observational data, in
general it is not possible to \emph{validate} such estimates. Our theorem
proves that while experimental data can be used to detect bias in observational
studies, without additional assumptions on the smoothness of the correction
function, it cannot be used to remove it. We provide a practical example of
such an assumption, developing a novel Gaussian Process based approach to
construct intervals which contain the true treatment effect with high
probability, both inside and outside of the support of the experimental data.
We demonstrate our methodology on both simulated and semi-synthetic datasets
and make the \href{https://github.com/Jakefawkes/Obs_and_exp_data}{code
available}.
|
2503.14799 | Fatemeh Dehrouyeh | Fatemeh Dehrouyeh, Ibrahim Shaer, Soodeh Nikan, Firouz Badrkhani
Ajaei, Abdallah Shami | Pruning-Based TinyML Optimization of Machine Learning Models for Anomaly
Detection in Electric Vehicle Charging Infrastructure | This paper has been accepted for presentation at IEEE ICC 2025. The
final published version will be available in the conference proceedings. The
implementation and code are available at:
https://github.com/Western-OC2-Lab/EVCI-Pruning | null | null | null | cs.LG eess.SP | http://creativecommons.org/licenses/by/4.0/ | With the growing need for real-time processing on IoT devices, optimizing
machine learning (ML) models' size, latency, and computational efficiency is
essential. This paper investigates a pruning method for anomaly detection in
resource-constrained environments, specifically targeting Electric Vehicle
Charging Infrastructure (EVCI). Using the CICEVSE2024 dataset, we trained and
optimized three models, namely Multi-Layer Perceptron (MLP), Long Short-Term
Memory (LSTM), and XGBoost, through hyperparameter tuning with Optuna, further refining
them using SHapley Additive exPlanations (SHAP)-based feature selection (FS)
and unstructured pruning techniques. The optimized models achieved significant
reductions in model size and inference times, with only a marginal impact on
their performance. Notably, our findings indicate that, in the context of EVCI,
pruning and FS can enhance computational efficiency while retaining critical
anomaly detection capabilities.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 00:18:37 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Dehrouyeh",
"Fatemeh",
""
],
[
"Shaer",
"Ibrahim",
""
],
[
"Nikan",
"Soodeh",
""
],
[
"Ajaei",
"Firouz Badrkhani",
""
],
[
"Shami",
"Abdallah",
""
]
] | TITLE: Pruning-Based TinyML Optimization of Machine Learning Models for Anomaly
Detection in Electric Vehicle Charging Infrastructure
ABSTRACT: With the growing need for real-time processing on IoT devices, optimizing
machine learning (ML) models' size, latency, and computational efficiency is
essential. This paper investigates a pruning method for anomaly detection in
resource-constrained environments, specifically targeting Electric Vehicle
Charging Infrastructure (EVCI). Using the CICEVSE2024 dataset, we trained and
optimized three models, namely Multi-Layer Perceptron (MLP), Long Short-Term
Memory (LSTM), and XGBoost, through hyperparameter tuning with Optuna, further refining
them using SHapley Additive exPlanations (SHAP)-based feature selection (FS)
and unstructured pruning techniques. The optimized models achieved significant
reductions in model size and inference times, with only a marginal impact on
their performance. Notably, our findings indicate that, in the context of EVCI,
pruning and FS can enhance computational efficiency while retaining critical
anomaly detection capabilities.
|
2503.14803 | Michelle Blom | Michelle Blom, Alexander Ek, Peter J. Stuckey, Vanessa Teague, and
Damjan Vukcevic | 3+ Seat Risk-Limiting Audits for Single Transferable Vote Elections | null | null | null | null | cs.CY cs.CR cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constructing efficient risk-limiting audits (RLAs) for multiwinner single
transferable vote (STV) elections is a challenging problem. An STV RLA is
designed to statistically verify that the reported winners of an election did
indeed win according to the voters' expressed preferences and not due to
mistabulation or interference, while limiting the risk of accepting an
incorrect outcome to a desired threshold (the risk limit). Existing methods
have shown that it is possible to form RLAs for two-seat STV elections in the
context where the first seat has been awarded to a candidate in the first round
of tabulation. This is called the first winner criterion. We present an
assertion-based approach to conducting full or partial RLAs for STV elections
with three or more seats, in which the first winner criterion is satisfied.
Although the chance of forming a full audit that verifies all winners drops
substantially as the number of seats increases, we show that we can quite often
form partial audits that verify most, and sometimes all, of the reported
winners. We evaluate our method on a dataset of over 500 three- and four-seat
STV elections from the 2017 and 2022 local council elections in Scotland.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 00:29:53 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Blom",
"Michelle",
""
],
[
"Ek",
"Alexander",
""
],
[
"Stuckey",
"Peter J.",
""
],
[
"Teague",
"Vanessa",
""
],
[
"Vukcevic",
"Damjan",
""
]
] | TITLE: 3+ Seat Risk-Limiting Audits for Single Transferable Vote Elections
ABSTRACT: Constructing efficient risk-limiting audits (RLAs) for multiwinner single
transferable vote (STV) elections is a challenging problem. An STV RLA is
designed to statistically verify that the reported winners of an election did
indeed win according to the voters' expressed preferences and not due to
mistabulation or interference, while limiting the risk of accepting an
incorrect outcome to a desired threshold (the risk limit). Existing methods
have shown that it is possible to form RLAs for two-seat STV elections in the
context where the first seat has been awarded to a candidate in the first round
of tabulation. This is called the first winner criterion. We present an
assertion-based approach to conducting full or partial RLAs for STV elections
with three or more seats, in which the first winner criterion is satisfied.
Although the chance of forming a full audit that verifies all winners drops
substantially as the number of seats increases, we show that we can quite often
form partial audits that verify most, and sometimes all, of the reported
winners. We evaluate our method on a dataset of over 500 three- and four-seat
STV elections from the 2017 and 2022 local council elections in Scotland.
|
2503.14823 | Mitsuo Oka | Mitsuo Oka, Tai D. Phan, Marit {\O}ieroset, Daniel J. Gershman, Roy B.
Torbert, James L. Burch, and Vassilis Angelopoulos | Scaling of Particle Heating in Shocks and Magnetic Reconnection | 15 pages, 8 figures; accepted for publication in Astrophysical
Journal | null | null | null | physics.plasm-ph physics.space-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Particles are heated efficiently through energy conversion processes such as
shocks and magnetic reconnection in collisionless plasma environments. While
empirical scaling laws for the temperature increase have been obtained, the
precise mechanism of energy partition between ions and electrons remains
unclear. Here we show, based on coupled theoretical and observational scaling
analyses, that the temperature increase, $\Delta T$, depends linearly on three
factors: the available magnetic energy per particle, the Alfv\'{e}n Mach number
(or reconnection rate), and the characteristic spatial scale $L$. Based on
statistical datasets obtained from Earth's plasma environment, we find that $L$
is on the order of (1) the ion gyro-radius for ion heating at shocks, (2) the
ion inertial length for ion heating in magnetic reconnection, and (3) the
hybrid inertial length for electron heating in both shocks and magnetic
reconnection. With these scales, we derive the ion-to-electron ratios of
temperature increase as $\Delta T_{\rm i}/\Delta T_{\rm e} = (3\beta_{\rm
i}/2)^{1/2}(m_{\rm i}/m_{\rm e})^{1/4}$ for shocks and $\Delta T_{\rm i}/\Delta
T_{\rm e} = (m_{\rm i}/m_{\rm e})^{1/4}$ for magnetic reconnection, where
$\beta_{\rm i}$ is the ion plasma beta, and $m_{\rm i}$ and $ m_{\rm e}$ are
the ion and electron particle masses, respectively. We anticipate that this
study will serve as a starting point for a better understanding of particle
heating in space plasmas, enabling more sophisticated modeling of its scaling
and universality.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 01:43:10 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Oka",
"Mitsuo",
""
],
[
"Phan",
"Tai D.",
""
],
[
"Øieroset",
"Marit",
""
],
[
"Gershman",
"Daniel J.",
""
],
[
"Torbert",
"Roy B.",
""
],
[
"Burch",
"James L.",
""
],
[
"Angelopoulos",
"Vassilis",
""
]
] | TITLE: Scaling of Particle Heating in Shocks and Magnetic Reconnection
ABSTRACT: Particles are heated efficiently through energy conversion processes such as
shocks and magnetic reconnection in collisionless plasma environments. While
empirical scaling laws for the temperature increase have been obtained, the
precise mechanism of energy partition between ions and electrons remains
unclear. Here we show, based on coupled theoretical and observational scaling
analyses, that the temperature increase, $\Delta T$, depends linearly on three
factors: the available magnetic energy per particle, the Alfv\'{e}n Mach number
(or reconnection rate), and the characteristic spatial scale $L$. Based on
statistical datasets obtained from Earth's plasma environment, we find that $L$
is on the order of (1) the ion gyro-radius for ion heating at shocks, (2) the
ion inertial length for ion heating in magnetic reconnection, and (3) the
hybrid inertial length for electron heating in both shocks and magnetic
reconnection. With these scales, we derive the ion-to-electron ratios of
temperature increase as $\Delta T_{\rm i}/\Delta T_{\rm e} = (3\beta_{\rm
i}/2)^{1/2}(m_{\rm i}/m_{\rm e})^{1/4}$ for shocks and $\Delta T_{\rm i}/\Delta
T_{\rm e} = (m_{\rm i}/m_{\rm e})^{1/4}$ for magnetic reconnection, where
$\beta_{\rm i}$ is the ion plasma beta, and $m_{\rm i}$ and $ m_{\rm e}$ are
the ion and electron particle masses, respectively. We anticipate that this
study will serve as a starting point for a better understanding of particle
heating in space plasmas, enabling more sophisticated modeling of its scaling
and universality.
|
2503.14824 | Zikun Zhou | Zikun Zhou, Yushuai Sun, Wenjie Pei, Xin Li, Yaowei Wang | Prototype Perturbation for Relaxing Alignment Constraints in
Backward-Compatible Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The traditional paradigm to update retrieval models requires re-computing the
embeddings of the gallery data, a time-consuming and computationally intensive
process known as backfilling. To circumvent backfilling, Backward-Compatible
Learning (BCL) has been widely explored, which aims to train a new model
compatible with the old one. Many previous works focus on effectively aligning
the embeddings of the new model with those of the old one to enhance the
backward-compatibility. Nevertheless, such strong alignment constraints would
compromise the discriminative ability of the new model, particularly when
different classes are closely clustered and hard to distinguish in the old
feature space. To address this issue, we propose to relax the constraints by
introducing perturbations to the old feature prototypes. This allows us to
align the new feature space with a pseudo-old feature space defined by these
perturbed prototypes, thereby preserving the discriminative ability of the new
model in backward-compatible learning. We have developed two approaches for
calculating the perturbations: Neighbor-Driven Prototype Perturbation (NDPP)
and Optimization-Driven Prototype Perturbation (ODPP). Particularly, they take
into account the feature distributions of not only the old but also the new
models to obtain proper perturbations along with new model updating. Extensive
experiments on the landmark and commodity datasets demonstrate that our
approaches perform favorably against state-of-the-art BCL algorithms.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 01:45:48 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhou",
"Zikun",
""
],
[
"Sun",
"Yushuai",
""
],
[
"Pei",
"Wenjie",
""
],
[
"Li",
"Xin",
""
],
[
"Wang",
"Yaowei",
""
]
] | TITLE: Prototype Perturbation for Relaxing Alignment Constraints in
Backward-Compatible Learning
ABSTRACT: The traditional paradigm to update retrieval models requires re-computing the
embeddings of the gallery data, a time-consuming and computationally intensive
process known as backfilling. To circumvent backfilling, Backward-Compatible
Learning (BCL) has been widely explored, which aims to train a new model
compatible with the old one. Many previous works focus on effectively aligning
the embeddings of the new model with those of the old one to enhance the
backward-compatibility. Nevertheless, such strong alignment constraints would
compromise the discriminative ability of the new model, particularly when
different classes are closely clustered and hard to distinguish in the old
feature space. To address this issue, we propose to relax the constraints by
introducing perturbations to the old feature prototypes. This allows us to
align the new feature space with a pseudo-old feature space defined by these
perturbed prototypes, thereby preserving the discriminative ability of the new
model in backward-compatible learning. We have developed two approaches for
calculating the perturbations: Neighbor-Driven Prototype Perturbation (NDPP)
and Optimization-Driven Prototype Perturbation (ODPP). Particularly, they take
into account the feature distributions of not only the old but also the new
models to obtain proper perturbations along with new model updating. Extensive
experiments on the landmark and commodity datasets demonstrate that our
approaches perform favorably against state-of-the-art BCL algorithms.
|
2503.14831 | Sojeong Park | Sojeong Park, Hyeonho Noh, Hyun Jong Yang | Robust Transmission of Punctured Text with Large Language Model-based
Recovery | This work has been submitted to the IEEE for possible publication | null | null | null | eess.SP cs.LG | http://creativecommons.org/licenses/by/4.0/ | With the recent advancements in deep learning, semantic communication which
transmits only task-oriented features, has rapidly emerged. However, since
feature extraction relies on learning-based models, its performance
fundamentally depends on the training dataset or tasks. For practical
scenarios, it is essential to design a model that demonstrates robust
performance regardless of dataset or tasks. In this correspondence, we propose
a novel text transmission model that selects and transmits only a few
characters and recovers the missing characters at the receiver using a large
language model (LLM). Additionally, we propose a novel importance character
extractor (ICE), which selects transmitted characters to enhance LLM recovery
performance. Simulations demonstrate that the proposed filter selection by ICE
outperforms random filter selection, which selects transmitted characters
randomly. Moreover, the proposed model exhibits robust performance across
different datasets and tasks and outperforms traditional bit-based
communication in low signal-to-noise ratio conditions.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 02:16:08 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Park",
"Sojeong",
""
],
[
"Noh",
"Hyeonho",
""
],
[
"Yang",
"Hyun Jong",
""
]
] | TITLE: Robust Transmission of Punctured Text with Large Language Model-based
Recovery
ABSTRACT: With the recent advancements in deep learning, semantic communication, which
transmits only task-oriented features, has rapidly emerged. However, since
feature extraction relies on learning-based models, its performance
fundamentally depends on the training dataset or tasks. For practical
scenarios, it is essential to design a model that demonstrates robust
performance regardless of dataset or tasks. In this correspondence, we propose
a novel text transmission model that selects and transmits only a few
characters and recovers the missing characters at the receiver using a large
language model (LLM). Additionally, we propose a novel importance character
extractor (ICE), which selects transmitted characters to enhance LLM recovery
performance. Simulations demonstrate that the proposed filter selection by ICE
outperforms random filter selection, which selects transmitted characters
randomly. Moreover, the proposed model exhibits robust performance across
different datasets and tasks and outperforms traditional bit-based
communication in low signal-to-noise ratio conditions.
|
2503.14833 | Zihao Liu | Zihao Liu, Xing Liu, Yizhai Zhang, Zhengxiong Liu, Panfeng Huang | Curiosity-Diffuser: Curiosity Guide Diffusion Models for Reliability | null | null | null | null | cs.RO cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | One of the bottlenecks in robotic intelligence is the instability of neural
network models, which, unlike control models, lack a well-defined convergence
domain and stability. This leads to risks when applying intelligence in the
physical world. Specifically, imitation policies based on neural networks may
generate hallucinations, leading to inaccurate behaviors that impact the safety
of real-world applications. To address this issue, this paper proposes the
Curiosity-Diffuser, aimed at guiding the conditional diffusion model to
generate trajectories with lower curiosity, thereby improving the reliability
of the policy. The core idea is to use a Random Network Distillation (RND)
curiosity module to assess whether the model's behavior aligns with the
training data, and then minimize curiosity by classifier guidance diffusion to
reduce overgeneralization during inference. Additionally, we propose a
computationally efficient metric for evaluating the reliability of the policy,
measuring the similarity between the generated behaviors and the training
dataset, to facilitate research on reliability learning. Finally, simulations
verify the effectiveness and applicability of the proposed method to a variety
of scenarios, showing that Curiosity-Diffuser significantly improves task
performance and produces behaviors that are more similar to the training data.
The code for this work is available at: github.com/CarlDegio/Curiosity-Diffuser
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 02:25:36 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Liu",
"Zihao",
""
],
[
"Liu",
"Xing",
""
],
[
"Zhang",
"Yizhai",
""
],
[
"Liu",
"Zhengxiong",
""
],
[
"Huang",
"Panfeng",
""
]
] | TITLE: Curiosity-Diffuser: Curiosity Guide Diffusion Models for Reliability
ABSTRACT: One of the bottlenecks in robotic intelligence is the instability of neural
network models, which, unlike control models, lack a well-defined convergence
domain and stability. This leads to risks when applying intelligence in the
physical world. Specifically, imitation policies based on neural networks may
generate hallucinations, leading to inaccurate behaviors that impact the safety
of real-world applications. To address this issue, this paper proposes the
Curiosity-Diffuser, aimed at guiding the conditional diffusion model to
generate trajectories with lower curiosity, thereby improving the reliability
of the policy. The core idea is to use a Random Network Distillation (RND)
curiosity module to assess whether the model's behavior aligns with the
training data, and then minimize curiosity by classifier guidance diffusion to
reduce overgeneralization during inference. Additionally, we propose a
computationally efficient metric for evaluating the reliability of the policy,
measuring the similarity between the generated behaviors and the training
dataset, to facilitate research on reliability learning. Finally, simulations
verify the effectiveness and applicability of the proposed method to a variety
of scenarios, showing that Curiosity-Diffuser significantly improves task
performance and produces behaviors that are more similar to the training data.
The code for this work is available at: github.com/CarlDegio/Curiosity-Diffuser
|
2503.14836 | Kunyang Li | Kunyang Li, Jean-Charles Noirot Ferrand, Ryan Sheatsley, Blaine Hoak,
Yohan Beugin, Eric Pauley, Patrick McDaniel | On the Robustness Tradeoff in Fine-Tuning | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Fine-tuning has become the standard practice for adapting pre-trained
(upstream) models to downstream tasks. However, the impact on model robustness
is not well understood. In this work, we characterize the robustness-accuracy
trade-off in fine-tuning. We evaluate the robustness and accuracy of fine-tuned
models over 6 benchmark datasets and 7 different fine-tuning strategies. We
observe a consistent trade-off between adversarial robustness and accuracy.
Peripheral updates such as BitFit are more effective for simple tasks--over 75%
above the average measured with area under the Pareto frontiers on CIFAR-10 and
CIFAR-100. In contrast, fine-tuning information-heavy layers, such as attention
layers via Compacter, achieves a better Pareto frontier on more complex
tasks--57.5% and 34.6% above the average on Caltech-256 and CUB-200,
respectively. Lastly, we observe that robustness of fine-tuning against
out-of-distribution data closely tracks accuracy. These insights emphasize the
need for robustness-aware fine-tuning to ensure reliable real-world
deployments.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 02:35:01 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Li",
"Kunyang",
""
],
[
"Ferrand",
"Jean-Charles Noirot",
""
],
[
"Sheatsley",
"Ryan",
""
],
[
"Hoak",
"Blaine",
""
],
[
"Beugin",
"Yohan",
""
],
[
"Pauley",
"Eric",
""
],
[
"McDaniel",
"Patrick",
""
]
] | TITLE: On the Robustness Tradeoff in Fine-Tuning
ABSTRACT: Fine-tuning has become the standard practice for adapting pre-trained
(upstream) models to downstream tasks. However, the impact on model robustness
is not well understood. In this work, we characterize the robustness-accuracy
trade-off in fine-tuning. We evaluate the robustness and accuracy of fine-tuned
models over 6 benchmark datasets and 7 different fine-tuning strategies. We
observe a consistent trade-off between adversarial robustness and accuracy.
Peripheral updates such as BitFit are more effective for simple tasks--over 75%
above the average measured with area under the Pareto frontiers on CIFAR-10 and
CIFAR-100. In contrast, fine-tuning information-heavy layers, such as attention
layers via Compacter, achieves a better Pareto frontier on more complex
tasks--57.5% and 34.6% above the average on Caltech-256 and CUB-200,
respectively. Lastly, we observe that robustness of fine-tuning against
out-of-distribution data closely tracks accuracy. These insights emphasize the
need for robustness-aware fine-tuning to ensure reliable real-world
deployments.
|
2503.14837 | Yinqi Chen | Yinqi Chen, Meiying Zhang, Qi Hao, Guang Zhou | SemanticFlow: A Self-Supervised Framework for Joint Scene Flow
Prediction and Instance Segmentation in Dynamic Environments | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate perception of dynamic traffic scenes is crucial for high-level
autonomous driving systems, requiring robust object motion estimation and
instance segmentation. However, traditional methods often treat them as
separate tasks, leading to suboptimal performance, spatio-temporal
inconsistencies, and inefficiency in complex scenarios due to the absence of
information sharing. This paper proposes a multi-task SemanticFlow framework to
simultaneously predict scene flow and instance segmentation of full-resolution
point clouds. The novelty of this work is threefold: 1) developing a
coarse-to-fine prediction based multi-task scheme, where an initial coarse
segmentation of static backgrounds and dynamic objects is used to provide
contextual information for refining motion and semantic information through a
shared feature processing module; 2) developing a set of loss functions to
enhance the performance of scene flow estimation and instance segmentation,
which can help ensure spatial and temporal consistency of both static and
dynamic objects within traffic scenes; 3) developing a self-supervised learning
scheme, which utilizes coarse segmentation to detect rigid objects and compute
their transformation matrices between sequential frames, enabling the
generation of self-supervised labels. The proposed framework is validated on
the Argoverse and Waymo datasets, demonstrating superior performance in
instance segmentation accuracy, scene flow estimation, and computational
efficiency, establishing a new benchmark for self-supervised methods in dynamic
scene understanding.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 02:43:19 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Chen",
"Yinqi",
""
],
[
"Zhang",
"Meiying",
""
],
[
"Hao",
"Qi",
""
],
[
"Zhou",
"Guang",
""
]
] | TITLE: SemanticFlow: A Self-Supervised Framework for Joint Scene Flow
Prediction and Instance Segmentation in Dynamic Environments
ABSTRACT: Accurate perception of dynamic traffic scenes is crucial for high-level
autonomous driving systems, requiring robust object motion estimation and
instance segmentation. However, traditional methods often treat them as
separate tasks, leading to suboptimal performance, spatio-temporal
inconsistencies, and inefficiency in complex scenarios due to the absence of
information sharing. This paper proposes a multi-task SemanticFlow framework to
simultaneously predict scene flow and instance segmentation of full-resolution
point clouds. The novelty of this work is threefold: 1) developing a
coarse-to-fine prediction based multi-task scheme, where an initial coarse
segmentation of static backgrounds and dynamic objects is used to provide
contextual information for refining motion and semantic information through a
shared feature processing module; 2) developing a set of loss functions to
enhance the performance of scene flow estimation and instance segmentation,
which can help ensure spatial and temporal consistency of both static and
dynamic objects within traffic scenes; 3) developing a self-supervised learning
scheme, which utilizes coarse segmentation to detect rigid objects and compute
their transformation matrices between sequential frames, enabling the
generation of self-supervised labels. The proposed framework is validated on
the Argoverse and Waymo datasets, demonstrating superior performance in
instance segmentation accuracy, scene flow estimation, and computational
efficiency, establishing a new benchmark for self-supervised methods in dynamic
scene understanding.
|
2503.14838 | Chengran Yang | Chengran Yang, Zhensu Sun, Hong Jin Kang, Jieke Shi, David Lo | Think Like Human Developers: Harnessing Community Knowledge for
Structured Code Reasoning | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have significantly advanced automated code
generation, yet they struggle with complex coding tasks requiring multi-step
logical reasoning. High-quality reasoning data is crucial for improving LLMs'
reasoning capabilities, but such datasets remain scarce. Existing approaches
either rely on computationally expensive reinforcement learning (RL) or
error-prone reasoning chains synthesized by LLMs, posing challenges in
scalability and accuracy.
To address this challenge, we propose SVRC (Structured and Validated
Reasoning Chains for Code Generation), a novel framework that mines,
restructures, and enriches reasoning chains from community-driven discussions
on software engineering platforms. SVRC refines unstructured and incomplete
discussions of coding problems by aligning them with Software Development Life
Cycle (SDLC) principles, ensuring that reasoning chains capture real-world
problem-solving strategies and support iterative refinement.
To evaluate the effectiveness of SVRC, we introduce CodeThinker, an LLM
fine-tuned on 12,444 reasoning-augmented samples generated by SVRC. Experiments
on LiveCodeBench show that CodeThinker surpasses its base model by 42.86\% on
medium-level code problems in terms of pass@1 and outperforms GPT-4o-mini and
GPT-4o by 73.14\% and 115.86\%, respectively. Our ablation study further
highlights that each component of SVRC contributes to the reasoning
capabilities of CodeThinker.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 02:45:13 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Yang",
"Chengran",
""
],
[
"Sun",
"Zhensu",
""
],
[
"Kang",
"Hong Jin",
""
],
[
"Shi",
"Jieke",
""
],
[
"Lo",
"David",
""
]
] | TITLE: Think Like Human Developers: Harnessing Community Knowledge for
Structured Code Reasoning
ABSTRACT: Large Language Models (LLMs) have significantly advanced automated code
generation, yet they struggle with complex coding tasks requiring multi-step
logical reasoning. High-quality reasoning data is crucial for improving LLMs'
reasoning capabilities, but such datasets remain scarce. Existing approaches
either rely on computationally expensive reinforcement learning (RL) or
error-prone reasoning chains synthesized by LLMs, posing challenges in
scalability and accuracy.
To address this challenge, we propose SVRC (Structured and Validated
Reasoning Chains for Code Generation), a novel framework that mines,
restructures, and enriches reasoning chains from community-driven discussions
on software engineering platforms. SVRC refines unstructured and incomplete
discussions of coding problems by aligning them with Software Development Life
Cycle (SDLC) principles, ensuring that reasoning chains capture real-world
problem-solving strategies and support iterative refinement.
To evaluate the effectiveness of SVRC, we introduce CodeThinker, an LLM
fine-tuned on 12,444 reasoning-augmented samples generated by SVRC. Experiments
on LiveCodeBench show that CodeThinker surpasses its base model by 42.86\% on
medium-level code problems in terms of pass@1 and outperforms GPT-4o-mini and
GPT-4o by 73.14\% and 115.86\%, respectively. Our ablation study further
highlights that each component of SVRC contributes to the reasoning
capabilities of CodeThinker.
|
2503.14849 | Zhuoyi Yang | Zhuoyi Yang and Ian G. Harris | LogLLaMA: Transformer-based log anomaly detection with LLaMA | 8 pages, 5 figures | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | Log anomaly detection is the task of distinguishing anomalous log
messages from normal log messages. Transformer-based large language models
(LLMs) are becoming popular for log anomaly detection because of their superb
ability to understand complex and long language patterns. In this paper, we
propose LogLLaMA, a novel framework that leverages LLaMA2. LogLLaMA is first
finetuned on normal log messages from three large-scale datasets to learn their
patterns. After finetuning, the model is capable of generating successive log
messages given previous log messages. Our generative model is further trained
to identify anomalous log messages using reinforcement learning (RL). The
experimental results show that LogLLaMA outperforms the state-of-the-art
approaches for anomaly detection on BGL, Thunderbird, and HDFS datasets.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 03:13:37 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Yang",
"Zhuoyi",
""
],
[
"Harris",
"Ian G.",
""
]
] | TITLE: LogLLaMA: Transformer-based log anomaly detection with LLaMA
ABSTRACT: Log anomaly detection is the task of distinguishing anomalous log
messages from normal log messages. Transformer-based large language models
(LLMs) are becoming popular for log anomaly detection because of their superb
ability to understand complex and long language patterns. In this paper, we
propose LogLLaMA, a novel framework that leverages LLaMA2. LogLLaMA is first
finetuned on normal log messages from three large-scale datasets to learn their
patterns. After finetuning, the model is capable of generating successive log
messages given previous log messages. Our generative model is further trained
to identify anomalous log messages using reinforcement learning (RL). The
experimental results show that LogLLaMA outperforms the state-of-the-art
approaches for anomaly detection on BGL, Thunderbird, and HDFS datasets.
|
2503.14852 | Lam Nguyen Tung | Lam Nguyen Tung, Xiaoning Du, Neelofar Neelofar, Aldeida Aleti | UntrustVul: An Automated Approach for Identifying Untrustworthy Alerts
in Vulnerability Detection Models | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Machine learning (ML) has shown promise in detecting vulnerabilities. To
review vulnerabilities detected by ML predictions, developers manually assess
suspicious lines in their interpretations. However, studies have revealed that
these models often learn and predict based on irrelevant features frequently
appearing in vulnerable code. This leads to predictions that may correctly flag
vulnerable functions but for the wrong reasons, which we call untrustworthy.
These predictions can mislead developers, hindering them from locating the
vulnerabilities. This increases the effort of manual assessment and, worse,
risks creating flawed patches that fail to address existing vulnerabilities and
even introduce new ones. Hence, automated approaches are needed to detect
untrustworthy predictions, preventing overlooked vulnerabilities and
alleviating the burden of manual assessment.
We propose UntrustVul, the first automated approach to identify untrustworthy
vulnerability predictions. Given a vulnerability prediction during inference,
UntrustVul systematically assesses whether suspicious lines annotated by the
prediction are vulnerability-unrelated. It simulates developers' rationales,
considering a line unrelated if (1) it is absent from historical
vulnerabilities and (2) it cannot reach any vulnerabilities in execution flows.
UntrustVul assesses (1) by analysing its syntactic meaning using deep
representations to determine whether it is syntax-benign. To assess (2),
UntrustVul traces dependencies of the syntax-benign lines on other suspicious
lines using static and rule-based analyses. We evaluate UntrustVul on 155K
vulnerability predictions by four models across three datasets. UntrustVul
effectively detects untrustworthy predictions with an F1-score of 82%-94% and
helps improve the ability of models to detect vulnerabilities by up to 321% in
F1-score and 100% in trustworthiness.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 03:18:45 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Tung",
"Lam Nguyen",
""
],
[
"Du",
"Xiaoning",
""
],
[
"Neelofar",
"Neelofar",
""
],
[
"Aleti",
"Aldeida",
""
]
] | TITLE: UntrustVul: An Automated Approach for Identifying Untrustworthy Alerts
in Vulnerability Detection Models
ABSTRACT: Machine learning (ML) has shown promise in detecting vulnerabilities. To
review vulnerabilities detected by ML predictions, developers manually assess
suspicious lines in their interpretations. However, studies have revealed that
these models often learn and predict based on irrelevant features frequently
appearing in vulnerable code. This leads to predictions that may correctly flag
vulnerable functions but for the wrong reasons, which we call untrustworthy.
These predictions can mislead developers, hindering them from locating the
vulnerabilities. This increases the effort of manual assessment and, worse,
risks creating flawed patches that fail to address existing vulnerabilities and
even introduce new ones. Hence, automated approaches are needed to detect
untrustworthy predictions, preventing overlooked vulnerabilities and
alleviating the burden of manual assessment.
We propose UntrustVul, the first automated approach to identify untrustworthy
vulnerability predictions. Given a vulnerability prediction during inference,
UntrustVul systematically assesses whether suspicious lines annotated by the
prediction are vulnerability-unrelated. It simulates developers' rationales,
considering a line unrelated if (1) it is absent from historical
vulnerabilities and (2) it cannot reach any vulnerabilities in execution flows.
UntrustVul assesses (1) by analysing its syntactic meaning using deep
representations to determine whether it is syntax-benign. To assess (2),
UntrustVul traces dependencies of the syntax-benign lines on other suspicious
lines using static and rule-based analyses. We evaluate UntrustVul on 155K
vulnerability predictions by four models across three datasets. UntrustVul
effectively detects untrustworthy predictions with an F1-score of 82%-94% and
helps improve the ability of models to detect vulnerabilities by up to 321% in
F1-score and 100% in trustworthiness.
|
2503.14860 | Caleb Robinson | Caleb Robinson, Anthony Ortiz, Allen Kim, Rahul Dodhia, Andrew Zolli,
Shivaprakash K Nagaraju, James Oakleaf, Joe Kiesecker, Juan M. Lavista Ferres | Global Renewables Watch: A Temporal Dataset of Solar and Wind Energy
Derived from Satellite Imagery | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present a comprehensive global temporal dataset of commercial solar
photovoltaic (PV) farms and onshore wind turbines, derived from high-resolution
satellite imagery analyzed quarterly from the fourth quarter of 2017 to the
second quarter of 2024. We create this dataset by training deep learning-based
segmentation models to identify these renewable energy installations from
satellite imagery, then deploy them on over 13 trillion pixels covering the
world. For each detected feature, we estimate the construction date and the
preceding land use type. This dataset offers crucial insights into progress
toward sustainable development goals and serves as a valuable resource for
policymakers, researchers, and stakeholders aiming to assess and promote
effective strategies for renewable energy deployment. Our final spatial dataset
includes 375,197 individual wind turbines and 86,410 solar PV installations. We
aggregate our predictions to the country level -- estimating total power
capacity based on construction date, solar PV area, and number of windmills --
and find an $r^2$ value of $0.96$ and $0.93$ for solar PV and onshore wind
respectively compared to IRENA's most recent 2023 country-level capacity
estimates.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 03:38:43 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Robinson",
"Caleb",
""
],
[
"Ortiz",
"Anthony",
""
],
[
"Kim",
"Allen",
""
],
[
"Dodhia",
"Rahul",
""
],
[
"Zolli",
"Andrew",
""
],
[
"Nagaraju",
"Shivaprakash K",
""
],
[
"Oakleaf",
"James",
""
],
[
"Kiesecker",
"Joe",
""
],
[
"Ferres",
"Juan M. Lavista",
""
]
] | TITLE: Global Renewables Watch: A Temporal Dataset of Solar and Wind Energy
Derived from Satellite Imagery
ABSTRACT: We present a comprehensive global temporal dataset of commercial solar
photovoltaic (PV) farms and onshore wind turbines, derived from high-resolution
satellite imagery analyzed quarterly from the fourth quarter of 2017 to the
second quarter of 2024. We create this dataset by training deep learning-based
segmentation models to identify these renewable energy installations from
satellite imagery, then deploy them on over 13 trillion pixels covering the
world. For each detected feature, we estimate the construction date and the
preceding land use type. This dataset offers crucial insights into progress
toward sustainable development goals and serves as a valuable resource for
policymakers, researchers, and stakeholders aiming to assess and promote
effective strategies for renewable energy deployment. Our final spatial dataset
includes 375,197 individual wind turbines and 86,410 solar PV installations. We
aggregate our predictions to the country level -- estimating total power
capacity based on construction date, solar PV area, and number of windmills --
and find an $r^2$ value of $0.96$ and $0.93$ for solar PV and onshore wind
respectively compared to IRENA's most recent 2023 country-level capacity
estimates.
|
2503.14867 | Caoshuo Li | Caoshuo Li, Tanzhe Li, Xiaobin Hu, Donghao Luo, Taisong Jin | DVHGNN: Multi-Scale Dilated Vision HGNN for Efficient Vision Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Vision Graph Neural Network (ViG) has gained considerable attention
in computer vision. Despite its groundbreaking innovation, Vision Graph Neural
Network encounters key issues including the quadratic computational complexity
caused by its K-Nearest Neighbor (KNN) graph construction and the limitation of
pairwise relations of normal graphs. To address the aforementioned challenges,
we propose a novel vision architecture, termed Dilated Vision HyperGraph Neural
Network (DVHGNN), which is designed to leverage multi-scale hypergraphs to
efficiently capture high-order correlations among objects. Specifically, the
proposed method tailors Clustering and Dilated HyperGraph Construction (DHGC)
to adaptively capture multi-scale dependencies among the data samples.
Furthermore, a dynamic hypergraph convolution mechanism is proposed to
facilitate adaptive feature exchange and fusion at the hypergraph level.
Extensive qualitative and quantitative evaluations of the benchmark image
datasets demonstrate that the proposed DVHGNN significantly outperforms the
state-of-the-art vision backbones. For instance, our DVHGNN-S achieves an
impressive top-1 accuracy of 83.1% on ImageNet-1K, surpassing ViG-S by +1.0%
and ViHGNN-S by +0.6%.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 03:45:23 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Li",
"Caoshuo",
""
],
[
"Li",
"Tanzhe",
""
],
[
"Hu",
"Xiaobin",
""
],
[
"Luo",
"Donghao",
""
],
[
"Jin",
"Taisong",
""
]
] | TITLE: DVHGNN: Multi-Scale Dilated Vision HGNN for Efficient Vision Recognition
ABSTRACT: Recently, Vision Graph Neural Network (ViG) has gained considerable attention
in computer vision. Despite its groundbreaking innovation, Vision Graph Neural
Network encounters key issues including the quadratic computational complexity
caused by its K-Nearest Neighbor (KNN) graph construction and the limitation of
pairwise relations of normal graphs. To address the aforementioned challenges,
we propose a novel vision architecture, termed Dilated Vision HyperGraph Neural
Network (DVHGNN), which is designed to leverage multi-scale hypergraphs to
efficiently capture high-order correlations among objects. Specifically, the
proposed method tailors Clustering and Dilated HyperGraph Construction (DHGC)
to adaptively capture multi-scale dependencies among the data samples.
Furthermore, a dynamic hypergraph convolution mechanism is proposed to
facilitate adaptive feature exchange and fusion at the hypergraph level.
Extensive qualitative and quantitative evaluations of the benchmark image
datasets demonstrate that the proposed DVHGNN significantly outperforms the
state-of-the-art vision backbones. For instance, our DVHGNN-S achieves an
impressive top-1 accuracy of 83.1% on ImageNet-1K, surpassing ViG-S by +1.0%
and ViHGNN-S by +0.6%.
|
2503.14873 | Mojtaba Mohasel | Seyed Mojtaba Mohasel, Hamidreza Koosha | Robust Support Vector Machines for Imbalanced and Noisy Data via Benders
Decomposition | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | This study introduces a novel formulation to enhance Support Vector Machines
(SVMs) in handling class imbalance and noise. Unlike the conventional Soft
Margin SVM, which penalizes the magnitude of constraint violations, the
proposed model quantifies the number of violations and aims to minimize their
frequency. To achieve this, a binary variable is incorporated into the
objective function of the primal SVM formulation, replacing the traditional
slack variable. Furthermore, each misclassified sample is assigned a priority
and an associated constraint. The resulting formulation is a mixed-integer
programming model, efficiently solved using Benders decomposition. The proposed
model's performance was benchmarked against existing models, including Soft
Margin SVM, weighted SVM, and NuSVC. Two primary hypotheses were examined: 1)
The proposed model improves the F1-score for the minority class in imbalanced
classification tasks. 2) The proposed model enhances classification accuracy in
noisy datasets. These hypotheses were evaluated using a Wilcoxon test across
multiple publicly available datasets from the OpenML repository. The results
supported both hypotheses (\( p < 0.05 \)). In addition, the proposed model
exhibited several interesting properties, such as improved robustness to noise,
a decision boundary shift favoring the minority class, a reduced number of
support vectors, and decreased prediction time. The open-source Python
implementation of the proposed SVM model is available.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 04:03:39 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Mohasel",
"Seyed Mojtaba",
""
],
[
"Koosha",
"Hamidreza",
""
]
] | TITLE: Robust Support Vector Machines for Imbalanced and Noisy Data via Benders
Decomposition
ABSTRACT: This study introduces a novel formulation to enhance Support Vector Machines
(SVMs) in handling class imbalance and noise. Unlike the conventional Soft
Margin SVM, which penalizes the magnitude of constraint violations, the
proposed model quantifies the number of violations and aims to minimize their
frequency. To achieve this, a binary variable is incorporated into the
objective function of the primal SVM formulation, replacing the traditional
slack variable. Furthermore, each misclassified sample is assigned a priority
and an associated constraint. The resulting formulation is a mixed-integer
programming model, efficiently solved using Benders decomposition. The proposed
model's performance was benchmarked against existing models, including Soft
Margin SVM, weighted SVM, and NuSVC. Two primary hypotheses were examined: 1)
The proposed model improves the F1-score for the minority class in imbalanced
classification tasks. 2) The proposed model enhances classification accuracy in
noisy datasets. These hypotheses were evaluated using a Wilcoxon test across
multiple publicly available datasets from the OpenML repository. The results
supported both hypotheses (\( p < 0.05 \)). In addition, the proposed model
exhibited several interesting properties, such as improved robustness to noise,
a decision boundary shift favoring the minority class, a reduced number of
support vectors, and decreased prediction time. The open-source Python
implementation of the proposed SVM model is available.
|
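The counting objective described in the row above can be sketched as a standard big-M mixed-integer program (this exact form, with penalty weight \( C \) and big-M constant \( M \), is an assumption; the paper's formulation additionally attaches priorities and constraints to misclassified samples):

```latex
\begin{aligned}
\min_{w,\,b,\,z}\quad & \tfrac{1}{2}\lVert w\rVert^{2} + C \sum_{i=1}^{n} z_i \\
\text{s.t.}\quad & y_i\,(w^{\top} x_i + b) \;\ge\; 1 - M z_i, \qquad i = 1,\dots,n, \\
& z_i \in \{0,1\}, \qquad i = 1,\dots,n,
\end{aligned}
```

where \( z_i = 1 \) flags a violated margin constraint, so the objective counts violations rather than summing slack magnitudes; the binary \( z \) is also what makes Benders decomposition natural, with a master problem over \( z \) and a continuous subproblem over \( (w, b) \).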
2503.14905 | Estrid He | Estrid He, Tabinda Sarwar, Ibrahim Khalil, Xun Yi, and Ke Wang | Deep Contrastive Unlearning for Language Models | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The past few years have witnessed the great success of large language
models, demonstrating powerful capabilities in comprehending textual data and
generating human-like languages. Large language models achieve success by being
trained on vast amounts of textual data, including online sources with
copyrighted content and user-generated knowledge. However, this comes at a
cost: the potential risk of exposing users' privacy and violating copyright
protections. Thus, to safeguard individuals' "right to be forgotten", there has
been increasing interest in machine unlearning -- the process of removing
information carried by particular training samples from a model while not
deteriorating its predictive quality. This is a challenging task due to the
black-box nature of language models. Most existing studies focus on mitigating
the impact of those forgotten samples on a model's outputs, and do not
explicitly consider the geometric distributions of samples in the latent space
of a model. To address this issue, we propose a machine unlearning framework,
named Deep Contrastive Unlearning for fine-Tuning (DeepCUT) language models.
Our proposed model achieves machine unlearning by directly optimizing the
latent space of a model. Comprehensive experiments on real-world datasets
demonstrate the effectiveness and efficiency of DeepCUT with consistent and
significant improvement over baseline methods.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 04:58:45 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"He",
"Estrid",
""
],
[
"Sarwar",
"Tabinda",
""
],
[
"Khalil",
"Ibrahim",
""
],
[
"Yi",
"Xun",
""
],
[
"Wang",
"Ke",
""
]
] | TITLE: Deep Contrastive Unlearning for Language Models
ABSTRACT: The past few years have witnessed the great success of large language
models, demonstrating powerful capabilities in comprehending textual data and
generating human-like languages. Large language models achieve success by being
trained on vast amounts of textual data, including online sources with
copyrighted content and user-generated knowledge. However, this comes at a
cost: the potential risk of exposing users' privacy and violating copyright
protections. Thus, to safeguard individuals' "right to be forgotten", there has
been increasing interest in machine unlearning -- the process of removing
information carried by particular training samples from a model while not
deteriorating its predictive quality. This is a challenging task due to the
black-box nature of language models. Most existing studies focus on mitigating
the impact of those forgotten samples on a model's outputs, and do not
explicitly consider the geometric distributions of samples in the latent space
of a model. To address this issue, we propose a machine unlearning framework,
named Deep Contrastive Unlearning for fine-Tuning (DeepCUT) language models.
Our proposed model achieves machine unlearning by directly optimizing the
latent space of a model. Comprehensive experiments on real-world datasets
demonstrate the effectiveness and efficiency of DeepCUT with consistent and
significant improvement over baseline methods.
|
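DeepCUT's exact objective is not given in the abstract above; as a purely hypothetical illustration of what "directly optimizing the latent space" to erase a sample can look like, the following NumPy sketch inverts a contrastive loss, treating embeddings from *other* classes as positives and the forget sample's own class as negatives (function name, similarity choice, and temperature are all assumptions):

```python
import numpy as np

def unlearn_contrastive_loss(z_forget, z_same, z_other, tau=0.1):
    """Toy inverted-contrastive objective: pull the forget sample's
    embedding toward other-class embeddings (positives) and away from
    its own class (negatives), erasing its identity in latent space."""
    def sim(a, b):
        # cosine similarity between rows of a and rows of b
        a = a / np.linalg.norm(a, axis=-1, keepdims=True)
        b = b / np.linalg.norm(b, axis=-1, keepdims=True)
        return a @ b.T

    pos = np.exp(sim(z_forget[None], z_other) / tau)  # other classes: positives
    neg = np.exp(sim(z_forget[None], z_same) / tau)   # own class: negatives
    return float(-np.log(pos.sum() / (pos.sum() + neg.sum())))
```

Minimizing this loss is low when the forget embedding already sits among other-class embeddings, and high while it still clusters with its original class.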
2503.14905 | Siwei Wen | Siwei Wen, Junyan Ye, Peilin Feng, Hengrui Kang, Zichen Wen, Yize
Chen, Jiang Wu, Wenjun Wu, Conghui He, Weijia Li | Spot the Fake: Large Multimodal Model-Based Synthetic Image Detection
with Artifact Explanation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid advancement of Artificial Intelligence Generated Content
(AIGC) technologies, synthetic images have become increasingly prevalent in
everyday life, posing new challenges for authenticity assessment and detection.
Despite the effectiveness of existing methods in evaluating image authenticity
and locating forgeries, these approaches often lack human interpretability and
do not fully address the growing complexity of synthetic data. To tackle these
challenges, we introduce FakeVLM, a specialized large multimodal model designed
for both general synthetic image and DeepFake detection tasks. FakeVLM not only
excels in distinguishing real from fake images but also provides clear, natural
language explanations for image artifacts, enhancing interpretability.
Additionally, we present FakeClue, a comprehensive dataset containing over
100,000 images across seven categories, annotated with fine-grained artifact
clues in natural language. FakeVLM demonstrates performance comparable to
expert models while eliminating the need for additional classifiers, making it
a robust solution for synthetic data detection. Extensive evaluations across
multiple datasets confirm the superiority of FakeVLM in both authenticity
classification and artifact explanation tasks, setting a new benchmark for
synthetic image detection. The dataset and code will be released at:
https://github.com/opendatalab/FakeVLM.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 05:14:44 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wen",
"Siwei",
""
],
[
"Ye",
"Junyan",
""
],
[
"Feng",
"Peilin",
""
],
[
"Kang",
"Hengrui",
""
],
[
"Wen",
"Zichen",
""
],
[
"Chen",
"Yize",
""
],
[
"Wu",
"Jiang",
""
],
[
"Wu",
"Wenjun",
""
],
[
"He",
"Conghui",
""
],
[
"Li",
"Weijia",
""
]
] | TITLE: Spot the Fake: Large Multimodal Model-Based Synthetic Image Detection
with Artifact Explanation
ABSTRACT: With the rapid advancement of Artificial Intelligence Generated Content
(AIGC) technologies, synthetic images have become increasingly prevalent in
everyday life, posing new challenges for authenticity assessment and detection.
Despite the effectiveness of existing methods in evaluating image authenticity
and locating forgeries, these approaches often lack human interpretability and
do not fully address the growing complexity of synthetic data. To tackle these
challenges, we introduce FakeVLM, a specialized large multimodal model designed
for both general synthetic image and DeepFake detection tasks. FakeVLM not only
excels in distinguishing real from fake images but also provides clear, natural
language explanations for image artifacts, enhancing interpretability.
Additionally, we present FakeClue, a comprehensive dataset containing over
100,000 images across seven categories, annotated with fine-grained artifact
clues in natural language. FakeVLM demonstrates performance comparable to
expert models while eliminating the need for additional classifiers, making it
a robust solution for synthetic data detection. Extensive evaluations across
multiple datasets confirm the superiority of FakeVLM in both authenticity
classification and artifact explanation tasks, setting a new benchmark for
synthetic image detection. The dataset and code will be released at:
https://github.com/opendatalab/FakeVLM.
|
2503.14906 | Yaofei Duan | Yaofei Duan, Tao Tan, Zhiyuan Zhu, Yuhao Huang, Yuanji Zhang, Rui Gao,
Patrick Cheong-Iao Pang, Xinru Gao, Guowei Tao, Xiang Cong, Zhou Li, Lianying
Liang, Guangzhi He, Linliang Yin, Xuedong Deng, Xin Yang and Dong Ni | FetalFlex: Anatomy-Guided Diffusion Model for Flexible Control on Fetal
Ultrasound Image Synthesis | 18 pages, 10 figures | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fetal ultrasound (US) examinations require the acquisition of multiple
planes, each providing unique diagnostic information to evaluate fetal
development and screen for congenital anomalies. However, obtaining a
comprehensive, multi-plane annotated fetal US dataset remains challenging,
particularly for rare or complex anomalies owing to their low incidence and
numerous subtypes. This poses difficulties in training novice radiologists and
developing robust AI models, especially for detecting abnormal fetuses. In this
study, we introduce a Flexible Fetal US image generation framework (FetalFlex)
to address these challenges, which leverages anatomical structures and
multimodal information to enable controllable synthesis of fetal US images
across diverse planes. Specifically, FetalFlex incorporates a pre-alignment
module to enhance controllability and introduces a repaint strategy to ensure
consistent texture and appearance. Moreover, a two-stage adaptive sampling
strategy is developed to progressively refine image quality from coarse to fine
levels. We believe that FetalFlex is the first method capable of generating
both in-distribution normal and out-of-distribution abnormal fetal US images,
without requiring any abnormal data. Experiments on multi-center datasets
demonstrate that FetalFlex achieved state-of-the-art performance across
multiple image quality metrics. A reader study further confirms the close
alignment of the generated results with expert visual assessments. Furthermore,
synthetic images by FetalFlex significantly improve the performance of six
typical deep models in downstream classification and anomaly detection tasks.
Lastly, FetalFlex's anatomy-level controllable generation offers a unique
advantage for anomaly simulation and creating paired or counterfactual data at
the pixel level. The demo is available at:
https://dyf1023.github.io/FetalFlex/.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 05:16:19 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Duan",
"Yaofei",
""
],
[
"Tan",
"Tao",
""
],
[
"Zhu",
"Zhiyuan",
""
],
[
"Huang",
"Yuhao",
""
],
[
"Zhang",
"Yuanji",
""
],
[
"Gao",
"Rui",
""
],
[
"Pang",
"Patrick Cheong-Iao",
""
],
[
"Gao",
"Xinru",
""
],
[
"Tao",
"Guowei",
""
],
[
"Cong",
"Xiang",
""
],
[
"Li",
"Zhou",
""
],
[
"Liang",
"Lianying",
""
],
[
"He",
"Guangzhi",
""
],
[
"Yin",
"Linliang",
""
],
[
"Deng",
"Xuedong",
""
],
[
"Yang",
"Xin",
""
],
[
"Ni",
"Dong",
""
]
] | TITLE: FetalFlex: Anatomy-Guided Diffusion Model for Flexible Control on Fetal
Ultrasound Image Synthesis
ABSTRACT: Fetal ultrasound (US) examinations require the acquisition of multiple
planes, each providing unique diagnostic information to evaluate fetal
development and screen for congenital anomalies. However, obtaining a
comprehensive, multi-plane annotated fetal US dataset remains challenging,
particularly for rare or complex anomalies owing to their low incidence and
numerous subtypes. This poses difficulties in training novice radiologists and
developing robust AI models, especially for detecting abnormal fetuses. In this
study, we introduce a Flexible Fetal US image generation framework (FetalFlex)
to address these challenges, which leverages anatomical structures and
multimodal information to enable controllable synthesis of fetal US images
across diverse planes. Specifically, FetalFlex incorporates a pre-alignment
module to enhance controllability and introduces a repaint strategy to ensure
consistent texture and appearance. Moreover, a two-stage adaptive sampling
strategy is developed to progressively refine image quality from coarse to fine
levels. We believe that FetalFlex is the first method capable of generating
both in-distribution normal and out-of-distribution abnormal fetal US images,
without requiring any abnormal data. Experiments on multi-center datasets
demonstrate that FetalFlex achieved state-of-the-art performance across
multiple image quality metrics. A reader study further confirms the close
alignment of the generated results with expert visual assessments. Furthermore,
synthetic images by FetalFlex significantly improve the performance of six
typical deep models in downstream classification and anomaly detection tasks.
Lastly, FetalFlex's anatomy-level controllable generation offers a unique
advantage for anomaly simulation and creating paired or counterfactual data at
the pixel level. The demo is available at:
https://dyf1023.github.io/FetalFlex/.
|
2503.14908 | Haoyu Chen | Haoyu Chen, Xiaojie Xu, Wenbo Li, Jingjing Ren, Tian Ye, Songhua Liu,
Ying-Cong Chen, Lei Zhu, Xinchao Wang | POSTA: A Go-to Framework for Customized Artistic Poster Generation | Accepted to CVPR 2025 | null | null | null | cs.GR cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Poster design is a critical medium for visual communication. Prior work has
explored automatic poster design using deep learning techniques, but these
approaches lack text accuracy, user customization, and aesthetic appeal,
limiting their applicability in artistic domains such as movies and
exhibitions, where both clear content delivery and visual impact are essential.
To address these limitations, we present POSTA: a modular framework powered by
diffusion models and multimodal large language models (MLLMs) for customized
artistic poster generation. The framework consists of three modules. Background
Diffusion creates a themed background based on user input. Design MLLM then
generates layout and typography elements that align with and complement the
background style. Finally, to enhance the poster's aesthetic appeal, ArtText
Diffusion applies additional stylization to key text elements. The final result
is a visually cohesive and appealing poster, with a fully modular process that
allows for complete customization. To train our models, we develop the
PosterArt dataset, comprising high-quality artistic posters annotated with
layout, typography, and pixel-level stylized text segmentation. Our
comprehensive experimental analysis demonstrates POSTA's exceptional
controllability and design diversity, outperforming existing models in both
text accuracy and aesthetic quality.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 05:22:38 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Chen",
"Haoyu",
""
],
[
"Xu",
"Xiaojie",
""
],
[
"Li",
"Wenbo",
""
],
[
"Ren",
"Jingjing",
""
],
[
"Ye",
"Tian",
""
],
[
"Liu",
"Songhua",
""
],
[
"Chen",
"Ying-Cong",
""
],
[
"Zhu",
"Lei",
""
],
[
"Wang",
"Xinchao",
""
]
] | TITLE: POSTA: A Go-to Framework for Customized Artistic Poster Generation
ABSTRACT: Poster design is a critical medium for visual communication. Prior work has
explored automatic poster design using deep learning techniques, but these
approaches lack text accuracy, user customization, and aesthetic appeal,
limiting their applicability in artistic domains such as movies and
exhibitions, where both clear content delivery and visual impact are essential.
To address these limitations, we present POSTA: a modular framework powered by
diffusion models and multimodal large language models (MLLMs) for customized
artistic poster generation. The framework consists of three modules. Background
Diffusion creates a themed background based on user input. Design MLLM then
generates layout and typography elements that align with and complement the
background style. Finally, to enhance the poster's aesthetic appeal, ArtText
Diffusion applies additional stylization to key text elements. The final result
is a visually cohesive and appealing poster, with a fully modular process that
allows for complete customization. To train our models, we develop the
PosterArt dataset, comprising high-quality artistic posters annotated with
layout, typography, and pixel-level stylized text segmentation. Our
comprehensive experimental analysis demonstrates POSTA's exceptional
controllability and design diversity, outperforming existing models in both
text accuracy and aesthetic quality.
|
2503.14911 | Siyuan Yan | Siyuan Yan, Ming Hu, Yiwen Jiang, Xieji Li, Hao Fei, Philipp Tschandl,
Harald Kittler, Zongyuan Ge | Derm1M: A Million-scale Vision-Language Dataset Aligned with Clinical
Ontology Knowledge for Dermatology | 23 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The emergence of vision-language models has transformed medical AI, enabling
unprecedented advances in diagnostic capability and clinical applications.
However, progress in dermatology has lagged behind other medical domains due to
the lack of standard image-text pairs. Existing dermatological datasets are
limited in both scale and depth, offering only single-label annotations across
a narrow range of diseases instead of rich textual descriptions, and lacking
the crucial clinical context needed for real-world applications. To address
these limitations, we present Derm1M, the first large-scale vision-language
dataset for dermatology, comprising 1,029,761 image-text pairs. Built from
diverse educational resources and structured around a standard ontology
collaboratively developed by experts, Derm1M provides comprehensive coverage
for over 390 skin conditions across four hierarchical levels and 130 clinical
concepts with rich contextual information such as medical history, symptoms,
and skin tone. To demonstrate Derm1M's potential in advancing both AI research
and clinical application, we pretrained a series of CLIP-like models,
collectively called DermLIP, on this dataset. The DermLIP family significantly
outperforms state-of-the-art foundation models on eight diverse datasets across
multiple tasks, including zero-shot skin disease classification, clinical and
artifacts concept identification, few-shot/full-shot learning, and cross-modal
retrieval. Our dataset and code will be public.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 05:30:01 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Yan",
"Siyuan",
""
],
[
"Hu",
"Ming",
""
],
[
"Jiang",
"Yiwen",
""
],
[
"Li",
"Xieji",
""
],
[
"Fei",
"Hao",
""
],
[
"Tschandl",
"Philipp",
""
],
[
"Kittler",
"Harald",
""
],
[
"Ge",
"Zongyuan",
""
]
] | TITLE: Derm1M: A Million-scale Vision-Language Dataset Aligned with Clinical
Ontology Knowledge for Dermatology
ABSTRACT: The emergence of vision-language models has transformed medical AI, enabling
unprecedented advances in diagnostic capability and clinical applications.
However, progress in dermatology has lagged behind other medical domains due to
the lack of standard image-text pairs. Existing dermatological datasets are
limited in both scale and depth, offering only single-label annotations across
a narrow range of diseases instead of rich textual descriptions, and lacking
the crucial clinical context needed for real-world applications. To address
these limitations, we present Derm1M, the first large-scale vision-language
dataset for dermatology, comprising 1,029,761 image-text pairs. Built from
diverse educational resources and structured around a standard ontology
collaboratively developed by experts, Derm1M provides comprehensive coverage
for over 390 skin conditions across four hierarchical levels and 130 clinical
concepts with rich contextual information such as medical history, symptoms,
and skin tone. To demonstrate Derm1M's potential in advancing both AI research
and clinical application, we pretrained a series of CLIP-like models,
collectively called DermLIP, on this dataset. The DermLIP family significantly
outperforms state-of-the-art foundation models on eight diverse datasets across
multiple tasks, including zero-shot skin disease classification, clinical and
artifacts concept identification, few-shot/full-shot learning, and cross-modal
retrieval. Our dataset and code will be public.
|
2503.14917 | Jiazheng Li | Jiazheng Li, Lu Yu, Qing Cui, Zhiqiang Zhang, Jun Zhou, Yanfang Ye,
Chuxu Zhang | MASS: Mathematical Data Selection via Skill Graphs for Pretraining Large
Language Models | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | High-quality data plays a critical role in the pretraining and fine-tuning of
large language models (LLMs), even determining their performance ceiling to
some degree. Consequently, numerous data selection methods have been proposed
to identify subsets of data that can effectively and efficiently enhance model
performance. However, most of these methods focus on general data selection and
tend to overlook the specific nuances of domain-related data. In this paper, we
introduce MASS, a \textbf{MA}thematical data \textbf{S}election framework using
the \textbf{S}kill graph for pretraining LLMs in the mathematical reasoning
domain. By taking into account the unique characteristics of mathematics and
reasoning, we construct a skill graph that captures the mathematical skills and
their interrelations from a reference dataset. This skill graph guides us in
assigning quality scores to the target dataset, enabling us to select the
top-ranked subset which is further used to pretrain LLMs. Experimental results
demonstrate the efficiency and effectiveness of MASS across different model
sizes (1B and 7B) and pretraining datasets (web data and synthetic data).
Specifically, in terms of efficiency, models trained on subsets selected by
MASS can achieve similar performance to models trained on the original
datasets, with a significant reduction in the number of trained tokens -
ranging from 50\% to 70\% fewer tokens. In terms of effectiveness, when trained
on the same amount of tokens, models trained on the data selected by MASS
outperform those trained on the original datasets by 3.3\% to 5.9\%. These
results underscore the potential of MASS to improve both the efficiency and
effectiveness of pretraining LLMs.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 05:50:21 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Li",
"Jiazheng",
""
],
[
"Yu",
"Lu",
""
],
[
"Cui",
"Qing",
""
],
[
"Zhang",
"Zhiqiang",
""
],
[
"Zhou",
"Jun",
""
],
[
"Ye",
"Yanfang",
""
],
[
"Zhang",
"Chuxu",
""
]
] | TITLE: MASS: Mathematical Data Selection via Skill Graphs for Pretraining Large
Language Models
ABSTRACT: High-quality data plays a critical role in the pretraining and fine-tuning of
large language models (LLMs), even determining their performance ceiling to
some degree. Consequently, numerous data selection methods have been proposed
to identify subsets of data that can effectively and efficiently enhance model
performance. However, most of these methods focus on general data selection and
tend to overlook the specific nuances of domain-related data. In this paper, we
introduce MASS, a \textbf{MA}thematical data \textbf{S}election framework using
the \textbf{S}kill graph for pretraining LLMs in the mathematical reasoning
domain. By taking into account the unique characteristics of mathematics and
reasoning, we construct a skill graph that captures the mathematical skills and
their interrelations from a reference dataset. This skill graph guides us in
assigning quality scores to the target dataset, enabling us to select the
top-ranked subset which is further used to pretrain LLMs. Experimental results
demonstrate the efficiency and effectiveness of MASS across different model
sizes (1B and 7B) and pretraining datasets (web data and synthetic data).
Specifically, in terms of efficiency, models trained on subsets selected by
MASS can achieve similar performance to models trained on the original
datasets, with a significant reduction in the number of trained tokens -
ranging from 50\% to 70\% fewer tokens. In terms of effectiveness, when trained
on the same amount of tokens, models trained on the data selected by MASS
outperform those trained on the original datasets by 3.3\% to 5.9\%. These
results underscore the potential of MASS to improve both the efficiency and
effectiveness of pretraining LLMs.
|
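The abstract above does not specify how the skill graph turns into per-sample quality scores; one plausible toy version, with all names and the coverage-plus-cohesion scoring rule being assumptions, rewards samples whose skills are both numerous and densely connected in the graph:

```python
import numpy as np

def skill_graph_score(sample_skills, adjacency):
    """Hypothetical quality score for one training sample.
    sample_skills: list of skill indices the sample exercises.
    adjacency: symmetric (n_skills, n_skills) edge-weight matrix."""
    if not sample_skills:
        return 0.0
    idx = np.array(sample_skills)
    coverage = len(idx)                # breadth: number of skills touched
    sub = adjacency[np.ix_(idx, idx)]  # induced subgraph of those skills
    cohesion = sub.sum() / 2.0         # total edge weight among them
    return float(coverage + cohesion)

def select_top_k(samples_skills, adjacency, k):
    """Rank samples by score and keep the indices of the top k."""
    scores = [skill_graph_score(s, adjacency) for s in samples_skills]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
```

The selected subset (the `select_top_k` output) would then play the role of the top-ranked pretraining data described above.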
2503.14919 | Junyu Shi | Junyu Shi and Lijiang Liu and Yong Sun and Zhiyuan Zhang and Jinni
Zhou and Qiang Nie | GenM$^3$: Generative Pretrained Multi-path Motion Model for Text
Conditional Human Motion Generation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scaling up motion datasets is crucial to enhance motion generation
capabilities. However, training on large-scale multi-source datasets introduces
data heterogeneity challenges due to variations in motion content. To address
this, we propose Generative Pretrained Multi-path Motion Model (GenM$^3$), a
comprehensive framework designed to learn unified motion representations.
GenM$^3$ comprises two components: 1) a Multi-Expert VQ-VAE (MEVQ-VAE) that
adapts to different dataset distributions to learn a unified discrete motion
representation, and 2) a Multi-path Motion Transformer (MMT) that improves
intra-modal representations by using separate modality-specific pathways, each
with densely activated experts to accommodate variations within that modality,
and improves inter-modal alignment by the text-motion shared pathway. To enable
large-scale training, we integrate and unify 11 high-quality motion datasets
(approximately 220 hours of motion data) and augment it with textual
annotations (nearly 10,000 motion sequences labeled by a large language model
and 300+ by human experts). After training on our integrated dataset, GenM$^3$
achieves a state-of-the-art FID of 0.035 on the HumanML3D benchmark, surpassing
state-of-the-art methods by a large margin. It also demonstrates strong
zero-shot generalization on the IDEA400 dataset, highlighting its effectiveness and
adaptability across diverse motion scenarios.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 05:56:52 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Shi",
"Junyu",
""
],
[
"Liu",
"Lijiang",
""
],
[
"Sun",
"Yong",
""
],
[
"Zhang",
"Zhiyuan",
""
],
[
"Zhou",
"Jinni",
""
],
[
"Nie",
"Qiang",
""
]
] | TITLE: GenM$^3$: Generative Pretrained Multi-path Motion Model for Text
Conditional Human Motion Generation
ABSTRACT: Scaling up motion datasets is crucial to enhance motion generation
capabilities. However, training on large-scale multi-source datasets introduces
data heterogeneity challenges due to variations in motion content. To address
this, we propose Generative Pretrained Multi-path Motion Model (GenM$^3$), a
comprehensive framework designed to learn unified motion representations.
GenM$^3$ comprises two components: 1) a Multi-Expert VQ-VAE (MEVQ-VAE) that
adapts to different dataset distributions to learn a unified discrete motion
representation, and 2) a Multi-path Motion Transformer (MMT) that improves
intra-modal representations by using separate modality-specific pathways, each
with densely activated experts to accommodate variations within that modality,
and improves inter-modal alignment by the text-motion shared pathway. To enable
large-scale training, we integrate and unify 11 high-quality motion datasets
(approximately 220 hours of motion data) and augment them with textual
annotations (nearly 10,000 motion sequences labeled by a large language model
and 300+ by human experts). After training on our integrated dataset, GenM$^3$
achieves a state-of-the-art FID of 0.035 on the HumanML3D benchmark, surpassing
state-of-the-art methods by a large margin. It also demonstrates strong
zero-shot generalization on the IDEA400 dataset, highlighting its effectiveness and
adaptability across diverse motion scenarios.
|
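MEVQ-VAE's multi-expert details are not spelled out in the abstract above, but its core is the generic VQ-VAE quantization step: snap each continuous latent to its nearest codebook entry. A minimal NumPy sketch (routing by `dataset_id` is a guess at what "adapts to different dataset distributions" means):

```python
import numpy as np

def quantize(z, codebook):
    """Nearest-neighbour lookup at the heart of any VQ-VAE: map each
    continuous latent vector (row of z) to its closest codebook entry."""
    # squared Euclidean distances between every latent and every code
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]

def multi_expert_quantize(z, codebooks, dataset_id):
    """Hypothetical multi-expert routing: pick the codebook belonging to
    the sample's source dataset, then quantize as usual."""
    return quantize(z, codebooks[dataset_id])
```

A shared decoder over the returned discrete indices is what would give the unified motion representation described above.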