id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt
string | string | string | string | string | string | string | string | string | string | string | list | timestamp[s] | sequence | string
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.18013 | Yufei Zhan | Yufei Zhan, Yousong Zhu, Shurong Zheng, Hongyin Zhao, Fan Yang, Ming
Tang, Jinqiao Wang | Vision-R1: Evolving Human-Free Alignment in Large Vision-Language Models
via Vision-Guided Reinforcement Learning | Project in development. Github:
https://github.com/jefferyZhan/Griffon/tree/master/Vision-R1 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Vision-Language Models (LVLMs) typically follow a two-stage training
paradigm: pretraining and supervised fine-tuning. Recently, preference
optimization, derived from the language domain, has emerged as an effective
post-training reinforcement strategy to enhance capabilities of LVLMs. However,
constructing high-quality human-annotated preference data and developing robust
reward models to mimic these preferences are both costly and challenging.
Motivated by this observation, we propose Vision-R1, a novel vision-guided
R1-like reinforcement learning algorithm for LVLMs that rewards models with
definitive vision feedback. It leverages only curated instruction data,
eliminating the need for specialized reward models and handcrafted preference
datasets. We incorporate a criterion-driven reward function that further
integrates multi-dimensional feedback to evaluate model completions
comprehensively based on the vision task logic. Furthermore, we introduce a
progressive rule refinement strategy that dynamically adjusts the reward
criteria during training, enabling continuous model improvement and mitigating
reward hacking. Extensive experiments on both in-distribution and
out-of-distribution benchmarks demonstrate that fine-tuning the 7B LVLMs with
Vision-R1 achieves consistent performance gains, with improvements of up to 50%,
even surpassing a state-of-the-art model 10x its size.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 10:21:14 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhan",
"Yufei",
""
],
[
"Zhu",
"Yousong",
""
],
[
"Zheng",
"Shurong",
""
],
[
"Zhao",
"Hongyin",
""
],
[
"Yang",
"Fan",
""
],
[
"Tang",
"Ming",
""
],
[
"Wang",
"Jinqiao",
""
]
] | TITLE: Vision-R1: Evolving Human-Free Alignment in Large Vision-Language Models
via Vision-Guided Reinforcement Learning
ABSTRACT: Large Vision-Language Models (LVLMs) typically follow a two-stage training
paradigm: pretraining and supervised fine-tuning. Recently, preference
optimization, derived from the language domain, has emerged as an effective
post-training reinforcement strategy to enhance capabilities of LVLMs. However,
constructing high-quality human-annotated preference data and developing robust
reward models to mimic these preferences are both costly and challenging.
Motivated by this observation, we propose Vision-R1, a novel vision-guided
R1-like reinforcement learning algorithm for LVLMs that rewards models with
definitive vision feedback. It leverages only curated instruction data,
eliminating the need for specialized reward models and handcrafted preference
datasets. We incorporate a criterion-driven reward function that further
integrates multi-dimensional feedback to evaluate model completions
comprehensively based on the vision task logic. Furthermore, we introduce a
progressive rule refinement strategy that dynamically adjusts the reward
criteria during training, enabling continuous model improvement and mitigating
reward hacking. Extensive experiments on both in-distribution and
out-of-distribution benchmarks demonstrate that fine-tuning the 7B LVLMs with
Vision-R1 achieves consistent performance gains, with improvements of up to 50%,
even surpassing a state-of-the-art model 10x its size.
|
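The criterion-driven, multi-dimensional reward that the Vision-R1 abstract describes can be pictured with a short sketch. This is a minimal illustration assuming localization-style outputs: the specific criteria (format, IoU, count), their weights, and the progressive threshold schedule are assumptions for the example, not the paper's released reward function.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]

def box_iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda bx: (bx[2] - bx[0]) * (bx[3] - bx[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def vision_reward(pred_boxes: List[Box], gt_boxes: List[Box],
                  step: int, total_steps: int) -> float:
    # Multi-dimensional feedback: format, localization, and count criteria.
    r_format = 1.0 if pred_boxes else 0.0
    ious = [box_iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    r_loc = sum(ious) / max(len(gt_boxes), 1)
    r_count = 1.0 if len(pred_boxes) == len(gt_boxes) else 0.0
    # Progressive rule refinement: tighten the IoU criterion over training.
    threshold = 0.5 + 0.25 * (step / total_steps)
    r_strict = sum(iou >= threshold for iou in ious) / max(len(ious), 1)
    # Weights are assumed for illustration only.
    return 0.1 * r_format + 0.4 * r_loc + 0.2 * r_count + 0.3 * r_strict
```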
2503.18018 | Aabid Karim | Aabid Karim, Abdul Karim, Bhoomika Lohana, Matt Keon, Jaswinder Singh,
Abdul Sattar | Lost in Cultural Translation: Do LLMs Struggle with Math Across Cultural
Contexts? | null | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have significantly advanced various fields,
particularly coding, mathematical reasoning, and logical problem solving.
However, a critical question remains: Do these mathematical reasoning abilities
persist when LLMs are presented with culturally adapted math problems?
Specifically, how do LLMs perform when faced with math problems embedded in
cultural contexts that have no significant representation in mainstream
web-scale AI training data? To explore this, we generated six synthetic
cultural datasets from GSM8K, a widely used benchmark for assessing LLMs'
mathematical reasoning skills. While preserving the mathematical logic and
numerical values of the original GSM8K test set, we modify cultural elements
such as personal names, food items, place names, etc. These culturally adapted
datasets provide a more reliable framework for evaluating LLMs' mathematical
reasoning under shifting cultural contexts. Our findings reveal that LLMs
struggle with math problems when cultural references change, even though the
underlying mathematical structure remains constant. Smaller models exhibit
greater performance drops compared to larger models. Interestingly, our results
also suggest that cultural familiarity can enhance mathematical reasoning. Even
models with no explicit mathematical training but exposure to relevant cultural
contexts sometimes outperform larger, mathematically proficient models on
culturally embedded math problems. This study highlights the impact of cultural
context on the mathematical reasoning abilities of LLMs, underscoring the need
for more diverse and representative training data to improve robustness in
real-world applications. The benchmark datasets and scripts for reproducing the
results are available at
https://github.com/akarim23131/Lost_in_Cultural_Translation
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 10:35:39 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Karim",
"Aabid",
""
],
[
"Karim",
"Abdul",
""
],
[
"Lohana",
"Bhoomika",
""
],
[
"Keon",
"Matt",
""
],
[
"Singh",
"Jaswinder",
""
],
[
"Sattar",
"Abdul",
""
]
] | TITLE: Lost in Cultural Translation: Do LLMs Struggle with Math Across Cultural
Contexts?
ABSTRACT: Large Language Models (LLMs) have significantly advanced various fields,
particularly coding, mathematical reasoning, and logical problem solving.
However, a critical question remains: Do these mathematical reasoning abilities
persist when LLMs are presented with culturally adapted math problems?
Specifically, how do LLMs perform when faced with math problems embedded in
cultural contexts that have no significant representation in mainstream
web-scale AI training data? To explore this, we generated six synthetic
cultural datasets from GSM8K, a widely used benchmark for assessing LLMs'
mathematical reasoning skills. While preserving the mathematical logic and
numerical values of the original GSM8K test set, we modify cultural elements
such as personal names, food items, place names, etc. These culturally adapted
datasets provide a more reliable framework for evaluating LLMs' mathematical
reasoning under shifting cultural contexts. Our findings reveal that LLMs
struggle with math problems when cultural references change, even though the
underlying mathematical structure remains constant. Smaller models exhibit
greater performance drops compared to larger models. Interestingly, our results
also suggest that cultural familiarity can enhance mathematical reasoning. Even
models with no explicit mathematical training but exposure to relevant cultural
contexts sometimes outperform larger, mathematically proficient models on
culturally embedded math problems. This study highlights the impact of cultural
context on the mathematical reasoning abilities of LLMs, underscoring the need
for more diverse and representative training data to improve robustness in
real-world applications. The benchmark datasets and scripts for reproducing the
results are available at
https://github.com/akarim23131/Lost_in_Cultural_Translation
|
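The dataset construction step above, swapping culturally specific surface elements while freezing the math, can be sketched as a simple substitution pass. The substitution table below is a hypothetical example, not the authors' released transformation data.

```python
import re

# Culturally adapt a GSM8K-style item: names, foods, places, and currency
# are swapped while every number is kept, so the mathematical structure of
# the problem is unchanged.
CULTURAL_MAP = {
    "John": "Aarav",
    "Sarah": "Priya",
    "pizza": "biryani",
    "dollars": "rupees",
    "New York": "Mumbai",
}

def adapt(problem: str) -> str:
    for src, dst in CULTURAL_MAP.items():
        problem = re.sub(rf"\b{re.escape(src)}\b", dst, problem)
    return problem

original = "John buys a pizza in New York for 24 dollars and splits it 3 ways."
print(adapt(original))
# -> "Aarav buys a biryani in Mumbai for 24 rupees and splits it 3 ways."
```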
2503.18037 | YongKeun Park | Dohyeon Lee, Juyeon Park, Juheon Lee, Chungha Lee, YongKeun Park | Compression benchmarking of holotomography data using the OME-Zarr
storage format | null | null | null | null | physics.optics | http://creativecommons.org/licenses/by/4.0/ | Holotomography (HT) is a label-free, three-dimensional imaging technique that
captures refractive index distributions of biological samples at sub-micron
resolution. As modern HT systems enable high-throughput and large-scale
acquisition, they produce terabyte-scale datasets that require efficient data
management. This study presents a systematic benchmarking of data compression
strategies for HT data stored in the OME-Zarr format, a cloud-compatible,
chunked data structure suitable for scalable imaging workflows. Using
representative datasets, including embryo, tissue, and birefringent tissue
volumes, we evaluated combinations of preprocessing filters and 25 compression
configurations across multiple compression levels. Performance was assessed in
terms of compression ratio, bandwidth, and decompression speed. A
throughput-based evaluation metric was introduced to simulate real-world
conditions under varying network constraints, supporting optimal compressor
selection based on system bandwidth. The results offer practical guidance for
storage and transmission of large HT datasets and serve as a reference for
implementing scalable, FAIR-aligned imaging workflows in cloud and
high-performance computing environments.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 11:49:30 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lee",
"Dohyeon",
""
],
[
"Park",
"Juyeon",
""
],
[
"Lee",
"Juheon",
""
],
[
"Lee",
"Chungha",
""
],
[
"Park",
"YongKeun",
""
]
] | TITLE: Compression benchmarking of holotomography data using the OME-Zarr
storage format
ABSTRACT: Holotomography (HT) is a label-free, three-dimensional imaging technique that
captures refractive index distributions of biological samples at sub-micron
resolution. As modern HT systems enable high-throughput and large-scale
acquisition, they produce terabyte-scale datasets that require efficient data
management. This study presents a systematic benchmarking of data compression
strategies for HT data stored in the OME-Zarr format, a cloud-compatible,
chunked data structure suitable for scalable imaging workflows. Using
representative datasets, including embryo, tissue, and birefringent tissue
volumes, we evaluated combinations of preprocessing filters and 25 compression
configurations across multiple compression levels. Performance was assessed in
terms of compression ratio, bandwidth, and decompression speed. A
throughput-based evaluation metric was introduced to simulate real-world
conditions under varying network constraints, supporting optimal compressor
selection based on system bandwidth. The results offer practical guidance for
storage and transmission of large HT datasets and serve as a reference for
implementing scalable, FAIR-aligned imaging workflows in cloud and
high-performance computing environments.
|
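A miniature version of the compression sweep described above can be reproduced with zarr and numcodecs (assuming the zarr v2 API). The synthetic volume, chunking, and codec list are placeholder choices for illustration, not the study's 25 configurations on real HT data.

```python
import time
import numpy as np
import zarr
from numcodecs import Blosc, Zstd

# Synthetic refractive-index-like volume standing in for an HT acquisition.
volume = np.random.normal(1.35, 0.02, size=(128, 256, 256)).astype(np.float32)

compressors = {
    "blosc-zstd-5": Blosc(cname="zstd", clevel=5, shuffle=Blosc.BITSHUFFLE),
    "blosc-lz4-5": Blosc(cname="lz4", clevel=5, shuffle=Blosc.SHUFFLE),
    "zstd-5": Zstd(level=5),
}

for name, comp in compressors.items():
    t0 = time.perf_counter()
    z = zarr.array(volume, chunks=(64, 64, 64), compressor=comp)
    t_write = time.perf_counter() - t0
    t0 = time.perf_counter()
    _ = z[:]  # full decompression read
    t_read = time.perf_counter() - t0
    ratio = z.nbytes / z.nbytes_stored  # compression ratio
    print(f"{name}: ratio={ratio:.2f} write={t_write:.2f}s read={t_read:.2f}s")
```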
2503.18042 | Qiang Wang | Qiang Wang, Yuhang He, SongLin Dong, Xiang Song, Jizhou Han, Haoyu Luo
and Yihong Gong | DualCP: Rehearsal-Free Domain-Incremental Learning via Dual-Level
Concept Prototype | Accepted at AAAI 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Domain-Incremental Learning (DIL) enables vision models to adapt to changing
conditions in real-world environments while maintaining the knowledge acquired
from previous domains. Given privacy concerns and training time, Rehearsal-Free
DIL (RFDIL) is more practical. Inspired by the incremental cognitive process of
the human brain, we design Dual-level Concept Prototypes (DualCP) for each
class to address the conflict between learning new knowledge and retaining old
knowledge in RFDIL. To construct DualCP, we propose a Concept Prototype
Generator (CPG) that generates both coarse-grained and fine-grained prototypes
for each class. Additionally, we introduce a Coarse-to-Fine calibrator (C2F) to
align image features with DualCP. Finally, we propose a Dual Dot-Regression
(DDR) loss function to optimize our C2F module. Extensive experiments on the
DomainNet, CDDB, and CORe50 datasets demonstrate the effectiveness of our
method.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 12:06:35 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wang",
"Qiang",
""
],
[
"He",
"Yuhang",
""
],
[
"Dong",
"SongLin",
""
],
[
"Song",
"Xiang",
""
],
[
"Han",
"Jizhou",
""
],
[
"Luo",
"Haoyu",
""
],
[
"Gong",
"Yihong",
""
]
] | TITLE: DualCP: Rehearsal-Free Domain-Incremental Learning via Dual-Level
Concept Prototype
ABSTRACT: Domain-Incremental Learning (DIL) enables vision models to adapt to changing
conditions in real-world environments while maintaining the knowledge acquired
from previous domains. Given privacy concerns and training time, Rehearsal-Free
DIL (RFDIL) is more practical. Inspired by the incremental cognitive process of
the human brain, we design Dual-level Concept Prototypes (DualCP) for each
class to address the conflict between learning new knowledge and retaining old
knowledge in RFDIL. To construct DualCP, we propose a Concept Prototype
Generator (CPG) that generates both coarse-grained and fine-grained prototypes
for each class. Additionally, we introduce a Coarse-to-Fine calibrator (C2F) to
align image features with DualCP. Finally, we propose a Dual Dot-Regression
(DDR) loss function to optimize our C2F module. Extensive experiments on the
DomainNet, CDDB, and CORe50 datasets demonstrate the effectiveness of our
method.
|
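The dual dot-regression idea can be pictured with a generic dot-regression loss applied at both coarse and fine levels. The squared-error-toward-1 form below is a guess inferred from the abstract, not the paper's exact DDR loss.

```python
import torch
import torch.nn.functional as F

def dot_regression(features, prototypes, labels):
    # Regress the dot product between a normalized feature and its class
    # prototype toward a fixed target of 1.
    f = F.normalize(features, dim=-1)
    p = F.normalize(prototypes[labels], dim=-1)
    return ((f * p).sum(dim=-1) - 1.0).pow(2).mean()

def dual_ddr_loss(feats_coarse, feats_fine, proto_coarse, proto_fine,
                  labels, w=0.5):
    # "Dual" = the same loss at coarse- and fine-grained prototype levels;
    # the weighting w is an assumption for this sketch.
    return (w * dot_regression(feats_coarse, proto_coarse, labels)
            + (1 - w) * dot_regression(feats_fine, proto_fine, labels))

feats = torch.randn(8, 128)          # 8 samples, 128-dim features
protos = torch.randn(10, 128)        # 10 classes
labels = torch.randint(0, 10, (8,))
loss = dual_ddr_loss(feats, feats, protos, protos, labels)
```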
2503.18048 | Haoyi Xiong | Xiaochen Zhang and Haoyi Xiong | Interpretable Feature Interaction via Statistical Self-supervised
Learning on Tabular Data | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | In high-dimensional and high-stakes contexts, ensuring both rigorous
statistical guarantees and interpretability in feature extraction from complex
tabular data remains a formidable challenge. Traditional methods such as
Principal Component Analysis (PCA) reduce dimensionality and identify key
features that explain the most variance, but are constrained by their reliance
on linear assumptions. In contrast, neural networks offer assumption-free
feature extraction through self-supervised learning techniques such as
autoencoders, though their interpretability remains a challenge in fields
requiring transparency. To address this gap, this paper introduces Spofe, a
novel self-supervised machine learning pipeline that marries the power of
kernel principal components for capturing nonlinear dependencies with a sparse
and principled polynomial representation to achieve clear interpretability with
statistical rigor. Underpinning our approach is a robust theoretical framework
that delivers precise error bounds and rigorous false discovery rate (FDR)
control via a multi-objective knockoff selection procedure; it effectively
bridges the gap between data-driven complexity and statistical reliability via
three stages: (1) generating self-supervised signals using kernel principal
components to model complex patterns, (2) distilling these signals into sparse
polynomial functions for improved interpretability, and (3) applying a
multi-objective knockoff selection procedure with significance testing to
rigorously identify important features. Extensive experiments on diverse
real-world datasets demonstrate the effectiveness of Spofe, consistently
surpassing KPCA, SKPCA, and other methods in feature selection for regression
and classification tasks. Visualization and case studies highlight its ability
to uncover key insights, enhancing interpretability and practical utility.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 12:27:42 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Xiaochen",
""
],
[
"Xiong",
"Haoyi",
""
]
] | TITLE: Interpretable Feature Interaction via Statistical Self-supervised
Learning on Tabular Data
ABSTRACT: In high-dimensional and high-stakes contexts, ensuring both rigorous
statistical guarantees and interpretability in feature extraction from complex
tabular data remains a formidable challenge. Traditional methods such as
Principal Component Analysis (PCA) reduce dimensionality and identify key
features that explain the most variance, but are constrained by their reliance
on linear assumptions. In contrast, neural networks offer assumption-free
feature extraction through self-supervised learning techniques such as
autoencoders, though their interpretability remains a challenge in fields
requiring transparency. To address this gap, this paper introduces Spofe, a
novel self-supervised machine learning pipeline that marries the power of
kernel principal components for capturing nonlinear dependencies with a sparse
and principled polynomial representation to achieve clear interpretability with
statistical rigor. Underpinning our approach is a robust theoretical framework
that delivers precise error bounds and rigorous false discovery rate (FDR)
control via a multi-objective knockoff selection procedure; it effectively
bridges the gap between data-driven complexity and statistical reliability via
three stages: (1) generating self-supervised signals using kernel principal
components to model complex patterns, (2) distilling these signals into sparse
polynomial functions for improved interpretability, and (3) applying a
multi-objective knockoff selection procedure with significance testing to
rigorously identify important features. Extensive experiments on diverse
real-world datasets demonstrate the effectiveness of Spofe, consistently
surpassing KPCA, SKPCA, and other methods in feature selection for regression
and classification tasks. Visualization and case studies highlight its ability
to uncover key insights, enhancing interpretability and practical utility.
|
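The first two Spofe stages map directly onto standard scikit-learn components. A sketch under assumed hyperparameters, with the knockoff-based selection stage (3) omitted:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lasso

X = np.random.rand(200, 8)  # placeholder tabular data

# Stage 1: self-supervised signals from kernel principal components.
Z = KernelPCA(n_components=3, kernel="rbf", gamma=0.5).fit_transform(X)

# Stage 2: distill each component into a sparse degree-2 polynomial
# surrogate for interpretability; alpha is an illustrative assumption.
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)
for j in range(Z.shape[1]):
    surrogate = Lasso(alpha=0.01).fit(X_poly, Z[:, j])
    active = np.flatnonzero(surrogate.coef_)
    print(f"component {j}: {len(active)} active polynomial terms")
```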
2503.18050 | Hanwool Lee | Hanwool Lee | (G)I-DLE: Generative Inference via Distribution-preserving Logit
Exclusion with KL Divergence Minimization for Constrained Decoding | preprint | null | null | null | cs.CE cs.CL | http://creativecommons.org/licenses/by/4.0/ | We propose (G)I-DLE, a new approach to constrained decoding that leverages KL
divergence minimization to preserve the intrinsic conditional probability
distribution of autoregressive language models while excluding undesirable
tokens. Unlike conventional methods that naively set banned tokens' logits to
$-\infty$, which can distort the conversion from raw logits to posterior
probabilities and increase output variance, (G)I-DLE re-normalizes the allowed
token probabilities to minimize such distortion. We validate our method on the
K2-Eval dataset, specifically designed to assess Korean language fluency,
logical reasoning, and cultural appropriateness. Experimental results on
Qwen2.5 models (ranging from 1.5B to 14B) demonstrate that (G)I-DLE not only
boosts mean evaluation scores but also substantially reduces the variance of
output quality.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 12:37:14 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lee",
"Hanwool",
""
]
] | TITLE: (G)I-DLE: Generative Inference via Distribution-preserving Logit
Exclusion with KL Divergence Minimization for Constrained Decoding
ABSTRACT: We propose (G)I-DLE, a new approach to constrained decoding that leverages KL
divergence minimization to preserve the intrinsic conditional probability
distribution of autoregressive language models while excluding undesirable
tokens. Unlike conventional methods that naively set banned tokens' logits to
$-\infty$, which can distort the conversion from raw logits to posterior
probabilities and increase output variance, (G)I-DLE re-normalizes the allowed
token probabilities to minimize such distortion. We validate our method on the
K2-Eval dataset, specifically designed to assess Korean language fluency,
logical reasoning, and cultural appropriateness. Experimental results on
Qwen2.5 models (ranging from 1.5B to 14B) demonstrate that (G)I-DLE not only
boosts mean evaluation scores but also substantially reduces the variance of
output quality.
|
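The renormalization principle in the abstract has a compact form: among all distributions supported on the allowed tokens, rescaling the model's probabilities minimizes the KL divergence to the original distribution. A minimal sketch assuming plain softmax decoding; the paper's full procedure may differ.

```python
import torch

def constrained_distribution(logits: torch.Tensor,
                             banned: list[int]) -> torch.Tensor:
    # Convert logits to the model's distribution, zero out banned tokens,
    # and renormalize over the allowed set. This renormalized distribution
    # is the KL-divergence minimizer among distributions supported on the
    # allowed tokens.
    probs = torch.softmax(logits, dim=-1)
    probs[banned] = 0.0
    return probs / probs.sum()

logits = torch.randn(32000)  # assumed vocabulary size
q = constrained_distribution(logits, banned=[0, 7, 42])
```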
2503.18052 | Yue Li | Yue Li, Qi Ma, Runyi Yang, Huapeng Li, Mengjiao Ma, Bin Ren, Nikola
Popovic, Nicu Sebe, Ender Konukoglu, Theo Gevers, Luc Van Gool, Martin R.
Oswald, Danda Pani Paudel | SceneSplat: Gaussian Splatting-based Scene Understanding with
Vision-Language Pretraining | Our code, model, and dataset will be released at
https://github.com/unique1i/SceneSplat | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Recognizing arbitrary or previously unseen categories is essential for
comprehensive real-world 3D scene understanding. Currently, all existing
methods rely on 2D or textual modalities during training, or together at
inference. This highlights a clear absence of a model capable of processing 3D
data alone for learning semantics end-to-end, along with the necessary data to
train such a model. Meanwhile, 3D Gaussian Splatting (3DGS) has emerged as the
de facto standard for 3D scene representation across various vision tasks.
However, effectively integrating semantic reasoning into 3DGS in a
generalizable fashion remains an open challenge. To address these limitations,
we introduce SceneSplat, to our knowledge the first large-scale 3D indoor scene
understanding approach that operates natively on 3DGS. Furthermore, we propose
a self-supervised learning scheme that unlocks rich 3D feature learning from
unlabeled scenes. In order to power the proposed methods, we introduce
SceneSplat-7K, the first large-scale 3DGS dataset for indoor scenes, comprising
6868 scenes derived from 7 established datasets such as ScanNet and Matterport3D.
Generating SceneSplat-7K required computational resources equivalent to
119 GPU-days on an L4 GPU, enabling standardized benchmarking for 3DGS-based
reasoning for indoor scenes. Our exhaustive experiments on SceneSplat-7K
demonstrate the significant benefit of the proposed methods over the
established baselines.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 12:50:25 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Yue",
""
],
[
"Ma",
"Qi",
""
],
[
"Yang",
"Runyi",
""
],
[
"Li",
"Huapeng",
""
],
[
"Ma",
"Mengjiao",
""
],
[
"Ren",
"Bin",
""
],
[
"Popovic",
"Nikola",
""
],
[
"Sebe",
"Nicu",
""
],
[
"Konukoglu",
"Ender",
""
],
[
"Gevers",
"Theo",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Oswald",
"Martin R.",
""
],
[
"Paudel",
"Danda Pani",
""
]
] | TITLE: SceneSplat: Gaussian Splatting-based Scene Understanding with
Vision-Language Pretraining
ABSTRACT: Recognizing arbitrary or previously unseen categories is essential for
comprehensive real-world 3D scene understanding. Currently, all existing
methods rely on 2D or textual modalities during training, or on both together at
inference. This highlights a clear absence of a model capable of processing 3D
data alone for learning semantics end-to-end, along with the necessary data to
train such a model. Meanwhile, 3D Gaussian Splatting (3DGS) has emerged as the
de facto standard for 3D scene representation across various vision tasks.
However, effectively integrating semantic reasoning into 3DGS in a
generalizable fashion remains an open challenge. To address these limitations,
we introduce SceneSplat, to our knowledge the first large-scale 3D indoor scene
understanding approach that operates natively on 3DGS. Furthermore, we propose
a self-supervised learning scheme that unlocks rich 3D feature learning from
unlabeled scenes. In order to power the proposed methods, we introduce
SceneSplat-7K, the first large-scale 3DGS dataset for indoor scenes, comprising
6868 scenes derived from 7 established datasets such as ScanNet and Matterport3D.
Generating SceneSplat-7K required computational resources equivalent to
119 GPU-days on an L4 GPU, enabling standardized benchmarking for 3DGS-based
reasoning for indoor scenes. Our exhaustive experiments on SceneSplat-7K
demonstrate the significant benefit of the proposed methods over the
established baselines.
|
2503.18055 | Mingde Yao Yao | Mingde Yao, Menglu Wang, King-Man Tam, Lingen Li, Tianfan Xue, Jinwei
Gu | PolarFree: Polarization-based Reflection-free Imaging | Accepted to CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reflection removal is challenging due to complex light interactions, where
reflections obscure important details and hinder scene understanding.
Polarization naturally provides a powerful cue to distinguish between reflected
and transmitted light, enabling more accurate reflection removal. However,
existing methods often rely on small-scale or synthetic datasets, which fail to
capture the diversity and complexity of real-world scenarios. To this end, we
construct a large-scale dataset, PolaRGB, for Polarization-based reflection
removal of RGB images, which enables us to train models that generalize
effectively across a wide range of real-world scenarios. The PolaRGB dataset
contains 6,500 well-aligned mixed-transmission image pairs, 8x larger than
existing polarization datasets, and is the first to include both RGB and
polarization images captured across diverse indoor and outdoor environments
with varying lighting conditions. Furthermore, to fully exploit the potential of
polarization cues for reflection removal, we introduce PolarFree, which
leverages a diffusion process to generate reflection-free cues for accurate
reflection removal. Extensive experiments show that PolarFree significantly
enhances image clarity in challenging reflective scenarios, setting a new
benchmark for polarized imaging and reflection removal. Code and dataset are
available at https://github.com/mdyao/PolarFree.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 12:53:58 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yao",
"Mingde",
""
],
[
"Wang",
"Menglu",
""
],
[
"Tam",
"King-Man",
""
],
[
"Li",
"Lingen",
""
],
[
"Xue",
"Tianfan",
""
],
[
"Gu",
"Jinwei",
""
]
] | TITLE: PolarFree: Polarization-based Reflection-free Imaging
ABSTRACT: Reflection removal is challenging due to complex light interactions, where
reflections obscure important details and hinder scene understanding.
Polarization naturally provides a powerful cue to distinguish between reflected
and transmitted light, enabling more accurate reflection removal. However,
existing methods often rely on small-scale or synthetic datasets, which fail to
capture the diversity and complexity of real-world scenarios. To this end, we
construct a large-scale dataset, PolaRGB, for Polarization-based reflection
removal of RGB images, which enables us to train models that generalize
effectively across a wide range of real-world scenarios. The PolaRGB dataset
contains 6,500 well-aligned mixed-transmission image pairs, 8x larger than
existing polarization datasets, and is the first to include both RGB and
polarization images captured across diverse indoor and outdoor environments
with varying lighting conditions. Furthermore, to fully exploit the potential of
polarization cues for reflection removal, we introduce PolarFree, which
leverages a diffusion process to generate reflection-free cues for accurate
reflection removal. Extensive experiments show that PolarFree significantly
enhances image clarity in challenging reflective scenarios, setting a new
benchmark for polarized imaging and reflection removal. Code and dataset are
available at https://github.com/mdyao/PolarFree.
|
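The polarization cue this abstract relies on is textbook polarimetry: from intensities at four polarizer angles, the Stokes parameters yield the degree and angle of linear polarization, and specular reflections tend to be far more polarized than transmitted light. A sketch of the cue computation only, not PolarFree's network:

```python
import numpy as np

def polarization_cues(i0, i45, i90, i135):
    # Linear Stokes parameters from four polarizer-angle intensity images.
    s0 = (i0 + i45 + i90 + i135) / 2.0          # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    # Degree and angle of linear polarization.
    dolp = np.sqrt(s1**2 + s2**2) / np.clip(s0, 1e-6, None)
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp

imgs = [np.random.rand(480, 640) for _ in range(4)]  # placeholder captures
dolp, aolp = polarization_cues(*imgs)
```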
2503.18062 | Hai-Long Trieu | Anh Duc Nguyen, Hieu Minh Phi, Anh Viet Ngo, Long Hai Trieu, Thai
Phuong Nguyen | Investigating Recent Large Language Models for Vietnamese Machine
Reading Comprehension | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have shown remarkable proficiency in Machine
Reading Comprehension (MRC) tasks; however, their effectiveness for
low-resource languages like Vietnamese remains largely unexplored. In this
paper, we fine-tune and evaluate two state-of-the-art LLMs: Llama 3 (8B
parameters) and Gemma (7B parameters), on ViMMRC, a Vietnamese MRC dataset. By
utilizing Quantized Low-Rank Adaptation (QLoRA), we efficiently fine-tune these
models and compare their performance against powerful LLM-based baselines.
Although our fine-tuned models are smaller than GPT-3 and GPT-3.5, they
outperform both traditional BERT-based approaches and these larger models. This
demonstrates the effectiveness of our fine-tuning process, showcasing how
modern LLMs can surpass the capabilities of older models like BERT while still
being suitable for deployment in resource-constrained environments. Through
intensive analyses, we explore various aspects of model performance, providing
valuable insights into adapting LLMs for low-resource languages like
Vietnamese. Our study contributes to the advancement of natural language
processing in low-resource languages, and we make our fine-tuned models
publicly available at: https://huggingface.co/iaiuet.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 13:08:11 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Nguyen",
"Anh Duc",
""
],
[
"Phi",
"Hieu Minh",
""
],
[
"Ngo",
"Anh Viet",
""
],
[
"Trieu",
"Long Hai",
""
],
[
"Nguyen",
"Thai Phuong",
""
]
] | TITLE: Investigating Recent Large Language Models for Vietnamese Machine
Reading Comprehension
ABSTRACT: Large Language Models (LLMs) have shown remarkable proficiency in Machine
Reading Comprehension (MRC) tasks; however, their effectiveness for
low-resource languages like Vietnamese remains largely unexplored. In this
paper, we fine-tune and evaluate two state-of-the-art LLMs: Llama 3 (8B
parameters) and Gemma (7B parameters), on ViMMRC, a Vietnamese MRC dataset. By
utilizing Quantized Low-Rank Adaptation (QLoRA), we efficiently fine-tune these
models and compare their performance against powerful LLM-based baselines.
Although our fine-tuned models are smaller than GPT-3 and GPT-3.5, they
outperform both traditional BERT-based approaches and these larger models. This
demonstrates the effectiveness of our fine-tuning process, showcasing how
modern LLMs can surpass the capabilities of older models like BERT while still
being suitable for deployment in resource-constrained environments. Through
intensive analyses, we explore various aspects of model performance, providing
valuable insights into adapting LLMs for low-resource languages like
Vietnamese. Our study contributes to the advancement of natural language
processing in low-resource languages, and we make our fine-tuned models
publicly available at: https://huggingface.co/iaiuet.
|
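A QLoRA setup of the kind described above takes a few lines with transformers and peft: a 4-bit quantized base model plus low-rank adapters. The model id, rank, and target modules below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", quantization_config=bnb_config
)

# Low-rank adapters on attention projections; r and alpha are assumptions.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trained
```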
2503.18063 | Peiyi Zhang | Pieyi Zhang, Richong Zhang, Zhijie Nie | Dynamic Task Vector Grouping for Efficient Multi-Task Prompt Tuning | Work in progress | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-task prompt tuning utilizes multiple high-resource source tasks to
improve performance on low-resource target tasks. Existing approaches transfer
the soft prompt trained by combining all source tasks or a single
``high-similar'' source task one-time-only. However, we find that the optimal
transfer performance often comes from a combination of source tasks, which is
neither one nor all. Further, we find that the similarity between source and
target tasks also changes dynamically during fine-tuning after transferring,
making similarity calculation in the initiation stage inadequate. To address
these issues, we propose a method called Dynamic Task Vector Grouping (DTVG),
whose core ideas include (1) measuring task similarity with task vectors
instead of soft prompts, (2) grouping the optimal source task combination based
on two metrics: {\it target similarity} and {\it knowledge consistency}, and (3)
dynamically updating the combination at each iteration step. Extensive
experiments on the 26 NLP datasets under different settings demonstrate that
DTVG effectively groups similar source tasks while reducing negative transfer,
achieving state-of-the-art performance.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 13:09:04 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Pieyi",
""
],
[
"Zhang",
"Richong",
""
],
[
"Nie",
"Zhijie",
""
]
] | TITLE: Dynamic Task Vector Grouping for Efficient Multi-Task Prompt Tuning
ABSTRACT: Multi-task prompt tuning utilizes multiple high-resource source tasks to
improve performance on low-resource target tasks. Existing approaches transfer
the soft prompt trained by combining all source tasks or a single
``high-similar'' source task one-time-only. However, we find that the optimal
transfer performance often comes from a combination of source tasks, which is
neither one nor all. Further, we find that the similarity between source and
target tasks also changes dynamically during fine-tuning after transferring,
making similarity calculation in the initiation stage inadequate. To address
these issues, we propose a method called Dynamic Task Vector Grouping (DTVG),
whose core ideas include (1) measuring task similarity with task vectors
instead of soft prompts, (2) grouping the optimal source task combination based
on two metrics: {\it target similarity} and {\it knowledge consistency}, and (3)
dynamically updating the combination at each iteration step. Extensive
experiments on the 26 NLP datasets under different settings demonstrate that
DTVG effectively groups similar source tasks while reducing negative transfer,
achieving state-of-the-art performance.
|
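Measuring task similarity with task vectors rather than soft prompts can be sketched directly: a task vector is the difference between fine-tuned and base parameters (in the sense of Ilharco et al.'s task arithmetic), and target similarity reduces to a cosine between flattened vectors. DTVG's grouping and knowledge-consistency metric are not reproduced here.

```python
import torch

def task_vector(finetuned: dict, base: dict) -> torch.Tensor:
    # Task vector = fine-tuned weights minus base weights, flattened.
    return torch.cat([(finetuned[k] - base[k]).flatten() for k in base])

def target_similarity(tv_source: torch.Tensor,
                      tv_target: torch.Tensor) -> float:
    return torch.nn.functional.cosine_similarity(
        tv_source, tv_target, dim=0
    ).item()

base = {"w": torch.randn(64, 64)}
src = {"w": base["w"] + 0.01 * torch.randn(64, 64)}
tgt = {"w": base["w"] + 0.01 * torch.randn(64, 64)}
sim = target_similarity(task_vector(src, base), task_vector(tgt, base))
```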
2503.18064 | Xiaoming Qi | Xiaoming Qi and Jingyang Zhang and Huazhu Fu and Guanyu Yang and Shuo
Li and Yueming Jin | Dynamic Allocation Hypernetwork with Adaptive Model Recalibration for
FCL | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Federated continual learning (FCL) offers an emerging pattern to facilitate
the applicability of federated learning (FL) in real-world scenarios, where
tasks evolve dynamically and asynchronously across clients, especially in
medical scenarios. Existing server-side FCL methods in the natural domain construct a
continually learnable server model by client aggregation on all-involved tasks.
However, they are challenged by: (1) Catastrophic forgetting for previously
learned tasks, leading to error accumulation in the server model, making it
difficult to sustain comprehensive knowledge across all tasks. (2) Biased
optimization due to asynchronous tasks handled across different clients,
leading to the collision of optimization targets of different clients at the
same time steps. In this work, we take the first step to propose a novel
server-side FCL pattern in the medical domain, Dynamic Allocation Hypernetwork with
adaptive model recalibration (\textbf{FedDAH}), which facilitates
collaborative learning under distinct and dynamic task streams across
clients. To alleviate catastrophic forgetting, we propose a dynamic
allocation hypernetwork (DAHyper) where a continually updated hypernetwork is
designed to manage the mapping between task identities and their associated
model parameters, enabling the dynamic allocation of the model across clients.
For the biased optimization, we introduce a novel adaptive model recalibration
(AMR) to incorporate the candidate changes of historical models into current
server updates, and assign weights to identical tasks across different time
steps based on the similarity for continual optimization. Extensive experiments
on the AMOS dataset demonstrate the superiority of our FedDAH to other FCL
methods on sites with different task streams. The code is
available at https://github.com/jinlab-imvr/FedDAH.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 13:12:56 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Qi",
"Xiaoming",
""
],
[
"Zhang",
"Jingyang",
""
],
[
"Fu",
"Huazhu",
""
],
[
"Yang",
"Guanyu",
""
],
[
"Li",
"Shuo",
""
],
[
"Jin",
"Yueming",
""
]
] | TITLE: Dynamic Allocation Hypernetwork with Adaptive Model Recalibration for
FCL
ABSTRACT: Federated continual learning (FCL) offers an emerging pattern to facilitate
the applicability of federated learning (FL) in real-world scenarios, where
tasks evolve dynamically and asynchronously across clients, especially in
medical scenarios. Existing server-side FCL methods in the natural domain construct a
continually learnable server model by client aggregation on all-involved tasks.
However, they are challenged by: (1) Catastrophic forgetting for previously
learned tasks, leading to error accumulation in the server model, making it
difficult to sustain comprehensive knowledge across all tasks. (2) Biased
optimization due to asynchronous tasks handled across different clients,
leading to the collision of optimization targets of different clients at the
same time steps. In this work, we take the first step to propose a novel
server-side FCL pattern in the medical domain, Dynamic Allocation Hypernetwork with
adaptive model recalibration (\textbf{FedDAH}), which facilitates
collaborative learning under distinct and dynamic task streams across
clients. To alleviate catastrophic forgetting, we propose a dynamic
allocation hypernetwork (DAHyper) where a continually updated hypernetwork is
designed to manage the mapping between task identities and their associated
model parameters, enabling the dynamic allocation of the model across clients.
For the biased optimization, we introduce a novel adaptive model recalibration
(AMR) to incorporate the candidate changes of historical models into current
server updates, and assign weights to identical tasks across different time
steps based on the similarity for continual optimization. Extensive experiments
on the AMOS dataset demonstrate the superiority of our FedDAH to other FCL
methods on sites with different task streams. The code is
available at https://github.com/jinlab-imvr/FedDAH.
|
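A generic hypernetwork that maps a task identity to model parameters, in the spirit of DAHyper, looks as follows; the architecture and sizes are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class TaskHypernetwork(nn.Module):
    """Map a task identity embedding to a flat parameter vector that can be
    reshaped into a client model, enabling per-task parameter allocation."""

    def __init__(self, num_tasks: int, embed_dim: int = 32,
                 target_params: int = 1024):
        super().__init__()
        self.task_embedding = nn.Embedding(num_tasks, embed_dim)
        self.generator = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(),
            nn.Linear(128, target_params),
        )

    def forward(self, task_id: torch.Tensor) -> torch.Tensor:
        return self.generator(self.task_embedding(task_id))

hyper = TaskHypernetwork(num_tasks=5)
params_for_task_3 = hyper(torch.tensor([3]))  # allocate parameters per task
```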
2503.18065 | Ziming Wei | Ziming Wei, Bingqian Lin, Yunshuang Nie, Jiaqi Chen, Shikui Ma, Hang
Xu, Xiaodan Liang | Unseen from Seen: Rewriting Observation-Instruction Using Foundation
Models for Augmenting Vision-Language Navigation | null | null | null | null | cs.CV cs.AI cs.CL cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data scarcity is a long-standing challenge in the Vision-Language Navigation
(VLN) field, which severely hinders the generalization of agents to unseen
environments. Previous works primarily rely on additional simulator data or
web-collected images/videos to improve the generalization. However, the
simulator environments still face limited diversity, and the web-collected data
often requires extensive labor to remove the noise. In this paper, we propose a
Rewriting-driven AugMentation (RAM) paradigm for VLN, which directly creates
the unseen observation-instruction pairs via rewriting human-annotated training
data. Benefiting from our rewriting mechanism, new observation-instruction pairs can
be obtained in both simulator-free and labor-saving manners to promote
generalization. Specifically, we first introduce Object-Enriched Observation
Rewriting, where we combine Vision-Language Models (VLMs) and Large Language
Models (LLMs) to derive rewritten object-enriched scene descriptions, enabling
observation synthesis with diverse objects and spatial layouts via
Text-to-Image Generation Models (T2IMs). Then, we propose Observation-Contrast
Instruction Rewriting, which generates observation-aligned rewritten
instructions by requiring LLMs to reason the difference between original and
new observations. We further develop a mixing-then-focusing training strategy
with a random observation cropping scheme, effectively enhancing data
distribution diversity while suppressing augmentation data noise during
training. Experiments on both the discrete environments (R2R, REVERIE, and R4R
datasets) and continuous environments (R2R-CE dataset) show the superior
performance and impressive generalization ability of our method. Code is
available at https://github.com/SaDil13/VLN-RAM.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 13:18:17 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wei",
"Ziming",
""
],
[
"Lin",
"Bingqian",
""
],
[
"Nie",
"Yunshuang",
""
],
[
"Chen",
"Jiaqi",
""
],
[
"Ma",
"Shikui",
""
],
[
"Xu",
"Hang",
""
],
[
"Liang",
"Xiaodan",
""
]
] | TITLE: Unseen from Seen: Rewriting Observation-Instruction Using Foundation
Models for Augmenting Vision-Language Navigation
ABSTRACT: Data scarcity is a long-standing challenge in the Vision-Language Navigation
(VLN) field, which severely hinders the generalization of agents to unseen
environments. Previous works primarily rely on additional simulator data or
web-collected images/videos to improve the generalization. However, the
simulator environments still face limited diversity, and the web-collected data
often requires extensive labor to remove the noise. In this paper, we propose a
Rewriting-driven AugMentation (RAM) paradigm for VLN, which directly creates
the unseen observation-instruction pairs via rewriting human-annotated training
data. Benefiting from our rewriting mechanism, new observation-instruction pairs can
be obtained in both simulator-free and labor-saving manners to promote
generalization. Specifically, we first introduce Object-Enriched Observation
Rewriting, where we combine Vision-Language Models (VLMs) and Large Language
Models (LLMs) to derive rewritten object-enriched scene descriptions, enabling
observation synthesis with diverse objects and spatial layouts via
Text-to-Image Generation Models (T2IMs). Then, we propose Observation-Contrast
Instruction Rewriting, which generates observation-aligned rewritten
instructions by requiring LLMs to reason the difference between original and
new observations. We further develop a mixing-then-focusing training strategy
with a random observation cropping scheme, effectively enhancing data
distribution diversity while suppressing augmentation data noise during
training. Experiments on both the discrete environments (R2R, REVERIE, and R4R
datasets) and continuous environments (R2R-CE dataset) show the superior
performance and impressive generalization ability of our method. Code is
available at https://github.com/SaDil13/VLN-RAM.
|
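The two rewriting stages compose into a simple augmentation loop. In the sketch below, vlm_describe, llm_rewrite_scene, t2i_generate, and llm_contrast_instruction are hypothetical placeholders standing in for the VLM, LLM, and text-to-image calls; none of them are APIs from the paper's code.

```python
def augment(observation, instruction, vlm_describe, llm_rewrite_scene,
            t2i_generate, llm_contrast_instruction):
    # Object-Enriched Observation Rewriting: describe the scene, rewrite the
    # description with new objects and spatial layouts, and synthesize the
    # unseen observation with a text-to-image model.
    description = vlm_describe(observation)
    enriched = llm_rewrite_scene(description)
    new_observation = t2i_generate(enriched)
    # Observation-Contrast Instruction Rewriting: have the LLM reason about
    # the difference between the original and rewritten scenes to produce an
    # observation-aligned instruction.
    new_instruction = llm_contrast_instruction(
        original=description, rewritten=enriched, instruction=instruction
    )
    return new_observation, new_instruction
```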
2503.18069 | Fei Huang | Si Shen, Fei Huang, Zhixiao Zhao, Chang Liu, Tiansheng Zheng, Danhao
Zhu | Long Is More Important Than Difficult for Training Reasoning Models | 15 pages, 6 figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Difficult problems, which often result in long reasoning traces, are widely
recognized as key factors for enhancing the performance of reasoning models.
However, such high-challenge problems are scarce, limiting the size of
available datasets. In this paper, we propose a simple method to decouple the
reliance on problem difficulty. First, we empirically demonstrate that
reasoning length, rather than problem difficulty, primarily influences the
performance of trained models. Second, we identify a scaling law on reasoning
length, showing that model performance increases in a log-linear fashion as the
reasoning data length grows. Finally, we introduce a straightforward technique
to generate reasoning data of arbitrary length, and show that synthesized data
is effective for training reasoning models. After fine-tuning the
Qwen2.5-32B-Instruct language model on our Long1K dataset, we present our
model, Long1K-32B, which achieves remarkable performance with only 1,000
training samples, achieving 95.6\% accuracy on MATH and 71.1\% on GPQA,
outperforming DeepSeek-R1-Distill-Qwen-32B. The model, code, and dataset are
all open-sourced, available at https://huggingface.co/ZTss/LONG1.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 13:33:59 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Shen",
"Si",
""
],
[
"Huang",
"Fei",
""
],
[
"Zhao",
"Zhixiao",
""
],
[
"Liu",
"Chang",
""
],
[
"Zheng",
"Tiansheng",
""
],
[
"Zhu",
"Danhao",
""
]
] | TITLE: Long Is More Important Than Difficult for Training Reasoning Models
ABSTRACT: Difficult problems, which often result in long reasoning traces, are widely
recognized as key factors for enhancing the performance of reasoning models.
However, such high-challenge problems are scarce, limiting the size of
available datasets. In this paper, we propose a simple method to decouple the
reliance on problem difficulty. First, we empirically demonstrate that
reasoning length, rather than problem difficulty, primarily influences the
performance of trained models. Second, we identify a scaling law on reasoning
length, showing that model performance increases in a log-linear fashion as the
reasoning data length grows. Finally, we introduce a straightforward technique
to generate reasoning data of arbitrary length, and show that synthesized data
is effective for training reasoning models. After fine-tuning the
Qwen2.5-32B-Instruct language model on our Long1K dataset, we present our
model, Long1K-32B, which achieves remarkable performance with only 1,000
training samples, achieving 95.6\% accuracy on MATH and 71.1\% on GPQA,
outperforming DeepSeek-R1-Distill-Qwen-32B. The model, code, and dataset are
all open-sourced, available at https://huggingface.co/ZTss/LONG1.
|
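The log-linear scaling claim can be checked with an ordinary least-squares fit of accuracy against log reasoning length. The numbers below are placeholder inputs for the fit, not results from the paper.

```python
import numpy as np

# Fit accuracy = a + b * log(length) and inspect the slope b.
lengths = np.array([512, 1024, 2048, 4096, 8192])   # placeholder lengths
accuracy = np.array([0.52, 0.61, 0.68, 0.77, 0.84])  # placeholder scores

b, a = np.polyfit(np.log(lengths), accuracy, deg=1)  # slope, intercept
print(f"accuracy ~ {a:.3f} + {b:.3f} * log(length)")
```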
2503.18073 | Yuxuan Xie | Yuxuan Xie, Xuan Yu, Changjian Jiang, Sitong Mao, Shunbo Zhou, Rui
Fan, Rong Xiong, Yue Wang | PanopticSplatting: End-to-End Panoptic Gaussian Splatting | 8 pages, 6 figures | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open-vocabulary panoptic reconstruction is a challenging task for
simultaneous scene reconstruction and understanding. Recently, methods have
been proposed for 3D scene understanding based on Gaussian splatting. However,
these methods are multi-staged, suffering from accumulated errors and
dependence on hand-designed components. To streamline the pipeline and achieve
global optimization, we propose PanopticSplatting, an end-to-end system for
open-vocabulary panoptic reconstruction. Our method introduces query-guided
Gaussian segmentation with local cross attention, lifting 2D instance masks
without cross-frame association in an end-to-end way. The local cross attention
within view frustum effectively reduces the training memory, making our model
more accessible to large scenes with more Gaussians and objects. In addition,
to address the challenge of noisy labels in 2D pseudo masks, we propose label
blending to promote consistent 3D segmentation with less noisy floaters, as
well as label warping on 2D predictions which enhances multi-view coherence and
segmentation accuracy. Our method demonstrates strong performance in 3D scene
panoptic reconstruction on the ScanNet-V2 and ScanNet++ datasets, compared with
both NeRF-based and Gaussian-based panoptic reconstruction methods. Moreover,
PanopticSplatting can be easily generalized to numerous variants of Gaussian
splatting, and we demonstrate its robustness on different Gaussian base models.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 13:45:39 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xie",
"Yuxuan",
""
],
[
"Yu",
"Xuan",
""
],
[
"Jiang",
"Changjian",
""
],
[
"Mao",
"Sitong",
""
],
[
"Zhou",
"Shunbo",
""
],
[
"Fan",
"Rui",
""
],
[
"Xiong",
"Rong",
""
],
[
"Wang",
"Yue",
""
]
] | TITLE: PanopticSplatting: End-to-End Panoptic Gaussian Splatting
ABSTRACT: Open-vocabulary panoptic reconstruction is a challenging task for
simultaneous scene reconstruction and understanding. Recently, methods have
been proposed for 3D scene understanding based on Gaussian splatting. However,
these methods are multi-staged, suffering from the accumulated errors and the
dependence of hand-designed components. To streamline the pipeline and achieve
global optimization, we propose PanopticSplatting, an end-to-end system for
open-vocabulary panoptic reconstruction. Our method introduces query-guided
Gaussian segmentation with local cross attention, lifting 2D instance masks
without cross-frame association in an end-to-end way. The local cross attention
within view frustum effectively reduces the training memory, making our model
more accessible to large scenes with more Gaussians and objects. In addition,
to address the challenge of noisy labels in 2D pseudo masks, we propose label
blending to promote consistent 3D segmentation with less noisy floaters, as
well as label warping on 2D predictions which enhances multi-view coherence and
segmentation accuracy. Our method demonstrates strong performance in 3D scene
panoptic reconstruction on the ScanNet-V2 and ScanNet++ datasets, compared with
both NeRF-based and Gaussian-based panoptic reconstruction methods. Moreover,
PanopticSplatting can be easily generalized to numerous variants of Gaussian
splatting, and we demonstrate its robustness on different Gaussian base models.
|
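Query-guided cross attention restricted to a view frustum reduces, mechanically, to masked attention: each instance query attends only to Gaussians flagged as inside its frustum, which is what bounds the training memory. This is generic attention code, not PanopticSplatting's implementation.

```python
import torch

def local_cross_attention(queries, gaussian_feats, in_frustum):
    # queries: (Q, D), gaussian_feats: (N, D), in_frustum: (Q, N) bool mask.
    scores = queries @ gaussian_feats.t() / queries.shape[-1] ** 0.5
    # Gaussians outside a query's view frustum are masked out entirely.
    scores = scores.masked_fill(~in_frustum, float("-inf"))
    attn = torch.softmax(scores, dim=-1)
    return attn @ gaussian_feats

Q, N, D = 16, 5000, 64
out = local_cross_attention(
    torch.randn(Q, D), torch.randn(N, D), torch.rand(Q, N) > 0.5
)
```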
2503.18082 | Nachuan Ma | Nachuan Ma, Zhengfei Song, Qiang Hu, Chuang-Wei Liu, Yu Han, Yanting
Zhang, Rui Fan, and Lihua Xie | Vehicular Road Crack Detection with Deep Learning: A New Online
Benchmark for Comprehensive Evaluation of Existing Algorithms | null | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | In the emerging field of urban digital twins (UDTs), advancing intelligent
road inspection (IRI) vehicles with automatic road crack detection systems is
essential for maintaining civil infrastructure. Over the past decade, deep
learning-based road crack detection methods have been developed to detect
cracks more efficiently, accurately, and objectively, with the goal of
replacing manual visual inspection. Nonetheless, there is a lack of systematic
reviews on state-of-the-art (SoTA) deep learning techniques, especially
data-fusion and label-efficient algorithms for this task. This paper thoroughly
reviews the SoTA deep learning-based algorithms, including (1) supervised, (2)
unsupervised, (3) semi-supervised, and (4) weakly-supervised methods developed
for road crack detection. Also, we create a dataset called UDTIRI-Crack,
comprising $2,500$ high-quality images from seven public annotated sources, as
the first extensive online benchmark in this field. Comprehensive experiments
are conducted to compare the detection performance, computational efficiency,
and generalizability of public SoTA deep learning-based algorithms for road
crack detection. In addition, the feasibility of foundation models and large
language models (LLMs) for road crack detection is explored. Afterwards, the
existing challenges and future development trends of deep learning-based road
crack detection algorithms are discussed. We believe this review can serve as
practical guidance for developing intelligent road inspection vehicles with
next-generation road condition assessment systems. The released benchmark
UDTIRI-Crack is available at https://udtiri.com/submission/.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 14:26:18 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ma",
"Nachuan",
""
],
[
"Song",
"Zhengfei",
""
],
[
"Hu",
"Qiang",
""
],
[
"Liu",
"Chuang-Wei",
""
],
[
"Han",
"Yu",
""
],
[
"Zhang",
"Yanting",
""
],
[
"Fan",
"Rui",
""
],
[
"Xie",
"Lihua",
""
]
] | TITLE: Vehicular Road Crack Detection with Deep Learning: A New Online
Benchmark for Comprehensive Evaluation of Existing Algorithms
ABSTRACT: In the emerging field of urban digital twins (UDTs), advancing intelligent
road inspection (IRI) vehicles with automatic road crack detection systems is
essential for maintaining civil infrastructure. Over the past decade, deep
learning-based road crack detection methods have been developed to detect
cracks more efficiently, accurately, and objectively, with the goal of
replacing manual visual inspection. Nonetheless, there is a lack of systematic
reviews on state-of-the-art (SoTA) deep learning techniques, especially
data-fusion and label-efficient algorithms for this task. This paper thoroughly
reviews the SoTA deep learning-based algorithms, including (1) supervised, (2)
unsupervised, (3) semi-supervised, and (4) weakly-supervised methods developed
for road crack detection. Also, we create a dataset called UDTIRI-Crack,
comprising $2,500$ high-quality images from seven public annotated sources, as
the first extensive online benchmark in this field. Comprehensive experiments
are conducted to compare the detection performance, computational efficiency,
and generalizability of public SoTA deep learning-based algorithms for road
crack detection. In addition, the feasibility of foundation models and large
language models (LLMs) for road crack detection is explored. Afterwards, the
existing challenges and future development trends of deep learning-based road
crack detection algorithms are discussed. We believe this review can serve as
practical guidance for developing intelligent road inspection vehicles with
next-generation road condition assessment systems. The released benchmark
UDTIRI-Crack is available at https://udtiri.com/submission/.
|
2503.18083 | Tianxin Huang | Tianxin Huang, Gim Hee Lee | Unified Geometry and Color Compression Framework for Point Clouds via
Generative Diffusion Priors | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the growth of 3D applications and the rapid increase in sensor-collected
3D point cloud data, there is a rising demand for efficient compression
algorithms. Most existing learning-based compression methods handle geometry
and color attributes separately, treating them as distinct tasks, making these
methods challenging to apply directly to point clouds with colors. Moreover, the
limited capacity of training datasets also limits their generalizability
across point clouds with different distributions. In this work, we introduce a
test-time unified geometry and color compression framework of 3D point clouds.
Instead of training a compression model based on specific datasets, we adapt a
pre-trained generative diffusion model to compress original colored point
clouds into sparse sets, termed 'seeds', using prompt tuning. Decompression is
then achieved through multiple denoising steps with separate sampling
processes. Experiments on objects and indoor scenes demonstrate that our method
achieves superior performance compared to existing baselines for the compression of
geometry and color.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 14:27:48 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Huang",
"Tianxin",
""
],
[
"Lee",
"Gim Hee",
""
]
] | TITLE: Unified Geometry and Color Compression Framework for Point Clouds via
Generative Diffusion Priors
ABSTRACT: With the growth of 3D applications and the rapid increase in sensor-collected
3D point cloud data, there is a rising demand for efficient compression
algorithms. Most existing learning-based compression methods handle geometry
and color attributes separately, treating them as distinct tasks, making these
methods challenging to apply directly to point clouds with colors. Moreover, the
limited capacity of training datasets also limits their generalizability
across point clouds with different distributions. In this work, we introduce a
test-time unified geometry and color compression framework of 3D point clouds.
Instead of training a compression model based on specific datasets, we adapt a
pre-trained generative diffusion model to compress original colored point
clouds into sparse sets, termed 'seeds', using prompt tuning. Decompression is
then achieved through multiple denoising steps with separate sampling
processes. Experiments on objects and indoor scenes demonstrate that our method
achieves superior performance compared to existing baselines for the compression of
geometry and color.
|
2503.18087 | Massimiliano Ghiotto | Massimiliano Ghiotto | HyperNOs: Automated and Parallel Library for Neural Operators Research | 25 pages, 11 figures | null | null | null | cs.LG cs.NA math.NA | http://creativecommons.org/licenses/by/4.0/ | This paper introduces HyperNOs, a PyTorch library designed to streamline and
automate the process of exploring neural operators, with a special focus on
hyperparameter optimization for comprehensive and exhaustive exploration.
Indeed, HyperNOs takes advantage of state-of-the-art optimization algorithms
and parallel computing implemented in the Ray-tune library to efficiently
explore the hyperparameter space of neural operators. We also implement many
useful functionalities for studying neural operators with a user-friendly
interface, such as the possibility to train the model with a fixed number of
parameters or to train the model with multiple datasets and different
resolutions. We integrate Fourier neural operators and convolutional neural
operators in our library, achieving state-of-the-art results on many
representative benchmarks, demonstrating the capabilities of HyperNOs to handle
real datasets and modern architectures. The library is designed to be easy to
use with the provided model and datasets, but also to be easily extended to use
new datasets and custom neural operator architectures.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 14:39:58 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ghiotto",
"Massimiliano",
""
]
] | TITLE: HyperNOs: Automated and Parallel Library for Neural Operators Research
ABSTRACT: This paper introduces HyperNOs, a PyTorch library designed to streamline and
automate the process of exploring neural operators, with a special focus on
hyperparameter optimization for comprehensive and exhaustive exploration.
Indeed, HyperNOs takes advantage of state-of-the-art optimization algorithms
and parallel computing implemented in the Ray-tune library to efficiently
explore the hyperparameter space of neural operators. We also implement many
useful functionalities for studying neural operators with a user-friendly
interface, such as the possibility to train the model with a fixed number of
parameters or to train the model with multiple datasets and different
resolutions. We integrate Fourier neural operators and convolutional neural
operators in our library, achieving state-of-the-art results on many
representative benchmarks, demonstrating the capabilities of HyperNOs to handle
real datasets and modern architectures. The library is designed to be easy to
use with the provided model and datasets, but also to be easily extended to use
new datasets and custom neural operator architectures.
|
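The hyperparameter search described in the HyperNOs abstract can be illustrated with a minimal sketch of a Ray Tune search loop over neural-operator settings. Everything below (the trainable `train_operator`, the hyperparameters `width`, `modes`, `lr`, and the dummy objective) is an illustrative assumption using the classic `tune.run` API, not the HyperNOs library's own interface:

```python
# Hedged sketch of a Ray Tune hyperparameter search over neural-operator
# settings; all names and the dummy objective are placeholders.
from ray import tune

def train_operator(config):
    # In practice: build and train a Fourier neural operator with these
    # hyperparameters, then report a real validation loss.
    val_loss = abs(config["width"] - 64) * 1e-3 + config["lr"]  # dummy metric
    tune.report(val_loss=val_loss)

analysis = tune.run(
    train_operator,
    config={
        "width": tune.choice([32, 64, 128]),   # channel width
        "modes": tune.choice([8, 16, 32]),     # retained Fourier modes
        "lr": tune.loguniform(1e-4, 1e-2),     # learning rate
    },
    num_samples=20, metric="val_loss", mode="min",
)
print(analysis.best_config)
```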
2503.18094 | Fei Li | Fei Li, Wenxuan Liu, Jingjing Chen, Ruixu Zhang, Yuran Wang, Xian
Zhong, Zheng Wang | Anomize: Better Open Vocabulary Video Anomaly Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open Vocabulary Video Anomaly Detection (OVVAD) seeks to detect and classify
both base and novel anomalies. However, existing methods face two specific
challenges related to novel anomalies. The first challenge is detection
ambiguity, where the model struggles to assign accurate anomaly scores to
unfamiliar anomalies. The second challenge is categorization confusion, where
novel anomalies are often misclassified as visually similar base instances. To
address these challenges, we explore supplementary information from multiple
sources to mitigate detection ambiguity by leveraging multiple levels of visual
data alongside matching textual information. Furthermore, we propose
incorporating label relations to guide the encoding of new labels, thereby
improving alignment between novel videos and their corresponding labels, which
helps reduce categorization confusion. The resulting Anomize framework
effectively tackles these issues, achieving superior performance on UCF-Crime
and XD-Violence datasets, demonstrating its effectiveness in OVVAD.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 14:49:32 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Fei",
""
],
[
"Liu",
"Wenxuan",
""
],
[
"Chen",
"Jingjing",
""
],
[
"Zhang",
"Ruixu",
""
],
[
"Wang",
"Yuran",
""
],
[
"Zhong",
"Xian",
""
],
[
"Wang",
"Zheng",
""
]
] | TITLE: Anomize: Better Open Vocabulary Video Anomaly Detection
ABSTRACT: Open Vocabulary Video Anomaly Detection (OVVAD) seeks to detect and classify
both base and novel anomalies. However, existing methods face two specific
challenges related to novel anomalies. The first challenge is detection
ambiguity, where the model struggles to assign accurate anomaly scores to
unfamiliar anomalies. The second challenge is categorization confusion, where
novel anomalies are often misclassified as visually similar base instances. To
address these challenges, we explore supplementary information from multiple
sources to mitigate detection ambiguity by leveraging multiple levels of visual
data alongside matching textual information. Furthermore, we propose
incorporating label relations to guide the encoding of new labels, thereby
improving alignment between novel videos and their corresponding labels, which
helps reduce categorization confusion. The resulting Anomize framework
effectively tackles these issues, achieving superior performance on UCF-Crime
and XD-Violence datasets, demonstrating its effectiveness in OVVAD.
|
2503.18107 | Hongjia Zhai | Hongjia Zhai, Hai Li, Zhenzhe Li, Xiaokun Pan, Yijia He, Guofeng Zhang | PanoGS: Gaussian-based Panoptic Segmentation for 3D Open Vocabulary
Scene Understanding | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, 3D Gaussian Splatting (3DGS) has shown encouraging performance for
open vocabulary scene understanding tasks. However, previous methods cannot
distinguish 3D instance-level information and usually predict a heatmap
between the scene feature and the text query. In this paper, we propose PanoGS, a
novel and effective 3D panoptic open vocabulary scene understanding approach.
Technically, to learn accurate 3D language features that can scale to large
indoor scenarios, we adopt the pyramid tri-plane to model the latent continuous
parametric feature space and use a 3D feature decoder to regress the multi-view
fused 2D feature cloud. Besides, we propose language-guided graph cuts that
synergistically leverage reconstructed geometry and learned language cues to
group 3D Gaussian primitives into a set of super-primitives. To obtain
3D-consistent instances, we perform graph-clustering-based segmentation with
SAM-guided edge affinity computation between different super-primitives.
Extensive experiments on widely used datasets show better or more competitive
performance on 3D panoptic open vocabulary scene understanding. Project page:
\href{https://zju3dv.github.io/panogs}{https://zju3dv.github.io/panogs}.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 15:27:29 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhai",
"Hongjia",
""
],
[
"Li",
"Hai",
""
],
[
"Li",
"Zhenzhe",
""
],
[
"Pan",
"Xiaokun",
""
],
[
"He",
"Yijia",
""
],
[
"Zhang",
"Guofeng",
""
]
] | TITLE: PanoGS: Gaussian-based Panoptic Segmentation for 3D Open Vocabulary
Scene Understanding
ABSTRACT: Recently, 3D Gaussian Splatting (3DGS) has shown encouraging performance for
open vocabulary scene understanding tasks. However, previous methods cannot
distinguish 3D instance-level information and usually predict a heatmap
between the scene feature and the text query. In this paper, we propose PanoGS, a
novel and effective 3D panoptic open vocabulary scene understanding approach.
Technically, to learn accurate 3D language features that can scale to large
indoor scenarios, we adopt the pyramid tri-plane to model the latent continuous
parametric feature space and use a 3D feature decoder to regress the multi-view
fused 2D feature cloud. Besides, we propose language-guided graph cuts that
synergistically leverage reconstructed geometry and learned language cues to
group 3D Gaussian primitives into a set of super-primitives. To obtain
3D-consistent instances, we perform graph-clustering-based segmentation with
SAM-guided edge affinity computation between different super-primitives.
Extensive experiments on widely used datasets show better or more competitive
performance on 3D panoptic open vocabulary scene understanding. Project page:
\href{https://zju3dv.github.io/panogs}{https://zju3dv.github.io/panogs}.
|
2503.18117 | Muhidin Mohamed | Muhidin A. Mohamed, Shuab D. Ahmed, Yahye A. Isse, Hanad M. Mohamed,
Fuad M. Hassan, Houssein A. Assowe | Detection of Somali-written Fake News and Toxic Messages on the Social
Media Using Transformer-based Language Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The fact that everyone with a social media account can create and share
content, and the increasing public reliance on social media platforms as a news
and information source bring about significant challenges such as
misinformation, fake news, harmful content, etc. Although human content
moderation may be useful to an extent and used by these platforms to flag
posted materials, the use of AI models provides a more sustainable, scalable,
and effective way to mitigate these harmful contents. However, low-resourced
languages such as the Somali language face limitations in AI automation,
including scarce annotated training datasets and lack of language models
tailored to their unique linguistic characteristics. This paper presents part
of our ongoing research work to bridge some of these gaps for the Somali
language. In particular, we created two human-annotated social-media-sourced
Somali datasets for two downstream applications, fake news \& toxicity
classification, and developed a transformer-based monolingual Somali language
model (named SomBERTa) -- the first of its kind to the best of our knowledge.
SomBERTa is then fine-tuned and evaluated on toxic content, fake news and news
topic classification datasets. Comparative evaluation analysis of the proposed
model against related multilingual models (e.g., AfriBERTa, AfroXLMR)
demonstrated that SomBERTa consistently outperformed these comparators in both
fake news and toxic content classification tasks while achieving the best
average accuracy (87.99%) across all tasks. This research contributes to Somali
NLP by offering a foundational language model and a replicable framework for
other low-resource languages, promoting digital and AI inclusivity and
linguistic diversity.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 15:45:31 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Mohamed",
"Muhidin A.",
""
],
[
"Ahmed",
"Shuab D.",
""
],
[
"Isse",
"Yahye A.",
""
],
[
"Mohamed",
"Hanad M.",
""
],
[
"Hassan",
"Fuad M.",
""
],
[
"Assowe",
"Houssein A.",
""
]
] | TITLE: Detection of Somali-written Fake News and Toxic Messages on the Social
Media Using Transformer-based Language Models
ABSTRACT: The fact that everyone with a social media account can create and share
content, and the increasing public reliance on social media platforms as a news
and information source bring about significant challenges such as
misinformation, fake news, harmful content, etc. Although human content
moderation may be useful to an extent and used by these platforms to flag
posted materials, the use of AI models provides a more sustainable, scalable,
and effective way to mitigate these harmful contents. However, low-resourced
languages such as the Somali language face limitations in AI automation,
including scarce annotated training datasets and lack of language models
tailored to their unique linguistic characteristics. This paper presents part
of our ongoing research work to bridge some of these gaps for the Somali
language. In particular, we created two human-annotated social-media-sourced
Somali datasets for two downstream applications, fake news \& toxicity
classification, and developed a transformer-based monolingual Somali language
model (named SomBERTa) -- the first of its kind to the best of our knowledge.
SomBERTa is then fine-tuned and evaluated on toxic content, fake news and news
topic classification datasets. Comparative evaluation analysis of the proposed
model against related multilingual models (e.g., AfriBERTa, AfroXLMR)
demonstrated that SomBERTa consistently outperformed these comparators in both
fake news and toxic content classification tasks while achieving the best
average accuracy (87.99%) across all tasks. This research contributes to Somali
NLP by offering a foundational language model and a replicable framework for
other low-resource languages, promoting digital and AI inclusivity and
linguistic diversity.
|
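The fine-tuning setup described for SomBERTa follows the standard sequence-classification recipe. A minimal sketch with the Hugging Face `transformers` Trainer is shown below; the base checkpoint, the toy texts, and the label scheme are placeholders, not the authors' released model or datasets:

```python
# Illustrative sketch of fine-tuning a BERT-style model for toxic-content
# classification, in the spirit of the SomBERTa experiments. The checkpoint
# and data here are placeholders.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

texts = ["tusaale qoraal ah", "another example"]   # placeholder texts
labels = [0, 1]                                     # 0 = non-toxic, 1 = toxic

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

ds = Dataset.from_dict({"text": texts, "label": labels})
ds = ds.map(lambda b: tok(b["text"], truncation=True, padding="max_length",
                          max_length=128), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=ds,
)
trainer.train()
```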
2503.18119 | Duanya Lyu | Duanya Lyu, Luyu Liu, Catherine Campbell, Yuxuan Zhang, Xiang Yan | Potentials and Limitations of Large-scale, Individual-level Mobile
Location Data for Food Acquisition Analysis | null | null | null | null | cs.CY cs.SI stat.CO | http://creativecommons.org/licenses/by/4.0/ | Understanding food acquisition is crucial for developing strategies to combat
food insecurity, a major public health concern. The emergence of large-scale
mobile location data (typically exemplified by GPS data), which captures
people's movement over time at high spatiotemporal resolutions, offer a new
approach to study this topic. This paper evaluates the potential and
limitations of large-scale GPS data for food acquisition analysis through a
case study. Using a high-resolution dataset of 286 million GPS records from
individuals in Jacksonville, Florida, we conduct a case study to assess the
strengths of GPS data in capturing spatiotemporal patterns of food outlet
visits while also discussing key limitations, such as potential data biases and
algorithmic uncertainties. Our findings confirm that GPS data can generate
valuable insights about food acquisition behavior but may significantly
underestimate visitation frequency to food outlets. Robustness checks highlight
how algorithmic choices-especially regarding food outlet classification and
visit identification-can influence research results. Our research underscores
the value of GPS data in place-based health studies while emphasizing the need
for careful consideration of data coverage, representativeness, algorithmic
choices, and the broader implications of study findings.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 15:52:36 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lyu",
"Duanya",
""
],
[
"Liu",
"Luyu",
""
],
[
"Campbell",
"Catherine",
""
],
[
"Zhang",
"Yuxuan",
""
],
[
"Yan",
"Xiang",
""
]
] | TITLE: Potentials and Limitations of Large-scale, Individual-level Mobile
Location Data for Food Acquisition Analysis
ABSTRACT: Understanding food acquisition is crucial for developing strategies to combat
food insecurity, a major public health concern. The emergence of large-scale
mobile location data (typically exemplified by GPS data), which captures
people's movement over time at high spatiotemporal resolutions, offers a new
approach to study this topic. This paper evaluates the potential and
limitations of large-scale GPS data for food acquisition analysis through a
case study. Using a high-resolution dataset of 286 million GPS records from
individuals in Jacksonville, Florida, we conduct a case study to assess the
strengths of GPS data in capturing spatiotemporal patterns of food outlet
visits while also discussing key limitations, such as potential data biases and
algorithmic uncertainties. Our findings confirm that GPS data can generate
valuable insights about food acquisition behavior but may significantly
underestimate visitation frequency to food outlets. Robustness checks highlight
how algorithmic choices -- especially regarding food outlet classification and
visit identification -- can influence research results. Our research underscores
the value of GPS data in place-based health studies while emphasizing the need
for careful consideration of data coverage, representativeness, algorithmic
choices, and the broader implications of study findings.
|
2503.18123 | Alexander Gielisse | Alexander Gielisse, Jan van Gemert | End-to-End Implicit Neural Representations for Classification | Accepted to CVPR 2025. 8 pages, supplementary material included | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Implicit neural representations (INRs) such as NeRF and SIREN encode a signal
in neural network parameters and show excellent results for signal
reconstruction. Using INRs for downstream tasks, such as classification, is
however not straightforward. Inherent symmetries in the parameters pose
challenges and current works primarily focus on designing architectures that
are equivariant to these symmetries. However, INR-based classification still
significantly underperforms pixel-based methods like CNNs. This
work presents an end-to-end strategy for initializing SIRENs together with a
learned learning-rate scheme, to yield representations that improve
classification accuracy. We show that a simple, straightforward, Transformer
model applied to a meta-learned SIREN, without incorporating explicit symmetry
equivariances, outperforms the current state-of-the-art. On the CIFAR-10 SIREN
classification task, we improve the state-of-the-art without augmentations from
38.8% to 59.6%, and from 63.4% to 64.7% with augmentations. We demonstrate
scalability on the high-resolution Imagenette dataset achieving reasonable
reconstruction quality with a classification accuracy of 60.8% and are the
first to do INR classification on the full ImageNet-1K dataset where we achieve
a SIREN classification performance of 23.6%. To the best of our knowledge, no
other SIREN classification approach has managed to set a classification
baseline for any high-resolution image dataset. Our code is available at
https://github.com/SanderGielisse/MWT
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 16:02:23 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Gielisse",
"Alexander",
""
],
[
"van Gemert",
"Jan",
""
]
] | TITLE: End-to-End Implicit Neural Representations for Classification
ABSTRACT: Implicit neural representations (INRs) such as NeRF and SIREN encode a signal
in neural network parameters and show excellent results for signal
reconstruction. Using INRs for downstream tasks, such as classification, is
however not straightforward. Inherent symmetries in the parameters pose
challenges and current works primarily focus on designing architectures that
are equivariant to these symmetries. However, INR-based classification still
significantly underperforms pixel-based methods like CNNs. This
work presents an end-to-end strategy for initializing SIRENs together with a
learned learning-rate scheme, to yield representations that improve
classification accuracy. We show that a simple, straightforward, Transformer
model applied to a meta-learned SIREN, without incorporating explicit symmetry
equivariances, outperforms the current state-of-the-art. On the CIFAR-10 SIREN
classification task, we improve the state-of-the-art without augmentations from
38.8% to 59.6%, and from 63.4% to 64.7% with augmentations. We demonstrate
scalability on the high-resolution Imagenette dataset achieving reasonable
reconstruction quality with a classification accuracy of 60.8% and are the
first to do INR classification on the full ImageNet-1K dataset where we achieve
a SIREN classification performance of 23.6%. To the best of our knowledge, no
other SIREN classification approach has managed to set a classification
baseline for any high-resolution image dataset. Our code is available at
https://github.com/SanderGielisse/MWT
|
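The SIRENs this abstract classifies are coordinate MLPs with sine activations. A minimal PyTorch sketch is shown below; the layer widths and the frequency factor omega0 = 30 are common defaults from the original SIREN paper, not this work's meta-learned configuration:

```python
# Minimal SIREN sketch: a coordinate network with sine activations and the
# standard SIREN initialization scheme.
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_f, out_f, omega0=30.0, first=False):
        super().__init__()
        self.omega0 = omega0
        self.linear = nn.Linear(in_f, out_f)
        with torch.no_grad():
            bound = 1 / in_f if first else math.sqrt(6 / in_f) / omega0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega0 * self.linear(x))

# Map 2D pixel coordinates to RGB values, as when fitting a single image.
siren = nn.Sequential(
    SineLayer(2, 256, first=True),
    SineLayer(256, 256),
    SineLayer(256, 256),
    nn.Linear(256, 3),
)
coords = torch.rand(1024, 2) * 2 - 1   # coordinates normalized to [-1, 1]
rgb = siren(coords)                    # predicted colors at those coordinates
```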
2503.18130 | Josef Dai | Juntao Dai, Taiye Chen, Yaodong Yang, Qian Zheng, Gang Pan | Mitigating Reward Over-Optimization in RLHF via Behavior-Supported
Regularization | Published as a conference paper at ICLR 2025 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning from human feedback (RLHF) is an effective method for
aligning large language models (LLMs) with human values. However, reward
over-optimization remains an open challenge leading to discrepancies between
the performance of LLMs under the reward model and the true human objectives. A
primary contributor to reward over-optimization is the extrapolation error that
arises when the reward model evaluates out-of-distribution (OOD) responses.
However, current methods still fail to prevent the increasing frequency of OOD
response generation during the reinforcement learning (RL) process and are not
effective at handling extrapolation errors from OOD responses. In this work, we
propose the Behavior-Supported Policy Optimization (BSPO) method to mitigate
the reward over-optimization issue. Specifically, we define behavior policy as
the next token distribution of the reward training dataset to model the
in-distribution (ID) region of the reward model. Building on this, we introduce
the behavior-supported Bellman operator to regularize the value function,
penalizing all OOD values without impacting the ID ones. Consequently, BSPO
reduces the generation of OOD responses during the RL process, thereby avoiding
overestimation caused by the reward model's extrapolation errors.
Theoretically, we prove that BSPO guarantees a monotonic improvement of the
supported policy until convergence to the optimal behavior-supported policy.
Empirical results from extensive experiments show that BSPO outperforms
baselines in preventing reward over-optimization due to OOD evaluation and
finding the optimal ID policy.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 16:20:59 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Dai",
"Juntao",
""
],
[
"Chen",
"Taiye",
""
],
[
"Yang",
"Yaodong",
""
],
[
"Zheng",
"Qian",
""
],
[
"Pan",
"Gang",
""
]
] | TITLE: Mitigating Reward Over-Optimization in RLHF via Behavior-Supported
Regularization
ABSTRACT: Reinforcement learning from human feedback (RLHF) is an effective method for
aligning large language models (LLMs) with human values. However, reward
over-optimization remains an open challenge leading to discrepancies between
the performance of LLMs under the reward model and the true human objectives. A
primary contributor to reward over-optimization is the extrapolation error that
arises when the reward model evaluates out-of-distribution (OOD) responses.
However, current methods still fail to prevent the increasing frequency of OOD
response generation during the reinforcement learning (RL) process and are not
effective at handling extrapolation errors from OOD responses. In this work, we
propose the Behavior-Supported Policy Optimization (BSPO) method to mitigate
the reward over-optimization issue. Specifically, we define behavior policy as
the next token distribution of the reward training dataset to model the
in-distribution (ID) region of the reward model. Building on this, we introduce
the behavior-supported Bellman operator to regularize the value function,
penalizing all OOD values without impacting the ID ones. Consequently, BSPO
reduces the generation of OOD responses during the RL process, thereby avoiding
overestimation caused by the reward model's extrapolation errors.
Theoretically, we prove that BSPO guarantees a monotonic improvement of the
supported policy until convergence to the optimal behavior-supported policy.
Empirical results from extensive experiments show that BSPO outperforms
baselines in preventing reward over-optimization due to OOD evaluation and
finding the optimal ID policy.
|
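The core idea of the behavior-supported regularization described above can be sketched in a few lines: keep token values only where the behavior policy (estimated from the reward-model training data) assigns non-negligible probability, and assign a fixed penalty elsewhere. The threshold, penalty value, and tensor shapes below are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of behavior-supported value regularization: values outside
# the behavior policy's support (the ID region) receive a fixed penalty.
import torch

def behavior_supported_values(q_values, behavior_logprobs,
                              support_threshold=-10.0, penalty=-100.0):
    """q_values, behavior_logprobs: (batch, vocab) tensors."""
    in_support = behavior_logprobs > support_threshold   # ID region mask
    return torch.where(in_support, q_values,
                       torch.full_like(q_values, penalty))

q = torch.randn(2, 5)                      # toy Q-values over a 5-token vocab
blp = torch.log_softmax(torch.randn(2, 5), dim=-1)
print(behavior_supported_values(q, blp))
```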
2503.18141 | Hyewon Seo | Diwei Wang, C\'edric Bobenrieth, Hyewon Seo | AGIR: Assessing 3D Gait Impairment with Reasoning based on LLMs | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Assessing gait impairment plays an important role in early diagnosis, disease
monitoring, and treatment evaluation for neurodegenerative diseases. Despite
its widespread use in clinical practice, it is limited by subjectivity and a
lack of precision. While recent deep learning-based approaches have
consistently improved classification accuracies, they often lack
interpretability, hindering their utility in clinical decision-making. To
overcome these challenges, we introduce AGIR, a novel pipeline consisting of a
pre-trained VQ-VAE motion tokenizer and a subsequent Large Language Model (LLM)
fine-tuned over pairs of motion tokens and Chain-of-Thought (CoT) reasonings.
To fine-tune an LLM for pathological gait analysis, we first introduce a
multimodal dataset by adding rationales dedicated to MDS-UPDRS gait score
assessment to an existing PD gait dataset. We then introduce a two-stage
supervised fine-tuning (SFT) strategy to enhance the LLM's motion comprehension
with pathology-specific knowledge. This strategy includes: 1) a generative
stage that aligns gait motions with analytic descriptions through bidirectional
motion-description generation, 2) a reasoning stage that integrates logical
Chain-of-Thought (CoT) reasoning for impairment assessment with UPDRS gait
score. Validation on an existing dataset and comparisons with state-of-the-art
methods confirm the robustness and accuracy of our pipeline, demonstrating its
ability to assign gait impairment scores from motion input with clinically
meaningful rationales.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 17:12:16 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wang",
"Diwei",
""
],
[
"Bobenrieth",
"Cédric",
""
],
[
"Seo",
"Hyewon",
""
]
] | TITLE: AGIR: Assessing 3D Gait Impairment with Reasoning based on LLMs
ABSTRACT: Assessing gait impairment plays an important role in early diagnosis, disease
monitoring, and treatment evaluation for neurodegenerative diseases. Despite
its widespread use in clinical practice, it is limited by subjectivity and a
lack of precision. While recent deep learning-based approaches have
consistently improved classification accuracies, they often lack
interpretability, hindering their utility in clinical decision-making. To
overcome these challenges, we introduce AGIR, a novel pipeline consisting of a
pre-trained VQ-VAE motion tokenizer and a subsequent Large Language Model (LLM)
fine-tuned over pairs of motion tokens and Chain-of-Thought (CoT) reasonings.
To fine-tune an LLM for pathological gait analysis, we first introduce a
multimodal dataset by adding rationales dedicated to MDS-UPDRS gait score
assessment to an existing PD gait dataset. We then introduce a two-stage
supervised fine-tuning (SFT) strategy to enhance the LLM's motion comprehension
with pathology-specific knowledge. This strategy includes: 1) a generative
stage that aligns gait motions with analytic descriptions through bidirectional
motion-description generation, 2) a reasoning stage that integrates logical
Chain-of-Thought (CoT) reasoning for impairment assessment with UPDRS gait
score. Validation on an existing dataset and comparisons with state-of-the-art
methods confirm the robustness and accuracy of our pipeline, demonstrating its
ability to assign gait impairment scores from motion input with clinically
meaningful rationales.
|
2503.18151 | Siwon Kim | Siwon Kim, Wooyung Yun, Jeongbin Oh, Soomok Lee | Efficient Deep Learning Approaches for Processing Ultra-Widefield
Retinal Imaging | null | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning has emerged as the predominant solution for classifying medical
images. We intend to apply these developments to the ultra-widefield (UWF)
retinal imaging dataset. Since UWF images can accurately diagnose various
retina diseases, it is very important to classify them accurately and prevent
them with early treatment. However, processing images manually is
time-consuming and labor-intensive, and there are two challenges to automating
this process. First, high performance usually requires high computational
resources. Artificial intelligence medical technology is better suited for
places with limited medical resources, but using high-performance processing
units in such environments is challenging. Second, there is the problem of the
accuracy of colour fundus photography (CFP) methods. In general, the UWF method
provides more information for retinal diagnosis than the CFP method, but most
of the research has been conducted based on the CFP method. Thus, we
demonstrate that these problems can be efficiently addressed in low-performance
units using methods such as strategic data augmentation and model ensembles,
which balance performance and computational resources while utilizing UWF
images.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 17:43:24 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Kim",
"Siwon",
""
],
[
"Yun",
"Wooyung",
""
],
[
"Oh",
"Jeongbin",
""
],
[
"Lee",
"Soomok",
""
]
] | TITLE: Efficient Deep Learning Approaches for Processing Ultra-Widefield
Retinal Imaging
ABSTRACT: Deep learning has emerged as the predominant solution for classifying medical
images. We intend to apply these developments to the ultra-widefield (UWF)
retinal imaging dataset. Since UWF images can accurately diagnose various
retina diseases, it is very important to clas sify them accurately and prevent
them with early treatment. However, processing images manually is
time-consuming and labor-intensive, and there are two challenges to automating
this process. First, high perfor mance usually requires high computational
resources. Artificial intelli gence medical technology is better suited for
places with limited medical resources, but using high-performance processing
units in such environ ments is challenging. Second, the problem of the accuracy
of colour fun dus photography (CFP) methods. In general, the UWF method
provides more information for retinal diagnosis than the CFP method, but most
of the research has been conducted based on the CFP method. Thus, we
demonstrate that these problems can be efficiently addressed in low performance
units using methods such as strategic data augmentation and model ensembles,
which balance performance and computational re sources while utilizing UWF
images.
|
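The two strategies this abstract names, strategic data augmentation and model ensembles, can be sketched generically in Keras. The tiny architecture, augmentation choices, and class count below are placeholders, not the authors' models:

```python
# Sketch of augmentation plus a probability-averaging ensemble for
# low-resource image classification; all parameters are placeholders.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

def small_classifier(num_classes=5):
    return tf.keras.Sequential([
        augment,                                  # active only in training
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

members = [small_classifier() for _ in range(3)]
# ...train each member, possibly on different augmented views...

def ensemble_predict(models, images):
    probs = [m(images, training=False) for m in models]
    return tf.reduce_mean(tf.stack(probs), axis=0)   # average probabilities
```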
2503.18162 | Hui Xue PhD | Hui Xue, Sarah M. Hooper, Iain Pierce, Rhodri H. Davies, John Stairs,
Joseph Naegele, Adrienne E. Campbell-Washburn, Charlotte Manisty, James C.
Moon, Thomas A. Treibel, Peter Kellman, Michael S. Hansen | SNRAware: Improved Deep Learning MRI Denoising with SNR Unit Training
and G-factor Map Augmentation | null | null | null | null | physics.med-ph cs.AI cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | To develop and evaluate a new deep learning MR denoising method that
leverages quantitative noise distribution information from the reconstruction
process to improve denoising performance and generalization.
This retrospective study trained 14 different transformer and convolutional
models with two backbone architectures on a large dataset of 2,885,236 images
from 96,605 cardiac retro-gated cine complex series acquired at 3T. The
proposed training scheme, termed SNRAware, leverages knowledge of the MRI
reconstruction process to improve denoising performance by simulating large,
high quality, and diverse synthetic datasets, and providing quantitative
information about the noise distribution to the model. In-distribution testing
was performed on a hold-out dataset of 3000 samples with performance measured
using PSNR and SSIM, with ablation comparison without the noise augmentation.
Out-of-distribution tests were conducted on cardiac real-time cine, first-pass
cardiac perfusion, and neuro and spine MRI, all acquired at 1.5T, to test model
generalization across imaging sequences, dynamically changing contrast,
different anatomies, and field strengths. The best model found in the
in-distribution test generalized well to out-of-distribution samples,
delivering 6.5x and 2.9x CNR improvement for real-time cine and perfusion
imaging, respectively. Further, a model trained with 100% cardiac cine data
generalized well to a T1 MPRAGE neuro 3D scan and T2 TSE spine MRI.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 18:16:36 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xue",
"Hui",
""
],
[
"Hooper",
"Sarah M.",
""
],
[
"Pierce",
"Iain",
""
],
[
"Davies",
"Rhodri H.",
""
],
[
"Stairs",
"John",
""
],
[
"Naegele",
"Joseph",
""
],
[
"Campbell-Washburn",
"Adrienne E.",
""
],
[
"Manisty",
"Charlotte",
""
],
[
"Moon",
"James C.",
""
],
[
"Treibel",
"Thomas A.",
""
],
[
"Kellman",
"Peter",
""
],
[
"Hansen",
"Michael S.",
""
]
] | TITLE: SNRAware: Improved Deep Learning MRI Denoising with SNR Unit Training
and G-factor Map Augmentation
ABSTRACT: To develop and evaluate a new deep learning MR denoising method that
leverages quantitative noise distribution information from the reconstruction
process to improve denoising performance and generalization.
This retrospective study trained 14 different transformer and convolutional
models with two backbone architectures on a large dataset of 2,885,236 images
from 96,605 cardiac retro-gated cine complex series acquired at 3T. The
proposed training scheme, termed SNRAware, leverages knowledge of the MRI
reconstruction process to improve denoising performance by simulating large,
high quality, and diverse synthetic datasets, and providing quantitative
information about the noise distribution to the model. In-distribution testing
was performed on a hold-out dataset of 3000 samples with performance measured
using PSNR and SSIM, with ablation comparison without the noise augmentation.
Out-of-distribution tests were conducted on cardiac real-time cine, first-pass
cardiac perfusion, and neuro and spine MRI, all acquired at 1.5T, to test model
generalization across imaging sequences, dynamically changing contrast,
different anatomies, and field strengths. The best model found in the
in-distribution test generalized well to out-of-distribution samples,
delivering 6.5x and 2.9x CNR improvement for real-time cine and perfusion
imaging, respectively. Further, a model trained with 100% cardiac cine data
generalized well to a T1 MPRAGE neuro 3D scan and T2 TSE spine MRI.
|
2503.18170 | Abderrachid Hamrani | Abderrachid Hamrani, Anuradha Godavarty | Self-Attention Diffusion Models for Zero-Shot Biomedical Image
Segmentation: Unlocking New Frontiers in Medical Imaging | 15 pages, 5 figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Producing high-quality segmentation masks for medical images is a fundamental
challenge in biomedical image analysis. Recent research has explored
large-scale supervised training to enable segmentation across various medical
imaging modalities and unsupervised training to facilitate segmentation without
dense annotations. However, constructing a model capable of segmenting diverse
medical images in a zero-shot manner without any annotations remains a
significant hurdle. This paper introduces the Attention Diffusion Zero-shot
Unsupervised System (ADZUS), a novel approach that leverages self-attention
diffusion models for zero-shot biomedical image segmentation. ADZUS harnesses
the intrinsic capabilities of pre-trained diffusion models, utilizing their
generative and discriminative potentials to segment medical images without
requiring annotated training data or prior domain-specific knowledge. We detail
the ADZUS architecture, highlighting its integration of self-attention
mechanisms that facilitate context-aware and detail-sensitive segmentations.
Experimental results across various medical imaging datasets,
including skin lesion segmentation, chest X-ray infection segmentation, and
white blood cell segmentation, reveal that ADZUS achieves state-of-the-art
performance. Notably, ADZUS reached Dice scores ranging from 88.7\% to 92.9\%
and IoU scores from 66.3\% to 93.3\% across different segmentation tasks,
demonstrating significant improvements in handling novel, unseen medical
imagery. It is noteworthy that while ADZUS demonstrates high effectiveness, it
demands substantial computational resources and extended processing times. The
model's efficacy in zero-shot settings underscores its potential to reduce
reliance on costly annotations and seamlessly adapt to new medical imaging
tasks, thereby expanding the diagnostic capabilities of AI-driven medical
imaging technologies.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 18:47:12 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hamrani",
"Abderrachid",
""
],
[
"Godavarty",
"Anuradha",
""
]
] | TITLE: Self-Attention Diffusion Models for Zero-Shot Biomedical Image
Segmentation: Unlocking New Frontiers in Medical Imaging
ABSTRACT: Producing high-quality segmentation masks for medical images is a fundamental
challenge in biomedical image analysis. Recent research has explored
large-scale supervised training to enable segmentation across various medical
imaging modalities and unsupervised training to facilitate segmentation without
dense annotations. However, constructing a model capable of segmenting diverse
medical images in a zero-shot manner without any annotations remains a
significant hurdle. This paper introduces the Attention Diffusion Zero-shot
Unsupervised System (ADZUS), a novel approach that leverages self-attention
diffusion models for zero-shot biomedical image segmentation. ADZUS harnesses
the intrinsic capabilities of pre-trained diffusion models, utilizing their
generative and discriminative potentials to segment medical images without
requiring annotated training data or prior domain-specific knowledge. We detail
the ADZUS architecture, highlighting its integration of self-attention
mechanisms that facilitate context-aware and detail-sensitive segmentations.
Experimental results across various medical imaging datasets,
including skin lesion segmentation, chest X-ray infection segmentation, and
white blood cell segmentation, reveal that ADZUS achieves state-of-the-art
performance. Notably, ADZUS reached Dice scores ranging from 88.7\% to 92.9\%
and IoU scores from 66.3\% to 93.3\% across different segmentation tasks,
demonstrating significant improvements in handling novel, unseen medical
imagery. It is noteworthy that while ADZUS demonstrates high effectiveness, it
demands substantial computational resources and extended processing times. The
model's efficacy in zero-shot settings underscores its potential to reduce
reliance on costly annotations and seamlessly adapt to new medical imaging
tasks, thereby expanding the diagnostic capabilities of AI-driven medical
imaging technologies.
|
2503.18174 | Weronika {\L}ajewska | Weronika {\L}ajewska and Krisztian Balog | GINGER: Grounded Information Nugget-Based Generation of Responses | null | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval-augmented generation (RAG) faces challenges related to factual
correctness, source attribution, and response completeness. To address them, we
propose a modular pipeline for grounded response generation that operates on
information nuggets -- minimal, atomic units of relevant information extracted
from retrieved documents. The multistage pipeline encompasses nugget detection,
clustering, ranking, top cluster summarization, and fluency enhancement. It
guarantees grounding in specific facts, facilitates source attribution, and
ensures maximum information inclusion within length constraints. Extensive
experiments on the TREC RAG'24 dataset evaluated with the AutoNuggetizer
framework demonstrate that GINGER achieves state-of-the-art performance on this
benchmark.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 19:10:23 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Łajewska",
"Weronika",
""
],
[
"Balog",
"Krisztian",
""
]
] | TITLE: GINGER: Grounded Information Nugget-Based Generation of Responses
ABSTRACT: Retrieval-augmented generation (RAG) faces challenges related to factual
correctness, source attribution, and response completeness. To address them, we
propose a modular pipeline for grounded response generation that operates on
information nuggets -- minimal, atomic units of relevant information extracted
from retrieved documents. The multistage pipeline encompasses nugget detection,
clustering, ranking, top cluster summarization, and fluency enhancement. It
guarantees grounding in specific facts, facilitates source attribution, and
ensures maximum information inclusion within length constraints. Extensive
experiments on the TREC RAG'24 dataset evaluated with the AutoNuggetizer
framework demonstrate that GINGER achieves state-of-the-art performance on this
benchmark.
|
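The pipeline stages the GINGER abstract names (nugget detection, clustering, ranking, summarization) can be illustrated schematically. The sketch below stubs out detection and uses generic tf-idf features with k-means; it is a schematic illustration under those assumptions, not the GINGER implementation:

```python
# Rough sketch of a nugget-based pipeline: cluster detected nuggets, rank
# clusters by support, and keep the top clusters for summarization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from collections import Counter

nuggets = ["fact A from doc 1", "fact A paraphrased in doc 2",
           "fact B from doc 3", "fact C from doc 1"]   # detected nuggets (stub)

X = TfidfVectorizer().fit_transform(nuggets)
labels = KMeans(n_clusters=3, n_init="auto", random_state=0).fit_predict(X)

# Rank clusters by support (number of nuggets) and keep the top ones.
ranked = [c for c, _ in Counter(labels).most_common(2)]
top_nuggets = [n for n, c in zip(nuggets, labels) if c in ranked]
# A summarizer / fluency model would then compose the grounded response
# from top_nuggets, carrying source attribution per nugget.
```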
2503.18177 | Dim Shaiakhmetov | Gulnaz Gimaletdinova, Dim Shaiakhmetov, Madina Akpaeva, Mukhammadmuso
Abduzhabbarov, Kadyrmamat Momunov | Training A Neural Network For Partially Occluded Road Sign
Identification In The Context Of Autonomous Vehicles | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing number of autonomous vehicles and the rapid development of
computer vision technologies underscore the particular importance of conducting
research on the accuracy of traffic sign recognition. Numerous studies in this
field have already achieved significant results, demonstrating high
effectiveness in addressing traffic sign recognition tasks. However, the task
becomes considerably more complex when a sign is partially obscured by
surrounding objects, such as tree branches, billboards, or other elements of
the urban environment. In our study, we investigated how partial occlusion of
traffic signs affects their recognition. For this purpose, we collected a
dataset comprising 5,746 images, including both fully visible and partially
occluded signs, and made it publicly available. Using this dataset, we compared
the performance of our custom convolutional neural network (CNN), which
achieved 96% accuracy, with models trained using transfer learning. The best
result was obtained by VGG16 with full layer unfreezing, reaching 99% accuracy.
Additional experiments revealed that models trained solely on fully visible
signs lose effectiveness when recognizing occluded signs. This highlights the
critical importance of incorporating real-world data with partial occlusion
into training sets to ensure robust model performance in complex practical
scenarios and to enhance the safety of autonomous driving.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 19:25:56 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Gimaletdinova",
"Gulnaz",
""
],
[
"Shaiakhmetov",
"Dim",
""
],
[
"Akpaeva",
"Madina",
""
],
[
"Abduzhabbarov",
"Mukhammadmuso",
""
],
[
"Momunov",
"Kadyrmamat",
""
]
] | TITLE: Training A Neural Network For Partially Occluded Road Sign
Identification In The Context Of Autonomous Vehicles
ABSTRACT: The increasing number of autonomous vehicles and the rapid development of
computer vision technologies underscore the particular importance of conducting
research on the accuracy of traffic sign recognition. Numerous studies in this
field have already achieved significant results, demonstrating high
effectiveness in addressing traffic sign recognition tasks. However, the task
becomes considerably more complex when a sign is partially obscured by
surrounding objects, such as tree branches, billboards, or other elements of
the urban environment. In our study, we investigated how partial occlusion of
traffic signs affects their recognition. For this purpose, we collected a
dataset comprising 5,746 images, including both fully visible and partially
occluded signs, and made it publicly available. Using this dataset, we compared
the performance of our custom convolutional neural network (CNN), which
achieved 96% accuracy, with models trained using transfer learning. The best
result was obtained by VGG16 with full layer unfreezing, reaching 99% accuracy.
Additional experiments revealed that models trained solely on fully visible
signs lose effectiveness when recognizing occluded signs. This highlights the
critical importance of incorporating real-world data with partial occlusion
into training sets to ensure robust model performance in complex practical
scenarios and to enhance the safety of autonomous driving.
|
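The best-performing setup this abstract reports, VGG16 with full layer unfreezing, corresponds to a standard Keras transfer-learning recipe. The head layers, input size, and class count below are assumptions, not the authors' exact configuration:

```python
# Sketch of VGG16 transfer learning with all layers unfrozen, adapted to a
# traffic-sign dataset; head and hyperparameters are placeholders.
import tensorflow as tf

NUM_CLASSES = 43  # placeholder; set to the number of sign classes

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = True  # "full layer unfreezing": fine-tune every layer

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),  # small LR when all layers train
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```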
2503.18178 | Alessio Alexiadis | Ossama Shafiq, Bahman Ghiassi, Alessio Alexiadis | The Power of Small LLMs in Geometry Generation for Physical Simulations | 24 pages, 17 figures | null | null | null | cs.CE | http://creativecommons.org/licenses/by/4.0/ | Engineers widely rely on simulation platforms like COMSOL or ANSYS to model
and optimise processes. However, setting up such simulations requires expertise
in defining geometry, generating meshes, establishing boundary conditions, and
configuring solvers. This research aims to simplify this process by enabling
engineers to describe their setup in plain language, allowing a Large Language
Model (LLM) to generate the necessary input files for their specific
application. This novel approach establishes a direct link between
natural language and complex engineering tasks. Building on previous work that
evaluated various LLMs for generating input files across simple and complex
geometries, this study demonstrates that small LLMs - specifically, Phi-3 Mini
and Qwen-2.5 1.5B - can be fine-tuned to generate precise engineering
geometries in GMSH format. Using Low-Rank Adaptation (LoRA), we fine-tuned the
models on a curated dataset of 480 instruction-output pairs encompassing
simple shapes (squares,
rectangles, circles, and half circles) and more complex structures (I-beams,
cylindrical pipes, and bent pipes). The fine-tuned models produced
high-fidelity outputs, handling routine geometry generation with minimal
intervention. While challenges remain with geometries involving combinations of
multiple bodies, this study demonstrates that fine-tuned small models can
outperform larger models like GPT-4o in specialised tasks, offering a precise
and resource-efficient alternative for engineering applications.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 19:28:33 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Shafiq",
"Ossama",
""
],
[
"Ghiassi",
"Bahman",
""
],
[
"Alexiadis",
"Alessio",
""
]
] | TITLE: The Power of Small LLMs in Geometry Generation for Physical Simulations
ABSTRACT: Engineers widely rely on simulation platforms like COMSOL or ANSYS to model
and optimise processes. However, setting up such simulations requires expertise
in defining geometry, generating meshes, establishing boundary conditions, and
configuring solvers. This research aims to simplify this process by enabling
engineers to describe their setup in plain language, allowing a Large Language
Model (LLM) to generate the necessary input files for their specific
application. This novel approach allows establishing a direct link between
natural language and complex engineering tasks. Building on previous work that
evaluated various LLMs for generating input files across simple and complex
geometries, this study demonstrates that small LLMs - specifically, Phi-3 Mini
and Qwen-2.5 1.5B - can be fine-tuned to generate precise engineering
geometries in GMSH format. Using Low-Rank Adaptation (LoRA), we fine-tuned the
models on a curated dataset of 480 instruction-output pairs encompassing
simple shapes (squares,
rectangles, circles, and half circles) and more complex structures (I-beams,
cylindrical pipes, and bent pipes). The fine-tuned models produced
high-fidelity outputs, handling routine geometry generation with minimal
intervention. While challenges remain with geometries involving combinations of
multiple bodies, this study demonstrates that fine-tuned small models can
outperform larger models like GPT-4o in specialised tasks, offering a precise
and resource-efficient alternative for engineering applications.
|
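The LoRA fine-tuning this abstract describes follows a common `peft` recipe. The rank, alpha, dropout, and target modules below are common defaults, not the paper's reported configuration:

```python
# Illustrative LoRA setup for a small causal LM of the kind studied here;
# hyperparameters are assumptions, not the paper's configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen2.5-1.5B"             # one of the two models studied
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],     # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()           # only the low-rank adapters train
# Train on (instruction, GMSH .geo output) pairs with any causal-LM trainer.
```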
2503.18179 | Xiaojie Yang | Xiaojie Yang and Zipei Fan and Hangli Ge and Takashi Michikata and
Ryosuke Shibasaki and Noboru Koshizuka | Causality-Aware Next Location Prediction Framework based on Human
Mobility Stratification | Accepted by IEEE UIC 2024 | null | null | null | cs.LG cs.IR | http://creativecommons.org/licenses/by/4.0/ | Human mobility data are fused with multiple travel patterns and hidden
spatiotemporal patterns are extracted by integrating user, location, and time
information to improve next location prediction accuracy. In existing next
location prediction methods, different causal relationships that result from
patterns in human mobility data are ignored, which leads to confounding
information that can have a negative effect on predictions. Therefore, this
study introduces a causality-aware framework for next location prediction,
focusing on human mobility stratification for travel patterns. In our research,
a novel causal graph is developed that describes the relationships between
various input variables. We use counterfactuals to enhance the indirect effects
in our causal graph for specific travel patterns: non-anchor targeted travels.
The proposed framework is designed as a plug-and-play module that integrates
multiple next location prediction paradigms. We tested our proposed framework
using several state-of-the-art models and human mobility datasets, and the
results reveal that the proposed module improves the prediction performance. In
addition, we provide results from the ablation study and quantitative study to
demonstrate the soundness of our causal graph and its ability to further
enhance the interpretability of the current next location prediction models.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 19:30:24 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yang",
"Xiaojie",
""
],
[
"Fan",
"Zipei",
""
],
[
"Ge",
"Hangli",
""
],
[
"Michikata",
"Takashi",
""
],
[
"Shibasaki",
"Ryosuke",
""
],
[
"Koshizuka",
"Noboru",
""
]
] | TITLE: Causality-Aware Next Location Prediction Framework based on Human
Mobility Stratification
ABSTRACT: Human mobility data are fused with multiple travel patterns and hidden
spatiotemporal patterns are extracted by integrating user, location, and time
information to improve next location prediction accuracy. In existing next
location prediction methods, different causal relationships that result from
patterns in human mobility data are ignored, which leads to confounding
information that can have a negative effect on predictions. Therefore, this
study introduces a causality-aware framework for next location prediction,
focusing on human mobility stratification for travel patterns. In our research,
a novel causal graph is developed that describes the relationships between
various input variables. We use counterfactuals to enhance the indirect effects
in our causal graph for specific travel patterns: non-anchor targeted travels.
The proposed framework is designed as a plug-and-play module that integrates
multiple next location prediction paradigms. We tested our proposed framework
using several state-of-the-art models and human mobility datasets, and the
results reveal that the proposed module improves the prediction performance. In
addition, we provide results from the ablation study and quantitative study to
demonstrate the soundness of our causal graph and its ability to further
enhance the interpretability of the current next location prediction models.
|
2503.18182 | Agam Shah | Divya Patel, Vansh Parikh, Om Patel, Agam Shah, Bhaskar Chaudhury | Exploring Topic Trends in COVID-19 Research Literature using
Non-Negative Matrix Factorization | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In this work, we apply topic modeling using Non-Negative Matrix Factorization
(NMF) on the COVID-19 Open Research Dataset (CORD-19) to uncover the underlying
thematic structure and its evolution within the extensive body of COVID-19
research literature. NMF factorizes the document-term matrix into two
non-negative matrices, effectively representing the topics and their
distribution across the documents. This helps us see how strongly documents
relate to topics and how topics relate to words. We describe the complete
methodology, which involves a series of rigorous pre-processing steps to
standardize the available text data while preserving the context of phrases,
followed by feature extraction using the term frequency-inverse document
frequency (tf-idf), which assigns weights to words based on their frequency and
rarity in the dataset. To ensure the robustness of our topic model, we conduct
a stability analysis. This process assesses the stability scores of the NMF
topic model for different numbers of topics, enabling us to select the optimal
number of topics for our analysis. Through our analysis, we track the evolution
of topics over time within the CORD-19 dataset. Our findings contribute to the
understanding of the knowledge structure of the COVID-19 research landscape,
providing a valuable resource for future research in this field.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 19:37:52 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Patel",
"Divya",
""
],
[
"Parikh",
"Vansh",
""
],
[
"Patel",
"Om",
""
],
[
"Shah",
"Agam",
""
],
[
"Chaudhury",
"Bhaskar",
""
]
] | TITLE: Exploring Topic Trends in COVID-19 Research Literature using
Non-Negative Matrix Factorization
ABSTRACT: In this work, we apply topic modeling using Non-Negative Matrix Factorization
(NMF) on the COVID-19 Open Research Dataset (CORD-19) to uncover the underlying
thematic structure and its evolution within the extensive body of COVID-19
research literature. NMF factorizes the document-term matrix into two
non-negative matrices, effectively representing the topics and their
distribution across the documents. This helps us see how strongly documents
relate to topics and how topics relate to words. We describe the complete
methodology, which involves a series of rigorous pre-processing steps to
standardize the available text data while preserving the context of phrases,
followed by feature extraction using the term frequency-inverse document
frequency (tf-idf), which assigns weights to words based on their frequency and
rarity in the dataset. To ensure the robustness of our topic model, we conduct
a stability analysis. This process assesses the stability scores of the NMF
topic model for different numbers of topics, enabling us to select the optimal
number of topics for our analysis. Through our analysis, we track the evolution
of topics over time within the CORD-19 dataset. Our findings contribute to the
understanding of the knowledge structure of the COVID-19 research landscape,
providing a valuable resource for future research in this field.
|
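The tf-idf plus NMF pipeline this abstract describes maps directly onto scikit-learn. The toy corpus below stands in for CORD-19, and the topic count is arbitrary rather than the stability-selected optimum:

```python
# Minimal sketch of the tf-idf + NMF topic-modeling pipeline on a toy corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = ["vaccine efficacy trial results",
        "hospital icu ventilator capacity",
        "spike protein antibody response"]       # placeholder documents

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)                    # document-term matrix

nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)    # document-topic weights
H = nmf.components_         # topic-term weights

terms = tfidf.get_feature_names_out()
for k, topic in enumerate(H):
    top = topic.argsort()[-3:][::-1]
    print(f"topic {k}:", [terms[i] for i in top])
```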
2503.18185 | Georgios Papadopoulos Th. | Spyridon Evangelatos, Eleni Veroni, Vasilis Efthymiou, Christos
Nikolopoulos, Georgios Th. Papadopoulos, Panagiotis Sarigiannidis | Exploring Energy Landscapes for Minimal Counterfactual Explanations:
Applications in Cybersecurity and Beyond | null | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Counterfactual explanations have emerged as a prominent method in Explainable
Artificial Intelligence (XAI), providing intuitive and actionable insights into
Machine Learning model decisions. In contrast to other traditional feature
attribution methods that assess the importance of input variables,
counterfactual explanations focus on identifying the minimal changes required
to alter a model's prediction, offering a ``what-if'' analysis that is close to
human reasoning. In the context of XAI, counterfactuals enhance transparency,
trustworthiness and fairness, offering explanations that are not just
interpretable but directly applicable in the decision-making processes.
In this paper, we present a novel framework that integrates perturbation
theory and statistical mechanics to generate minimal counterfactual
explanations in explainable AI. We employ a local Taylor expansion of a Machine
Learning model's predictive function and reformulate the counterfactual search
as an energy minimization problem over a complex landscape. In sequence, we
model the probability of candidate perturbations leveraging the Boltzmann
distribution and use simulated annealing for iterative refinement. Our approach
systematically identifies the smallest modifications required to change a
model's prediction while maintaining plausibility. Experimental results on
benchmark datasets for cybersecurity in Internet of Things environments
demonstrate that our method provides actionable, interpretable counterfactuals
and offers deeper insights into model sensitivity and decision boundaries in
high-dimensional spaces.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 19:48:37 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Evangelatos",
"Spyridon",
""
],
[
"Veroni",
"Eleni",
""
],
[
"Efthymiou",
"Vasilis",
""
],
[
"Nikolopoulos",
"Christos",
""
],
[
"Papadopoulos",
"Georgios Th.",
""
],
[
"Sarigiannidis",
"Panagiotis",
""
]
] | TITLE: Exploring Energy Landscapes for Minimal Counterfactual Explanations:
Applications in Cybersecurity and Beyond
ABSTRACT: Counterfactual explanations have emerged as a prominent method in Explainable
Artificial Intelligence (XAI), providing intuitive and actionable insights into
Machine Learning model decisions. In contrast to other traditional feature
attribution methods that assess the importance of input variables,
counterfactual explanations focus on identifying the minimal changes required
to alter a model's prediction, offering a ``what-if'' analysis that is close to
human reasoning. In the context of XAI, counterfactuals enhance transparency,
trustworthiness and fairness, offering explanations that are not just
interpretable but directly applicable in the decision-making processes.
In this paper, we present a novel framework that integrates perturbation
theory and statistical mechanics to generate minimal counterfactual
explanations in explainable AI. We employ a local Taylor expansion of a Machine
Learning model's predictive function and reformulate the counterfactual search
as an energy minimization problem over a complex landscape. We then
model the probability of candidate perturbations using the Boltzmann
distribution and use simulated annealing for iterative refinement. Our approach
systematically identifies the smallest modifications required to change a
model's prediction while maintaining plausibility. Experimental results on
benchmark datasets for cybersecurity in Internet of Things environments
demonstrate that our method provides actionable, interpretable counterfactuals
and offers deeper insights into model sensitivity and decision boundaries in
high-dimensional spaces.
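A minimal sketch of the annealing idea in this abstract follows, assuming a toy linear classifier and an illustrative energy (a distance term plus a constant penalty while the prediction is unchanged); the cooling schedule and weights are placeholders, not the authors' formulation.

```python
# Sketch: counterfactual search via simulated annealing with a Boltzmann
# acceptance rule. Toy classifier and energy; tuning is illustrative.
import numpy as np

rng = np.random.default_rng(0)
w, b = np.array([1.5, -2.0]), 0.3            # toy linear classifier

def predict(x):
    return int(x @ w + b > 0)

def energy(x, x0, target, dist_w=0.1, penalty=1.0):
    # small distance term plus a penalty while the prediction is unchanged
    miss = 0.0 if predict(x) == target else penalty
    return dist_w * np.linalg.norm(x - x0) + miss

def counterfactual(x0, target, steps=3000, T0=1.0):
    x = x0.copy()
    e = energy(x, x0, target)
    best, best_e = None, np.inf
    for t in range(steps):
        T = max(T0 * (1.0 - t / steps), 1e-6)       # linear cooling
        cand = x + rng.normal(scale=0.1, size=x.shape)
        e_c = energy(cand, x0, target)
        # Boltzmann acceptance: always take downhill moves, sometimes uphill
        if e_c <= e or rng.random() < np.exp(-(e_c - e) / T):
            x, e = cand, e_c
        if predict(x) == target and e < best_e:     # track smallest valid flip
            best, best_e = x.copy(), e
    return best

x0 = np.array([-1.0, 1.0])                   # predicted class 0
cf = counterfactual(x0, target=1)
if cf is not None:
    print("flip:", predict(x0), "->", predict(cf), "delta:", np.round(cf - x0, 3))
```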
|
2503.18190 | Jamie Haddock | Alejandra Castillo, Jamie Haddock, Iryna Hartsock, Paulina Hoyos, Lara
Kassab, Alona Kryshchenko, Kamila Larripa, Deanna Needell, Shambhavi
Suryanarayanan, Karamatou Yacoubou Djima | Quantile-Based Randomized Kaczmarz for Corrupted Tensor Linear Systems | null | null | null | null | stat.ML cs.LG cs.NA math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reconstruction of tensor-valued signals from corrupted measurements,
known as tensor regression, has become essential in many multi-modal
applications such as hyperspectral image reconstruction and medical imaging. In
this work, we address the tensor linear system problem $\mathcal{A}
\mathcal{X}=\mathcal{B}$, where $\mathcal{A}$ is a measurement operator,
$\mathcal{X}$ is the unknown tensor-valued signal, and $\mathcal{B}$ contains
the measurements, possibly corrupted by arbitrary errors. Such corruption is
common in large-scale tensor data, where transmission, sensory, or storage
errors are rare per instance but likely over the entire dataset and may be
arbitrarily large in magnitude. We extend the Kaczmarz method, a popular
iterative algorithm for solving large linear systems, to develop a Quantile
Tensor Randomized Kaczmarz (QTRK) method robust to large, sparse corruptions in
the observations $\mathcal{B}$. This approach combines the tensor Kaczmarz
framework with quantile-based statistics, allowing it to mitigate adversarial
corruptions and improve convergence reliability. We also propose and discuss
the Masked Quantile Randomized Kaczmarz (mQTRK) variant, which selectively
applies partial updates to further mitigate corruptions. We present convergence
guarantees, discuss the advantages and disadvantages of our approaches, and
demonstrate the effectiveness of our methods through experiments, including an
application for video deblurring.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 20:15:33 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Castillo",
"Alejandra",
""
],
[
"Haddock",
"Jamie",
""
],
[
"Hartsock",
"Iryna",
""
],
[
"Hoyos",
"Paulina",
""
],
[
"Kassab",
"Lara",
""
],
[
"Kryshchenko",
"Alona",
""
],
[
"Larripa",
"Kamila",
""
],
[
"Needell",
"Deanna",
""
],
[
"Suryanarayanan",
"Shambhavi",
""
],
[
"Djima",
"Karamatou Yacoubou",
""
]
] | TITLE: Quantile-Based Randomized Kaczmarz for Corrupted Tensor Linear Systems
ABSTRACT: The reconstruction of tensor-valued signals from corrupted measurements,
known as tensor regression, has become essential in many multi-modal
applications such as hyperspectral image reconstruction and medical imaging. In
this work, we address the tensor linear system problem $\mathcal{A}
\mathcal{X}=\mathcal{B}$, where $\mathcal{A}$ is a measurement operator,
$\mathcal{X}$ is the unknown tensor-valued signal, and $\mathcal{B}$ contains
the measurements, possibly corrupted by arbitrary errors. Such corruption is
common in large-scale tensor data, where transmission, sensory, or storage
errors are rare per instance but likely over the entire dataset and may be
arbitrarily large in magnitude. We extend the Kaczmarz method, a popular
iterative algorithm for solving large linear systems, to develop a Quantile
Tensor Randomized Kaczmarz (QTRK) method robust to large, sparse corruptions in
the observations $\mathcal{B}$. This approach combines the tensor Kaczmarz
framework with quantile-based statistics, allowing it to mitigate adversarial
corruptions and improve convergence reliability. We also propose and discuss
the Masked Quantile Randomized Kaczmarz (mQTRK) variant, which selectively
applies partial updates to further mitigate corruptions. We present convergence
guarantees, discuss the advantages and disadvantages of our approaches, and
demonstrate the effectiveness of our methods through experiments, including an
application for video deblurring.
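To convey the quantile-screening idea on the simpler matrix case Ax = b (the paper itself works with tensor systems), a sketch might look like the following; the quantile level, batch size, and iteration count are illustrative.

```python
# Sketch of a quantile-based randomized Kaczmarz loop for a corrupted
# linear system: residuals above a sampled quantile are screened out
# before the projection step, filtering likely-corrupted rows.
import numpy as np

rng = np.random.default_rng(1)
m, n = 500, 20
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true
b[rng.choice(m, size=25, replace=False)] += rng.normal(scale=50, size=25)  # sparse corruptions

def qrk(A, b, q=0.7, iters=5000, batch=50):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        idx = rng.choice(len(b), size=batch, replace=False)
        res = np.abs(A[idx] @ x - b[idx])
        ok = idx[res <= np.quantile(res, q)]   # keep low-residual rows only
        i = rng.choice(ok)
        r = A[i] @ x - b[i]
        x -= r * A[i] / np.dot(A[i], A[i])     # standard Kaczmarz projection
    return x

x_hat = qrk(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```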
|
2503.18195 | Hongliang Chi | Hongliang Chi, Qiong Wu, Zhengyi Zhou, Yao Ma | Shapley-Guided Utility Learning for Effective Graph Inference Data
Valuation | null | null | null | null | cs.LG cs.GT | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) have demonstrated remarkable performance in
various graph-based machine learning tasks, yet evaluating the importance of
neighbors of testing nodes remains largely unexplored due to the challenge of
assessing data importance without test labels. To address this gap, we propose
Shapley-Guided Utility Learning (SGUL), a novel framework for graph inference
data valuation. SGUL innovatively combines transferable data-specific and
model-specific features to approximate test accuracy without relying on ground
truth labels. By incorporating Shapley values as a preprocessing step and using
feature Shapley values as input, our method enables direct optimization of
Shapley value prediction while reducing computational demands. SGUL overcomes
key limitations of existing methods, including poor generalization to unseen
test-time structures and indirect optimization. Experiments on diverse graph
datasets demonstrate that SGUL consistently outperforms existing baselines in
both inductive and transductive settings. SGUL offers an effective, efficient,
and interpretable approach for quantifying the value of test-time neighbors.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 20:35:03 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chi",
"Hongliang",
""
],
[
"Wu",
"Qiong",
""
],
[
"Zhou",
"Zhengyi",
""
],
[
"Ma",
"Yao",
""
]
] | TITLE: Shapley-Guided Utility Learning for Effective Graph Inference Data
Valuation
ABSTRACT: Graph Neural Networks (GNNs) have demonstrated remarkable performance in
various graph-based machine learning tasks, yet evaluating the importance of
neighbors of testing nodes remains largely unexplored due to the challenge of
assessing data importance without test labels. To address this gap, we propose
Shapley-Guided Utility Learning (SGUL), a novel framework for graph inference
data valuation. SGUL innovatively combines transferable data-specific and
model-specific features to approximate test accuracy without relying on ground
truth labels. By incorporating Shapley values as a preprocessing step and using
feature Shapley values as input, our method enables direct optimization of
Shapley value prediction while reducing computational demands. SGUL overcomes
key limitations of existing methods, including poor generalization to unseen
test-time structures and indirect optimization. Experiments on diverse graph
datasets demonstrate that SGUL consistently outperforms existing baselines in
both inductive and transductive settings. SGUL offers an effective, efficient,
and interpretable approach for quantifying the value of test-time neighbors.
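Since SGUL builds on Shapley values, a small, generic Monte Carlo permutation estimate of Shapley values for a set of players (here standing in for test-time neighbors, with a placeholder utility) may help fix ideas; it is not the paper's learned-utility method.

```python
# Monte Carlo permutation estimate of Shapley values over "players"
# (e.g., neighbors of a test node) under a placeholder utility.
import random
random.seed(0)

players = ["n1", "n2", "n3"]

def utility(coalition):
    # placeholder utility: n1 and n2 are redundant with each other
    s = set(coalition)
    gain = 0.6 if ("n1" in s or "n2" in s) else 0.0
    if "n3" in s:
        gain += 0.3
    return gain

def mc_shapley(players, utility, samples=2000):
    phi = {p: 0.0 for p in players}
    for _ in range(samples):
        perm = random.sample(players, len(players))   # random ordering
        coalition, prev = [], utility([])
        for p in perm:
            coalition.append(p)
            cur = utility(coalition)
            phi[p] += cur - prev                      # marginal contribution
            prev = cur
    return {p: v / samples for p, v in phi.items()}

print(mc_shapley(players, utility))   # roughly n1: 0.3, n2: 0.3, n3: 0.3
```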
|
2503.18197 | Jiali Cheng | Ziheng Chen, Jiali Cheng, Gabriele Tolomei, Sijia Liu, Hadi Amiri, Yu
Wang, Kaushiki Nag, Lu Lin | FROG: Fair Removal on Graphs | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | As compliance with privacy regulations becomes increasingly critical, the
growing demand for data privacy has highlighted the significance of machine
unlearning in many real-world applications, such as social networks and
recommender systems, many of which can be represented as graph-structured data.
However, existing graph unlearning algorithms indiscriminately modify edges or
nodes from well-trained models without considering the potential impact of such
structural modifications on fairness. For example, forgetting links between
nodes with different genders in a social network may exacerbate group
disparities, leading to significant fairness concerns. To address these
challenges, we propose a novel approach that jointly optimizes the graph
structure and the corresponding model for fair unlearning tasks.
Specifically, our approach rewires the graph to enhance unlearning efficiency by
removing redundant edges that hinder forgetting while preserving fairness
through targeted edge augmentation. Additionally, we introduce a worst-case
evaluation mechanism to assess the reliability of fair unlearning performance.
Extensive experiments on real-world datasets demonstrate the effectiveness of
the proposed approach in achieving superior unlearning outcomes.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 20:39:53 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Ziheng",
""
],
[
"Cheng",
"Jiali",
""
],
[
"Tolomei",
"Gabriele",
""
],
[
"Liu",
"Sijia",
""
],
[
"Amiri",
"Hadi",
""
],
[
"Wang",
"Yu",
""
],
[
"Nag",
"Kaushiki",
""
],
[
"Lin",
"Lu",
""
]
] | TITLE: FROG: Fair Removal on Graphs
ABSTRACT: As compliance with privacy regulations becomes increasingly critical, the
growing demand for data privacy has highlighted the significance of machine
unlearning in many real-world applications, such as social networks and
recommender systems, many of which can be represented as graph-structured data.
However, existing graph unlearning algorithms indiscriminately modify edges or
nodes from well-trained models without considering the potential impact of such
structural modifications on fairness. For example, forgetting links between
nodes with different genders in a social network may exacerbate group
disparities, leading to significant fairness concerns. To address these
challenges, we propose a novel approach that jointly optimizes the graph
structure and the corresponding model for fair unlearning tasks.
Specifically, our approach rewires the graph to enhance unlearning efficiency by
removing redundant edges that hinder forgetting while preserving fairness
through targeted edge augmentation. Additionally, we introduce a worst-case
evaluation mechanism to assess the reliability of fair unlearning performance.
Extensive experiments on real-world datasets demonstrate the effectiveness of
the proposed approach in achieving superior unlearning outcomes.
|
2503.18210 | Nitish Dashora | Nitish Dashora, Dibya Ghosh, Sergey Levine | ViVa: Video-Trained Value Functions for Guiding Online RL from Diverse
Data | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online reinforcement learning (RL) with sparse rewards poses a challenge
partly because of the lack of feedback on states leading to the goal.
Furthermore, expert offline data with reward signal is rarely available to
provide this feedback and bootstrap online learning. How can we guide online
agents to the right solution without this on-task data? Reward shaping offers a
solution by providing fine-grained signal to nudge the policy towards the
optimal solution. However, reward shaping often requires domain knowledge to
hand-engineer heuristics for a specific goal. To enable more general and
inexpensive guidance, we propose and analyze a data-driven methodology that
automatically guides RL by learning from widely available video data such as
Internet recordings, off-task demonstrations, task failures, and undirected
environment interaction. By learning a model of optimal goal-conditioned value
from diverse passive data, we open the floor to scaling up and using various
data sources to model general goal-reaching behaviors relevant to guiding
online RL. Specifically, we use intent-conditioned value functions to learn
from diverse videos and incorporate these goal-conditioned values into the
reward. Our experiments show that video-trained value functions work well with
a variety of data sources, exhibit positive transfer from human video
pre-training, can generalize to unseen goals, and scale with dataset size.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 21:24:33 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Dashora",
"Nitish",
""
],
[
"Ghosh",
"Dibya",
""
],
[
"Levine",
"Sergey",
""
]
] | TITLE: ViVa: Video-Trained Value Functions for Guiding Online RL from Diverse
Data
ABSTRACT: Online reinforcement learning (RL) with sparse rewards poses a challenge
partly because of the lack of feedback on states leading to the goal.
Furthermore, expert offline data with reward signal is rarely available to
provide this feedback and bootstrap online learning. How can we guide online
agents to the right solution without this on-task data? Reward shaping offers a
solution by providing fine-grained signal to nudge the policy towards the
optimal solution. However, reward shaping often requires domain knowledge to
hand-engineer heuristics for a specific goal. To enable more general and
inexpensive guidance, we propose and analyze a data-driven methodology that
automatically guides RL by learning from widely available video data such as
Internet recordings, off-task demonstrations, task failures, and undirected
environment interaction. By learning a model of optimal goal-conditioned value
from diverse passive data, we open the floor to scaling up and using various
data sources to model general goal-reaching behaviors relevant to guiding
online RL. Specifically, we use intent-conditioned value functions to learn
from diverse videos and incorporate these goal-conditioned values into the
reward. Our experiments show that video-trained value functions work well with
a variety of data sources, exhibit positive transfer from human video
pre-training, can generalize to unseen goals, and scale with dataset size.
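One way to picture incorporating a learned goal-conditioned value into the reward is potential-based shaping; the sketch below uses a placeholder network and weighting rather than the paper's intent-conditioned value function.

```python
# Sketch: potential-based reward shaping with a goal-conditioned value
# network V(s, g). Architecture and weighting are placeholders.
import torch
import torch.nn as nn

class GoalValue(nn.Module):
    def __init__(self, s_dim, g_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(s_dim + g_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, s, g):
        return self.net(torch.cat([s, g], dim=-1)).squeeze(-1)

def shaped_reward(V, s, s_next, g, r_env, lam=1.0, gamma=0.99):
    # potential-based shaping leaves the optimal policy unchanged
    with torch.no_grad():
        shaping = gamma * V(s_next, g) - V(s, g)
    return r_env + lam * shaping

V = GoalValue(s_dim=4, g_dim=4)
s, s_next, g = torch.randn(4), torch.randn(4), torch.randn(4)
print(shaped_reward(V, s, s_next, g, r_env=0.0).item())
```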
|
2503.18213 | Delower Hossain | Delower Hossain, Jake Y Chen | A Study on Neuro-Symbolic Artificial Intelligence: Healthcare
Perspectives | 18 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Over the last few decades, Artificial Intelligence (AI) scientists have been
conducting investigations to attain human-level performance by a machine in
accomplishing a cognitive task. Within machine learning, the ultimate
aspiration is to attain Artificial General Intelligence (AGI) through a
machine. This pursuit has led to the exploration of two distinct AI paradigms.
Symbolic AI, also known as classical or GOFAI (Good Old-Fashioned AI), and
Connectionist (Sub-symbolic) AI, represented by Neural Systems, are two
mutually exclusive paradigms. Symbolic AI excels in reasoning, explainability,
and knowledge representation but faces challenges in processing complex
real-world data with noise. Conversely, deep learning (Black-Box systems)
research breakthroughs in neural networks are notable, yet they lack reasoning
and interpretability. Neuro-symbolic AI (NeSy), an emerging area of AI
research, attempts to bridge this gap by integrating logical reasoning into
neural networks, enabling them to learn and reason with symbolic
representations. While a long path remains, this strategy has made significant progress
towards achieving common sense reasoning by systems. This article conducts an
extensive review of over 977 studies from prominent scientific databases (DBLP,
ACL, IEEE Xplore, Scopus, PubMed, ICML, ICLR), thoroughly examining the
multifaceted capabilities of Neuro-Symbolic AI, with a particular focus on its
healthcare applications, particularly in drug discovery and protein
engineering research. The survey addresses vital themes, including reasoning,
explainability, integration strategies, 41 healthcare-related use cases,
benchmarking, datasets, current approach limitations from both healthcare and
broader perspectives, and proposed novel approaches for future experiments.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 21:33:38 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hossain",
"Delower",
""
],
[
"Chen",
"Jake Y",
""
]
] | TITLE: A Study on Neuro-Symbolic Artificial Intelligence: Healthcare
Perspectives
ABSTRACT: Over the last few decades, Artificial Intelligence (AI) scientists have been
conducting investigations to attain human-level performance by a machine in
accomplishing a cognitive task. Within machine learning, the ultimate
aspiration is to attain Artificial General Intelligence (AGI) through a
machine. This pursuit has led to the exploration of two distinct AI paradigms.
Symbolic AI, also known as classical or GOFAI (Good Old-Fashioned AI), and
Connectionist (Sub-symbolic) AI, represented by Neural Systems, are two
mutually exclusive paradigms. Symbolic AI excels in reasoning, explainability,
and knowledge representation but faces challenges in processing complex
real-world data with noise. Conversely, deep learning (Black-Box systems)
research breakthroughs in neural networks are notable, yet they lack reasoning
and interpretability. Neuro-symbolic AI (NeSy), an emerging area of AI
research, attempts to bridge this gap by integrating logical reasoning into
neural networks, enabling them to learn and reason with symbolic
representations. While a long path remains, this strategy has made significant progress
towards achieving common sense reasoning by systems. This article conducts an
extensive review of over 977 studies from prominent scientific databases (DBLP,
ACL, IEEE Xplore, Scopus, PubMed, ICML, ICLR), thoroughly examining the
multifaceted capabilities of Neuro-Symbolic AI, with a particular focus on its
healthcare applications, particularly in drug discovery and protein
engineering research. The survey addresses vital themes, including reasoning,
explainability, integration strategies, 41 healthcare-related use cases,
benchmarking, datasets, current approach limitations from both healthcare and
broader perspectives, and proposed novel approaches for future experiments.
|
2503.18223 | Alexander Mathis | Valentin Gabeff and Haozhe Qi and Brendan Flaherty and Gencer Sumb\"ul
and Alexander Mathis and Devis Tuia | MammAlps: A multi-view video behavior monitoring dataset of wild mammals
in the Swiss Alps | CVPR 2025; Benchmark and code at:
https://github.com/eceo-epfl/MammAlps | null | null | null | cs.CV cs.IR q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monitoring wildlife is essential for ecology and ethology, especially in
light of the increasing human impact on ecosystems. Camera traps have emerged
as habitat-centric sensors enabling the study of wildlife populations at scale
with minimal disturbance. However, the lack of annotated video datasets limits
the development of powerful video understanding models needed to process the
vast amount of fieldwork data collected. To advance research in wild animal
behavior monitoring, we present MammAlps, a multimodal and multi-view dataset of
wildlife behavior monitoring from 9 camera-traps in the Swiss National Park.
MammAlps contains over 14 hours of video with audio, 2D segmentation maps and
8.5 hours of individual tracks densely labeled for species and behavior. Based
on 6135 single animal clips, we propose the first hierarchical and multimodal
animal behavior recognition benchmark using audio, video and reference scene
segmentation maps as inputs. Furthermore, we also propose a second
ecology-oriented benchmark aiming at identifying activities, species, number of
individuals and meteorological conditions from 397 multi-view and long-term
ecological events, including false positive triggers. We advocate that both
tasks are complementary and contribute to bridging the gap between machine
learning and ecology. Code and data are available at:
https://github.com/eceo-epfl/MammAlps
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 21:51:58 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Gabeff",
"Valentin",
""
],
[
"Qi",
"Haozhe",
""
],
[
"Flaherty",
"Brendan",
""
],
[
"Sumbül",
"Gencer",
""
],
[
"Mathis",
"Alexander",
""
],
[
"Tuia",
"Devis",
""
]
] | TITLE: MammAlps: A multi-view video behavior monitoring dataset of wild mammals
in the Swiss Alps
ABSTRACT: Monitoring wildlife is essential for ecology and ethology, especially in
light of the increasing human impact on ecosystems. Camera traps have emerged
as habitat-centric sensors enabling the study of wildlife populations at scale
with minimal disturbance. However, the lack of annotated video datasets limits
the development of powerful video understanding models needed to process the
vast amount of fieldwork data collected. To advance research in wild animal
behavior monitoring, we present MammAlps, a multimodal and multi-view dataset of
wildlife behavior monitoring from 9 camera-traps in the Swiss National Park.
MammAlps contains over 14 hours of video with audio, 2D segmentation maps and
8.5 hours of individual tracks densely labeled for species and behavior. Based
on 6135 single animal clips, we propose the first hierarchical and multimodal
animal behavior recognition benchmark using audio, video and reference scene
segmentation maps as inputs. Furthermore, we also propose a second
ecology-oriented benchmark aiming at identifying activities, species, number of
individuals and meteorological conditions from 397 multi-view and long-term
ecological events, including false positive triggers. We advocate that both
tasks are complementary and contribute to bridging the gap between machine
learning and ecology. Code and data are available at:
https://github.com/eceo-epfl/MammAlps
|
2503.18224 | Hamzah I Khan | Shubhankar Agarwal, Hamzah I. Khan, Sandeep P. Chinchali, David
Fridovich-Keil | A Framework for Finding Local Saddle Points in Two-Player Zero-Sum
Black-Box Games | null | null | null | null | cs.LG cs.GT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Saddle point optimization is a critical problem employed in numerous
real-world applications, including portfolio optimization, generative
adversarial networks, and robotics. It has been extensively studied in cases
where the objective function is known and differentiable. Existing work in
black-box settings with unknown objectives that can only be sampled either
assumes convexity-concavity in the objective to simplify the problem or
operates with noisy gradient estimators. In contrast, we introduce a framework
inspired by Bayesian optimization which utilizes Gaussian processes to model
the unknown (potentially nonconvex-nonconcave) objective and requires only
zeroth-order samples. Our approach frames the saddle point optimization problem
as a two-level process which can flexibly integrate existing and novel
approaches to this problem. The upper level of our framework produces a model
of the objective function by sampling in promising locations, and the lower
level of our framework uses the existing model to frame and solve a general-sum
game to identify locations to sample. This lower level procedure can be
designed in complementary ways, and we demonstrate the flexibility of our
approach by introducing variants which appropriately trade off between factors
like runtime, the cost of function evaluations, and the number of available
initial samples. We experimentally demonstrate these algorithms on synthetic
and realistic datasets in black-box nonconvex-nonconcave settings, showcasing
their ability to efficiently locate local saddle points in these contexts.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 21:57:45 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Agarwal",
"Shubhankar",
""
],
[
"Khan",
"Hamzah I.",
""
],
[
"Chinchali",
"Sandeep P.",
""
],
[
"Fridovich-Keil",
"David",
""
]
] | TITLE: A Framework for Finding Local Saddle Points in Two-Player Zero-Sum
Black-Box Games
ABSTRACT: Saddle point optimization is a critical problem employed in numerous
real-world applications, including portfolio optimization, generative
adversarial networks, and robotics. It has been extensively studied in cases
where the objective function is known and differentiable. Existing work in
black-box settings with unknown objectives that can only be sampled either
assumes convexity-concavity in the objective to simplify the problem or
operates with noisy gradient estimators. In contrast, we introduce a framework
inspired by Bayesian optimization which utilizes Gaussian processes to model
the unknown (potentially nonconvex-nonconcave) objective and requires only
zeroth-order samples. Our approach frames the saddle point optimization problem
as a two-level process which can flexibly integrate existing and novel
approaches to this problem. The upper level of our framework produces a model
of the objective function by sampling in promising locations, and the lower
level of our framework uses the existing model to frame and solve a general-sum
game to identify locations to sample. This lower level procedure can be
designed in complementary ways, and we demonstrate the flexibility of our
approach by introducing variants which appropriately trade off between factors
like runtime, the cost of function evaluations, and the number of available
initial samples. We experimentally demonstrate these algorithms on synthetic
and realistic datasets in black-box nonconvex-nonconcave settings, showcasing
their ability to efficiently locate local saddle points in these contexts.
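A compressed sketch of the two-level idea follows: fit a Gaussian process surrogate to zeroth-order samples, then search the surrogate for a point that minimizes over one variable and maximizes over the other. The grid search stands in for the framework's lower-level game solver, and the toy objective has a known saddle at the origin.

```python
# Sketch: GP surrogate over zeroth-order samples, then a min-max grid
# search on the surrogate mean to locate an approximate saddle point.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x, y: x**2 - y**2          # toy objective with saddle at (0, 0)

# upper level: sample the black box and fit the surrogate
X = rng.uniform(-1, 1, size=(60, 2))
z = f(X[:, 0], X[:, 1])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, z)

# lower level (stand-in): min over x of max over y on the GP mean
grid = np.linspace(-1, 1, 41)
mean = gp.predict(np.array([[x, y] for x in grid for y in grid]))
mean = mean.reshape(len(grid), len(grid))     # mean[i, j] = surrogate(x_i, y_j)
ix = np.argmin(mean.max(axis=1))              # x minimizing the worst case over y
iy = np.argmax(mean[ix])                      # best y response at that x
print("approximate saddle:", (round(grid[ix], 2), round(grid[iy], 2)))
```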
|
2503.18235 | Yilong Wang | Yilong Wang, Jiahao Zhang, Tianxiang Zhao, Suhang Wang | Enhance GNNs with Reliable Confidence Estimation via Adversarial
Calibration Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite their impressive predictive performance, GNNs often exhibit poor
confidence calibration, i.e., their predicted confidence scores do not
accurately reflect true correctness likelihood. This issue raises concerns
about their reliability in high-stakes domains such as fraud detection and
risk assessment, where well-calibrated predictions are essential for
decision-making. To ensure trustworthy predictions, several GNN calibration
methods have been proposed. Though they can improve global calibration, our
experiments reveal that they often fail to generalize across different node
groups, leading to inaccurate confidence in node groups with different degree
levels, classes, and local structures. In certain cases, they even degrade
calibration compared to the original uncalibrated GNN. To address this
challenge, we propose a novel AdvCali framework that adaptively enhances
calibration across different node groups. Our method leverages adversarial
training to automatically identify miscalibrated node groups and applies a
differentiable Group Expected Calibration Error (ECE) loss term to refine
confidence estimation within these groups. This allows the model to dynamically
adjust its calibration strategy without relying on dataset-specific prior
knowledge about miscalibrated subgroups. Extensive experiments on real-world
datasets demonstrate that our approach not only improves global calibration but
also significantly enhances calibration within groups defined by feature
similarity, topology, and connectivity, outperforming previous methods and
demonstrating its effectiveness in practical scenarios.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 23:04:41 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wang",
"Yilong",
""
],
[
"Zhang",
"Jiahao",
""
],
[
"Zhao",
"Tianxiang",
""
],
[
"Wang",
"Suhang",
""
]
] | TITLE: Enhance GNNs with Reliable Confidence Estimation via Adversarial
Calibration Learning
ABSTRACT: Despite their impressive predictive performance, GNNs often exhibit poor
confidence calibration, i.e., their predicted confidence scores do not
accurately reflect true correctness likelihood. This issue raises concerns
about their reliability in high-stakes domains such as fraud detection and
risk assessment, where well-calibrated predictions are essential for
decision-making. To ensure trustworthy predictions, several GNN calibration
methods have been proposed. Though they can improve global calibration, our
experiments reveal that they often fail to generalize across different node
groups, leading to inaccurate confidence in node groups with different degree
levels, classes, and local structures. In certain cases, they even degrade
calibration compared to the original uncalibrated GNN. To address this
challenge, we propose a novel AdvCali framework that adaptively enhances
calibration across different node groups. Our method leverages adversarial
training to automatically identify miscalibrated node groups and applies a
differentiable Group Expected Calibration Error (ECE) loss term to refine
confidence estimation within these groups. This allows the model to dynamically
adjust its calibration strategy without relying on dataset-specific prior
knowledge about miscalibrated subgroups. Extensive experiments on real-world
datasets demonstrate that our approach not only improves global calibration but
also significantly enhances calibration within groups defined by feature
similarity, topology, and connectivity, outperforming previous methods and
demonstrating its effectiveness in practical scenarios.
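As a hedged illustration of a differentiable group-wise calibration penalty in the spirit of a Group ECE term, one can penalize the gap between mean confidence and mean accuracy within each node group; AdvCali's exact loss may differ.

```python
# Sketch: group-wise calibration penalty, weighted by group size.
import torch
import torch.nn.functional as F

def group_calibration_loss(logits, labels, group_ids):
    probs = F.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)
    correct = (pred == labels).float()   # not differentiable w.r.t. pred;
                                         # gradients flow through conf
    loss = logits.new_zeros(())
    for g in group_ids.unique():
        mask = group_ids == g
        gap = conf[mask].mean() - correct[mask].mean()
        loss = loss + mask.float().mean() * gap.abs()
    return loss

logits = torch.randn(100, 5, requires_grad=True)
labels = torch.randint(0, 5, (100,))
groups = torch.randint(0, 3, (100,))
print(group_calibration_loss(logits, labels, groups))
```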
|
2503.18242 | Daniel Lee | Aneesh Vathul, Daniel Lee, Sheryl Chen, and Arthi Tasmia | ShED-HD: A Shannon Entropy Distribution Framework for Lightweight
Hallucination Detection on Edge Devices | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Large Language Models (LLMs) have demonstrated impressive capabilities on a
broad array of NLP tasks, but their tendency to produce
hallucinations$\unicode{x2013}$plausible-sounding but factually incorrect
content$\unicode{x2013}$poses severe challenges in high-stakes domains.
Existing hallucination detection methods either bear the computational cost of
multiple inference passes or sacrifice accuracy for efficiency with single-pass
approaches, neither of which is ideal in resource-constrained environments such
as edge devices. We propose the Shannon Entropy Distribution Hallucination
Detector (ShED-HD), a novel hallucination detection framework that bridges this
gap by classifying sequence-level entropy patterns using a lightweight BiLSTM
architecture with single-headed attention. In contrast to prior approaches,
ShED-HD efficiently detects distinctive uncertainty patterns across entire
output sequences, preserving contextual awareness. Through in-depth evaluation
on three datasets (BioASQ, TriviaQA, and Jeopardy Questions), we show that
ShED-HD significantly outperforms other computationally efficient approaches in
the out-of-distribution setting, while achieving comparable performance in the
in-distribution setting. ShED-HD facilitates hallucination detection that is
low-cost, accurate, and generalizable, improving the credibility of content
generated by LLMs in resource-constrained environments where trustworthy AI
functionality is crucial.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 23:47:26 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Vathul",
"Aneesh",
""
],
[
"Lee",
"Daniel",
""
],
[
"Chen",
"Sheryl",
""
],
[
"Tasmia",
"Arthi",
""
]
] | TITLE: ShED-HD: A Shannon Entropy Distribution Framework for Lightweight
Hallucination Detection on Edge Devices
ABSTRACT: Large Language Models (LLMs) have demonstrated impressive capabilities on a
broad array of NLP tasks, but their tendency to produce
hallucinations$\unicode{x2013}$plausible-sounding but factually incorrect
content$\unicode{x2013}$poses severe challenges in high-stakes domains.
Existing hallucination detection methods either bear the computational cost of
multiple inference passes or sacrifice accuracy for efficiency with single-pass
approaches, neither of which is ideal in resource-constrained environments such
as edge devices. We propose the Shannon Entropy Distribution Hallucination
Detector (ShED-HD), a novel hallucination detection framework that bridges this
gap by classifying sequence-level entropy patterns using a lightweight BiLSTM
architecture with single-headed attention. In contrast to prior approaches,
ShED-HD efficiently detects distinctive uncertainty patterns across entire
output sequences, preserving contextual awareness. Through in-depth evaluation
on three datasets (BioASQ, TriviaQA, and Jeopardy Questions), we show that
ShED-HD significantly outperforms other computationally efficient approaches in
the out-of-distribution setting, while achieving comparable performance in the
in-distribution setting. ShED-HD facilitates hallucination detection that is
low-cost, accurate, and generalizable, improving the credibility of content
generated by LLMs in resource-constrained environments where trustworthy AI
functionality is crucial.
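The core pipeline can be sketched as follows: per-token Shannon entropies computed from an LLM's output distributions form a sequence, which a small BiLSTM with a single attention head classifies. Dimensions and the random logits here are placeholders.

```python
# Sketch: sequence of token entropies -> BiLSTM with single-headed
# attention -> binary hallucination prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

def token_entropies(logits):                 # logits: [seq, vocab]
    logp = F.log_softmax(logits, dim=-1)
    return -(logp.exp() * logp).sum(dim=-1)  # Shannon entropy per token

class EntropyClassifier(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)  # single-headed attention
        self.out = nn.Linear(2 * hidden, 2)
    def forward(self, ent):                   # ent: [batch, seq]
        h, _ = self.lstm(ent.unsqueeze(-1))
        w = F.softmax(self.attn(h), dim=1)    # attention weights over time
        ctx = (w * h).sum(dim=1)
        return self.out(ctx)

logits = torch.randn(20, 32000)               # fake LLM logits for one answer
ent = token_entropies(logits).unsqueeze(0)
clf = EntropyClassifier()
print(clf(ent).softmax(-1))                    # [P(faithful), P(hallucinated)]
```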
|
2503.18245 | Wei Huang | Wei Huang, Hanchen Wang, Dong Wen, Wenjie Zhang, Ying Zhang, Xuemin
Lin | DiffGED: Computing Graph Edit Distance via Diffusion-based Graph
Matching | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | The Graph Edit Distance (GED) problem, which aims to compute the minimum
number of edit operations required to transform one graph into another, is a
fundamental challenge in graph analysis with wide-ranging applications.
However, due to its NP-hard nature, traditional A* approaches often suffer from
scalability issues, making them computationally intractable for large graphs.
Many recent deep learning frameworks address GED by formulating it as a
regression task, which, while efficient, fails to recover the edit path -- a
central interest in GED. Furthermore, recent hybrid approaches that combine
deep learning with traditional methods to recover the edit path often yield
poor solution quality. These methods also struggle to generate candidate
solutions in parallel, resulting in increased running times. In this paper, we
present a novel approach, DiffGED, that leverages a generative diffusion model to
solve GED and recover the corresponding edit path. Specifically, we first
generate multiple diverse node matching matrices in parallel through a
diffusion-based graph matching model. Next, node mappings are extracted from
each generated matching matrix in parallel, and each extracted node mapping
can be simply transformed into an edit path. Benefiting from the generative
diversity provided by the diffusion model, DiffGED is less likely to fall into
local sub-optimal solutions, thereby achieving superior overall solution
quality close to the exact solution. Experimental results on real-world
datasets demonstrate that DiffGED can generate multiple diverse edit paths with
exceptionally high accuracy comparable to exact solutions while maintaining a
running time shorter than most hybrid approaches.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 00:03:16 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Huang",
"Wei",
""
],
[
"Wang",
"Hanchen",
""
],
[
"Wen",
"Dong",
""
],
[
"Zhang",
"Wenjie",
""
],
[
"Zhang",
"Ying",
""
],
[
"Lin",
"Xuemin",
""
]
] | TITLE: DiffGED: Computing Graph Edit Distance via Diffusion-based Graph
Matching
ABSTRACT: The Graph Edit Distance (GED) problem, which aims to compute the minimum
number of edit operations required to transform one graph into another, is a
fundamental challenge in graph analysis with wide-ranging applications.
However, due to its NP-hard nature, traditional A* approaches often suffer from
scalability issues, making them computationally intractable for large graphs.
Many recent deep learning frameworks address GED by formulating it as a
regression task, which, while efficient, fails to recover the edit path -- a
central interest in GED. Furthermore, recent hybrid approaches that combine
deep learning with traditional methods to recover the edit path often yield
poor solution quality. These methods also struggle to generate candidate
solutions in parallel, resulting in increased running times. In this paper, we
present a novel approach, DiffGED, that leverages a generative diffusion model to
solve GED and recover the corresponding edit path. Specifically, we first
generate multiple diverse node matching matrices in parallel through a
diffusion-based graph matching model. Next, node mappings are extracted from
each generated matching matrix in parallel, and each extracted node mapping
can be simply transformed into an edit path. Benefiting from the generative
diversity provided by the diffusion model, DiffGED is less likely to fall into
local sub-optimal solutions, thereby achieving superior overall solution
quality close to the exact solution. Experimental results on real-world
datasets demonstrate that DiffGED can generate multiple diverse edit paths with
exceptionally high accuracy comparable to exact solutions while maintaining a
running time shorter than most hybrid approaches.
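One concrete post-processing step mentioned above, extracting a discrete node mapping from a soft matching matrix, can be done with the Hungarian algorithm; the matrix here is random rather than diffusion-generated.

```python
# Sketch: turn a soft node matching matrix into a discrete node mapping.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
M = rng.random((5, 5))                  # soft matching scores, nodes of G1 x G2

rows, cols = linear_sum_assignment(-M)  # negate to maximize total score
mapping = dict(zip(rows.tolist(), cols.tolist()))
print(mapping)                          # node i in G1 -> mapping[i] in G2
```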
|
2503.18246 | Feiran Wang | Feiran Wang and Bin Duan and Jiachen Tao and Nikhil Sharma and Dawen
Cai and Yan Yan | ZECO: ZeroFusion Guided 3D MRI Conditional Generation | Project page: \url{https://brack-wang.github.io/ZECO_web/}; Github
Code: \url{https://github.com/Brack-Wang/ZECO} | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical image segmentation is crucial for enhancing diagnostic accuracy and
treatment planning in Magnetic Resonance Imaging (MRI). However, acquiring
precise lesion masks for segmentation model training demands specialized
expertise and significant time investment, leading to a small dataset scale in
clinical practice. In this paper, we present ZECO, a ZeroFusion guided 3D MRI
conditional generation framework that extracts, compresses, and generates
high-fidelity MRI images with corresponding 3D segmentation masks to mitigate
data scarcity. To effectively capture inter-slice relationships within volumes,
we introduce a Spatial Transformation Module that encodes MRI images into a
compact latent space for the diffusion process. Moving beyond unconditional
generation, our novel ZeroFusion method progressively maps 3D masks to MRI
images in latent space, enabling robust training on limited datasets while
avoiding overfitting. ZECO outperforms state-of-the-art models in both
quantitative and qualitative evaluations on Brain MRI datasets across various
modalities, showcasing its exceptional capability in synthesizing high-quality
MRI images conditioned on segmentation masks.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 00:04:52 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wang",
"Feiran",
""
],
[
"Duan",
"Bin",
""
],
[
"Tao",
"Jiachen",
""
],
[
"Sharma",
"Nikhil",
""
],
[
"Cai",
"Dawen",
""
],
[
"Yan",
"Yan",
""
]
] | TITLE: ZECO: ZeroFusion Guided 3D MRI Conditional Generation
ABSTRACT: Medical image segmentation is crucial for enhancing diagnostic accuracy and
treatment planning in Magnetic Resonance Imaging (MRI). However, acquiring
precise lesion masks for segmentation model training demands specialized
expertise and significant time investment, leading to a small dataset scale in
clinical practice. In this paper, we present ZECO, a ZeroFusion guided 3D MRI
conditional generation framework that extracts, compresses, and generates
high-fidelity MRI images with corresponding 3D segmentation masks to mitigate
data scarcity. To effectively capture inter-slice relationships within volumes,
we introduce a Spatial Transformation Module that encodes MRI images into a
compact latent space for the diffusion process. Moving beyond unconditional
generation, our novel ZeroFusion method progressively maps 3D masks to MRI
images in latent space, enabling robust training on limited datasets while
avoiding overfitting. ZECO outperforms state-of-the-art models in both
quantitative and qualitative evaluations on Brain MRI datasets across various
modalities, showcasing its exceptional capability in synthesizing high-quality
MRI images conditioned on segmentation masks.
|
2503.18247 | Tadesse Destaw Belay | Tadesse Destaw Belay, Israel Abebe Azime, Ibrahim Said Ahmad, Idris
Abdulmumin, Abinew Ali Ayele, Shamsuddeen Hassan Muhammad, Seid Muhie Yimam | AfroXLMR-Social: Adapting Pre-trained Language Models for African
Languages Social Media Text | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Pretrained Language Models (PLMs) built from various sources are the
foundation of today's NLP progress. Language representations learned by such
models achieve strong performance across many tasks with datasets of varying
sizes drawn from various sources. We present a thorough analysis of domain and
task adaptive continual pretraining approaches for low-resource African
languages, with promising results on the evaluated tasks. We create
AfriSocial, a corpus designed for domain adaptive finetuning that passes
through quality pre-processing steps. Continually pretraining PLMs using
AfriSocial as domain adaptive pretraining (DAPT) data consistently improves
performance on the fine-grained emotion classification task of 16 targeted
languages by 1% to 28.27% macro F1 score. Likewise, using the task adaptive
pretraining (TAPT) approach, further finetuning with small unlabeled but similar
task data shows promising results. For example, unlabeled sentiment data
(source) for fine-grained emotion classification task (target) improves the
base model results by an F1 score ranging from 0.55% to 15.11%. Combining the
two methods, DAPT + TAPT, also achieves better results than the base models. All
the resources will be available to improve low-resource NLP tasks, generally,
as well as other similar domain tasks such as hate speech and sentiment tasks.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 00:06:33 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Belay",
"Tadesse Destaw",
""
],
[
"Azime",
"Israel Abebe",
""
],
[
"Ahmad",
"Ibrahim Said",
""
],
[
"Abdulmumin",
"Idris",
""
],
[
"Ayele",
"Abinew Ali",
""
],
[
"Muhammad",
"Shamsuddeen Hassan",
""
],
[
"Yimam",
"Seid Muhie",
""
]
] | TITLE: AfroXLMR-Social: Adapting Pre-trained Language Models for African
Languages Social Media Text
ABSTRACT: Pretrained Language Models (PLMs) built from various sources are the
foundation of today's NLP progress. Language representations learned by such
models achieve strong performance across many tasks with datasets of varying
sizes drawn from various sources. We present a thorough analysis of domain and
task adaptive continual pretraining approaches for low-resource African
languages, with promising results on the evaluated tasks. We create
AfriSocial, a corpus designed for domain adaptive finetuning that passes
through quality pre-processing steps. Continually pretraining PLMs using
AfriSocial as domain adaptive pretraining (DAPT) data consistently improves
performance on the fine-grained emotion classification task of 16 targeted
languages by 1% to 28.27% macro F1 score. Likewise, using the task adaptive
pretraining (TAPT) approach, further finetuning with small unlabeled but similar
task data shows promising results. For example, unlabeled sentiment data
(source) for fine-grained emotion classification task (target) improves the
base model results by an F1 score ranging from 0.55% to 15.11%. Combining the
two methods, DAPT + TAPT, also achieves better results than the base models. All
the resources will be available to improve low-resource NLP tasks, generally,
as well as other similar domain tasks such as hate speech and sentiment tasks.
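A hedged sketch of the DAPT step, continuing masked-language-model training on in-domain text before task finetuning, is given below; the checkpoint name, the two-line stand-in for AfriSocial, and all hyperparameters are assumptions for illustration.

```python
# Sketch: domain adaptive pretraining (DAPT) via continued MLM training.
# Checkpoint, corpus, and hyperparameters are placeholders.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "Davlan/afro-xlmr-base"     # assumed base checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

corpus = ["line of in-domain social media text ...",
          "another line ..."]            # stands in for AfriSocial
enc = [tok(t, truncation=True, max_length=128) for t in corpus]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dapt-ckpt",
                           per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=enc,
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()                          # the DAPT checkpoint is then finetuned
```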
|
2503.18249 | Anseong Park | Anseong Park, Jaeyune Ryu, and Won Bo Lee | Ionic Liquid Molecular Dynamics Simulation with Machine Learning Force
Fields: DPMD and MACE | null | null | null | null | physics.chem-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Machine learning force fields (MLFFs) are gaining attention as an alternative
to classical force fields (FFs) by using deep learning models trained on
density functional theory (DFT) data to improve interatomic potential accuracy.
In this study, we develop and apply MLFFs for ionic liquids (ILs), specifically
PYR14BF4 and LiTFSI/PYR14TFSI, using two different MLFF frameworks: DeePMD
(DPMD) and MACE. We find that high-quality training datasets are crucial,
especially including both equilibrated (EQ) and non-equilibrated (nEQ)
structures, to build reliable MLFFs. Both DPMD and MACE MLFFs show good
accuracy in force and energy predictions, but MACE performs better in
predicting IL density and diffusion. We also analyze molecular configurations
from our trained MACE MLFF and notice differences compared to pre-trained MACE
models like MPA-0 and OMAT-0. Our results suggest that careful dataset
preparation and fine-tuning are necessary to obtain reliable MLFF-based MD
simulations for ILs.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 00:20:41 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Park",
"Anseong",
""
],
[
"Ryu",
"Jaeyune",
""
],
[
"Lee",
"Won Bo",
""
]
] | TITLE: Ionic Liquid Molecular Dynamics Simulation with Machine Learning Force
Fields: DPMD and MACE
ABSTRACT: Machine learning force fields (MLFFs) are gaining attention as an alternative
to classical force fields (FFs) by using deep learning models trained on
density functional theory (DFT) data to improve interatomic potential accuracy.
In this study, we develop and apply MLFFs for ionic liquids (ILs), specifically
PYR14BF4 and LiTFSI/PYR14TFSI, using two different MLFF frameworks: DeePMD
(DPMD) and MACE. We find that high-quality training datasets are crucial,
especially including both equilibrated (EQ) and non-equilibrated (nEQ)
structures, to build reliable MLFFs. Both DPMD and MACE MLFFs show good
accuracy in force and energy predictions, but MACE performs better in
predicting IL density and diffusion. We also analyze molecular configurations
from our trained MACE MLFF and notice differences compared to pre-trained MACE
models like MPA-0 and OMAT-0. Our results suggest that careful dataset
preparation and fine-tuning are necessary to obtain reliable MLFF-based MD
simulations for ILs.
|
2503.18251 | S. VenkataKeerthy | Kuldeep Gautam, S. VenkataKeerthy, Ramakrishna Upadrasta | COFO: COdeFOrces dataset for Program Classification, Recognition and
Tagging | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | In recent years, numerous technological advances in computer science have
helped software programmers create innovative, real-time, user-friendly
software. With the growth of software development and people's increasing
interest in learning to write software, a large collection of source code,
also known as Big Code, can now be found on the web and used as a source
of data for machine learning applications that aim to solve certain
software engineering problems. In this paper, we present COFO, a dataset
consisting of 809 classes/problems with a total of 369K source codes written in
C, C++, Java, and Python programming languages, along with other metadata such
as code tags, problem specification, and input-output specifications. COFO has
been scraped from the openly available Codeforces website using a
selenium-beautifulsoup-python based scraper. We envision that this dataset can
be useful for solving machine learning-based problems like program
classification/recognition, tagging, predicting program properties, and code
comprehension.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 00:29:43 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Gautam",
"Kuldeep",
""
],
[
"VenkataKeerthy",
"S.",
""
],
[
"Upadrasta",
"Ramakrishna",
""
]
] | TITLE: COFO: COdeFOrces dataset for Program Classification, Recognition and
Tagging
ABSTRACT: In recent years, numerous technological advances in computer science have
helped software programmers create innovative, real-time, user-friendly
software. With the growth of software development and people's increasing
interest in learning to write software, a large collection of source code,
also known as Big Code, can now be found on the web and used as a source
of data for machine learning applications that aim to solve certain
software engineering problems. In this paper, we present COFO, a dataset
consisting of 809 classes/problems with a total of 369K source codes written in
C, C++, Java, and Python programming languages, along with other metadata such
as code tags, problem specification, and input-output specifications. COFO has
been scraped from the openly available Codeforces website using a
selenium-beautifulsoup-python based scraper. We envision that this dataset can
be useful for solving machine learning-based problems like program
classification/recognition, tagging, predicting program properties, and code
comprehension.
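An illustrative fragment of the kind of scraping described (HTTP fetch plus BeautifulSoup on a Codeforces problem page) follows; the URL pattern is real, but the CSS selectors are assumptions about the page markup and may change.

```python
# Sketch: fetch a Codeforces problem page and read its title and tags.
# Selectors (".tag-box", ".problem-statement .title") are assumptions.
import requests
from bs4 import BeautifulSoup

url = "https://codeforces.com/problemset/problem/1/A"
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

tags = [t.get_text(strip=True) for t in soup.select(".tag-box")]
title = soup.select_one(".problem-statement .title")
print(title.get_text(strip=True) if title else "title not found", tags)
```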
|
2503.18253 | Tadesse Destaw Belay | Tadesse Destaw Belay, Dawit Ketema Gete, Abinew Ali Ayele, Olga
Kolesnikova, Grigori Sidorov, Seid Muhie Yimam | Enhancing Multi-Label Emotion Analysis and Corresponding Intensities for
Ethiopian Languages | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In this digital world, people freely express their emotions using different
social media platforms. As a result, modeling and integrating
emotion-understanding models are vital for various human-computer interaction
tasks such as decision-making, product and customer feedback analysis,
political promotions, marketing research, and social media monitoring. As users
express different emotions simultaneously in a single instance, annotating
emotions in a multilabel setting such as the EthioEmo (Belay et al., 2025)
dataset effectively captures this dynamic. Additionally, incorporating
intensity, or the degree of emotion, is crucial, as emotions can significantly
differ in their expressive strength and impact. This intensity is significant
for assessing whether further action is necessary in decision-making processes,
especially concerning negative emotions in applications such as healthcare and
mental health studies. To enhance the EthioEmo dataset, we include annotations
for the intensity of each labeled emotion. Furthermore, we evaluate various
state-of-the-art encoder-only Pretrained Language Models (PLMs) and
decoder-only Large Language Models (LLMs) to provide comprehensive
benchmarking.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 00:34:36 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Belay",
"Tadesse Destaw",
""
],
[
"Gete",
"Dawit Ketema",
""
],
[
"Ayele",
"Abinew Ali",
""
],
[
"Kolesnikova",
"Olga",
""
],
[
"Sidorov",
"Grigori",
""
],
[
"Yimam",
"Seid Muhie",
""
]
] | TITLE: Enhancing Multi-Label Emotion Analysis and Corresponding Intensities for
Ethiopian Languages
ABSTRACT: In this digital world, people freely express their emotions using different
social media platforms. As a result, modeling and integrating
emotion-understanding models are vital for various human-computer interaction
tasks such as decision-making, product and customer feedback analysis,
political promotions, marketing research, and social media monitoring. As users
express different emotions simultaneously in a single instance, annotating
emotions in a multilabel setting such as the EthioEmo (Belay et al., 2025)
dataset effectively captures this dynamic. Additionally, incorporating
intensity, or the degree of emotion, is crucial, as emotions can significantly
differ in their expressive strength and impact. This intensity is significant
for assessing whether further action is necessary in decision-making processes,
especially concerning negative emotions in applications such as healthcare and
mental health studies. To enhance the EthioEmo dataset, we include annotations
for the intensity of each labeled emotion. Furthermore, we evaluate various
state-of-the-art encoder-only Pretrained Language Models (PLMs) and
decoder-only Large Language Models (LLMs) to provide comprehensive
benchmarking.
|
2503.18263 | Praveen Chopra Mr | Praveen Chopra, Himanshu Kumar, Sandeep Yadav | PNN: A Novel Progressive Neural Network for Fault Classification in
Rotating Machinery under Small Dataset Constraint | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Fault detection in rotating machinery is a complex task, particularly in
small and heterogeneous dataset scenarios. Variability in sensor placement,
machinery configurations, and structural differences further increase the
complexity of the problem. Conventional deep learning approaches often demand
large, homogeneous datasets, limiting their applicability in data-scarce
industrial environments. While transfer learning and few-shot learning have
shown potential, they are often constrained by the need for extensive
fault datasets. This research introduces a unified framework leveraging a novel
progressive neural network (PNN) architecture designed to address these
challenges. The PNN sequentially estimates the fixed-size refined features of
the higher order with the help of all previously estimated features and appends
them to the feature set. This fixed-size feature output at each layer controls
the complexity of the PNN and makes it suitable for effective learning from
small datasets. The framework's effectiveness is validated on eight datasets,
including six open-source datasets, one in-house fault simulator, and one
real-world industrial dataset. The PNN achieves state-of-the-art performance in
fault detection across varying dataset sizes and machinery types, highlighting
superior generalization and classification capabilities.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 01:12:23 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chopra",
"Praveen",
""
],
[
"Kumar",
"Himanshu",
""
],
[
"Yadav",
"Sandeep",
""
]
] | TITLE: PNN: A Novel Progressive Neural Network for Fault Classification in
Rotating Machinery under Small Dataset Constraint
ABSTRACT: Fault detection in rotating machinery is a complex task, particularly in
small and heterogeneous dataset scenarios. Variability in sensor placement,
machinery configurations, and structural differences further increase the
complexity of the problem. Conventional deep learning approaches often demand
large, homogeneous datasets, limiting their applicability in data-scarce
industrial environments. While transfer learning and few-shot learning have
shown potential, they are often constrained by the need for extensive
fault datasets. This research introduces a unified framework leveraging a novel
progressive neural network (PNN) architecture designed to address these
challenges. The PNN sequentially estimates the fixed-size refined features of
the higher order with the help of all previously estimated features and appends
them to the feature set. This fixed-size feature output at each layer controls
the complexity of the PNN and makes it suitable for effective learning from
small datasets. The framework's effectiveness is validated on eight datasets,
including six open-source datasets, one in-house fault simulator, and one
real-world industrial dataset. The PNN achieves state-of-the-art performance in
fault detection across varying dataset sizes and machinery types, highlighting
superior generalization and classification capabilities.
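The progressive feature-building idea, where each stage consumes the input plus all previously produced fixed-size features and appends one more fixed-size block, can be sketched as below; layer sizes and stage count are illustrative, not the paper's architecture.

```python
# Sketch: each stage sees the raw input plus all earlier fixed-size
# features and appends one more fixed-size feature block.
import torch
import torch.nn as nn

class ProgressiveNet(nn.Module):
    def __init__(self, in_dim, feat_dim=16, stages=3, n_classes=4):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim + s * feat_dim, 64), nn.ReLU(),
                          nn.Linear(64, feat_dim))
            for s in range(stages)
        )
        self.head = nn.Linear(in_dim + stages * feat_dim, n_classes)
    def forward(self, x):
        feats = x
        for stage in self.stages:
            feats = torch.cat([feats, stage(feats)], dim=-1)  # append block
        return self.head(feats)

net = ProgressiveNet(in_dim=32)
print(net(torch.randn(8, 32)).shape)   # torch.Size([8, 4])
```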
|
2503.18267 | Minh-Tuan Tran | Minh-Tuan Tran, Trung Le, Xuan-May Le, Thanh-Toan Do, Dinh Phung | Enhancing Dataset Distillation via Non-Critical Region Refinement | Accepted at CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Dataset distillation has become a popular method for compressing large
datasets into smaller, more efficient representations while preserving critical
information for model training. Data features are broadly categorized into two
types: instance-specific features, which capture unique, fine-grained details
of individual examples, and class-general features, which represent shared,
broad patterns across a class. However, previous approaches often struggle to
balance these features-some focus solely on class-general patterns, neglecting
finer instance details, while others prioritize instance-specific features,
overlooking the shared characteristics essential for class-level understanding.
In this paper, we introduce the Non-Critical Region Refinement Dataset
Distillation (NRR-DD) method, which preserves instance-specific details and
fine-grained regions in synthetic data while enriching non-critical regions
with class-general information. This approach enables models to leverage all
pixel information, capturing both feature types and enhancing overall
performance. Additionally, we present Distance-Based Representative (DBR)
knowledge transfer, which eliminates the need for soft labels in training by
relying on the distance between synthetic data predictions and one-hot encoded
labels. Experimental results show that NRR-DD achieves state-of-the-art
performance on both small- and large-scale datasets. Furthermore, by storing
only two distances per instance, our method delivers comparable results across
various settings. The code is available at
https://github.com/tmtuan1307/NRR-DD.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 01:20:22 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Tran",
"Minh-Tuan",
""
],
[
"Le",
"Trung",
""
],
[
"Le",
"Xuan-May",
""
],
[
"Do",
"Thanh-Toan",
""
],
[
"Phung",
"Dinh",
""
]
] | TITLE: Enhancing Dataset Distillation via Non-Critical Region Refinement
ABSTRACT: Dataset distillation has become a popular method for compressing large
datasets into smaller, more efficient representations while preserving critical
information for model training. Data features are broadly categorized into two
types: instance-specific features, which capture unique, fine-grained details
of individual examples, and class-general features, which represent shared,
broad patterns across a class. However, previous approaches often struggle to
balance these features: some focus solely on class-general patterns, neglecting
finer instance details, while others prioritize instance-specific features,
overlooking the shared characteristics essential for class-level understanding.
In this paper, we introduce the Non-Critical Region Refinement Dataset
Distillation (NRR-DD) method, which preserves instance-specific details and
fine-grained regions in synthetic data while enriching non-critical regions
with class-general information. This approach enables models to leverage all
pixel information, capturing both feature types and enhancing overall
performance. Additionally, we present Distance-Based Representative (DBR)
knowledge transfer, which eliminates the need for soft labels in training by
relying on the distance between synthetic data predictions and one-hot encoded
labels. Experimental results show that NRR-DD achieves state-of-the-art
performance on both small- and large-scale datasets. Furthermore, by storing
only two distances per instance, our method delivers comparable results across
various settings. The code is available at
https://github.com/tmtuan1307/NRR-DD.
|
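The Distance-Based Representative (DBR) transfer above replaces stored soft labels with distances between synthetic-data predictions and one-hot labels. A minimal sketch of computing one such distance (the Euclidean metric is an assumption; the abstract states two distances are stored per instance but does not define them):

```python
import numpy as np

def prediction_to_onehot_distance(probs, label):
    """Distance between a predicted class distribution and the one-hot label.
    Storing a scalar like this per instance replaces a full soft-label vector."""
    onehot = np.zeros_like(probs)
    onehot[label] = 1.0
    return np.linalg.norm(probs - onehot)  # Euclidean distance (an assumption)

probs = np.array([0.7, 0.2, 0.1])
print(prediction_to_onehot_distance(probs, label=0))  # ~0.374
```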
2503.18275 | Liu Xulang | Xulang Liu, Ning Tan | GI-SLAM: Gaussian-Inertial SLAM | 10 pages, 2 figures, 5 tables | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D Gaussian Splatting (3DGS) has recently emerged as a powerful
representation of geometry and appearance for dense Simultaneous Localization
and Mapping (SLAM). Through rapid, differentiable rasterization of 3D
Gaussians, many 3DGS SLAM methods achieve near real-time rendering and
accelerated training. However, these methods largely overlook inertial data,
which is a critical piece of information collected from the inertial
measurement unit (IMU). In this paper, we present GI-SLAM, a novel
Gaussian-inertial SLAM system that consists of an IMU-enhanced camera tracking
module and a realistic 3D Gaussian-based scene representation for mapping. Our
method introduces an IMU loss that seamlessly integrates into the deep learning
framework underpinning 3D Gaussian Splatting SLAM, effectively enhancing the
accuracy, robustness and efficiency of camera tracking. Moreover, our SLAM
system supports a wide range of sensor configurations, including monocular,
stereo, and RGBD cameras, both with and without IMU integration. Our method
achieves competitive performance compared with existing state-of-the-art
real-time methods on the EuRoC and TUM-RGBD datasets.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 01:45:40 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Xulang",
""
],
[
"Tan",
"Ning",
""
]
] | TITLE: GI-SLAM: Gaussian-Inertial SLAM
ABSTRACT: 3D Gaussian Splatting (3DGS) has recently emerged as a powerful
representation of geometry and appearance for dense Simultaneous Localization
and Mapping (SLAM). Through rapid, differentiable rasterization of 3D
Gaussians, many 3DGS SLAM methods achieve near real-time rendering and
accelerated training. However, these methods largely overlook inertial data,
which is a critical piece of information collected from the inertial
measurement unit (IMU). In this paper, we present GI-SLAM, a novel
Gaussian-inertial SLAM system that consists of an IMU-enhanced camera tracking
module and a realistic 3D Gaussian-based scene representation for mapping. Our
method introduces an IMU loss that seamlessly integrates into the deep learning
framework underpinning 3D Gaussian Splatting SLAM, effectively enhancing the
accuracy, robustness and efficiency of camera tracking. Moreover, our SLAM
system supports a wide range of sensor configurations, including monocular,
stereo, and RGBD cameras, both with and without IMU integration. Our method
achieves competitive performance compared with existing state-of-the-art
real-time methods on the EuRoC and TUM-RGBD datasets.
|
2503.18276 | Yuming Huang | Yuming Huang, Wei Gao, Zhiyuan Zhang, Maani Ghaffari, Dezhen Song,
Cheng-Zhong Xu, and Hui Kong | Learning Orientation Field for OSM-Guided Autonomous Navigation | 14 pages, 12 figures, and 5 tables | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | OpenStreetMap (OSM) has gained popularity recently in autonomous navigation
due to its public accessibility, lower maintenance costs, and broader
geographical coverage. However, existing methods often struggle with noisy OSM
data and incomplete sensor observations, leading to inaccuracies in trajectory
planning. These challenges are particularly evident in complex driving
scenarios, such as at intersections or facing occlusions. To address these
challenges, we propose a robust and explainable two-stage framework to learn an
Orientation Field (OrField) for robot navigation by integrating LiDAR scans and
OSM routes. In the first stage, we introduce the novel representation, OrField,
which can provide orientations for each grid on the map, reasoning jointly from
noisy LiDAR scans and OSM routes. To generate a robust OrField, we train a deep
neural network that encodes a versatile initial OrField and outputs an optimized
OrField. Based on OrField, we propose two trajectory planners for OSM-guided
robot navigation, called Field-RRT* and Field-Bezier, respectively, in the
second stage by improving the Rapidly Exploring Random Tree (RRT) algorithm and
Bezier curve to estimate the trajectories. Thanks to the robustness of OrField
which captures both global and local information, Field-RRT* and Field-Bezier
can generate accurate and reliable trajectories even in challenging conditions.
We validate our approach through experiments on the SemanticKITTI dataset and
our own campus dataset. The results demonstrate the effectiveness of our
method, achieving superior performance in complex and noisy conditions. Our
code for network training and real-world deployment is available at
https://github.com/IMRL/OriField.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 01:46:17 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Huang",
"Yuming",
""
],
[
"Gao",
"Wei",
""
],
[
"Zhang",
"Zhiyuan",
""
],
[
"Ghaffari",
"Maani",
""
],
[
"Song",
"Dezhen",
""
],
[
"Xu",
"Cheng-Zhong",
""
],
[
"Kong",
"Hui",
""
]
] | TITLE: Learning Orientation Field for OSM-Guided Autonomous Navigation
ABSTRACT: OpenStreetMap (OSM) has gained popularity recently in autonomous navigation
due to its public accessibility, lower maintenance costs, and broader
geographical coverage. However, existing methods often struggle with noisy OSM
data and incomplete sensor observations, leading to inaccuracies in trajectory
planning. These challenges are particularly evident in complex driving
scenarios, such as at intersections or facing occlusions. To address these
challenges, we propose a robust and explainable two-stage framework to learn an
Orientation Field (OrField) for robot navigation by integrating LiDAR scans and
OSM routes. In the first stage, we introduce the novel representation, OrField,
which can provide orientations for each grid on the map, reasoning jointly from
noisy LiDAR scans and OSM routes. To generate a robust OrField, we train a deep
neural network that encodes a versatile initial OrField and outputs an optimized
OrField. Based on OrField, we propose two trajectory planners for OSM-guided
robot navigation, called Field-RRT* and Field-Bezier, respectively, in the
second stage by improving the Rapidly Exploring Random Tree (RRT) algorithm and
Bezier curve to estimate the trajectories. Thanks to the robustness of OrField
which captures both global and local information, Field-RRT* and Field-Bezier
can generate accurate and reliable trajectories even in challenging conditions.
We validate our approach through experiments on the SemanticKITTI dataset and
our own campus dataset. The results demonstrate the effectiveness of our
method, achieving superior performance in complex and noisy conditions. Our
code for network training and real-world deployment is available at
https://github.com/IMRL/OriField.
|
2503.18282 | Kazuhiro Yamada | Kazuhiro Yamada, Li Yin, Qingrui Hu, Ning Ding, Shunsuke Iwashita, Jun
Ichikawa, Kiwamu Kotani, Calvin Yeung and Keisuke Fujii | TrackID3x3: A Dataset and Algorithm for Multi-Player Tracking with
Identification and Pose Estimation in 3x3 Basketball Full-court Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-object tracking, player identification, and pose estimation are
fundamental components of sports analytics, essential for analyzing player
movements, performance, and tactical strategies. However, existing datasets and
methodologies primarily target mainstream team sports such as soccer and
conventional 5-on-5 basketball, often overlooking scenarios involving
fixed-camera setups commonly used at amateur levels, less mainstream sports, or
datasets that explicitly incorporate pose annotations. In this paper, we
propose the TrackID3x3 dataset, the first publicly available comprehensive
dataset specifically designed for multi-player tracking, player identification,
and pose estimation in 3x3 basketball scenarios. The dataset comprises three
distinct subsets (Indoor fixed-camera, Outdoor fixed-camera, and Drone camera
footage), capturing diverse full-court camera perspectives and environments. We
also introduce the Track-ID task, a simplified variant of the game state
reconstruction task that excludes field detection and focuses exclusively on
fixed-camera scenarios. To evaluate performance, we propose a baseline
algorithm, the Track-ID algorithm, tailored to assess tracking and
identification quality. Furthermore, our benchmark experiments, utilizing
recent multi-object tracking algorithms (e.g., BoT-SORT-ReID) and top-down pose
estimation methods (HRNet, RTMPose, and SwinPose), demonstrate robust results
and highlight remaining challenges. Our dataset and evaluation benchmarks
provide a solid foundation for advancing automated analytics in 3x3 basketball.
Dataset and code will be available at
https://github.com/open-starlab/TrackID3x3.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 01:55:46 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yamada",
"Kazuhiro",
""
],
[
"Yin",
"Li",
""
],
[
"Hu",
"Qingrui",
""
],
[
"Ding",
"Ning",
""
],
[
"Iwashita",
"Shunsuke",
""
],
[
"Ichikawa",
"Jun",
""
],
[
"Kotani",
"Kiwamu",
""
],
[
"Yeung",
"Calvin",
""
],
[
"Fujii",
"Keisuke",
""
]
] | TITLE: TrackID3x3: A Dataset and Algorithm for Multi-Player Tracking with
Identification and Pose Estimation in 3x3 Basketball Full-court Videos
ABSTRACT: Multi-object tracking, player identification, and pose estimation are
fundamental components of sports analytics, essential for analyzing player
movements, performance, and tactical strategies. However, existing datasets and
methodologies primarily target mainstream team sports such as soccer and
conventional 5-on-5 basketball, often overlooking scenarios involving
fixed-camera setups commonly used at amateur levels, less mainstream sports, or
datasets that explicitly incorporate pose annotations. In this paper, we
propose the TrackID3x3 dataset, the first publicly available comprehensive
dataset specifically designed for multi-player tracking, player identification,
and pose estimation in 3x3 basketball scenarios. The dataset comprises three
distinct subsets (Indoor fixed-camera, Outdoor fixed-camera, and Drone camera
footage), capturing diverse full-court camera perspectives and environments. We
also introduce the Track-ID task, a simplified variant of the game state
reconstruction task that excludes field detection and focuses exclusively on
fixed-camera scenarios. To evaluate performance, we propose a baseline
algorithm, the Track-ID algorithm, tailored to assess tracking and
identification quality. Furthermore, our benchmark experiments, utilizing
recent multi-object tracking algorithms (e.g., BoT-SORT-ReID) and top-down pose
estimation methods (HRNet, RTMPose, and SwinPose), demonstrate robust results
and highlight remaining challenges. Our dataset and evaluation benchmarks
provide a solid foundation for advancing automated analytics in 3x3 basketball.
Dataset and code will be available at
https://github.com/open-starlab/TrackID3x3.
|
2503.18286 | Siyuan Cheng | Siyuan Cheng, Lingjuan Lyu, Zhenting Wang, Xiangyu Zhang, Vikash
Sehwag | CO-SPY: Combining Semantic and Pixel Features to Detect Synthetic Images
by AI | null | The IEEE/CVF Conference on Computer Vision and Pattern Recognition
2025 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid advancement of generative AI, it is now possible to synthesize
high-quality images in a few seconds. Despite the power of these technologies,
they raise significant concerns regarding misuse. Current efforts to
distinguish between real and AI-generated images may lack generalization, being
effective for only certain types of generative models and susceptible to
post-processing techniques like JPEG compression. To overcome these
limitations, we propose a novel framework, Co-Spy, that first enhances existing
semantic features (e.g., the number of fingers in a hand) and artifact features
(e.g., pixel value differences), and then adaptively integrates them to achieve
more general and robust synthetic image detection. Additionally, we create
Co-Spy-Bench, a comprehensive dataset comprising 5 real image datasets and 22
state-of-the-art generative models, including the latest models like FLUX. We
also collect 50k synthetic images in the wild from the Internet to enable
evaluation in a more practical setting. Our extensive evaluations demonstrate
that our detector outperforms existing methods under identical training
conditions, achieving an average accuracy improvement of approximately 11% to
34%. The code is available at https://github.com/Megum1/Co-Spy.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 01:59:29 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Cheng",
"Siyuan",
""
],
[
"Lyu",
"Lingjuan",
""
],
[
"Wang",
"Zhenting",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Sehwag",
"Vikash",
""
]
] | TITLE: CO-SPY: Combining Semantic and Pixel Features to Detect Synthetic Images
by AI
ABSTRACT: With the rapid advancement of generative AI, it is now possible to synthesize
high-quality images in a few seconds. Despite the power of these technologies,
they raise significant concerns regarding misuse. Current efforts to
distinguish between real and AI-generated images may lack generalization, being
effective for only certain types of generative models and susceptible to
post-processing techniques like JPEG compression. To overcome these
limitations, we propose a novel framework, Co-Spy, that first enhances existing
semantic features (e.g., the number of fingers in a hand) and artifact features
(e.g., pixel value differences), and then adaptively integrates them to achieve
more general and robust synthetic image detection. Additionally, we create
Co-Spy-Bench, a comprehensive dataset comprising 5 real image datasets and 22
state-of-the-art generative models, including the latest models like FLUX. We
also collect 50k synthetic images in the wild from the Internet to enable
evaluation in a more practical setting. Our extensive evaluations demonstrate
that our detector outperforms existing methods under identical training
conditions, achieving an average accuracy improvement of approximately 11% to
34%. The code is available at https://github.com/Megum1/Co-Spy.
|
2503.18290 | Paul K. Mandal | Paul K. Mandal | When is dataset cartography ineffective? Using training dynamics does
not improve robustness against Adversarial SQuAD | 5 pages, 3 figures, 4 tables | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper, I investigate the effectiveness of dataset cartography for
extractive question answering on the SQuAD dataset. I begin by analyzing
annotation artifacts in SQuAD and evaluate the impact of two adversarial
datasets, AddSent and AddOneSent, on an ELECTRA-small model. Using training
dynamics, I partition SQuAD into easy-to-learn, ambiguous, and hard-to-learn
subsets. I then compare the performance of models trained on these subsets to
those trained on randomly selected samples of equal size. Results show that
training on cartography-based subsets does not improve generalization to the
SQuAD validation set or the AddSent adversarial set. While the hard-to-learn
subset yields a slightly higher F1 score on the AddOneSent dataset, the overall
gains are limited. These findings suggest that dataset cartography provides
little benefit for adversarial robustness in SQuAD-style QA tasks. I conclude
by comparing these results to prior findings on SNLI and discuss possible
reasons for the observed differences.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 02:24:18 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Mandal",
"Paul K.",
""
]
] | TITLE: When is dataset cartography ineffective? Using training dynamics does
not improve robustness against Adversarial SQuAD
ABSTRACT: In this paper, I investigate the effectiveness of dataset cartography for
extractive question answering on the SQuAD dataset. I begin by analyzing
annotation artifacts in SQuAD and evaluate the impact of two adversarial
datasets, AddSent and AddOneSent, on an ELECTRA-small model. Using training
dynamics, I partition SQuAD into easy-to-learn, ambiguous, and hard-to-learn
subsets. I then compare the performance of models trained on these subsets to
those trained on randomly selected samples of equal size. Results show that
training on cartography-based subsets does not improve generalization to the
SQuAD validation set or the AddSent adversarial set. While the hard-to-learn
subset yields a slightly higher F1 score on the AddOneSent dataset, the overall
gains are limited. These findings suggest that dataset cartography provides
little benefit for adversarial robustness in SQuAD-style QA tasks. I conclude
by comparing these results to prior findings on SNLI and discuss possible
reasons for the observed differences.
|
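Dataset cartography, as used above, partitions training examples by two training-dynamics statistics: mean confidence on the gold label across epochs and its variability. A minimal sketch of that partitioning (the one-third fractions and rank-based cutoffs are illustrative):

```python
import numpy as np

def cartography_partition(gold_probs, frac=0.33):
    """gold_probs: (n_examples, n_epochs) array of the probability assigned
    to the gold label at each epoch. Returns index sets for the three regions."""
    confidence = gold_probs.mean(axis=1)      # high mean -> easy-to-learn
    variability = gold_probs.std(axis=1)      # high std  -> ambiguous
    k = int(frac * len(confidence))
    easy = np.argsort(-confidence)[:k]        # most confident examples
    hard = np.argsort(confidence)[:k]         # least confident examples
    ambiguous = np.argsort(-variability)[:k]  # most variable examples
    return easy, ambiguous, hard

rng = np.random.default_rng(0)
easy, ambiguous, hard = cartography_partition(rng.uniform(size=(1000, 5)))
```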
2503.18292 | Chen Zhang | Chen Zhang, Kuntai Du, Shu Liu, Woosuk Kwon, Xiangxi Mo, Yufeng Wang,
Xiaoxuan Liu, Kaichao You, Zhuohan Li, Mingsheng Long, Jidong Zhai, Joseph
Gonzalez, Ion Stoica | Jenga: Effective Memory Management for Serving LLM with Heterogeneity | 16 pages, 19 figures | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) are widely used but expensive to run, especially
as inference workloads grow. To lower costs, maximizing the request batch size
by managing GPU memory efficiently is crucial. While PagedAttention has
recently been proposed to improve the efficiency of memory management, we find
that the growing heterogeneity in the embedding dimensions, attention, and
access patterns of modern LLM architectures introduces new challenges for
memory allocation.
In this paper, we present Jenga, a novel memory allocation framework for
heterogeneous embeddings in LLMs. Jenga tackles two key challenges: (1)
minimizing memory fragmentation when managing embeddings of different sizes,
and (2) enabling flexible caching and eviction policies tailored to the
specific token-dependency patterns of various layers. Jenga employs a two-level
memory allocator, leveraging the least common multiple (LCM) of embedding sizes
to optimize memory usage and providing APIs to express layer-specific caching
logic to enhance memory reuse.
We implement Jenga on vLLM, a state-of-the-art LLM inference engine, and
evaluate it with diverse LLMs, datasets, and GPU configurations. Evaluations
show that Jenga improves GPU memory utilization by up to 79.6%, and increases
serving throughput by up to 4.92x (1.80x on average).
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 02:28:04 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Chen",
""
],
[
"Du",
"Kuntai",
""
],
[
"Liu",
"Shu",
""
],
[
"Kwon",
"Woosuk",
""
],
[
"Mo",
"Xiangxi",
""
],
[
"Wang",
"Yufeng",
""
],
[
"Liu",
"Xiaoxuan",
""
],
[
"You",
"Kaichao",
""
],
[
"Li",
"Zhuohan",
""
],
[
"Long",
"Mingsheng",
""
],
[
"Zhai",
"Jidong",
""
],
[
"Gonzalez",
"Joseph",
""
],
[
"Stoica",
"Ion",
""
]
] | TITLE: Jenga: Effective Memory Management for Serving LLM with Heterogeneity
ABSTRACT: Large language models (LLMs) are widely used but expensive to run, especially
as inference workloads grow. To lower costs, maximizing the request batch size
by managing GPU memory efficiently is crucial. While PagedAttention has
recently been proposed to improve the efficiency of memory management, we find
that the growing heterogeneity in the embedding dimensions, attention, and
access patterns of modern LLM architectures introduces new challenges for
memory allocation.
In this paper, we present Jenga, a novel memory allocation framework for
heterogeneous embeddings in LLMs. Jenga tackles two key challenges: (1)
minimizing memory fragmentation when managing embeddings of different sizes,
and (2) enabling flexible caching and eviction policies tailored to the
specific token-dependency patterns of various layers. Jenga employs a two-level
memory allocator, leveraging the least common multiple (LCM) of embedding sizes
to optimize memory usage and providing APIs to express layer-specific caching
logic to enhance memory reuse.
We implement Jenga on vLLM, a state-of-the-art LLM inference engine, and
evaluate it with diverse LLMs, datasets, and GPU configurations. Evaluations
show that Jenga improves GPU memory utilization by up to 79.6%, and increases
serving throughput by up to 4.92x (1.80x on average).
|
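The allocator idea at the core of the Jenga abstract, sizing pages around the least common multiple (LCM) of the embedding sizes so that heterogeneous embeddings tile pages without internal fragmentation, can be sketched as follows (the page-splitting policy is an assumption for illustration, not Jenga's actual implementation):

```python
from math import lcm

def page_layout(embedding_bytes, max_page_bytes=2 << 20):
    """Pick a page size every embedding size divides evenly, so each layer
    can carve whole embeddings out of a page with no internal waste."""
    unit = lcm(*embedding_bytes)
    if unit > max_page_bytes:
        raise ValueError("LCM exceeds the page-size budget")
    page = (max_page_bytes // unit) * unit   # largest LCM multiple that fits
    return {size: page // size for size in embedding_bytes}  # slots per page

# e.g. hypothetical per-token sizes for full-attention KV, sliding-window KV,
# and a larger cross-layer embedding
print(page_layout([4096, 1024, 6144]))  # -> {4096: 510, 1024: 2040, 6144: 340}
```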
2503.18294 | Fiseha Berhanu Tesema PhD | Fiseha B. Tesema, Alejandro Guerra Manzanares, Tianxiang Cui, Qian
Zhang, Moses Solomon, Sean He | LGPS: A Lightweight GAN-Based Approach for Polyp Segmentation in
Colonoscopy Images | 10 pages, 6 Figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Colorectal cancer (CRC) is a major global cause of cancer-related deaths,
with early polyp detection and removal during colonoscopy being crucial for
prevention. While deep learning methods have shown promise in polyp
segmentation, challenges such as high computational costs, difficulty in
segmenting small or low-contrast polyps, and limited generalizability across
datasets persist. To address these issues, we propose LGPS, a lightweight
GAN-based framework for polyp segmentation. LGPS incorporates three key
innovations: (1) a MobileNetV2 backbone enhanced with modified residual blocks
and Squeeze-and-Excitation (ResE) modules for efficient feature extraction; (2)
Convolutional Conditional Random Fields (ConvCRF) for precise boundary
refinement; and (3) a hybrid loss function combining Binary Cross-Entropy,
Weighted IoU Loss, and Dice Loss to address class imbalance and enhance
segmentation accuracy. LGPS is validated on five benchmark datasets and
compared with state-of-the-art (SOTA) methods. On the largest and most
challenging PolypGen test dataset, LGPS achieves a Dice score of 0.7299 and an
IoU of 0.7867, outperforming all SOTA methods and demonstrating robust
generalization. With only
1.07 million parameters, LGPS is 17 times smaller than the smallest existing
model, making it highly suitable for real-time clinical applications. Its
lightweight design and strong performance underscore its potential for
improving early CRC diagnosis. Code is available at
https://github.com/Falmi/LGPS/.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 02:41:53 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Tesema",
"Fiseha B.",
""
],
[
"Manzanares",
"Alejandro Guerra",
""
],
[
"Cui",
"Tianxiang",
""
],
[
"Zhang",
"Qian",
""
],
[
"Solomon",
"Moses",
""
],
[
"He",
"Sean",
""
]
] | TITLE: LGPS: A Lightweight GAN-Based Approach for Polyp Segmentation in
Colonoscopy Images
ABSTRACT: Colorectal cancer (CRC) is a major global cause of cancer-related deaths,
with early polyp detection and removal during colonoscopy being crucial for
prevention. While deep learning methods have shown promise in polyp
segmentation, challenges such as high computational costs, difficulty in
segmenting small or low-contrast polyps, and limited generalizability across
datasets persist. To address these issues, we propose LGPS, a lightweight
GAN-based framework for polyp segmentation. LGPS incorporates three key
innovations: (1) a MobileNetV2 backbone enhanced with modified residual blocks
and Squeeze-and-Excitation (ResE) modules for efficient feature extraction; (2)
Convolutional Conditional Random Fields (ConvCRF) for precise boundary
refinement; and (3) a hybrid loss function combining Binary Cross-Entropy,
Weighted IoU Loss, and Dice Loss to address class imbalance and enhance
segmentation accuracy. LGPS is validated on five benchmark datasets and
compared with state-of-the-art (SOTA) methods. On the largest and most
challenging PolypGen test dataset, LGPS achieves a Dice score of 0.7299 and an
IoU of 0.7867, outperforming all SOTA methods and demonstrating robust
generalization. With only
1.07 million parameters, LGPS is 17 times smaller than the smallest existing
model, making it highly suitable for real-time clinical applications. Its
lightweight design and strong performance underscore its potential for
improving early CRC diagnosis. Code is available at
https://github.com/Falmi/LGPS/.
|
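A minimal sketch of the hybrid loss named in the LGPS abstract, combining binary cross-entropy with soft IoU and Dice terms (the equal weighting and the plain rather than weighted IoU term are simplifying assumptions):

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, eps=1e-6):
    """BCE + soft IoU + Dice over a binary mask; logits and target share shape."""
    prob = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, target)
    inter = (prob * target).sum()
    union = (prob + target - prob * target).sum()
    iou_loss = 1 - (inter + eps) / (union + eps)
    dice_loss = 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    return bce + iou_loss + dice_loss  # equal weights: an assumption

logits = torch.randn(2, 1, 64, 64)                 # raw mask logits
target = (torch.rand(2, 1, 64, 64) > 0.5).float()  # binary ground truth
print(hybrid_loss(logits, target).item())
```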
2503.18300 | Xi Wu | Xi Wu and Dan Zhang and Chao Zhou and Liangwei Yang and Tianyu Lin and
Jibing Gong | RAU: Towards Regularized Alignment and Uniformity for Representation
Learning in Recommendation | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems (RecSys) have become essential in modern society, driving
user engagement and satisfaction across diverse online platforms. Most RecSys research
focuses on designing a powerful encoder to embed users and items into
high-dimensional vector representation space, with loss functions optimizing
their representation distributions. Recent studies reveal that directly
optimizing key properties of the representation distribution, such as alignment
and uniformity, can outperform complex encoder designs. However, existing
methods for optimizing critical attributes overlook the impact of dataset
sparsity on the model: limited user-item interactions lead to sparse alignment,
while excessive interactions result in uneven uniformity, both of which degrade
performance. In this paper, we identify the sparse alignment and uneven
uniformity issues, and further propose Regularized Alignment and Uniformity
(RAU) to cope with these two issues accordingly. RAU consists of two novel
regularization methods for alignment and uniformity to learn better user/item
representation. 1) Center-strengthened alignment further aligns the average
in-batch user/item representation to provide an enhanced alignment signal and
further minimize the disparity between user and item representation. 2)
Low-variance-guided uniformity minimizes the variance of pairwise distances
along with uniformity, which provides extra guidance to a more stabilized
uniformity increase during training. We conducted extensive experiments on
three real-world datasets, and the proposed RAU resulted in significant
performance improvements compared to current state-of-the-art CF methods, which
confirms the advantages of the two proposed regularization methods.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 03:03:21 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wu",
"Xi",
""
],
[
"Zhang",
"Dan",
""
],
[
"Zhou",
"Chao",
""
],
[
"Yang",
"Liangwei",
""
],
[
"Lin",
"Tianyu",
""
],
[
"Gong",
"Jibing",
""
]
] | TITLE: RAU: Towards Regularized Alignment and Uniformity for Representation
Learning in Recommendation
ABSTRACT: Recommender systems (RecSys) have become essential in modern society, driving
user engagement and satisfaction across diverse online platforms. Most RecSys research
focuses on designing a powerful encoder to embed users and items into
high-dimensional vector representation space, with loss functions optimizing
their representation distributions. Recent studies reveal that directly
optimizing key properties of the representation distribution, such as alignment
and uniformity, can outperform complex encoder designs. However, existing
methods for optimizing critical attributes overlook the impact of dataset
sparsity on the model: limited user-item interactions lead to sparse alignment,
while excessive interactions result in uneven uniformity, both of which degrade
performance. In this paper, we identify the sparse alignment and uneven
uniformity issues, and further propose Regularized Alignment and Uniformity
(RAU) to cope with these two issues accordingly. RAU consists of two novel
regularization methods for alignment and uniformity to learn better user/item
representation. 1) Center-strengthened alignment further aligns the average
in-batch user/item representation to provide an enhanced alignment signal and
further minimize the disparity between user and item representation. 2)
Low-variance-guided uniformity minimizes the variance of pairwise distances
along with uniformity, which provides extra guidance to a more stabilized
uniformity increase during training. We conducted extensive experiments on
three real-world datasets, and the proposed RAU resulted in significant
performance improvements compared to current state-of-the-art CF methods, which
confirms the advantages of the two proposed regularization methods.
|
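The two RAU regularizers can be sketched on top of the standard alignment and uniformity losses (the exact formulations in the paper may differ; this is an illustration over L2-normalized embeddings):

```python
import torch
import torch.nn.functional as F

def alignment(u, i):
    """Mean squared distance between matched user/item embedding pairs."""
    return (u - i).pow(2).sum(dim=1).mean()

def center_alignment(u, i):
    """Center-strengthened term: align the batch-mean user and item embeddings."""
    return (u.mean(dim=0) - i.mean(dim=0)).pow(2).sum()

def uniformity(x, t=2.0):
    """Log of the mean Gaussian potential over pairwise distances."""
    return torch.log(torch.exp(-t * torch.pdist(x).pow(2)).mean())

def low_variance_uniformity(x):
    """Low-variance guidance: penalize the variance of pairwise distances."""
    return torch.pdist(x).var()

u = F.normalize(torch.randn(128, 64), dim=1)   # user embeddings
i = F.normalize(torch.randn(128, 64), dim=1)   # matched item embeddings
loss = (alignment(u, i) + center_alignment(u, i)
        + uniformity(u) + uniformity(i)
        + low_variance_uniformity(u) + low_variance_uniformity(i))
```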
2503.18301 | Jiajun Guo | Haifeng Li, Jiajun Guo, Xuanxin Fan and Dezhen Song | Ground Penetrating Radar-Assisted Multimodal Robot Odometry Using
Subsurface Feature Matrix | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Localization of robots using subsurface features observed by
ground-penetrating radar (GPR) enhances and adds robustness to common sensor
modalities, as subsurface features are less affected by weather, seasons, and
surface changes. We introduce an innovative multimodal odometry approach using
inputs from GPR, an inertial measurement unit (IMU), and a wheel encoder. To
efficiently address GPR signal noise, we introduce an advanced feature
representation called the subsurface feature matrix (SFM). The SFM leverages
frequency domain data and identifies peaks within radar scans. Additionally, we
propose a novel feature matching method that estimates GPR displacement by
aligning SFMs. The integrations from these three input sources are consolidated
using a factor graph approach to achieve multimodal robot odometry. Our method
has been developed and evaluated with the CMU-GPR public dataset, demonstrating
improvements in accuracy and robustness with real-time performance in robotic
odometry tasks.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 03:07:28 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Haifeng",
""
],
[
"Guo",
"Jiajun",
""
],
[
"Fan",
"Xuanxin",
""
],
[
"Song",
"Dezhen",
""
]
] | TITLE: Ground Penetrating Radar-Assisted Multimodal Robot Odometry Using
Subsurface Feature Matrix
ABSTRACT: Localization of robots using subsurface features observed by
ground-penetrating radar (GPR) enhances and adds robustness to common sensor
modalities, as subsurface features are less affected by weather, seasons, and
surface changes. We introduce an innovative multimodal odometry approach using
inputs from GPR, an inertial measurement unit (IMU), and a wheel encoder. To
efficiently address GPR signal noise, we introduce an advanced feature
representation called the subsurface feature matrix (SFM). The SFM leverages
frequency domain data and identifies peaks within radar scans. Additionally, we
propose a novel feature matching method that estimates GPR displacement by
aligning SFMs. The integrations from these three input sources are consolidated
using a factor graph approach to achieve multimodal robot odometry. Our method
has been developed and evaluated with the CMU-GPR public dataset, demonstrating
improvements in accuracy and robustness with real-time performance in robotic
odometry tasks.
|
2503.18302 | Qingyue Long | Qingyue Long, Can Rong, Huandong Wang, Shaw Rajib, Yong Li | DiffMove: Group Mobility Tendency Enhanced Trajectory Recovery via
Diffusion Model | null | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In the real world, trajectory data is often sparse and incomplete due to low
collection frequencies or limited device coverage. Trajectory recovery aims to
recover these missing trajectory points, making the trajectories denser and
more complete. However, this task faces two key challenges: 1) The excessive
sparsity of individual trajectories makes it difficult to effectively leverage
historical information for recovery; 2) Sparse trajectories make it harder to
capture complex individual mobility preferences. To address these challenges,
we propose a novel method called DiffMove. Firstly, we harness crowd wisdom for
trajectory recovery. Specifically, we construct a group tendency graph using
the collective trajectories of all users and then integrate the group mobility
trends into the location representations via graph embedding. This addresses the
challenge that sparse trajectories cannot rely on individual historical
trajectories for recovery. Secondly, we capture individual mobility preferences
from both historical and current perspectives. Finally, we integrate group
mobility tendencies and individual preferences into the spatiotemporal
distribution of the trajectory to recover high-quality trajectories. Extensive
experiments on two real-world datasets demonstrate that DiffMove outperforms
existing state-of-the-art methods. Further analysis validates the robustness of
our method.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 03:08:21 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Long",
"Qingyue",
""
],
[
"Rong",
"Can",
""
],
[
"Wang",
"Huandong",
""
],
[
"Rajib",
"Shaw",
""
],
[
"Li",
"Yong",
""
]
] | TITLE: DiffMove: Group Mobility Tendency Enhanced Trajectory Recovery via
Diffusion Model
ABSTRACT: In the real world, trajectory data is often sparse and incomplete due to low
collection frequencies or limited device coverage. Trajectory recovery aims to
recover these missing trajectory points, making the trajectories denser and
more complete. However, this task faces two key challenges: 1) The excessive
sparsity of individual trajectories makes it difficult to effectively leverage
historical information for recovery; 2) Sparse trajectories make it harder to
capture complex individual mobility preferences. To address these challenges,
we propose a novel method called DiffMove. Firstly, we harness crowd wisdom for
trajectory recovery. Specifically, we construct a group tendency graph using
the collective trajectories of all users and then integrate the group mobility
trends into the location representations via graph embedding. This addresses the
challenge that sparse trajectories cannot rely on individual historical
trajectories for recovery. Secondly, we capture individual mobility preferences
from both historical and current perspectives. Finally, we integrate group
mobility tendencies and individual preferences into the spatiotemporal
distribution of the trajectory to recover high-quality trajectories. Extensive
experiments on two real-world datasets demonstrate that DiffMove outperforms
existing state-of-the-art methods. Further analysis validates the robustness of
our method.
|
2503.18309 | Zhidi Lin | Zhidi Lin, Ying Li, Feng Yin, Juan Maro\~nas, Alexandre H. Thi\'ery | Efficient Transformed Gaussian Process State-Space Models for
Non-Stationary High-Dimensional Dynamical Systems | 13 pages, 6 figures | null | null | null | stat.ML cs.LG eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gaussian process state-space models (GPSSMs) have emerged as a powerful
framework for modeling dynamical systems, offering interpretable uncertainty
quantification and inherent regularization. However, existing GPSSMs face
significant challenges in handling high-dimensional, non-stationary systems due
to computational inefficiencies, limited scalability, and restrictive
stationarity assumptions. In this paper, we propose an efficient transformed
Gaussian process state-space model (ETGPSSM) to address these limitations. Our
approach leverages a single shared Gaussian process (GP) combined with
normalizing flows and Bayesian neural networks, enabling efficient modeling of
complex, high-dimensional state transitions while preserving scalability. To
address the lack of closed-form expressions for the implicit process in the
transformed GP, we follow its generative process and introduce an efficient
variational inference algorithm, aided by the ensemble Kalman filter (EnKF), to
enable computationally tractable learning and inference. Extensive empirical
evaluations on synthetic and real-world datasets demonstrate the superior
performance of our ETGPSSM in system dynamics learning, high-dimensional state
estimation, and time-series forecasting, outperforming existing GPSSMs and
neural network-based methods in both accuracy and computational efficiency.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 03:19:45 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lin",
"Zhidi",
""
],
[
"Li",
"Ying",
""
],
[
"Yin",
"Feng",
""
],
[
"Maroñas",
"Juan",
""
],
[
"Thiéry",
"Alexandre H.",
""
]
] | TITLE: Efficient Transformed Gaussian Process State-Space Models for
Non-Stationary High-Dimensional Dynamical Systems
ABSTRACT: Gaussian process state-space models (GPSSMs) have emerged as a powerful
framework for modeling dynamical systems, offering interpretable uncertainty
quantification and inherent regularization. However, existing GPSSMs face
significant challenges in handling high-dimensional, non-stationary systems due
to computational inefficiencies, limited scalability, and restrictive
stationarity assumptions. In this paper, we propose an efficient transformed
Gaussian process state-space model (ETGPSSM) to address these limitations. Our
approach leverages a single shared Gaussian process (GP) combined with
normalizing flows and Bayesian neural networks, enabling efficient modeling of
complex, high-dimensional state transitions while preserving scalability. To
address the lack of closed-form expressions for the implicit process in the
transformed GP, we follow its generative process and introduce an efficient
variational inference algorithm, aided by the ensemble Kalman filter (EnKF), to
enable computationally tractable learning and inference. Extensive empirical
evaluations on synthetic and real-world datasets demonstrate the superior
performance of our ETGPSSM in system dynamics learning, high-dimensional state
estimation, and time-series forecasting, outperforming existing GPSSMs and
neural network-based methods in both accuracy and computational efficiency.
|
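A minimal sketch of the ensemble Kalman filter (EnKF) analysis step that the abstract says aids the variational inference algorithm (a linear-Gaussian observation model and a perturbed-observation update are assumed for illustration):

```python
import numpy as np

def enkf_update(ensemble, y, H, R, rng):
    """One EnKF analysis step. ensemble: (N, dx) state members, y: (dy,)
    observation, H: (dy, dx) observation matrix, R: (dy, dy) noise covariance."""
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)              # centered ensemble
    P = X.T @ X / (N - 1)                             # sample state covariance
    S = H @ P @ H.T + R                               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    # Perturbed observations preserve the correct analysis spread.
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
ens = rng.normal(size=(100, 4))                       # 100 members, 4-dim state
ens = enkf_update(ens, y=np.array([0.5, -0.2]),
                  H=np.eye(2, 4), R=0.1 * np.eye(2), rng=rng)
```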
2503.18312 | Jianlong Jin | Jianlong Jin, Chenglong Zhao, Ruixin Zhang, Sheng Shang, Jianqing Xu,
Jingyun Zhang, ShaoMing Wang, Yang Zhao, Shouhong Ding, Wei Jia, Yunsheng Wu | Diff-Palm: Realistic Palmprint Generation with Polynomial Creases and
Intra-Class Variation Controllable Diffusion Models | Accepted by CVPR2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Palmprint recognition is significantly limited by the lack of large-scale
publicly available datasets. Previous methods have adopted B\'ezier curves to
simulate the palm creases, which then serve as input for conditional GANs to
generate realistic palmprints. However, without employing real data
fine-tuning, the performance of the recognition model trained on these
synthetic datasets would drastically decline, indicating a large gap between
generated and real palmprints. This is primarily due to the utilization of an
inaccurate palm crease representation and challenges in balancing intra-class
variation with identity consistency. To address this, we introduce a
polynomial-based palm crease representation that provides a new palm crease
generation mechanism more closely aligned with the real distribution. We also
propose the palm creases conditioned diffusion model with a novel intra-class
variation control method. By applying our proposed $K$-step noise-sharing
sampling, we are able to synthesize palmprint datasets with large intra-class
variation and high identity consistency. Experimental results show that, for
the first time, recognition models trained solely on our synthetic datasets,
without any fine-tuning, outperform those trained on real datasets.
Furthermore, our approach achieves superior recognition performance as the
number of generated identities increases.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 03:30:58 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Jin",
"Jianlong",
""
],
[
"Zhao",
"Chenglong",
""
],
[
"Zhang",
"Ruixin",
""
],
[
"Shang",
"Sheng",
""
],
[
"Xu",
"Jianqing",
""
],
[
"Zhang",
"Jingyun",
""
],
[
"Wang",
"ShaoMing",
""
],
[
"Zhao",
"Yang",
""
],
[
"Ding",
"Shouhong",
""
],
[
"Jia",
"Wei",
""
],
[
"Wu",
"Yunsheng",
""
]
] | TITLE: Diff-Palm: Realistic Palmprint Generation with Polynomial Creases and
Intra-Class Variation Controllable Diffusion Models
ABSTRACT: Palmprint recognition is significantly limited by the lack of large-scale
publicly available datasets. Previous methods have adopted B\'ezier curves to
simulate the palm creases, which then serve as input for conditional GANs to
generate realistic palmprints. However, without employing real data
fine-tuning, the performance of the recognition model trained on these
synthetic datasets would drastically decline, indicating a large gap between
generated and real palmprints. This is primarily due to the utilization of an
inaccurate palm crease representation and challenges in balancing intra-class
variation with identity consistency. To address this, we introduce a
polynomial-based palm crease representation that provides a new palm crease
generation mechanism more closely aligned with the real distribution. We also
propose the palm creases conditioned diffusion model with a novel intra-class
variation control method. By applying our proposed $K$-step noise-sharing
sampling, we are able to synthesize palmprint datasets with large intra-class
variation and high identity consistency. Experimental results show that, for
the first time, recognition models trained solely on our synthetic datasets,
without any fine-tuning, outperform those trained on real datasets.
Furthermore, our approach achieves superior recognition performance as the
number of generated identities increases.
|
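One reading of the $K$-step noise-sharing sampling above: samples of the same identity share the first $K$ reverse-diffusion steps from one noise seed, then branch into separate stochastic trajectories for intra-class variation. A sketch under that reading (the denoiser is a stand-in; the branching rule is our interpretation of the abstract):

```python
import torch

@torch.no_grad()
def k_step_noise_sharing(denoise_step, steps, K, n_samples, shape):
    """Run K shared reverse steps from one seed, then branch into n_samples
    stochastic trajectories. `denoise_step(x, t)` stands in for one stochastic
    (DDPM-style) reverse update of a crease-conditioned diffusion model."""
    x = torch.randn(1, *shape)            # one shared seed per identity
    for t in range(steps - 1, steps - 1 - K, -1):
        x = denoise_step(x, t)            # shared steps fix the identity layout
    x = x.repeat(n_samples, 1, 1, 1)      # branch after K steps
    for t in range(steps - 1 - K, -1, -1):
        x = denoise_step(x, t)            # independent noise adds variation
    return x

# Stand-in stochastic update so the sketch runs end to end.
fake_step = lambda x, t: 0.98 * x + 0.02 * torch.randn_like(x)
samples = k_step_noise_sharing(fake_step, steps=50, K=30,
                               n_samples=4, shape=(3, 64, 64))
```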
2503.18338 | Wenrui Cai | Wenrui Cai and Qingjie Liu and Yunhong Wang | SPMTrack: Spatio-Temporal Parameter-Efficient Fine-Tuning with Mixture
of Experts for Scalable Visual Tracking | Accepted by CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most state-of-the-art trackers adopt one-stream paradigm, using a single
Vision Transformer for joint feature extraction and relation modeling of
template and search region images. However, relation modeling between different
image patches exhibits significant variations. For instance, background regions
dominated by target-irrelevant information require reduced attention
allocation, while foreground, particularly boundary areas, need to be
emphasized. A single model may not effectively handle all kinds of relation
modeling simultaneously. In this paper, we propose a novel tracker called
SPMTrack based on a mixture-of-experts tailored for the visual tracking task (TMoE),
combining the capability of multiple experts to handle diverse relation
modeling more flexibly. Benefiting from TMoE, we extend relation modeling from
image pairs to spatio-temporal context, further improving tracking accuracy
with minimal increase in model parameters. Moreover, we employ TMoE as a
parameter-efficient fine-tuning method, substantially reducing trainable
parameters, which enables us to train SPMTrack of varying scales efficiently
and preserve the generalization ability of pretrained models to achieve
superior performance. We conduct experiments on seven datasets, and
experimental results demonstrate that our method significantly outperforms
current state-of-the-art trackers. The source code is available at
https://github.com/WenRuiCai/SPMTrack.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 04:43:02 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Cai",
"Wenrui",
""
],
[
"Liu",
"Qingjie",
""
],
[
"Wang",
"Yunhong",
""
]
] | TITLE: SPMTrack: Spatio-Temporal Parameter-Efficient Fine-Tuning with Mixture
of Experts for Scalable Visual Tracking
ABSTRACT: Most state-of-the-art trackers adopt one-stream paradigm, using a single
Vision Transformer for joint feature extraction and relation modeling of
template and search region images. However, relation modeling between different
image patches exhibits significant variations. For instance, background regions
dominated by target-irrelevant information require reduced attention
allocation, while foreground, particularly boundary areas, need to be
emphasized. A single model may not effectively handle all kinds of relation
modeling simultaneously. In this paper, we propose a novel tracker called
SPMTrack based on a mixture-of-experts tailored for the visual tracking task (TMoE),
combining the capability of multiple experts to handle diverse relation
modeling more flexibly. Benefiting from TMoE, we extend relation modeling from
image pairs to spatio-temporal context, further improving tracking accuracy
with minimal increase in model parameters. Moreover, we employ TMoE as a
parameter-efficient fine-tuning method, substantially reducing trainable
parameters, which enables us to train SPMTrack of varying scales efficiently
and preserve the generalization ability of pretrained models to achieve
superior performance. We conduct experiments on seven datasets, and
experimental results demonstrate that our method significantly outperforms
current state-of-the-art trackers. The source code is available at
https://github.com/WenRuiCai/SPMTrack.
|
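A minimal mixture-of-experts layer in the spirit of TMoE, where a softmax router blends several small expert MLPs per token so that different patches (background, boundary, target) can receive different relation modeling (expert count, sizes, and dense rather than sparse routing are illustrative assumptions):

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Dense token-level mixture of experts: a learned router produces
    per-token gates that weight the outputs of small expert MLPs."""

    def __init__(self, dim=256, n_experts=4, hidden=512):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                          nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):                          # x: (batch, tokens, dim)
        gates = self.router(x).softmax(dim=-1)     # (B, T, E)
        outs = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, T, D, E)
        return (outs * gates.unsqueeze(2)).sum(dim=-1)  # gate-weighted mixture

moe = TinyMoE()
y = moe(torch.randn(2, 196, 256))                  # template + search tokens
```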
2503.18347 | Wen Zheng Terence Ng | Wen Zheng Terence Ng, Jianda Chen, Yuan Xu, Tianwei Zhang | Latent Embedding Adaptation for Human Preference Alignment in Diffusion
Planners | 8 pages | null | null | null | cs.LG cs.AI cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This work addresses the challenge of personalizing trajectories generated in
automated decision-making systems by introducing a resource-efficient approach
that enables rapid adaptation to individual users' preferences. Our method
leverages a pretrained conditional diffusion model with Preference Latent
Embeddings (PLE), trained on a large, reward-free offline dataset. The PLE
serves as a compact representation for capturing specific user preferences. By
adapting the pretrained model using our proposed preference inversion method,
which directly optimizes the learnable PLE, we achieve superior alignment with
human preferences compared to existing solutions like Reinforcement Learning
from Human Feedback (RLHF) and Low-Rank Adaptation (LoRA). To better reflect
practical applications, we create a benchmark experiment using real human
preferences on diverse, high-reward trajectories.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 05:11:58 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ng",
"Wen Zheng Terence",
""
],
[
"Chen",
"Jianda",
""
],
[
"Xu",
"Yuan",
""
],
[
"Zhang",
"Tianwei",
""
]
] | TITLE: Latent Embedding Adaptation for Human Preference Alignment in Diffusion
Planners
ABSTRACT: This work addresses the challenge of personalizing trajectories generated in
automated decision-making systems by introducing a resource-efficient approach
that enables rapid adaptation to individual users' preferences. Our method
leverages a pretrained conditional diffusion model with Preference Latent
Embeddings (PLE), trained on a large, reward-free offline dataset. The PLE
serves as a compact representation for capturing specific user preferences. By
adapting the pretrained model using our proposed preference inversion method,
which directly optimizes the learnable PLE, we achieve superior alignment with
human preferences compared to existing solutions like Reinforcement Learning
from Human Feedback (RLHF) and Low-Rank Adaptation (LoRA). To better reflect
practical applications, we create a benchmark experiment using real human
preferences on diverse, high-reward trajectories.
|
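The preference inversion described above freezes the pretrained conditional planner and optimizes only the Preference Latent Embedding against a user-preference signal. A minimal sketch with placeholder model and loss (both are stand-ins, not the paper's implementation):

```python
import torch

def preference_inversion(model, pref_loss, latent_dim=32, steps=200, lr=1e-2):
    """Optimize only the Preference Latent Embedding (PLE); the pretrained
    planner `model(ple)` stays frozen. `pref_loss(traj)` scores how well a
    generated trajectory matches the user's preferences."""
    ple = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([ple], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = pref_loss(model(ple))   # differentiate through generation
        loss.backward()
        opt.step()
    return ple.detach()

# Stand-ins so the sketch runs: a linear "planner" and a target trajectory.
W = torch.randn(10, 32)
model = lambda z: W @ z
target = torch.randn(10)
ple = preference_inversion(model, lambda traj: ((traj - target) ** 2).mean())
```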
2503.18349 | Zekai Deng | Zekai Deng, Ye Shi, Kaiyang Ji, Lan Xu, Shaoli Huang, and Jingya Wang | Human-Object Interaction with Vision-Language Model Guided Relative
Movement Dynamics | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human-Object Interaction (HOI) is vital for advancing simulation, animation,
and robotics, enabling the generation of long-term, physically plausible
motions in 3D environments. However, existing methods often fall short of
achieving physics realism and supporting diverse types of interactions. To
address these challenges, this paper introduces a unified Human-Object
Interaction framework that provides unified control over interactions with
static scenes and dynamic objects using language commands. The interactions
between human and object parts can always be described as continuous, stable
Relative Movement Dynamics (RMD) between them. By leveraging
the world knowledge and scene perception capabilities of Vision-Language Models
(VLMs), we translate language commands into RMD diagrams, which are used to
guide goal-conditioned reinforcement learning for sequential interaction with
objects. Our framework supports long-horizon interactions among dynamic,
articulated, and static objects. To support the training and evaluation of our
framework, we present a new dataset named Interplay, which includes multi-round
task plans generated by VLMs, covering both static and dynamic HOI tasks.
Extensive experiments demonstrate that our proposed framework can effectively
handle a wide range of HOI tasks, showcasing its ability to maintain long-term,
multi-round transitions. For more details, please refer to our project webpage:
https://rmd-hoi.github.io/.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 05:18:04 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Deng",
"Zekai",
""
],
[
"Shi",
"Ye",
""
],
[
"Ji",
"Kaiyang",
""
],
[
"Xu",
"Lan",
""
],
[
"Huang",
"Shaoli",
""
],
[
"Wang",
"Jingya",
""
]
] | TITLE: Human-Object Interaction with Vision-Language Model Guided Relative
Movement Dynamics
ABSTRACT: Human-Object Interaction (HOI) is vital for advancing simulation, animation,
and robotics, enabling the generation of long-term, physically plausible
motions in 3D environments. However, existing methods often fall short of
achieving physics realism and supporting diverse types of interactions. To
address these challenges, this paper introduces a unified Human-Object
Interaction framework that provides unified control over interactions with
static scenes and dynamic objects using language commands. The interactions
between human and object parts can always be described as continuous, stable
Relative Movement Dynamics (RMD) between them. By leveraging
the world knowledge and scene perception capabilities of Vision-Language Models
(VLMs), we translate language commands into RMD diagrams, which are used to
guide goal-conditioned reinforcement learning for sequential interaction with
objects. Our framework supports long-horizon interactions among dynamic,
articulated, and static objects. To support the training and evaluation of our
framework, we present a new dataset named Interplay, which includes multi-round
task plans generated by VLMs, covering both static and dynamic HOI tasks.
Extensive experiments demonstrate that our proposed framework can effectively
handle a wide range of HOI tasks, showcasing its ability to maintain long-term,
multi-round transitions. For more details, please refer to our project webpage:
https://rmd-hoi.github.io/.
|
2503.18355 | Yuto Sakai | Yuto Sakai and Qiang Ma | Food Recommendation With Balancing Comfort and Curiosity | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Food is a key pleasure of traveling, but travelers face a trade-off between
exploring curious new local food and choosing comfortable, familiar options.
This creates demand for personalized recommendation systems that balance these
competing factors. To the best of our knowledge, conventional recommendation
methods cannot provide recommendations that offer both curiosity and comfort
for food unknown to the user at a travel destination. In this study, we propose
new quantitative methods for estimating comfort and curiosity: Kernel Density
Scoring (KDS) and Mahalanobis Distance Scoring (MDS). KDS probabilistically
estimates food history distribution using kernel density estimation, while MDS
uses Mahalanobis distances between foods. These methods score food based on how
their representation vectors fit the estimated distributions. We also propose a
ranking method measuring the balance between comfort and curiosity based on
taste and ingredients. This balance is defined as curiosity (return) gained per
unit of comfort (risk) in choosing a food. To evaluate the proposed method,
we collected a new dataset containing user surveys on Japanese food and
assessments of foreign food regarding comfort and curiosity. Comparing our
methods against the existing method, the Wilcoxon signed-rank test showed that
when estimating comfort from taste and curiosity from ingredients, the
MDS-based method outperformed the Baseline, while the KDS-based method showed
no significant differences. When estimating curiosity from taste and comfort
from ingredients, both methods outperformed the Baseline. The MDS-based method
consistently outperformed KDS in ROC-AUC values.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 05:32:37 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Sakai",
"Yuto",
""
],
[
"Ma",
"Qiang",
""
]
] | TITLE: Food Recommendation With Balancing Comfort and Curiosity
ABSTRACT: Food is a key pleasure of traveling, but travelers face a trade-off between
exploring curious new local food and choosing comfortable, familiar options.
This creates demand for personalized recommendation systems that balance these
competing factors. To the best of our knowledge, conventional recommendation
methods cannot provide recommendations that offer both curiosity and comfort
for food unknown to the user at a travel destination. In this study, we propose
new quantitative methods for estimating comfort and curiosity: Kernel Density
Scoring (KDS) and Mahalanobis Distance Scoring (MDS). KDS probabilistically
estimates food history distribution using kernel density estimation, while MDS
uses Mahalanobis distances between foods. These methods score food based on how
their representation vectors fit the estimated distributions. We also propose a
ranking method measuring the balance between comfort and curiosity based on
taste and ingredients. This balance is defined as curiosity (return) gained per
unit of comfort (risk) in choosing a food. To evaluate the proposed method,
we collected a new dataset containing user surveys on Japanese food and
assessments of foreign food regarding comfort and curiosity. Comparing our
methods against the existing method, the Wilcoxon signed-rank test showed that
when estimating comfort from taste and curiosity from ingredients, the
MDS-based method outperformed the Baseline, while the KDS-based method showed
no significant differences. When estimating curiosity from taste and comfort
from ingredients, both methods outperformed the Baseline. The MDS-based method
consistently outperformed KDS in ROC-AUC values.
|
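Minimal sketches of the two scoring methods above, treating each food as a fixed-length representation vector (the concrete curiosity-per-risk ranking at the end is our illustrative reading of the "return per unit of risk" balance):

```python
import numpy as np
from scipy.stats import gaussian_kde

history = np.random.default_rng(0).normal(size=(200, 5))    # user's food vectors
candidates = np.random.default_rng(1).normal(size=(20, 5))  # destination foods

# Kernel Density Scoring (KDS): comfort ~ density of a candidate under the
# kernel-density estimate of the user's food-history distribution.
kde = gaussian_kde(history.T)
comfort_kds = kde(candidates.T)

# Mahalanobis Distance Scoring (MDS): curiosity ~ Mahalanobis distance of a
# candidate from the history distribution.
mu = history.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(history.T))
diff = candidates - mu
curiosity_mds = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

# Rank by curiosity (return) per unit of risk, taking low comfort as risk.
risk = 1.0 / (comfort_kds + 1e-12)
ranking = np.argsort(-(curiosity_mds / risk))
```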
2503.18364 | Chenxi Xie | Chenxi Xie, Minghan Li, Hui Zeng, Jun Luo, Lei Zhang | MaSS13K: A Matting-level Semantic Segmentation Benchmark | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | High-resolution semantic segmentation is essential for applications such as
image editing, bokeh imaging, AR/VR, etc. Unfortunately, existing datasets
often have limited resolution and lack precise mask details and boundaries. In
this work, we build a large-scale, matting-level semantic segmentation dataset,
named MaSS13K, which consists of 13,348 real-world images, all at 4K
resolution. MaSS13K provides high-quality mask annotations of a number of
objects, which are categorized into seven categories: human, vegetation,
ground, sky, water, building, and others. MaSS13K features precise masks, with
an average mask complexity 20-50 times higher than existing semantic
segmentation datasets. We consequently present a method specifically designed
for high-resolution semantic segmentation, namely MaSSFormer, which employs an
efficient pixel decoder that aggregates high-level semantic features and
low-level texture features across three stages, aiming to produce
high-resolution masks with minimal computational cost. Finally, we propose a
new learning paradigm, which integrates the high-quality masks of the seven
given categories with pseudo labels from new classes, enabling MaSSFormer to
transfer its accurate segmentation capability to other classes of objects. Our
proposed MaSSFormer is comprehensively evaluated on the MaSS13K benchmark
together with 14 representative segmentation models. We expect that our
meticulously annotated MaSS13K dataset and the MaSSFormer model can facilitate
research on high-resolution and high-quality semantic segmentation.
Datasets and codes can be found at https://github.com/xiechenxi99/MaSS13K.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 05:59:40 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xie",
"Chenxi",
""
],
[
"Li",
"Minghan",
""
],
[
"Zeng",
"Hui",
""
],
[
"Luo",
"Jun",
""
],
[
"Zhang",
"Lei",
""
]
] | TITLE: MaSS13K: A Matting-level Semantic Segmentation Benchmark
ABSTRACT: High-resolution semantic segmentation is essential for applications such as
image editing, bokeh imaging, AR/VR, etc. Unfortunately, existing datasets
often have limited resolution and lack precise mask details and boundaries. In
this work, we build a large-scale, matting-level semantic segmentation dataset,
named MaSS13K, which consists of 13,348 real-world images, all at 4K
resolution. MaSS13K provides high-quality mask annotations of a number of
objects, which are categorized into seven categories: human, vegetation,
ground, sky, water, building, and others. MaSS13K features precise masks, with
an average mask complexity 20-50 times higher than existing semantic
segmentation datasets. We consequently present a method specifically designed
for high-resolution semantic segmentation, namely MaSSFormer, which employs an
efficient pixel decoder that aggregates high-level semantic features and
low-level texture features across three stages, aiming to produce
high-resolution masks with minimal computational cost. Finally, we propose a
new learning paradigm, which integrates the high-quality masks of the seven
given categories with pseudo labels from new classes, enabling MaSSFormer to
transfer its accurate segmentation capability to other classes of objects. Our
proposed MaSSFormer is comprehensively evaluated on the MaSS13K benchmark
together with 14 representative segmentation models. We expect that our
meticulously annotated MaSS13K dataset and the MaSSFormer model can facilitate
research on high-resolution and high-quality semantic segmentation.
Datasets and codes can be found at https://github.com/xiechenxi99/MaSS13K.
|
2503.18370 | Dan Casas | Raquel Vidaurre, Elena Garces and Dan Casas | DiffusedWrinkles: A Diffusion-Based Model for Data-Driven Garment
Animation | BMVC 2024 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present a data-driven method for learning to generate animations of 3D
garments using a 2D image diffusion model. In contrast to existing methods,
typically based on fully connected networks, graph neural networks, or
generative adversarial networks, which have difficulty coping with
parametric garments with fine wrinkle detail, our approach is able to
synthesize high-quality 3D animations for a wide variety of garments and body
shapes, while being agnostic to the garment mesh topology. Our key idea is to
represent 3D garment deformations as a 2D layout-consistent texture that
encodes 3D offsets with respect to a parametric garment template. Using this
representation, we encode a large dataset of garments simulated in various
motions and shapes and train a novel conditional diffusion model that is able
to synthesize high-quality pose-shape-and-design dependent 3D garment
deformations. Since our model is generative, we can synthesize various
plausible deformations for a given target pose, shape, and design.
Additionally, we show that we can further condition our model using an existing
garment state, which enables the generation of temporally coherent sequences.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 06:08:26 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Vidaurre",
"Raquel",
""
],
[
"Garces",
"Elena",
""
],
[
"Casas",
"Dan",
""
]
] | TITLE: DiffusedWrinkles: A Diffusion-Based Model for Data-Driven Garment
Animation
ABSTRACT: We present a data-driven method for learning to generate animations of 3D
garments using a 2D image diffusion model. In contrast to existing methods,
typically based on fully connected networks, graph neural networks, or
generative adversarial networks, which have difficulty coping with
parametric garments with fine wrinkle detail, our approach is able to
synthesize high-quality 3D animations for a wide variety of garments and body
shapes, while being agnostic to the garment mesh topology. Our key idea is to
represent 3D garment deformations as a 2D layout-consistent texture that
encodes 3D offsets with respect to a parametric garment template. Using this
representation, we encode a large dataset of garments simulated in various
motions and shapes and train a novel conditional diffusion model that is able
to synthesize high-quality pose-shape-and-design dependent 3D garment
deformations. Since our model is generative, we can synthesize various
plausible deformations for a given target pose, shape, and design.
Additionally, we show that we can further condition our model using an existing
garment state, which enables the generation of temporally coherent sequences.
|
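A rough sketch of the representation idea at the heart of the method: baking per-vertex 3D offsets relative to a garment template into a layout-consistent 2D texture via the mesh's UV coordinates, so an image diffusion model can generate deformations. The nearest-texel splatting and array shapes below are simplifying assumptions, not the paper's pipeline.

```python
import numpy as np

def bake_offsets_to_texture(uv: np.ndarray, offsets: np.ndarray,
                            res: int = 256) -> np.ndarray:
    """Write per-vertex 3D offsets into a (res, res, 3) texture.
    uv: (n_verts, 2) coordinates in [0, 1]; offsets: (n_verts, 3)."""
    tex = np.zeros((res, res, 3), dtype=np.float32)
    ij = np.clip((uv * (res - 1)).round().astype(int), 0, res - 1)
    tex[ij[:, 1], ij[:, 0]] = offsets  # nearest-texel splat, no blending
    return tex

def unbake_texture(tex: np.ndarray, uv: np.ndarray) -> np.ndarray:
    """Recover per-vertex offsets by sampling the texture at each UV."""
    res = tex.shape[0]
    ij = np.clip((uv * (res - 1)).round().astype(int), 0, res - 1)
    return tex[ij[:, 1], ij[:, 0]]
```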
2503.18375 | Zhijie Zhang | Yunhao Quan, Chuang Gao, Nan Cheng, Zhijie Zhang, Zhisheng Yin,
Wenchao Xu, Danyang Wang | ALWNN Empowered Automatic Modulation Classification: Conquering
Complexity and Scarce Sample Conditions | null | null | null | null | cs.LG eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Automatic Modulation Classification (AMC), deep learning methods have
shown remarkable performance, offering significant advantages over traditional
approaches and demonstrating their vast potential. Nevertheless, notable
drawbacks, particularly in their high demands for storage, computational
resources, and large-scale labeled data, which limit their practical
application in real-world scenarios. To tackle this issue, this paper
innovatively proposes an automatic modulation classification model based on the
Adaptive Lightweight Wavelet Neural Network (ALWNN) and the few-shot framework
(MALWNN). The ALWNN model, by integrating the adaptive wavelet neural network
and depthwise separable convolution, reduces the number of model parameters and
computational complexity. The MALWNN framework, using ALWNN as an encoder and
incorporating prototype network technology, decreases the model's dependence on
the quantity of samples. Simulation results indicate that this model performs
remarkably well on mainstream datasets. Moreover, in terms of Floating Point
Operations Per Second (FLOPS) and Normalized Multiply-Accumulate Complexity
(NMACC), ALWNN significantly reduces computational complexity compared to
existing methods. This is further validated by real-world system tests on USRP
and Raspberry Pi platforms. Experiments with MALWNN show its superior
performance in few-shot learning scenarios compared to other algorithms.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 06:14:33 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Quan",
"Yunhao",
""
],
[
"Gao",
"Chuang",
""
],
[
"Cheng",
"Nan",
""
],
[
"Zhang",
"Zhijie",
""
],
[
"Yin",
"Zhisheng",
""
],
[
"Xu",
"Wenchao",
""
],
[
"Wang",
"Danyang",
""
]
] | TITLE: ALWNN Empowered Automatic Modulation Classification: Conquering
Complexity and Scarce Sample Conditions
ABSTRACT: In Automatic Modulation Classification (AMC), deep learning methods have
shown remarkable performance, offering significant advantages over traditional
approaches and demonstrating their vast potential. Nevertheless, these methods
have notable drawbacks, particularly their high demands for storage,
computational resources, and large-scale labeled data, which limit their
practical application in real-world scenarios. To tackle this issue, this
paper proposes a novel automatic modulation classification model based on the
Adaptive Lightweight Wavelet Neural Network (ALWNN) and the few-shot framework
(MALWNN). The ALWNN model, by integrating the adaptive wavelet neural network
and depthwise separable convolution, reduces the number of model parameters and
computational complexity. The MALWNN framework, using ALWNN as an encoder and
incorporating prototype network technology, decreases the model's dependence on
the quantity of samples. Simulation results indicate that this model performs
remarkably well on mainstream datasets. Moreover, in terms of Floating Point
Operations Per Second (FLOPS) and Normalized Multiply-Accumulate Complexity
(NMACC), ALWNN significantly reduces computational complexity compared to
existing methods. This is further validated by real-world system tests on USRP
and Raspberry Pi platforms. Experiments with MALWNN show its superior
performance in few-shot learning scenarios compared to other algorithms.
|
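The parameter savings attributed to depthwise separable convolution come from factorizing a standard convolution into a per-channel (depthwise) filter and a 1x1 (pointwise) mixing step. Below is a generic PyTorch sketch of that building block, not the ALWNN architecture itself.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise conv (one filter per channel) followed by a pointwise
    1x1 conv; parameters drop from c_in*c_out*k (standard Conv1d) to
    c_in*k + c_in*c_out."""
    def __init__(self, c_in: int, c_out: int, k: int = 3):
        super().__init__()
        self.depthwise = nn.Conv1d(c_in, c_in, k, padding=k // 2, groups=c_in)
        self.pointwise = nn.Conv1d(c_in, c_out, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

# Example: a batch of I/Q signals shaped (batch, 2 channels, 1024 samples)
x = torch.randn(8, 2, 1024)
print(DepthwiseSeparableConv1d(2, 64)(x).shape)  # torch.Size([8, 64, 1024])
```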
2503.18385 | Xudong Mou | Xudong Mou, Rui Wang, Bo Li, Tianyu Wo, Jie Sun, Hui Wang, Xudong Liu | RoCA: Robust Contrastive One-class Time Series Anomaly Detection with
Contaminated Data | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The accumulation of time-series signals and the absence of labels make
time-series Anomaly Detection (AD) a self-supervised task of deep learning.
Methods based on normality assumptions face the following three limitations:
(1) A single assumption could hardly characterize the whole normality or lead
to some deviation. (2) Some assumptions may go against the principle of AD. (3)
Their basic assumption is that the training data is uncontaminated (free of
anomalies), which is unrealistic in practice, leading to a decline in
robustness. This paper proposes a novel robust approach, RoCA, which is the
first to address all of the above three challenges, as far as we are aware. It
fuses the separate assumptions of one-class classification and contrastive
learning in a single training process to characterize a more complete so-called
normality. Additionally, it monitors the training data and computes a carefully
designed anomaly score throughout the training process. This score helps
identify latent anomalies, which are then used to define the classification
boundary, inspired by the concept of outlier exposure. The performance on AIOps
datasets improved by 6% compared to when contamination was not considered
(COCA). On two large and high-dimensional multivariate datasets, the
performance increased by 5% to 10%. RoCA achieves the highest average
performance on both univariate and multivariate datasets. The source code is
available at https://github.com/ruiking04/RoCA.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 06:52:28 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Mou",
"Xudong",
""
],
[
"Wang",
"Rui",
""
],
[
"Li",
"Bo",
""
],
[
"Wo",
"Tianyu",
""
],
[
"Sun",
"Jie",
""
],
[
"Wang",
"Hui",
""
],
[
"Liu",
"Xudong",
""
]
] | TITLE: RoCA: Robust Contrastive One-class Time Series Anomaly Detection with
Contaminated Data
ABSTRACT: The accumulation of time-series signals and the absence of labels make
time-series Anomaly Detection (AD) a self-supervised task of deep learning.
Methods based on normality assumptions face the following three limitations:
(1) A single assumption could hardly characterize the whole normality or lead
to some deviation. (2) Some assumptions may go against the principle of AD. (3)
Their basic assumption is that the training data is uncontaminated (free of
anomalies), which is unrealistic in practice, leading to a decline in
robustness. This paper proposes a novel robust approach, RoCA, which is the
first to address all of the above three challenges, as far as we are aware. It
fuses the separate assumptions of one-class classification and contrastive
learning in a single training process to characterize a more complete so-called
normality. Additionally, it monitors the training data and computes a carefully
designed anomaly score throughout the training process. This score helps
identify latent anomalies, which are then used to define the classification
boundary, inspired by the concept of outlier exposure. The performance on AIOps
datasets improved by 6% compared to when contamination was not considered
(COCA). On two large and high-dimensional multivariate datasets, the
performance increased by 5% to 10%. RoCA achieves the highest average
performance on both univariate and multivariate datasets. The source code is
available at https://github.com/ruiking04/RoCA.
|
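One way to read "fusing one-class and contrastive assumptions in a single training process" with score-based handling of contamination is sketched below. The loss weighting, the fixed contamination quantile, and the cosine-based view agreement are assumptions of this sketch, not RoCA's published formulation.

```python
import torch
import torch.nn.functional as F

def roca_style_loss(z1, z2, center, contamination_q=0.9, alpha=0.5):
    """z1, z2: (batch, dim) embeddings of two views of each time-series
    window; center: (dim,) shared normality center."""
    # One-class term: pull both views toward the normality center.
    oc = ((z1 - center) ** 2).sum(dim=1) + ((z2 - center) ** 2).sum(dim=1)
    # Contrastive term: two views of the same window should agree.
    ct = 1 - F.cosine_similarity(z1, z2, dim=1)
    score = oc + ct  # per-sample anomaly score, monitored during training
    # Down-weight likely latent anomalies (top 10% scores) in the loss.
    w = (score <= score.quantile(contamination_q)).float()
    return ((alpha * oc + (1 - alpha) * ct) * w).mean(), score
```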
2503.18393 | Xinhua Xu | Xinhua Xu, Hong Liu, Jianbing Wu, Jinfu Liu | PDDM: Pseudo Depth Diffusion Model for RGB-PD Semantic Segmentation
Based in Complex Indoor Scenes | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The integration of RGB and depth modalities significantly enhances the
accuracy of segmenting complex indoor scenes, with depth data from RGB-D
cameras playing a crucial role in this improvement. However, collecting an
RGB-D dataset is more expensive than an RGB dataset due to the need for
specialized depth sensors. Aligning depth and RGB images also poses challenges
due to sensor positioning and issues like missing data and noise. In contrast,
Pseudo Depth (PD) from high-precision depth estimation algorithms can eliminate
the dependence on RGB-D sensors and alignment processes, as well as provide
effective depth information and show significant potential in semantic
segmentation. Therefore, to explore the practicality of utilizing pseudo depth
instead of real depth for semantic segmentation, we design an RGB-PD
segmentation pipeline to integrate RGB and pseudo depth and propose a Pseudo
Depth Aggregation Module (PDAM) for fully exploiting the informative clues
provided by the diverse pseudo depth maps. The PDAM aggregates multiple pseudo
depth maps into a single modality, making it easily adaptable to other RGB-D
segmentation methods. In addition, the pre-trained diffusion model serves as a
strong feature extractor for RGB segmentation tasks, but multi-modal
diffusion-based segmentation methods remain unexplored. Therefore, we present a
Pseudo Depth Diffusion Model (PDDM) that adopts a large-scale text-image
diffusion model as a feature extractor and a simple yet effective fusion
strategy to integrate pseudo depth. To verify the applicability of pseudo depth
and our PDDM, we perform extensive experiments on the NYUv2 and SUNRGB-D
datasets. The experimental results demonstrate that pseudo depth can
effectively enhance segmentation performance, and our PDDM achieves
state-of-the-art performance, outperforming other methods by +6.98 mIoU on
NYUv2 and +2.11 mIoU on SUNRGB-D.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 07:05:31 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xu",
"Xinhua",
""
],
[
"Liu",
"Hong",
""
],
[
"Wu",
"Jianbing",
""
],
[
"Liu",
"Jinfu",
""
]
] | TITLE: PDDM: Pseudo Depth Diffusion Model for RGB-PD Semantic Segmentation
Based in Complex Indoor Scenes
ABSTRACT: The integration of RGB and depth modalities significantly enhances the
accuracy of segmenting complex indoor scenes, with depth data from RGB-D
cameras playing a crucial role in this improvement. However, collecting an
RGB-D dataset is more expensive than an RGB dataset due to the need for
specialized depth sensors. Aligning depth and RGB images also poses challenges
due to sensor positioning and issues like missing data and noise. In contrast,
Pseudo Depth (PD) from high-precision depth estimation algorithms can eliminate
the dependence on RGB-D sensors and alignment processes, as well as provide
effective depth information and show significant potential in semantic
segmentation. Therefore, to explore the practicality of utilizing pseudo depth
instead of real depth for semantic segmentation, we design an RGB-PD
segmentation pipeline to integrate RGB and pseudo depth and propose a Pseudo
Depth Aggregation Module (PDAM) for fully exploiting the informative clues
provided by the diverse pseudo depth maps. The PDAM aggregates multiple pseudo
depth maps into a single modality, making it easily adaptable to other RGB-D
segmentation methods. In addition, the pre-trained diffusion model serves as a
strong feature extractor for RGB segmentation tasks, but multi-modal
diffusion-based segmentation methods remain unexplored. Therefore, we present a
Pseudo Depth Diffusion Model (PDDM) that adopts a large-scale text-image
diffusion model as a feature extractor and a simple yet effective fusion
strategy to integrate pseudo depth. To verify the applicability of pseudo depth
and our PDDM, we perform extensive experiments on the NYUv2 and SUNRGB-D
datasets. The experimental results demonstrate that pseudo depth can
effectively enhance segmentation performance, and our PDDM achieves
state-of-the-art performance, outperforming other methods by +6.98 mIoU on
NYUv2 and +2.11 mIoU on SUNRGB-D.
|
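A plausible minimal form of aggregating several pseudo depth maps into a single modality, in the spirit of the PDAM: per-pixel softmax weights predicted from the stacked maps. The convolutional weight head is an assumption of this sketch, not the paper's exact module.

```python
import torch
import torch.nn as nn

class PseudoDepthAggregator(nn.Module):
    """Fuse K pseudo depth maps into one map via learned per-pixel weights."""
    def __init__(self, k: int):
        super().__init__()
        self.weight_net = nn.Conv2d(k, k, kernel_size=3, padding=1)

    def forward(self, depths: torch.Tensor) -> torch.Tensor:
        # depths: (batch, K, H, W), one channel per depth estimator
        w = torch.softmax(self.weight_net(depths), dim=1)
        return (w * depths).sum(dim=1, keepdim=True)  # (batch, 1, H, W)
```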
2503.18421 | Zihan Zheng | Qiang Hu, Zihan Zheng, Houqiang Zhong, Sihua Fu, Li Song,
XiaoyunZhang, Guangtao Zhai, Yanfeng Wang | 4DGC: Rate-Aware 4D Gaussian Compression for Efficient Streamable
Free-Viewpoint Video | CVPR2025 | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D Gaussian Splatting (3DGS) has substantial potential for enabling
photorealistic Free-Viewpoint Video (FVV) experiences. However, the vast number
of Gaussians and their associated attributes poses significant challenges for
storage and transmission. Existing methods typically handle dynamic 3DGS
representation and compression separately, neglecting motion information and
the rate-distortion (RD) trade-off during training, leading to performance
degradation and increased model redundancy. To address this gap, we propose
4DGC, a novel rate-aware 4D Gaussian compression framework that significantly
reduces storage size while maintaining superior RD performance for FVV.
Specifically, 4DGC introduces a motion-aware dynamic Gaussian representation
that utilizes a compact motion grid combined with sparse compensated Gaussians
to exploit inter-frame similarities. This representation effectively handles
large motions, preserving quality and reducing temporal redundancy.
Furthermore, we present an end-to-end compression scheme that employs
differentiable quantization and a tiny implicit entropy model to compress the
motion grid and compensated Gaussians efficiently. The entire framework is
jointly optimized using a rate-distortion trade-off. Extensive experiments
demonstrate that 4DGC supports variable bitrates and consistently outperforms
existing methods in RD performance across multiple datasets.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 08:05:27 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hu",
"Qiang",
""
],
[
"Zheng",
"Zihan",
""
],
[
"Zhong",
"Houqiang",
""
],
[
"Fu",
"Sihua",
""
],
[
"Song",
"Li",
""
],
[
"XiaoyunZhang",
"",
""
],
[
"Zhai",
"Guangtao",
""
],
[
"Wang",
"Yanfeng",
""
]
] | TITLE: 4DGC: Rate-Aware 4D Gaussian Compression for Efficient Streamable
Free-Viewpoint Video
ABSTRACT: 3D Gaussian Splatting (3DGS) has substantial potential for enabling
photorealistic Free-Viewpoint Video (FVV) experiences. However, the vast number
of Gaussians and their associated attributes poses significant challenges for
storage and transmission. Existing methods typically handle dynamic 3DGS
representation and compression separately, neglecting motion information and
the rate-distortion (RD) trade-off during training, leading to performance
degradation and increased model redundancy. To address this gap, we propose
4DGC, a novel rate-aware 4D Gaussian compression framework that significantly
reduces storage size while maintaining superior RD performance for FVV.
Specifically, 4DGC introduces a motion-aware dynamic Gaussian representation
that utilizes a compact motion grid combined with sparse compensated Gaussians
to exploit inter-frame similarities. This representation effectively handles
large motions, preserving quality and reducing temporal redundancy.
Furthermore, we present an end-to-end compression scheme that employs
differentiable quantization and a tiny implicit entropy model to compress the
motion grid and compensated Gaussians efficiently. The entire framework is
jointly optimized using a rate-distortion trade-off. Extensive experiments
demonstrate that 4DGC supports variable bitrates and consistently outperforms
existing methods in RD performance across multiple datasets.
|
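The abstract does not specify the differentiable quantizer, so the sketch below shows the standard straight-through estimator trick that makes rounding usable inside a jointly optimized rate-distortion objective.

```python
import torch

def ste_quantize(x: torch.Tensor, step: float = 0.01) -> torch.Tensor:
    """Straight-through estimator: round in the forward pass, identity
    gradient in the backward pass."""
    return x + (torch.round(x / step) * step - x).detach()

x = torch.randn(4, requires_grad=True)
ste_quantize(x).sum().backward()
print(x.grad)  # all ones: gradients flow through the rounding
```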
2503.18424 | Abdulrezzak Zekiye | Abdulrezzak Zekiye, Ouns Bouachir, \"Oznur \"Ozkasap, Moayad Aloqaily | ED-DAO: Energy Donation Algorithms based on Decentralized Autonomous
Organization | 6 pages, 5 figures, and 4 tables. Accepted for publication in IEEE
International Conference on Communications (IEEE ICC 2025) | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | Energy is a fundamental component of modern life, driving nearly all aspects
of daily activities. As such, the inability to access energy when needed is a
significant issue that requires innovative solutions. In this paper, we propose
ED-DAO, a novel fully transparent and community-driven decentralized autonomous
organization (DAO) designed to facilitate energy donations. We analyze the
energy donation process by exploring various approaches and categorizing them
based on both the source of donated energy and funding origins. We propose a
novel Hybrid Energy Donation (HED) algorithm, which enables contributions from
both external and internal donors. External donations are payments sourced from
entities such as charities and organizations, where energy is sourced from the
utility grid and prosumers. Internal donations, on the other hand, come from
peer contributors with surplus energy. HED prioritizes donations in the
following sequence: peer-sourced energy (P2D), utility-grid-sourced energy
(UG2D), and direct energy donations by peers (P2PD). By merging these donation
approaches, the HED algorithm increases the volume of donated energy, providing
a more effective means to address energy poverty. Experiments were conducted on
a dataset to evaluate the effectiveness of the proposed method. The results
showed that HED increased the total donated energy by at least 0.43% (64
megawatts) compared to the other algorithms (UG2D, P2D, and P2PD).
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 08:08:21 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zekiye",
"Abdulrezzak",
""
],
[
"Bouachir",
"Ouns",
""
],
[
"Özkasap",
"Öznur",
""
],
[
"Aloqaily",
"Moayad",
""
]
] | TITLE: ED-DAO: Energy Donation Algorithms based on Decentralized Autonomous
Organization
ABSTRACT: Energy is a fundamental component of modern life, driving nearly all aspects
of daily activities. As such, the inability to access energy when needed is a
significant issue that requires innovative solutions. In this paper, we propose
ED-DAO, a novel fully transparent and community-driven decentralized autonomous
organization (DAO) designed to facilitate energy donations. We analyze the
energy donation process by exploring various approaches and categorizing them
based on both the source of donated energy and funding origins. We propose a
novel Hybrid Energy Donation (HED) algorithm, which enables contributions from
both external and internal donors. External donations are payments sourced from
entities such as charities and organizations, where energy is sourced from the
utility grid and prosumers. Internal donations, on the other hand, come from
peer contributors with surplus energy. HED prioritizes donations in the
following sequence: peer-sourced energy (P2D), utility-grid-sourced energy
(UG2D), and direct energy donations by peers (P2PD). By merging these donation
approaches, the HED algorithm increases the volume of donated energy, providing
a more effective means to address energy poverty. Experiments were conducted on
a dataset to evaluate the effectiveness of the proposed method. The results
showed that HED increased the total donated energy by at least 0.43% (64
megawatts) compared to the other algorithms (UG2D, P2D, and P2PD).
|
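The HED priority sequence can be captured in a few lines. The pool/budget variables and the greedy min-allocation below are illustrative assumptions rather than the paper's interface.

```python
def hed_allocate(demand_kwh, p2d_pool, ug2d_budget, grid_price, p2pd_pool):
    """Sketch of HED's priority order: spend donated funds on peer energy
    first (P2D), then on grid energy (UG2D), then fall back to peers'
    direct energy donations (P2PD)."""
    plan = {}
    plan["P2D"] = min(demand_kwh, p2d_pool)
    remaining = demand_kwh - plan["P2D"]
    plan["UG2D"] = min(remaining, ug2d_budget / grid_price)
    remaining -= plan["UG2D"]
    plan["P2PD"] = min(remaining, p2pd_pool)
    return plan

print(hed_allocate(100, p2d_pool=40, ug2d_budget=30, grid_price=1.0,
                   p2pd_pool=50))  # {'P2D': 40, 'UG2D': 30.0, 'P2PD': 30.0}
```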
2503.18427 | Yingchen Song | Yingchen Song, Yaobin Wang, Yi Luo, Huan Wu, Pingping Tang | AES-SpMM: Balancing Accuracy and Speed by Adaptive Edge Sampling
Strategy to Accelerate SpMM in GNNs | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coordinating the design of sampling and sparse-dense matrix multiplication
(SpMM) is crucial for accelerating graph neural networks (GNNs). However, due
to irrational sampling strategies, existing methods face a trade-off between
accuracy and speed. Moreover, as computational optimizations progress, data
loading has gradually become the primary bottleneck in GNN inference. To
address these issues, we propose AES-SpMM, an adaptive edge sampling SpMM
kernel. It considers the relationship between the number of non-zero elements
in each matrix row and the shared memory width. The edge sampling scheme is
adaptively selected according to the different situations of each row. AES-SpMM
reduces the graph size through adaptive edge sampling to fit the GPU's shared
memory, lowering the computational cost and enhancing data locality, thus
balancing the accuracy and speed of GNN inference. Additionally, we introduce a
quantization-based AES-SpMM, which applies quantization and dequantization to
feature data in GNNs. This approach significantly reduces data loading time
while keeping accuracy loss negligible. We evaluated AES-SpMM with common GNN
models and datasets. The results show that AES-SpMM outperforms both the
cuSPARSE SpMM kernel and GE-SpMM by up to 25.87 times and 23.01 times,
respectively, with less than 1% accuracy loss. Compared to ES-SpMM, it reduces
accuracy loss by 3.4% on average, achieving a 1.31 times speedup. Compared to
AES-SpMM, quantization-based AES-SpMM incurs a maximum accuracy loss of 0.3%
while reducing feature data loading time overhead by 50.91%-70.51%.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 08:12:40 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Song",
"Yingchen",
""
],
[
"Wang",
"Yaobin",
""
],
[
"Luo",
"Yi",
""
],
[
"Wu",
"Huan",
""
],
[
"Tang",
"Pingping",
""
]
] | TITLE: AES-SpMM: Balancing Accuracy and Speed by Adaptive Edge Sampling
Strategy to Accelerate SpMM in GNNs
ABSTRACT: Coordinating the design of sampling and sparse-dense matrix multiplication
(SpMM) is crucial for accelerating graph neural networks (GNNs). However, due
to irrational sampling strategies, existing methods face a trade-off between
accuracy and speed. Moreover, as computational optimizations progress, data
loading has gradually become the primary bottleneck in GNN inference. To
address these issues, we propose AES-SpMM, an adaptive edge sampling SpMM
kernel. It considers the relationship between the number of non-zero elements
in each matrix row and the shared memory width. The edge sampling scheme is
adaptively selected according to the different situations of each row. AES-SpMM
reduces the graph size through adaptive edge sampling to fit the GPU's shared
memory, lowering the computational cost and enhancing data locality, thus
balancing the accuracy and speed of GNN inference. Additionally, we introduce a
quantization-based AES-SpMM, which applies quantization and dequantization to
feature data in GNNs. This approach significantly reduces data loading time
while keeping accuracy loss negligible. We evaluated AES-SpMM with common GNN
models and datasets. The results show that AES-SpMM outperforms both the
cuSPARSE SpMM kernel and GE-SpMM by up to 25.87 times and 23.01 times,
respectively, with less than 1% accuracy loss. Compared to ES-SpMM, it reduces
accuracy loss by 3.4% on average, achieving a 1.31 times speedup. Compared to
AES-SpMM, quantization-based AES-SpMM incurs a maximum accuracy loss of 0.3%
while reducing feature data loading time overhead by 50.91%-70.51%.
|
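The core adaptive rule, keep short rows intact and downsample rows whose non-zero count exceeds the shared-memory width, can be sketched on a CSR matrix as follows. Uniform sampling is an assumption of this sketch; the paper selects the sampling scheme per row situation.

```python
import numpy as np

def adaptive_edge_sample(indptr, indices, shmem_width):
    """CSR edge sampling: rows whose non-zero count fits the GPU
    shared-memory width are kept; longer rows are downsampled to fit,
    shrinking the graph before SpMM."""
    new_indptr, new_indices = [0], []
    for r in range(len(indptr) - 1):
        row = indices[indptr[r]:indptr[r + 1]]
        if len(row) > shmem_width:  # row exceeds shared memory: sample
            row = np.random.choice(row, size=shmem_width, replace=False)
        new_indices.extend(row.tolist())
        new_indptr.append(len(new_indices))
    return np.array(new_indptr), np.array(new_indices)
```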
2503.18432 | Junsong Li | Junsong Li, Jie Zhou, Yutao Yang, Bihao Zhan, Qianjun Pan, Yuyang
Ding, Qin Chen, Jiang Bo, Xin Lin, Liang He | Teaching LLMs for Step-Level Automatic Math Correction via Reinforcement
Learning | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic math correction aims to check students' solutions to mathematical
problems via artificial intelligence technologies. Most existing studies focus
on judging the final answer at the problem level, while they ignore detailed
feedback on each step in a math problem-solving process, which requires
abilities of semantic understanding and reasoning. In this paper, we propose a
reinforcement learning (RL)-based method to boost large language models (LLMs)
for step-level automatic math correction, named StepAMC. Particularly, we
convert the step-level automatic math correction within the text classification
task into an RL problem to enhance the reasoning capabilities of LLMs. Then, we
design a space-constrained policy network to improve the stability of RL. Finally,
we introduce a fine-grained reward network to convert the binary human feedback
into a continuous value. We conduct extensive experiments over two benchmark
datasets, and the results show that our model outperforms eleven strong
baselines.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 08:28:34 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Junsong",
""
],
[
"Zhou",
"Jie",
""
],
[
"Yang",
"Yutao",
""
],
[
"Zhan",
"Bihao",
""
],
[
"Pan",
"Qianjun",
""
],
[
"Ding",
"Yuyang",
""
],
[
"Chen",
"Qin",
""
],
[
"Bo",
"Jiang",
""
],
[
"Lin",
"Xin",
""
],
[
"He",
"Liang",
""
]
] | TITLE: Teaching LLMs for Step-Level Automatic Math Correction via Reinforcement
Learning
ABSTRACT: Automatic math correction aims to check students' solutions to mathematical
problems via artificial intelligence technologies. Most existing studies focus
on judging the final answer at the problem level, while they ignore detailed
feedback on each step in a math problem-solving process, which requires
abilities of semantic understanding and reasoning. In this paper, we propose a
reinforcement learning (RL)-based method to boost large language models (LLMs)
for step-level automatic math correction, named StepAMC. Particularly, we
convert the step-level automatic math correction within the text classification
task into an RL problem to enhance the reasoning capabilities of LLMs. Then, we
design a space-constrained policy network to improve the stability of RL. Finally,
we introduce a fine-grained reward network to convert the binary human feedback
into a continuous value. We conduct extensive experiments over two benchmark
datasets, and the results show that our model outperforms eleven strong
baselines.
|
2503.18438 | Guosheng Zhao | Guosheng Zhao, Xiaofeng Wang, Chaojun Ni, Zheng Zhu, Wenkang Qin, Guan
Huang, Xingang Wang | ReconDreamer++: Harmonizing Generative and Reconstructive Models for
Driving Scene Representation | Project Page: https://recondreamer-plus.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Combining reconstruction models with generative models has emerged as a
promising paradigm for closed-loop simulation in autonomous driving. For
example, ReconDreamer has demonstrated remarkable success in rendering
large-scale maneuvers. However, a significant gap remains between the generated
data and real-world sensor observations, particularly in terms of fidelity for
structured elements, such as the ground surface. To address these challenges,
we propose ReconDreamer++, an enhanced framework that significantly improves
the overall rendering quality by mitigating the domain gap and refining the
representation of the ground surface. Specifically, ReconDreamer++ introduces
the Novel Trajectory Deformable Network (NTDNet), which leverages learnable
spatial deformation mechanisms to bridge the domain gap between synthesized
novel views and original sensor observations. Moreover, for structured elements
such as the ground surface, we preserve geometric prior knowledge in 3D
Gaussians, and the optimization process focuses on refining appearance
attributes while preserving the underlying geometric structure. Experimental
evaluations conducted on multiple datasets (Waymo, nuScenes, PandaSet, and
EUVS) confirm the superior performance of ReconDreamer++. Specifically, on
Waymo, ReconDreamer++ achieves performance comparable to Street Gaussians for
the original trajectory while significantly outperforming ReconDreamer on novel
trajectories. In particular, it achieves substantial improvements, including a
6.1% increase in NTA-IoU, a 23.0% improvement in FID, and a remarkable 4.5%
gain in the ground surface metric NTL-IoU, highlighting its effectiveness in
accurately reconstructing structured elements such as the road surface.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 08:40:20 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhao",
"Guosheng",
""
],
[
"Wang",
"Xiaofeng",
""
],
[
"Ni",
"Chaojun",
""
],
[
"Zhu",
"Zheng",
""
],
[
"Qin",
"Wenkang",
""
],
[
"Huang",
"Guan",
""
],
[
"Wang",
"Xingang",
""
]
] | TITLE: ReconDreamer++: Harmonizing Generative and Reconstructive Models for
Driving Scene Representation
ABSTRACT: Combining reconstruction models with generative models has emerged as a
promising paradigm for closed-loop simulation in autonomous driving. For
example, ReconDreamer has demonstrated remarkable success in rendering
large-scale maneuvers. However, a significant gap remains between the generated
data and real-world sensor observations, particularly in terms of fidelity for
structured elements, such as the ground surface. To address these challenges,
we propose ReconDreamer++, an enhanced framework that significantly improves
the overall rendering quality by mitigating the domain gap and refining the
representation of the ground surface. Specifically, ReconDreamer++ introduces
the Novel Trajectory Deformable Network (NTDNet), which leverages learnable
spatial deformation mechanisms to bridge the domain gap between synthesized
novel views and original sensor observations. Moreover, for structured elements
such as the ground surface, we preserve geometric prior knowledge in 3D
Gaussians, and the optimization process focuses on refining appearance
attributes while preserving the underlying geometric structure. Experimental
evaluations conducted on multiple datasets (Waymo, nuScenes, PandaSet, and
EUVS) confirm the superior performance of ReconDreamer++. Specifically, on
Waymo, ReconDreamer++ achieves performance comparable to Street Gaussians for
the original trajectory while significantly outperforming ReconDreamer on novel
trajectories. In particular, it achieves substantial improvements, including a
6.1% increase in NTA-IoU, a 23.0% improvement in FID, and a remarkable 4.5%
gain in the ground surface metric NTL-IoU, highlighting its effectiveness in
accurately reconstructing structured elements such as the road surface.
|
2503.18444 | Vishnudatta Thota | Vishnudatta Thota, Swati Priya, Twinkle Tripathy | Dominant Groups and Asymmetric Polarization in Generalized
Quasi-Structurally Balanced Networks | 6 pages, 11 figures, under review in Automatica | null | null | null | eess.SY cs.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The paper focuses on the phenomenon of asymmetric polarization arising in the
presence of a dominant group in the network. The existing works in the
literature analyze polarization primarily in structurally and
quasi-structurally balanced networks. In this work, we introduce generalized
quasi-structurally balanced (GQSB) networks, which include both of these
networks as special cases. In the presence of a dominant group, a GQSB network
has a unique bipartition: the dominant group (and its allies) and the remaining
agents. The dominant group's superior influence results in an asymmetry in how
the inter-subset antagonistic interactions are perceived by both of the
subsets. This, in turn, leads to asymmetry in the final polarized opinions. To
model this behavior, we propose a generalized Laplacian flow for undirected
GQSB networks with a dominant group and establish necessary and sufficient
conditions for achieving asymmetric polarization. The theoretical results
presented in this paper are validated through numerical simulations on the
Highland Tribes real-world dataset.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 08:46:13 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Thota",
"Vishnudatta",
""
],
[
"Priya",
"Swati",
""
],
[
"Tripathy",
"Twinkle",
""
]
] | TITLE: Dominant Groups and Asymmetric Polarization in Generalized
Quasi-Structurally Balanced Networks
ABSTRACT: The paper focuses on the phenomenon of asymmetric polarization arising in the
presence of a dominant group in the network. The existing works in the
literature analyze polarization primarily in structurally and
quasi-structurally balanced networks. In this work, we introduce generalized
quasi-structurally balanced (GQSB) networks, which include both of these
networks as special cases. In the presence of a dominant group, a GQSB network
has a unique bipartition: the dominant group (and its allies) and the remaining
agents. The dominant group's superior influence results in an asymmetry in how
the inter-subset antagonistic interactions are perceived by both of the
subsets. This, in turn, leads to asymmetry in the final polarized opinions. To
model this behavior, we propose a generalized Laplacian flow for undirected
GQSB networks with a dominant group and establish necessary and sufficient
conditions for achieving asymmetric polarization. The theoretical results
presented in this paper are validated through numerical simulations on the
Highland Tribes real-world dataset.
|
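For reference, the classical signed-network Laplacian flow that a generalized Laplacian flow would extend can be written as follows; the gloss about the dominant group's gain is our illustrative reading, since the abstract does not state the exact operator.

```latex
\dot{x}(t) = -L\,x(t), \qquad
L_{ij} =
\begin{cases}
\sum_{k \neq i} |a_{ik}|, & i = j,\\[2pt]
-a_{ij}, & i \neq j,
\end{cases}
```

where $a_{ij} < 0$ encodes an antagonistic interaction. A generalized flow with a dominant group would let the two sides of the bipartition perceive the inter-subset antagonism with different gains, which is what drives the steady-state opinions to polarize asymmetrically.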
2503.18454 | Yunhong Lu | Yunhong Lu, Qichao Wang, Hengyuan Cao, Xierui Wang, Xiaoyin Xu, Min
Zhang | InPO: Inversion Preference Optimization with Reparametrized DDIM for
Efficient Diffusion Model Alignment | Accepted by CVPR2025 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Without using explicit reward, direct preference optimization (DPO) employs
paired human preference data to fine-tune generative models, a method that has
garnered considerable attention in large language models (LLMs). However,
exploration of aligning text-to-image (T2I) diffusion models with human
preferences remains limited. In comparison to supervised fine-tuning, existing
methods that align diffusion models suffer from low training efficiency and
subpar generation quality due to the long Markov chain process and the
intractability of the reverse process. To address these limitations, we
introduce DDIM-InPO, an efficient method for direct preference alignment of
diffusion models. Our approach conceptualizes the diffusion model as a single-step
generative model, allowing us to fine-tune the outputs of specific latent
variables selectively. In order to accomplish this objective, we first assign
implicit rewards to any latent variable directly via a reparameterization
technique. Then we construct an Inversion technique to estimate appropriate
latent variables for preference optimization. This modification process enables
the diffusion model to only fine-tune the outputs of latent variables that have
a strong correlation with the preference dataset. Experimental results indicate
that our DDIM-InPO achieves state-of-the-art performance with just 400 steps of
fine-tuning, surpassing all preference aligning baselines for T2I diffusion
models in human preference evaluation tasks.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 08:58:49 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lu",
"Yunhong",
""
],
[
"Wang",
"Qichao",
""
],
[
"Cao",
"Hengyuan",
""
],
[
"Wang",
"Xierui",
""
],
[
"Xu",
"Xiaoyin",
""
],
[
"Zhang",
"Min",
""
]
] | TITLE: InPO: Inversion Preference Optimization with Reparametrized DDIM for
Efficient Diffusion Model Alignment
ABSTRACT: Without using explicit reward, direct preference optimization (DPO) employs
paired human preference data to fine-tune generative models, a method that has
garnered considerable attention in large language models (LLMs). However,
exploration of aligning text-to-image (T2I) diffusion models with human
preferences remains limited. In comparison to supervised fine-tuning, existing
methods that align diffusion models suffer from low training efficiency and
subpar generation quality due to the long Markov chain process and the
intractability of the reverse process. To address these limitations, we
introduce DDIM-InPO, an efficient method for direct preference alignment of
diffusion models. Our approach conceptualizes the diffusion model as a single-step
generative model, allowing us to fine-tune the outputs of specific latent
variables selectively. In order to accomplish this objective, we first assign
implicit rewards to any latent variable directly via a reparameterization
technique. Then we construct an Inversion technique to estimate appropriate
latent variables for preference optimization. This modification process enables
the diffusion model to only fine-tune the outputs of latent variables that have
a strong correlation with the preference dataset. Experimental results indicate
that our DDIM-InPO achieves state-of-the-art performance with just 400 steps of
fine-tuning, surpassing all preference aligning baselines for T2I diffusion
models in human preference evaluation tasks.
|
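For context, the standard DPO objective on paired preferences, the formulation DDIM-InPO builds on, is sketched below; the latent-variable reparameterization and inversion steps are specific to the paper and are not reproduced here.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Generic DPO loss on (winner w, loser l) pairs: the implicit reward
    is the log-probability ratio to a frozen reference model, and the
    loss maximizes the winner-minus-loser reward margin."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()
```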
2503.18460 | Jiahui Xiang | Jiahui Xiang, Tong Ye, Peiyu Liu, Yinan Zhang, Wenhai Wang | ModiGen: A Large Language Model-Based Workflow for Multi-Task Modelica
Code Generation | null | null | null | null | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modelica is a widely adopted language for simulating complex physical
systems, yet effective model creation and optimization require substantial
domain expertise. Although large language models (LLMs) have demonstrated
promising capabilities in code generation, their application to modeling
remains largely unexplored. To address this gap, we have developed benchmark
datasets specifically designed to evaluate the performance of LLMs in
generating Modelica component models and test cases. Our evaluation reveals
substantial limitations in current LLMs, as the generated code often fails to
simulate successfully. To overcome these challenges, we propose a specialized
workflow that integrates supervised fine-tuning, graph retrieval-augmented
generation, and feedback optimization to improve the accuracy and reliability
of Modelica code generation. The evaluation results demonstrate significant
performance gains: the maximum improvement in pass@1 reached 0.3349 for the
component generation task and 0.2457 for the test case generation task. This
research underscores the potential of LLMs to advance intelligent modeling
tools and offers valuable insights for future developments in system modeling
and engineering applications.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 09:04:49 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xiang",
"Jiahui",
""
],
[
"Ye",
"Tong",
""
],
[
"Liu",
"Peiyu",
""
],
[
"Zhang",
"Yinan",
""
],
[
"Wang",
"Wenhai",
""
]
] | TITLE: ModiGen: A Large Language Model-Based Workflow for Multi-Task Modelica
Code Generation
ABSTRACT: Modelica is a widely adopted language for simulating complex physical
systems, yet effective model creation and optimization require substantial
domain expertise. Although large language models (LLMs) have demonstrated
promising capabilities in code generation, their application to modeling
remains largely unexplored. To address this gap, we have developed benchmark
datasets specifically designed to evaluate the performance of LLMs in
generating Modelica component models and test cases. Our evaluation reveals
substantial limitations in current LLMs, as the generated code often fails to
simulate successfully. To overcome these challenges, we propose a specialized
workflow that integrates supervised fine-tuning, graph retrieval-augmented
generation, and feedback optimization to improve the accuracy and reliability
of Modelica code generation. The evaluation results demonstrate significant
performance gains: the maximum improvement in pass@1 reached 0.3349 for the
component generation task and 0.2457 for the test case generation task. This
research underscores the potential of LLMs to advance intelligent modeling
tools and offers valuable insights for future developments in system modeling
and engineering applications.
|
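The gains above are reported in pass@1. For readers unfamiliar with the metric, the standard unbiased estimator from the code-generation literature (pass@1 is the k=1 case) is sketched below; the metric definition comes from that literature, not this paper.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples drawn
    from n generations (c of them correct) passes. pass@1 reduces to c/n."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=20, c=5, k=1))  # 0.25
```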
2503.18462 | Marcin Mazur | Tadeusz Dziarmaga, Marcin K\k{a}dzio{\l}ka, Artur Kasymov, and Marcin
Mazur | PALATE: Peculiar Application of the Law of Total Expectation to Enhance
the Evaluation of Deep Generative Models | null | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep generative models (DGMs) have caused a paradigm shift in the field of
machine learning, yielding noteworthy advancements in domains such as image
synthesis, natural language processing, and other related areas. However, a
comprehensive evaluation of these models that accounts for the trichotomy
between fidelity, diversity, and novelty in generated samples remains a
formidable challenge. A recently introduced solution that has emerged as a
promising approach in this regard is the Feature Likelihood Divergence (FLD), a
method that offers a theoretically motivated practical tool, yet also exhibits
some computational challenges. In this paper, we propose PALATE, a novel
enhancement to the evaluation of DGMs that addresses limitations of existing
metrics. Our approach is based on a peculiar application of the law of total
expectation to random variables representing accessible real data. When
combined with the MMD baseline metric and DINOv2 feature extractor, PALATE
offers a holistic evaluation framework that matches or surpasses
state-of-the-art solutions while providing superior computational efficiency
and scalability to large-scale datasets. Through a series of experiments, we
demonstrate the effectiveness of the PALATE enhancement, contributing a
computationally efficient, holistic evaluation approach that advances the field
of DGMs assessment, especially in detecting sample memorization and evaluating
generalization capabilities.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 09:06:45 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Dziarmaga",
"Tadeusz",
""
],
[
"Kądziołka",
"Marcin",
""
],
[
"Kasymov",
"Artur",
""
],
[
"Mazur",
"Marcin",
""
]
] | TITLE: PALATE: Peculiar Application of the Law of Total Expectation to Enhance
the Evaluation of Deep Generative Models
ABSTRACT: Deep generative models (DGMs) have caused a paradigm shift in the field of
machine learning, yielding noteworthy advancements in domains such as image
synthesis, natural language processing, and other related areas. However, a
comprehensive evaluation of these models that accounts for the trichotomy
between fidelity, diversity, and novelty in generated samples remains a
formidable challenge. A recently introduced solution that has emerged as a
promising approach in this regard is the Feature Likelihood Divergence (FLD), a
method that offers a theoretically motivated practical tool, yet also exhibits
some computational challenges. In this paper, we propose PALATE, a novel
enhancement to the evaluation of DGMs that addresses limitations of existing
metrics. Our approach is based on a peculiar application of the law of total
expectation to random variables representing accessible real data. When
combined with the MMD baseline metric and DINOv2 feature extractor, PALATE
offers a holistic evaluation framework that matches or surpasses
state-of-the-art solutions while providing superior computational efficiency
and scalability to large-scale datasets. Through a series of experiments, we
demonstrate the effectiveness of the PALATE enhancement, contributing a
computationally efficient, holistic evaluation approach that advances the field
of DGMs assessment, especially in detecting sample memorization and evaluating
generalization capabilities.
|
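The identity the method is named after is the law of total expectation, stated here for a random variable $X$ and a conditioning variable $Y$:

```latex
\mathbb{E}[X] \;=\; \mathbb{E}_{Y}\!\left[\,\mathbb{E}[X \mid Y]\,\right]
```

How PALATE instantiates $X$ and $Y$ over random variables representing accessible real data is specified in the paper itself; only the generic identity is shown here.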
2503.18463 | Sixian Ding | Sixian Ding, Xu Jiang, Zhongjing Du, Jiaqi Cui, Xinyi Zeng, Yan Wang | SIT-FER: Integration of Semantic-, Instance-, Text-level Information for
Semi-supervised Facial Expression Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semi-supervised deep facial expression recognition (SS-DFER) has gained
increasingly research interest due to the difficulty in accessing sufficient
labeled data in practical settings. However, existing SS-DFER methods mainly
utilize generated semantic-level pseudo-labels for supervised learning, the
unreliability of which compromises their performance and undermines the
practical utility. In this paper, we propose a novel SS-DFER framework that
simultaneously incorporates semantic, instance, and text-level information to
generate high-quality pseudo-labels. Specifically, for the unlabeled data,
considering the comprehensive knowledge within the textual descriptions and
instance representations, we respectively calculate the similarities between
the facial vision features and the corresponding textual and instance features
to obtain the probabilities at the text and instance levels. Combined with the
semantic-level probability, these three-level probabilities are carefully
aggregated to obtain the final pseudo-labels. Furthermore, to enhance the
utilization of one-hot labels for the labeled data, we also incorporate text
embeddings extracted from textual descriptions to co-supervise model training,
enabling facial visual features to exhibit semantic correlations in the text
space. Experiments on three datasets demonstrate that our method significantly
outperforms current state-of-the-art SS-DFER methods and even exceeds fully
supervised baselines. The code will be available at
https://github.com/PatrickStarL/SIT-FER.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 09:08:14 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ding",
"Sixian",
""
],
[
"Jiang",
"Xu",
""
],
[
"Du",
"Zhongjing",
""
],
[
"Cui",
"Jiaqi",
""
],
[
"Zeng",
"Xinyi",
""
],
[
"Wang",
"Yan",
""
]
] | TITLE: SIT-FER: Integration of Semantic-, Instance-, Text-level Information for
Semi-supervised Facial Expression Recognition
ABSTRACT: Semi-supervised deep facial expression recognition (SS-DFER) has gained
increasing research interest due to the difficulty of accessing sufficient
labeled data in practical settings. However, existing SS-DFER methods mainly
utilize generated semantic-level pseudo-labels for supervised learning, the
unreliability of which compromises their performance and undermines the
practical utility. In this paper, we propose a novel SS-DFER framework that
simultaneously incorporates semantic, instance, and text-level information to
generate high-quality pseudo-labels. Specifically, for the unlabeled data,
considering the comprehensive knowledge within the textual descriptions and
instance representations, we respectively calculate the similarities between
the facial vision features and the corresponding textual and instance features
to obtain the probabilities at the text and instance levels. Combined with the
semantic-level probability, these three-level probabilities are carefully
aggregated to obtain the final pseudo-labels. Furthermore, to enhance the
utilization of one-hot labels for the labeled data, we also incorporate text
embeddings extracted from textual descriptions to co-supervise model training,
enabling facial visual features to exhibit semantic correlations in the text
space. Experiments on three datasets demonstrate that our method significantly
outperforms current state-of-the-art SS-DFER methods and even exceeds fully
supervised baselines. The code will be available at
https://github.com/PatrickStarL/SIT-FER.
|
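A minimal sketch of the three-level pseudo-label fusion described above: text- and instance-level probabilities from cosine similarities, averaged with the semantic-level classifier output. The equal-weight average, temperature, and tensor shapes are assumptions of this sketch, not the paper's exact aggregation.

```python
import torch
import torch.nn.functional as F

def three_level_pseudo_label(v, text_emb, inst_emb, sem_prob, tau=0.07):
    """v: (batch, d) visual features; text_emb/inst_emb: (n_cls, d)
    class-wise textual/instance features; sem_prob: (batch, n_cls)
    semantic-level softmax from the classifier head."""
    vn = F.normalize(v, dim=1)
    p_text = F.softmax(vn @ F.normalize(text_emb, dim=1).T / tau, dim=1)
    p_inst = F.softmax(vn @ F.normalize(inst_emb, dim=1).T / tau, dim=1)
    fused = (p_text + p_inst + sem_prob) / 3  # equal-weight aggregation
    conf, label = fused.max(dim=1)
    return label, conf  # keep labels whose confidence clears a threshold
```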
2503.18469 | Hao Ni | Hao Ni, Lianli Gao, Pengpeng Zeng, Heng Tao Shen, Jingkuan Song | CFReID: Continual Few-shot Person Re-Identification | 16 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-world surveillance systems are dynamically evolving, requiring a person
Re-identification model to continuously handle newly incoming data from various
domains. To cope with these dynamics, Lifelong ReID (LReID) has been proposed
to learn and accumulate knowledge across multiple domains incrementally.
However, LReID models need to be trained on large-scale labeled data for each
unseen domain, which are typically inaccessible due to privacy and cost
concerns. In this paper, we propose a new paradigm called Continual Few-shot
ReID (CFReID), which requires models to be incrementally trained using few-shot
data and tested on all seen domains. Under few-shot conditions, CFReID faces
two core challenges: 1) learning knowledge from few-shot data of an unseen
domain, and 2) avoiding catastrophic forgetting of seen domains. To tackle
these two challenges, we propose a Stable Distribution Alignment (SDA)
framework from a feature distribution perspective. Specifically, our SDA is
composed of two
modules, i.e., Meta Distribution Alignment (MDA) and Prototype-based Few-shot
Adaptation (PFA). To support the study of CFReID, we establish an evaluation
benchmark for CFReID on five publicly available ReID datasets. Extensive
experiments demonstrate that our SDA can enhance the few-shot learning and
anti-forgetting capabilities under few-shot conditions. Notably, our approach,
using only 5\% of the data, i.e., 32 IDs, significantly outperforms LReID's
state-of-the-art performance, which requires 700 to 1,000 IDs.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 09:17:05 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ni",
"Hao",
""
],
[
"Gao",
"Lianli",
""
],
[
"Zeng",
"Pengpeng",
""
],
[
"Shen",
"Heng Tao",
""
],
[
"Song",
"Jingkuan",
""
]
] | TITLE: CFReID: Continual Few-shot Person Re-Identification
ABSTRACT: Real-world surveillance systems are dynamically evolving, requiring a person
Re-identification model to continuously handle newly incoming data from various
domains. To cope with these dynamics, Lifelong ReID (LReID) has been proposed
to learn and accumulate knowledge across multiple domains incrementally.
However, LReID models need to be trained on large-scale labeled data for each
unseen domain, which are typically inaccessible due to privacy and cost
concerns. In this paper, we propose a new paradigm called Continual Few-shot
ReID (CFReID), which requires models to be incrementally trained using few-shot
data and tested on all seen domains. Under few-shot conditions, CFReID faces
two core challenges: 1) learning knowledge from few-shot data of an unseen
domain, and 2) avoiding catastrophic forgetting of seen domains. To tackle
these two challenges, we propose a Stable Distribution Alignment (SDA)
framework from a feature distribution perspective. Specifically, our SDA is
composed of two
modules, i.e., Meta Distribution Alignment (MDA) and Prototype-based Few-shot
Adaptation (PFA). To support the study of CFReID, we establish an evaluation
benchmark for CFReID on five publicly available ReID datasets. Extensive
experiments demonstrate that our SDA can enhance the few-shot learning and
anti-forgetting capabilities under few-shot conditions. Notably, our approach,
using only 5\% of the data, i.e., 32 IDs, significantly outperforms LReID's
state-of-the-art performance, which requires 700 to 1,000 IDs.
|
2503.18478 | Yan Shu | Xiangrui Liu, Yan Shu, Zheng Liu, Ao Li, Yang Tian, Bo Zhao | Video-XL-Pro: Reconstructive Token Compression for Extremely Long Video
Understanding | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Despite advanced token compression techniques, existing multimodal large
language models (MLLMs) still struggle with hour-long video understanding. In
this work, we propose Video-XL-Pro, an efficient method for extremely long
video understanding, built upon Reconstructive Compression of Tokens (ReCoT), a
learnable module that leverages self-supervised learning to generate
comprehensive and compact video tokens. ReCoT introduces two key components:
(i) Dynamic Token Synthesizer (DTS): DTS generates pseudo-video tokens from
static image tokens by learning intra-token relationships, which are then used
in masked video modeling. (ii) Semantic-Guided Masking (SGM): SGM adaptively
masks redundant visual tokens to facilitate more effective reconstructive
learning. To improve training efficiency in MLLM fine-tuning, we introduce a
video-specific dataset pruning strategy and design a simple yet effective Query-aware
Selector that enables the model to precisely locate query-relevant video
tokens. With only 3B parameters, Video-XL-Pro outperforms most 7B models
trained on larger datasets across multiple long video understanding benchmarks.
Moreover, it can process over 8K frames on a single A100 GPU while maintaining
high-quality performance.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 09:21:48 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Xiangrui",
""
],
[
"Shu",
"Yan",
""
],
[
"Liu",
"Zheng",
""
],
[
"Li",
"Ao",
""
],
[
"Tian",
"Yang",
""
],
[
"Zhao",
"Bo",
""
]
] | TITLE: Video-XL-Pro: Reconstructive Token Compression for Extremely Long Video
Understanding
ABSTRACT: Despite advanced token compression techniques, existing multimodal large
language models (MLLMs) still struggle with hour-long video understanding. In
this work, we propose Video-XL-Pro, an efficient method for extremely long
video understanding, built upon Reconstructive Compression of Tokens (ReCoT), a
learnable module that leverages self-supervised learning to generate
comprehensive and compact video tokens. ReCoT introduces two key components:
(i) Dynamic Token Synthesizer (DTS): DTS generates pseudo-video tokens from
static image tokens by learning intra-token relationships, which are then used
in masked video modeling. (ii) Semantic-Guided Masking (SGM): SGM adaptively
masks redundant visual tokens to facilitate more effective reconstructive
learning. To improve training efficiency in MLLM fine-tuning, we introduce a
video-specific dataset pruning strategy and design a simple yet effective Query-aware
Selector that enables the model to precisely locate query-relevant video
tokens. With only 3B parameters, Video-XL-Pro outperforms most 7B models
trained on larger datasets across multiple long video understanding benchmarks.
Moreover, it can process over 8K frames on a single A100 GPU while maintaining
high-quality performance.
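ReCoT's semantic-guided masking is only summarized above; the sketch below shows a generic top-k saliency masking scheme of the kind such modules often use. The scoring function and keep ratio are assumptions, not the paper's design.

```python
import torch

def semantic_guided_mask(tokens, scores, keep_ratio=0.25):
    # tokens: (N, D) visual tokens; scores: (N,) saliency per token.
    # Keep the top-k most salient tokens, mask (drop) the redundant rest.
    n_keep = max(1, int(tokens.size(0) * keep_ratio))
    keep_idx = scores.topk(n_keep).indices
    mask = torch.zeros(tokens.size(0), dtype=torch.bool)
    mask[keep_idx] = True
    return tokens[mask], mask  # visible tokens and the boolean keep-mask

tokens = torch.randn(196, 768)   # e.g. one frame of ViT patch tokens
scores = tokens.norm(dim=1)      # stand-in saliency: token magnitude
visible, mask = semantic_guided_mask(tokens, scores)
```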
|
2503.18491 | Shuo Yang | Shuo Yang, Siwen Luo, Soyeon Caren Han, Eduard Hovy | MAGIC-VQA: Multimodal And Grounded Inference with Commonsense Knowledge
for Visual Question Answering | 8 Pages, 5 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Visual Question Answering (VQA) requires reasoning across visual and textual
modalities, yet Large Vision-Language Models (LVLMs) often lack integrated
commonsense knowledge, limiting their robustness in real-world scenarios. To
address this, we introduce MAGIC-VQA, a novel framework that enhances VQA by
systematically integrating commonsense knowledge with LVLMs. MAGIC-VQA employs
a three-stage process: (1) Explicit Knowledge Integration from external
sources, (2) By-Type Post-Processing for contextual refinement, and (3)
Implicit Knowledge Augmentation using a Graph Neural Network (GNN) for
structured reasoning. GNNs bring greater depth to structured inference,
enabling relational reasoning beyond what LVLMs alone can capture. MAGIC-VQA
bridges a key gap by unifying commonsense knowledge with LVLM-driven reasoning, eliminating
the need for extensive pre-training or complex prompt tuning. Our framework
achieves state-of-the-art performance on benchmark datasets, significantly
improving commonsense reasoning in VQA.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 09:45:26 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yang",
"Shuo",
""
],
[
"Luo",
"Siwen",
""
],
[
"Han",
"Soyeon Caren",
""
],
[
"Hovy",
"Eduard",
""
]
] | TITLE: MAGIC-VQA: Multimodal And Grounded Inference with Commonsense Knowledge
for Visual Question Answering
ABSTRACT: Visual Question Answering (VQA) requires reasoning across visual and textual
modalities, yet Large Vision-Language Models (LVLMs) often lack integrated
commonsense knowledge, limiting their robustness in real-world scenarios. To
address this, we introduce MAGIC-VQA, a novel framework that enhances VQA by
systematically integrating commonsense knowledge with LVLMs. MAGIC-VQA employs
a three-stage process: (1) Explicit Knowledge Integration from external
sources, (2) By-Type Post-Processing for contextual refinement, and (3)
Implicit Knowledge Augmentation using a Graph Neural Network (GNN) for
structured reasoning. GNNs bring greater depth to structured inference,
enabling relational reasoning beyond what LVLMs alone can capture. MAGIC-VQA
bridges a key gap by unifying commonsense knowledge with LVLM-driven reasoning, eliminating
the need for extensive pre-training or complex prompt tuning. Our framework
achieves state-of-the-art performance on benchmark datasets, significantly
improving commonsense reasoning in VQA.
|
2503.18502 | Jose Manuel Gomez-Perez Dr. | Andr\'es Garc\'ia-Silva and Jos\'e Manuel G\'omez-P\'erez | Autoregressive Language Models for Knowledge Base Population: A case
study in the space mission domain | Pre-print version | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Knowledge base population (KBP) plays a crucial role in populating and
keeping knowledge bases up-to-date in organizations by leveraging domain
corpora. Motivated by the increasingly large context windows supported by large
language models, we propose to fine-tune an autoregressive language model for
end-to-end KBP. Our case study involves the population of a space mission
knowledge graph. To fine-tune the model we generate a dataset for end-to-end
KBP tapping into existing domain resources. Our case study shows that
fine-tuned language models of limited size can achieve competitive and even
higher accuracy than larger models in the KBP task. Smaller models specialized
for KBP offer affordable deployment and lower-cost inference. Moreover, KBP
specialist models do not require the ontology to be included in the prompt,
allowing for more space in the context for additional input text or output
serialization.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 09:58:44 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"García-Silva",
"Andrés",
""
],
[
"Gómez-Pérez",
"José Manuel",
""
]
] | TITLE: Autoregressive Language Models for Knowledge Base Population: A case
study in the space mission domain
ABSTRACT: Knowledge base population (KBP) plays a crucial role in populating and
keeping knowledge bases up-to-date in organizations by leveraging domain
corpora. Motivated by the increasingly large context windows supported by large
language models, we propose to fine-tune an autoregressive language model for
end-to-end KBP. Our case study involves the population of a space mission
knowledge graph. To fine-tune the model we generate a dataset for end-to-end
KBP tapping into existing domain resources. Our case study shows that
fine-tuned language models of limited size can achieve competitive and even
higher accuracy than larger models in the KBP task. Smaller models specialized
for KBP offer affordable deployment and lower-cost inference. Moreover, KBP
specialist models do not require the ontology to be included in the prompt,
allowing for more space in the context for additional input text or output
serialization.
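As a minimal sketch of end-to-end KBP with a causal LM (the checkpoint, prompt template, and triple serialization below are placeholders, not the authors' setup):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical KBP-specialist checkpoint; any causal LM id would do here.
model_id = "gpt2"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# End-to-end KBP framed as text -> serialized triples, no ontology in prompt.
doc = "The Rosetta spacecraft was launched in 2004 by ESA."
prompt = f"Extract triples:\n{doc}\nTriples:"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=48, do_sample=False,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:]))
# A fine-tuned specialist would emit e.g. (Rosetta, launchYear, 2004); ...
```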
|
2503.18503 | Jiate Li | Jiate Li, Meng Pang, Yun Dong, Binghui Wang | Deterministic Certification of Graph Neural Networks against Graph
Poisoning Attacks with Arbitrary Perturbations | Accepted at CVPR 2025 | null | null | null | cs.LG cs.CR | http://creativecommons.org/licenses/by/4.0/ | Graph neural networks (GNNs) are becoming the de facto method to learn on
graph data and have achieved the state-of-the-art on node and graph
classification tasks. However, recent works show GNNs are vulnerable to
training-time poisoning attacks -- marginally perturbing edges, nodes, and/or
node features of training graph(s) can largely degrade GNNs' testing
performance. Most previous defenses against graph poisoning attacks are
empirical and are soon broken by adaptive / stronger ones. A few provable
defenses provide robustness guarantees, but have large gaps when applied in
practice: 1) restrict the attacker on only one type of perturbation; 2) design
for a particular GNN architecture or task; and 3) robustness guarantees are not
100\% accurate.
In this work, we bridge all these gaps by developing PGNNCert, the first
certified defense of GNNs against poisoning attacks under arbitrary (edge,
node, and node feature) perturbations with deterministic robustness guarantees.
Extensive evaluations on multiple node and graph classification datasets and
GNNs demonstrate the effectiveness of PGNNCert to provably defend against
arbitrary poisoning perturbations. PGNNCert is also shown to significantly
outperform the state-of-the-art certified defenses against edge perturbation or
node perturbation during GNN training.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 09:59:44 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Jiate",
""
],
[
"Pang",
"Meng",
""
],
[
"Dong",
"Yun",
""
],
[
"Wang",
"Binghui",
""
]
] | TITLE: Deterministic Certification of Graph Neural Networks against Graph
Poisoning Attacks with Arbitrary Perturbations
ABSTRACT: Graph neural networks (GNNs) are becoming the de facto method to learn on
graph data and have achieved the state-of-the-art on node and graph
classification tasks. However, recent works show GNNs are vulnerable to
training-time poisoning attacks -- marginally perturbing edges, nodes, and/or
node features of training graph(s) can largely degrade GNNs' testing
performance. Most previous defenses against graph poisoning attacks are
empirical and are soon broken by adaptive / stronger ones. A few provable
defenses provide robustness guarantees, but have large gaps when applied in
practice: 1) restrict the attacker on only one type of perturbation; 2) design
for a particular GNN architecture or task; and 3) robustness guarantees are not
100\% accurate.
In this work, we bridge all these gaps by developing PGNNCert, the first
certified defense of GNNs against poisoning attacks under arbitrary (edge,
node, and node feature) perturbations with deterministic robustness guarantees.
Extensive evaluations on multiple node and graph classification datasets and
GNNs demonstrate the effectiveness of PGNNCert to provably defend against
arbitrary poisoning perturbations. PGNNCert is also shown to significantly
outperform the state-of-the-art certified defenses against edge perturbation or
node perturbation during GNN training.
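PGNNCert's exact construction is not given here; as a generic sketch of the voting-based certification family it belongs to, one can deterministically hash training edges into disjoint subgraphs, train one sub-classifier per subgraph, and certify via the majority-vote gap:

```python
import hashlib
from collections import Counter

def partition_id(edge, num_parts):
    # Deterministically hash an undirected edge into one of num_parts
    # subgraphs, so any single poisoned edge affects only one voter.
    key = f"{min(edge)}-{max(edge)}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_parts

def vote(predictions):
    # Majority vote over the sub-classifiers' labels; the gap between the
    # top two counts bounds how many corrupted voters the label tolerates.
    counts = Counter(predictions).most_common()
    top_label, top = counts[0]
    runner_up = counts[1][1] if len(counts) > 1 else 0
    return top_label, (top - runner_up) // 2

edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
assignments = [partition_id(e, num_parts=3) for e in edges]
label, tolerance = vote([0, 0, 1])   # toy sub-classifier outputs
```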
|
2503.18512 | Leheng Zhang | Leheng Zhang, Weiyi You, Kexuan Shi, Shuhang Gu | Uncertainty-guided Perturbation for Image Super-Resolution Diffusion
Model | Accepted to CVPR 2025 | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion-based image super-resolution methods have demonstrated significant
advantages over GAN-based approaches, particularly in terms of perceptual
quality. Building upon a lengthy Markov chain, diffusion-based methods possess
remarkable modeling capacity, enabling them to achieve outstanding performance
in real-world scenarios. Unlike previous methods that focus on modifying the
noise schedule or sampling process to enhance performance, our approach
emphasizes the improved utilization of LR information. We find that different
regions of the LR image can be viewed as corresponding to different timesteps
in a diffusion process, where flat areas are closer to the target HR
distribution but edge and texture regions are farther away. In these flat
areas, applying a slight noise is more advantageous for the reconstruction. We
associate this characteristic with uncertainty and propose to apply uncertainty
estimate to guide region-specific noise level control, a technique we refer to
as Uncertainty-guided Noise Weighting. Pixels with lower uncertainty (i.e.,
flat regions) receive reduced noise to preserve more LR information, therefore
improving performance. Furthermore, we modify the network architecture of
previous methods to develop our Uncertainty-guided Perturbation
Super-Resolution (UPSR) model. Extensive experimental results demonstrate that,
despite reduced model size and training overhead, the proposed UPSR method
outperforms current state-of-the-art methods across various datasets, both
quantitatively and qualitatively.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 10:07:16 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Leheng",
""
],
[
"You",
"Weiyi",
""
],
[
"Shi",
"Kexuan",
""
],
[
"Gu",
"Shuhang",
""
]
] | TITLE: Uncertainty-guided Perturbation for Image Super-Resolution Diffusion
Model
ABSTRACT: Diffusion-based image super-resolution methods have demonstrated significant
advantages over GAN-based approaches, particularly in terms of perceptual
quality. Building upon a lengthy Markov chain, diffusion-based methods possess
remarkable modeling capacity, enabling them to achieve outstanding performance
in real-world scenarios. Unlike previous methods that focus on modifying the
noise schedule or sampling process to enhance performance, our approach
emphasizes the improved utilization of LR information. We find that different
regions of the LR image can be viewed as corresponding to different timesteps
in a diffusion process, where flat areas are closer to the target HR
distribution but edge and texture regions are farther away. In these flat
areas, applying a slight noise is more advantageous for the reconstruction. We
associate this characteristic with uncertainty and propose to apply uncertainty
estimates to guide region-specific noise level control, a technique we refer to
as Uncertainty-guided Noise Weighting. Pixels with lower uncertainty (i.e.,
flat regions) receive reduced noise to preserve more LR information, therefore
improving performance. Furthermore, we modify the network architecture of
previous methods to develop our Uncertainty-guided Perturbation
Super-Resolution (UPSR) model. Extensive experimental results demonstrate that,
despite reduced model size and training overhead, the proposed UPSR method
outperforms current state-of-the-art methods across various datasets, both
quantitatively and qualitatively.
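The uncertainty-guided noise weighting can be sketched as scaling the injected Gaussian noise by a per-pixel uncertainty map; the linear weighting below is an assumption for illustration:

```python
import torch

def uncertainty_weighted_noise(lr_up, uncertainty, base_sigma=1.0):
    # lr_up: (B, C, H, W) upsampled LR image; uncertainty: (B, 1, H, W) in [0, 1].
    # Flat, low-uncertainty pixels receive weaker noise, preserving LR detail;
    # edge/texture pixels (high uncertainty) are perturbed more strongly.
    sigma = base_sigma * uncertainty          # region-specific noise level
    noise = torch.randn_like(lr_up) * sigma
    return lr_up + noise

x = torch.rand(1, 3, 64, 64)
u = torch.rand(1, 1, 64, 64)                  # stand-in uncertainty estimate
perturbed = uncertainty_weighted_noise(x, u)
```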
|
2503.18528 | Moein Sorkhei | Moein Sorkhei, Christos Matsoukas, Johan Fredin Haslum, Kevin Smith | k-NN as a Simple and Effective Estimator of Transferability | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How well can one expect transfer learning to work in a new setting where the
domain is shifted, the task is different, and the architecture changes? Many
transfer learning metrics have been proposed to answer this question. But how
accurate are their predictions in a realistic new setting? We conducted an
extensive evaluation involving over 42,000 experiments comparing 23
transferability metrics across 16 different datasets to assess their ability to
predict transfer performance. Our findings reveal that none of the existing
metrics perform well across the board. However, we find that a simple k-nearest
neighbor evaluation -- as is commonly used to evaluate feature quality for
self-supervision -- not only surpasses existing metrics, but also offers better
computational efficiency and ease of implementation.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 10:35:11 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Sorkhei",
"Moein",
""
],
[
"Matsoukas",
"Christos",
""
],
[
"Haslum",
"Johan Fredin",
""
],
[
"Smith",
"Kevin",
""
]
] | TITLE: k-NN as a Simple and Effective Estimator of Transferability
ABSTRACT: How well can one expect transfer learning to work in a new setting where the
domain is shifted, the task is different, and the architecture changes? Many
transfer learning metrics have been proposed to answer this question. But how
accurate are their predictions in a realistic new setting? We conducted an
extensive evaluation involving over 42,000 experiments comparing 23
transferability metrics across 16 different datasets to assess their ability to
predict transfer performance. Our findings reveal that none of the existing
metrics perform well across the board. However, we find that a simple k-nearest
neighbor evaluation -- as is commonly used to evaluate feature quality for
self-supervision -- not only surpasses existing metrics, but also offers better
computational efficiency and ease of implementation.
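The k-NN evaluation the paper advocates is only a few lines; a sketch with scikit-learn (feature extraction elided, random stand-ins for the frozen embeddings):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# feats: frozen embeddings of target-task images from the source model;
# labels: target-task labels. Random stand-ins here.
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 512))
labels = rng.integers(0, 10, size=500)

# Higher k-NN accuracy on the target data ~ better expected transferability.
knn = KNeighborsClassifier(n_neighbors=5, metric="cosine")
score = cross_val_score(knn, feats, labels, cv=5).mean()
print(f"k-NN transferability estimate: {score:.3f}")
```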
|
2503.18533 | Dawei Yan | Dawei Yan, Yang Li, Qing-Guo Chen, Weihua Luo, Peng Wang, Haokui
Zhang, Chunhua Shen | MMCR: Advancing Visual Language Model in Multimodal Multi-Turn
Contextual Reasoning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Compared to single-turn dialogue, multi-turn dialogue involving multiple
images better aligns with the needs of real-world human-AI interactions.
Additionally, as training data, it provides richer contextual reasoning
information, thereby guiding the model to achieve better performance. However,
existing vision-language models (VLMs) primarily rely on single-turn dialogue
training and evaluation benchmarks. In this paper, following the
characteristics of human dialogue, such as focused topics and concise, clear
content, we present MMCR (Multimodal Multi-turn Contextual Reasoning), a novel
dataset comprising: (1) MMCR-310k -- the largest multi-image multi-turn
instruction tuning dataset with 310K contextual dialogues, each covering 1-4
images and 4 or 8 dialogue turns; and (2) MMCR-Bench -- a diagnostic benchmark
featuring dialogues spanning 8 domains (Humanities, Natural, Science,
Education, etc.) and 40 sub-topics. Extensive evaluations demonstrate that
models fine-tuned with MMCR-310k achieve 5.2\% higher contextual accuracy on
MMCR-Bench, while showing consistent improvements on existing benchmarks
(+1.1\% on AI2D, +1.2\% on MMMU and MMVet). MMCR and prompt engineering will be
released publicly.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 10:40:33 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yan",
"Dawei",
""
],
[
"Li",
"Yang",
""
],
[
"Chen",
"Qing-Guo",
""
],
[
"Luo",
"Weihua",
""
],
[
"Wang",
"Peng",
""
],
[
"Zhang",
"Haokui",
""
],
[
"Shen",
"Chunhua",
""
]
] | TITLE: MMCR: Advancing Visual Language Model in Multimodal Multi-Turn
Contextual Reasoning
ABSTRACT: Compared to single-turn dialogue, multi-turn dialogue involving multiple
images better aligns with the needs of real-world human-AI interactions.
Additionally, as training data, it provides richer contextual reasoning
information, thereby guiding the model to achieve better performance. However,
existing vision-language models (VLMs) primarily rely on single-turn dialogue
training and evaluation benchmarks. In this paper, following the
characteristics of human dialogue, such as focused topics and concise, clear
content, we present MMCR (Multimodal Multi-turn Contextual Reasoning), a novel
dataset comprising: (1) MMCR-310k -- the largest multi-image multi-turn
instruction tuning dataset with 310K contextual dialogues, each covering 1-4
images and 4 or 8 dialogue turns; and (2) MMCR-Bench -- a diagnostic benchmark
featuring dialogues spanning 8 domains (Humanities, Natural, Science,
Education, etc.) and 40 sub-topics. Extensive evaluations demonstrate that
models fine-tuned with MMCR-310k achieve 5.2\% higher contextual accuracy on
MMCR-Bench, while showing consistent improvements on existing benchmarks
(+1.1\% on AI2D, +1.2\% on MMMU and MMVet). MMCR and prompt engineering will be
released publicly.
|
2503.18536 | Erjian Guo | Erjian Guo, Zhen Zhao, Zicheng Wang, Tong Chen, Yunyi Liu, Luping Zhou | DiN: Diffusion Model for Robust Medical VQA with Semantic Noisy Labels | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical Visual Question Answering (Med-VQA) systems benefit the
interpretation of medical images containing critical clinical information.
However, the challenge of noisy labels and limited high-quality datasets
remains underexplored. To address this, we establish the first benchmark for
noisy labels in Med-VQA by simulating human mislabeling with semantically
designed noise types. More importantly, we introduce the DiN framework, which
leverages a diffusion model to handle noisy labels in Med-VQA. Unlike the
dominant classification-based VQA approaches that directly predict answers, our
Answer Diffuser (AD) module employs a coarse-to-fine process, refining answer
candidates with a diffusion model for improved accuracy. The Answer Condition
Generator (ACG) further enhances this process by generating task-specific
conditional information via integrating answer embeddings with fused
image-question features. To address label noise, our Noisy Label
Refinement (NLR) module introduces a robust loss function and dynamic answer
adjustment to further boost the performance of the AD module.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 10:42:48 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Guo",
"Erjian",
""
],
[
"Zhao",
"Zhen",
""
],
[
"Wang",
"Zicheng",
""
],
[
"Chen",
"Tong",
""
],
[
"Liu",
"Yunyi",
""
],
[
"Zhou",
"Luping",
""
]
] | TITLE: DiN: Diffusion Model for Robust Medical VQA with Semantic Noisy Labels
ABSTRACT: Medical Visual Question Answering (Med-VQA) systems benefit the
interpretation of medical images containing critical clinical information.
However, the challenge of noisy labels and limited high-quality datasets
remains underexplored. To address this, we establish the first benchmark for
noisy labels in Med-VQA by simulating human mislabeling with semantically
designed noise types. More importantly, we introduce the DiN framework, which
leverages a diffusion model to handle noisy labels in Med-VQA. Unlike the
dominant classification-based VQA approaches that directly predict answers, our
Answer Diffuser (AD) module employs a coarse-to-fine process, refining answer
candidates with a diffusion model for improved accuracy. The Answer Condition
Generator (ACG) further enhances this process by generating task-specific
conditional information via integrating answer embeddings with fused
image-question features. To address label noise, our Noisy Label
Refinement (NLR) module introduces a robust loss function and dynamic answer
adjustment to further boost the performance of the AD module.
|
2503.18540 | Guneet Mutreja | Guneet Mutreja, Philipp Schuegraf, Ksenia Bittner | HiRes-FusedMIM: A High-Resolution RGB-DSM Pre-trained Model for
Building-Level Remote Sensing Applications | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advances in self-supervised learning have led to the development of
foundation models that have significantly advanced performance in various
computer vision tasks. However, despite their potential, these models often
overlook the crucial role of high-resolution digital surface models (DSMs) in
understanding urban environments, particularly for building-level analysis,
which is essential for applications like digital twins. To address this gap, we
introduce HiRes-FusedMIM, a novel pre-trained model specifically designed to
leverage the rich information contained within high-resolution RGB and DSM
data. HiRes-FusedMIM utilizes a dual-encoder simple masked image modeling
(SimMIM) architecture with a multi-objective loss function that combines
reconstruction and contrastive objectives, enabling it to learn powerful, joint
representations from both modalities. We conducted a comprehensive evaluation
of HiRes-FusedMIM on a diverse set of downstream tasks, including
classification, semantic segmentation, and instance segmentation. Our results
demonstrate that: 1) HiRes-FusedMIM outperforms previous state-of-the-art
geospatial methods on several building-related datasets, including WHU Aerial
and LoveDA, demonstrating its effectiveness in capturing and leveraging
fine-grained building information; 2) Incorporating DSMs during pre-training
consistently improves performance compared to using RGB data alone,
highlighting the value of elevation information for building-level analysis; 3)
The dual-encoder architecture of HiRes-FusedMIM, with separate encoders for RGB
and DSM data, significantly outperforms a single-encoder model on the Vaihingen
segmentation task, indicating the benefits of learning specialized
representations for each modality. To facilitate further research and
applications in this direction, we will publicly release the trained model
weights.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 10:49:55 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Mutreja",
"Guneet",
""
],
[
"Schuegraf",
"Philipp",
""
],
[
"Bittner",
"Ksenia",
""
]
] | TITLE: HiRes-FusedMIM: A High-Resolution RGB-DSM Pre-trained Model for
Building-Level Remote Sensing Applications
ABSTRACT: Recent advances in self-supervised learning have led to the development of
foundation models that have significantly advanced performance in various
computer vision tasks. However, despite their potential, these models often
overlook the crucial role of high-resolution digital surface models (DSMs) in
understanding urban environments, particularly for building-level analysis,
which is essential for applications like digital twins. To address this gap, we
introduce HiRes-FusedMIM, a novel pre-trained model specifically designed to
leverage the rich information contained within high-resolution RGB and DSM
data. HiRes-FusedMIM utilizes a dual-encoder simple masked image modeling
(SimMIM) architecture with a multi-objective loss function that combines
reconstruction and contrastive objectives, enabling it to learn powerful, joint
representations from both modalities. We conducted a comprehensive evaluation
of HiRes-FusedMIM on a diverse set of downstream tasks, including
classification, semantic segmentation, and instance segmentation. Our results
demonstrate that: 1) HiRes-FusedMIM outperforms previous state-of-the-art
geospatial methods on several building-related datasets, including WHU Aerial
and LoveDA, demonstrating its effectiveness in capturing and leveraging
fine-grained building information; 2) Incorporating DSMs during pre-training
consistently improves performance compared to using RGB data alone,
highlighting the value of elevation information for building-level analysis; 3)
The dual-encoder architecture of HiRes-FusedMIM, with separate encoders for RGB
and DSM data, significantly outperforms a single-encoder model on the Vaihingen
segmentation task, indicating the benefits of learning specialized
representations for each modality. To facilitate further research and
applications in this direction, we will publicly release the trained model
weights.
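The multi-objective loss is described only verbally; a schematic combination of SimMIM-style reconstruction with a cross-modal contrastive term might look like the following (shapes and the weighting are assumptions):

```python
import torch
import torch.nn.functional as F

def fused_mim_loss(rgb_pred, rgb_target, dsm_pred, dsm_target,
                   z_rgb, z_dsm, temperature=0.07, alpha=0.5):
    # Masked-patch reconstruction for each modality (SimMIM-style L1).
    rec = F.l1_loss(rgb_pred, rgb_target) + F.l1_loss(dsm_pred, dsm_target)
    # InfoNCE between global RGB and DSM embeddings of the same scene.
    z_rgb = F.normalize(z_rgb, dim=1)
    z_dsm = F.normalize(z_dsm, dim=1)
    logits = z_rgb @ z_dsm.t() / temperature
    targets = torch.arange(z_rgb.size(0))
    con = F.cross_entropy(logits, targets)
    return rec + alpha * con

loss = fused_mim_loss(torch.rand(4, 3, 192, 192), torch.rand(4, 3, 192, 192),
                      torch.rand(4, 1, 192, 192), torch.rand(4, 1, 192, 192),
                      torch.randn(4, 256), torch.randn(4, 256))
```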
|
2503.18542 | Nathan Clarke | Nathan Clarke, Gaseb Alotibi, Dany Joy, Fudong Li, Steven Furnell, Ali
Alshumrani, Hussan Mohammed | An Identity and Interaction Based Network Forensic Analysis | null | null | null | null | cs.CR cs.AI | http://creativecommons.org/licenses/by/4.0/ | In today's landscape of increasing electronic crime, network forensics plays a
pivotal role in digital investigations. It aids in understanding which systems
to analyse and serves as a supplement to evidence found through more
traditional computer-based investigations. However, the nature and
functionality of the existing Network Forensic Analysis Tools (NFATs) fall
short compared to File System Forensic Analysis Tools (FS FATs) in providing
usable data. The analysis tends to focus upon IP addresses, which are not
synonymous with user identities, a point of significant interest to
investigators. This paper presents several experiments designed to create a
novel NFAT approach that can identify users and understand how they are using
network based applications whilst the traffic remains encrypted. The
experiments build upon the prior art and investigate how effective this
approach is in classifying users and their actions. Utilising an in-house
dataset composed of 50 million packets, the experiments are formed of three
incremental developments that assist in improving performance. Building upon
the successful experiments, a proposed NFAT interface is presented to
illustrate the ease with which investigators would be able to ask relevant
questions of user interactions. The experiments, profiled across 27 users,
yielded an average 93.3% True Positive Identification Rate (TPIR), with 41% of
users experiencing 100% TPIR. Skype, Wikipedia and Hotmail services achieved a
notably high level of recognition performance. The study has developed and
evaluated an approach to analyse encrypted network traffic more effectively
through the modelling of network traffic and to subsequently visualise these
interactions through a novel network forensic analysis tool.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 10:52:23 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Clarke",
"Nathan",
""
],
[
"Alotibi",
"Gaseb",
""
],
[
"Joy",
"Dany",
""
],
[
"Li",
"Fudong",
""
],
[
"Furnell",
"Steven",
""
],
[
"Alshumrani",
"Ali",
""
],
[
"Mohammed",
"Hussan",
""
]
] | TITLE: An Identity and Interaction Based Network Forensic Analysis
ABSTRACT: In today's landscape of increasing electronic crime, network forensics plays a
pivotal role in digital investigations. It aids in understanding which systems
to analyse and serves as a supplement to evidence found through more
traditional computer-based investigations. However, the nature and
functionality of the existing Network Forensic Analysis Tools (NFATs) fall
short compared to File System Forensic Analysis Tools (FS FATs) in providing
usable data. The analysis tends to focus upon IP addresses, which are not
synonymous with user identities, a point of significant interest to
investigators. This paper presents several experiments designed to create a
novel NFAT approach that can identify users and understand how they are using
network based applications whilst the traffic remains encrypted. The
experiments build upon the prior art and investigate how effective this
approach is in classifying users and their actions. Utilising an in-house
dataset composed of 50 million packets, the experiments are formed of three
incremental developments that assist in improving performance. Building upon
the successful experiments, a proposed NFAT interface is presented to
illustrate the ease with which investigators would be able to ask relevant
questions of user interactions. The experiments, profiled across 27 users,
yielded an average 93.3% True Positive Identification Rate (TPIR), with 41% of
users experiencing 100% TPIR. Skype, Wikipedia and Hotmail services achieved a
notably high level of recognition performance. The study has developed and
evaluated an approach to analyse encrypted network traffic more effectively
through the modelling of network traffic and to subsequently visualise these
interactions through a novel network forensic analysis tool.
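Since payloads remain encrypted, identification must rely on traffic metadata. A generic sketch of such a pipeline (features and classifier are assumptions, not the paper's design):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Per-flow metadata usable despite encryption: packet sizes, timing, direction.
# Random stand-ins for flows from 27 users.
rng = np.random.default_rng(1)
flows = rng.normal(size=(2700, 12))       # e.g. size/inter-arrival statistics
users = rng.integers(0, 27, size=2700)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
tpir = cross_val_score(clf, flows, users, cv=5).mean()  # ~ per-user hit rate
print(f"cross-validated identification rate: {tpir:.3f}")
```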
|
2503.18544 | Rafia Rahim | Rafia Rahim, Samuel Woerz, Andreas Zell | Distilling Stereo Networks for Performant and Efficient Leaner Networks | 8 pages, 3 figures. Published in: 2023 International Joint Conference
on Neural Networks (IJCNN) | null | 10.1109/IJCNN54540.2023.10191503 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Knowledge distillation has been quite popular in vision for tasks like
classification and segmentation; however, not much work has been done for
distilling state-of-the-art stereo matching methods despite their range of
applications. One reason for its lack of use in stereo matching
networks is the inherent complexity of these networks, where a typical
network is composed of multiple two- and three-dimensional modules. In this
work, we systematically combine the insights from state-of-the-art stereo
methods with general knowledge-distillation techniques to develop a joint
framework for stereo networks distillation with competitive results and faster
inference. Moreover, we show, via a detailed empirical analysis, that
distilling knowledge from the stereo network requires careful design of the
complete distillation pipeline, from the backbone to the right selection of
distillation points and corresponding loss functions. This results in
student networks that are not only leaner and faster but also deliver excellent
performance. For instance, our student network, while performing better than
performance-oriented methods like PSMNet [1], CFNet [2], and LEAStereo [3]
on the benchmark SceneFlow dataset, is 8x, 5x, and 8x faster, respectively.
Furthermore, compared to speed-oriented methods with inference times below
100ms, our student networks perform better than all the tested methods. In
addition, our student network also shows better generalization capabilities
when tested on unseen datasets like ETH3D and Middlebury.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 10:56:57 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Rahim",
"Rafia",
""
],
[
"Woerz",
"Samuel",
""
],
[
"Zell",
"Andreas",
""
]
] | TITLE: Distilling Stereo Networks for Performant and Efficient Leaner Networks
ABSTRACT: Knowledge distillation has been quite popular in vision for tasks like
classification and segmentation; however, not much work has been done for
distilling state-of-the-art stereo matching methods despite their range of
applications. One reason for its lack of use in stereo matching
networks is the inherent complexity of these networks, where a typical
network is composed of multiple two- and three-dimensional modules. In this
work, we systematically combine the insights from state-of-the-art stereo
methods with general knowledge-distillation techniques to develop a joint
framework for stereo networks distillation with competitive results and faster
inference. Moreover, we show, via a detailed empirical analysis, that
distilling knowledge from the stereo network requires careful design of the
complete distillation pipeline, from the backbone to the right selection of
distillation points and corresponding loss functions. This results in
student networks that are not only leaner and faster but also deliver excellent
performance. For instance, our student network, while performing better than
performance-oriented methods like PSMNet [1], CFNet [2], and LEAStereo [3]
on the benchmark SceneFlow dataset, is 8x, 5x, and 8x faster, respectively.
Furthermore, compared to speed-oriented methods with inference times below
100ms, our student networks perform better than all the tested methods. In
addition, our student network also shows better generalization capabilities
when tested on unseen datasets like ETH3D and Middlebury.
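The choice of distillation points and losses is the crux of the pipeline; a common recipe pairs an output-disparity loss with intermediate feature matching, sketched below with illustrative weights:

```python
import torch
import torch.nn.functional as F

def stereo_distill_loss(student_disp, teacher_disp,
                        student_feat, teacher_feat, beta=0.5):
    # Output-level distillation: match the teacher's disparity map.
    out_loss = F.smooth_l1_loss(student_disp, teacher_disp)
    # Feature-level distillation at a chosen point; a 1x1 conv would
    # normally align channel counts, omitted here for brevity.
    feat_loss = F.mse_loss(student_feat, teacher_feat)
    return out_loss + beta * feat_loss

loss = stereo_distill_loss(torch.rand(2, 1, 64, 128), torch.rand(2, 1, 64, 128),
                           torch.randn(2, 32, 16, 32), torch.randn(2, 32, 16, 32))
```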
|
2503.18549 | Peng Du | Xiaolong Yin, Xingyu Lu, Jiahang Shen, Jingzhe Ni, Hailong Li, Ruofeng
Tong, Min Tang, Peng Du | RLCAD: Reinforcement Learning Training Gym for Revolution Involved CAD
Command Sequence Generation | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | A CAD command sequence is a typical parametric design paradigm in 3D CAD
systems where a model is constructed by overlaying 2D sketches with operations
such as extrusion, revolution, and Boolean operations. Although there is
growing academic interest in the automatic generation of command sequences,
existing methods and datasets only support operations such as 2D sketching,
extrusion, and Boolean operations. This limitation makes it challenging to
represent more complex geometries. In this paper, we present a reinforcement
learning (RL) training environment (gym) built on a CAD geometric engine. Given
an input boundary representation (B-Rep) geometry, the policy network in the RL
algorithm generates an action. This action, along with previously generated
actions, is processed within the gym to produce the corresponding CAD geometry,
which is then fed back into the policy network. The rewards, determined by the
difference between the generated and target geometries within the gym, are used
to update the RL network. Our method supports operations beyond sketches,
Boolean, and extrusion, including revolution operations. With this training
gym, we achieve state-of-the-art (SOTA) quality in generating command sequences
from B-Rep geometries. In addition, our method can significantly improve the
efficiency of command sequence generation by a factor of 39X compared with the
previous training gym.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 11:01:05 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yin",
"Xiaolong",
""
],
[
"Lu",
"Xingyu",
""
],
[
"Shen",
"Jiahang",
""
],
[
"Ni",
"Jingzhe",
""
],
[
"Li",
"Hailong",
""
],
[
"Tong",
"Ruofeng",
""
],
[
"Tang",
"Min",
""
],
[
"Du",
"Peng",
""
]
] | TITLE: RLCAD: Reinforcement Learning Training Gym for Revolution Involved CAD
Command Sequence Generation
ABSTRACT: A CAD command sequence is a typical parametric design paradigm in 3D CAD
systems where a model is constructed by overlaying 2D sketches with operations
such as extrusion, revolution, and Boolean operations. Although there is
growing academic interest in the automatic generation of command sequences,
existing methods and datasets only support operations such as 2D sketching,
extrusion, and Boolean operations. This limitation makes it challenging to
represent more complex geometries. In this paper, we present a reinforcement
learning (RL) training environment (gym) built on a CAD geometric engine. Given
an input boundary representation (B-Rep) geometry, the policy network in the RL
algorithm generates an action. This action, along with previously generated
actions, is processed within the gym to produce the corresponding CAD geometry,
which is then fed back into the policy network. The rewards, determined by the
difference between the generated and target geometries within the gym, are used
to update the RL network. Our method supports operations beyond sketches,
Boolean, and extrusion, including revolution operations. With this training
gym, we achieve state-of-the-art (SOTA) quality in generating command sequences
from B-Rep geometries. In addition, our method can significantly improve the
efficiency of command sequence generation by a factor of 39X compared with the
previous training gym.
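The generate-execute-reward loop maps naturally onto the standard Gym interface. The skeleton below makes the data flow concrete; the geometry-engine calls and the 128-d observation are hypothetical placeholders, not a real API:

```python
import gymnasium as gym
import numpy as np

def apply_action(sequence, action):
    # Placeholder for the CAD engine: append a command, rebuild geometry.
    return (sequence or []) + [action]

def geometry_distance(sequence, target):
    # Placeholder metric; the real gym compares generated and target B-Reps.
    return float(abs(len(sequence) - len(target)))

class CADSequenceEnv(gym.Env):
    # Actions stand in for command types (sketch/extrude/revolve/boolean);
    # reward is the negative geometry difference to the target B-Rep.
    def __init__(self, target, max_steps=20):
        self.target, self.max_steps = target, max_steps
        self.action_space = gym.spaces.Discrete(4)
        self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(128,))

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.steps, self.sequence = 0, []
        return np.zeros(128, dtype=np.float32), {}

    def step(self, action):
        self.steps += 1
        self.sequence = apply_action(self.sequence, int(action))
        reward = -geometry_distance(self.sequence, self.target)
        terminated = self.steps >= self.max_steps
        obs = np.zeros(128, dtype=np.float32)  # placeholder B-Rep encoding
        return obs, reward, terminated, False, {}

env = CADSequenceEnv(target=[0, 1, 2])
obs, info = env.reset(seed=0)
obs, r, done, trunc, info = env.step(env.action_space.sample())
```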
|
2503.18552 | Qiang Qu | Qiang Qu, Ming Li, Xiaoming Chen, Tongliang Liu | EvAnimate: Event-conditioned Image-to-Video Generation for Human
Animation | null | null | null | null | cs.CV cs.AI cs.MM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Conditional human animation transforms a static reference image into a
dynamic sequence by applying motion cues such as poses. These motion cues are
typically derived from video data but are susceptible to limitations including
low temporal resolution, motion blur, overexposure, and inaccuracies under
low-light conditions. In contrast, event cameras provide data streams with
exceptionally high temporal resolution, a wide dynamic range, and inherent
resistance to motion blur and exposure issues. In this work, we propose
EvAnimate, a framework that leverages event streams as motion cues to animate
static human images. Our approach employs a specialized event representation
that transforms asynchronous event streams into 3-channel slices with
controllable slicing rates and appropriate slice density, ensuring
compatibility with diffusion models. Subsequently, a dual-branch architecture
generates high-quality videos by harnessing the inherent motion dynamics of the
event streams, thereby enhancing both video quality and temporal consistency.
Specialized data augmentation strategies further enhance cross-person
generalization. Finally, we establish a new benchmark, including simulated
event data for training and validation, and a real-world event dataset
capturing human actions under normal and extreme scenarios. The experiment
results demonstrate that EvAnimate achieves high temporal fidelity and robust
performance in scenarios where traditional video-derived cues fall short.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 11:05:41 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Qu",
"Qiang",
""
],
[
"Li",
"Ming",
""
],
[
"Chen",
"Xiaoming",
""
],
[
"Liu",
"Tongliang",
""
]
] | TITLE: EvAnimate: Event-conditioned Image-to-Video Generation for Human
Animation
ABSTRACT: Conditional human animation transforms a static reference image into a
dynamic sequence by applying motion cues such as poses. These motion cues are
typically derived from video data but are susceptible to limitations including
low temporal resolution, motion blur, overexposure, and inaccuracies under
low-light conditions. In contrast, event cameras provide data streams with
exceptionally high temporal resolution, a wide dynamic range, and inherent
resistance to motion blur and exposure issues. In this work, we propose
EvAnimate, a framework that leverages event streams as motion cues to animate
static human images. Our approach employs a specialized event representation
that transforms asynchronous event streams into 3-channel slices with
controllable slicing rates and appropriate slice density, ensuring
compatibility with diffusion models. Subsequently, a dual-branch architecture
generates high-quality videos by harnessing the inherent motion dynamics of the
event streams, thereby enhancing both video quality and temporal consistency.
Specialized data augmentation strategies further enhance cross-person
generalization. Finally, we establish a new benchmark, including simulated
event data for training and validation, and a real-world event dataset
capturing human actions under normal and extreme scenarios. The experiment
results demonstrate that EvAnimate achieves high temporal fidelity and robust
performance in scenarios where traditional video-derived cues fall short.
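The event representation can be sketched as temporal binning of the asynchronous stream; the 3-channel layout below (positive counts, negative counts, normalized timestamp of the latest event) is one plausible choice, assumed for illustration:

```python
import numpy as np

def events_to_slice(events, h, w, t0, t1):
    # events: iterable of (t, x, y, polarity) tuples within a slicing window.
    # Channels 0/1: counts of positive/negative events;
    # channel 2: normalized timestamp of the latest event at each pixel.
    slice_ = np.zeros((3, h, w), dtype=np.float32)
    for t, x, y, p in events:
        if t0 <= t < t1:
            slice_[0 if p > 0 else 1, int(y), int(x)] += 1.0
            slice_[2, int(y), int(x)] = (t - t0) / (t1 - t0)
    return slice_

evts = [(0.002, 10, 5, 1), (0.004, 10, 5, -1), (0.009, 3, 7, 1)]
s = events_to_slice(evts, h=16, w=16, t0=0.0, t1=0.01)
```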
|
2503.18553 | Zihao Chen | Zihao Chen, Hsuanyu Wu, Chi-Hsi Kung, Yi-Ting Chen, Yan-Tsung Peng | ATARS: An Aerial Traffic Atomic Activity Recognition and Temporal
Segmentation Dataset | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Traffic Atomic Activity, which describes traffic patterns for topological
intersection dynamics, is a crucial topic for the advancement of intelligent
driving systems. However, existing atomic activity datasets are collected from
an egocentric view, which cannot support the scenarios where traffic activities
in an entire intersection are required. Moreover, existing datasets only
provide video-level atomic activity annotations, which require exhaustive
efforts to manually trim the videos for recognition and limit their
applications to untrimmed videos. To bridge this gap, we introduce the Aerial
Traffic Atomic Activity Recognition and Segmentation (ATARS) dataset, the first
aerial dataset designed for multi-label atomic activity analysis. We offer
atomic activity labels for each frame, which accurately record the intervals
for traffic activities. Moreover, we propose a novel task, Multi-label Temporal
Atomic Activity Recognition, enabling the study of accurate temporal
localization for atomic activity and easing the burden of manual video trimming
for recognition. We conduct extensive experiments to evaluate existing
state-of-the-art models on both atomic activity recognition and temporal atomic
activity segmentation. The results highlight the unique challenges of our ATARS
dataset, such as recognizing extremely small objects' activities. We further
provide a comprehensive discussion analyzing these challenges and offer valuable
insights for future directions to improve atomic activity recognition in aerial
views. Our source code and dataset are available at
https://github.com/magecliff96/ATARS/
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 11:06:04 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Zihao",
""
],
[
"Wu",
"Hsuanyu",
""
],
[
"Kung",
"Chi-Hsi",
""
],
[
"Chen",
"Yi-Ting",
""
],
[
"Peng",
"Yan-Tsung",
""
]
] | TITLE: ATARS: An Aerial Traffic Atomic Activity Recognition and Temporal
Segmentation Dataset
ABSTRACT: Traffic Atomic Activity, which describes traffic patterns for topological
intersection dynamics, is a crucial topic for the advancement of intelligent
driving systems. However, existing atomic activity datasets are collected from
an egocentric view, which cannot support the scenarios where traffic activities
in an entire intersection are required. Moreover, existing datasets only
provide video-level atomic activity annotations, which require exhaustive
efforts to manually trim the videos for recognition and limit their
applications to untrimmed videos. To bridge this gap, we introduce the Aerial
Traffic Atomic Activity Recognition and Segmentation (ATARS) dataset, the first
aerial dataset designed for multi-label atomic activity analysis. We offer
atomic activity labels for each frame, which accurately record the intervals
for traffic activities. Moreover, we propose a novel task, Multi-label Temporal
Atomic Activity Recognition, enabling the study of accurate temporal
localization for atomic activity and easing the burden of manual video trimming
for recognition. We conduct extensive experiments to evaluate existing
state-of-the-art models on both atomic activity recognition and temporal atomic
activity segmentation. The results highlight the unique challenges of our ATARS
dataset, such as recognizing extremely small objects' activities. We further
provide a comprehensive discussion analyzing these challenges and offer valuable
insights for future directions to improve atomic activity recognition in aerial
views. Our source code and dataset are available at
https://github.com/magecliff96/ATARS/
|
2503.18567 | Biwen Meng | Biwen Meng and Xi Long and Wanrong Yang and Ruochen Liu and Yi Tian
and Yalin Zheng and Jingxin Liu | Advancing Cross-Organ Domain Generalization with Test-Time Style
Transfer and Diversity Enhancement | 2025 IEEE International Symposium on Biomedical Imaging (ISBI) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has made significant progress in addressing challenges in
various fields including computational pathology (CPath). However, due to the
complexity of the domain shift problem, the performance of existing models will
degrade, especially when it comes to multi-domain or cross-domain tasks. In
this paper, we propose a Test-time style transfer (T3s) framework that uses a
bidirectional mapping mechanism to project the features of the source and
target domains into a unified feature space, enhancing the generalization
ability of the model. To further increase the style expression space, we
introduce a Cross-domain style diversification module (CSDM) to ensure the
orthogonality between style bases. In addition, data augmentation and low-rank
adaptation techniques are used to improve feature alignment and sensitivity,
enabling the model to adapt to multi-domain inputs effectively. Our method has
demonstrated effectiveness on three unseen datasets.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 11:22:27 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Meng",
"Biwen",
""
],
[
"Long",
"Xi",
""
],
[
"Yang",
"Wanrong",
""
],
[
"Liu",
"Ruochen",
""
],
[
"Tian",
"Yi",
""
],
[
"Zheng",
"Yalin",
""
],
[
"Liu",
"Jingxin",
""
]
] | TITLE: Advancing Cross-Organ Domain Generalization with Test-Time Style
Transfer and Diversity Enhancement
ABSTRACT: Deep learning has made significant progress in addressing challenges in
various fields including computational pathology (CPath). However, due to the
complexity of the domain shift problem, the performance of existing models will
degrade, especially when it comes to multi-domain or cross-domain tasks. In
this paper, we propose a Test-time style transfer (T3s) framework that uses a
bidirectional mapping mechanism to project the features of the source and
target domains into a unified feature space, enhancing the generalization
ability of the model. To further increase the style expression space, we
introduce a Cross-domain style diversification module (CSDM) to ensure the
orthogonality between style bases. In addition, data augmentation and low-rank
adaptation techniques are used to improve feature alignment and sensitivity,
enabling the model to adapt to multi-domain inputs effectively. Our method has
demonstrated effectiveness on three unseen datasets.
|
2503.18569 | Hadi Mohammadi | Hadi Mohammadi, Ehsan Nazerfard, Mostafa Haghir Chehreghani | Anchor-based oversampling for imbalanced tabular data via contrastive
and adversarial learning | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Imbalanced data represent a distribution with a higher frequency of one class
(majority) than the other (minority). This phenomenon occurs across various
domains, such as security, medical care and human activity. In imbalanced
learning, classification algorithms are typically inclined to classify the
majority class accurately, resulting in artificially high accuracy rates. As a
result, many minority samples are mistakenly labelled as majority-class
instances, resulting in a bias that benefits the majority class. This study
presents a framework based on boundary anchor samples to tackle the imbalance
learning challenge. First, we select and use anchor samples to train a
multilayer perceptron (MLP) classifier, which acts as a prior knowledge model
and aids the adversarial and contrastive learning procedures. Then, we design
a novel deep generative model called Anchor Stabilized Conditional Generative
Adversarial Network, or Anch-SCGAN for short. Anch-SCGAN is equipped with two
generators for the minority and majority classes and a discriminator
incorporating additional class-specific information from the pre-trained
feature extractor MLP. In addition, we facilitate the generator's training
procedure in two ways. First, we define a new generator loss function based on
reprocessed anchor samples and contrastive learning. Second, we apply a scoring
strategy to stabilize the adversarial training part in generators. We train
Anch-SCGAN and further finetune it with anchor samples to improve the precision
of the generated samples. Our experiments on 16 real-world imbalanced datasets
illustrate that Anch-SCGAN outperforms the renowned methods in imbalanced
learning.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 11:25:21 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Mohammadi",
"Hadi",
""
],
[
"Nazerfard",
"Ehsan",
""
],
[
"Chehreghani",
"Mostafa Haghir",
""
]
] | TITLE: Anchor-based oversampling for imbalanced tabular data via contrastive
and adversarial learning
ABSTRACT: Imbalanced data represent a distribution with a higher frequency of one class
(majority) than the other (minority). This phenomenon occurs across various
domains, such as security, medical care and human activity. In imbalanced
learning, classification algorithms are typically inclined to classify the
majority class accurately, resulting in artificially high accuracy rates. As a
result, many minority samples are mistakenly labelled as majority-class
instances, resulting in a bias that benefits the majority class. This study
presents a framework based on boundary anchor samples to tackle the imbalance
learning challenge. First, we select and use anchor samples to train a
multilayer perceptron (MLP) classifier, which acts as a prior knowledge model
and aids the adversarial and contrastive learning procedures. Then, we design
a novel deep generative model called Anchor Stabilized Conditional Generative
Adversarial Network, or Anch-SCGAN for short. Anch-SCGAN is equipped with two
generators for the minority and majority classes and a discriminator
incorporating additional class-specific information from the pre-trained
feature extractor MLP. In addition, we facilitate the generator's training
procedure in two ways. First, we define a new generator loss function based on
reprocessed anchor samples and contrastive learning. Second, we apply a scoring
strategy to stabilize the adversarial training part in generators. We train
Anch-SCGAN and further finetune it with anchor samples to improve the precision
of the generated samples. Our experiments on 16 real-world imbalanced datasets
illustrate that Anch-SCGAN outperforms the renowned methods in imbalanced
learning.
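Boundary anchor selection is the framework's first step; one common heuristic, shown here purely as a sketch, flags minority samples whose nearest neighbours include majority points:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def boundary_anchors(X, y, minority=1, k=5):
    # A minority sample is a boundary anchor if its k nearest neighbours
    # contain at least one majority point (it sits near the class border).
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)               # idx[:, 0] is the point itself
    mixed = (y[idx[:, 1:]] != minority).any(axis=1)
    return np.where((y == minority) & mixed)[0]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(1.5, 1, (20, 2))])
y = np.array([0] * 200 + [1] * 20)
anchors = boundary_anchors(X, y)
```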
|
2503.18572 | Prathyush Sambaturu | Prathyush Sambaturu, Bernardo Gutierrez, Moritz U.G. Kraemer | Identifying and Characterising Higher Order Interactions in Mobility
Networks Using Hypergraphs | null | null | null | null | cs.SI cs.AI cs.DB cs.DM math.CO | http://creativecommons.org/licenses/by/4.0/ | Understanding human mobility is essential for applications ranging from urban
planning to public health. Traditional mobility models such as flow networks
and colocation matrices capture only pairwise interactions between discrete
locations, overlooking higher-order relationships among locations (i.e.,
mobility flow among two or more locations). To address this, we propose
co-visitation hypergraphs, a model that leverages temporal observation windows
to extract group interactions between locations from individual mobility
trajectory data. Using frequent pattern mining, our approach constructs
hypergraphs that capture dynamic mobility behaviors across different spatial
and temporal scales. We validate our method on a publicly available mobility
dataset and demonstrate its effectiveness in analyzing city-scale mobility
patterns, detecting shifts during external disruptions such as extreme weather
events, and examining how a location's connectivity (degree) relates to the
number of points of interest (POIs) within it. Our results demonstrate that our
hypergraph-based mobility analysis framework is a valuable tool with potential
applications in diverse fields such as public health, disaster resilience, and
urban planning.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 11:29:06 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Sambaturu",
"Prathyush",
""
],
[
"Gutierrez",
"Bernardo",
""
],
[
"Kraemer",
"Moritz U. G.",
""
]
] | TITLE: Identifying and Characterising Higher Order Interactions in Mobility
Networks Using Hypergraphs
ABSTRACT: Understanding human mobility is essential for applications ranging from urban
planning to public health. Traditional mobility models such as flow networks
and colocation matrices capture only pairwise interactions between discrete
locations, overlooking higher-order relationships among locations (i.e.,
mobility flow among two or more locations). To address this, we propose
co-visitation hypergraphs, a model that leverages temporal observation windows
to extract group interactions between locations from individual mobility
trajectory data. Using frequent pattern mining, our approach constructs
hypergraphs that capture dynamic mobility behaviors across different spatial
and temporal scales. We validate our method on a publicly available mobility
dataset and demonstrate its effectiveness in analyzing city-scale mobility
patterns, detecting shifts during external disruptions such as extreme weather
events, and examining how a location's connectivity (degree) relates to the
number of points of interest (POIs) within it. Our results demonstrate that our
hypergraph-based mobility analysis framework is a valuable tool with potential
applications in diverse fields such as public health, disaster resilience, and
urban planning.
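Construction can be sketched as slicing trajectories into temporal windows, collecting the set of locations visited per window, and keeping frequent sets as hyperedges; a minimal Counter-based version (window size and support threshold are illustrative):

```python
from collections import Counter
from itertools import combinations

def covisitation_hyperedges(trajectories, window=3600, min_support=2, max_size=3):
    # trajectories: {user: [(timestamp, location), ...]} sorted by time.
    counts = Counter()
    for visits in trajectories.values():
        buckets = {}
        for t, loc in visits:                      # group visits per window
            buckets.setdefault(t // window, set()).add(loc)
        for locs in buckets.values():              # count co-visited subsets
            for r in range(2, min(max_size, len(locs)) + 1):
                for combo in combinations(sorted(locs), r):
                    counts[combo] += 1
    return [e for e, c in counts.items() if c >= min_support]

trajs = {"u1": [(10, "A"), (50, "B"), (4000, "C")],
         "u2": [(20, "A"), (30, "B")]}
edges = covisitation_hyperedges(trajs)   # [("A", "B")] with min_support=2
```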
|
2503.18594 | Guillem Garc\'ia Subies | Guillem Garc\'ia Subies, \'Alvaro Barbero Jim\'enez, Paloma Mart\'inez
Fern\'andez | ClinText-SP and RigoBERTa Clinical: a new set of open resources for
Spanish Clinical NLP | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present a novel contribution to Spanish clinical natural language
processing by introducing the largest publicly available clinical corpus,
ClinText-SP, along with a state-of-the-art clinical encoder language model,
RigoBERTa Clinical. Our corpus was meticulously curated from diverse open
sources, including clinical cases from medical journals and annotated corpora
from shared tasks, providing a rich and diverse dataset that was previously
difficult to access. RigoBERTa Clinical, developed through domain-adaptive
pretraining on this comprehensive dataset, significantly outperforms existing
models on multiple clinical NLP benchmarks. By publicly releasing both the
dataset and the model, we aim to empower the research community with robust
resources that can drive further advancements in clinical NLP and ultimately
contribute to improved healthcare applications.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 11:52:17 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Subies",
"Guillem García",
""
],
[
"Jiménez",
"Álvaro Barbero",
""
],
[
"Fernández",
"Paloma Martínez",
""
]
] | TITLE: ClinText-SP and RigoBERTa Clinical: a new set of open resources for
Spanish Clinical NLP
ABSTRACT: We present a novel contribution to Spanish clinical natural language
processing by introducing the largest publicly available clinical corpus,
ClinText-SP, along with a state-of-the-art clinical encoder language model,
RigoBERTa Clinical. Our corpus was meticulously curated from diverse open
sources, including clinical cases from medical journals and annotated corpora
from shared tasks, providing a rich and diverse dataset that was previously
difficult to access. RigoBERTa Clinical, developed through domain-adaptive
pretraining on this comprehensive dataset, significantly outperforms existing
models on multiple clinical NLP benchmarks. By publicly releasing both the
dataset and the model, we aim to empower the research community with robust
resources that can drive further advancements in clinical NLP and ultimately
contribute to improved healthcare applications.
|