id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 distinct values) | categories (string, 5-108 chars) | license (string, 9 distinct values) | abstract (string, 83-3.41k chars) | versions (list, 1-20 entries) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 entries) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
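The column summary above follows the Hugging Face `datasets` viewer conventions (string lengths, class counts, nullability). Below is a minimal sketch of how a table with this schema could be loaded and filtered; the hub path and the 0.9 probability threshold are placeholder assumptions, not values taken from this card.

```python
# Minimal sketch (not the dataset's official loader): reading a table with the
# schema above via the Hugging Face `datasets` library. The hub path
# "user/arxiv-new-dataset-labels" and the 0.9 probability cutoff are
# placeholder assumptions for illustration only.
from datasets import load_dataset

ds = load_dataset("user/arxiv-new-dataset-labels", split="train")

# Columns match the header row: id, submitter, authors, title, ...,
# prompt, label, prob.
print(ds.column_names)

# Keep rows labeled "new_dataset" whose classifier probability is high.
high_conf = ds.filter(lambda row: row["label"] == "new_dataset" and row["prob"] >= 0.9)
print(len(high_conf), "high-confidence new-dataset papers")

# Inspect one record: its title and the creation dates of its arXiv versions.
example = high_conf[0]
print(example["title"])
print([v["created"] for v in example["versions"]])
```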
2503.19823 | Yan Zhuang | Yan Zhuang, Minheng Chen, Chao Cao, Tong Chen, Jing Zhang, Xiaowei Yu,
Yanjun Lyu, Lu Zhang, Tianming Liu, and Dajiang Zhu | GyralNet Subnetwork Partitioning via Differentiable Spectral Modularity
Optimization | 10 pages, 3 figures | null | null | null | q-bio.NC cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Understanding the structural and functional organization of the human brain
requires a detailed examination of cortical folding patterns, among which the
three-hinge gyrus (3HG) has been identified as a key structural landmark.
GyralNet, a network representation of cortical folding, models 3HGs as nodes
and gyral crests as edges, highlighting their role as critical hubs in
cortico-cortical connectivity. However, existing methods for analyzing 3HGs
face significant challenges, including the sub-voxel scale of 3HGs at typical
neuroimaging resolutions, the computational complexity of establishing
cross-subject correspondences, and the oversimplification of treating 3HGs as
independent nodes without considering their community-level relationships. To
address these limitations, we propose a fully differentiable subnetwork
partitioning framework that employs a spectral modularity maximization
optimization strategy to modularize the organization of 3HGs within GyralNet.
By incorporating topological structural similarity and DTI-derived connectivity
patterns as attribute features, our approach provides a biologically meaningful
representation of cortical organization. Extensive experiments on the Human
Connectome Project (HCP) dataset demonstrate that our method effectively
partitions GyralNet at the individual level while preserving the
community-level consistency of 3HGs across subjects, offering a robust
foundation for understanding brain connectivity.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 16:33:12 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 21:17:19 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhuang",
"Yan",
""
],
[
"Chen",
"Minheng",
""
],
[
"Cao",
"Chao",
""
],
[
"Chen",
"Tong",
""
],
[
"Zhang",
"Jing",
""
],
[
"Yu",
"Xiaowei",
""
],
[
"Lyu",
"Yanjun",
""
],
[
"Zhang",
"Lu",
""
],
[
"Liu",
"Tianming",
""
],
[
"Zhu",
"Dajiang",
""
]
] | TITLE: GyralNet Subnetwork Partitioning via Differentiable Spectral Modularity
Optimization
ABSTRACT: Understanding the structural and functional organization of the human brain
requires a detailed examination of cortical folding patterns, among which the
three-hinge gyrus (3HG) has been identified as a key structural landmark.
GyralNet, a network representation of cortical folding, models 3HGs as nodes
and gyral crests as edges, highlighting their role as critical hubs in
cortico-cortical connectivity. However, existing methods for analyzing 3HGs
face significant challenges, including the sub-voxel scale of 3HGs at typical
neuroimaging resolutions, the computational complexity of establishing
cross-subject correspondences, and the oversimplification of treating 3HGs as
independent nodes without considering their community-level relationships. To
address these limitations, we propose a fully differentiable subnetwork
partitioning framework that employs a spectral modularity maximization
optimization strategy to modularize the organization of 3HGs within GyralNet.
By incorporating topological structural similarity and DTI-derived connectivity
patterns as attribute features, our approach provides a biologically meaningful
representation of cortical organization. Extensive experiments on the Human
Connectome Project (HCP) dataset demonstrate that our method effectively
partitions GyralNet at the individual level while preserving the
community-level consistency of 3HGs across subjects, offering a robust
foundation for understanding brain connectivity.
| no_new_dataset | 0.948202 |
2503.20136 | Zhenkai Qin | Zhenkai Qin, BaoZhong Wei, Caifeng Gao | Innovative LSGTime Model for Crime Spatiotemporal Prediction Based on
MindSpore Framework | null | null | null | null | cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | With the acceleration of urbanization, the spatiotemporal characteristics of
criminal activities have become increasingly complex. Accurate prediction of
crime distribution is crucial for optimizing the allocation of police resources
and preventing crime. This paper proposes LGSTime, a crime spatiotemporal
prediction model that integrates Long Short-Term Memory (LSTM), Gated Recurrent
Unit (GRU), and the Multi-head Sparse Self-attention mechanism. LSTM and GRU
capture long-term dependencies in crime time series, such as seasonality and
periodicity, through their unique gating mechanisms. The Multi-head Sparse
Self-attention mechanism, on the other hand, focuses on both temporal and
spatial features of criminal events simultaneously through parallel processing
and sparsification techniques, significantly improving computational efficiency
and prediction accuracy. The integrated model leverages the strengths of each
technique to better handle complex spatiotemporal data. Experimental findings
demonstrate that the model attains optimal performance across four real-world
crime datasets. In comparison to the CNN model, it exhibits performance
enhancements of 2.8\%, 1.9\%, and 1.4\% in the Mean Squared Error (MSE), Mean
Absolute Error (MAE), and Root Mean Squared Error (RMSE) metrics respectively.
These results offer a valuable reference for tackling the challenges in crime
prediction.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 00:57:38 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 14:12:07 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 13:50:20 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Qin",
"Zhenkai",
""
],
[
"Wei",
"BaoZhong",
""
],
[
"Gao",
"Caifeng",
""
]
] | TITLE: Innovative LSGTime Model for Crime Spatiotemporal Prediction Based on
MindSpore Framework
ABSTRACT: With the acceleration of urbanization, the spatiotemporal characteristics of
criminal activities have become increasingly complex. Accurate prediction of
crime distribution is crucial for optimizing the allocation of police resources
and preventing crime. This paper proposes LGSTime, a crime spatiotemporal
prediction model that integrates Long Short-Term Memory (LSTM), Gated Recurrent
Unit (GRU), and the Multi-head Sparse Self-attention mechanism. LSTM and GRU
capture long-term dependencies in crime time series, such as seasonality and
periodicity, through their unique gating mechanisms. The Multi-head Sparse
Self-attention mechanism, on the other hand, focuses on both temporal and
spatial features of criminal events simultaneously through parallel processing
and sparsification techniques, significantly improving computational efficiency
and prediction accuracy. The integrated model leverages the strengths of each
technique to better handle complex spatiotemporal data. Experimental findings
demonstrate that the model attains optimal performance across four real-world
crime datasets. In comparison to the CNN model, it exhibits performance
enhancements of 2.8\%, 1.9\%, and 1.4\% in the Mean Squared Error (MSE), Mean
Absolute Error (MAE), and Root Mean Squared Error (RMSE) metrics respectively.
These results offer a valuable reference for tackling the challenges in crime
prediction.
| no_new_dataset | 0.949106 |
2503.20290 | Siyin Wang | Siyin Wang, Wenyi Yu, Xianzhao Chen, Xiaohai Tian, Jun Zhang, Lu Lu,
Yu Tsao, Junichi Yamagishi, Yuxuan Wang, Chao Zhang | QualiSpeech: A Speech Quality Assessment Dataset with Natural Language
Reasoning and Descriptions | 23 pages, 16 figures | null | null | null | eess.AS cs.AI cs.CL cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores a novel perspective to speech quality assessment by
leveraging natural language descriptions, offering richer, more nuanced
insights than traditional numerical scoring methods. Natural language feedback
provides instructive recommendations and detailed evaluations, yet existing
datasets lack the comprehensive annotations needed for this approach. To bridge
this gap, we introduce QualiSpeech, a comprehensive low-level speech quality
assessment dataset encompassing 11 key aspects and detailed natural language
comments that include reasoning and contextual insights. Additionally, we
propose the QualiSpeech Benchmark to evaluate the low-level speech
understanding capabilities of auditory large language models (LLMs).
Experimental results demonstrate that finetuned auditory LLMs can reliably
generate detailed descriptions of noise and distortion, effectively identifying
their types and temporal characteristics. The results further highlight the
potential for incorporating reasoning to enhance the accuracy and reliability
of quality assessments. The dataset will be released at
https://huggingface.co/datasets/tsinghua-ee/QualiSpeech.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 07:32:20 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 12:33:53 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wang",
"Siyin",
""
],
[
"Yu",
"Wenyi",
""
],
[
"Chen",
"Xianzhao",
""
],
[
"Tian",
"Xiaohai",
""
],
[
"Zhang",
"Jun",
""
],
[
"Lu",
"Lu",
""
],
[
"Tsao",
"Yu",
""
],
[
"Yamagishi",
"Junichi",
""
],
[
"Wang",
"Yuxuan",
""
],
[
"Zhang",
"Chao",
""
]
] | TITLE: QualiSpeech: A Speech Quality Assessment Dataset with Natural Language
Reasoning and Descriptions
ABSTRACT: This paper explores a novel perspective to speech quality assessment by
leveraging natural language descriptions, offering richer, more nuanced
insights than traditional numerical scoring methods. Natural language feedback
provides instructive recommendations and detailed evaluations, yet existing
datasets lack the comprehensive annotations needed for this approach. To bridge
this gap, we introduce QualiSpeech, a comprehensive low-level speech quality
assessment dataset encompassing 11 key aspects and detailed natural language
comments that include reasoning and contextual insights. Additionally, we
propose the QualiSpeech Benchmark to evaluate the low-level speech
understanding capabilities of auditory large language models (LLMs).
Experimental results demonstrate that finetuned auditory LLMs can reliably
generate detailed descriptions of noise and distortion, effectively identifying
their types and temporal characteristics. The results further highlight the
potential for incorporating reasoning to enhance the accuracy and reliability
of quality assessments. The dataset will be released at
https://huggingface.co/datasets/tsinghua-ee/QualiSpeech.
| new_dataset | 0.957991 |
2503.20794 | Veysel Kocaman Vk | Veysel Kocaman, Muhammed Santas, Yigit Gul, Mehmet Butgul, David Talby | Can Zero-Shot Commercial APIs Deliver Regulatory-Grade Clinical Text
DeIdentification? | 14 pages, accepted at Text2Story Workshop at ECIR 2025 | null | null | null | cs.CL cs.CR cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | We evaluate the performance of four leading solutions for de-identification
of unstructured medical text - Azure Health Data Services, AWS Comprehend
Medical, OpenAI GPT-4o, and John Snow Labs - on a ground truth dataset of 48
clinical documents annotated by medical experts. The analysis, conducted at
both entity-level and token-level, suggests that John Snow Labs' Medical
Language Models solution achieves the highest accuracy, with a 96% F1-score in
protected health information (PHI) detection, outperforming Azure (91%), AWS
(83%), and GPT-4o (79%). John Snow Labs is not only the only solution which
achieves regulatory-grade accuracy (surpassing that of human experts) but is
also the most cost-effective solution: It is over 80% cheaper compared to Azure
and GPT-4o, and is the only solution not priced by token. Its fixed-cost local
deployment model avoids the escalating per-request fees of cloud-based
services, making it a scalable and economical choice.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 10:05:04 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 19:44:35 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Kocaman",
"Veysel",
""
],
[
"Santas",
"Muhammed",
""
],
[
"Gul",
"Yigit",
""
],
[
"Butgul",
"Mehmet",
""
],
[
"Talby",
"David",
""
]
] | TITLE: Can Zero-Shot Commercial APIs Deliver Regulatory-Grade Clinical Text
DeIdentification?
ABSTRACT: We evaluate the performance of four leading solutions for de-identification
of unstructured medical text - Azure Health Data Services, AWS Comprehend
Medical, OpenAI GPT-4o, and John Snow Labs - on a ground truth dataset of 48
clinical documents annotated by medical experts. The analysis, conducted at
both entity-level and token-level, suggests that John Snow Labs' Medical
Language Models solution achieves the highest accuracy, with a 96% F1-score in
protected health information (PHI) detection, outperforming Azure (91%), AWS
(83%), and GPT-4o (79%). John Snow Labs is not only the only solution which
achieves regulatory-grade accuracy (surpassing that of human experts) but is
also the most cost-effective solution: It is over 80% cheaper compared to Azure
and GPT-4o, and is the only solution not priced by token. Its fixed-cost local
deployment model avoids the escalating per-request fees of cloud-based
services, making it a scalable and economical choice.
| no_new_dataset | 0.949059 |
2503.21477 | Wenyi Xiong | Wenyi Xiong and Jian Chen and Ziheng Qi | Fine-Grained Behavior and Lane Constraints Guided Trajectory Prediction
Method | This work has been submitted to the IEEE for possible publication | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trajectory prediction, as a critical component of autonomous driving systems,
has attracted the attention of many researchers. Existing prediction algorithms
focus on extracting more detailed scene features or selecting more reasonable
trajectory destinations. However, in the face of dynamic and evolving future
movements of the target vehicle, these algorithms cannot provide a fine-grained
and continuous description of future behaviors and lane constraints, which
degrades the prediction accuracy. To address this challenge, we present BLNet,
a novel dualstream architecture that synergistically integrates behavioral
intention recognition and lane constraint modeling through parallel attention
mechanisms. The framework generates fine-grained behavior state queries
(capturing spatial-temporal movement patterns) and lane queries (encoding lane
topology constraints), supervised by two auxiliary losses, respectively.
Subsequently, a two-stage decoder first produces trajectory proposals, then
performs point-level refinement by jointly incorporating both the continuity of
passed lanes and future motion features. Extensive experiments on two large
datasets, nuScenes and Argoverse, show that our network exhibits significant
performance gains over existing direct regression and goal-based algorithms.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 13:06:57 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 14:15:11 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Xiong",
"Wenyi",
""
],
[
"Chen",
"Jian",
""
],
[
"Qi",
"Ziheng",
""
]
] | TITLE: Fine-Grained Behavior and Lane Constraints Guided Trajectory Prediction
Method
ABSTRACT: Trajectory prediction, as a critical component of autonomous driving systems,
has attracted the attention of many researchers. Existing prediction algorithms
focus on extracting more detailed scene features or selecting more reasonable
trajectory destinations. However, in the face of dynamic and evolving future
movements of the target vehicle, these algorithms cannot provide a fine-grained
and continuous description of future behaviors and lane constraints, which
degrades the prediction accuracy. To address this challenge, we present BLNet,
a novel dualstream architecture that synergistically integrates behavioral
intention recognition and lane constraint modeling through parallel attention
mechanisms. The framework generates fine-grained behavior state queries
(capturing spatial-temporal movement patterns) and lane queries (encoding lane
topology constraints), supervised by two auxiliary losses, respectively.
Subsequently, a two-stage decoder first produces trajectory proposals, then
performs point-level refinement by jointly incorporating both the continuity of
passed lanes and future motion features. Extensive experiments on two large
datasets, nuScenes and Argoverse, show that our network exhibits significant
performance gains over existing direct regression and goal-based algorithms.
| no_new_dataset | 0.944842 |
2503.22516 | Samira Alkaee Taleghan | Samira Alkaee Taleghan, Morteza Karimzadeh, Andrew P. Barrett, Walter
N. Meier, Farnoush Banaei-Kashani | Assessing Foundation Models for Sea Ice Type Segmentation in Sentinel-1
SAR Imagery | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate segmentation of sea ice types is essential for mapping and
operational forecasting of sea ice conditions for safe navigation and resource
extraction in ice-covered waters, as well as for understanding polar climate
processes. While deep learning methods have shown promise in automating sea ice
segmentation, they often rely on extensive labeled datasets which require
expert knowledge and are time-consuming to create. Recently, foundation models
(FMs) have shown excellent results for segmenting remote sensing images by
utilizing pre-training on large datasets using self-supervised techniques.
However, their effectiveness for sea ice segmentation remains unexplored,
especially given sea ice's complex structures, seasonal changes, and unique
spectral signatures, as well as peculiar Synthetic Aperture Radar (SAR) imagery
characteristics including banding and scalloping noise, and varying ice
backscatter characteristics, which are often missing in standard remote sensing
pre-training datasets. In particular, SAR images over polar regions are
acquired using different modes than used to capture the images at lower
latitudes by the same sensors that form training datasets for FMs. This study
evaluates ten remote sensing FMs for sea ice type segmentation using Sentinel-1
SAR imagery, focusing on their seasonal and spatial generalization. Among the
selected models, Prithvi-600M outperforms the baseline models, while CROMA
achieves a very similar performance in F1-score. Our contributions include
offering a systematic methodology for selecting FMs for sea ice data analysis,
a comprehensive benchmarking study on performances of FMs for sea ice
segmentation with tailored performance metrics, and insights into existing gaps
and future directions for improving domain-specific models in polar
applications using SAR data.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 15:21:08 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Taleghan",
"Samira Alkaee",
""
],
[
"Karimzadeh",
"Morteza",
""
],
[
"Barrett",
"Andrew P.",
""
],
[
"Meier",
"Walter N.",
""
],
[
"Banaei-Kashani",
"Farnoush",
""
]
] | TITLE: Assessing Foundation Models for Sea Ice Type Segmentation in Sentinel-1
SAR Imagery
ABSTRACT: Accurate segmentation of sea ice types is essential for mapping and
operational forecasting of sea ice conditions for safe navigation and resource
extraction in ice-covered waters, as well as for understanding polar climate
processes. While deep learning methods have shown promise in automating sea ice
segmentation, they often rely on extensive labeled datasets which require
expert knowledge and are time-consuming to create. Recently, foundation models
(FMs) have shown excellent results for segmenting remote sensing images by
utilizing pre-training on large datasets using self-supervised techniques.
However, their effectiveness for sea ice segmentation remains unexplored,
especially given sea ice's complex structures, seasonal changes, and unique
spectral signatures, as well as peculiar Synthetic Aperture Radar (SAR) imagery
characteristics including banding and scalloping noise, and varying ice
backscatter characteristics, which are often missing in standard remote sensing
pre-training datasets. In particular, SAR images over polar regions are
acquired using different modes than used to capture the images at lower
latitudes by the same sensors that form training datasets for FMs. This study
evaluates ten remote sensing FMs for sea ice type segmentation using Sentinel-1
SAR imagery, focusing on their seasonal and spatial generalization. Among the
selected models, Prithvi-600M outperforms the baseline models, while CROMA
achieves a very similar performance in F1-score. Our contributions include
offering a systematic methodology for selecting FMs for sea ice data analysis,
a comprehensive benchmarking study on performances of FMs for sea ice
segmentation with tailored performance metrics, and insights into existing gaps
and future directions for improving domain-specific models in polar
applications using SAR data.
| no_new_dataset | 0.951188 |
2503.22829 | Zhen Lin | Zhen Lin, Hongyu Yuan, Richard Barcus, Qing Lyu, Sucheta Chakravarty,
Megan E. Lipford, Carol A. Shively, Suzanne Craft, Mohammad Kawas, Jeongchul
Kim, Christopher T. Whitlow | Nonhuman Primate Brain Tissue Segmentation Using a Transfer Learning
Approach | null | null | null | null | eess.IV cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-human primates (NHPs) serve as critical models for understanding human
brain function and neurological disorders due to their close evolutionary
relationship with humans. Accurate brain tissue segmentation in NHPs is
critical for understanding neurological disorders, but challenging due to the
scarcity of annotated NHP brain MRI datasets, the small size of the NHP brain,
the limited resolution of available imaging data and the anatomical differences
between human and NHP brains. To address these challenges, we propose a novel
approach utilizing STU-Net with transfer learning to leverage knowledge
transferred from human brain MRI data to enhance segmentation accuracy in the
NHP brain MRI, particularly when training data is limited. The combination of
STU-Net and transfer learning effectively delineates complex tissue boundaries
and captures fine anatomical details specific to NHP brains. Notably, our
method demonstrated improvement in segmenting small subcortical structures such
as putamen and thalamus that are challenging to resolve with limited spatial
resolution and tissue contrast, and achieved DSC of over 0.88, IoU over 0.8 and
HD95 under 7. This study introduces a robust method for multi-class brain
tissue segmentation in NHPs, potentially accelerating research in evolutionary
neuroscience and preclinical studies of neurological disorders relevant to
human health.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 18:51:22 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 11:52:54 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Lin",
"Zhen",
""
],
[
"Yuan",
"Hongyu",
""
],
[
"Barcus",
"Richard",
""
],
[
"Lyu",
"Qing",
""
],
[
"Chakravarty",
"Sucheta",
""
],
[
"Lipford",
"Megan E.",
""
],
[
"Shively",
"Carol A.",
""
],
[
"Craft",
"Suzanne",
""
],
[
"Kawas",
"Mohammad",
""
],
[
"Kim",
"Jeongchul",
""
],
[
"Whitlow",
"Christopher T.",
""
]
] | TITLE: Nonhuman Primate Brain Tissue Segmentation Using a Transfer Learning
Approach
ABSTRACT: Non-human primates (NHPs) serve as critical models for understanding human
brain function and neurological disorders due to their close evolutionary
relationship with humans. Accurate brain tissue segmentation in NHPs is
critical for understanding neurological disorders, but challenging due to the
scarcity of annotated NHP brain MRI datasets, the small size of the NHP brain,
the limited resolution of available imaging data and the anatomical differences
between human and NHP brains. To address these challenges, we propose a novel
approach utilizing STU-Net with transfer learning to leverage knowledge
transferred from human brain MRI data to enhance segmentation accuracy in the
NHP brain MRI, particularly when training data is limited. The combination of
STU-Net and transfer learning effectively delineates complex tissue boundaries
and captures fine anatomical details specific to NHP brains. Notably, our
method demonstrated improvement in segmenting small subcortical structures such
as putamen and thalamus that are challenging to resolve with limited spatial
resolution and tissue contrast, and achieved DSC of over 0.88, IoU over 0.8 and
HD95 under 7. This study introduces a robust method for multi-class brain
tissue segmentation in NHPs, potentially accelerating research in evolutionary
neuroscience and preclinical studies of neurological disorders relevant to
human health.
| no_new_dataset | 0.953013 |
2503.23001 | Bin Han | Bin Han, Di Feng, Jie Wang, and Hans D. Schotten | Buyer-Initiated Auction Mechanism for Data Redemption in Machine
Unlearning | Submitted to IEEE GLOBECOM 2025 | null | null | null | cs.LG cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid growth of artificial intelligence (AI) has raised privacy concerns
over user data, leading to regulations like the General Data Protection
Regulation (GDPR) and the California Consumer Privacy Act (CCPA). With the
essential toolbox provided by machine unlearning, AI service providers are now
able to remove user data from their trained models as well as the training
datasets, so as to comply with such regulations. However, extensive data
redemption can be costly and degrade model accuracy. To balance the cost of
unlearning and the privacy protection, we propose a buyer-initiated auction
mechanism for data redemption, enabling the service provider to purchase data
from willing users with appropriate compensation. This approach does not
require the server to have any a priori knowledge about the users' privacy
preference, and provides an efficient solution for maximizing the social
welfare in the investigated problem.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 07:44:34 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 04:25:31 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Han",
"Bin",
""
],
[
"Feng",
"Di",
""
],
[
"Wang",
"Jie",
""
],
[
"Schotten",
"Hans D.",
""
]
] | TITLE: Buyer-Initiated Auction Mechanism for Data Redemption in Machine
Unlearning
ABSTRACT: The rapid growth of artificial intelligence (AI) has raised privacy concerns
over user data, leading to regulations like the General Data Protection
Regulation (GDPR) and the California Consumer Privacy Act (CCPA). With the
essential toolbox provided by machine unlearning, AI service providers are now
able to remove user data from their trained models as well as the training
datasets, so as to comply with such regulations. However, extensive data
redemption can be costly and degrade model accuracy. To balance the cost of
unlearning and the privacy protection, we propose a buyer-initiated auction
mechanism for data redemption, enabling the service provider to purchase data
from willing users with appropriate compensation. This approach does not
require the server to have any a priori knowledge about the users' privacy
preference, and provides an efficient solution for maximizing the social
welfare in the investigated problem.
| no_new_dataset | 0.954605 |
2503.23179 | Wiebke Heyer | Wiebke Heyer, Yannic Elser, Lennart Berkel, Xinrui Song, Xuanang Xu,
Pingkun Yan, Xi Jia, Jinming Duan, Zi Li, Tony C. W. Mok, BoWen LI, Christian
Staackmann, Christoph Gro{\ss}br\"ohmer, Lasse Hansen, Alessa Hering, Malte
M. Sieren, Mattias P. Heinrich | OncoReg: Medical Image Registration for Oncological Challenges | 26 pages, 6 figures | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | In modern cancer research, the vast volume of medical data generated is often
underutilised due to challenges related to patient privacy. The OncoReg
Challenge addresses this issue by enabling researchers to develop and validate
image registration methods through a two-phase framework that ensures patient
privacy while fostering the development of more generalisable AI models. Phase
one involves working with a publicly available dataset, while phase two focuses
on training models on a private dataset within secure hospital networks.
OncoReg builds upon the foundation established by the Learn2Reg Challenge by
incorporating the registration of interventional cone-beam computed tomography
(CBCT) with standard planning fan-beam CT (FBCT) images in radiotherapy.
Accurate image registration is crucial in oncology, particularly for dynamic
treatment adjustments in image-guided radiotherapy, where precise alignment is
necessary to minimise radiation exposure to healthy tissues while effectively
targeting tumours. This work details the methodology and data behind the
OncoReg Challenge and provides a comprehensive analysis of the competition
entries and results. Findings reveal that feature extraction plays a pivotal
role in this registration task. A new method emerging from this challenge
demonstrated its versatility, while established approaches continue to perform
comparably to newer techniques. Both deep learning and classical approaches
still play significant roles in image registration, with the combination of
methods - particularly in feature extraction - proving most effective.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 18:16:10 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 08:44:33 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Heyer",
"Wiebke",
""
],
[
"Elser",
"Yannic",
""
],
[
"Berkel",
"Lennart",
""
],
[
"Song",
"Xinrui",
""
],
[
"Xu",
"Xuanang",
""
],
[
"Yan",
"Pingkun",
""
],
[
"Jia",
"Xi",
""
],
[
"Duan",
"Jinming",
""
],
[
"Li",
"Zi",
""
],
[
"Mok",
"Tony C. W.",
""
],
[
"LI",
"BoWen",
""
],
[
"Staackmann",
"Christian",
""
],
[
"Großbröhmer",
"Christoph",
""
],
[
"Hansen",
"Lasse",
""
],
[
"Hering",
"Alessa",
""
],
[
"Sieren",
"Malte M.",
""
],
[
"Heinrich",
"Mattias P.",
""
]
] | TITLE: OncoReg: Medical Image Registration for Oncological Challenges
ABSTRACT: In modern cancer research, the vast volume of medical data generated is often
underutilised due to challenges related to patient privacy. The OncoReg
Challenge addresses this issue by enabling researchers to develop and validate
image registration methods through a two-phase framework that ensures patient
privacy while fostering the development of more generalisable AI models. Phase
one involves working with a publicly available dataset, while phase two focuses
on training models on a private dataset within secure hospital networks.
OncoReg builds upon the foundation established by the Learn2Reg Challenge by
incorporating the registration of interventional cone-beam computed tomography
(CBCT) with standard planning fan-beam CT (FBCT) images in radiotherapy.
Accurate image registration is crucial in oncology, particularly for dynamic
treatment adjustments in image-guided radiotherapy, where precise alignment is
necessary to minimise radiation exposure to healthy tissues while effectively
targeting tumours. This work details the methodology and data behind the
OncoReg Challenge and provides a comprehensive analysis of the competition
entries and results. Findings reveal that feature extraction plays a pivotal
role in this registration task. A new method emerging from this challenge
demonstrated its versatility, while established approaches continue to perform
comparably to newer techniques. Both deep learning and classical approaches
still play significant roles in image registration, with the combination of
methods - particularly in feature extraction - proving most effective.
| no_new_dataset | 0.942876 |
2503.23461 | Nikai Du | Nikai Du, Zhennan Chen, Zhizhou Chen, Shan Gao, Xi Chen, Zhengkai
Jiang, Jian Yang and Ying Tai | TextCrafter: Accurately Rendering Multiple Texts in Complex Visual
Scenes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores the task of Complex Visual Text Generation (CVTG), which
centers on generating intricate textual content distributed across diverse
regions within visual images. In CVTG, image generation models often rendering
distorted and blurred visual text or missing some visual text. To tackle these
challenges, we propose TextCrafter, a novel multi-visual text rendering method.
TextCrafter employs a progressive strategy to decompose complex visual text
into distinct components while ensuring robust alignment between textual
content and its visual carrier. Additionally, it incorporates a token focus
enhancement mechanism to amplify the prominence of visual text during the
generation process. TextCrafter effectively addresses key challenges in CVTG
tasks, such as text confusion, omissions, and blurriness. Moreover, we present
a new benchmark dataset, CVTG-2K, tailored to rigorously evaluate the
performance of generative models on CVTG tasks. Extensive experiments
demonstrate that our method surpasses state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 14:36:55 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 02:56:45 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Du",
"Nikai",
""
],
[
"Chen",
"Zhennan",
""
],
[
"Chen",
"Zhizhou",
""
],
[
"Gao",
"Shan",
""
],
[
"Chen",
"Xi",
""
],
[
"Jiang",
"Zhengkai",
""
],
[
"Yang",
"Jian",
""
],
[
"Tai",
"Ying",
""
]
] | TITLE: TextCrafter: Accurately Rendering Multiple Texts in Complex Visual
Scenes
ABSTRACT: This paper explores the task of Complex Visual Text Generation (CVTG), which
centers on generating intricate textual content distributed across diverse
regions within visual images. In CVTG, image generation models often render
distorted and blurred visual text or miss some visual text. To tackle these
challenges, we propose TextCrafter, a novel multi-visual text rendering method.
TextCrafter employs a progressive strategy to decompose complex visual text
into distinct components while ensuring robust alignment between textual
content and its visual carrier. Additionally, it incorporates a token focus
enhancement mechanism to amplify the prominence of visual text during the
generation process. TextCrafter effectively addresses key challenges in CVTG
tasks, such as text confusion, omissions, and blurriness. Moreover, we present
a new benchmark dataset, CVTG-2K, tailored to rigorously evaluate the
performance of generative models on CVTG tasks. Extensive experiments
demonstrate that our method surpasses state-of-the-art approaches.
| new_dataset | 0.9598 |
2503.23811 | Chris Brogly | Chris Brogly, Connor McElroy | Did ChatGPT or Copilot use alter the style of internet news headlines? A
time series regression analysis | null | null | null | null | cs.CL cs.SI | http://creativecommons.org/licenses/by-sa/4.0/ | The release of advanced Large Language Models (LLMs) such as ChatGPT and
Copilot is changing the way text is created and may influence the content that
we find on the web. This study investigated whether the release of these two
popular LLMs coincided with a change in writing style in headlines and links on
worldwide news websites. 175 NLP features were obtained for each text in a
dataset of 451 million headlines/links. An interrupted time series analysis was
applied for each of the 175 NLP features to evaluate whether there were any
statistically significant sustained changes after the release dates of ChatGPT
and/or Copilot. There were a total of 44 features that did not appear to have
any significant sustained change after the release of ChatGPT/Copilot. A total
of 91 other features did show significant change with ChatGPT and/or Copilot
although significance with earlier control LLM release dates (GPT-1/2/3,
Gopher) removed them from consideration. This initial analysis suggests these
language models may have had a limited impact on the style of individual news
headlines/links, with respect to only some NLP measures.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 07:44:26 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 06:56:57 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Brogly",
"Chris",
""
],
[
"McElroy",
"Connor",
""
]
] | TITLE: Did ChatGPT or Copilot use alter the style of internet news headlines? A
time series regression analysis
ABSTRACT: The release of advanced Large Language Models (LLMs) such as ChatGPT and
Copilot is changing the way text is created and may influence the content that
we find on the web. This study investigated whether the release of these two
popular LLMs coincided with a change in writing style in headlines and links on
worldwide news websites. 175 NLP features were obtained for each text in a
dataset of 451 million headlines/links. An interrupted time series analysis was
applied for each of the 175 NLP features to evaluate whether there were any
statistically significant sustained changes after the release dates of ChatGPT
and/or Copilot. There were a total of 44 features that did not appear to have
any significant sustained change after the release of ChatGPT/Copilot. A total
of 91 other features did show significant change with ChatGPT and/or Copilot
although significance with earlier control LLM release dates (GPT-1/2/3,
Gopher) removed them from consideration. This initial analysis suggests these
language models may have had a limited impact on the style of individual news
headlines/links, with respect to only some NLP measures.
| no_new_dataset | 0.936692 |
2503.23862 | Eon Seung Seong | SeonYeong Lee, EonSeung Seong, DongEon Lee, SiYeoul Lee, Yubin Cho,
Chunsu Park, Seonho Kim, MinKyung Seo, YoungSin Ko, MinWoo Kim | Learned Image Compression and Restoration for Digital Pathology | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Digital pathology images play a crucial role in medical diagnostics, but
their ultra-high resolution and large file sizes pose significant challenges
for storage, transmission, and real-time visualization. To address these
issues, we propose CLERIC, a novel deep learning-based image compression
framework designed specifically for whole slide images (WSIs). CLERIC
integrates a learnable lifting scheme and advanced convolutional techniques to
enhance compression efficiency while preserving critical pathological details.
Our framework employs a lifting-scheme transform in the analysis stage to
decompose images into low- and high-frequency components, enabling more
structured latent representations. These components are processed through
parallel encoders incorporating Deformable Residual Blocks (DRB) and Recurrent
Residual Blocks (R2B) to improve feature extraction and spatial adaptability.
The synthesis stage applies an inverse lifting transform for effective image
reconstruction, ensuring high-fidelity restoration of fine-grained tissue
structures. We evaluate CLERIC on a digital pathology image dataset and compare
its performance against state-of-the-art learned image compression (LIC)
models. Experimental results demonstrate that CLERIC achieves superior
rate-distortion (RD) performance, significantly reducing storage requirements
while maintaining high diagnostic image quality. Our study highlights the
potential of deep learning-based compression in digital pathology, facilitating
efficient data management and long-term storage while ensuring seamless
integration into clinical workflows and AI-assisted diagnostic systems. Code
and models are available at: https://github.com/pnu-amilab/CLERIC.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 09:09:09 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 03:06:51 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Lee",
"SeonYeong",
""
],
[
"Seong",
"EonSeung",
""
],
[
"Lee",
"DongEon",
""
],
[
"Lee",
"SiYeoul",
""
],
[
"Cho",
"Yubin",
""
],
[
"Park",
"Chunsu",
""
],
[
"Kim",
"Seonho",
""
],
[
"Seo",
"MinKyung",
""
],
[
"Ko",
"YoungSin",
""
],
[
"Kim",
"MinWoo",
""
]
] | TITLE: Learned Image Compression and Restoration for Digital Pathology
ABSTRACT: Digital pathology images play a crucial role in medical diagnostics, but
their ultra-high resolution and large file sizes pose significant challenges
for storage, transmission, and real-time visualization. To address these
issues, we propose CLERIC, a novel deep learning-based image compression
framework designed specifically for whole slide images (WSIs). CLERIC
integrates a learnable lifting scheme and advanced convolutional techniques to
enhance compression efficiency while preserving critical pathological details.
Our framework employs a lifting-scheme transform in the analysis stage to
decompose images into low- and high-frequency components, enabling more
structured latent representations. These components are processed through
parallel encoders incorporating Deformable Residual Blocks (DRB) and Recurrent
Residual Blocks (R2B) to improve feature extraction and spatial adaptability.
The synthesis stage applies an inverse lifting transform for effective image
reconstruction, ensuring high-fidelity restoration of fine-grained tissue
structures. We evaluate CLERIC on a digital pathology image dataset and compare
its performance against state-of-the-art learned image compression (LIC)
models. Experimental results demonstrate that CLERIC achieves superior
rate-distortion (RD) performance, significantly reducing storage requirements
while maintaining high diagnostic image quality. Our study highlights the
potential of deep learning-based compression in digital pathology, facilitating
efficient data management and long-term storage while ensuring seamless
integration into clinical workflows and AI-assisted diagnostic systems. Code
and models are available at: https://github.com/pnu-amilab/CLERIC.
| no_new_dataset | 0.946941 |
2503.23959 | Bizhe Bai | Bizhe Bai and Jianjian Cao and Yadan Luo and Tao Chen | Local Information Matters: Inference Acceleration For Grounded
Conversation Generation Models Through Adaptive Local-Aware Token Pruning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Grounded Conversation Generation (GCG) is an emerging vision-language task
that requires models to generate natural language responses seamlessly
intertwined with corresponding object segmentation masks. Recent models, such
as GLaMM and OMG-LLaVA, achieve pixel-level grounding but incur significant
computational costs due to processing a large number of visual tokens. Existing
token pruning methods, like FastV and PyramidDrop, fail to preserve the local
visual features critical for accurate grounding, leading to substantial
performance drops in GCG tasks. To address this, we propose Adaptive
Local-Aware Token Pruning (ALTP), a simple yet effective framework that
accelerates GCG models by prioritizing local object information. ALTP
introduces two key components: (1) Detail Density Capture (DDC), which uses
superpixel segmentation to retain tokens in object-centric regions, preserving
fine-grained details, and (2) Dynamic Density Formation (DDF), which
dynamically allocates tokens based on information density, ensuring higher
retention in semantically rich areas. Extensive experiments on the GranDf
dataset demonstrate that ALTP significantly outperforms existing token pruning
methods, such as FastV and PyramidDrop, on both GLaMM and OMG-LLaVA models.
Notably, when applied to GLaMM, ALTP achieves a 90% reduction in visual tokens
with a 4.9% improvement in AP50 and a 5.0% improvement in Recall compared to
PyramidDrop. Similarly, on OMG-LLaVA, ALTP improves AP by 2.1% and mIOU by 3.0%
at a 90% token reduction compared with PDrop.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 11:18:27 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 08:34:57 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Bai",
"Bizhe",
""
],
[
"Cao",
"Jianjian",
""
],
[
"Luo",
"Yadan",
""
],
[
"Chen",
"Tao",
""
]
] | TITLE: Local Information Matters: Inference Acceleration For Grounded
Conversation Generation Models Through Adaptive Local-Aware Token Pruning
ABSTRACT: Grounded Conversation Generation (GCG) is an emerging vision-language task
that requires models to generate natural language responses seamlessly
intertwined with corresponding object segmentation masks. Recent models, such
as GLaMM and OMG-LLaVA, achieve pixel-level grounding but incur significant
computational costs due to processing a large number of visual tokens. Existing
token pruning methods, like FastV and PyramidDrop, fail to preserve the local
visual features critical for accurate grounding, leading to substantial
performance drops in GCG tasks. To address this, we propose Adaptive
Local-Aware Token Pruning (ALTP), a simple yet effective framework that
accelerates GCG models by prioritizing local object information. ALTP
introduces two key components: (1) Detail Density Capture (DDC), which uses
superpixel segmentation to retain tokens in object-centric regions, preserving
fine-grained details, and (2) Dynamic Density Formation (DDF), which
dynamically allocates tokens based on information density, ensuring higher
retention in semantically rich areas. Extensive experiments on the GranDf
dataset demonstrate that ALTP significantly outperforms existing token pruning
methods, such as FastV and PyramidDrop, on both GLaMM and OMG-LLaVA models.
Notably, when applied to GLaMM, ALTP achieves a 90% reduction in visual tokens
with a 4.9% improvement in AP50 and a 5.0% improvement in Recall compared to
PyramidDrop. Similarly, on OMG-LLaVA, ALTP improves AP by 2.1% and mIOU by 3.0%
at a 90% token reduction compared with PDrop.
| no_new_dataset | 0.951188 |
2503.24026 | Boyuan Wang | Boyuan Wang, Xiaofeng Wang, Chaojun Ni, Guosheng Zhao, Zhiqin Yang,
Zheng Zhu, Muyang Zhang, Yukun Zhou, Xinze Chen, Guan Huang, Lihong Liu,
Xingang Wang | HumanDreamer: Generating Controllable Human-Motion Videos via Decoupled
Generation | Project Page: https://humandreamer.github.io | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human-motion video generation has been a challenging task, primarily due to
the difficulty inherent in learning human body movements. While some approaches
have attempted to drive human-centric video generation explicitly through pose
control, these methods typically rely on poses derived from existing videos,
thereby lacking flexibility. To address this, we propose HumanDreamer, a
decoupled human video generation framework that first generates diverse poses
from text prompts and then leverages these poses to generate human-motion
videos. Specifically, we propose MotionVid, the largest dataset for
human-motion pose generation. Based on the dataset, we present MotionDiT, which
is trained to generate structured human-motion poses from text prompts.
Besides, a novel LAMA loss is introduced, which together contribute to a
significant improvement in FID by 62.4%, along with respective enhancements in
R-precision for top1, top2, and top3 by 41.8%, 26.3%, and 18.3%, thereby
advancing both the Text-to-Pose control accuracy and FID metrics. Our
experiments across various Pose-to-Video baselines demonstrate that the poses
generated by our method can produce diverse and high-quality human-motion
videos. Furthermore, our model can facilitate other downstream tasks, such as
pose sequence prediction and 2D-3D motion lifting.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 12:51:45 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 03:43:35 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wang",
"Boyuan",
""
],
[
"Wang",
"Xiaofeng",
""
],
[
"Ni",
"Chaojun",
""
],
[
"Zhao",
"Guosheng",
""
],
[
"Yang",
"Zhiqin",
""
],
[
"Zhu",
"Zheng",
""
],
[
"Zhang",
"Muyang",
""
],
[
"Zhou",
"Yukun",
""
],
[
"Chen",
"Xinze",
""
],
[
"Huang",
"Guan",
""
],
[
"Liu",
"Lihong",
""
],
[
"Wang",
"Xingang",
""
]
] | TITLE: HumanDreamer: Generating Controllable Human-Motion Videos via Decoupled
Generation
ABSTRACT: Human-motion video generation has been a challenging task, primarily due to
the difficulty inherent in learning human body movements. While some approaches
have attempted to drive human-centric video generation explicitly through pose
control, these methods typically rely on poses derived from existing videos,
thereby lacking flexibility. To address this, we propose HumanDreamer, a
decoupled human video generation framework that first generates diverse poses
from text prompts and then leverages these poses to generate human-motion
videos. Specifically, we propose MotionVid, the largest dataset for
human-motion pose generation. Based on the dataset, we present MotionDiT, which
is trained to generate structured human-motion poses from text prompts.
Besides, a novel LAMA loss is introduced, which together contribute to a
significant improvement in FID by 62.4%, along with respective enhancements in
R-precision for top1, top2, and top3 by 41.8%, 26.3%, and 18.3%, thereby
advancing both the Text-to-Pose control accuracy and FID metrics. Our
experiments across various Pose-to-Video baselines demonstrate that the poses
generated by our method can produce diverse and high-quality human-motion
videos. Furthermore, our model can facilitate other downstream tasks, such as
pose sequence prediction and 2D-3D motion lifting.
| new_dataset | 0.959154 |
2503.24270 | Yuelei Li | Yuelei Li, Hyunjin Kim, Fangneng Zhan, Ri-Zhao Qiu, Mazeyu Ji, Xiaojun
Shan, Xueyan Zou, Paul Liang, Hanspeter Pfister, Xiaolong Wang | Visual Acoustic Fields | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Objects produce different sounds when hit, and humans can intuitively infer
how an object might sound based on its appearance and material properties.
Inspired by this intuition, we propose Visual Acoustic Fields, a framework that
bridges hitting sounds and visual signals within a 3D space using 3D Gaussian
Splatting (3DGS). Our approach features two key modules: sound generation and
sound localization. The sound generation module leverages a conditional
diffusion model, which takes multiscale features rendered from a
feature-augmented 3DGS to generate realistic hitting sounds. Meanwhile, the
sound localization module enables querying the 3D scene, represented by the
feature-augmented 3DGS, to localize hitting positions based on the sound
sources. To support this framework, we introduce a novel pipeline for
collecting scene-level visual-sound sample pairs, achieving alignment between
captured images, impact locations, and corresponding sounds. To the best of our
knowledge, this is the first dataset to connect visual and acoustic signals in
a 3D context. Extensive experiments on our dataset demonstrate the
effectiveness of Visual Acoustic Fields in generating plausible impact sounds
and accurately localizing impact sources. Our project page is at
https://yuelei0428.github.io/projects/Visual-Acoustic-Fields/.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 16:16:10 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 03:16:38 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Li",
"Yuelei",
""
],
[
"Kim",
"Hyunjin",
""
],
[
"Zhan",
"Fangneng",
""
],
[
"Qiu",
"Ri-Zhao",
""
],
[
"Ji",
"Mazeyu",
""
],
[
"Shan",
"Xiaojun",
""
],
[
"Zou",
"Xueyan",
""
],
[
"Liang",
"Paul",
""
],
[
"Pfister",
"Hanspeter",
""
],
[
"Wang",
"Xiaolong",
""
]
] | TITLE: Visual Acoustic Fields
ABSTRACT: Objects produce different sounds when hit, and humans can intuitively infer
how an object might sound based on its appearance and material properties.
Inspired by this intuition, we propose Visual Acoustic Fields, a framework that
bridges hitting sounds and visual signals within a 3D space using 3D Gaussian
Splatting (3DGS). Our approach features two key modules: sound generation and
sound localization. The sound generation module leverages a conditional
diffusion model, which takes multiscale features rendered from a
feature-augmented 3DGS to generate realistic hitting sounds. Meanwhile, the
sound localization module enables querying the 3D scene, represented by the
feature-augmented 3DGS, to localize hitting positions based on the sound
sources. To support this framework, we introduce a novel pipeline for
collecting scene-level visual-sound sample pairs, achieving alignment between
captured images, impact locations, and corresponding sounds. To the best of our
knowledge, this is the first dataset to connect visual and acoustic signals in
a 3D context. Extensive experiments on our dataset demonstrate the
effectiveness of Visual Acoustic Fields in generating plausible impact sounds
and accurately localizing impact sources. Our project page is at
https://yuelei0428.github.io/projects/Visual-Acoustic-Fields/.
| new_dataset | 0.960137 |
2503.24326 | Rupert Polley | Rupert Polley, Sai Vignesh Abishek Deenadayalan, J. Marius Z\"ollner | Self-Supervised Pretraining for Aerial Road Extraction | Accepted at 36th IEEE Intelligent Vehicles Symposium (IV) 2025 Joint
Workshop on Safety, Metrics and Benchmarks for Autonomous Driving | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep neural networks for aerial image segmentation require large amounts of
labeled data, but high-quality aerial datasets with precise annotations are
scarce and costly to produce. To address this limitation, we propose a
self-supervised pretraining method that improves segmentation performance while
reducing reliance on labeled data. Our approach uses inpainting-based
pretraining, where the model learns to reconstruct missing regions in aerial
images, capturing their inherent structure before being fine-tuned for road
extraction. This method improves generalization, enhances robustness to domain
shifts, and is invariant to model architecture and dataset choice. Experiments
show that our pretraining significantly boosts segmentation accuracy,
especially in low-data regimes, making it a scalable solution for aerial image
analysis.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:14:08 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 12:18:44 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Polley",
"Rupert",
""
],
[
"Deenadayalan",
"Sai Vignesh Abishek",
""
],
[
"Zöllner",
"J. Marius",
""
]
] | TITLE: Self-Supervised Pretraining for Aerial Road Extraction
ABSTRACT: Deep neural networks for aerial image segmentation require large amounts of
labeled data, but high-quality aerial datasets with precise annotations are
scarce and costly to produce. To address this limitation, we propose a
self-supervised pretraining method that improves segmentation performance while
reducing reliance on labeled data. Our approach uses inpainting-based
pretraining, where the model learns to reconstruct missing regions in aerial
images, capturing their inherent structure before being fine-tuned for road
extraction. This method improves generalization, enhances robustness to domain
shifts, and is invariant to model architecture and dataset choice. Experiments
show that our pretraining significantly boosts segmentation accuracy,
especially in low-data regimes, making it a scalable solution for aerial image
analysis.
| no_new_dataset | 0.951684 |
2504.00003 | Francisco Rowe Prof | Rodgers Iradukunda, Francisco Rowe, Elisabetta Pietrostefani | Producing population-level estimates of internal displacement in Ukraine
using GPS mobile phone data | 3 figures | null | null | null | physics.soc-ph cs.SI stat.AP | http://creativecommons.org/licenses/by/4.0/ | Nearly 110 million people are forcibly displaced worldwide. However,
estimating the scale and patterns of internally displaced persons in real time,
and developing appropriate policy responses, remain hindered by traditional
data streams. They are infrequently updated, costly and slow. Mobile phone
location data can overcome these limitations, but only represent a population
segment. Drawing on an anonymised large-scale, high-frequency dataset of
locations from 25 million mobile devices, we propose an approach to leverage
mobile phone data and produce population-level estimates of internal
displacement. We use this approach to quantify the extent, pace and geographic
patterns of internal displacement in Ukraine during the early stages of the
Russian invasion in 2022. Our results produce reliable population-level
estimates, enabling real-time monitoring of internal displacement at detailed
spatio-temporal resolutions. Accurate estimations are crucial to support timely
and effective humanitarian and disaster management responses, prioritising
resources where they are most needed.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 21:39:36 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Iradukunda",
"Rodgers",
""
],
[
"Rowe",
"Francisco",
""
],
[
"Pietrostefani",
"Elisabetta",
""
]
] | TITLE: Producing population-level estimates of internal displacement in Ukraine
using GPS mobile phone data
ABSTRACT: Nearly 110 million people are forcibly displaced worldwide. However,
estimating the scale and patterns of internally displaced persons in real time,
and developing appropriate policy responses, remain hindered by traditional
data streams. They are infrequently updated, costly and slow. Mobile phone
location data can overcome these limitations, but only represent a population
segment. Drawing on an anonymised large-scale, high-frequency dataset of
locations from 25 million mobile devices, we propose an approach to leverage
mobile phone data and produce population-level estimates of internal
displacement. We use this approach to quantify the extent, pace and geographic
patterns of internal displacement in Ukraine during the early stages of the
Russian invasion in 2022. Our results produce reliable population-level
estimates, enabling real-time monitoring of internal displacement at detailed
spatio-temporal resolutions. Accurate estimations are crucial to support timely
and effective humanitarian and disaster management responses, prioritising
resources where they are most needed.
| no_new_dataset | 0.695493 |
2504.00019 | Indraneil Paul Mr. | Indraneil Paul, Haoyi Yang, Goran Glava\v{s}, Kristian Kersting, Iryna
Gurevych | ObscuraCoder: Powering Efficient Code LM Pre-Training Via Obfuscation
Grounding | null | null | null | null | cs.CL cs.AI cs.SE | http://creativecommons.org/licenses/by/4.0/ | Language models (LMs) have become a staple of the code-writing toolbox. Their
pre-training recipe has, however, remained stagnant over recent years, barring
the occasional changes in data sourcing and filtering strategies. In
particular, research exploring modifications to Code-LMs' pre-training
objectives, geared towards improving data efficiency and better disentangling
between syntax and semantics, has been noticeably sparse, especially compared
with corresponding efforts in natural language LMs. In this work, we examine
grounding on obfuscated code as a means of helping Code-LMs look beyond the
surface-form syntax and enhance their pre-training sample efficiency. To this
end, we compile ObscuraX, a dataset of approximately 55M source and obfuscated
code pairs in seven languages. Subsequently, we pre-train ObscuraCoder models,
ranging in size from 255M to 2.8B parameters, on a 272B-token corpus that
includes ObscuraX and demonstrate that our obfuscation-based pre-training
recipe leads to consistent improvements in Code-LMs' abilities compared to both
vanilla autoregressive pre-training as well as existing de-obfuscation (DOBF)
objectives. ObscuraCoder demonstrates sizeable gains across multiple tests of
syntactic and semantic code understanding, along with improved capabilities in
multilingual code completion, multilingual code commit summarization, and
multi-purpose library-oriented code generation.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 23:08:53 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Paul",
"Indraneil",
""
],
[
"Yang",
"Haoyi",
""
],
[
"Glavaš",
"Goran",
""
],
[
"Kersting",
"Kristian",
""
],
[
"Gurevych",
"Iryna",
""
]
] | TITLE: ObscuraCoder: Powering Efficient Code LM Pre-Training Via Obfuscation
Grounding
ABSTRACT: Language models (LMs) have become a staple of the code-writing toolbox. Their
pre-training recipe has, however, remained stagnant over recent years, barring
the occasional changes in data sourcing and filtering strategies. In
particular, research exploring modifications to Code-LMs' pre-training
objectives, geared towards improving data efficiency and better disentangling
between syntax and semantics, has been noticeably sparse, especially compared
with corresponding efforts in natural language LMs. In this work, we examine
grounding on obfuscated code as a means of helping Code-LMs look beyond the
surface-form syntax and enhance their pre-training sample efficiency. To this
end, we compile ObscuraX, a dataset of approximately 55M source and obfuscated
code pairs in seven languages. Subsequently, we pre-train ObscuraCoder models,
ranging in size from 255M to 2.8B parameters, on a 272B-token corpus that
includes ObscuraX and demonstrate that our obfuscation-based pre-training
recipe leads to consistent improvements in Code-LMs' abilities compared to both
vanilla autoregressive pre-training as well as existing de-obfuscation (DOBF)
objectives. ObscuraCoder demonstrates sizeable gains across multiple tests of
syntactic and semantic code understanding, along with improved capabilities in
multilingual code completion, multilingual code commit summarization, and
multi-purpose library-oriented code generation.
| new_dataset | 0.959421 |
2504.00020 | Huan Zhao | Huan Zhao, Yiming Liu, Jina Yao, Ling Xiong, Zexin Zhou, Zixing Zhang | Celler:A Genomic Language Model for Long-Tailed Single-Cell Annotation | null | null | null | null | q-bio.GN cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent breakthroughs in single-cell technology have ushered in unparalleled
opportunities to decode the molecular intricacy of intricate biological
systems, especially those linked to diseases unique to humans. However, these
progressions have also ushered in novel obstacles, specifically the efficient
annotation of extensive, long-tailed single-cell data pertaining to disease
conditions. To effectively surmount this challenge, we introduce Celler, a
state-of-the-art generative pre-training model crafted specifically for the
annotation of single-cell data. Celler incorporates two groundbreaking
elements: First, we introduced the Gaussian Inflation (GInf) Loss function. By
dynamically adjusting sample weights, GInf Loss significantly enhances the
model's ability to learn from rare categories while reducing the risk of
overfitting for common categories. Secondly, we introduce an innovative Hard
Data Mining (HDM) strategy into the training process, specifically targeting
the challenging-to-learn minority data samples, which significantly improved
the model's predictive accuracy. Additionally, to further advance research in
this field, we have constructed a large-scale single-cell dataset: Celler-75,
which encompasses 40 million cells distributed across 80 human tissues and 75
specific diseases. This dataset provides critical support for comprehensively
exploring the potential of single-cell technology in disease research. Our code
is available at https://github.com/AI4science-ym/HiCeller.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 02:04:26 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhao",
"Huan",
""
],
[
"Liu",
"Yiming",
""
],
[
"Yao",
"Jina",
""
],
[
"Xiong",
"Ling",
""
],
[
"Zhou",
"Zexin",
""
],
[
"Zhang",
"Zixing",
""
]
] | TITLE: Celler:A Genomic Language Model for Long-Tailed Single-Cell Annotation
ABSTRACT: Recent breakthroughs in single-cell technology have ushered in unparalleled
opportunities to decode the molecular intricacy of intricate biological
systems, especially those linked to diseases unique to humans. However, these
progressions have also ushered in novel obstacles, specifically the efficient
annotation of extensive, long-tailed single-cell data pertaining to disease
conditions. To effectively surmount this challenge, we introduce Celler, a
state-of-the-art generative pre-training model crafted specifically for the
annotation of single-cell data. Celler incorporates two groundbreaking
elements: First, we introduced the Gaussian Inflation (GInf) Loss function. By
dynamically adjusting sample weights, GInf Loss significantly enhances the
model's ability to learn from rare categories while reducing the risk of
overfitting for common categories. Secondly, we introduce an innovative Hard
Data Mining (HDM) strategy into the training process, specifically targeting
the challenging-to-learn minority data samples, which significantly improved
the model's predictive accuracy. Additionally, to further advance research in
this field, we have constructed a large-scale single-cell dataset: Celler-75,
which encompasses 40 million cells distributed across 80 human tissues and 75
specific diseases. This dataset provides critical support for comprehensively
exploring the potential of single-cell technology in disease research. Our code
is available at https://github.com/AI4science-ym/HiCeller.
| new_dataset | 0.956997 |
2504.00023 | Niklas Rottmayer | Niklas Rottmayer and Claudia Redenbach | A Novel Distance-Based Metric for Quality Assessment in Image
Segmentation | null | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | The assessment of segmentation quality plays a fundamental role in the
development, optimization, and comparison of segmentation methods which are
used in a wide range of applications. With few exceptions, quality assessment
is performed using traditional metrics, which are based on counting the number
of erroneous pixels but do not capture the spatial distribution of errors.
Established distance-based metrics such as the average Hausdorff distance are
difficult to interpret and compare for different methods and datasets. In this
paper, we introduce the Surface Consistency Coefficient (SCC), a novel
distance-based quality metric that quantifies the spatial distribution of
errors based on their proximity to the surface of the structure. Through a
rigorous analysis using synthetic data and real segmentation results, we
demonstrate the robustness and effectiveness of SCC in distinguishing errors
near the surface from those further away. At the same time, SCC is easy to
interpret and comparable across different structural contexts.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 12:02:09 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Rottmayer",
"Niklas",
""
],
[
"Redenbach",
"Claudia",
""
]
] | TITLE: A Novel Distance-Based Metric for Quality Assessment in Image
Segmentation
ABSTRACT: The assessment of segmentation quality plays a fundamental role in the
development, optimization, and comparison of segmentation methods which are
used in a wide range of applications. With few exceptions, quality assessment
is performed using traditional metrics, which are based on counting the number
of erroneous pixels but do not capture the spatial distribution of errors.
Established distance-based metrics such as the average Hausdorff distance are
difficult to interpret and compare for different methods and datasets. In this
paper, we introduce the Surface Consistency Coefficient (SCC), a novel
distance-based quality metric that quantifies the spatial distribution of
errors based on their proximity to the surface of the structure. Through a
rigorous analysis using synthetic data and real segmentation results, we
demonstrate the robustness and effectiveness of SCC in distinguishing errors
near the surface from those further away. At the same time, SCC is easy to
interpret and comparable across different structural contexts.
| no_new_dataset | 0.946498 |
2504.00026 | Jose Jorge Moutinho Uliana | Jos\'e J. M. Uliana, Renato A. Krohling | Diffusion models applied to skin and oral cancer classification | null | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | This study investigates the application of diffusion models in medical image
classification (DiffMIC), focusing on skin and oral lesions. Utilizing the
datasets PAD-UFES-20 for skin cancer and P-NDB-UFES for oral cancer, the
diffusion model demonstrated competitive performance compared to
state-of-the-art deep learning models like Convolutional Neural Networks (CNNs)
and Transformers. Specifically, for the PAD-UFES-20 dataset, the model achieved
a balanced accuracy of 0.6457 for six-class classification and 0.8357 for
binary classification (cancer vs. non-cancer). For the P-NDB-UFES dataset, it
attained a balanced accuracy of 0.9050. These results suggest that diffusion
models are viable models for classifying medical images of skin and oral
lesions. In addition, we investigate the robustness of the model trained on
PAD-UFES-20 for skin cancer but tested on the clinical images of the HIBA
dataset.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 20:29:35 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Uliana",
"José J. M.",
""
],
[
"Krohling",
"Renato A.",
""
]
] | TITLE: Diffusion models applied to skin and oral cancer classification
ABSTRACT: This study investigates the application of diffusion models in medical image
classification (DiffMIC), focusing on skin and oral lesions. Utilizing the
datasets PAD-UFES-20 for skin cancer and P-NDB-UFES for oral cancer, the
diffusion model demonstrated competitive performance compared to
state-of-the-art deep learning models like Convolutional Neural Networks (CNNs)
and Transformers. Specifically, for the PAD-UFES-20 dataset, the model achieved
a balanced accuracy of 0.6457 for six-class classification and 0.8357 for
binary classification (cancer vs. non-cancer). For the P-NDB-UFES dataset, it
attained a balanced accuracy of 0.9050. These results suggest that diffusion
models are viable models for classifying medical images of skin and oral
lesions. In addition, we investigate the robustness of the model trained on
PAD-UFES-20 for skin cancer but tested on the clinical images of the HIBA
dataset.
| no_new_dataset | 0.934694 |
2504.00036 | Hido Pinto | Hido Pinto, Eran Segal | Improving Diseases Predictions Utilizing External Bio-Banks | null | null | null | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Machine learning has been successfully used in critical domains, such as
medicine. However, extracting meaningful insights from biomedical data is often
constrained by the lack of available disease labels. In this research, we
demonstrate how machine learning can be leveraged to enhance explainability and
uncover biologically meaningful associations, even when predictive improvements
in disease modeling are limited. We train LightGBM models from scratch on our
dataset (10K) to impute metabolomics features and apply them to the UK Biobank
(UKBB) for downstream analysis. The imputed metabolomics features are then used
in survival analysis to assess their impact on disease-related risk factors. As
a result, our approach successfully identified biologically relevant
connections that were not previously known to the predictive models.
Additionally, we applied a genome-wide association study (GWAS) on key
metabolomics features, revealing a link between vascular dementia and smoking.
Although this is a well-established epidemiological relationship, the link was
not embedded in the model's training data, which validated the method's ability
to extract meaningful signals. Furthermore, by integrating survival models as
inputs in the 10K data, we uncovered associations between metabolic substances
and obesity, demonstrating the ability to infer disease risk for future
patients without requiring direct outcome labels. These findings highlight the
potential of leveraging external bio-banks to extract valuable biomedical
insights, even in data-limited scenarios. Our results demonstrate that machine
learning models trained on smaller datasets can still be used to uncover real
biological associations when carefully integrated with survival analysis and
genetic studies.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 13:05:20 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Pinto",
"Hido",
""
],
[
"Segal",
"Eran",
""
]
] | TITLE: Improving Diseases Predictions Utilizing External Bio-Banks
ABSTRACT: Machine learning has been successfully used in critical domains, such as
medicine. However, extracting meaningful insights from biomedical data is often
constrained by the lack of available disease labels. In this research, we
demonstrate how machine learning can be leveraged to enhance explainability and
uncover biologically meaningful associations, even when predictive improvements
in disease modeling are limited. We train LightGBM models from scratch on our
dataset (10K) to impute metabolomics features and apply them to the UK Biobank
(UKBB) for downstream analysis. The imputed metabolomics features are then used
in survival analysis to assess their impact on disease-related risk factors. As
a result, our approach successfully identified biologically relevant
connections that were not previously known to the predictive models.
Additionally, we applied a genome-wide association study (GWAS) on key
metabolomics features, revealing a link between vascular dementia and smoking.
Although this is a well-established epidemiological relationship, the link was
not embedded in the model's training data, which validated the method's ability
to extract meaningful signals. Furthermore, by integrating survival models as
inputs in the 10K data, we uncovered associations between metabolic substances
and obesity, demonstrating the ability to infer disease risk for future
patients without requiring direct outcome labels. These findings highlight the
potential of leveraging external bio-banks to extract valuable biomedical
insights, even in data-limited scenarios. Our results demonstrate that machine
learning models trained on smaller datasets can still be used to uncover real
biological associations when carefully integrated with survival analysis and
genetic studies.
| no_new_dataset | 0.940626 |
2504.00045 | Adrian Bermudez-VIllalva | Adrian Bermudez-Villalva, Maryam Mehrnezhad and Ehsan Toreini | Measuring Online Hate on 4chan using Pre-trained Deep Learning Models | IEEE Transactions on Technology and Society, 11 pages | null | 10.1109/TTS.2025.3549931 | null | cs.CL cs.CY | http://creativecommons.org/licenses/by/4.0/ | Online hate speech can harmfully impact individuals and groups, specifically
on non-moderated platforms such as 4chan where users can post anonymous
content. This work focuses on analysing and measuring the prevalence of online
hate on 4chan's politically incorrect board (/pol/) using state-of-the-art
Natural Language Processing (NLP) models, specifically transformer-based models
such as RoBERTa and Detoxify. By leveraging these advanced models, we provide
an in-depth analysis of hate speech dynamics and quantify the extent of online
hate on non-moderated platforms. The study advances understanding through
multi-class classification of hate speech (racism, sexism, religion, etc.),
while also incorporating the classification of toxic content (e.g., identity
attacks and threats) and a further topic modelling analysis. The results show
that 11.20% of this dataset is identified as containing hate in different
categories. These evaluations show that online hate is manifested in various
forms, confirming the complicated and volatile nature of detection in the wild.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 22:47:11 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Bermudez-Villalva",
"Adrian",
""
],
[
"Mehrnezhad",
"Maryam",
""
],
[
"Toreini",
"Ehsan",
""
]
] | TITLE: Measuring Online Hate on 4chan using Pre-trained Deep Learning Models
ABSTRACT: Online hate speech can harmfully impact individuals and groups, specifically
on non-moderated platforms such as 4chan where users can post anonymous
content. This work focuses on analysing and measuring the prevalence of online
hate on 4chan's politically incorrect board (/pol/) using state-of-the-art
Natural Language Processing (NLP) models, specifically transformer-based models
such as RoBERTa and Detoxify. By leveraging these advanced models, we provide
an in-depth analysis of hate speech dynamics and quantify the extent of online
hate on non-moderated platforms. The study advances understanding through
multi-class classification of hate speech (racism, sexism, religion, etc.),
while also incorporating the classification of toxic content (e.g., identity
attacks and threats) and a further topic modelling analysis. The results show
that 11.20% of this dataset is identified as containing hate in different
categories. These evaluations show that online hate is manifested in various
forms, confirming the complicated and volatile nature of detection in the wild.
| no_new_dataset | 0.939471 |
2504.00058 | Chamodya Attanayake | Lahiru Akmeemana, Chamodya Attanayake, Husni Faiz, Sandareka
Wickramanayake | GAL-MAD: Towards Explainable Anomaly Detection in Microservice
Applications Using Graph Attention Networks | 14 pages, preprint, 10 figures | null | null | null | cs.SE cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The transition to microservices has revolutionized software architectures,
offering enhanced scalability and modularity. However, the distributed and
dynamic nature of microservices introduces complexities in ensuring system
reliability, making anomaly detection crucial for maintaining performance and
functionality. Anomalies stemming from network and performance issues must be
swiftly identified and addressed. Existing anomaly detection techniques often
rely on statistical models or machine learning methods that struggle with the
high-dimensional, interdependent data inherent in microservice applications.
Current techniques and available datasets predominantly focus on system traces
and logs, limiting their ability to support advanced detection models. This
paper addresses these gaps by introducing the RS-Anomic dataset generated using
the open-source RobotShop microservice application. The dataset captures
multivariate performance metrics and response times under normal and anomalous
conditions, encompassing ten types of anomalies. We propose a novel anomaly
detection model called Graph Attention and LSTM-based Microservice Anomaly
Detection (GAL-MAD), leveraging Graph Attention and Long Short-Term Memory
architectures to capture spatial and temporal dependencies in microservices. We
utilize SHAP values to localize anomalous services and identify root causes to
enhance explainability. Experimental results demonstrate that GAL-MAD
outperforms state-of-the-art models on the RS-Anomic dataset, achieving higher
accuracy and recall across varying anomaly rates. The explanations provide
actionable insights into service anomalies, which benefits system
administrators.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 10:11:31 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Akmeemana",
"Lahiru",
""
],
[
"Attanayake",
"Chamodya",
""
],
[
"Faiz",
"Husni",
""
],
[
"Wickramanayake",
"Sandareka",
""
]
] | TITLE: GAL-MAD: Towards Explainable Anomaly Detection in Microservice
Applications Using Graph Attention Networks
ABSTRACT: The transition to microservices has revolutionized software architectures,
offering enhanced scalability and modularity. However, the distributed and
dynamic nature of microservices introduces complexities in ensuring system
reliability, making anomaly detection crucial for maintaining performance and
functionality. Anomalies stemming from network and performance issues must be
swiftly identified and addressed. Existing anomaly detection techniques often
rely on statistical models or machine learning methods that struggle with the
high-dimensional, interdependent data inherent in microservice applications.
Current techniques and available datasets predominantly focus on system traces
and logs, limiting their ability to support advanced detection models. This
paper addresses these gaps by introducing the RS-Anomic dataset generated using
the open-source RobotShop microservice application. The dataset captures
multivariate performance metrics and response times under normal and anomalous
conditions, encompassing ten types of anomalies. We propose a novel anomaly
detection model called Graph Attention and LSTM-based Microservice Anomaly
Detection (GAL-MAD), leveraging Graph Attention and Long Short-Term Memory
architectures to capture spatial and temporal dependencies in microservices. We
utilize SHAP values to localize anomalous services and identify root causes to
enhance explainability. Experimental results demonstrate that GAL-MAD
outperforms state-of-the-art models on the RS-Anomic dataset, achieving higher
accuracy and recall across varying anomaly rates. The explanations provide
actionable insights into service anomalies, which benefits system
administrators.
| new_dataset | 0.78899 |
2504.00061 | Dou Liu | Dou Liu, Ying Long, Sophia Zuoqiu, Tian Tang, Rong Yin | Evaluating the Feasibility and Accuracy of Large Language Models for
Medical History-Taking in Obstetrics and Gynecology | Accepted by IISE 2025 annual conference | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Effective physician-patient communications in pre-diagnostic environments,
and most specifically in complex and sensitive medical areas such as
infertility, are critical but consume a lot of time and, therefore, cause
clinic workflows to become inefficient. Recent advancements in Large Language
Models (LLMs) offer a potential solution for automating conversational medical
history-taking and improving diagnostic accuracy. This study evaluates the
feasibility and performance of LLMs in those tasks for infertility cases. An
AI-driven conversational system was developed to simulate physician-patient
interactions with ChatGPT-4o and ChatGPT-4o-mini. A total of 70 real-world
infertility cases were processed, generating 420 diagnostic histories. Model
performance was assessed using F1 score, Differential Diagnosis (DDs) Accuracy,
and Accuracy of Infertility Type Judgment (ITJ). ChatGPT-4o-mini outperformed
ChatGPT-4o in information extraction accuracy (F1 score: 0.9258 vs. 0.9029, p =
0.045, d = 0.244) and demonstrated higher completeness in medical
history-taking (97.58% vs. 77.11%), suggesting that ChatGPT-4o-mini is more
effective in extracting detailed patient information, which is critical for
improving diagnostic accuracy. In contrast, ChatGPT-4o performed slightly
better in differential diagnosis accuracy (2.0524 vs. 2.0048, p > 0.05). ITJ
accuracy was higher in ChatGPT-4o-mini (0.6476 vs. 0.5905) but with lower
consistency (Cronbach's $\alpha$ = 0.562), suggesting variability in
classification reliability. Both models demonstrated strong feasibility in
automating infertility history-taking, with ChatGPT-4o-mini excelling in
completeness and extraction accuracy. In future studies, expert validation for
accuracy and dependability in a clinical setting, AI model fine-tuning, and
larger datasets with a mix of cases of infertility have to be prioritized.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 14:09:53 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Liu",
"Dou",
""
],
[
"Long",
"Ying",
""
],
[
"Zuoqiu",
"Sophia",
""
],
[
"Tang",
"Tian",
""
],
[
"Yin",
"Rong",
""
]
] | TITLE: Evaluating the Feasibility and Accuracy of Large Language Models for
Medical History-Taking in Obstetrics and Gynecology
ABSTRACT: Effective physician-patient communications in pre-diagnostic environments,
and most specifically in complex and sensitive medical areas such as
infertility, are critical but consume a lot of time and, therefore, cause
clinic workflows to become inefficient. Recent advancements in Large Language
Models (LLMs) offer a potential solution for automating conversational medical
history-taking and improving diagnostic accuracy. This study evaluates the
feasibility and performance of LLMs in those tasks for infertility cases. An
AI-driven conversational system was developed to simulate physician-patient
interactions with ChatGPT-4o and ChatGPT-4o-mini. A total of 70 real-world
infertility cases were processed, generating 420 diagnostic histories. Model
performance was assessed using F1 score, Differential Diagnosis (DDs) Accuracy,
and Accuracy of Infertility Type Judgment (ITJ). ChatGPT-4o-mini outperformed
ChatGPT-4o in information extraction accuracy (F1 score: 0.9258 vs. 0.9029, p =
0.045, d = 0.244) and demonstrated higher completeness in medical
history-taking (97.58% vs. 77.11%), suggesting that ChatGPT-4o-mini is more
effective in extracting detailed patient information, which is critical for
improving diagnostic accuracy. In contrast, ChatGPT-4o performed slightly
better in differential diagnosis accuracy (2.0524 vs. 2.0048, p > 0.05). ITJ
accuracy was higher in ChatGPT-4o-mini (0.6476 vs. 0.5905) but with lower
consistency (Cronbach's $\alpha$ = 0.562), suggesting variability in
classification reliability. Both models demonstrated strong feasibility in
automating infertility history-taking, with ChatGPT-4o-mini excelling in
completeness and extraction accuracy. In future studies, expert validation for
accuracy and dependability in a clinical setting, AI model fine-tuning, and
larger datasets with a mix of cases of infertility have to be prioritized.
| no_new_dataset | 0.951594 |
2504.00068 | Sanjay Chakraborty | Sanjay Chakraborty, Fredrik Heintz | Integrating Quantum-Classical Attention in Patch Transformers for
Enhanced Time Series Forecasting | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | QCAAPatchTF is a quantum attention network integrated with an advanced
patch-based transformer, designed for multivariate time series forecasting,
classification, and anomaly detection. Leveraging quantum superpositions,
entanglement, and variational quantum eigensolver principles, the model
introduces a quantum-classical hybrid self-attention mechanism to capture
multivariate correlations across time points. For multivariate long-term time
series, the quantum self-attention mechanism can reduce computational
complexity while maintaining temporal relationships. It then applies the
quantum-classical hybrid self-attention mechanism alongside a feed-forward
network in the encoder stage of the advanced patch-based transformer. While the
feed-forward network learns nonlinear representations for each variable frame,
the quantum self-attention mechanism processes individual series to enhance
multivariate relationships. The advanced patch-based transformer computes the
optimized patch length by dividing the sequence length into a fixed number of
patches instead of using an arbitrary set of values. The stride is then set to
half of the patch length to ensure efficient overlapping representations while
maintaining temporal continuity. QCAAPatchTF achieves state-of-the-art
performance in both long-term and short-term forecasting, classification, and
anomaly detection tasks, demonstrating state-of-the-art accuracy and efficiency
on complex real-world datasets.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:23:36 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Chakraborty",
"Sanjay",
""
],
[
"Heintz",
"Fredrik",
""
]
] | TITLE: Integrating Quantum-Classical Attention in Patch Transformers for
Enhanced Time Series Forecasting
ABSTRACT: QCAAPatchTF is a quantum attention network integrated with an advanced
patch-based transformer, designed for multivariate time series forecasting,
classification, and anomaly detection. Leveraging quantum superpositions,
entanglement, and variational quantum eigensolver principles, the model
introduces a quantum-classical hybrid self-attention mechanism to capture
multivariate correlations across time points. For multivariate long-term time
series, the quantum self-attention mechanism can reduce computational
complexity while maintaining temporal relationships. It then applies the
quantum-classical hybrid self-attention mechanism alongside a feed-forward
network in the encoder stage of the advanced patch-based transformer. While the
feed-forward network learns nonlinear representations for each variable frame,
the quantum self-attention mechanism processes individual series to enhance
multivariate relationships. The advanced patch-based transformer computes the
optimized patch length by dividing the sequence length into a fixed number of
patches instead of using an arbitrary set of values. The stride is then set to
half of the patch length to ensure efficient overlapping representations while
maintaining temporal continuity. QCAAPatchTF achieves state-of-the-art
performance in both long-term and short-term forecasting, classification, and
anomaly detection tasks, demonstrating state-of-the-art accuracy and efficiency
on complex real-world datasets.
| no_new_dataset | 0.949716 |
2504.00070 | Sanjay Chakraborty | Sanjay Chakraborty, Fredrik Heintz | Enhancing Time Series Forecasting with Fuzzy Attention-Integrated
Transformers | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper introduces FANTF (Fuzzy Attention Network-Based Transformers), a
novel approach that integrates fuzzy logic with existing transformer
architectures to advance time series forecasting, classification, and anomaly
detection tasks. FANTF leverages a proposed fuzzy attention mechanism
incorporating fuzzy membership functions to handle uncertainty and imprecision
in noisy and ambiguous time series data. The FANTF approach enhances its
ability to capture complex temporal dependencies and multivariate relationships
by embedding fuzzy logic principles into the self-attention module of the
existing transformer's architecture. The framework combines fuzzy-enhanced
attention with a set of benchmark existing transformer-based architectures to
provide efficient predictions, classification and anomaly detection.
Specifically, FANTF generates learnable fuzziness attention scores that
highlight the relative importance of temporal features and data points,
offering insights into its decision-making process. Experimental evaluations on
some real-world datasets reveal that FANTF significantly enhances the
performance of forecasting, classification, and anomaly detection tasks over
traditional transformer-based models.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:33:50 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Chakraborty",
"Sanjay",
""
],
[
"Heintz",
"Fredrik",
""
]
] | TITLE: Enhancing Time Series Forecasting with Fuzzy Attention-Integrated
Transformers
ABSTRACT: This paper introduces FANTF (Fuzzy Attention Network-Based Transformers), a
novel approach that integrates fuzzy logic with existing transformer
architectures to advance time series forecasting, classification, and anomaly
detection tasks. FANTF leverages a proposed fuzzy attention mechanism
incorporating fuzzy membership functions to handle uncertainty and imprecision
in noisy and ambiguous time series data. The FANTF approach enhances its
ability to capture complex temporal dependencies and multivariate relationships
by embedding fuzzy logic principles into the self-attention module of the
existing transformer's architecture. The framework combines fuzzy-enhanced
attention with a set of benchmark existing transformer-based architectures to
provide efficient predictions, classification and anomaly detection.
Specifically, FANTF generates learnable fuzziness attention scores that
highlight the relative importance of temporal features and data points,
offering insights into its decision-making process. Experimental evaluations on
some real-world datasets reveal that FANTF significantly enhances the
performance of forecasting, classification, and anomaly detection tasks over
traditional transformer-based models.
| no_new_dataset | 0.946794 |
2504.00120 | Xavier Mootoo | Xavier Mootoo, Hina Tabassum, Luca Chiaraviglio | EMForecaster: A Deep Learning Framework for Time Series Forecasting in
Wireless Networks with Distribution-Free Uncertainty Quantification | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the recent advancements in wireless technologies, forecasting
electromagnetic field (EMF) exposure has become critical to enable proactive
network spectrum and power allocation, as well as network deployment planning.
In this paper, we develop a deep learning (DL) time series forecasting
framework referred to as \textit{EMForecaster}. The proposed DL architecture
employs patching to process temporal patterns at multiple scales, complemented
by reversible instance normalization and mixing operations along both temporal
and patch dimensions for efficient feature extraction. We augment
{EMForecaster} with a conformal prediction mechanism, which is independent of
the data distribution, to enhance the trustworthiness of model predictions via
uncertainty quantification of forecasts. This conformal prediction mechanism
ensures that the ground truth lies within a prediction interval with target
error rate $\alpha$, where $1-\alpha$ is referred to as coverage. However, a
trade-off exists, as increasing coverage often results in wider prediction
intervals. To address this challenge, we propose a new metric called the
\textit{Trade-off Score}, that balances trustworthiness of the forecast (i.e.,
coverage) and the width of prediction interval. Our experiments demonstrate
that EMForecaster achieves superior performance across diverse EMF datasets,
spanning both short-term and long-term prediction horizons. In point
forecasting tasks, EMForecaster substantially outperforms current
state-of-the-art DL approaches, showing improvements of 53.97\% over the
Transformer architecture and 38.44\% over the average of all baseline models.
EMForecaster also exhibits an excellent balance between prediction interval
width and coverage in conformal forecasting, measured by the tradeoff score,
showing marked improvements of 24.73\% over the average baseline and 49.17\%
over the Transformer architecture.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 18:10:08 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Mootoo",
"Xavier",
""
],
[
"Tabassum",
"Hina",
""
],
[
"Chiaraviglio",
"Luca",
""
]
] | TITLE: EMForecaster: A Deep Learning Framework for Time Series Forecasting in
Wireless Networks with Distribution-Free Uncertainty Quantification
ABSTRACT: With the recent advancements in wireless technologies, forecasting
electromagnetic field (EMF) exposure has become critical to enable proactive
network spectrum and power allocation, as well as network deployment planning.
In this paper, we develop a deep learning (DL) time series forecasting
framework referred to as \textit{EMForecaster}. The proposed DL architecture
employs patching to process temporal patterns at multiple scales, complemented
by reversible instance normalization and mixing operations along both temporal
and patch dimensions for efficient feature extraction. We augment
{EMForecaster} with a conformal prediction mechanism, which is independent of
the data distribution, to enhance the trustworthiness of model predictions via
uncertainty quantification of forecasts. This conformal prediction mechanism
ensures that the ground truth lies within a prediction interval with target
error rate $\alpha$, where $1-\alpha$ is referred to as coverage. However, a
trade-off exists, as increasing coverage often results in wider prediction
intervals. To address this challenge, we propose a new metric called the
\textit{Trade-off Score}, that balances trustworthiness of the forecast (i.e.,
coverage) and the width of prediction interval. Our experiments demonstrate
that EMForecaster achieves superior performance across diverse EMF datasets,
spanning both short-term and long-term prediction horizons. In point
forecasting tasks, EMForecaster substantially outperforms current
state-of-the-art DL approaches, showing improvements of 53.97\% over the
Transformer architecture and 38.44\% over the average of all baseline models.
EMForecaster also exhibits an excellent balance between prediction interval
width and coverage in conformal forecasting, measured by the tradeoff score,
showing marked improvements of 24.73\% over the average baseline and 49.17\%
over the Transformer architecture.
| no_new_dataset | 0.952397 |
2504.00139 | Yannick Burkhardt | Yannick Burkhardt, Simon Schaefer, Stefan Leutenegger | SuperEvent: Cross-Modal Learning of Event-based Keypoint Detection | In Review for ICCV25 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event-based keypoint detection and matching holds significant potential,
enabling the integration of event sensors into highly optimized Visual SLAM
systems developed for frame cameras over decades of research. Unfortunately,
existing approaches struggle with the motion-dependent appearance of keypoints
and the complex noise prevalent in event streams, resulting in severely limited
feature matching capabilities and poor performance on downstream tasks. To
mitigate this problem, we propose SuperEvent, a data-driven approach to predict
stable keypoints with expressive descriptors. Due to the absence of event
datasets with ground truth keypoint labels, we leverage existing frame-based
keypoint detectors on readily available event-aligned and synchronized
gray-scale frames for self-supervision: we generate temporally sparse keypoint
pseudo-labels considering that events are a product of both scene appearance
and camera motion. Combined with our novel, information-rich event
representation, we enable SuperEvent to effectively learn robust keypoint
detection and description in event streams. Finally, we demonstrate the
usefulness of SuperEvent by its integration into a modern sparse keypoint and
descriptor-based SLAM framework originally developed for traditional cameras,
surpassing the state-of-the-art in event-based SLAM by a wide margin. Source
code and multimedia material are available at
smartroboticslab.github.io/SuperEvent.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 18:46:02 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Burkhardt",
"Yannick",
""
],
[
"Schaefer",
"Simon",
""
],
[
"Leutenegger",
"Stefan",
""
]
] | TITLE: SuperEvent: Cross-Modal Learning of Event-based Keypoint Detection
ABSTRACT: Event-based keypoint detection and matching holds significant potential,
enabling the integration of event sensors into highly optimized Visual SLAM
systems developed for frame cameras over decades of research. Unfortunately,
existing approaches struggle with the motion-dependent appearance of keypoints
and the complex noise prevalent in event streams, resulting in severely limited
feature matching capabilities and poor performance on downstream tasks. To
mitigate this problem, we propose SuperEvent, a data-driven approach to predict
stable keypoints with expressive descriptors. Due to the absence of event
datasets with ground truth keypoint labels, we leverage existing frame-based
keypoint detectors on readily available event-aligned and synchronized
gray-scale frames for self-supervision: we generate temporally sparse keypoint
pseudo-labels considering that events are a product of both scene appearance
and camera motion. Combined with our novel, information-rich event
representation, we enable SuperEvent to effectively learn robust keypoint
detection and description in event streams. Finally, we demonstrate the
usefulness of SuperEvent by its integration into a modern sparse keypoint and
descriptor-based SLAM framework originally developed for traditional cameras,
surpassing the state-of-the-art in event-based SLAM by a wide margin. Source
code and multimedia material are available at
smartroboticslab.github.io/SuperEvent.
| no_new_dataset | 0.95297 |
2504.00142 | Srinitish Srinivasan | Srinitish Srinivasan and Omkumar CU | Lorentzian Graph Isomorphic Network | Preprint. Under Review | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce the Lorentzian Graph Isomorphic Network (LGIN), a novel graph
neural network (GNN) designed to operate in hyperbolic spaces, leveraging the
Lorentzian model to enhance graph representation learning. Existing GNNs
primarily operate in Euclidean spaces, which can limit their ability to capture
hierarchical and multi-relational structures inherent to complex graphs. LGIN
addresses this by incorporating curvature-aware aggregation functions that
preserve the Lorentzian metric tensor, ensuring embeddings remain constrained
within the hyperbolic space, and by proposing a new update rule that effectively
captures both local neighborhood interactions and global structural properties,
enabling LGIN to distinguish non-isomorphic graphs with expressiveness at least
as powerful as the Weisfeiler-Lehman test. Through extensive evaluation across
nine benchmark datasets, including molecular and protein structures, LGIN
consistently outperforms or matches state-of-the-art GNNs, demonstrating its
robustness and efficacy in modeling complex graph structures. To the best of
our knowledge, this is the first study to extend the concept of a powerful
graph neural network to Riemannian manifolds, paving the way for future
advancements in hyperbolic graph learning. The code for our paper can be found
at https://github.com/Deceptrax123/LGIN.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 18:49:34 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Srinivasan",
"Srinitish",
""
],
[
"CU",
"Omkumar",
""
]
] | TITLE: Lorentzian Graph Isomorphic Network
ABSTRACT: We introduce the Lorentzian Graph Isomorphic Network (LGIN), a novel graph
neural network (GNN) designed to operate in hyperbolic spaces, leveraging the
Lorentzian model to enhance graph representation learning. Existing GNNs
primarily operate in Euclidean spaces, which can limit their ability to capture
hierarchical and multi-relational structures inherent to complex graphs. LGIN
addresses this by incorporating curvature-aware aggregation functions that
preserve the Lorentzian metric tensor, ensuring embeddings remain constrained
within the hyperbolic space, and by proposing a new update rule that effectively
captures both local neighborhood interactions and global structural properties,
enabling LGIN to distinguish non-isomorphic graphs with expressiveness at least
as powerful as the Weisfeiler-Lehman test. Through extensive evaluation across
nine benchmark datasets, including molecular and protein structures, LGIN
consistently outperforms or matches state-of-the-art GNNs, demonstrating its
robustness and efficacy in modeling complex graph structures. To the best of
our knowledge, this is the first study to extend the concept of a powerful
graph neural network to Riemannian manifolds, paving the way for future
advancements in hyperbolic graph learning. The code for our paper can be found
at https://github.com/Deceptrax123/LGIN.
| no_new_dataset | 0.94801 |
2504.00150 | Yongyi Shi | Yongyi Shi, Ge Wang | Few-Shot Generation of Brain Tumors for Secure and Fair Data Sharing | 17 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Leveraging multi-center data for medical analytics presents challenges due to
privacy concerns and data heterogeneity. While distributed approaches such as
federated learning have gained traction, they remain vulnerable to privacy
breaches, particularly in sensitive domains like medical imaging. Generative
models, such as diffusion models, enhance privacy by synthesizing realistic
data. However, they are prone to memorization, especially when trained on small
datasets. This study proposes a decentralized few-shot generative model (DFGM)
to synthesize brain tumor images while fully preserving privacy. DFGM
harmonizes private tumor data with publicly shareable healthy images from
multiple medical centers, constructing a new dataset by blending tumor
foregrounds with healthy backgrounds. This approach ensures stringent privacy
protection and enables controllable, high-quality synthesis by preserving both
the healthy backgrounds and tumor foregrounds. We assess DFGM's effectiveness
in brain tumor segmentation using a UNet, achieving Dice score improvements of
3.9% for data augmentation and 4.6% for fairness on a separate dataset.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 18:59:15 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Shi",
"Yongyi",
""
],
[
"Wang",
"Ge",
""
]
] | TITLE: Few-Shot Generation of Brain Tumors for Secure and Fair Data Sharing
ABSTRACT: Leveraging multi-center data for medical analytics presents challenges due to
privacy concerns and data heterogeneity. While distributed approaches such as
federated learning have gained traction, they remain vulnerable to privacy
breaches, particularly in sensitive domains like medical imaging. Generative
models, such as diffusion models, enhance privacy by synthesizing realistic
data. However, they are prone to memorization, especially when trained on small
datasets. This study proposes a decentralized few-shot generative model (DFGM)
to synthesize brain tumor images while fully preserving privacy. DFGM
harmonizes private tumor data with publicly shareable healthy images from
multiple medical centers, constructing a new dataset by blending tumor
foregrounds with healthy backgrounds. This approach ensures stringent privacy
protection and enables controllable, high-quality synthesis by preserving both
the healthy backgrounds and tumor foregrounds. We assess DFGM's effectiveness
in brain tumor segmentation using a UNet, achieving Dice score improvements of
3.9% for data augmentation and 4.6% for fairness on a separate dataset.
| new_dataset | 0.724627 |
2504.00159 | Advaith Venkatramanan Sethuraman | Advaith V. Sethuraman, Max Rucker, Onur Bagoren, Pou-Chun Kung,
Nibarkavi N.B. Amutha, Katherine A. Skinner | SonarSplat: Novel View Synthesis of Imaging Sonar via Gaussian Splatting | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we present SonarSplat, a novel Gaussian splatting framework
for imaging sonar that demonstrates realistic novel view synthesis and models
acoustic streaking phenomena. Our method represents the scene as a set of 3D
Gaussians with acoustic reflectance and saturation properties. We develop a
novel method to efficiently rasterize learned Gaussians to produce a
range/azimuth image that is faithful to the acoustic image formation model of
imaging sonar. In particular, we develop a novel approach to model azimuth
streaking in a Gaussian splatting framework. We evaluate SonarSplat using
real-world datasets of sonar images collected from an underwater robotic
platform in a controlled test tank and in a real-world river environment.
Compared to the state-of-the-art, SonarSplat offers improved image synthesis
capabilities (+2.5 dB PSNR). We also demonstrate that SonarSplat can be
leveraged for azimuth streak removal and 3D scene reconstruction.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 19:13:45 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Sethuraman",
"Advaith V.",
""
],
[
"Rucker",
"Max",
""
],
[
"Bagoren",
"Onur",
""
],
[
"Kung",
"Pou-Chun",
""
],
[
"Amutha",
"Nibarkavi N. B.",
""
],
[
"Skinner",
"Katherine A.",
""
]
] | TITLE: SonarSplat: Novel View Synthesis of Imaging Sonar via Gaussian Splatting
ABSTRACT: In this paper, we present SonarSplat, a novel Gaussian splatting framework
for imaging sonar that demonstrates realistic novel view synthesis and models
acoustic streaking phenomena. Our method represents the scene as a set of 3D
Gaussians with acoustic reflectance and saturation properties. We develop a
novel method to efficiently rasterize learned Gaussians to produce a
range/azimuth image that is faithful to the acoustic image formation model of
imaging sonar. In particular, we develop a novel approach to model azimuth
streaking in a Gaussian splatting framework. We evaluate SonarSplat using
real-world datasets of sonar images collected from an underwater robotic
platform in a controlled test tank and in a real-world river environment.
Compared to the state-of-the-art, SonarSplat offers improved image synthesis
capabilities (+2.5 dB PSNR). We also demonstrate that SonarSplat can be
leveraged for azimuth streak removal and 3D scene reconstruction.
| no_new_dataset | 0.950686 |
2504.00167 | Pedro Neto | Teresa Sinico, Giovanni Boschetti, Pedro Neto | Enhancing Physical Human-Robot Interaction: Recognizing Digits via
Intrinsic Robot Tactile Sensing | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Physical human-robot interaction (pHRI) remains a key challenge for achieving
intuitive and safe interaction with robots. Current advancements often rely on
external tactile sensors as an interface, which increase the complexity of robotic
systems. In this study, we leverage the intrinsic tactile sensing capabilities
of collaborative robots to recognize digits drawn by humans on an
uninstrumented touchpad mounted to the robot's flange. We propose a dataset of
robot joint torque signals along with corresponding end-effector (EEF) forces
and moments, captured from the robot's integrated torque sensors in each joint,
as users draw handwritten digits (0-9) on the touchpad. The pHRI-DIGI-TACT
dataset was collected from different users to capture natural variations in
handwriting. To enhance classification robustness, we developed a data
augmentation technique to account for reversed and rotated digit inputs. A
Bidirectional Long Short-Term Memory (Bi-LSTM) network, leveraging the
spatiotemporal nature of the data, performs online digit classification with an
overall accuracy of 94\% across various test scenarios, including those
involving users who did not participate in training the system. This
methodology is implemented on a real robot in a fruit delivery task,
demonstrating its potential to assist individuals in everyday life. Dataset and
video demonstrations are available at:
https://TS-Robotics.github.io/pHRI-DIGI/.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 19:22:01 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Sinico",
"Teresa",
""
],
[
"Boschetti",
"Giovanni",
""
],
[
"Neto",
"Pedro",
""
]
] | TITLE: Enhancing Physical Human-Robot Interaction: Recognizing Digits via
Intrinsic Robot Tactile Sensing
ABSTRACT: Physical human-robot interaction (pHRI) remains a key challenge for achieving
intuitive and safe interaction with robots. Current advancements often rely on
external tactile sensors as an interface, which increase the complexity of robotic
systems. In this study, we leverage the intrinsic tactile sensing capabilities
of collaborative robots to recognize digits drawn by humans on an
uninstrumented touchpad mounted to the robot's flange. We propose a dataset of
robot joint torque signals along with corresponding end-effector (EEF) forces
and moments, captured from the robot's integrated torque sensors in each joint,
as users draw handwritten digits (0-9) on the touchpad. The pHRI-DIGI-TACT
dataset was collected from different users to capture natural variations in
handwriting. To enhance classification robustness, we developed a data
augmentation technique to account for reversed and rotated digit inputs. A
Bidirectional Long Short-Term Memory (Bi-LSTM) network, leveraging the
spatiotemporal nature of the data, performs online digit classification with an
overall accuracy of 94\% across various test scenarios, including those
involving users who did not participate in training the system. This
methodology is implemented on a real robot in a fruit delivery task,
demonstrating its potential to assist individuals in everyday life. Dataset and
video demonstrations are available at:
https://TS-Robotics.github.io/pHRI-DIGI/.
| new_dataset | 0.971483 |
2504.00174 | Young D. Kwon | Sijia Li, Young D. Kwon, Lik-Hang Lee and Pan Hui | MetaCLBench: Meta Continual Learning Benchmark on Resource-Constrained
Edge Devices | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Meta-Continual Learning (Meta-CL) has emerged as a promising approach to
minimize manual labeling efforts and system resource requirements by enabling
Continual Learning (CL) with limited labeled samples. However, while existing
methods have shown success in image-based tasks, their effectiveness remains
unexplored for sequential time-series data from sensor systems, particularly
audio inputs. To address this gap, we conduct a comprehensive benchmark study
evaluating six representative Meta-CL approaches using three network
architectures on five datasets from both image and audio modalities. We develop
MetaCLBench, an end-to-end Meta-CL benchmark framework for edge devices to
evaluate system overheads and investigate trade-offs among performance,
computational costs, and memory requirements across various Meta-CL methods.
Our results reveal that while many Meta-CL methods enable learning new classes
for both image and audio modalities, they impose significant computational and
memory costs on edge devices. Also, we find that pre-training and meta-training
procedures based on source data before deployment improve Meta-CL performance.
Finally, to facilitate further research, we provide practical guidelines for
researchers and machine learning practitioners implementing Meta-CL in
resource-constrained environments and make our benchmark framework and tools
publicly available, enabling fair evaluation across both accuracy and
system-level metrics.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 19:31:49 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Li",
"Sijia",
""
],
[
"Kwon",
"Young D.",
""
],
[
"Lee",
"Lik-Hang",
""
],
[
"Hui",
"Pan",
""
]
] | TITLE: MetaCLBench: Meta Continual Learning Benchmark on Resource-Constrained
Edge Devices
ABSTRACT: Meta-Continual Learning (Meta-CL) has emerged as a promising approach to
minimize manual labeling efforts and system resource requirements by enabling
Continual Learning (CL) with limited labeled samples. However, while existing
methods have shown success in image-based tasks, their effectiveness remains
unexplored for sequential time-series data from sensor systems, particularly
audio inputs. To address this gap, we conduct a comprehensive benchmark study
evaluating six representative Meta-CL approaches using three network
architectures on five datasets from both image and audio modalities. We develop
MetaCLBench, an end-to-end Meta-CL benchmark framework for edge devices to
evaluate system overheads and investigate trade-offs among performance,
computational costs, and memory requirements across various Meta-CL methods.
Our results reveal that while many Meta-CL methods enable learning new classes
for both image and audio modalities, they impose significant computational and
memory costs on edge devices. Also, we find that pre-training and meta-training
procedures based on source data before deployment improve Meta-CL performance.
Finally, to facilitate further research, we provide practical guidelines for
researchers and machine learning practitioners implementing Meta-CL in
resource-constrained environments and make our benchmark framework and tools
publicly available, enabling fair evaluation across both accuracy and
system-level metrics.
| no_new_dataset | 0.910107 |
2504.00187 | Pouya Pezeshkpour | Pouya Pezeshkpour, Estevam Hruschka | Insight-RAG: Enhancing LLMs with Insight-Driven Augmentation | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval Augmented Generation (RAG) frameworks have shown significant
promise in leveraging external knowledge to enhance the performance of large
language models (LLMs). However, conventional RAG methods often retrieve
documents based solely on surface-level relevance, leading to many issues: they
may overlook deeply buried information within individual documents, miss
relevant insights spanning multiple sources, and are not well-suited for tasks
beyond traditional question answering. In this paper, we propose Insight-RAG, a
novel framework designed to address these issues. In the initial stage of
Insight-RAG, instead of using traditional retrieval methods, we employ an LLM
to analyze the input query and task, extracting the underlying informational
requirements. In the subsequent stage, a specialized LLM -- trained on the
document database -- is queried to mine content that directly addresses these
identified insights. Finally, by integrating the original query with the
retrieved insights, similar to conventional RAG approaches, we employ a final
LLM to generate a contextually enriched and accurate response. Using two
scientific paper datasets, we created evaluation benchmarks targeting each of
the mentioned issues and assessed Insight-RAG against a traditional RAG pipeline.
Our results demonstrate that the Insight-RAG pipeline successfully addresses
these challenges, outperforming existing methods by a significant margin in
most cases. These findings suggest that integrating insight-driven retrieval
within the RAG framework not only enhances performance but also broadens the
applicability of RAG to tasks beyond conventional question answering.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 19:50:27 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Pezeshkpour",
"Pouya",
""
],
[
"Hruschka",
"Estevam",
""
]
] | TITLE: Insight-RAG: Enhancing LLMs with Insight-Driven Augmentation
ABSTRACT: Retrieval Augmented Generation (RAG) frameworks have shown significant
promise in leveraging external knowledge to enhance the performance of large
language models (LLMs). However, conventional RAG methods often retrieve
documents based solely on surface-level relevance, leading to many issues: they
may overlook deeply buried information within individual documents, miss
relevant insights spanning multiple sources, and are not well-suited for tasks
beyond traditional question answering. In this paper, we propose Insight-RAG, a
novel framework designed to address these issues. In the initial stage of
Insight-RAG, instead of using traditional retrieval methods, we employ an LLM
to analyze the input query and task, extracting the underlying informational
requirements. In the subsequent stage, a specialized LLM -- trained on the
document database -- is queried to mine content that directly addresses these
identified insights. Finally, by integrating the original query with the
retrieved insights, similar to conventional RAG approaches, we employ a final
LLM to generate a contextually enriched and accurate response. Using two
scientific paper datasets, we created evaluation benchmarks targeting each of
the mentioned issues and assessed Insight-RAG against a traditional RAG pipeline.
Our results demonstrate that the Insight-RAG pipeline successfully addresses
these challenges, outperforming existing methods by a significant margin in
most cases. These findings suggest that integrating insight-driven retrieval
within the RAG framework not only enhances performance but also broadens the
applicability of RAG to tasks beyond conventional question answering.
| no_new_dataset | 0.925129 |
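The abstract above outlines a three-stage flow: extract the informational requirements behind a query, mine content for each requirement from a domain-trained model, and generate the final answer from the query plus the mined insights. The sketch below captures only that control flow; `call_general_llm`, `call_domain_llm`, and the prompt wording are hypothetical placeholders rather than the paper's actual interfaces.

```python
from typing import Callable, List

def insight_rag_answer(
    query: str,
    call_general_llm: Callable[[str], str],
    call_domain_llm: Callable[[str], str],
) -> str:
    """Sketch of an insight-driven RAG flow under assumed model interfaces."""
    # Stage 1: extract the informational requirements behind the query.
    insights_prompt = (
        "List the distinct pieces of information needed to answer the "
        f"following question, one per line:\n{query}"
    )
    insights: List[str] = [
        line.strip()
        for line in call_general_llm(insights_prompt).splitlines()
        if line.strip()
    ]

    # Stage 2: mine content for each insight from the domain-trained model.
    mined = [
        call_domain_llm(f"Provide the content relevant to: {insight}")
        for insight in insights
    ]

    # Stage 3: answer from the original query plus the retrieved insights.
    context = "\n".join(mined)
    return call_general_llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
```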
2504.00189 | Salah A. Aly | Ahmed M. Taha, Salah A. Aly, Mohamed F. Darwish | Detecting Glioma, Meningioma, and Pituitary Tumors, and Normal Brain
Tissues based on Yolov11 and Yolov8 Deep Learning Models | 6 pages, 7 figures, 8 tables | null | null | null | eess.IV cs.CV cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | Accurate and quick diagnosis of normal brain tissue, Glioma, Meningioma, and
Pituitary Tumors is crucial for optimal treatment planning and improved medical
results. Magnetic Resonance Imaging (MRI) is widely used as a non-invasive
diagnostic tool for detecting brain abnormalities, including tumors. However,
manual interpretation of MRI scans is often time-consuming, prone to human
error, and dependent on highly specialized expertise. This paper proposes an
advanced AI-driven technique for detecting glioma, meningioma, and pituitary
brain tumors using YoloV11 and YoloV8 deep learning models.
Methods: Using a transfer learning-based fine-tuning approach, we integrate
cutting-edge deep learning techniques with medical imaging to classify brain
tumors into four categories: No-Tumor, Glioma, Meningioma, and Pituitary
Tumors.
Results: The study utilizes the publicly accessible CE-MRI Figshare dataset
and involves fine-tuning the pre-trained YoloV8 and YoloV11 models, which reach accuracies of
99.49% and 99.56%, respectively, alongside a customized CNN with 96.98% accuracy. The results validate
the potential of CNNs in achieving high precision in brain tumor detection and
classification, highlighting their transformative role in medical imaging and
diagnostics.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 19:50:59 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Taha",
"Ahmed M.",
""
],
[
"Aly",
"Salah A.",
""
],
[
"Darwish",
"Mohamed F.",
""
]
] | TITLE: Detecting Glioma, Meningioma, and Pituitary Tumors, and Normal Brain
Tissues based on Yolov11 and Yolov8 Deep Learning Models
ABSTRACT: Accurate and quick diagnosis of normal brain tissue, Glioma, Meningioma, and
Pituitary Tumors is crucial for optimal treatment planning and improved medical
results. Magnetic Resonance Imaging (MRI) is widely used as a non-invasive
diagnostic tool for detecting brain abnormalities, including tumors. However,
manual interpretation of MRI scans is often time-consuming, prone to human
error, and dependent on highly specialized expertise. This paper proposes an
advanced AI-driven technique for detecting glioma, meningioma, and pituitary
brain tumors using YoloV11 and YoloV8 deep learning models.
Methods: Using a transfer learning-based fine-tuning approach, we integrate
cutting-edge deep learning techniques with medical imaging to classify brain
tumors into four categories: No-Tumor, Glioma, Meningioma, and Pituitary
Tumors.
Results: The study utilizes the publicly accessible CE-MRI Figshare dataset
and involves fine-tuning the pre-trained YoloV8 and YoloV11 models, which reach accuracies of
99.49% and 99.56%, respectively, alongside a customized CNN with 96.98% accuracy. The results validate
the potential of CNNs in achieving high precision in brain tumor detection and
classification, highlighting their transformative role in medical imaging and
diagnostics.
| no_new_dataset | 0.9455 |
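A transfer-learning setup of the kind described in this record could look roughly like the following, assuming the Ultralytics package and an image-folder dataset with one subdirectory per class (No-Tumor, Glioma, Meningioma, Pituitary); the weight file names, dataset path, and hyperparameters are placeholders rather than the paper's settings.

```python
from ultralytics import YOLO

# Start from pre-trained classification weights and fine-tune on the MRI folder.
model = YOLO("yolov8n-cls.pt")   # "yolo11n-cls.pt" would be the YOLOv11 variant
model.train(data="brain_mri_dataset", epochs=50, imgsz=224)  # placeholder settings

# Validate and run inference on a single scan.
metrics = model.val()
prediction = model("example_mri_slice.png")
```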
2504.00191 | Lin Zhao | Lin Zhao, Xin Yu, Yikang Liu, Xiao Chen, Eric Z. Chen, Terrence Chen,
Shanhui Sun | Leveraging Diffusion Model and Image Foundation Model for Improved
Correspondence Matching in Coronary Angiography | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate correspondence matching in coronary angiography images is crucial
for reconstructing 3D coronary artery structures, which is essential for
precise diagnosis and treatment planning of coronary artery disease (CAD).
Traditional matching methods for natural images often fail to generalize to
X-ray images due to inherent differences such as lack of texture, lower
contrast, and overlapping structures, compounded by insufficient training data.
To address these challenges, we propose a novel pipeline that generates
realistic paired coronary angiography images using a diffusion model
conditioned on 2D projections of 3D reconstructed meshes from Coronary Computed
Tomography Angiography (CCTA), providing high-quality synthetic data for
training. Additionally, we employ large-scale image foundation models to guide
feature aggregation, enhancing correspondence matching accuracy by focusing on
semantically relevant regions and keypoints. Our approach demonstrates superior
matching performance on synthetic datasets and effectively generalizes to
real-world datasets, offering a practical solution for this task. Furthermore,
our work investigates the efficacy of different foundation models in
correspondence matching, providing novel insights into leveraging advanced
image foundation models for medical imaging applications.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 19:58:06 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhao",
"Lin",
""
],
[
"Yu",
"Xin",
""
],
[
"Liu",
"Yikang",
""
],
[
"Chen",
"Xiao",
""
],
[
"Chen",
"Eric Z.",
""
],
[
"Chen",
"Terrence",
""
],
[
"Sun",
"Shanhui",
""
]
] | TITLE: Leveraging Diffusion Model and Image Foundation Model for Improved
Correspondence Matching in Coronary Angiography
ABSTRACT: Accurate correspondence matching in coronary angiography images is crucial
for reconstructing 3D coronary artery structures, which is essential for
precise diagnosis and treatment planning of coronary artery disease (CAD).
Traditional matching methods for natural images often fail to generalize to
X-ray images due to inherent differences such as lack of texture, lower
contrast, and overlapping structures, compounded by insufficient training data.
To address these challenges, we propose a novel pipeline that generates
realistic paired coronary angiography images using a diffusion model
conditioned on 2D projections of 3D reconstructed meshes from Coronary Computed
Tomography Angiography (CCTA), providing high-quality synthetic data for
training. Additionally, we employ large-scale image foundation models to guide
feature aggregation, enhancing correspondence matching accuracy by focusing on
semantically relevant regions and keypoints. Our approach demonstrates superior
matching performance on synthetic datasets and effectively generalizes to
real-world datasets, offering a practical solution for this task. Furthermore,
our work investigates the efficacy of different foundation models in
correspondence matching, providing novel insights into leveraging advanced
image foundation models for medical imaging applications.
| no_new_dataset | 0.955527 |
2504.00204 | Rustam Tagiew | Rustam Tagiew (1), Ilkay Wunderlich (2), Mark Sastuba (1) and Steffen
Seitz (3) ((1) German Centre for Rail Traffic Research at the Federal Railway
Authority, (2) EYYES GmbH, (3) Conrad Zuse School of Embedded Composite AI
and the Chair of Fundamentals of Electrical Engineering of Dresden University
of Technology) | RailGoerl24: G\"orlitz Rail Test Center CV Dataset 2024 | 4 pages, 5 figures, submitted to Engineering Reliable Autonomous
Systems 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Driverless train operation for open tracks on urban guided transport and
mainline railways requires, among other things automatic detection of actual
and potential obstacles, especially humans, in the danger zone of the train's
path. Machine learning algorithms have proven to be powerful state-of-the-art
tools for this task. However, these algorithms require large amounts of
high-quality annotated data containing human beings in railway-specific
environments as training data. Unfortunately, the amount of publicly available
datasets is not yet sufficient and is significantly inferior to the datasets in
the road domain. Therefore, this paper presents RailGoerl24, an on-board visible
light Full HD camera dataset of 12205 frames recorded in a railway test center
of T\"UV S\"UD Rail, in G\"orlitz, Germany. Its main purpose is to support the
development of driverless train operation for guided transport. RailGoerl24
also includes a terrestrial LiDAR scan covering parts of the area used to
acquire the RGB data. In addition to the raw data, the dataset contains 33556
boxwise annotations in total for the object class 'person'. The faces of
recorded actors are not blurred or altered in any other way. RailGoerl24, soon
available at data.fid-move.de/dataset/railgoerl24, can also be used for tasks
beyond collision prediction.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 20:18:39 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Tagiew",
"Rustam",
""
],
[
"Wunderlich",
"Ilkay",
""
],
[
"Sastuba",
"Mark",
""
],
[
"Seitz",
"Steffen",
""
]
] | TITLE: RailGoerl24: G\"orlitz Rail Test Center CV Dataset 2024
ABSTRACT: Driverless train operation for open tracks on urban guided transport and
mainline railways requires, among other things, automatic detection of actual
and potential obstacles, especially humans, in the danger zone of the train's
path. Machine learning algorithms have proven to be powerful state-of-the-art
tools for this task. However, these algorithms require large amounts of
high-quality annotated data containing human beings in railway-specific
environments as training data. Unfortunately, the amount of publicly available
datasets is not yet sufficient and is significantly inferior to the datasets in
the road domain. Therefore, this paper presents RailGoerl24, an on-board visible
light Full HD camera dataset of 12205 frames recorded in a railway test center
of T\"UV S\"UD Rail, in G\"orlitz, Germany. Its main purpose is to support the
development of driverless train operation for guided transport. RailGoerl24
also includes a terrestrial LiDAR scan covering parts of the area used to
acquire the RGB data. In addition to the raw data, the dataset contains 33556
boxwise annotations in total for the object class 'person'. The faces of
recorded actors are not blurred or altered in any other way. RailGoerl24, soon
available at data.fid-move.de/dataset/railgoerl24, can also be used for tasks
beyond collision prediction.
| new_dataset | 0.964855 |
2504.00218 | Rana Muhammad Shahroz Khan | Rana Muhammad Shahroz Khan, Zhen Tan, Sukwon Yun, Charles Flemming,
Tianlong Chen | $\textit{Agents Under Siege}$: Breaking Pragmatic Multi-Agent LLM
Systems with Optimized Prompt Attacks | null | null | null | null | cs.MA cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Most discussions about Large Language Model (LLM) safety have focused on
single-agent settings but multi-agent LLM systems now create novel adversarial
risks because their behavior depends on communication between agents and
decentralized reasoning. In this work, we innovatively focus on attacking
pragmatic systems that have constrains such as limited token bandwidth, latency
between message delivery, and defense mechanisms. We design a
$\textit{permutation-invariant adversarial attack}$ that optimizes prompt
distribution across latency and bandwidth-constraint network topologies to
bypass distributed safety mechanisms within the system. Formulating the attack
path as a problem of $\textit{maximum-flow minimum-cost}$, coupled with the
novel $\textit{Permutation-Invariant Evasion Loss (PIEL)}$, we leverage
graph-based optimization to maximize attack success rate while minimizing
detection risk. Evaluating across models including $\texttt{Llama}$,
$\texttt{Mistral}$, $\texttt{Gemma}$, $\texttt{DeepSeek}$ and other variants on
various datasets like $\texttt{JailBreakBench}$ and
$\texttt{AdversarialBench}$, our method outperforms conventional attacks by up
to $7\times$, exposing critical vulnerabilities in multi-agent systems.
Moreover, we demonstrate that existing defenses, including variants of
$\texttt{Llama-Guard}$ and $\texttt{PromptGuard}$, fail to prohibit our attack,
emphasizing the urgent need for multi-agent specific safety mechanisms.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 20:43:56 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Khan",
"Rana Muhammad Shahroz",
""
],
[
"Tan",
"Zhen",
""
],
[
"Yun",
"Sukwon",
""
],
[
"Flemming",
"Charles",
""
],
[
"Chen",
"Tianlong",
""
]
] | TITLE: $\textit{Agents Under Siege}$: Breaking Pragmatic Multi-Agent LLM
Systems with Optimized Prompt Attacks
ABSTRACT: Most discussions about Large Language Model (LLM) safety have focused on
single-agent settings but multi-agent LLM systems now create novel adversarial
risks because their behavior depends on communication between agents and
decentralized reasoning. In this work, we innovatively focus on attacking
pragmatic systems that have constraints such as limited token bandwidth, latency
between message delivery, and defense mechanisms. We design a
$\textit{permutation-invariant adversarial attack}$ that optimizes prompt
distribution across latency and bandwidth-constrained network topologies to
bypass distributed safety mechanisms within the system. Formulating the attack
path as a problem of $\textit{maximum-flow minimum-cost}$, coupled with the
novel $\textit{Permutation-Invariant Evasion Loss (PIEL)}$, we leverage
graph-based optimization to maximize attack success rate while minimizing
detection risk. Evaluating across models including $\texttt{Llama}$,
$\texttt{Mistral}$, $\texttt{Gemma}$, $\texttt{DeepSeek}$ and other variants on
various datasets like $\texttt{JailBreakBench}$ and
$\texttt{AdversarialBench}$, our method outperforms conventional attacks by up
to $7\times$, exposing critical vulnerabilities in multi-agent systems.
Moreover, we demonstrate that existing defenses, including variants of
$\texttt{Llama-Guard}$ and $\texttt{PromptGuard}$, fail to prohibit our attack,
emphasizing the urgent need for multi-agent specific safety mechanisms.
| no_new_dataset | 0.931213 |
2504.00223 | Rahul Bhowmik | Duy Nhat Phan, Alexander B. Morgan, Lokendra Poudel and Rahul Bhowmik | A machine learning platform for development of low flammability polymers | null | null | null | null | cs.LG cond-mat.mtrl-sci | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Flammability index (FI) and cone calorimetry outcomes, such as maximum heat
release rate, time to ignition, total smoke release, and fire growth rate, are
critical factors in evaluating the fire safety of polymers. However, predicting
these properties is challenging due to the complexity of material behavior
under heat exposure. In this work, we investigate the use of machine learning
(ML) techniques to predict these flammability metrics. We generated synthetic
polymers using Synthetic Data Vault to augment the experimental dataset. Our
comprehensive ML investigation employed both our polymer descriptors and those
generated by the RDkit library. Despite the challenges of limited experimental
data, our models demonstrate the potential to accurately predict FI and cone
calorimetry outcomes, which could be instrumental in designing safer polymers.
Additionally, we developed POLYCOMPRED, a module integrated into the
cloud-based MatVerse platform, providing an accessible, web-based interface for
flammability prediction. This work provides not only the predictive modeling of
polymer flammability but also an interactive analysis tool for the discovery
and design of new materials with tailored fire-resistant properties.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 20:50:29 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Phan",
"Duy Nhat",
""
],
[
"Morgan",
"Alexander B.",
""
],
[
"Poudel",
"Lokendra",
""
],
[
"Bhowmik",
"Rahul",
""
]
] | TITLE: A machine learning platform for development of low flammability polymers
ABSTRACT: Flammability index (FI) and cone calorimetry outcomes, such as maximum heat
release rate, time to ignition, total smoke release, and fire growth rate, are
critical factors in evaluating the fire safety of polymers. However, predicting
these properties is challenging due to the complexity of material behavior
under heat exposure. In this work, we investigate the use of machine learning
(ML) techniques to predict these flammability metrics. We generated synthetic
polymers using Synthetic Data Vault to augment the experimental dataset. Our
comprehensive ML investigation employed both our polymer descriptors and those
generated by the RDKit library. Despite the challenges of limited experimental
data, our models demonstrate the potential to accurately predict FI and cone
calorimetry outcomes, which could be instrumental in designing safer polymers.
Additionally, we developed POLYCOMPRED, a module integrated into the
cloud-based MatVerse platform, providing an accessible, web-based interface for
flammability prediction. This work provides not only the predictive modeling of
polymer flammability but also an interactive analysis tool for the discovery
and design of new materials with tailored fire-resistant properties.
| no_new_dataset | 0.950134 |
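As a rough illustration of descriptor-based property regression of the kind this record mentions, the snippet below computes a few RDKit descriptors for placeholder monomer SMILES and fits a random-forest regressor; the SMILES strings, target values, and descriptor choice are invented for the example and do not come from the study.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestRegressor

def featurize(smiles):
    """Compute a small, arbitrary set of RDKit descriptors for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    return [
        Descriptors.MolWt(mol),
        Descriptors.MolLogP(mol),
        Descriptors.TPSA(mol),
        Descriptors.NumAromaticRings(mol),
    ]

smiles_list = ["C=Cc1ccccc1", "C=CC(=O)OC", "C=Cc1ccc(Cl)cc1"]  # placeholder monomers
targets = [210.0, 185.0, 250.0]                                  # placeholder property values

X = [featurize(s) for s in smiles_list]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, targets)
print(model.predict([featurize("C=Cc1ccccc1C")]))  # query a new (made-up) structure
```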
2504.00232 | David Le | David Le, Ramon Correa-Medero, Amara Tariq, Bhavik Patel, Motoyo Yano,
Imon Banerjee | Opportunistic Screening for Pancreatic Cancer using Computed Tomography
Imaging and Radiology Reports | 8 pages, 2 figures, AMIA 2025 Annual Symposium | null | null | null | cs.LG q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Pancreatic ductal adenocarcinoma (PDAC) is a highly aggressive cancer, with
most cases diagnosed at stage IV and a five-year overall survival rate below
5%. Early detection and prognosis modeling are crucial for improving patient
outcomes and guiding early intervention strategies. In this study, we developed
and evaluated a deep learning fusion model that integrates radiology reports
and CT imaging to predict PDAC risk. The model achieved a concordance index
(C-index) of 0.6750 (95% CI: 0.6429, 0.7121) and 0.6435 (95% CI: 0.6055,
0.6789) on the internal and external dataset, respectively, for 5-year survival
risk estimation. Kaplan-Meier analysis demonstrated significant separation
(p<0.0001) between the low and high risk groups predicted by the fusion model.
These findings highlight the potential of deep learning-based survival models
in leveraging clinical and imaging data for pancreatic cancer.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 21:13:42 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Le",
"David",
""
],
[
"Correa-Medero",
"Ramon",
""
],
[
"Tariq",
"Amara",
""
],
[
"Patel",
"Bhavik",
""
],
[
"Yano",
"Motoyo",
""
],
[
"Banerjee",
"Imon",
""
]
] | TITLE: Opportunistic Screening for Pancreatic Cancer using Computed Tomography
Imaging and Radiology Reports
ABSTRACT: Pancreatic ductal adenocarcinoma (PDAC) is a highly aggressive cancer, with
most cases diagnosed at stage IV and a five-year overall survival rate below
5%. Early detection and prognosis modeling are crucial for improving patient
outcomes and guiding early intervention strategies. In this study, we developed
and evaluated a deep learning fusion model that integrates radiology reports
and CT imaging to predict PDAC risk. The model achieved a concordance index
(C-index) of 0.6750 (95% CI: 0.6429, 0.7121) and 0.6435 (95% CI: 0.6055,
0.6789) on the internal and external datasets, respectively, for 5-year survival
risk estimation. Kaplan-Meier analysis demonstrated significant separation
(p<0.0001) between the low and high risk groups predicted by the fusion model.
These findings highlight the potential of deep learning-based survival models
in leveraging clinical and imaging data for pancreatic cancer.
| no_new_dataset | 0.942981 |
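The evaluation steps named in this record, a concordance index and a Kaplan-Meier / log-rank comparison of risk groups, can be reproduced generically as below, here with the lifelines package on fully synthetic survival times, event indicators, and risk scores.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
times = rng.exponential(scale=24.0, size=200)    # follow-up in months (synthetic)
events = rng.integers(0, 2, size=200)            # 1 = event observed, 0 = censored
risk = -times + rng.normal(0.0, 5.0, size=200)   # higher risk ~ shorter survival

# Concordance index: the score passed should be higher for longer predicted survival.
print("C-index:", concordance_index(times, -risk, events))

# Kaplan-Meier fits for a median split of the risk score, plus a log-rank test.
high = risk > np.median(risk)
km_high = KaplanMeierFitter().fit(times[high], events[high], label="high risk")
km_low = KaplanMeierFitter().fit(times[~high], events[~high], label="low risk")
result = logrank_test(times[high], times[~high], events[high], events[~high])
print("log-rank p-value:", result.p_value)
```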
2504.00247 | S. Mazdak Abulnaga | S. Mazdak Abulnaga, Andrew Hoopes, Neel Dey, Malte Hoffmann, Marianne
Rakic, Bruce Fischl, John Guttag, Adrian Dalca | MultiMorph: On-demand Atlas Construction | accepted to CVPR 2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present MultiMorph, a fast and efficient method for constructing
anatomical atlases on the fly. Atlases capture the canonical structure of a
collection of images and are essential for quantifying anatomical variability
across populations. However, current atlas construction methods often require
days to weeks of computation, thereby discouraging rapid experimentation. As a
result, many scientific studies rely on suboptimal, precomputed atlases from
mismatched populations, negatively impacting downstream analyses. MultiMorph
addresses these challenges with a feedforward model that rapidly produces
high-quality, population-specific atlases in a single forward pass for any 3D
brain dataset, without any fine-tuning or optimization. MultiMorph is based on
a linear group-interaction layer that aggregates and shares features within the
group of input images. Further, by leveraging auxiliary synthetic data,
MultiMorph generalizes to new imaging modalities and population groups at
test-time. Experimentally, MultiMorph outperforms state-of-the-art
optimization-based and learning-based atlas construction methods in both small
and large population settings, with a 100-fold reduction in time. This makes
MultiMorph an accessible framework for biomedical researchers without machine
learning expertise, enabling rapid, high-quality atlas generation for diverse
studies.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 21:35:24 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Abulnaga",
"S. Mazdak",
""
],
[
"Hoopes",
"Andrew",
""
],
[
"Dey",
"Neel",
""
],
[
"Hoffmann",
"Malte",
""
],
[
"Rakic",
"Marianne",
""
],
[
"Fischl",
"Bruce",
""
],
[
"Guttag",
"John",
""
],
[
"Dalca",
"Adrian",
""
]
] | TITLE: MultiMorph: On-demand Atlas Construction
ABSTRACT: We present MultiMorph, a fast and efficient method for constructing
anatomical atlases on the fly. Atlases capture the canonical structure of a
collection of images and are essential for quantifying anatomical variability
across populations. However, current atlas construction methods often require
days to weeks of computation, thereby discouraging rapid experimentation. As a
result, many scientific studies rely on suboptimal, precomputed atlases from
mismatched populations, negatively impacting downstream analyses. MultiMorph
addresses these challenges with a feedforward model that rapidly produces
high-quality, population-specific atlases in a single forward pass for any 3D
brain dataset, without any fine-tuning or optimization. MultiMorph is based on
a linear group-interaction layer that aggregates and shares features within the
group of input images. Further, by leveraging auxiliary synthetic data,
MultiMorph generalizes to new imaging modalities and population groups at
test-time. Experimentally, MultiMorph outperforms state-of-the-art
optimization-based and learning-based atlas construction methods in both small
and large population settings, with a 100-fold reduction in time. This makes
MultiMorph an accessible framework for biomedical researchers without machine
learning expertise, enabling rapid, high-quality atlas generation for diverse
studies.
| no_new_dataset | 0.946051 |
2504.00287 | Qiuliuyang Bao | Qiuliuyang Bao, Jiawei Wang, Hao Gong, Yiwei Zhang, Xiaojun Guo,
Hanrui Feng | A Deep Learning Approach to Anomaly Detection in High-Frequency Trading
Data | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes an algorithm based on a staged sliding window Transformer
architecture to detect abnormal behaviors in the microstructure of the foreign
exchange market, focusing on high-frequency EUR/USD trading data. The method
captures multi-scale temporal features through a staged sliding window,
extracts global and local dependencies by combining the self-attention
mechanism and weighted attention mechanism of the Transformer, and uses a
classifier to identify abnormal events. Experimental results on a real
high-frequency dataset containing order book depth, spread, and trading volume
show that the proposed method significantly outperforms traditional machine
learning (such as decision trees and random forests) and deep learning methods
(such as MLP, CNN, RNN, LSTM) in terms of accuracy (0.93), F1-Score (0.91), and
AUC-ROC (0.95). Ablation experiments verify the contribution of each component,
and the visualization of order book depth and anomaly detection further reveals
the effectiveness of the model under complex market dynamics. Despite the false
positive problem, the model still provides important support for market
supervision. In the future, noise processing can be optimized and extended to
other markets to improve generalization and real-time performance.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 23:14:31 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Bao",
"Qiuliuyang",
""
],
[
"Wang",
"Jiawei",
""
],
[
"Gong",
"Hao",
""
],
[
"Zhang",
"Yiwei",
""
],
[
"Guo",
"Xiaojun",
""
],
[
"Feng",
"Hanrui",
""
]
] | TITLE: A Deep Learning Approach to Anomaly Detection in High-Frequency Trading
Data
ABSTRACT: This paper proposes an algorithm based on a staged sliding window Transformer
architecture to detect abnormal behaviors in the microstructure of the foreign
exchange market, focusing on high-frequency EUR/USD trading data. The method
captures multi-scale temporal features through a staged sliding window,
extracts global and local dependencies by combining the self-attention
mechanism and weighted attention mechanism of the Transformer, and uses a
classifier to identify abnormal events. Experimental results on a real
high-frequency dataset containing order book depth, spread, and trading volume
show that the proposed method significantly outperforms traditional machine
learning (such as decision trees and random forests) and deep learning methods
(such as MLP, CNN, RNN, LSTM) in terms of accuracy (0.93), F1-Score (0.91), and
AUC-ROC (0.95). Ablation experiments verify the contribution of each component,
and the visualization of order book depth and anomaly detection further reveals
the effectiveness of the model under complex market dynamics. Despite the false
positive problem, the model still provides important support for market
supervision. In the future, noise processing can be optimized and extended to
other markets to improve generalization and real-time performance.
| no_new_dataset | 0.949763 |
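A generic sliding-window Transformer classifier in the spirit of this record might be structured as follows; the window length, stride, feature count, and model sizes are assumptions, and the paper's staged-window and weighted-attention details are not reproduced here.

```python
import torch
import torch.nn as nn

class WindowTransformerClassifier(nn.Module):
    """Encode a window of order-book features and emit normal/anomalous logits."""
    def __init__(self, n_features: int = 8, d_model: int = 64, n_classes: int = 2):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.proj(x))   # (batch, window, d_model)
        return self.head(h.mean(dim=1))  # mean-pooled logits per window

def sliding_windows(series: torch.Tensor, window: int = 64, stride: int = 16) -> torch.Tensor:
    """Slice a (time, features) tensor into overlapping (n_windows, window, features)."""
    return series.unfold(0, window, stride).transpose(1, 2)

ticks = torch.randn(1000, 8)  # synthetic spread / depth / volume features
logits = WindowTransformerClassifier()(sliding_windows(ticks))
print(logits.shape)           # torch.Size([59, 2])
```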
2504.00302 | Pooya Ashtari | Pooya Ashtari, Shahryar Noei, Fateme Nateghi Haredasht, Jonathan H.
Chen, Giuseppe Jurman, Aleksandra Pizurica, Sabine Van Huffel | Deconver: A Deconvolutional Network for Medical Image Segmentation | 12 pages, 6 figures, 5 tables | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | While convolutional neural networks (CNNs) and vision transformers (ViTs)
have advanced medical image segmentation, they face inherent limitations such
as local receptive fields in CNNs and high computational complexity in ViTs.
This paper introduces Deconver, a novel network that integrates traditional
deconvolution techniques from image restoration as a core learnable component
within a U-shaped architecture. Deconver replaces computationally expensive
attention mechanisms with efficient nonnegative deconvolution (NDC) operations,
enabling the restoration of high-frequency details while suppressing artifacts.
Key innovations include a backpropagation-friendly NDC layer based on a
provably monotonic update rule and a parameter-efficient design. Evaluated
across four datasets (ISLES'22, BraTS'23, GlaS, FIVES) covering both 2D and 3D
segmentation tasks, Deconver achieves state-of-the-art performance in Dice
scores and Hausdorff distance while reducing computational costs (FLOPs) by up
to 90% compared to leading baselines. By bridging traditional image restoration
with deep learning, this work offers a practical solution for high-precision
segmentation in resource-constrained clinical workflows. The project is
available at https://github.com/pashtari/deconver.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 00:11:04 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Ashtari",
"Pooya",
""
],
[
"Noei",
"Shahryar",
""
],
[
"Haredasht",
"Fateme Nateghi",
""
],
[
"Chen",
"Jonathan H.",
""
],
[
"Jurman",
"Giuseppe",
""
],
[
"Pizurica",
"Aleksandra",
""
],
[
"Van Huffel",
"Sabine",
""
]
] | TITLE: Deconver: A Deconvolutional Network for Medical Image Segmentation
ABSTRACT: While convolutional neural networks (CNNs) and vision transformers (ViTs)
have advanced medical image segmentation, they face inherent limitations such
as local receptive fields in CNNs and high computational complexity in ViTs.
This paper introduces Deconver, a novel network that integrates traditional
deconvolution techniques from image restoration as a core learnable component
within a U-shaped architecture. Deconver replaces computationally expensive
attention mechanisms with efficient nonnegative deconvolution (NDC) operations,
enabling the restoration of high-frequency details while suppressing artifacts.
Key innovations include a backpropagation-friendly NDC layer based on a
provably monotonic update rule and a parameter-efficient design. Evaluated
across four datasets (ISLES'22, BraTS'23, GlaS, FIVES) covering both 2D and 3D
segmentation tasks, Deconver achieves state-of-the-art performance in Dice
scores and Hausdorff distance while reducing computational costs (FLOPs) by up
to 90% compared to leading baselines. By bridging traditional image restoration
with deep learning, this work offers a practical solution for high-precision
segmentation in resource-constrained clinical workflows. The project is
available at https://github.com/pashtari/deconver.
| no_new_dataset | 0.942348 |
2504.00306 | Muhammad Tahir | Muhammad Tahir, Shehroz S. Khan, James Davie, Soichiro Yamanaka, Ahmed
Ashraf | LOCO-EPI: Leave-one-chromosome-out (LOCO) as a benchmarking paradigm for
deep learning based prediction of enhancer-promoter interactions | null | Applied Intelligence, 55(1), 1-16, 2025,
Springer | 10.1007/s10489-024-05848-6 | null | cs.LG q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In mammalian and vertebrate genomes, the promoter regions of the gene and
their distal enhancers may be located millions of base-pairs from each other,
while a promoter may not interact with the closest enhancer. Since base-pair
proximity is not a good indicator of these interactions, there is considerable
work toward developing methods for predicting Enhancer-Promoter Interactions
(EPI). Several machine learning methods have reported increasingly higher
accuracies for predicting EPI. Typically, these approaches randomly split the
dataset of Enhancer-Promoter (EP) pairs into training and testing subsets
followed by model training. However, the aforementioned random splitting causes
information leakage by assigning EP pairs from the same genomic region to both
testing and training sets, leading to performance overestimation. In this paper
we propose to use a more thorough training and testing paradigm i.e.,
Leave-one-chromosome-out (LOCO) cross-validation for EPI-prediction. We
demonstrate that a deep learning algorithm, which gives higher accuracies when
trained and tested on random-splitting setting, drops drastically in
performance under LOCO setting, confirming overestimation of performance. We
further propose a novel hybrid deep neural network for EPI-prediction that
fuses k-mer features of the nucleotide sequence. We show that the hybrid
architecture performs significantly better in the LOCO setting, demonstrating
it can learn more generalizable aspects of EP interactions. With this paper we
are also releasing the LOCO splitting-based EPI dataset. Research data is
available in this public repository: https://github.com/malikmtahir/EPI
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 00:20:15 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Tahir",
"Muhammad",
""
],
[
"Khan",
"Shehroz S.",
""
],
[
"Davie",
"James",
""
],
[
"Yamanaka",
"Soichiro",
""
],
[
"Ashraf",
"Ahmed",
""
]
] | TITLE: LOCO-EPI: Leave-one-chromosome-out (LOCO) as a benchmarking paradigm for
deep learning based prediction of enhancer-promoter interactions
ABSTRACT: In mammalian and vertebrate genomes, the promoter regions of the gene and
their distal enhancers may be located millions of base-pairs from each other,
while a promoter may not interact with the closest enhancer. Since base-pair
proximity is not a good indicator of these interactions, there is considerable
work toward developing methods for predicting Enhancer-Promoter Interactions
(EPI). Several machine learning methods have reported increasingly higher
accuracies for predicting EPI. Typically, these approaches randomly split the
dataset of Enhancer-Promoter (EP) pairs into training and testing subsets
followed by model training. However, the aforementioned random splitting causes
information leakage by assigning EP pairs from the same genomic region to both
testing and training sets, leading to performance overestimation. In this paper
we propose to use a more thorough training and testing paradigm i.e.,
Leave-one-chromosome-out (LOCO) cross-validation for EPI-prediction. We
demonstrate that a deep learning algorithm, which gives higher accuracies when
trained and tested on random-splitting setting, drops drastically in
performance under LOCO setting, confirming overestimation of performance. We
further propose a novel hybrid deep neural network for EPI-prediction that
fuses k-mer features of the nucleotide sequence. We show that the hybrid
architecture performs significantly better in the LOCO setting, demonstrating
it can learn more generalizable aspects of EP interactions. With this paper we
are also releasing the LOCO splitting-based EPI dataset. Research data is
available in this public repository: https://github.com/malikmtahir/EPI
| no_new_dataset | 0.952309 |
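The leave-one-chromosome-out protocol advocated in this record maps directly onto grouped cross-validation, for instance with scikit-learn's LeaveOneGroupOut as sketched below; the enhancer-promoter features, labels, and chromosome assignments are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 50))                        # EP-pair features (placeholder)
y = rng.integers(0, 2, size=3000)                      # interaction label
chrom = rng.choice([f"chr{i}" for i in range(1, 23)], size=3000)  # group = chromosome

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=chrom):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))

print(f"Mean LOCO AUC: {np.mean(scores):.3f}")  # ~0.5 on random data, as expected
```

Because every split holds out all pairs from one chromosome, no genomic region contributes to both training and testing, which is exactly the leakage the record warns about under random splitting.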
2504.00310 | Rajeev Kumar | Rajeev Kumar, Harishankar Kumar, Kumari Shalini | Detecting and Mitigating Bias in LLMs through Knowledge Graph-Augmented
Training | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models have revolutionized natural language processing with
their surprising capability to understand and generate human-like text.
However, many of these models inherit and further amplify the biases present in
their training data, raising ethical and fairness concerns. The detection and
mitigation of such biases are vital to ensuring that LLMs act responsibly and
equitably across diverse domains. This work investigates Knowledge
Graph-Augmented Training (KGAT) as a novel method to mitigate bias in LLMs.
Using structured domain-specific knowledge from real-world knowledge graphs, we
improve the understanding of the model and reduce biased output. Public
datasets for bias assessment include Gender Shades, Bias in Bios, and FairFace,
while metrics such as demographic parity and equal opportunity facilitate
rigorous detection. We also performed targeted mitigation strategies to correct
biased associations, leading to a significant drop in biased output and
improved bias metrics. Equipped with real-world datasets and knowledge graphs,
our framework is both scalable and effective, paving the way toward responsible
deployment in sensitive and high-stakes applications.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 00:27:50 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Kumar",
"Rajeev",
""
],
[
"Kumar",
"Harishankar",
""
],
[
"Shalini",
"Kumari",
""
]
] | TITLE: Detecting and Mitigating Bias in LLMs through Knowledge Graph-Augmented
Training
ABSTRACT: Large language models have revolutionized natural language processing with
their surprising capability to understand and generate human-like text.
However, many of these models inherit and further amplify the biases present in
their training data, raising ethical and fairness concerns. The detection and
mitigation of such biases are vital to ensuring that LLMs act responsibly and
equitably across diverse domains. This work investigates Knowledge
Graph-Augmented Training (KGAT) as a novel method to mitigate bias in LLMs.
Using structured domain-specific knowledge from real-world knowledge graphs, we
improve the understanding of the model and reduce biased output. Public
datasets for bias assessment include Gender Shades, Bias in Bios, and FairFace,
while metrics such as demographic parity and equal opportunity facilitate
rigorous detection. We also performed targeted mitigation strategies to correct
biased associations, leading to a significant drop in biased output and
improved bias metrics. Equipped with real-world datasets and knowledge graphs,
our framework is both scalable and effective, paving the way toward responsible
deployment in sensitive and high-stakes applications.
| no_new_dataset | 0.949529 |
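The two fairness metrics named in this record, demographic parity and equal opportunity, reduce to simple group-wise rate comparisons; a minimal sketch on synthetic binary predictions and a binary protected attribute follows.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """|P(pred = 1 | group = 0) - P(pred = 1 | group = 1)|."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true: np.ndarray, y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in true positive rates between the two groups."""
    def tpr(g: int) -> float:
        mask = (group == g) & (y_true == 1)
        return float(y_pred[mask].mean())
    return abs(tpr(0) - tpr(1))

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
print(demographic_parity_gap(y_pred, group))
print(equal_opportunity_gap(y_true, y_pred, group))
```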
2504.00328 | Jongha Lee | Jongha Lee, Taehyung Kwon, Heechan Moon, Kijung Shin | Simple yet Effective Node Property Prediction on Edge Streams under
Distribution Shifts | 14 pages, 14 figures, To Appear in ICDE 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of predicting node properties (e.g., node classes) in graphs has
received significant attention due to its broad range of applications. Graphs
from real-world datasets often evolve over time, with newly emerging edges and
dynamically changing node properties, posing a significant challenge for this
problem. In response, temporal graph neural networks (TGNNs) have been
developed to predict dynamic node properties from a stream of emerging edges.
However, our analysis reveals that most TGNN-based methods are (a) far less
effective without proper node features and, due to their complex model
architectures, (b) vulnerable to distribution shifts. In this paper, we propose
SPLASH, a simple yet powerful method for predicting node properties on edge
streams under distribution shifts. Our key contributions are as follows: (1) we
propose feature augmentation methods and an automatic feature selection method
for edge streams, which improve the effectiveness of TGNNs, (2) we propose a
lightweight MLP-based TGNN architecture that is highly efficient and robust
under distribution shifts, and (3) we conduct extensive experiments to evaluate
the accuracy, efficiency, generalization, and qualitative performance of the
proposed method and its competitors on dynamic node classification, dynamic
anomaly detection, and node affinity prediction tasks across seven real-world
datasets.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 01:20:52 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Lee",
"Jongha",
""
],
[
"Kwon",
"Taehyung",
""
],
[
"Moon",
"Heechan",
""
],
[
"Shin",
"Kijung",
""
]
] | TITLE: Simple yet Effective Node Property Prediction on Edge Streams under
Distribution Shifts
ABSTRACT: The problem of predicting node properties (e.g., node classes) in graphs has
received significant attention due to its broad range of applications. Graphs
from real-world datasets often evolve over time, with newly emerging edges and
dynamically changing node properties, posing a significant challenge for this
problem. In response, temporal graph neural networks (TGNNs) have been
developed to predict dynamic node properties from a stream of emerging edges.
However, our analysis reveals that most TGNN-based methods are (a) far less
effective without proper node features and, due to their complex model
architectures, (b) vulnerable to distribution shifts. In this paper, we propose
SPLASH, a simple yet powerful method for predicting node properties on edge
streams under distribution shifts. Our key contributions are as follows: (1) we
propose feature augmentation methods and an automatic feature selection method
for edge streams, which improve the effectiveness of TGNNs, (2) we propose a
lightweight MLP-based TGNN architecture that is highly efficient and robust
under distribution shifts, and (3) we conduct extensive experiments to evaluate
the accuracy, efficiency, generalization, and qualitative performance of the
proposed method and its competitors on dynamic node classification, dynamic
anomaly detection, and node affinity prediction tasks across seven real-world
datasets.
| no_new_dataset | 0.953013 |
2504.00343 | Timo Spinde | Timo Spinde and Luyang Lin and Smi Hinterreiter and Isao Echizen | Leveraging Large Language Models for Automated Definition Extraction
with TaxoMatic A Case Study on Media Bias | null | Proceedings of the International AAAI Conference on Web and Social
Media (ICWSM'25) (2025) | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper introduces TaxoMatic, a framework that leverages large language
models to automate definition extraction from academic literature. Focusing on
the media bias domain, the framework encompasses data collection, LLM-based
relevance classification, and extraction of conceptual definitions. Evaluated
on a dataset of 2,398 manually rated articles, the study demonstrates the
frameworks effectiveness, with Claude-3-sonnet achieving the best results in
both relevance classification and definition extraction. Future directions
include expanding datasets and applying TaxoMatic to additional domains.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 01:47:16 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Spinde",
"Timo",
""
],
[
"Lin",
"Luyang",
""
],
[
"Hinterreiter",
"Smi",
""
],
[
"Echizen",
"Isao",
""
]
] | TITLE: Leveraging Large Language Models for Automated Definition Extraction
with TaxoMatic A Case Study on Media Bias
ABSTRACT: This paper introduces TaxoMatic, a framework that leverages large language
models to automate definition extraction from academic literature. Focusing on
the media bias domain, the framework encompasses data collection, LLM-based
relevance classification, and extraction of conceptual definitions. Evaluated
on a dataset of 2,398 manually rated articles, the study demonstrates the
frameworks effectiveness, with Claude-3-sonnet achieving the best results in
both relevance classification and definition extraction. Future directions
include expanding datasets and applying TaxoMatic to additional domains.
| no_new_dataset | 0.946843 |
2504.00347 | Kai Li | Li-Heng Wang, Kai Li, Xiang Gao, Ya-Ni Guo, and Guo-You Sun | Using machine learning method for variable star classification using the
TESS Sectors 1-57 data | 15pages, 12 figures, 3 tables, accepted by ApJ, Data available via
China-VO PaperData repository | null | null | null | astro-ph.SR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Transiting Exoplanet Survey Satellite (TESS) is a wide-field all-sky
survey mission designed to detect Earth-sized exoplanets. After over four years
photometric surveys, data from sectors 1-57, including approximately 1,050,000
light curves with a 2-minute cadence, were collected. By cross-matching the
data with Gaia's variable star catalogue, we obtained labeled datasets for
further analysis. Using a random forest classifier, we performed classification
of variable stars and designed distinct classification processes for each
subclass, 6770 EA, 2971 EW, 980 CEP, 8347 DSCT, 457 RRab, 404 RRc and 12348 ROT
were identified. Each variable star was visually inspected to ensure the
reliability and accuracy of the compiled catalog. We ultimately
obtained 6046 EA, 3859 EW, 2058 CEP, 8434 DSCT, 482 RRab, 416 RRc, and 9694
ROT, and a total of 14092 new variable stars were discovered.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 01:58:23 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wang",
"Li-Heng",
""
],
[
"Li",
"Kai",
""
],
[
"Gao",
"Xiang",
""
],
[
"Guo",
"Ya-Ni",
""
],
[
"Sun",
"Guo-You",
""
]
] | TITLE: Using machine learning method for variable star classification using the
TESS Sectors 1-57 data
ABSTRACT: The Transiting Exoplanet Survey Satellite (TESS) is a wide-field all-sky
survey mission designed to detect Earth-sized exoplanets. After over four years
photometric surveys, data from sectors 1-57, including approximately 1,050,000
light curves with a 2-minute cadence, were collected. By cross-matching the
data with Gaia's variable star catalogue, we obtained labeled datasets for
further analysis. Using a random forest classifier, we performed classification
of variable stars and designed distinct classification processes for each
subclass, 6770 EA, 2971 EW, 980 CEP, 8347 DSCT, 457 RRab, 404 RRc and 12348 ROT
were identified. Each variable star was visually inspected to ensure the
reliability and accuracy of the compiled catalog. We ultimately
obtained 6046 EA, 3859 EW, 2058 CEP, 8434 DSCT, 482 RRab, 416 RRc, and 9694
ROT, and a total of 14092 new variable stars were discovered.
| no_new_dataset | 0.927888 |
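A random-forest classification step of the kind this record describes can be sketched as below; the light-curve features, class labels, and values are synthetic stand-ins for the actual TESS/Gaia-derived training set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
classes = ["EA", "EW", "CEP", "DSCT", "RRab", "RRc", "ROT"]
X = rng.normal(size=(5000, 6))       # e.g. period, amplitude, skewness, ... (assumed)
y = rng.choice(classes, size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```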
2504.00348 | Kyle Stein | Kyle Stein, Andrew A. Mahyari, Guillermo Francia III, Eman El-Sheikh | Transductive One-Shot Learning Meet Subspace Decomposition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | One-shot learning focuses on adapting pretrained models to recognize newly
introduced and unseen classes based on a single labeled image. While variations
of few-shot and zero-shot learning exist, one-shot learning remains a
challenging yet crucial problem due to its ability to generalize knowledge to
unseen classes from just one human-annotated image. In this paper, we introduce
a transductive one-shot learning approach that employs subspace decomposition
to utilize the information from labeled images in the support set and unlabeled
images in the query set. These images are decomposed into a linear combination
of latent variables representing primitives captured by smaller subspaces. By
representing images in the query set as linear combinations of these latent
primitives, we can propagate the label from a single image in the support set
to query images that share similar combinations of primitives. Through a
comprehensive quantitative analysis across various neural network feature
extractors and datasets, we demonstrate that our approach can effectively
generalize to novel classes from just one labeled image.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 02:00:16 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Stein",
"Kyle",
""
],
[
"Mahyari",
"Andrew A.",
""
],
[
"Francia",
"Guillermo",
"III"
],
[
"El-Sheikh",
"Eman",
""
]
] | TITLE: Transductive One-Shot Learning Meet Subspace Decomposition
ABSTRACT: One-shot learning focuses on adapting pretrained models to recognize newly
introduced and unseen classes based on a single labeled image. While variations
of few-shot and zero-shot learning exist, one-shot learning remains a
challenging yet crucial problem due to its ability to generalize knowledge to
unseen classes from just one human-annotated image. In this paper, we introduce
a transductive one-shot learning approach that employs subspace decomposition
to utilize the information from labeled images in the support set and unlabeled
images in the query set. These images are decomposed into a linear combination
of latent variables representing primitives captured by smaller subspaces. By
representing images in the query set as linear combinations of these latent
primitives, we can propagate the label from a single image in the support set
to query images that share similar combinations of primitives. Through a
comprehensive quantitative analysis across various neural network feature
extractors and datasets, we demonstrate that our approach can effectively
generalize to novel classes from just one labeled image.
| no_new_dataset | 0.947039 |
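As a much-simplified stand-in for the subspace-decomposition idea in this record, the sketch below factorizes support and query features into nonnegative latent codes and assigns each query the label of the support example with the most similar code; the feature dimensions, component count, and similarity rule are assumptions, not the paper's formulation.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
support_feats = rng.random((5, 512))   # one labeled image per class (5 classes assumed)
support_labels = np.arange(5)
query_feats = rng.random((50, 512))    # unlabeled query images

# Learn shared latent "primitives" jointly from support and query features.
nmf = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
codes = nmf.fit_transform(np.vstack([support_feats, query_feats]))
support_codes, query_codes = codes[:5], codes[5:]

def l2_normalize(a: np.ndarray) -> np.ndarray:
    return a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-12)

# Propagate each support label to queries with the most similar latent code.
sims = l2_normalize(query_codes) @ l2_normalize(support_codes).T
pred_labels = support_labels[sims.argmax(axis=1)]
print(pred_labels[:10])
```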
2504.00369 | Yongyi Zang | Yongyi Zang, Sean O'Brien, Taylor Berg-Kirkpatrick, Julian McAuley and
Zachary Novack | Are you really listening? Boosting Perceptual Awareness in Music-QA
Benchmarks | null | null | null | null | cs.SD | http://creativecommons.org/licenses/by/4.0/ | Large Audio Language Models (LALMs), where pretrained text LLMs are finetuned
with audio input, have made remarkable progress in music understanding.
However, current evaluation methodologies exhibit critical limitations: on the
leading Music Question Answering benchmark, MuchoMusic, text-only LLMs without
audio perception capabilities achieve surprisingly high accuracy of up to
56.4%, on par or above most LALMs. Furthermore, when presented with random
Gaussian noise instead of actual audio, LALMs still perform significantly above
chance. These findings suggest existing benchmarks predominantly assess
reasoning abilities rather than audio perception. To overcome this challenge,
we present RUListening: Robust Understanding through Listening, a framework
that enhances perceptual evaluation in Music-QA benchmarks. We introduce the
Perceptual Index (PI), a quantitative metric that measures a question's
reliance on audio perception by analyzing log probability distributions from
text-only language models. Using this metric, we generate synthetic,
challenging distractors to create QA pairs that necessitate genuine audio
perception. When applied to MuchoMusic, our filtered dataset successfully
forces models to rely on perceptual information: text-only LLMs perform at
chance levels, while LALMs similarly deteriorate when audio inputs are replaced
with noise. These results validate our framework's effectiveness in creating
benchmarks that more accurately evaluate audio perception capabilities.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 02:34:19 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zang",
"Yongyi",
""
],
[
"O'Brien",
"Sean",
""
],
[
"Berg-Kirkpatrick",
"Taylor",
""
],
[
"McAuley",
"Julian",
""
],
[
"Novack",
"Zachary",
""
]
] | TITLE: Are you really listening? Boosting Perceptual Awareness in Music-QA
Benchmarks
ABSTRACT: Large Audio Language Models (LALMs), where pretrained text LLMs are finetuned
with audio input, have made remarkable progress in music understanding.
However, current evaluation methodologies exhibit critical limitations: on the
leading Music Question Answering benchmark, MuchoMusic, text-only LLMs without
audio perception capabilities achieve surprisingly high accuracy of up to
56.4%, on par or above most LALMs. Furthermore, when presented with random
Gaussian noise instead of actual audio, LALMs still perform significantly above
chance. These findings suggest existing benchmarks predominantly assess
reasoning abilities rather than audio perception. To overcome this challenge,
we present RUListening: Robust Understanding through Listening, a framework
that enhances perceptual evaluation in Music-QA benchmarks. We introduce the
Perceptual Index (PI), a quantitative metric that measures a question's
reliance on audio perception by analyzing log probability distributions from
text-only language models. Using this metric, we generate synthetic,
challenging distractors to create QA pairs that necessitate genuine audio
perception. When applied to MuchoMusic, our filtered dataset successfully
forces models to rely on perceptual information: text-only LLMs perform at
chance levels, while LALMs similarly deteriorate when audio inputs are replaced
with noise. These results validate our framework's effectiveness in creating
benchmarks that more accurately evaluate audio perception capabilities.
| new_dataset | 0.962321 |
2504.00370 | Tiantian Xie | Tiantian Xie, Pengpai Wang and Rosa H. M. Chan | Spatiotemporal Attention Learning Framework for Event-Driven Object
Recognition | 2025 IEEE NSENS | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event-based vision sensors, inspired by biological neural systems,
asynchronously capture local pixel-level intensity changes as a sparse event
stream containing position, polarity, and timestamp information. These
neuromorphic sensors offer significant advantages in dynamic range, latency,
and power efficiency. Their working principle inherently addresses traditional
camera limitations such as motion blur and redundant background information,
making them particularly suitable for dynamic vision tasks. While recent works
have proposed increasingly complex event-based architectures, the computational
overhead and parameter complexity of these approaches limit their practical
deployment. This paper presents a novel spatiotemporal learning framework for
event-based object recognition, utilizing a VGG network enhanced with a
Convolutional Block Attention Module (CBAM). Our approach achieves comparable
performance to state-of-the-art ResNet-based methods while reducing parameter
count by 2.3% compared to the original VGG model. Specifically, it outperforms
ResNet-based methods like MVF-Net, achieving the highest Top-1 accuracy of
76.4% (pretrained) and 71.3% (not pretrained) on CIFAR10-DVS, and 72.4% (not
pretrained) on N-Caltech101. These results highlight the robustness of our
method when pretrained weights are not used, making it suitable for scenarios
where transfer learning is unavailable. Moreover, our approach reduces reliance
on data augmentation. Experimental results on standard event-based datasets
demonstrate the framework's efficiency and effectiveness for real-world
applications.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 02:37:54 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Xie",
"Tiantian",
""
],
[
"Wang",
"Pengpai",
""
],
[
"Chan",
"Rosa H. M.",
""
]
] | TITLE: Spatiotemporal Attention Learning Framework for Event-Driven Object
Recognition
ABSTRACT: Event-based vision sensors, inspired by biological neural systems,
asynchronously capture local pixel-level intensity changes as a sparse event
stream containing position, polarity, and timestamp information. These
neuromorphic sensors offer significant advantages in dynamic range, latency,
and power efficiency. Their working principle inherently addresses traditional
camera limitations such as motion blur and redundant background information,
making them particularly suitable for dynamic vision tasks. While recent works
have proposed increasingly complex event-based architectures, the computational
overhead and parameter complexity of these approaches limit their practical
deployment. This paper presents a novel spatiotemporal learning framework for
event-based object recognition, utilizing a VGG network enhanced with a
Convolutional Block Attention Module (CBAM). Our approach achieves comparable
performance to state-of-the-art ResNet-based methods while reducing parameter
count by 2.3% compared to the original VGG model. Specifically, it outperforms
ResNet-based methods like MVF-Net, achieving the highest Top-1 accuracy of
76.4% (pretrained) and 71.3% (not pretrained) on CIFAR10-DVS, and 72.4% (not
pretrained) on N-Caltech101. These results highlight the robustness of our
method when pretrained weights are not used, making it suitable for scenarios
where transfer learning is unavailable. Moreover, our approach reduces reliance
on data augmentation. Experimental results on standard event-based datasets
demonstrate the framework's efficiency and effectiveness for real-world
applications.
| no_new_dataset | 0.946101 |
2504.00375 | Xin Zhang | Xin Zhang, Keren Fu, Qijun Zhao | CamoSAM2: Motion-Appearance Induced Auto-Refining Prompts for Video
Camouflaged Object Detection | 10 pages, 5 figures, | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Segment Anything Model 2 (SAM2), a prompt-guided video foundation model,
has performed remarkably in video object segmentation, drawing significant
attention in the community. Due to the high similarity between camouflaged
objects and their surroundings, which makes them difficult to distinguish even
by the human eye, the application of SAM2 for automated segmentation in
real-world scenarios faces challenges in camouflage perception and reliable
prompt generation. To address these issues, we propose CamoSAM2, a
motion-appearance prompt inducer (MAPI) and refinement framework to
automatically generate and refine prompts for SAM2, enabling high-quality
automatic detection and segmentation in the VCOD task. Initially, we introduce a
prompt inducer that simultaneously integrates motion and appearance cues to
detect camouflaged objects, delivering more accurate initial predictions than
existing methods. Subsequently, we propose a video-based adaptive multi-prompts
refinement (AMPR) strategy tailored for SAM2, aimed at mitigating prompt errors
in the initial coarse masks and further producing good prompts. Specifically, we
introduce a novel three-step process to generate reliable prompts by
camouflaged object determination, pivotal prompting frame selection, and
multi-prompts formation. Extensive experiments conducted on two benchmark
datasets demonstrate that our proposed model, CamoSAM2, significantly
outperforms existing state-of-the-art methods, achieving increases of 8.0% and
10.1% in the mIoU metric. Additionally, our method achieves the fastest inference
speed compared to current VCOD models.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 02:45:17 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhang",
"Xin",
""
],
[
"Fu",
"Keren",
""
],
[
"Zhao",
"Qijun",
""
]
] | TITLE: CamoSAM2: Motion-Appearance Induced Auto-Refining Prompts for Video
Camouflaged Object Detection
ABSTRACT: The Segment Anything Model 2 (SAM2), a prompt-guided video foundation model,
has performed remarkably in video object segmentation, drawing significant
attention in the community. Due to the high similarity between camouflaged
objects and their surroundings, which makes them difficult to distinguish even
by the human eye, the application of SAM2 for automated segmentation in
real-world scenarios faces challenges in camouflage perception and reliable
prompt generation. To address these issues, we propose CamoSAM2, a
motion-appearance prompt inducer (MAPI) and refinement framework to
automatically generate and refine prompts for SAM2, enabling high-quality
automatic detection and segmentation in the VCOD task. Initially, we introduce a
prompt inducer that simultaneously integrates motion and appearance cues to
detect camouflaged objects, delivering more accurate initial predictions than
existing methods. Subsequently, we propose a video-based adaptive multi-prompts
refinement (AMPR) strategy tailored for SAM2, aimed at mitigating prompt errors
in the initial coarse masks and further producing good prompts. Specifically, we
introduce a novel three-step process to generate reliable prompts by
camouflaged object determination, pivotal prompting frame selection, and
multi-prompts formation. Extensive experiments conducted on two benchmark
datasets demonstrate that our proposed model, CamoSAM2, significantly
outperforms existing state-of-the-art methods, achieving increases of 8.0% and
10.1% in the mIoU metric. Additionally, our method achieves the fastest inference
speed compared to current VCOD models.
| no_new_dataset | 0.953449 |
2504.00379 | Shuangping Huang | Zhiyuan Zhang, Xiaofan Li, Zhihao Xu, Wenjie Peng, Zijian Zhou,
Miaojing Shi, Shuangping Huang | MPDrive: Improving Spatial Understanding with Marker-Based Prompt
Learning for Autonomous Driving | Accepted by CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Autonomous driving visual question answering (AD-VQA) aims to answer
questions related to perception, prediction, and planning based on given
driving scene images, heavily relying on the model's spatial understanding
capabilities. Prior works typically express spatial information through textual
representations of coordinates, resulting in semantic gaps between visual
coordinate representations and textual descriptions. This oversight hinders the
accurate transmission of spatial information and increases the expressive
burden. To address this, we propose a novel Marker-based Prompt learning
framework (MPDrive), which represents spatial coordinates by concise visual
markers, ensuring linguistic expressive consistency and enhancing the accuracy
of both visual perception and spatial expression in AD-VQA. Specifically, we
create marker images by employing a detection expert to overlay object regions
with numerical labels, converting complex textual coordinate generation into
straightforward text-based visual marker predictions. Moreover, we fuse
original and marker images as scene-level features and integrate them with
detection priors to derive instance-level features. By combining these
features, we construct dual-granularity visual prompts that stimulate the LLM's
spatial perception capabilities. Extensive experiments on the DriveLM and
CODA-LM datasets show that MPDrive achieves state-of-the-art performance,
particularly in cases requiring sophisticated spatial understanding.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 02:49:39 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhang",
"Zhiyuan",
""
],
[
"Li",
"Xiaofan",
""
],
[
"Xu",
"Zhihao",
""
],
[
"Peng",
"Wenjie",
""
],
[
"Zhou",
"Zijian",
""
],
[
"Shi",
"Miaojing",
""
],
[
"Huang",
"Shuangping",
""
]
] | TITLE: MPDrive: Improving Spatial Understanding with Marker-Based Prompt
Learning for Autonomous Driving
ABSTRACT: Autonomous driving visual question answering (AD-VQA) aims to answer
questions related to perception, prediction, and planning based on given
driving scene images, heavily relying on the model's spatial understanding
capabilities. Prior works typically express spatial information through textual
representations of coordinates, resulting in semantic gaps between visual
coordinate representations and textual descriptions. This oversight hinders the
accurate transmission of spatial information and increases the expressive
burden. To address this, we propose a novel Marker-based Prompt learning
framework (MPDrive), which represents spatial coordinates by concise visual
markers, ensuring linguistic expressive consistency and enhancing the accuracy
of both visual perception and spatial expression in AD-VQA. Specifically, we
create marker images by employing a detection expert to overlay object regions
with numerical labels, converting complex textual coordinate generation into
straightforward text-based visual marker predictions. Moreover, we fuse
original and marker images as scene-level features and integrate them with
detection priors to derive instance-level features. By combining these
features, we construct dual-granularity visual prompts that stimulate the LLM's
spatial perception capabilities. Extensive experiments on the DriveLM and
CODA-LM datasets show that MPDrive achieves state-of-the-art performance,
particularly in cases requiring sophisticated spatial understanding.
| no_new_dataset | 0.947672 |
2504.00387 | Weijia Li | Zilong Huang, Jun He, Junyan Ye, Lihan Jiang, Weijia Li, Yiping Chen,
Ting Han | Scene4U: Hierarchical Layered 3D Scene Reconstruction from Single
Panoramic Image for Your Immerse Exploration | CVPR 2025, 11 pages, 7 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reconstruction of immersive and realistic 3D scenes holds significant
practical importance in various fields of computer vision and computer
graphics. Typically, immersive and realistic scenes should be free from
obstructions by dynamic objects, maintain global texture consistency, and allow
for unrestricted exploration. The current mainstream methods for image-driven
scene construction involve iteratively refining the initial image using a
moving virtual camera to generate the scene. However, previous methods struggle
with visual discontinuities due to global texture inconsistencies under varying
camera poses, and they frequently exhibit scene voids caused by
foreground-background occlusions. To this end, we propose a novel layered 3D
scene reconstruction framework from a panoramic image, named Scene4U.
Specifically, Scene4U integrates an open-vocabulary segmentation model with a
large language model to decompose a real panorama into multiple layers. Then,
we employ a layered repair module based on a diffusion model to restore occluded
regions using visual cues and depth information, generating a hierarchical
representation of the scene. The multi-layer panorama is then initialized as a
3D Gaussian Splatting representation, followed by layered optimization, which
ultimately produces an immersive 3D scene with semantic and structural
consistency that supports free exploration. Scene4U outperforms
state-of-the-art methods, improving by 24.24% in LPIPS and 24.40% in BRISQUE,
while also achieving the fastest training speed. Additionally, to demonstrate
the robustness of Scene4U and allow users to experience immersive scenes from
various landmarks, we build WorldVista3D dataset for 3D scene reconstruction,
which contains panoramic images of globally renowned sites. The implementation
code and dataset will be released at https://github.com/LongHZ140516/Scene4U .
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 03:17:24 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Huang",
"Zilong",
""
],
[
"He",
"Jun",
""
],
[
"Ye",
"Junyan",
""
],
[
"Jiang",
"Lihan",
""
],
[
"Li",
"Weijia",
""
],
[
"Chen",
"Yiping",
""
],
[
"Han",
"Ting",
""
]
] | TITLE: Scene4U: Hierarchical Layered 3D Scene Reconstruction from Single
Panoramic Image for Your Immerse Exploration
ABSTRACT: The reconstruction of immersive and realistic 3D scenes holds significant
practical importance in various fields of computer vision and computer
graphics. Typically, immersive and realistic scenes should be free from
obstructions by dynamic objects, maintain global texture consistency, and allow
for unrestricted exploration. The current mainstream methods for image-driven
scene construction involve iteratively refining the initial image using a
moving virtual camera to generate the scene. However, previous methods struggle
with visual discontinuities due to global texture inconsistencies under varying
camera poses, and they frequently exhibit scene voids caused by
foreground-background occlusions. To this end, we propose a novel layered 3D
scene reconstruction framework from a panoramic image, named Scene4U.
Specifically, Scene4U integrates an open-vocabulary segmentation model with a
large language model to decompose a real panorama into multiple layers. Then,
we employ a layered repair module based on a diffusion model to restore occluded
regions using visual cues and depth information, generating a hierarchical
representation of the scene. The multi-layer panorama is then initialized as a
3D Gaussian Splatting representation, followed by layered optimization, which
ultimately produces an immersive 3D scene with semantic and structural
consistency that supports free exploration. Scene4U outperforms
state-of-the-art methods, improving by 24.24% in LPIPS and 24.40% in BRISQUE,
while also achieving the fastest training speed. Additionally, to demonstrate
the robustness of Scene4U and allow users to experience immersive scenes from
various landmarks, we build WorldVista3D dataset for 3D scene reconstruction,
which contains panoramic images of globally renowned sites. The implementation
code and dataset will be released at https://github.com/LongHZ140516/Scene4U .
| new_dataset | 0.914901 |
2504.00388 | Marinus Ferreira | Marinus Ferreira | Using complex prompts to identify fine-grained biases in image
generation through ChatGPT-4o | Presented at the 74th Annual ICA 2024 Conference, in the stream
"Image-as-Data Methods in the Age of Generative Artificial Intelligence", 22
June 2024 | null | null | null | cs.CY cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | There are not one but two dimensions of bias that can be revealed through the
study of large AI models: not only bias in training data or the products of an
AI, but also bias in society, such as disparity in employment or health
outcomes between different demographic groups. Often training data and AI
output are biased for or against certain demographics (e.g., older white people
are overrepresented in image datasets), but sometimes large AI models
accurately illustrate biases in the real world (e.g., young black men being
disproportionately viewed as threatening). These social disparities often
appear in image generation AI outputs in the form of 'marked' features, where
some feature of an individual or setting is a social marker of disparity, and
prompts both humans and AI systems to treat subjects that are marked in this
way as exceptional and requiring special treatment. Generative AI has proven to
be very sensitive to such marked features, to the extent of over-emphasising
them and thus often exacerbating social biases. I briefly discuss how we can
use complex prompts to image generation AI to investigate either dimension of
bias, emphasising how we can probe the large language models underlying image
generation AI through, for example, automated sentiment analysis of the text
prompts used to generate images.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 03:17:35 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Ferreira",
"Marinus",
""
]
] | TITLE: Using complex prompts to identify fine-grained biases in image
generation through ChatGPT-4o
ABSTRACT: There are not one but two dimensions of bias that can be revealed through the
study of large AI models: not only bias in training data or the products of an
AI, but also bias in society, such as disparity in employment or health
outcomes between different demographic groups. Often training data and AI
output are biased for or against certain demographics (e.g., older white people
are overrepresented in image datasets), but sometimes large AI models
accurately illustrate biases in the real world (e.g., young black men being
disproportionately viewed as threatening). These social disparities often
appear in image generation AI outputs in the form of 'marked' features, where
some feature of an individual or setting is a social marker of disparity, and
prompts both humans and AI systems to treat subjects that are marked in this
way as exceptional and requiring special treatment. Generative AI has proven to
be very sensitive to such marked features, to the extent of over-emphasising
them and thus often exacerbating social biases. I briefly discuss how we can
use complex prompts to image generation AI to investigate either dimension of
bias, emphasising how we can probe the large language models underlying image
generation AI through, for example, automated sentiment analysis of the text
prompts used to generate images.
| no_new_dataset | 0.939858 |
2504.00394 | Lei Wang | Lei Wang, Yujie Zhong, Xiaopeng Sun, Jingchun Cheng, Chengjian Feng,
Qiong Cao, Lin Ma, Zhaoxin Fan | AP-CAP: Advancing High-Quality Data Synthesis for Animal Pose Estimation
via a Controllable Image Generation Pipeline | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of 2D animal pose estimation plays a crucial role in advancing deep
learning applications in animal behavior analysis and ecological research.
Despite notable progress in some existing approaches, our study reveals that
the scarcity of high-quality datasets remains a significant bottleneck,
limiting the full potential of current methods. To address this challenge, we
propose a novel Controllable Image Generation Pipeline for synthesizing animal
pose estimation data, termed AP-CAP. Within this pipeline, we introduce a
Multi-Modal Animal Image Generation Model capable of producing images with
expected poses. To enhance the quality and diversity of the generated data, we
further propose three innovative strategies: (1) Modality-Fusion-Based Animal
Image Synthesis Strategy to integrate multi-source appearance representations,
(2) Pose-Adjustment-Based Animal Image Synthesis Strategy to dynamically
capture diverse pose variations, and (3) Caption-Enhancement-Based Animal Image
Synthesis Strategy to enrich visual semantic understanding. Leveraging the
proposed model and strategies, we create the MPCH Dataset
(Modality-Pose-Caption Hybrid), the first hybrid dataset that innovatively
combines synthetic and real data, establishing the largest-scale multi-source
heterogeneous benchmark repository for animal pose estimation to date.
Extensive experiments demonstrate the superiority of our method in improving
both the performance and generalization capability of animal pose estimators.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 03:28:29 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wang",
"Lei",
""
],
[
"Zhong",
"Yujie",
""
],
[
"Sun",
"Xiaopeng",
""
],
[
"Cheng",
"Jingchun",
""
],
[
"Feng",
"Chengjian",
""
],
[
"Cao",
"Qiong",
""
],
[
"Ma",
"Lin",
""
],
[
"Fan",
"Zhaoxin",
""
]
] | TITLE: AP-CAP: Advancing High-Quality Data Synthesis for Animal Pose Estimation
via a Controllable Image Generation Pipeline
ABSTRACT: The task of 2D animal pose estimation plays a crucial role in advancing deep
learning applications in animal behavior analysis and ecological research.
Despite notable progress in some existing approaches, our study reveals that
the scarcity of high-quality datasets remains a significant bottleneck,
limiting the full potential of current methods. To address this challenge, we
propose a novel Controllable Image Generation Pipeline for synthesizing animal
pose estimation data, termed AP-CAP. Within this pipeline, we introduce a
Multi-Modal Animal Image Generation Model capable of producing images with
expected poses. To enhance the quality and diversity of the generated data, we
further propose three innovative strategies: (1) Modality-Fusion-Based Animal
Image Synthesis Strategy to integrate multi-source appearance representations,
(2) Pose-Adjustment-Based Animal Image Synthesis Strategy to dynamically
capture diverse pose variations, and (3) Caption-Enhancement-Based Animal Image
Synthesis Strategy to enrich visual semantic understanding. Leveraging the
proposed model and strategies, we create the MPCH Dataset
(Modality-Pose-Caption Hybrid), the first hybrid dataset that innovatively
combines synthetic and real data, establishing the largest-scale multi-source
heterogeneous benchmark repository for animal pose estimation to date.
Extensive experiments demonstrate the superiority of our method in improving
both the performance and generalization capability of animal pose estimators.
| new_dataset | 0.959837 |
2504.00400 | Wang Haodian | Haodian Wang, Yaqi Song | Adaptive Low Light Enhancement via Joint Global-Local Illumination
Adjustment | null | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Images captured under real-world low-light conditions face significant
challenges due to uneven ambient lighting, making it difficult for existing
end-to-end methods to enhance images with a large dynamic range to normal
exposure levels. To address the above issue, we propose a novel
brightness-adaptive enhancement framework designed to tackle the challenge of
local exposure inconsistencies in real-world low-light images. Specifically,
our proposed framework comprises two components: the Local Contrast Enhancement
Network (LCEN) and the Global Illumination Guidance Network (GIGN). We
introduce an early stopping mechanism in the LCEN and design a local
discriminative module, which adaptively perceives the contrast of different
areas in the image to control the premature termination of the enhancement
process for patches with varying exposure levels. Additionally, within the
GIGN, we design a global attention guidance module that effectively models
global illumination by capturing long-range dependencies and contextual
information within the image, which guides the local contrast enhancement
network to significantly improve brightness across different regions. Finally,
in order to coordinate the LCEN and GIGN, we design a novel training strategy
to facilitate the training process. Experiments on multiple datasets
demonstrate that our method achieves superior quantitative and qualitative
results compared to state-of-the-art algorithms.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 03:46:28 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wang",
"Haodian",
""
],
[
"Song",
"Yaqi",
""
]
] | TITLE: Adaptive Low Light Enhancement via Joint Global-Local Illumination
Adjustment
ABSTRACT: Images captured under real-world low-light conditions face significant
challenges due to uneven ambient lighting, making it difficult for existing
end-to-end methods to enhance images with a large dynamic range to normal
exposure levels. To address the above issue, we propose a novel
brightness-adaptive enhancement framework designed to tackle the challenge of
local exposure inconsistencies in real-world low-light images. Specifically,
our proposed framework comprises two components: the Local Contrast Enhancement
Network (LCEN) and the Global Illumination Guidance Network (GIGN). We
introduce an early stopping mechanism in the LCEN and design a local
discriminative module, which adaptively perceives the contrast of different
areas in the image to control the premature termination of the enhancement
process for patches with varying exposure levels. Additionally, within the
GIGN, we design a global attention guidance module that effectively models
global illumination by capturing long-range dependencies and contextual
information within the image, which guides the local contrast enhancement
network to significantly improve brightness across different regions. Finally,
in order to coordinate the LCEN and GIGN, we design a novel training strategy
to facilitate the training process. Experiments on multiple datasets
demonstrate that our method achieves superior quantitative and qualitative
results compared to state-of-the-art algorithms.
| no_new_dataset | 0.948489 |
2504.00401 | Wenbo Nie | Wenbo Nie, Lang Nie, Chunyu Lin, Jingwen Chen, Ke Xing, Jiyuan Wang,
Yao Zhao | Beyond Wide-Angle Images: Unsupervised Video Portrait Correction via
Spatiotemporal Diffusion Adaptation | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wide-angle cameras, despite their popularity for content creation, suffer
from distortion-induced facial stretching, especially at the edge of the
lens, which degrades visual appeal. To address this issue, we propose an image
portrait correction framework using diffusion models named ImagePD. It
integrates the long-range awareness of transformer and multi-step denoising of
diffusion models into a unified framework, achieving global structural
robustness and local detail refinement. Besides, considering the high cost of
obtaining video labels, we then repurpose ImagePD for unlabeled wide-angle
videos (termed VideoPD), by spatiotemporal diffusion adaptation with spatial
consistency and temporal smoothness constraints. For the former, we encourage
the denoised image to approximate pseudo labels following the wide-angle
distortion distribution pattern, while for the latter, we derive rectification
trajectories with backward optical flows and smooth them. Compared with
ImagePD, VideoPD maintains high-quality facial corrections in space and
mitigates the potential temporal shakes sequentially. Finally, to establish an
evaluation benchmark and train the framework, we establish a video portrait
dataset with large diversity in the number of people, lighting conditions, and
backgrounds. Experiments demonstrate that the proposed methods outperform
existing solutions quantitatively and qualitatively, contributing to
high-fidelity wide-angle videos with stable and natural portraits. The codes
and dataset will be available.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 03:49:59 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Nie",
"Wenbo",
""
],
[
"Nie",
"Lang",
""
],
[
"Lin",
"Chunyu",
""
],
[
"Chen",
"Jingwen",
""
],
[
"Xing",
"Ke",
""
],
[
"Wang",
"Jiyuan",
""
],
[
"Zhao",
"Yao",
""
]
] | TITLE: Beyond Wide-Angle Images: Unsupervised Video Portrait Correction via
Spatiotemporal Diffusion Adaptation
ABSTRACT: Wide-angle cameras, despite their popularity for content creation, suffer
from distortion-induced facial stretching, especially at the edge of the
lens, which degrades visual appeal. To address this issue, we propose an image
portrait correction framework using diffusion models named ImagePD. It
integrates the long-range awareness of transformer and multi-step denoising of
diffusion models into a unified framework, achieving global structural
robustness and local detail refinement. Besides, considering the high cost of
obtaining video labels, we then repurpose ImagePD for unlabeled wide-angle
videos (termed VideoPD), by spatiotemporal diffusion adaptation with spatial
consistency and temporal smoothness constraints. For the former, we encourage
the denoised image to approximate pseudo labels following the wide-angle
distortion distribution pattern, while for the latter, we derive rectification
trajectories with backward optical flows and smooth them. Compared with
ImagePD, VideoPD maintains high-quality facial corrections in space and
mitigates the potential temporal shakes sequentially. Finally, to establish an
evaluation benchmark and train the framework, we establish a video portrait
dataset with large diversity in the number of people, lighting conditions, and
backgrounds. Experiments demonstrate that the proposed methods outperform
existing solutions quantitatively and qualitatively, contributing to
high-fidelity wide-angle videos with stable and natural portraits. The codes
and dataset will be available.
| no_new_dataset | 0.948394 |
2504.00410 | Dongwoo Park | Dongwoo Park and Suk Pil Ko | NCAP: Scene Text Image Super-Resolution with Non-CAtegorical Prior | WACV 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Scene text image super-resolution (STISR) enhances the resolution and quality
of low-resolution images. Unlike previous studies that treated scene text
images as natural images, recent methods using a text prior (TP), extracted
from a pre-trained text recognizer, have shown strong performance. However, two
major issues emerge: (1) Explicit categorical priors, like TP, can negatively
impact STISR if incorrect. We reveal that these explicit priors are unstable
and propose replacing them with Non-CAtegorical Prior (NCAP) using penultimate
layer representations. (2) Pre-trained recognizers used to generate TP struggle
with low-resolution images. To address this, most studies jointly train the
recognizer with the STISR network to bridge the domain gap between low- and
high-resolution images, but this can cause an overconfidence phenomenon in the
prior modality. We highlight this issue and propose a method to mitigate it by
mixing hard and soft labels. Experiments on the TextZoom dataset demonstrate an
improvement of 3.5%, while our method significantly enhances generalization
performance by 14.8% across four text recognition datasets. Our method
generalizes to all TP-guided STISR networks.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 04:14:07 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Park",
"Dongwoo",
""
],
[
"Ko",
"Suk Pil",
""
]
] | TITLE: NCAP: Scene Text Image Super-Resolution with Non-CAtegorical Prior
ABSTRACT: Scene text image super-resolution (STISR) enhances the resolution and quality
of low-resolution images. Unlike previous studies that treated scene text
images as natural images, recent methods using a text prior (TP), extracted
from a pre-trained text recognizer, have shown strong performance. However, two
major issues emerge: (1) Explicit categorical priors, like TP, can negatively
impact STISR if incorrect. We reveal that these explicit priors are unstable
and propose replacing them with Non-CAtegorical Prior (NCAP) using penultimate
layer representations. (2) Pre-trained recognizers used to generate TP struggle
with low-resolution images. To address this, most studies jointly train the
recognizer with the STISR network to bridge the domain gap between low- and
high-resolution images, but this can cause an overconfidence phenomenon in the
prior modality. We highlight this issue and propose a method to mitigate it by
mixing hard and soft labels. Experiments on the TextZoom dataset demonstrate an
improvement of 3.5%, while our method significantly enhances generalization
performance by 14.8% across four text recognition datasets. Our method
generalizes to all TP-guided STISR networks.
| no_new_dataset | 0.952486 |
2504.00414 | Niclas Griesshaber | Gavin Greif, Niclas Griesshaber, Robin Greif | Multimodal LLMs for OCR, OCR Post-Correction, and Named Entity
Recognition in Historical Documents | null | null | null | null | cs.CL cs.AI cs.DL | http://creativecommons.org/licenses/by/4.0/ | We explore how multimodal Large Language Models (mLLMs) can help researchers
transcribe historical documents, extract relevant historical information, and
construct datasets from historical sources. Specifically, we investigate the
capabilities of mLLMs in performing (1) Optical Character Recognition (OCR),
(2) OCR Post-Correction, and (3) Named Entity Recognition (NER) tasks on a set
of city directories published in German between 1754 and 1870. First, we
benchmark the off-the-shelf transcription accuracy of both mLLMs and
conventional OCR models. We find that the best-performing mLLM model
significantly outperforms conventional state-of-the-art OCR models and other
frontier mLLMs. Second, we are the first to introduce multimodal
post-correction of OCR output using mLLMs. We find that this novel approach
leads to a drastic improvement in transcription accuracy and consistently
produces highly accurate transcriptions (<1% CER), without any image
pre-processing or model fine-tuning. Third, we demonstrate that mLLMs can
efficiently recognize entities in transcriptions of historical documents and
parse them into structured dataset formats. Our findings provide early evidence
for the long-term potential of mLLMs to introduce a paradigm shift in the
approaches to historical data collection and document transcription.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 04:21:34 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Greif",
"Gavin",
""
],
[
"Griesshaber",
"Niclas",
""
],
[
"Greif",
"Robin",
""
]
] | TITLE: Multimodal LLMs for OCR, OCR Post-Correction, and Named Entity
Recognition in Historical Documents
ABSTRACT: We explore how multimodal Large Language Models (mLLMs) can help researchers
transcribe historical documents, extract relevant historical information, and
construct datasets from historical sources. Specifically, we investigate the
capabilities of mLLMs in performing (1) Optical Character Recognition (OCR),
(2) OCR Post-Correction, and (3) Named Entity Recognition (NER) tasks on a set
of city directories published in German between 1754 and 1870. First, we
benchmark the off-the-shelf transcription accuracy of both mLLMs and
conventional OCR models. We find that the best-performing mLLM model
significantly outperforms conventional state-of-the-art OCR models and other
frontier mLLMs. Second, we are the first to introduce multimodal
post-correction of OCR output using mLLMs. We find that this novel approach
leads to a drastic improvement in transcription accuracy and consistently
produces highly accurate transcriptions (<1% CER), without any image
pre-processing or model fine-tuning. Third, we demonstrate that mLLMs can
efficiently recognize entities in transcriptions of historical documents and
parse them into structured dataset formats. Our findings provide early evidence
for the long-term potential of mLLMs to introduce a paradigm shift in the
approaches to historical data collection and document transcription.
| no_new_dataset | 0.944944 |
2504.00419 | Jianghui Ji | Zixin Chen, Jianghui Ji, Guo Chen, Fei Yan, Xianyu Tan | Asymmetry and Dynamical Constraints in 2-Limbs Retrieval of WASP-39 b
Inferring from JWST Data | 16 pages, 6 figures, accepted for publication in AJ | null | null | null | astro-ph.EP astro-ph.IM astro-ph.SR physics.ao-ph physics.space-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transmission spectroscopy has provided unprecedented insight into the makeup
of exoplanet atmospheres. A transmission spectrum contains contributions from a
planet's morning and evening limbs, which can differ in temperature,
composition and aerosol properties due to atmospheric circulation. While
high-resolution ground-based observations have identified limb asymmetry in
several ultra-hot/hot exoplanets, space-based studies of limb asymmetry are
still in their early stages. The prevalence of limb asymmetry across a broad
range of exoplanets remains largely unexplored. We conduct a comparative
analysis of retrievals on transmission spectra, including traditional 1D
approaches and four 2D models that account for limb asymmetry. Two of these 2D
models include our newly proposed dynamical constraints derived from
shallow-water simulations to provide physically-motivated temperature
differences between limbs. Our analysis of WASP-39 b using JWST observations
and previous combined datasets (HST, VLT, and Spitzer) strongly favors 2D
retrievals over traditional 1D approaches, confirming significant limb
asymmetry in this hot Jupiter. Within our 2D framework, unconstrained models
recover larger temperature contrasts than dynamically-constrained models, with
improved fits to specific spectral features, although Bayesian evidence cannot
definitively distinguish between these 2D approaches. Our results support the
presence of homogeneous C/O in both the morning and evening atmospheres, but
with temperature differences leading to variations in clouds and hazes. Using
this treatment, we can study a larger sample of hot Jupiters to gain insights
into atmospheric limb asymmetries on these planets.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 04:49:17 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Chen",
"Zixin",
""
],
[
"Ji",
"Jianghui",
""
],
[
"Chen",
"Guo",
""
],
[
"Yan",
"Fei",
""
],
[
"Tan",
"Xianyu",
""
]
] | TITLE: Asymmetry and Dynamical Constraints in 2-Limbs Retrieval of WASP-39 b
Inferring from JWST Data
ABSTRACT: Transmission spectroscopy has provided unprecedented insight into the makeup
of exoplanet atmospheres. A transmission spectrum contains contributions from a
planet's morning and evening limbs, which can differ in temperature,
composition and aerosol properties due to atmospheric circulation. While
high-resolution ground-based observations have identified limb asymmetry in
several ultra-hot/hot exoplanets, space-based studies of limb asymmetry are
still in their early stages. The prevalence of limb asymmetry across a broad
range of exoplanets remains largely unexplored. We conduct a comparative
analysis of retrievals on transmission spectra, including traditional 1D
approaches and four 2D models that account for limb asymmetry. Two of these 2D
models include our newly proposed dynamical constraints derived from
shallow-water simulations to provide physically-motivated temperature
differences between limbs. Our analysis of WASP-39 b using JWST observations
and previous combined datasets (HST, VLT, and Spitzer) strongly favors 2D
retrievals over traditional 1D approaches, confirming significant limb
asymmetry in this hot Jupiter. Within our 2D framework, unconstrained models
recover larger temperature contrasts than dynamically-constrained models, with
improved fits to specific spectral features, although Bayesian evidence cannot
definitively distinguish between these 2D approaches. Our results support the
presence of homogeneous C/O in both the morning and evening atmospheres, but
with temperature differences leading to variations in clouds and hazes. Using
this treatment, we can study a larger sample of hot Jupiters to gain insights
into atmospheric limb asymmetries on these planets.
| no_new_dataset | 0.934932 |
2504.00420 | Yuanqi Yao | Yuanqi Yao, Siao Liu, Haoming Song, Delin Qu, Qizhi Chen, Yan Ding,
Bin Zhao, Zhigang Wang, Xuelong Li, Dong Wang | Think Small, Act Big: Primitive Prompt Learning for Lifelong Robot
Manipulation | Accepted to CVPR 2025 | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Building a lifelong robot that can effectively leverage prior knowledge for
continuous skill acquisition remains significantly challenging. Despite the
success of experience replay and parameter-efficient methods in alleviating
catastrophic forgetting problem, naively applying these methods causes a
failure to leverage the shared primitives between skills. To tackle these
issues, we propose Primitive Prompt Learning (PPL), to achieve lifelong robot
manipulation via reusable and extensible primitives. Within our two-stage
learning scheme, we first learn a set of primitive prompts to represent shared
primitives through a multi-skill pre-training stage, where motion-aware prompts
are learned to capture semantic and motion primitives shared across different
skills. Secondly, when acquiring new skills over a lifelong span, new prompts are
appended and optimized with frozen pretrained prompts, boosting the learning
via knowledge transfer from old skills to new ones. For evaluation, we
construct a large-scale skill dataset and conduct extensive experiments in both
simulation and real-world tasks, demonstrating PPL's superior performance over
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 04:55:34 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Yao",
"Yuanqi",
""
],
[
"Liu",
"Siao",
""
],
[
"Song",
"Haoming",
""
],
[
"Qu",
"Delin",
""
],
[
"Chen",
"Qizhi",
""
],
[
"Ding",
"Yan",
""
],
[
"Zhao",
"Bin",
""
],
[
"Wang",
"Zhigang",
""
],
[
"Li",
"Xuelong",
""
],
[
"Wang",
"Dong",
""
]
] | TITLE: Think Small, Act Big: Primitive Prompt Learning for Lifelong Robot
Manipulation
ABSTRACT: Building a lifelong robot that can effectively leverage prior knowledge for
continuous skill acquisition remains significantly challenging. Despite the
success of experience replay and parameter-efficient methods in alleviating
catastrophic forgetting problem, naively applying these methods causes a
failure to leverage the shared primitives between skills. To tackle these
issues, we propose Primitive Prompt Learning (PPL), to achieve lifelong robot
manipulation via reusable and extensible primitives. Within our two-stage
learning scheme, we first learn a set of primitive prompts to represent shared
primitives through a multi-skill pre-training stage, where motion-aware prompts
are learned to capture semantic and motion primitives shared across different
skills. Secondly, when acquiring new skills over a lifelong span, new prompts are
appended and optimized with frozen pretrained prompts, boosting the learning
via knowledge transfer from old skills to new ones. For evaluation, we
construct a large-scale skill dataset and conduct extensive experiments in both
simulation and real-world tasks, demonstrating PPL's superior performance over
state-of-the-art methods.
| new_dataset | 0.95877 |
2504.00421 | Chi Liu Mr | Dongfu Xiao, Chen Gao, Zhengquan Luo, Chi Liu, Sheng Shen | Can LLMs Assist Computer Education? an Empirical Case Study of DeepSeek | null | null | null | null | cs.CV cs.CY | http://creativecommons.org/licenses/by/4.0/ | This study presents an empirical case study to assess the efficacy and
reliability of DeepSeek-V3, an emerging large language model, within the
context of computer education. The evaluation employs both CCNA simulation
questions and real-world inquiries concerning computer network security posed
by Chinese network engineers. To ensure a thorough evaluation, diverse
dimensions are considered, encompassing role dependency, cross-linguistic
proficiency, and answer reproducibility, accompanied by statistical analysis.
The findings demonstrate that the model performs consistently, regardless of
whether prompts include a role definition or not. In addition, its adaptability
across languages is confirmed by maintaining stable accuracy in both original
and translated datasets. A distinct contrast emerges between its performance on
lower-order factual recall tasks and higher-order reasoning exercises, which
underscores its strengths in retrieving information and its limitations in
complex analytical tasks. Although DeepSeek-V3 offers considerable practical
value for network security education, challenges remain in its capability to
process multimodal data and address highly intricate topics. These results
provide valuable insights for future refinement of large language models in
specialized professional environments.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 04:58:16 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Xiao",
"Dongfu",
""
],
[
"Gao",
"Chen",
""
],
[
"Luo",
"Zhengquan",
""
],
[
"Liu",
"Chi",
""
],
[
"Shen",
"Sheng",
""
]
] | TITLE: Can LLMs Assist Computer Education? an Empirical Case Study of DeepSeek
ABSTRACT: This study presents an empirical case study to assess the efficacy and
reliability of DeepSeek-V3, an emerging large language model, within the
context of computer education. The evaluation employs both CCNA simulation
questions and real-world inquiries concerning computer network security posed
by Chinese network engineers. To ensure a thorough evaluation, diverse
dimensions are considered, encompassing role dependency, cross-linguistic
proficiency, and answer reproducibility, accompanied by statistical analysis.
The findings demonstrate that the model performs consistently, regardless of
whether prompts include a role definition or not. In addition, its adaptability
across languages is confirmed by maintaining stable accuracy in both original
and translated datasets. A distinct contrast emerges between its performance on
lower-order factual recall tasks and higher-order reasoning exercises, which
underscores its strengths in retrieving information and its limitations in
complex analytical tasks. Although DeepSeek-V3 offers considerable practical
value for network security education, challenges remain in its capability to
process multimodal data and address highly intricate topics. These results
provide valuable insights for future refinement of large language models in
specialized professional environments.
| no_new_dataset | 0.936865 |
2504.00431 | Chi Liu Mr | Yuzhuo Zhou, Chi Liu, Sheng Shen, Siyu Le, Liwen Yu, Sihan Ouyang,
Zongyuan Ge | Enhancing Fundus Image-based Glaucoma Screening via Dynamic Global-Local
Feature Integration | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | With the advancements in medical artificial intelligence (AI), fundus image
classifiers are increasingly being applied to assist in ophthalmic diagnosis.
While existing classification models have achieved high accuracy on specific
fundus datasets, they struggle to address real-world challenges such as
variations in image quality across different imaging devices, discrepancies
between training and testing images across different racial groups, and the
uncertain boundaries due to the characteristics of glaucomatous cases. In this
study, we aim to address the above challenges posed by image variations by
highlighting the importance of incorporating comprehensive fundus image
information, including the optic cup (OC) and optic disc (OD) regions, and
other key image patches. Specifically, we propose a self-adaptive attention
window that autonomously determines optimal boundaries for enhanced feature
extraction. Additionally, we introduce a multi-head attention mechanism to
effectively fuse global and local features via feature linear readout,
improving the model's discriminative capability. Experimental results
demonstrate that our method achieves superior accuracy and robustness in
glaucoma classification.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 05:28:14 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhou",
"Yuzhuo",
""
],
[
"Liu",
"Chi",
""
],
[
"Shen",
"Sheng",
""
],
[
"Le",
"Siyu",
""
],
[
"Yu",
"Liwen",
""
],
[
"Ouyang",
"Sihan",
""
],
[
"Ge",
"Zongyuan",
""
]
] | TITLE: Enhancing Fundus Image-based Glaucoma Screening via Dynamic Global-Local
Feature Integration
ABSTRACT: With the advancements in medical artificial intelligence (AI), fundus image
classifiers are increasingly being applied to assist in ophthalmic diagnosis.
While existing classification models have achieved high accuracy on specific
fundus datasets, they struggle to address real-world challenges such as
variations in image quality across different imaging devices, discrepancies
between training and testing images across different racial groups, and the
uncertain boundaries due to the characteristics of glaucomatous cases. In this
study, we aim to address the above challenges posed by image variations by
highlighting the importance of incorporating comprehensive fundus image
information, including the optic cup (OC) and optic disc (OD) regions, and
other key image patches. Specifically, we propose a self-adaptive attention
window that autonomously determines optimal boundaries for enhanced feature
extraction. Additionally, we introduce a multi-head attention mechanism to
effectively fuse global and local features via feature linear readout,
improving the model's discriminative capability. Experimental results
demonstrate that our method achieves superior accuracy and robustness in
glaucoma classification.
| no_new_dataset | 0.947527 |
2504.00437 | Qi Song | Qi Song, Chenghong Li, Haotong Lin, Sida Peng, Rui Huang | ADGaussian: Generalizable Gaussian Splatting for Autonomous Driving with
Multi-modal Inputs | The project page can be found at
https://maggiesong7.github.io/research/ADGaussian/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel approach, termed ADGaussian, for generalizable street
scene reconstruction. The proposed method enables high-quality rendering from
single-view input. Unlike prior Gaussian Splatting methods that primarily focus
on geometry refinement, we emphasize the importance of joint optimization of
image and depth features for accurate Gaussian prediction. To this end, we
first incorporate sparse LiDAR depth as an additional input modality,
formulating the Gaussian prediction process as a joint learning framework of
visual information and geometric clues. Furthermore, we propose a multi-modal
feature matching strategy coupled with a multi-scale Gaussian decoding model to
enhance the joint refinement of multi-modal features, thereby enabling
efficient multi-modal Gaussian learning. Extensive experiments on two
large-scale autonomous driving datasets, Waymo and KITTI, demonstrate that our
ADGaussian achieves state-of-the-art performance and exhibits superior
zero-shot generalization capabilities in novel-view shifting.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 05:40:23 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Song",
"Qi",
""
],
[
"Li",
"Chenghong",
""
],
[
"Lin",
"Haotong",
""
],
[
"Peng",
"Sida",
""
],
[
"Huang",
"Rui",
""
]
] | TITLE: ADGaussian: Generalizable Gaussian Splatting for Autonomous Driving with
Multi-modal Inputs
ABSTRACT: We present a novel approach, termed ADGaussian, for generalizable street
scene reconstruction. The proposed method enables high-quality rendering from
single-view input. Unlike prior Gaussian Splatting methods that primarily focus
on geometry refinement, we emphasize the importance of joint optimization of
image and depth features for accurate Gaussian prediction. To this end, we
first incorporate sparse LiDAR depth as an additional input modality,
formulating the Gaussian prediction process as a joint learning framework of
visual information and geometric clues. Furthermore, we propose a multi-modal
feature matching strategy coupled with a multi-scale Gaussian decoding model to
enhance the joint refinement of multi-modal features, thereby enabling
efficient multi-modal Gaussian learning. Extensive experiments on two
large-scale autonomous driving datasets, Waymo and KITTI, demonstrate that our
ADGaussian achieves state-of-the-art performance and exhibits superior
zero-shot generalization capabilities in novel-view shifting.
| no_new_dataset | 0.946547 |
2504.00438 | Lan Sun | Lan Sun and Songpengcheng Xia and Jiarui Yang and Ling Pei | Suite-IN++: A FlexiWear BodyNet Integrating Global and Local Motion
Features from Apple Suite for Robust Inertial Navigation | 15 pages,10 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proliferation of wearable technology has established multi-device
ecosystems comprising smartphones, smartwatches, and headphones as critical
enablers for ubiquitous pedestrian localization. However, traditional
pedestrian dead reckoning (PDR) struggles with diverse motion modes, while
data-driven methods, despite improving accuracy, often lack robustness due to
their reliance on a single-device setup. Therefore, a promising solution is to
fully leverage existing wearable devices to form a flexiwear bodynet for robust
and accurate pedestrian localization. This paper presents Suite-IN++, a deep
learning framework for flexiwear bodynet-based pedestrian localization.
Suite-IN++ integrates motion data from wearable devices on different body
parts, using contrastive learning to separate global and local motion features.
It fuses global features based on the data reliability of each device to
capture overall motion trends and employs an attention mechanism to uncover
cross-device correlations in local features, extracting motion details helpful
for accurate localization. To evaluate our method, we construct a real-life
flexiwear bodynet dataset, incorporating Apple Suite (iPhone, Apple Watch, and
AirPods) across diverse walking modes and device configurations. Experimental
results demonstrate that Suite-IN++ achieves superior localization accuracy and
robustness, significantly outperforming state-of-the-art models in real-life
pedestrian tracking scenarios.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 05:40:52 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Sun",
"Lan",
""
],
[
"Xia",
"Songpengcheng",
""
],
[
"Yang",
"Jiarui",
""
],
[
"Pei",
"Ling",
""
]
] | TITLE: Suite-IN++: A FlexiWear BodyNet Integrating Global and Local Motion
Features from Apple Suite for Robust Inertial Navigation
ABSTRACT: The proliferation of wearable technology has established multi-device
ecosystems comprising smartphones, smartwatches, and headphones as critical
enablers for ubiquitous pedestrian localization. However, traditional
pedestrian dead reckoning (PDR) struggles with diverse motion modes, while
data-driven methods, despite improving accuracy, often lack robustness due to
their reliance on a single-device setup. Therefore, a promising solution is to
fully leverage existing wearable devices to form a flexiwear bodynet for robust
and accurate pedestrian localization. This paper presents Suite-IN++, a deep
learning framework for flexiwear bodynet-based pedestrian localization.
Suite-IN++ integrates motion data from wearable devices on different body
parts, using contrastive learning to separate global and local motion features.
It fuses global features based on the data reliability of each device to
capture overall motion trends and employs an attention mechanism to uncover
cross-device correlations in local features, extracting motion details helpful
for accurate localization. To evaluate our method, we construct a real-life
flexiwear bodynet dataset, incorporating Apple Suite (iPhone, Apple Watch, and
AirPods) across diverse walking modes and device configurations. Experimental
results demonstrate that Suite-IN++ achieves superior localization accuracy and
robustness, significantly outperforming state-of-the-art models in real-life
pedestrian tracking scenarios.
| new_dataset | 0.959649 |
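The Suite-IN++ record above relies on contrastive learning to separate global from local motion features. As a hedged illustration only, here is a minimal InfoNCE-style objective in Python; the pairing scheme, temperature, and feature shapes are assumptions and not the paper's implementation:

```python
# Minimal InfoNCE-style contrastive loss: row i of z_a and row i of z_b are
# treated as a positive pair (e.g. global motion features of the same time
# window from two devices); all other rows act as negatives.
import torch
import torch.nn.functional as F

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature      # (B, B) cosine-similarity logits
    targets = torch.arange(z_a.shape[0])      # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
print(loss.item())
```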
2504.00447 | Insoon Yang | Jaeuk Shin, Jungjin Lee, Insoon Yang | Egocentric Conformal Prediction for Safe and Efficient Navigation in
Dynamic Cluttered Environments | null | null | null | null | cs.RO cs.SY eess.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Conformal prediction (CP) has emerged as a powerful tool in robotics and
control, thanks to its ability to calibrate complex, data-driven models with
formal guarantees. However, in robot navigation tasks, existing CP-based
methods often decouple prediction from control, evaluating models without
considering whether prediction errors actually compromise safety. Consequently,
ego-vehicles may become overly conservative or even immobilized when all
potential trajectories appear infeasible. To address this issue, we propose a
novel CP-based navigation framework that responds exclusively to
safety-critical prediction errors. Our approach introduces egocentric score
functions that quantify how much closer obstacles are to a candidate vehicle
position than anticipated. These score functions are then integrated into a
model predictive control scheme, wherein each candidate state is individually
evaluated for safety. Combined with an adaptive CP mechanism, our framework
dynamically adjusts to changes in obstacle motion without resorting to
unnecessary conservatism. Theoretical analyses indicate that our method
outperforms existing CP-based approaches in terms of cost-efficiency while
maintaining the desired safety levels, as further validated through experiments
on real-world datasets featuring densely populated pedestrian environments.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 05:59:05 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Shin",
"Jaeuk",
""
],
[
"Lee",
"Jungjin",
""
],
[
"Yang",
"Insoon",
""
]
] | TITLE: Egocentric Conformal Prediction for Safe and Efficient Navigation in
Dynamic Cluttered Environments
ABSTRACT: Conformal prediction (CP) has emerged as a powerful tool in robotics and
control, thanks to its ability to calibrate complex, data-driven models with
formal guarantees. However, in robot navigation tasks, existing CP-based
methods often decouple prediction from control, evaluating models without
considering whether prediction errors actually compromise safety. Consequently,
ego-vehicles may become overly conservative or even immobilized when all
potential trajectories appear infeasible. To address this issue, we propose a
novel CP-based navigation framework that responds exclusively to
safety-critical prediction errors. Our approach introduces egocentric score
functions that quantify how much closer obstacles are to a candidate vehicle
position than anticipated. These score functions are then integrated into a
model predictive control scheme, wherein each candidate state is individually
evaluated for safety. Combined with an adaptive CP mechanism, our framework
dynamically adjusts to changes in obstacle motion without resorting to
unnecessary conservatism. Theoretical analyses indicate that our method
outperforms existing CP-based approaches in terms of cost-efficiency while
maintaining the desired safety levels, as further validated through experiments
on real-world datasets featuring densely populated pedestrian environments.
| no_new_dataset | 0.939637 |
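As an aside on the conformal prediction machinery described in the record above, the following is a minimal split-conformal calibration sketch in Python. The score definition and all numbers are illustrative assumptions; the paper's egocentric scores are defined relative to candidate vehicle positions inside a model predictive control loop, which is not reproduced here.

```python
# Split conformal calibration of a safety margin for obstacle-distance
# predictions (illustrative sketch, not the paper's code).
import numpy as np

def conformal_quantile(scores: np.ndarray, alpha: float) -> float:
    """Return the (1 - alpha) conformal quantile of calibration scores."""
    n = len(scores)
    # Finite-sample corrected quantile level used in split conformal prediction.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return float(np.quantile(scores, min(level, 1.0), method="higher"))

# Calibration data: predicted vs. actual obstacle distances (hypothetical numbers).
predicted_dist = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.1, 2.5, 3.8])
actual_dist    = np.array([2.0, 3.1, 1.5, 4.2, 2.7, 3.0, 2.2, 3.9])

# Score: how much closer the obstacle really was than predicted (clipped at 0),
# so only underestimates of clearance count as safety-critical errors.
scores = np.clip(predicted_dist - actual_dist, 0.0, None)

q = conformal_quantile(scores, alpha=0.1)
new_pred = np.array([3.0, 2.2])          # distances predicted for new candidate states
safe_clearance = new_pred - q            # conservative clearance after calibration
print(q, safe_clearance)
```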
2504.00451 | Poonam Sharma | Poonam Sharma, Dildar Ali, Suman Banerjee | A Regret-Aware Framework for Effective Social Media Advertising | null | null | null | null | cs.SI | http://creativecommons.org/licenses/by/4.0/ | Social Media Advertisement has emerged as an effective approach for promoting
the brands of commercial houses. Hence, many of them have started using this
medium to maximize the influence among the users and create a customer base. In
recent times, several companies have emerged as Influence Providers, who provide
views of advertisement content depending on the budget provided by the
commercial house. In this process, the influence provider tries to exploit the
information diffusion phenomenon of a social network, and a limited number of
highly influential users are chosen and activated initially. Due to the
diffusion phenomenon, the hope is that the advertisement content will reach a
large number of people. Now, consider that a group of advertisers is approaching
an influence provider with their respective budgets and influence demands. For
any advertiser, if the influence provider delivers more or less influence than
demanded, it is a loss for the influence provider. It is therefore important,
from the influence provider's point of view, to allocate the seed nodes to the
advertisers so that the loss is minimized. In this paper, we study this problem,
which we formally refer to as the Regret Minimization in Social Media
Advertisement Problem. We propose a novel regret model that captures the
aggregated loss encountered by the influence provider while allocating the seed
nodes. We show that this problem is computationally hard to solve. We propose
three efficient heuristic solutions and analyze their time and space
requirements. They have been implemented on real-world social network datasets,
and several experiments have been conducted and compared against many baseline
methods.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 06:13:51 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Sharma",
"Poonam",
""
],
[
"Ali",
"Dildar",
""
],
[
"Banerjee",
"Suman",
""
]
] | TITLE: A Regret-Aware Framework for Effective Social Media Advertising
ABSTRACT: Social Media Advertisement has emerged as an effective approach for promoting
the brands of commercial houses. Hence, many of them have started using this
medium to maximize the influence among the users and create a customer base. In
recent times, several companies have emerged as Influence Providers, who provide
views of advertisement content depending on the budget provided by the
commercial house. In this process, the influence provider tries to exploit the
information diffusion phenomenon of a social network, and a limited number of
highly influential users are chosen and activated initially. Due to the
diffusion phenomenon, the hope is that the advertisement content will reach a
large number of people. Now, consider that a group of advertisers is approaching
an influence provider with their respective budgets and influence demands. For
any advertiser, if the influence provider delivers more or less influence than
demanded, it is a loss for the influence provider. It is therefore important,
from the influence provider's point of view, to allocate the seed nodes to the
advertisers so that the loss is minimized. In this paper, we study this problem,
which we formally refer to as the Regret Minimization in Social Media
Advertisement Problem. We propose a novel regret model that captures the
aggregated loss encountered by the influence provider while allocating the seed
nodes. We show that this problem is computationally hard to solve. We propose
three efficient heuristic solutions and analyze their time and space
requirements. They have been implemented on real-world social network datasets,
and several experiments have been conducted and compared against many baseline
methods.
| no_new_dataset | 0.943034 |
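To make the allocation problem in the record above concrete, the following is a purely hypothetical greedy sketch in Python: seed nodes with estimated influence values are assigned to advertisers, and the regret is the aggregated over/under-delivery. Neither the heuristic nor the numbers come from the paper.

```python
# Hypothetical greedy allocation: assign high-influence seeds first, always to
# the most under-served advertiser, and report the aggregated regret.
def greedy_allocate(node_influence: dict, demands: dict):
    remaining = dict(demands)                 # influence still owed to each advertiser
    allocation = {a: [] for a in demands}
    for node, infl in sorted(node_influence.items(), key=lambda kv: -kv[1]):
        adv = max(remaining, key=remaining.get)
        if remaining[adv] <= 0:               # every demand already met or exceeded
            break
        allocation[adv].append(node)
        remaining[adv] -= infl
    regret = sum(abs(r) for r in remaining.values())   # over- plus under-delivery
    return allocation, regret

nodes = {"u1": 120.0, "u2": 90.0, "u3": 60.0, "u4": 40.0, "u5": 25.0}
demands = {"advA": 150.0, "advB": 100.0}
alloc, regret = greedy_allocate(nodes, demands)
print(alloc, regret)
```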
2504.00456 | Ruben Sevilla | Callum Lock, Oubay Hassan, Ruben Sevilla and Jason Jones | Anisotropic mesh spacing prediction using neural networks | 30 pages, 16 figures | null | null | null | cs.CE | http://creativecommons.org/licenses/by/4.0/ | This work presents a framework to predict near-optimal anisotropic spacing
functions suitable to perform simulations with unseen operating conditions or
geometric configurations. The strategy consists of utilising the vast amount of
high fidelity data available in industry to compute a target anisotropic
spacing and train an artificial neural network to predict the spacing for
unseen scenarios. The trained neural network outputs the metric tensor at the
nodes of a coarse background mesh that is then used to generate meshes for
unseen cases. Examples are used to demonstrate the effect of the network
hyperparameters and the training dataset on the accuracy of the predictions.
The potential is demonstrated for examples involving up to 11 geometric
parameters on CFD simulations involving a full aircraft configuration.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 06:32:20 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Lock",
"Callum",
""
],
[
"Hassan",
"Oubay",
""
],
[
"Sevilla",
"Ruben",
""
],
[
"Jones",
"Jason",
""
]
] | TITLE: Anisotropic mesh spacing prediction using neural networks
ABSTRACT: This work presents a framework to predict near-optimal anisotropic spacing
functions suitable to perform simulations with unseen operating conditions or
geometric configurations. The strategy consists of utilising the vast amount of
high fidelity data available in industry to compute a target anisotropic
spacing and train an artificial neural network to predict the spacing for
unseen scenarios. The trained neural network outputs the metric tensor at the
nodes of a coarse background mesh that is then used to generate meshes for
unseen cases. Examples are used to demonstrate the effect of the network
hyperparameters and the training dataset on the accuracy of the predictions.
The potential is demonstrated for examples involving up to 11 geometric
parameters on CFD simulations involving a full aircraft configuration.
| no_new_dataset | 0.952794 |
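The record above predicts metric tensors at background-mesh nodes from geometric and operating parameters. Below is a hedged PyTorch sketch of that idea; the architecture, layer sizes, and the SPD parameterization via a Cholesky-like factor are assumptions for illustration and not the authors' network.

```python
# A small MLP mapping a parameter vector plus a 2D background-node position to a
# 2x2 anisotropic metric tensor. Positive-definiteness is enforced by predicting
# a lower-triangular factor L and returning L @ L.T + eps * I.
import torch
import torch.nn as nn

class SpacingNet(nn.Module):
    def __init__(self, n_params: int, hidden: int = 64, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.mlp = nn.Sequential(
            nn.Linear(n_params + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),          # 3 entries of a 2x2 lower-triangular factor
        )

    def forward(self, params: torch.Tensor, xy: torch.Tensor) -> torch.Tensor:
        out = self.mlp(torch.cat([params, xy], dim=-1))
        batch = out.shape[0]
        L = torch.zeros(batch, 2, 2)
        L[:, 0, 0] = torch.nn.functional.softplus(out[:, 0])   # positive diagonal
        L[:, 1, 0] = out[:, 1]
        L[:, 1, 1] = torch.nn.functional.softplus(out[:, 2])
        eye = torch.eye(2).expand(batch, 2, 2)
        return L @ L.transpose(1, 2) + self.eps * eye           # SPD metric tensor

net = SpacingNet(n_params=11)                                   # e.g. 11 geometric parameters
metric = net(torch.randn(4, 11), torch.rand(4, 2))              # metric at 4 background nodes
print(metric.shape)   # torch.Size([4, 2, 2])
```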
2504.00458 | Ajian Liu | Shunxin Chen, Ajian Liu, Junze Zheng, Jun Wan, Kailai Peng, Sergio
Escalera, Zhen Lei | Mixture-of-Attack-Experts with Class Regularization for Unified
Physical-Digital Face Attack Detection | 9 pages, 5 figures, accepted by AAAI-2025 (Oral) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facial recognition systems in real-world scenarios are susceptible to both
digital and physical attacks. Previous methods have attempted to achieve
classification by learning a comprehensive feature space. However, these
methods have not adequately accounted for the inherent characteristics of
physical and digital attack data, particularly the large intra-class variation
in attacks and the small inter-class variation between live and fake faces. To
address these limitations, we propose the Fine-Grained MoE with Class-Aware
Regularization CLIP framework (FG-MoE-CLIP-CAR), incorporating key improvements
at both the feature and loss levels. At the feature level, we employ a Soft
Mixture of Experts (Soft MoE) architecture to leverage different experts for
specialized feature processing. Additionally, we refine the Soft MoE to capture
more subtle differences among various types of fake faces. At the loss level,
we introduce two constraint modules: the Disentanglement Module (DM) and the
Cluster Distillation Module (CDM). The DM enhances class separability by
increasing the distance between the centers of live and fake face classes.
However, center-to-center constraints alone are insufficient to ensure
distinctive representations for individual features. Thus, we propose the CDM
to further cluster features around their respective class centers while
maintaining separation from other classes. Moreover, specific attacks that
significantly deviate from common attack patterns are often overlooked. To
address this issue, our distance calculation prioritizes more distant features.
Experimental results on two unified physical-digital attack datasets
demonstrate that the proposed method achieves state-of-the-art (SOTA)
performance.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 06:33:30 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Chen",
"Shunxin",
""
],
[
"Liu",
"Ajian",
""
],
[
"Zheng",
"Junze",
""
],
[
"Wan",
"Jun",
""
],
[
"Peng",
"Kailai",
""
],
[
"Escalera",
"Sergio",
""
],
[
"Lei",
"Zhen",
""
]
] | TITLE: Mixture-of-Attack-Experts with Class Regularization for Unified
Physical-Digital Face Attack Detection
ABSTRACT: Facial recognition systems in real-world scenarios are susceptible to both
digital and physical attacks. Previous methods have attempted to achieve
classification by learning a comprehensive feature space. However, these
methods have not adequately accounted for the inherent characteristics of
physical and digital attack data, particularly the large intra-class variation
in attacks and the small inter-class variation between live and fake faces. To
address these limitations, we propose the Fine-Grained MoE with Class-Aware
Regularization CLIP framework (FG-MoE-CLIP-CAR), incorporating key improvements
at both the feature and loss levels. At the feature level, we employ a Soft
Mixture of Experts (Soft MoE) architecture to leverage different experts for
specialized feature processing. Additionally, we refine the Soft MoE to capture
more subtle differences among various types of fake faces. At the loss level,
we introduce two constraint modules: the Disentanglement Module (DM) and the
Cluster Distillation Module (CDM). The DM enhances class separability by
increasing the distance between the centers of live and fake face classes.
However, center-to-center constraints alone are insufficient to ensure
distinctive representations for individual features. Thus, we propose the CDM
to further cluster features around their respective class centers while
maintaining separation from other classes. Moreover, specific attacks that
significantly deviate from common attack patterns are often overlooked. To
address this issue, our distance calculation prioritizes more distant features.
Experimental results on two unified physical-digital attack datasets
demonstrate that the proposed method achieves state-of-the-art (SOTA)
performance.
| no_new_dataset | 0.946349 |
2504.00463 | Ziyin Zhou | Ziyin Zhou, Ke Sun, Zhongxi Chen, Xianming Lin, Yunpeng Luo, Ke Yan,
Shouhong Ding, Xiaoshuai Sun | Exploring the Collaborative Advantage of Low-level Information on
Generalizable AI-Generated Image Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing state-of-the-art AI-Generated image detection methods mostly
consider extracting low-level information from RGB images to help improve the
generalization of AI-Generated image detection, such as noise patterns.
However, these methods often consider only a single type of low-level
information, which may lead to suboptimal generalization. Through empirical
analysis, we have discovered a key insight: different low-level information
often exhibits generalization capabilities for different types of forgeries.
Furthermore, we found that simple fusion strategies are insufficient to
leverage the detection advantages of each low-level and high-level information
for various forgery types. Therefore, we propose the Adaptive Low-level Experts
Injection (ALEI) framework. Our approach introduces Lora Experts, enabling the
backbone network, which is trained with high-level semantic RGB images, to
accept and learn knowledge from different low-level information. We utilize a
cross-attention method to adaptively fuse these features at intermediate
layers. To prevent the backbone network from losing the modeling capabilities
of different low-level features during the later stages of modeling, we
developed a Low-level Information Adapter that interacts with the features
extracted by the backbone network. Finally, we propose Dynamic Feature
Selection, which dynamically selects the most suitable features for detecting
the current image to maximize generalization detection capability. Extensive
experiments demonstrate that our method, finetuned on only four categories of
mainstream ProGAN data, performs excellently and achieves state-of-the-art
results on multiple datasets containing unseen GAN and Diffusion methods.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 06:38:08 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhou",
"Ziyin",
""
],
[
"Sun",
"Ke",
""
],
[
"Chen",
"Zhongxi",
""
],
[
"Lin",
"Xianming",
""
],
[
"Luo",
"Yunpeng",
""
],
[
"Yan",
"Ke",
""
],
[
"Ding",
"Shouhong",
""
],
[
"Sun",
"Xiaoshuai",
""
]
] | TITLE: Exploring the Collaborative Advantage of Low-level Information on
Generalizable AI-Generated Image Detection
ABSTRACT: Existing state-of-the-art AI-Generated image detection methods mostly
consider extracting low-level information from RGB images to help improve the
generalization of AI-Generated image detection, such as noise patterns.
However, these methods often consider only a single type of low-level
information, which may lead to suboptimal generalization. Through empirical
analysis, we have discovered a key insight: different low-level information
often exhibits generalization capabilities for different types of forgeries.
Furthermore, we found that simple fusion strategies are insufficient to
leverage the detection advantages of each low-level and high-level information
for various forgery types. Therefore, we propose the Adaptive Low-level Experts
Injection (ALEI) framework. Our approach introduces Lora Experts, enabling the
backbone network, which is trained with high-level semantic RGB images, to
accept and learn knowledge from different low-level information. We utilize a
cross-attention method to adaptively fuse these features at intermediate
layers. To prevent the backbone network from losing the modeling capabilities
of different low-level features during the later stages of modeling, we
developed a Low-level Information Adapter that interacts with the features
extracted by the backbone network. Finally, we propose Dynamic Feature
Selection, which dynamically selects the most suitable features for detecting
the current image to maximize generalization detection capability. Extensive
experiments demonstrate that our method, finetuned on only four categories of
mainstream ProGAN data, performs excellently and achieves state-of-the-art
results on multiple datasets containing unseen GAN and Diffusion methods.
| no_new_dataset | 0.949623 |
2504.00476 | Haobo Yuan | Haobo Yuan, Tao Zhang, Xiangtai Li, Lu Qi, Zilong Huang, Shilin Xu,
Jiashi Feng, Ming-Hsuan Yang | 4th PVUW MeViS 3rd Place Report: Sa2VA | Technical Report, 4 pages, Code:
https://github.com/magic-research/Sa2VA | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Referring video object segmentation (RVOS) is a challenging task that
requires the model to segment the object in a video given the language
description. MeViS is a recently proposed dataset that contains motion
expressions of the target objects, leading to a challenging benchmark, compared
with existing RVOS benchmarks. On the other hand, for referring expression
tasks, a new trend is to adopt a multi-modal large language model (MLLM) to
achieve better image and text alignment. In this report, we show that with a
simple modification to the test-time inference method on stronger MLLMs, we can
obtain stronger results on MeViS. In particular, we adopt the recent method
Sa2VA, a unified model for dense grounded understanding of both images and
videos. By enlarging the scope of key frames, without any further training, we
can achieve the 3rd place in the 4th PVUW workshop.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 07:06:47 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Yuan",
"Haobo",
""
],
[
"Zhang",
"Tao",
""
],
[
"Li",
"Xiangtai",
""
],
[
"Qi",
"Lu",
""
],
[
"Huang",
"Zilong",
""
],
[
"Xu",
"Shilin",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Yang",
"Ming-Hsuan",
""
]
] | TITLE: 4th PVUW MeViS 3rd Place Report: Sa2VA
ABSTRACT: Referring video object segmentation (RVOS) is a challenging task that
requires the model to segment the object in a video given the language
description. MeViS is a recently proposed dataset that contains motion
expressions of the target objects, leading to a challenging benchmark, compared
with existing RVOS benchmarks. On the other hand, for referring expression
tasks, a new trend is to adopt a multi-modal large language model (MLLM) to
achieve better image and text alignment. In this report, we show that with a
simple modification to the test-time inference method on stronger MLLMs, we can
obtain stronger results on MeViS. In particular, we adopt the recent method
Sa2VA, a unified model for dense grounded understanding of both images and
videos. By enlarging the scope of key frames, without any further training, we
can achieve the 3rd place in the 4th PVUW workshop.
| new_dataset | 0.958924 |
2504.00478 | Zhuohao Li | Zhuohao Li, Zhicheng Huang, Wenchao Liu, Zhuxing Zhang, and Jianming
Miao | FSSUWNet: Mitigating the Fragility of Pre-trained Models with Feature
Enhancement for Few-Shot Semantic Segmentation in Underwater Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Few-Shot Semantic Segmentation (FSS), which focuses on segmenting new classes
in images using only a limited number of annotated examples, has recently
progressed in data-scarce domains. However, in this work, we show that the
existing FSS methods often struggle to generalize to underwater environments.
Specifically, the prior features extracted by pre-trained models used as
feature extractors are fragile due to the unique challenges of underwater
images. To address this, we propose FSSUWNet, a tailored FSS framework for
underwater images with feature enhancement. FSSUWNet exploits the integration
of complementary features, emphasizing both low-level and high-level image
characteristics. In addition to employing a pre-trained model as the primary
encoder, we propose an auxiliary encoder called Feature Enhanced Encoder which
extracts complementary features to better adapt to underwater scene
characteristics. Furthermore, a simple and effective Feature Alignment Module
aims to provide global prior knowledge and align low-level features with
high-level features in dimensions. Given the scarcity of underwater images, we
introduce a cross-validation dataset version based on the Segmentation of
Underwater Imagery dataset. Extensive experiments on public underwater
segmentation datasets demonstrate that our approach achieves state-of-the-art
performance. For example, our method outperforms the previous best method by
2.8% and 2.6% in terms of the mean Intersection over Union metric for 1-shot
and 5-shot scenarios in the datasets, respectively. Our implementation is
available at https://github.com/lizhh268/FSSUWNet.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 07:09:15 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Li",
"Zhuohao",
""
],
[
"Huang",
"Zhicheng",
""
],
[
"Liu",
"Wenchao",
""
],
[
"Zhang",
"Zhuxing",
""
],
[
"Miao",
"Jianming",
""
]
] | TITLE: FSSUWNet: Mitigating the Fragility of Pre-trained Models with Feature
Enhancement for Few-Shot Semantic Segmentation in Underwater Images
ABSTRACT: Few-Shot Semantic Segmentation (FSS), which focuses on segmenting new classes
in images using only a limited number of annotated examples, has recently
progressed in data-scarce domains. However, in this work, we show that the
existing FSS methods often struggle to generalize to underwater environments.
Specifically, the prior features extracted by pre-trained models used as
feature extractors are fragile due to the unique challenges of underwater
images. To address this, we propose FSSUWNet, a tailored FSS framework for
underwater images with feature enhancement. FSSUWNet exploits the integration
of complementary features, emphasizing both low-level and high-level image
characteristics. In addition to employing a pre-trained model as the primary
encoder, we propose an auxiliary encoder called Feature Enhanced Encoder which
extracts complementary features to better adapt to underwater scene
characteristics. Furthermore, a simple and effective Feature Alignment Module
aims to provide global prior knowledge and align low-level features with
high-level features in dimensions. Given the scarcity of underwater images, we
introduce a cross-validation dataset version based on the Segmentation of
Underwater Imagery dataset. Extensive experiments on public underwater
segmentation datasets demonstrate that our approach achieves state-of-the-art
performance. For example, our method outperforms the previous best method by
2.8% and 2.6% in terms of the mean Intersection over Union metric for 1-shot
and 5-shot scenarios in the datasets, respectively. Our implementation is
available at https://github.com/lizhh268/FSSUWNet.
| no_new_dataset | 0.921852 |
2504.00480 | Martin Stoll | Theresa Wagner, Tianshi Xu, Franziska Nestler, Yuanzhe Xi, Martin
Stoll | Preconditioned Additive Gaussian Processes with Fourier Acceleration | null | null | null | null | cs.LG cs.NA math.NA | http://creativecommons.org/licenses/by/4.0/ | Gaussian processes (GPs) are crucial in machine learning for quantifying
uncertainty in predictions. However, their associated covariance matrices,
defined by kernel functions, are typically dense and large-scale, posing
significant computational challenges. This paper introduces a matrix-free
method that utilizes the Non-equispaced Fast Fourier Transform (NFFT) to
achieve nearly linear complexity in the multiplication of kernel matrices and
their derivatives with vectors for a predetermined accuracy level. To address
high-dimensional problems, we propose an additive kernel approach. Each
sub-kernel in this approach captures lower-order feature interactions, allowing
for the efficient application of the NFFT method and potentially increasing
accuracy across various real-world datasets. Additionally, we implement a
preconditioning strategy that accelerates hyperparameter tuning, further
improving the efficiency and effectiveness of GPs.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 07:14:06 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wagner",
"Theresa",
""
],
[
"Xu",
"Tianshi",
""
],
[
"Nestler",
"Franziska",
""
],
[
"Xi",
"Yuanzhe",
""
],
[
"Stoll",
"Martin",
""
]
] | TITLE: Preconditioned Additive Gaussian Processes with Fourier Acceleration
ABSTRACT: Gaussian processes (GPs) are crucial in machine learning for quantifying
uncertainty in predictions. However, their associated covariance matrices,
defined by kernel functions, are typically dense and large-scale, posing
significant computational challenges. This paper introduces a matrix-free
method that utilizes the Non-equispaced Fast Fourier Transform (NFFT) to
achieve nearly linear complexity in the multiplication of kernel matrices and
their derivatives with vectors for a predetermined accuracy level. To address
high-dimensional problems, we propose an additive kernel approach. Each
sub-kernel in this approach captures lower-order feature interactions, allowing
for the efficient application of the NFFT method and potentially increasing
accuracy across various real-world datasets. Additionally, we implement a
preconditioning strategy that accelerates hyperparameter tuning, further
improving the efficiency and effectiveness of GPs.
| no_new_dataset | 0.946745 |
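A dense numpy illustration of the additive-kernel construction mentioned in the record above: the full kernel is a sum of RBF sub-kernels, each acting on a small feature subset. The NFFT acceleration and preconditioning are deliberately omitted, and the subset choice, lengthscale, and noise level are assumptions.

```python
# Additive kernel GP regression with order-2 feature interactions (dense sketch).
import numpy as np
from itertools import combinations

def rbf(Xa: np.ndarray, Xb: np.ndarray, lengthscale: float) -> np.ndarray:
    d2 = ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def additive_kernel(Xa, Xb, subsets, lengthscale=1.0):
    K = np.zeros((Xa.shape[0], Xb.shape[0]))
    for s in subsets:                      # one low-dimensional sub-kernel per subset
        K += rbf(Xa[:, s], Xb[:, s], lengthscale)
    return K / len(subsets)                # simple normalization choice

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 6)), rng.normal(size=200)
subsets = [list(s) for s in combinations(range(6), 2)]   # all order-2 interactions

K = additive_kernel(X, X, subsets)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)    # GP posterior mean weights
x_new = rng.normal(size=(5, 6))
mean = additive_kernel(x_new, X, subsets) @ alpha        # predictive mean at 5 test points
print(mean.shape)   # (5,)
```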
2504.00490 | Zetong Chen | Zetong Chen, Yuzhuo Chen, Hai Zhong, Xu Qiao | SCFANet: Style Distribution Constraint Feature Alignment Network For
Pathological Staining Translation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Immunohistochemical (IHC) staining serves as a valuable technique for
detecting specific antigens or proteins through antibody-mediated
visualization. However, the IHC staining process is both time-consuming and
costly. To address these limitations, the application of deep learning models
for direct translation of cost-effective Hematoxylin and Eosin (H&E) stained
images into IHC stained images has emerged as an efficient solution.
Nevertheless, the conversion from H&E to IHC images presents significant
challenges, primarily due to alignment discrepancies between image pairs and
the inherent diversity in IHC staining style patterns. To overcome these
challenges, we propose the Style Distribution Constraint Feature Alignment
Network (SCFANet), which incorporates two innovative modules: the Style
Distribution Constrainer (SDC) and Feature Alignment Learning (FAL). The SDC
ensures consistency between the generated and target images' style
distributions while integrating cycle consistency loss to maintain structural
consistency. To mitigate the complexity of direct image-to-image translation,
the FAL module decomposes the end-to-end translation task into two subtasks:
image reconstruction and feature alignment. Furthermore, we ensure pathological
consistency between generated and target images by maintaining pathological
pattern consistency and Optical Density (OD) uniformity. Extensive experiments
conducted on the Breast Cancer Immunohistochemical (BCI) dataset demonstrate
that our SCFANet model outperforms existing methods, achieving precise
transformation of H&E-stained images into their IHC-stained counterparts. The
proposed approach not only addresses the technical challenges in H&E to IHC
image translation but also provides a robust framework for accurate and
efficient stain conversion in pathological analysis.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 07:29:53 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Chen",
"Zetong",
""
],
[
"Chen",
"Yuzhuo",
""
],
[
"Zhong",
"Hai",
""
],
[
"Qiao",
"Xu",
""
]
] | TITLE: SCFANet: Style Distribution Constraint Feature Alignment Network For
Pathological Staining Translation
ABSTRACT: Immunohistochemical (IHC) staining serves as a valuable technique for
detecting specific antigens or proteins through antibody-mediated
visualization. However, the IHC staining process is both time-consuming and
costly. To address these limitations, the application of deep learning models
for direct translation of cost-effective Hematoxylin and Eosin (H&E) stained
images into IHC stained images has emerged as an efficient solution.
Nevertheless, the conversion from H&E to IHC images presents significant
challenges, primarily due to alignment discrepancies between image pairs and
the inherent diversity in IHC staining style patterns. To overcome these
challenges, we propose the Style Distribution Constraint Feature Alignment
Network (SCFANet), which incorporates two innovative modules: the Style
Distribution Constrainer (SDC) and Feature Alignment Learning (FAL). The SDC
ensures consistency between the generated and target images' style
distributions while integrating cycle consistency loss to maintain structural
consistency. To mitigate the complexity of direct image-to-image translation,
the FAL module decomposes the end-to-end translation task into two subtasks:
image reconstruction and feature alignment. Furthermore, we ensure pathological
consistency between generated and target images by maintaining pathological
pattern consistency and Optical Density (OD) uniformity. Extensive experiments
conducted on the Breast Cancer Immunohistochemical (BCI) dataset demonstrate
that our SCFANet model outperforms existing methods, achieving precise
transformation of H&E-stained images into their IHC-stained counterparts. The
proposed approach not only addresses the technical challenges in H&E to IHC
image translation but also provides a robust framework for accurate and
efficient stain conversion in pathological analysis.
| no_new_dataset | 0.94887 |
2504.00496 | Jingbo Lu | Jingbo Lu, Leheng Zhang, Xingyu Zhou, Mu Li, Wen Li, Shuhang Gu | Learned Image Compression with Dictionary-based Entropy Model | Accepted to CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learned image compression methods have attracted great research interest and
exhibited superior rate-distortion performance to the best classical image
compression standards of the present. The entropy model plays a key role in
learned image compression, which estimates the probability distribution of the
latent representation for further entropy coding. Most existing methods
employed hyper-prior and auto-regressive architectures to form their entropy
models. However, they only aimed to explore the internal dependencies of latent
representation while neglecting the importance of extracting prior from
training data. In this work, we propose a novel entropy model named
Dictionary-based Cross Attention Entropy model, which introduces a learnable
dictionary to summarize the typical structures occurring in the training
dataset to enhance the entropy model. Extensive experimental results have
demonstrated that the proposed model strikes a better balance between
performance and latency, achieving state-of-the-art results on various
benchmark datasets.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 07:43:10 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Lu",
"Jingbo",
""
],
[
"Zhang",
"Leheng",
""
],
[
"Zhou",
"Xingyu",
""
],
[
"Li",
"Mu",
""
],
[
"Li",
"Wen",
""
],
[
"Gu",
"Shuhang",
""
]
] | TITLE: Learned Image Compression with Dictionary-based Entropy Model
ABSTRACT: Learned image compression methods have attracted great research interest and
exhibited superior rate-distortion performance to the best classical image
compression standards of the present. The entropy model plays a key role in
learned image compression, which estimates the probability distribution of the
latent representation for further entropy coding. Most existing methods
employed hyper-prior and auto-regressive architectures to form their entropy
models. However, they only aimed to explore the internal dependencies of the
latent representation while neglecting the importance of extracting a prior from
the training data. In this work, we propose a novel entropy model named
Dictionary-based Cross Attention Entropy model, which introduces a learnable
dictionary to summarize the typical structures occurring in the training
dataset to enhance the entropy model. Extensive experimental results have
demonstrated that the proposed model strikes a better balance between
performance and latency, achieving state-of-the-art results on various
benchmark datasets.
| no_new_dataset | 0.944536 |
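As a loose PyTorch sketch of the dictionary-plus-cross-attention idea in the record above: latent-position features attend over a learnable dictionary of typical structures, and the attended summary conditions the entropy parameters. Shapes, sizes, and layer choices are hypothetical and not the paper's architecture.

```python
# Cross-attention over a learnable dictionary acting as a data-driven prior.
import torch
import torch.nn as nn

class DictionaryCrossAttention(nn.Module):
    def __init__(self, dim: int = 192, dict_size: int = 64, heads: int = 4):
        super().__init__()
        self.dictionary = nn.Parameter(torch.randn(dict_size, dim))   # learned prior entries
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_params = nn.Linear(2 * dim, 2 * dim)   # maps to distribution parameters (e.g. mean, scale)

    def forward(self, latent_ctx: torch.Tensor) -> torch.Tensor:
        # latent_ctx: (B, N, dim) context features of latent positions
        B = latent_ctx.shape[0]
        dict_kv = self.dictionary.unsqueeze(0).expand(B, -1, -1)
        attended, _ = self.attn(latent_ctx, dict_kv, dict_kv)
        return self.to_params(torch.cat([latent_ctx, attended], dim=-1))

block = DictionaryCrossAttention()
params = block(torch.randn(2, 16 * 16, 192))   # entropy parameters for a 16x16 latent grid
print(params.shape)   # torch.Size([2, 256, 384])
```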
2504.00497 | Mahdi Madani | Mahdi Madani and El-Bay Bourennane | Visually Image Encryption and Compression Using a CNN-Based Auto Encoder | null | International Journal of Computer Networks & Communications
(IJCNC) Vol.17, No.2, March 2025 | 10.5121/ijcnc.2025.17207 | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | This paper proposes a visual encryption method to ensure the confidentiality
of digital images. The model used is based on an autoencoder using a
Convolutional Neural Network (CNN) to ensure the protection of the user data on
both the sender side (encryption process) and the receiver side (decryption
process) in a symmetric mode. To train and test the model, we used the MNIST and
CIFAR-10 datasets. Our focus lies in generating an encrypted dataset by
combining the original dataset with a random mask. Then, a convolutional
autoencoder is designed and trained on the masked dataset to learn essential
image features in a reduced-dimensional latent space and reconstruct the image
from this space. The mask can be regarded as the secret key of standard
cryptographic algorithms, which allows the receiver of the masked data to
recover the plain data. The implementation of this proposed encryption model
demonstrates efficacy in preserving data confidentiality and integrity while
reducing the dimensionality (for example, from 3072 bytes to 1024 bytes for
CIFAR-10 images). Experimental results show that the CNN exhibits a proficient
encryption and decryption process on the MNIST dataset, and a proficient
encryption and acceptable decryption process on the CIFAR-10 dataset.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 07:43:36 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Madani",
"Mahdi",
""
],
[
"Bourennane",
"El-Bay",
""
]
] | TITLE: Visually Image Encryption and Compression Using a CNN-Based Auto Encoder
ABSTRACT: This paper proposes a visual encryption method to ensure the confidentiality
of digital images. The model used is based on an autoencoder using a
Convolutional Neural Network (CNN) to ensure the protection of the user data on
both the sender side (encryption process) and the receiver side (decryption
process) in a symmetric mode. To train and test the model, we used the MNIST and
CIFAR-10 datasets. Our focus lies in generating an encrypted dataset by
combining the original dataset with a random mask. Then, a convolutional
autoencoder is designed and trained on the masked dataset to learn essential
image features in a reduced-dimensional latent space and reconstruct the image
from this space. The mask can be regarded as the secret key of standard
cryptographic algorithms, which allows the receiver of the masked data to
recover the plain data. The implementation of this proposed encryption model
demonstrates efficacy in preserving data confidentiality and integrity while
reducing the dimensionality (for example, from 3072 bytes to 1024 bytes for
CIFAR-10 images). Experimental results show that the CNN exhibits a proficient
encryption and decryption process on the MNIST dataset, and a proficient
encryption and acceptable decryption process on the CIFAR-10 dataset.
| no_new_dataset | 0.947817 |
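A hedged PyTorch toy version of the masking scheme described in the record above. The original work is framed around a CNN autoencoder; the exact layers, the additive mod-1 masking, and the training loop below are assumptions for illustration only.

```python
# Shared random mask as a symmetric key: mask the image, let a small
# convolutional autoencoder compress and reconstruct the masked image, and
# unmask on the receiver side with the same key.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(16, 4, 3, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 8x8 latent
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4, 16, 2, stride=2), nn.ReLU(),     # 8x8 -> 16x16
            nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid(),  # 16x16 -> 32x32
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

torch.manual_seed(0)
mask = torch.rand(3, 32, 32)                   # shared secret mask (the "key")
images = torch.rand(8, 3, 32, 32)              # stand-in for a CIFAR-10-sized batch
masked = (images + mask) % 1.0                 # reversible masking on the sender side

model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                             # a few illustrative training steps
    recon = model(masked)
    loss = nn.functional.mse_loss(recon, masked)
    opt.zero_grad()
    loss.backward()
    opt.step()

recovered = (model(masked).detach() - mask) % 1.0   # receiver unmasks with the shared key
print(recovered.shape)   # torch.Size([8, 3, 32, 32])
```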
2504.00522 | Kyuhan Lee | Kyuhan Lee, Geon Lee, Kijung Shin | MARIOH: Multiplicity-Aware Hypergraph Reconstruction | to be published in the 41st IEEE International Conference on Data
Engineering (ICDE '25) | null | null | null | cs.DB cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Hypergraphs offer a powerful framework for modeling higher-order interactions
that traditional pairwise graphs cannot fully capture. However, practical
constraints often lead to their simplification into projected graphs, resulting
in substantial information loss and ambiguity in representing higher-order
relationships. In this work, we propose MARIOH, a supervised approach for
reconstructing the original hypergraph from its projected graph by leveraging
edge multiplicity. To overcome the difficulties posed by the large search
space, MARIOH integrates several key ideas: (a) identifying provable size-2
hyperedges, which reduces the candidate search space, (b) predicting the
likelihood of candidates being hyperedges by utilizing both structural and
multiplicity-related features, and (c) not only targeting promising hyperedge
candidates but also examining less confident ones to explore alternative
possibilities. Together, these ideas enable MARIOH to efficiently and
effectively explore the search space. In our experiments using 10 real-world
datasets, MARIOH achieves up to 74.51% higher reconstruction accuracy compared
to state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 08:14:59 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Lee",
"Kyuhan",
""
],
[
"Lee",
"Geon",
""
],
[
"Shin",
"Kijung",
""
]
] | TITLE: MARIOH: Multiplicity-Aware Hypergraph Reconstruction
ABSTRACT: Hypergraphs offer a powerful framework for modeling higher-order interactions
that traditional pairwise graphs cannot fully capture. However, practical
constraints often lead to their simplification into projected graphs, resulting
in substantial information loss and ambiguity in representing higher-order
relationships. In this work, we propose MARIOH, a supervised approach for
reconstructing the original hypergraph from its projected graph by leveraging
edge multiplicity. To overcome the difficulties posed by the large search
space, MARIOH integrates several key ideas: (a) identifying provable size-2
hyperedges, which reduces the candidate search space, (b) predicting the
likelihood of candidates being hyperedges by utilizing both structural and
multiplicity-related features, and (c) not only targeting promising hyperedge
candidates but also examining less confident ones to explore alternative
possibilities. Together, these ideas enable MARIOH to efficiently and
effectively explore the search space. In our experiments using 10 real-world
datasets, MARIOH achieves up to 74.51% higher reconstruction accuracy compared
to state-of-the-art methods.
| no_new_dataset | 0.951729 |
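For orientation on the MARIOH record above, here is a tiny Python sketch of the forward projection it inverts: each hyperedge adds one unit of multiplicity to every node pair it contains. This is only the problem setup; the reconstruction algorithm itself is not shown.

```python
# Project a hypergraph to a multiplicity-weighted pairwise graph.
from itertools import combinations
from collections import Counter

def project(hyperedges):
    """Projected graph as a Counter mapping node pairs to edge multiplicity."""
    multiplicity = Counter()
    for he in hyperedges:
        for u, v in combinations(sorted(he), 2):
            multiplicity[(u, v)] += 1
    return multiplicity

hyperedges = [{"a", "b", "c"}, {"a", "b"}, {"b", "c", "d"}]
print(project(hyperedges))
# e.g. Counter({('a', 'b'): 2, ('b', 'c'): 2, ('a', 'c'): 1, ('b', 'd'): 1, ('c', 'd'): 1})
```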
2504.00526 | Xinrun Xu | Xinrun Xu, Qiuhong Zhang, Jianwen Yang, Zhanbiao Lian, Jin Yan,
Zhiming Ding, Shan Jiang | High-Quality Pseudo-Label Generation Based on Visual Prompt Assisted
Cloud Model Update | IJCNN'25 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating high-quality pseudo-labels on the cloud is crucial for cloud-edge
object detection, especially in dynamic traffic monitoring where data
distributions evolve. Existing methods often assume reliable cloud models,
neglecting potential errors or struggling with complex distribution shifts.
This paper proposes Cloud-Adaptive High-Quality Pseudo-label generation
(CA-HQP), addressing these limitations by incorporating a learnable Visual
Prompt Generator (VPG) and dual feature alignment into cloud model updates. The
VPG enables parameter-efficient adaptation by injecting visual prompts,
enhancing flexibility without extensive fine-tuning. CA-HQP mitigates domain
discrepancies via two feature alignment techniques: global Domain Query Feature
Alignment (DQFA) capturing scene-level shifts, and fine-grained Temporal
Instance-Aware Feature Embedding Alignment (TIAFA) addressing instance
variations. Experiments on the Bellevue traffic dataset demonstrate that CA-HQP
significantly improves pseudo-label quality compared to existing methods,
leading to notable performance gains for the edge model and showcasing CA-HQP's
adaptation effectiveness. Ablation studies validate each component (DQFA,
TIAFA, VPG) and the synergistic effect of combined alignment strategies,
highlighting the importance of adaptive cloud updates and domain adaptation for
robust object detection in evolving scenarios. CA-HQP provides a promising
solution for enhancing cloud-edge object detection systems in real-world
applications.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 08:20:16 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Xu",
"Xinrun",
""
],
[
"Zhang",
"Qiuhong",
""
],
[
"Yang",
"Jianwen",
""
],
[
"Lian",
"Zhanbiao",
""
],
[
"Yan",
"Jin",
""
],
[
"Ding",
"Zhiming",
""
],
[
"Jiang",
"Shan",
""
]
] | TITLE: High-Quality Pseudo-Label Generation Based on Visual Prompt Assisted
Cloud Model Update
ABSTRACT: Generating high-quality pseudo-labels on the cloud is crucial for cloud-edge
object detection, especially in dynamic traffic monitoring where data
distributions evolve. Existing methods often assume reliable cloud models,
neglecting potential errors or struggling with complex distribution shifts.
This paper proposes Cloud-Adaptive High-Quality Pseudo-label generation
(CA-HQP), addressing these limitations by incorporating a learnable Visual
Prompt Generator (VPG) and dual feature alignment into cloud model updates. The
VPG enables parameter-efficient adaptation by injecting visual prompts,
enhancing flexibility without extensive fine-tuning. CA-HQP mitigates domain
discrepancies via two feature alignment techniques: global Domain Query Feature
Alignment (DQFA) capturing scene-level shifts, and fine-grained Temporal
Instance-Aware Feature Embedding Alignment (TIAFA) addressing instance
variations. Experiments on the Bellevue traffic dataset demonstrate that CA-HQP
significantly improves pseudo-label quality compared to existing methods,
leading to notable performance gains for the edge model and showcasing CA-HQP's
adaptation effectiveness. Ablation studies validate each component (DQFA,
TIAFA, VPG) and the synergistic effect of combined alignment strategies,
highlighting the importance of adaptive cloud updates and domain adaptation for
robust object detection in evolving scenarios. CA-HQP provides a promising
solution for enhancing cloud-edge object detection systems in real-world
applications.
| no_new_dataset | 0.95275 |
2504.00527 | Fida Mohammad Thoker | Fida Mohammad Thoker, Letian Jiang, Chen Zhao, Bernard Ghanem | SMILE: Infusing Spatial and Motion Semantics in Masked Video Learning | Accepted to CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Masked video modeling, such as VideoMAE, is an effective paradigm for video
self-supervised learning (SSL). However, such methods are primarily based on
reconstructing pixel-level details on natural videos which have substantial
temporal redundancy, limiting their capability for semantic representation and
sufficient encoding of motion dynamics. To address these issues, this paper
introduces a novel SSL approach for video representation learning, dubbed as
SMILE, by infusing both spatial and motion semantics. In SMILE, we leverage
image-language pretrained models, such as CLIP, to guide the learning process
with their high-level spatial semantics. We enhance the representation of
motion by introducing synthetic motion patterns in the training data, allowing
the model to capture more complex and dynamic content. Furthermore, using
SMILE, we establish a new self-supervised video learning paradigm capable of
learning strong video representations without requiring any natural video data.
We have carried out extensive experiments on 7 datasets with various downstream
scenarios. SMILE surpasses current state-of-the-art SSL methods, showcasing its
effectiveness in learning more discriminative and generalizable video
representations. Code is available: https://github.com/fmthoker/SMILE
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 08:20:55 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Thoker",
"Fida Mohammad",
""
],
[
"Jiang",
"Letian",
""
],
[
"Zhao",
"Chen",
""
],
[
"Ghanem",
"Bernard",
""
]
] | TITLE: SMILE: Infusing Spatial and Motion Semantics in Masked Video Learning
ABSTRACT: Masked video modeling, such as VideoMAE, is an effective paradigm for video
self-supervised learning (SSL). However, such methods are primarily based on
reconstructing pixel-level details on natural videos which have substantial
temporal redundancy, limiting their capability for semantic representation and
sufficient encoding of motion dynamics. To address these issues, this paper
introduces a novel SSL approach for video representation learning, dubbed as
SMILE, by infusing both spatial and motion semantics. In SMILE, we leverage
image-language pretrained models, such as CLIP, to guide the learning process
with their high-level spatial semantics. We enhance the representation of
motion by introducing synthetic motion patterns in the training data, allowing
the model to capture more complex and dynamic content. Furthermore, using
SMILE, we establish a new self-supervised video learning paradigm capable of
learning strong video representations without requiring any natural video data.
We have carried out extensive experiments on 7 datasets with various downstream
scenarios. SMILE surpasses current state-of-the-art SSL methods, showcasing its
effectiveness in learning more discriminative and generalizable video
representations. Code is available: https://github.com/fmthoker/SMILE
| no_new_dataset | 0.949201 |
2504.00543 | Dong Zhao | Qi Zang, Shuang Wang, Dong Zhao, Dou Quan, Yang Hu, and Licheng Jiao | Generalization-aware Remote Sensing Change Detection via Domain-agnostic
Learning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Change detection has essential significance for the region's development, in
which pseudo-changes between bitemporal images induced by imaging environmental
factors are key challenges. Existing transformation-based methods regard
pseudo-changes as a kind of style shift and alleviate it by transforming
bitemporal images into the same style using generative adversarial networks
(GANs). However, their efforts are limited by two drawbacks: 1) Transformed
images suffer from distortion that reduces feature discrimination. 2) Alignment
hampers the model from learning domain-agnostic representations that degrades
performance on scenes with domain shifts from the training data. Therefore,
oriented from pseudo-changes caused by style differences, we present a
generalizable domain-agnostic difference learning network (DonaNet). For the
drawback 1), we argue for local-level statistics as style proxies to assist
against domain shifts. For the drawback 2), DonaNet learns domain-agnostic
representations by removing domain-specific style of encoded features and
highlighting the class characteristics of objects. In the removal, we propose a
domain difference removal module to reduce feature variance while preserving
discriminative properties and propose its enhanced version to provide
possibilities for eliminating more style by decorrelating the correlation
between features. In the highlighting, we propose a cross-temporal
generalization learning strategy to imitate latent domain shifts, thus enabling
the model to extract feature representations more robust to shifts actively.
Extensive experiments conducted on three public datasets demonstrate that
DonaNet outperforms existing state-of-the-art methods with a smaller model size
and is more robust to domain shift.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 08:51:16 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zang",
"Qi",
""
],
[
"Wang",
"Shuang",
""
],
[
"Zhao",
"Dong",
""
],
[
"Quan",
"Dou",
""
],
[
"Hu",
"Yang",
""
],
[
"Jiao",
"Licheng",
""
]
] | TITLE: Generalization-aware Remote Sensing Change Detection via Domain-agnostic
Learning
ABSTRACT: Change detection has essential significance for the region's development, in
which pseudo-changes between bitemporal images induced by imaging environmental
factors are key challenges. Existing transformation-based methods regard
pseudo-changes as a kind of style shift and alleviate it by transforming
bitemporal images into the same style using generative adversarial networks
(GANs). However, their efforts are limited by two drawbacks: 1) Transformed
images suffer from distortion that reduces feature discrimination. 2) Alignment
hampers the model from learning domain-agnostic representations that degrades
performance on scenes with domain shifts from the training data. Therefore,
oriented from pseudo-changes caused by style differences, we present a
generalizable domain-agnostic difference learning network (DonaNet). For the
drawback 1), we argue for local-level statistics as style proxies to assist
against domain shifts. For the drawback 2), DonaNet learns domain-agnostic
representations by removing domain-specific style of encoded features and
highlighting the class characteristics of objects. In the removal, we propose a
domain difference removal module to reduce feature variance while preserving
discriminative properties and propose its enhanced version to provide
possibilities for eliminating more style by decorrelating the correlation
between features. In the highlighting, we propose a cross-temporal
generalization learning strategy to imitate latent domain shifts, thus enabling
the model to extract feature representations more robust to shifts actively.
Extensive experiments conducted on three public datasets demonstrate that
DonaNet outperforms existing state-of-the-art methods with a smaller model size
and is more robust to domain shift.
| no_new_dataset | 0.950365 |
2504.00558 | Marek Va\v{s}ko | Marek Va\v{s}ko and Adam Herout and Michal Hradi\v{s} | Archival Faces: Detection of Faces in Digitized Historical Documents | 15 pages, 6 figures, 6 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | When digitizing historical archives, it is necessary to search for the faces
of celebrities and ordinary people, especially in newspapers, link them to the
surrounding text, and make them searchable. Existing face detectors on datasets
of scanned historical documents fail remarkably -- current detection tools only
achieve around $24\%$ mAP at $50:90\%$ IoU. This work compensates for this
failure by introducing a new manually annotated domain-specific dataset in the
style of the popular Wider Face dataset, containing 2.2k new images from
digitized historical newspapers from the $19^{th}$ to $20^{th}$ century, with
11k new bounding-box annotations and associated facial landmarks. This dataset
allows existing detectors to be retrained to bring their results closer to the
standard in the field of face detection in the wild. We report several
experimental results comparing different families of fine-tuned detectors
against publicly available pre-trained face detectors and ablation studies of
multiple detector sizes with comprehensive detection and landmark prediction
performance results.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 09:10:45 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Vaško",
"Marek",
""
],
[
"Herout",
"Adam",
""
],
[
"Hradiš",
"Michal",
""
]
] | TITLE: Archival Faces: Detection of Faces in Digitized Historical Documents
ABSTRACT: When digitizing historical archives, it is necessary to search for the faces
of celebrities and ordinary people, especially in newspapers, link them to the
surrounding text, and make them searchable. Existing face detectors on datasets
of scanned historical documents fail remarkably -- current detection tools only
achieve around $24\%$ mAP at $50:90\%$ IoU. This work compensates for this
failure by introducing a new manually annotated domain-specific dataset in the
style of the popular Wider Face dataset, containing 2.2k new images from
digitized historical newspapers from the $19^{th}$ to $20^{th}$ century, with
11k new bounding-box annotations and associated facial landmarks. This dataset
allows existing detectors to be retrained to bring their results closer to the
standard in the field of face detection in the wild. We report several
experimental results comparing different families of fine-tuned detectors
against publicly available pre-trained face detectors and ablation studies of
multiple detector sizes with comprehensive detection and landmark prediction
performance results.
| new_dataset | 0.960324 |
2504.00559 | Loveneet Saini | Loveneet Saini, Mirko Meuter, Hasan Tercan, Tobias Meisen | AttentiveGRU: Recurrent Spatio-Temporal Modeling for Advanced
Radar-Based BEV Object Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Bird's-eye view (BEV) object detection has become important for advanced
automotive 3D radar-based perception systems. However, the inherently sparse
and non-deterministic nature of radar data limits the effectiveness of
traditional single-frame BEV paradigms. In this paper, we addresses this
limitation by introducing AttentiveGRU, a novel attention-based recurrent
approach tailored for radar constraints, which extracts individualized
spatio-temporal context for objects by dynamically identifying and fusing
temporally correlated structures across present and memory states. By
leveraging the consistency of an object's latent representation over time, our
approach exploits temporal relations to enrich feature representations for both
stationary and moving objects, thereby enhancing detection performance and
eliminating the need for externally providing or estimating any information
about ego vehicle motion. Our experimental results on the public nuScenes
dataset show a significant increase in mAP for the car category by 21% over the
best radar-only submission. Further evaluations on an additional dataset
demonstrate notable improvements in object detection capabilities, underscoring
the applicability and effectiveness of our method.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 09:10:47 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Saini",
"Loveneet",
""
],
[
"Meuter",
"Mirko",
""
],
[
"Tercan",
"Hasan",
""
],
[
"Meisen",
"Tobias",
""
]
] | TITLE: AttentiveGRU: Recurrent Spatio-Temporal Modeling for Advanced
Radar-Based BEV Object Detection
ABSTRACT: Bird's-eye view (BEV) object detection has become important for advanced
automotive 3D radar-based perception systems. However, the inherently sparse
and non-deterministic nature of radar data limits the effectiveness of
traditional single-frame BEV paradigms. In this paper, we addresses this
limitation by introducing AttentiveGRU, a novel attention-based recurrent
approach tailored for radar constraints, which extracts individualized
spatio-temporal context for objects by dynamically identifying and fusing
temporally correlated structures across present and memory states. By
leveraging the consistency of an object's latent representation over time, our
approach exploits temporal relations to enrich feature representations for both
stationary and moving objects, thereby enhancing detection performance and
eliminating the need for externally providing or estimating any information
about ego vehicle motion. Our experimental results on the public nuScenes
dataset show a significant increase in mAP for the car category by 21% over the
best radar-only submission. Further evaluations on an additional dataset
demonstrate notable improvements in object detection capabilities, underscoring
the applicability and effectiveness of our method.
| no_new_dataset | 0.948251 |
2504.00573 | Yilong Xu | Yilong Xu, Jinhua Gao, Xiaoming Yu, Yuanhai Xue, Baolong Bi, Huawei
Shen, Xueqi Cheng | Training a Utility-based Retriever Through Shared Context Attribution
for Retrieval-Augmented Language Models | 20 pages, 9 figures. Code will be released after review | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval-Augmented Language Models boost task performance, owing to the
retriever that provides external knowledge. Although crucial, the retriever
primarily focuses on semantic relevance, which may not always be effective for
generation. Thus, utility-based retrieval has emerged as a promising topic,
prioritizing passages that provide valid benefits for downstream tasks.
However, due to insufficient understanding, capturing passage utility
accurately remains unexplored. This work proposes SCARLet, a framework for
training utility-based retrievers in RALMs, which incorporates two key factors,
multi-task generalization and inter-passage interaction. First, SCARLet
constructs shared context on which training data for various tasks is
synthesized. This mitigates semantic bias from context differences, allowing
retrievers to focus on learning task-specific utility for better task
generalization. Next, SCARLet uses a perturbation-based attribution method to
estimate passage-level utility for shared context, which reflects interactions
between passages and provides more accurate feedback. We evaluate our approach
on ten datasets across various tasks, both in-domain and out-of-domain, showing
that retrievers trained by SCARLet consistently improve the overall performance
of RALMs.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 09:28:28 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Xu",
"Yilong",
""
],
[
"Gao",
"Jinhua",
""
],
[
"Yu",
"Xiaoming",
""
],
[
"Xue",
"Yuanhai",
""
],
[
"Bi",
"Baolong",
""
],
[
"Shen",
"Huawei",
""
],
[
"Cheng",
"Xueqi",
""
]
] | TITLE: Training a Utility-based Retriever Through Shared Context Attribution
for Retrieval-Augmented Language Models
ABSTRACT: Retrieval-Augmented Language Models boost task performance, owing to the
retriever that provides external knowledge. Although crucial, the retriever
primarily focuses on semantic relevance, which may not always be effective for
generation. Thus, utility-based retrieval has emerged as a promising topic,
prioritizing passages that provide valid benefits for downstream tasks.
However, due to insufficient understanding, capturing passage utility
accurately remains unexplored. This work proposes SCARLet, a framework for
training utility-based retrievers in RALMs, which incorporates two key factors,
multi-task generalization and inter-passage interaction. First, SCARLet
constructs shared context on which training data for various tasks is
synthesized. This mitigates semantic bias from context differences, allowing
retrievers to focus on learning task-specific utility for better task
generalization. Next, SCARLet uses a perturbation-based attribution method to
estimate passage-level utility for shared context, which reflects interactions
between passages and provides more accurate feedback. We evaluate our approach
on ten datasets across various tasks, both in-domain and out-of-domain, showing
that retrievers trained by SCARLet consistently improve the overall performance
of RALMs.
| no_new_dataset | 0.947235 |
2504.00608 | Xianghong Xu | Xianghong Xu, Xiao He, Tieying Zhang, Lei Zhang, Rui Shi, Jianjun Chen | PLM4NDV: Minimizing Data Access for Number of Distinct Values Estimation
with Pre-trained Language Models | Accepted by SIGMOD 2025 | null | 10.1145/3725336 | null | cs.DB cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Number of Distinct Values (NDV) estimation of a multiset/column is a basis
for many data management tasks, especially within databases. Despite decades of
research, most existing methods require either a significant number of samples
through uniform random sampling or access to the entire column to produce
estimates, leading to substantial data access costs and potentially ineffective
estimations in scenarios with limited data access. In this paper, we propose
leveraging semantic information, i.e., schema, to address these challenges. The
schema contains rich semantic information that can benefit the NDV estimation.
To this end, we propose PLM4NDV, a learned method incorporating Pre-trained
Language Models (PLMs) to extract semantic schema information for NDV
estimation. Specifically, PLM4NDV leverages the semantics of the target column
and the corresponding table to gain a comprehensive understanding of the
column's meaning. By using the semantics, PLM4NDV reduces data access costs,
provides accurate NDV estimation, and can even operate effectively without any
data access. Extensive experiments on a large-scale real-world dataset
demonstrate the superiority of PLM4NDV over baseline methods. Our code is
available at https://github.com/bytedance/plm4ndv.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 10:06:20 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Xu",
"Xianghong",
""
],
[
"He",
"Xiao",
""
],
[
"Zhang",
"Tieying",
""
],
[
"Zhang",
"Lei",
""
],
[
"Shi",
"Rui",
""
],
[
"Chen",
"Jianjun",
""
]
] | TITLE: PLM4NDV: Minimizing Data Access for Number of Distinct Values Estimation
with Pre-trained Language Models
ABSTRACT: Number of Distinct Values (NDV) estimation of a multiset/column is a basis
for many data management tasks, especially within databases. Despite decades of
research, most existing methods require either a significant number of samples
through uniform random sampling or access to the entire column to produce
estimates, leading to substantial data access costs and potentially ineffective
estimations in scenarios with limited data access. In this paper, we propose
leveraging semantic information, i.e., schema, to address these challenges. The
schema contains rich semantic information that can benefit the NDV estimation.
To this end, we propose PLM4NDV, a learned method incorporating Pre-trained
Language Models (PLMs) to extract semantic schema information for NDV
estimation. Specifically, PLM4NDV leverages the semantics of the target column
and the corresponding table to gain a comprehensive understanding of the
column's meaning. By using the semantics, PLM4NDV reduces data access costs,
provides accurate NDV estimation, and can even operate effectively without any
data access. Extensive experiments on a large-scale real-world dataset
demonstrate the superiority of PLM4NDV over baseline methods. Our code is
available at https://github.com/bytedance/plm4ndv.
| no_new_dataset | 0.951594 |
2504.00609 | Huichuang Huang | Huichuan Huang, Zhiqing Zhong, Guangyu Wei, Yonghao Wan, Wenlong Sun,
Aimin Feng | Bi-Grid Reconstruction for Image Anomaly Detection | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In image anomaly detection, significant advancements have been made using un-
and self-supervised methods with datasets containing only normal samples.
However, these approaches often struggle with fine-grained anomalies. This
paper introduces \textbf{GRAD}: Bi-\textbf{G}rid \textbf{R}econstruction for
Image \textbf{A}nomaly \textbf{D}etection, which employs two continuous grids
to enhance anomaly detection from both normal and abnormal perspectives. In
this work: 1) Grids as feature repositories that improve generalization and
mitigate the Identical Shortcut (IS) issue; 2) An abnormal feature grid that
refines normal feature boundaries, boosting detection of fine-grained defects;
3) The Feature Block Paste (FBP) module, which synthesizes various anomalies at
the feature level for quick abnormal grid deployment. GRAD's robust
representation capabilities also allow it to handle multiple classes with a
single model. Evaluations on datasets like MVTecAD, VisA, and GoodsAD show
significant performance improvements in fine-grained anomaly detection. GRAD
excels in overall accuracy and in discerning subtle differences, demonstrating
its superiority over existing methods.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 10:06:38 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Huang",
"Huichuan",
""
],
[
"Zhong",
"Zhiqing",
""
],
[
"Wei",
"Guangyu",
""
],
[
"Wan",
"Yonghao",
""
],
[
"Sun",
"Wenlong",
""
],
[
"Feng",
"Aimin",
""
]
] | TITLE: Bi-Grid Reconstruction for Image Anomaly Detection
ABSTRACT: In image anomaly detection, significant advancements have been made using un-
and self-supervised methods with datasets containing only normal samples.
However, these approaches often struggle with fine-grained anomalies. This
paper introduces \textbf{GRAD}: Bi-\textbf{G}rid \textbf{R}econstruction for
Image \textbf{A}nomaly \textbf{D}etection, which employs two continuous grids
to enhance anomaly detection from both normal and abnormal perspectives. In
this work: 1) Grids as feature repositories that improve generalization and
mitigate the Identical Shortcut (IS) issue; 2) An abnormal feature grid that
refines normal feature boundaries, boosting detection of fine-grained defects;
3) The Feature Block Paste (FBP) module, which synthesizes various anomalies at
the feature level for quick abnormal grid deployment. GRAD's robust
representation capabilities also allow it to handle multiple classes with a
single model. Evaluations on datasets like MVTecAD, VisA, and GoodsAD show
significant performance improvements in fine-grained anomaly detection. GRAD
excels in overall accuracy and in discerning subtle differences, demonstrating
its superiority over existing methods.
| no_new_dataset | 0.948202 |
2504.00615 | Danial Hooshyar | Danial Hooshyar, Eve Kikas, Yeongwook Yang, Gustav \v{S}\'ir, Raija
H\"am\"al\"ainen, Tommi K\"arkk\"ainen, Roger Azevedo | Towards Responsible and Trustworthy Educational Data Mining: Comparing
Symbolic, Sub-Symbolic, and Neural-Symbolic AI Methods | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given the demand for responsible and trustworthy AI for education, this study
evaluates symbolic, sub-symbolic, and neural-symbolic AI (NSAI) in terms of
generalizability and interpretability. Our extensive experiments on balanced
and imbalanced self-regulated learning datasets of Estonian primary school
students predicting 7th-grade mathematics national test performance showed that
symbolic and sub-symbolic methods performed well on balanced data but struggled
to identify low performers in imbalanced datasets. Interestingly, symbolic and
sub-symbolic methods emphasized different factors in their decision-making:
symbolic approaches primarily relied on cognitive and motivational factors,
while sub-symbolic methods focused more on cognitive aspects, learned
knowledge, and the demographic variable of gender -- yet both largely
overlooked metacognitive factors. The NSAI method, on the other hand, showed
advantages by: (i) being more generalizable across both classes -- even in
imbalanced datasets -- as its symbolic knowledge component compensated for the
underrepresented class; and (ii) relying on a more integrated set of factors in
its decision-making, including motivation, (meta)cognition, and learned
knowledge, thus offering a comprehensive and theoretically grounded
interpretability framework. These contrasting findings highlight the need for a
holistic comparison of AI methods before drawing conclusions based solely on
predictive performance. They also underscore the potential of hybrid,
human-centered NSAI methods to address the limitations of other AI families and
move us closer to responsible AI for education. Specifically, by enabling
stakeholders to contribute to AI design, NSAI aligns learned patterns with
theoretical constructs, incorporates factors like motivation and metacognition,
and strengthens the trustworthiness and responsibility of educational data
mining.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 10:14:11 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Hooshyar",
"Danial",
""
],
[
"Kikas",
"Eve",
""
],
[
"Yang",
"Yeongwook",
""
],
[
"Šír",
"Gustav",
""
],
[
"Hämäläinen",
"Raija",
""
],
[
"Kärkkäinen",
"Tommi",
""
],
[
"Azevedo",
"Roger",
""
]
] | TITLE: Towards Responsible and Trustworthy Educational Data Mining: Comparing
Symbolic, Sub-Symbolic, and Neural-Symbolic AI Methods
ABSTRACT: Given the demand for responsible and trustworthy AI for education, this study
evaluates symbolic, sub-symbolic, and neural-symbolic AI (NSAI) in terms of
generalizability and interpretability. Our extensive experiments on balanced
and imbalanced self-regulated learning datasets of Estonian primary school
students predicting 7th-grade mathematics national test performance showed that
symbolic and sub-symbolic methods performed well on balanced data but struggled
to identify low performers in imbalanced datasets. Interestingly, symbolic and
sub-symbolic methods emphasized different factors in their decision-making:
symbolic approaches primarily relied on cognitive and motivational factors,
while sub-symbolic methods focused more on cognitive aspects, learned
knowledge, and the demographic variable of gender -- yet both largely
overlooked metacognitive factors. The NSAI method, on the other hand, showed
advantages by: (i) being more generalizable across both classes -- even in
imbalanced datasets -- as its symbolic knowledge component compensated for the
underrepresented class; and (ii) relying on a more integrated set of factors in
its decision-making, including motivation, (meta)cognition, and learned
knowledge, thus offering a comprehensive and theoretically grounded
interpretability framework. These contrasting findings highlight the need for a
holistic comparison of AI methods before drawing conclusions based solely on
predictive performance. They also underscore the potential of hybrid,
human-centered NSAI methods to address the limitations of other AI families and
move us closer to responsible AI for education. Specifically, by enabling
stakeholders to contribute to AI design, NSAI aligns learned patterns with
theoretical constructs, incorporates factors like motivation and metacognition,
and strengthens the trustworthiness and responsibility of educational data
mining.
| no_new_dataset | 0.948489 |
2504.00660 | Jin Shaocheng | Rui Wang, Shaocheng Jin, Ziheng Chen, Xiaoqing Luo, Xiao-Jun Wu | Learning to Normalize on the SPD Manifold under Bures-Wasserstein
Geometry | Accepted by CVPR 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Covariance matrices have proven highly effective across many scientific
fields. Since these matrices lie within the Symmetric Positive Definite (SPD)
manifold - a Riemannian space with intrinsic non-Euclidean geometry, the
primary challenge in representation learning is to respect this underlying
geometric structure. Drawing inspiration from the success of Euclidean deep
learning, researchers have developed neural networks on the SPD manifolds for
more faithful covariance embedding learning. A notable advancement in this area
is the implementation of Riemannian batch normalization (RBN), which has been
shown to improve the performance of SPD network models. Nonetheless, the
Riemannian metric beneath the existing RBN might fail to effectively deal with
the ill-conditioned SPD matrices (ICSM), undermining the effectiveness of RBN.
In contrast, the Bures-Wasserstein metric (BWM) demonstrates superior
performance for ill-conditioning. In addition, the recently introduced
Generalized BWM (GBWM) parameterizes the vanilla BWM via an SPD matrix,
allowing for a more nuanced representation of vibrant geometries of the SPD
manifold. Therefore, we propose a novel RBN algorithm based on the GBW
geometry, incorporating a learnable metric parameter. Moreover, the deformation
of GBWM by matrix power is also introduced to further enhance the
representational capacity of GBWM-based RBN. Experimental results on different
datasets validate the effectiveness of our proposed method.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 11:12:58 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wang",
"Rui",
""
],
[
"Jin",
"Shaocheng",
""
],
[
"Chen",
"Ziheng",
""
],
[
"Luo",
"Xiaoqing",
""
],
[
"Wu",
"Xiao-Jun",
""
]
] | TITLE: Learning to Normalize on the SPD Manifold under Bures-Wasserstein
Geometry
ABSTRACT: Covariance matrices have proven highly effective across many scientific
fields. Since these matrices lie within the Symmetric Positive Definite (SPD)
manifold - a Riemannian space with intrinsic non-Euclidean geometry, the
primary challenge in representation learning is to respect this underlying
geometric structure. Drawing inspiration from the success of Euclidean deep
learning, researchers have developed neural networks on the SPD manifolds for
more faithful covariance embedding learning. A notable advancement in this area
is the implementation of Riemannian batch normalization (RBN), which has been
shown to improve the performance of SPD network models. Nonetheless, the
Riemannian metric beneath the existing RBN might fail to effectively deal with
the ill-conditioned SPD matrices (ICSM), undermining the effectiveness of RBN.
In contrast, the Bures-Wasserstein metric (BWM) demonstrates superior
performance for ill-conditioning. In addition, the recently introduced
Generalized BWM (GBWM) parameterizes the vanilla BWM via an SPD matrix,
allowing for a more nuanced representation of vibrant geometries of the SPD
manifold. Therefore, we propose a novel RBN algorithm based on the GBW
geometry, incorporating a learnable metric parameter. Moreover, the deformation
of GBWM by matrix power is also introduced to further enhance the
representational capacity of GBWM-based RBN. Experimental results on different
datasets validate the effectiveness of our proposed method.
| no_new_dataset | 0.945248 |
2504.00664 | Ramakanth Kavuluru | Motasem S Obeidat and Md Sultan Al Nahian and Ramakanth Kavuluru | Do LLMs Surpass Encoders for Biomedical NER? | Accepted to appear in IEEE ICHI 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recognizing spans of biomedical concepts and their types (e.g., drug or gene)
in free text, often called biomedical named entity recognition (NER), is a
basic component of information extraction (IE) pipelines. Without a strong NER
component, other applications, such as knowledge discovery and information
retrieval, are not practical. State-of-the-art in NER shifted from traditional
ML models to deep neural networks with transformer-based encoder models (e.g.,
BERT) emerging as the current standard. However, decoder models (also called
large language models or LLMs) are gaining traction in IE. But LLM-driven NER
often ignores positional information due to the generative nature of decoder
models. Furthermore, they are computationally very expensive (both in inference
time and hardware needs). Hence, it is worth exploring if they actually excel
at biomedical NER and assess any associated trade-offs (performance vs
efficiency). This is exactly what we do in this effort employing the same BIO
entity tagging scheme (that retains positional information) using five
different datasets with varying proportions of longer entities. Our results
show that the LLMs chosen (Mistral and Llama: 8B range) often outperform best
encoder models (BERT-(un)cased, BiomedBERT, and DeBERTav3: 300M range) by 2-8%
in F-scores except for one dataset, where they equal encoder performance. This
gain is more prominent among longer entities of length >= 3 tokens. However,
LLMs are one to two orders of magnitude more expensive at inference time and
may need cost prohibitive hardware. Thus, when performance differences are
small or real time user feedback is needed, encoder models might still be more
suitable than LLMs.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 11:16:13 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Obeidat",
"Motasem S",
""
],
[
"Nahian",
"Md Sultan Al",
""
],
[
"Kavuluru",
"Ramakanth",
""
]
] | TITLE: Do LLMs Surpass Encoders for Biomedical NER?
ABSTRACT: Recognizing spans of biomedical concepts and their types (e.g., drug or gene)
in free text, often called biomedical named entity recognition (NER), is a
basic component of information extraction (IE) pipelines. Without a strong NER
component, other applications, such as knowledge discovery and information
retrieval, are not practical. State-of-the-art in NER shifted from traditional
ML models to deep neural networks with transformer-based encoder models (e.g.,
BERT) emerging as the current standard. However, decoder models (also called
large language models or LLMs) are gaining traction in IE. But LLM-driven NER
often ignores positional information due to the generative nature of decoder
models. Furthermore, they are computationally very expensive (both in inference
time and hardware needs). Hence, it is worth exploring if they actually excel
at biomedical NER and assess any associated trade-offs (performance vs
efficiency). This is exactly what we do in this effort employing the same BIO
entity tagging scheme (that retains positional information) using five
different datasets with varying proportions of longer entities. Our results
show that the LLMs chosen (Mistral and Llama: 8B range) often outperform best
encoder models (BERT-(un)cased, BiomedBERT, and DeBERTav3: 300M range) by 2-8%
in F-scores except for one dataset, where they equal encoder performance. This
gain is more prominent among longer entities of length >= 3 tokens. However,
LLMs are one to two orders of magnitude more expensive at inference time and
may need cost prohibitive hardware. Thus, when performance differences are
small or real time user feedback is needed, encoder models might still be more
suitable than LLMs.
| no_new_dataset | 0.94256 |
2504.00665 | Shuangping Huang | Shengjie Gong, Haojie Li, Jiapeng Tang, Dongming Hu, Shuangping Huang,
Hao Chen, Tianshui Chen, Zhuoman Liu | Monocular and Generalizable Gaussian Talking Head Animation | Accepted by CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | In this work, we introduce Monocular and Generalizable Gaussian Talking Head
Animation (MGGTalk), which requires monocular datasets and generalizes to
unseen identities without personalized re-training. Compared with previous 3D
Gaussian Splatting (3DGS) methods that require elusive multi-view datasets or
tedious personalized learning/inference, MGGTalk enables more practical and
broader applications. However, in the absence of multi-view and personalized
training data, the incompleteness of geometric and appearance information poses
a significant challenge. To address these challenges, MGGTalk explores depth
information to enhance geometric and facial symmetry characteristics to
supplement both geometric and appearance features. Initially, based on the
pixel-wise geometric information obtained from depth estimation, we incorporate
symmetry operations and point cloud filtering techniques to ensure a complete
and precise position parameter for 3DGS. Subsequently, we adopt a two-stage
strategy with symmetric priors for predicting the remaining 3DGS parameters. We
begin by predicting Gaussian parameters for the visible facial regions of the
source image. These parameters are subsequently utilized to improve the
prediction of Gaussian parameters for the non-visible regions. Extensive
experiments demonstrate that MGGTalk surpasses previous state-of-the-art
methods, achieving superior performance across various metrics.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 11:16:52 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Gong",
"Shengjie",
""
],
[
"Li",
"Haojie",
""
],
[
"Tang",
"Jiapeng",
""
],
[
"Hu",
"Dongming",
""
],
[
"Huang",
"Shuangping",
""
],
[
"Chen",
"Hao",
""
],
[
"Chen",
"Tianshui",
""
],
[
"Liu",
"Zhuoman",
""
]
] | TITLE: Monocular and Generalizable Gaussian Talking Head Animation
ABSTRACT: In this work, we introduce Monocular and Generalizable Gaussian Talking Head
Animation (MGGTalk), which requires monocular datasets and generalizes to
unseen identities without personalized re-training. Compared with previous 3D
Gaussian Splatting (3DGS) methods that require elusive multi-view datasets or
tedious personalized learning/inference, MGGTalk enables more practical and
broader applications. However, in the absence of multi-view and personalized
training data, the incompleteness of geometric and appearance information poses
a significant challenge. To address these challenges, MGGTalk explores depth
information to enhance geometric and facial symmetry characteristics to
supplement both geometric and appearance features. Initially, based on the
pixel-wise geometric information obtained from depth estimation, we incorporate
symmetry operations and point cloud filtering techniques to ensure a complete
and precise position parameter for 3DGS. Subsequently, we adopt a two-stage
strategy with symmetric priors for predicting the remaining 3DGS parameters. We
begin by predicting Gaussian parameters for the visible facial regions of the
source image. These parameters are subsequently utilized to improve the
prediction of Gaussian parameters for the non-visible regions. Extensive
experiments demonstrate that MGGTalk surpasses previous state-of-the-art
methods, achieving superior performance across various metrics.
| no_new_dataset | 0.950732 |
2504.00676 | Anthony Yazdani | Anthony Yazdani, Ihor Stepanov, Douglas Teodoro | GLiNER-biomed: A Suite of Efficient Models for Open Biomedical Named
Entity Recognition | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Biomedical named entity recognition (NER) presents unique challenges due to
specialized vocabularies, the sheer volume of entities, and the continuous
emergence of novel entities. Traditional NER models, constrained by fixed
taxonomies and human annotations, struggle to generalize beyond predefined
entity types or efficiently adapt to emerging concepts. To address these
issues, we introduce GLiNER-biomed, a domain-adapted suite of Generalist and
Lightweight Model for NER (GLiNER) models specifically tailored for biomedical
NER. In contrast to conventional approaches, GLiNER uses natural language
descriptions to infer arbitrary entity types, enabling zero-shot recognition.
Our approach first distills the annotation capabilities of large language
models (LLMs) into a smaller, more efficient model, enabling the generation of
high-coverage synthetic biomedical NER data. We subsequently train two GLiNER
architectures, uni- and bi-encoder, at multiple scales to balance computational
efficiency and recognition performance. Evaluations on several biomedical
datasets demonstrate that GLiNER-biomed outperforms state-of-the-art GLiNER
models in both zero- and few-shot scenarios, achieving 5.96% improvement in
F1-score over the strongest baseline. Ablation studies highlight the
effectiveness of our synthetic data generation strategy and emphasize the
complementary benefits of synthetic biomedical pre-training combined with
fine-tuning on high-quality general-domain annotations. All datasets, models,
and training pipelines are publicly available at
https://github.com/ds4dh/GLiNER-biomed.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 11:40:50 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Yazdani",
"Anthony",
""
],
[
"Stepanov",
"Ihor",
""
],
[
"Teodoro",
"Douglas",
""
]
] | TITLE: GLiNER-biomed: A Suite of Efficient Models for Open Biomedical Named
Entity Recognition
ABSTRACT: Biomedical named entity recognition (NER) presents unique challenges due to
specialized vocabularies, the sheer volume of entities, and the continuous
emergence of novel entities. Traditional NER models, constrained by fixed
taxonomies and human annotations, struggle to generalize beyond predefined
entity types or efficiently adapt to emerging concepts. To address these
issues, we introduce GLiNER-biomed, a domain-adapted suite of Generalist and
Lightweight Model for NER (GLiNER) models specifically tailored for biomedical
NER. In contrast to conventional approaches, GLiNER uses natural language
descriptions to infer arbitrary entity types, enabling zero-shot recognition.
Our approach first distills the annotation capabilities of large language
models (LLMs) into a smaller, more efficient model, enabling the generation of
high-coverage synthetic biomedical NER data. We subsequently train two GLiNER
architectures, uni- and bi-encoder, at multiple scales to balance computational
efficiency and recognition performance. Evaluations on several biomedical
datasets demonstrate that GLiNER-biomed outperforms state-of-the-art GLiNER
models in both zero- and few-shot scenarios, achieving 5.96% improvement in
F1-score over the strongest baseline. Ablation studies highlight the
effectiveness of our synthetic data generation strategy and emphasize the
complementary benefits of synthetic biomedical pre-training combined with
fine-tuning on high-quality general-domain annotations. All datasets, models,
and training pipelines are publicly available at
https://github.com/ds4dh/GLiNER-biomed.
| no_new_dataset | 0.946498 |
2504.00679 | Sai Li | Sai Li, Linliang Chen, Yihao Zhang, Zhongkui Zhang, Ao Du, Biao Pan,
Zhaohao Wang, Lianggong Wen, and Weisheng Zhao | QUEST: A Quantized Energy-Aware SNN Training Framework for Multi-State
Neuromorphic Devices | null | null | null | null | physics.app-ph | http://creativecommons.org/licenses/by/4.0/ | Neuromorphic devices, leveraging novel physical phenomena, offer a promising
path toward energy-efficient hardware beyond CMOS technology by emulating
brain-inspired computation. However, their progress is often limited to
proof-of-concept studies due to the lack of flexible spiking neural network
(SNN) algorithm frameworks tailored to device-specific characteristics, posing
a significant challenge to scalability and practical deployment. To address
this, we propose QUEST, a unified co-design framework that directly trains SNN
for emerging devices featuring multilevel resistances. With Skyrmionic Magnetic
Tunnel Junction (Sk-MTJ) as a case study, experimental results on the CIFAR-10
dataset demonstrate the framework's ability to enable scalable on-device SNN
training with minimal energy consumption during both feedforward and
backpropagation. By introducing device mapping pattern and activation operation
sparsity, QUEST achieves effective trade-offs among high accuracy (89.6%), low
bit precision (2-bit), and energy efficiency (93 times improvement over the
ANNs). QUEST offers practical design guidelines for both the device and
algorithm communities, providing insights to build energy-efficient and
large-scale neuromorphic systems.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 11:47:07 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Li",
"Sai",
""
],
[
"Chen",
"Linliang",
""
],
[
"Zhang",
"Yihao",
""
],
[
"Zhang",
"Zhongkui",
""
],
[
"Du",
"Ao",
""
],
[
"Pan",
"Biao",
""
],
[
"Wang",
"Zhaohao",
""
],
[
"Wen",
"Lianggong",
""
],
[
"Zhao",
"Weisheng",
""
]
] | TITLE: QUEST: A Quantized Energy-Aware SNN Training Framework for Multi-State
Neuromorphic Devices
ABSTRACT: Neuromorphic devices, leveraging novel physical phenomena, offer a promising
path toward energy-efficient hardware beyond CMOS technology by emulating
brain-inspired computation. However, their progress is often limited to
proof-of-concept studies due to the lack of flexible spiking neural network
(SNN) algorithm frameworks tailored to device-specific characteristics, posing
a significant challenge to scalability and practical deployment. To address
this, we propose QUEST, a unified co-design framework that directly trains SNN
for emerging devices featuring multilevel resistances. With Skyrmionic Magnetic
Tunnel Junction (Sk-MTJ) as a case study, experimental results on the CIFAR-10
dataset demonstrate the framework's ability to enable scalable on-device SNN
training with minimal energy consumption during both feedforward and
backpropagation. By introducing device mapping pattern and activation operation
sparsity, QUEST achieves effective trade-offs among high accuracy (89.6%), low
bit precision (2-bit), and energy efficiency (93 times improvement over the
ANNs). QUEST offers practical design guidelines for both the device and
algorithm communities, providing insights to build energy-efficient and
large-scale neuromorphic systems.
| no_new_dataset | 0.944944 |
2504.00691 | Yuanchen Wu | Yuanchen Wu, Junlong Du, Ke Yan, Shouhong Ding, Xiaoqiang Li | ToVE: Efficient Vision-Language Learning via Knowledge Transfer from
Vision Experts | Accepted to ICLR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision-language (VL) learning requires extensive visual perception
capabilities, such as fine-grained object recognition and spatial perception.
Recent works typically rely on training huge models on massive datasets to
develop these capabilities. As a more efficient alternative, this paper
proposes a new framework that Transfers the knowledge from a hub of Vision
Experts (ToVE) for efficient VL learning, leveraging pre-trained vision expert
models to promote visual perception capability. Specifically, building on a
frozen CLIP encoder that provides vision tokens for image-conditioned language
generation, ToVE introduces a hub of multiple vision experts and a token-aware
gating network that dynamically routes expert knowledge to vision tokens. In
the transfer phase, we propose a "residual knowledge transfer" strategy, which
not only preserves the generalizability of the vision tokens but also allows
detachment of low-contributing experts to improve inference efficiency.
Further, we explore merging this expert knowledge into a single CLIP encoder,
creating a knowledge-merged CLIP that produces more informative vision tokens
without expert inference during deployment. Experimental results across various
VL tasks demonstrate that the proposed ToVE achieves competitive performance
with two orders of magnitude less training data.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 12:02:40 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wu",
"Yuanchen",
""
],
[
"Du",
"Junlong",
""
],
[
"Yan",
"Ke",
""
],
[
"Ding",
"Shouhong",
""
],
[
"Li",
"Xiaoqiang",
""
]
] | TITLE: ToVE: Efficient Vision-Language Learning via Knowledge Transfer from
Vision Experts
ABSTRACT: Vision-language (VL) learning requires extensive visual perception
capabilities, such as fine-grained object recognition and spatial perception.
Recent works typically rely on training huge models on massive datasets to
develop these capabilities. As a more efficient alternative, this paper
proposes a new framework that Transfers the knowledge from a hub of Vision
Experts (ToVE) for efficient VL learning, leveraging pre-trained vision expert
models to promote visual perception capability. Specifically, building on a
frozen CLIP encoder that provides vision tokens for image-conditioned language
generation, ToVE introduces a hub of multiple vision experts and a token-aware
gating network that dynamically routes expert knowledge to vision tokens. In
the transfer phase, we propose a "residual knowledge transfer" strategy, which
not only preserves the generalizability of the vision tokens but also allows
detachment of low-contributing experts to improve inference efficiency.
Further, we explore merging this expert knowledge into a single CLIP encoder,
creating a knowledge-merged CLIP that produces more informative vision tokens
without expert inference during deployment. Experimental results across various
VL tasks demonstrate that the proposed ToVE achieves competitive performance
with two orders of magnitude less training data.
| no_new_dataset | 0.95297 |
2504.00694 | Yiling He | Yiling He, Hongyu She, Xingzhi Qian, Xinran Zheng, Zhuo Chen, Zhan
Qin, Lorenzo Cavallaro | On Benchmarking Code LLMs for Android Malware Analysis | null | null | null | null | cs.CR cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Language Models (LLMs) have demonstrated strong capabilities in various
code intelligence tasks. However, their effectiveness for Android malware
analysis remains underexplored. Decompiled Android code poses unique challenges
for analysis, primarily due to its large volume of functions and the frequent
absence of meaningful function names. This paper presents Cama, a benchmarking
framework designed to systematically evaluate the effectiveness of Code LLMs in
Android malware analysis tasks. Cama specifies structured model outputs
(comprising function summaries, refined function names, and maliciousness
scores) to support key malware analysis tasks, including malicious function
identification and malware purpose summarization. Built on these, it integrates
three domain-specific evaluation metrics, consistency, fidelity, and semantic
relevance, enabling rigorous stability and effectiveness assessment and
cross-model comparison. We construct a benchmark dataset consisting of 118
Android malware samples, encompassing over 7.5 million distinct functions, and
use Cama to evaluate four popular open-source models. Our experiments provide
insights into how Code LLMs interpret decompiled code and quantify the
sensitivity to function renaming, highlighting both the potential and current
limitations of Code LLMs in malware analysis tasks.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 12:05:49 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"He",
"Yiling",
""
],
[
"She",
"Hongyu",
""
],
[
"Qian",
"Xingzhi",
""
],
[
"Zheng",
"Xinran",
""
],
[
"Chen",
"Zhuo",
""
],
[
"Qin",
"Zhan",
""
],
[
"Cavallaro",
"Lorenzo",
""
]
] | TITLE: On Benchmarking Code LLMs for Android Malware Analysis
ABSTRACT: Large Language Models (LLMs) have demonstrated strong capabilities in various
code intelligence tasks. However, their effectiveness for Android malware
analysis remains underexplored. Decompiled Android code poses unique challenges
for analysis, primarily due to its large volume of functions and the frequent
absence of meaningful function names. This paper presents Cama, a benchmarking
framework designed to systematically evaluate the effectiveness of Code LLMs in
Android malware analysis tasks. Cama specifies structured model outputs
(comprising function summaries, refined function names, and maliciousness
scores) to support key malware analysis tasks, including malicious function
identification and malware purpose summarization. Built on these, it integrates
three domain-specific evaluation metrics, consistency, fidelity, and semantic
relevance, enabling rigorous stability and effectiveness assessment and
cross-model comparison. We construct a benchmark dataset consisting of 118
Android malware samples, encompassing over 7.5 million distinct functions, and
use Cama to evaluate four popular open-source models. Our experiments provide
insights into how Code LLMs interpret decompiled code and quantify the
sensitivity to function renaming, highlighting both the potential and current
limitations of Code LLMs in malware analysis tasks.
| new_dataset | 0.957991 |
2504.00711 | Enjun Du | Enjun Du, Xunkai Li, Tian Jin, Zhihan Zhang, Rong-Hua Li, and Guoren
Wang | GraphMaster: Automated Graph Synthesis via LLM Agents in Data-Limited
Environments | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | The era of foundation models has revolutionized AI research, yet Graph
Foundation Models (GFMs) remain constrained by the scarcity of large-scale
graph corpora. Traditional graph data synthesis techniques primarily focus on
simplistic structural operations, lacking the capacity to generate semantically
rich nodes with meaningful textual attributes: a critical limitation for
real-world applications. While large language models (LLMs) demonstrate
exceptional text generation capabilities, their direct application to graph
synthesis is impeded by context window limitations, hallucination phenomena,
and structural consistency challenges. To address these issues, we introduce
GraphMaster, the first multi-agent framework specifically designed for graph
data synthesis in data-limited environments. GraphMaster orchestrates four
specialized LLM agents (Manager, Perception, Enhancement, and Evaluation) that
collaboratively optimize the synthesis process through iterative refinement,
ensuring both semantic coherence and structural integrity. To rigorously
evaluate our approach, we create new data-limited "Sub" variants of six
standard graph benchmarks, specifically designed to test synthesis capabilities
under realistic constraints. Additionally, we develop a novel interpretability
assessment framework that combines human evaluation with a principled
Grassmannian manifold-based analysis, providing both qualitative and
quantitative measures of semantic coherence. Experimental results demonstrate
that GraphMaster significantly outperforms traditional synthesis methods across
multiple datasets, establishing a strong foundation for advancing GFMs in
data-scarce environments.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 12:21:50 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Du",
"Enjun",
""
],
[
"Li",
"Xunkai",
""
],
[
"Jin",
"Tian",
""
],
[
"Zhang",
"Zhihan",
""
],
[
"Li",
"Rong-Hua",
""
],
[
"Wang",
"Guoren",
""
]
] | TITLE: GraphMaster: Automated Graph Synthesis via LLM Agents in Data-Limited
Environments
ABSTRACT: The era of foundation models has revolutionized AI research, yet Graph
Foundation Models (GFMs) remain constrained by the scarcity of large-scale
graph corpora. Traditional graph data synthesis techniques primarily focus on
simplistic structural operations, lacking the capacity to generate semantically
rich nodes with meaningful textual attributes: a critical limitation for
real-world applications. While large language models (LLMs) demonstrate
exceptional text generation capabilities, their direct application to graph
synthesis is impeded by context window limitations, hallucination phenomena,
and structural consistency challenges. To address these issues, we introduce
GraphMaster, the first multi-agent framework specifically designed for graph
data synthesis in data-limited environments. GraphMaster orchestrates four
specialized LLM agents (Manager, Perception, Enhancement, and Evaluation) that
collaboratively optimize the synthesis process through iterative refinement,
ensuring both semantic coherence and structural integrity. To rigorously
evaluate our approach, we create new data-limited "Sub" variants of six
standard graph benchmarks, specifically designed to test synthesis capabilities
under realistic constraints. Additionally, we develop a novel interpretability
assessment framework that combines human evaluation with a principled
Grassmannian manifold-based analysis, providing both qualitative and
quantitative measures of semantic coherence. Experimental results demonstrate
that GraphMaster significantly outperforms traditional synthesis methods across
multiple datasets, establishing a strong foundation for advancing GFMs in
data-scarce environments.
| no_new_dataset | 0.946547 |
2504.00712 | Sanath Keshav | Sanath Keshav, Julius Herb, Felix Fritzen | Spectral Normalization and Voigt-Reuss net: A universal approach to
microstructure-property forecasting with physical guarantees | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Heterogeneous materials are crucial to producing lightweight components,
functional components, and structures composed of them. A crucial step in the
design process is the rapid evaluation of their effective mechanical, thermal,
or, in general, constitutive properties. The established procedure is to use
forward models that accept microstructure geometry and local constitutive
properties as inputs. The classical simulation-based approach, which uses,
e.g., finite elements and FFT-based solvers, can require substantial
computational resources. At the same time, simulation-based models struggle to
provide gradients with respect to the microstructure and the constitutive
parameters. Such gradients are, however, of paramount importance for
microstructure design and for inverting the microstructure-property mapping.
Machine learning surrogates can excel in these situations. However, they can
lead to unphysical predictions that violate essential bounds on the
constitutive response, such as the upper (Voigt-like) or the lower (Reuss-like)
bound in linear elasticity. Therefore, we propose a novel spectral
normalization scheme that a priori enforces these bounds. The approach is fully
agnostic with respect to the chosen microstructural features and the utilized
surrogate model. All of these will automatically and strictly predict outputs
that obey the upper and lower bounds by construction. The technique can be used
for any constitutive tensor that is symmetric and where upper and lower bounds
(in the L\"owner sense) exist, i.e., for permeability, thermal conductivity,
linear elasticity, and many more. We demonstrate the use of spectral
normalization in the Voigt-Reuss net using a simple neural network. Numerical
examples on truly extensive datasets illustrate the improved accuracy,
robustness, and independence of the type of input features in comparison to
much-used neural networks.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 12:21:57 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Keshav",
"Sanath",
""
],
[
"Herb",
"Julius",
""
],
[
"Fritzen",
"Felix",
""
]
] | TITLE: Spectral Normalization and Voigt-Reuss net: A universal approach to
microstructure-property forecasting with physical guarantees
ABSTRACT: Heterogeneous materials are crucial to producing lightweight components,
functional components, and structures composed of them. A crucial step in the
design process is the rapid evaluation of their effective mechanical, thermal,
or, in general, constitutive properties. The established procedure is to use
forward models that accept microstructure geometry and local constitutive
properties as inputs. The classical simulation-based approach, which uses,
e.g., finite elements and FFT-based solvers, can require substantial
computational resources. At the same time, simulation-based models struggle to
provide gradients with respect to the microstructure and the constitutive
parameters. Such gradients are, however, of paramount importance for
microstructure design and for inverting the microstructure-property mapping.
Machine learning surrogates can excel in these situations. However, they can
lead to unphysical predictions that violate essential bounds on the
constitutive response, such as the upper (Voigt-like) or the lower (Reuss-like)
bound in linear elasticity. Therefore, we propose a novel spectral
normalization scheme that a priori enforces these bounds. The approach is fully
agnostic with respect to the chosen microstructural features and the utilized
surrogate model. All of these will automatically and strictly predict outputs
that obey the upper and lower bounds by construction. The technique can be used
for any constitutive tensor that is symmetric and where upper and lower bounds
(in the L\"owner sense) exist, i.e., for permeability, thermal conductivity,
linear elasticity, and many more. We demonstrate the use of spectral
normalization in the Voigt-Reuss net using a simple neural network. Numerical
examples on truly extensive datasets illustrate the improved accuracy,
robustness, and independence of the type of input features in comparison to
much-used neural networks.
| no_new_dataset | 0.951051 |
2504.00719 | Jules Lecomte | Thomas E. Huber, Jules Lecomte, Borislav Polovnikov, and Axel von
Arnim | Scaling Up Resonate-and-Fire Networks for Fast Deep Learning | 19 pages, 3 figures | Lecture Notes in Computer Science, volume 15059, Proceedings of
the 18th European Conference on Computer Vision, ECCV 2024, part I | null | null | cs.NE cs.CV | http://creativecommons.org/licenses/by/4.0/ | Spiking neural networks (SNNs) present a promising computing paradigm for
neuromorphic processing of event-based sensor data. The resonate-and-fire (RF)
neuron, in particular, appeals through its biological plausibility, complex
dynamics, yet computational simplicity. Despite theoretically predicted
benefits, challenges in parameter initialization and efficient learning
inhibited the implementation of RF networks, constraining their use to a single
layer. In this paper, we address these shortcomings by deriving the RF neuron
as a structured state space model (SSM) from the HiPPO framework. We introduce
S5-RF, a new SSM layer comprised of RF neurons based on the S5 model, that
features a generic initialization scheme and fast training within a deep
architecture. S5-RF scales an RF network for the first time to a deep SNN with
up to four layers and, at 78.8%, achieves a new state-of-the-art result for
recurrent SNNs on the Spiking Speech Commands dataset in under three hours of
training time. Moreover, compared to the reference SNNs that solve our
benchmarking tasks, it achieves similar performance with much fewer spiking
operations. Our code is publicly available at
https://github.com/ThomasEHuber/s5-rf.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 12:30:55 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Huber",
"Thomas E.",
""
],
[
"Lecomte",
"Jules",
""
],
[
"Polovnikov",
"Borislav",
""
],
[
"von Arnim",
"Axel",
""
]
] | TITLE: Scaling Up Resonate-and-Fire Networks for Fast Deep Learning
ABSTRACT: Spiking neural networks (SNNs) present a promising computing paradigm for
neuromorphic processing of event-based sensor data. The resonate-and-fire (RF)
neuron, in particular, appeals through its biological plausibility, complex
dynamics, yet computational simplicity. Despite theoretically predicted
benefits, challenges in parameter initialization and efficient learning
inhibited the implementation of RF networks, constraining their use to a single
layer. In this paper, we address these shortcomings by deriving the RF neuron
as a structured state space model (SSM) from the HiPPO framework. We introduce
S5-RF, a new SSM layer comprised of RF neurons based on the S5 model, that
features a generic initialization scheme and fast training within a deep
architecture. S5-RF scales an RF network for the first time to a deep SNN with
up to four layers and, at 78.8%, achieves a new state-of-the-art result for
recurrent SNNs on the Spiking Speech Commands dataset in under three hours of
training time. Moreover, compared to the reference SNNs that solve our
benchmarking tasks, it achieves similar performance with much fewer spiking
operations. Our code is publicly available at
https://github.com/ThomasEHuber/s5-rf.
| no_new_dataset | 0.947962 |
2504.00730 | Peiqi Li | Jiayuan She, Lin Shi, Peiqi Li, Ziling Dong, Renxing Li, Shengkai Li,
Liping Gu, Tong Zhao, Zhuochang Yang, Yajie Ji, Liang Feng, Jiangang Chen | Detection of Disease on Nasal Breath Sound by New Lightweight
Architecture: Using COVID-19 as An Example | 14 pages, 5 figures, 6 tables | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background. Infectious diseases, particularly COVID-19, continue to be a
significant global health issue. Although many countries have reduced or
stopped large-scale testing measures, the detection of such diseases remains a
priority. Objective. This study aims to develop a novel, lightweight deep
neural network for efficient, accurate, and cost-effective detection of
COVID-19 using nasal breathing audio data collected via smartphones.
Methodology. Nasal breathing audio from 128 patients diagnosed with the Omicron
variant was collected. Mel-Frequency Cepstral Coefficients (MFCCs), a widely
used feature in speech and sound analysis, were employed for extracting
important characteristics from the audio signals. Additional feature selection
was performed using Random Forest (RF) and Principal Component Analysis (PCA)
for dimensionality reduction. A Dense-ReLU-Dropout model was trained with
K-fold cross-validation (K=3), and performance metrics like accuracy,
precision, recall, and F1-score were used to evaluate the model. Results. The
proposed model achieved 97% accuracy in detecting COVID-19 from nasal breathing
sounds, outperforming state-of-the-art methods such as those by [23] and [13].
Our Dense-ReLU-Dropout model, using RF and PCA for feature selection, achieves
high accuracy with greater computational efficiency compared to existing
methods that require more complex models or larger datasets. Conclusion. The
findings suggest that the proposed method holds significant potential for
clinical implementation, advancing smartphone-based diagnostics in infectious
diseases. The Dense-ReLU-Dropout model, combined with innovative feature
processing techniques, offers a promising approach for efficient and accurate
COVID-19 detection, showcasing the capabilities of mobile device-based
diagnostics
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 12:41:53 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"She",
"Jiayuan",
""
],
[
"Shi",
"Lin",
""
],
[
"Li",
"Peiqi",
""
],
[
"Dong",
"Ziling",
""
],
[
"Li",
"Renxing",
""
],
[
"Li",
"Shengkai",
""
],
[
"Gu",
"Liping",
""
],
[
"Zhao",
"Tong",
""
],
[
"Yang",
"Zhuochang",
""
],
[
"Ji",
"Yajie",
""
],
[
"Feng",
"Liang",
""
],
[
"Chen",
"Jiangang",
""
]
] | TITLE: Detection of Disease on Nasal Breath Sound by New Lightweight
Architecture: Using COVID-19 as An Example
ABSTRACT: Background. Infectious diseases, particularly COVID-19, continue to be a
significant global health issue. Although many countries have reduced or
stopped large-scale testing measures, the detection of such diseases remains a
priority. Objective. This study aims to develop a novel, lightweight deep
neural network for efficient, accurate, and cost-effective detection of
COVID-19 using nasal breathing audio data collected via smartphones.
Methodology. Nasal breathing audio from 128 patients diagnosed with the Omicron
variant was collected. Mel-Frequency Cepstral Coefficients (MFCCs), a widely
used feature in speech and sound analysis, were employed for extracting
important characteristics from the audio signals. Additional feature selection
was performed using Random Forest (RF) and Principal Component Analysis (PCA)
for dimensionality reduction. A Dense-ReLU-Dropout model was trained with
K-fold cross-validation (K=3), and performance metrics like accuracy,
precision, recall, and F1-score were used to evaluate the model. Results. The
proposed model achieved 97% accuracy in detecting COVID-19 from nasal breathing
sounds, outperforming state-of-the-art methods such as those by [23] and [13].
Our Dense-ReLU-Dropout model, using RF and PCA for feature selection, achieves
high accuracy with greater computational efficiency compared to existing
methods that require more complex models or larger datasets. Conclusion. The
findings suggest that the proposed method holds significant potential for
clinical implementation, advancing smartphone-based diagnostics in infectious
diseases. The Dense-ReLU-Dropout model, combined with innovative feature
processing techniques, offers a promising approach for efficient and accurate
COVID-19 detection, showcasing the capabilities of mobile device-based
diagnostics
| no_new_dataset | 0.951997 |
2504.00748 | Yunsoo Kim | Yunsoo Kim and Michal W. S. Ong and Daniel W. Rogalsky and Manuel
Rodriguez-Justo and Honghan Wu and Adam P. Levine | IHC-LLMiner: Automated extraction of tumour immunohistochemical profiles
from PubMed abstracts using large language models | currently under review | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Immunohistochemistry (IHC) is essential in diagnostic pathology and
biomedical research, offering critical insights into protein expression and
tumour biology. This study presents an automated pipeline, IHC-LLMiner, for
extracting IHC-tumour profiles from PubMed abstracts, leveraging advanced
biomedical text mining. There are two subtasks: abstract classification
(include/exclude as relevant) and IHC-tumour profile extraction on relevant
included abstracts. The best-performing model, "Gemma-2 finetuned", achieved
91.5% accuracy and an F1 score of 91.4, outperforming GPT4-O by 9.5% accuracy
with 5.9 times faster inference time. From an initial dataset of 107,759
abstracts identified for 50 immunohistochemical markers, the classification
task identified 30,481 relevant abstracts (Include) using the Gemma-2 finetuned
model. For IHC-tumour profile extraction, the Gemma-2 finetuned model achieved
the best performance with 63.3% Correct outputs. Extracted IHC-tumour profiles
(tumour types and markers) were normalised to Unified Medical Language System
(UMLS) concepts to ensure consistency and facilitate IHC-tumour profile
landscape analysis. The extracted IHC-tumour profiles demonstrated excellent
concordance with available online summary data and provided considerable added
value in terms of both missing IHC-tumour profiles and quantitative
assessments. Our proposed LLM based pipeline provides a practical solution for
large-scale IHC-tumour profile data mining, enhancing the accessibility and
utility of such data for research and clinical applications as well as enabling
the generation of quantitative and structured data to support cancer-specific
knowledge base development. Models and training datasets are available at
https://github.com/knowlab/IHC-LLMiner.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 12:58:07 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Kim",
"Yunsoo",
""
],
[
"Ong",
"Michal W. S.",
""
],
[
"Rogalsky",
"Daniel W.",
""
],
[
"Rodriguez-Justo",
"Manuel",
""
],
[
"Wu",
"Honghan",
""
],
[
"Levine",
"Adam P.",
""
]
] | TITLE: IHC-LLMiner: Automated extraction of tumour immunohistochemical profiles
from PubMed abstracts using large language models
ABSTRACT: Immunohistochemistry (IHC) is essential in diagnostic pathology and
biomedical research, offering critical insights into protein expression and
tumour biology. This study presents an automated pipeline, IHC-LLMiner, for
extracting IHC-tumour profiles from PubMed abstracts, leveraging advanced
biomedical text mining. There are two subtasks: abstract classification
(include/exclude as relevant) and IHC-tumour profile extraction on relevant
included abstracts. The best-performing model, "Gemma-2 finetuned", achieved
91.5% accuracy and an F1 score of 91.4, outperforming GPT4-O by 9.5% accuracy
with 5.9 times faster inference time. From an initial dataset of 107,759
abstracts identified for 50 immunohistochemical markers, the classification
task identified 30,481 relevant abstracts (Include) using the Gemma-2 finetuned
model. For IHC-tumour profile extraction, the Gemma-2 finetuned model achieved
the best performance with 63.3% Correct outputs. Extracted IHC-tumour profiles
(tumour types and markers) were normalised to Unified Medical Language System
(UMLS) concepts to ensure consistency and facilitate IHC-tumour profile
landscape analysis. The extracted IHC-tumour profiles demonstrated excellent
concordance with available online summary data and provided considerable added
value in terms of both missing IHC-tumour profiles and quantitative
assessments. Our proposed LLM based pipeline provides a practical solution for
large-scale IHC-tumour profile data mining, enhancing the accessibility and
utility of such data for research and clinical applications as well as enabling
the generation of quantitative and structured data to support cancer-specific
knowledge base development. Models and training datasets are available at
https://github.com/knowlab/IHC-LLMiner.
| no_new_dataset | 0.948775 |