id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (list, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.02064 | Rustin Soraki | Rustin Soraki, Huayu Wang, Joann G. Elmore, Linda Shapiro | CrossFusion: A Multi-Scale Cross-Attention Convolutional Fusion Model
for Cancer Survival Prediction | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Cancer survival prediction from whole slide images (WSIs) is a challenging
task in computational pathology due to the large size, irregular shape, and
high granularity of the WSIs. These characteristics make it difficult to
capture the full spectrum of patterns, from subtle cellular abnormalities to
complex tissue interactions, which are crucial for accurate prognosis. To
address this, we propose CrossFusion, a novel multi-scale feature integration
framework that extracts and fuses information from patches across different
magnification levels. By effectively modeling both scale-specific patterns and
their interactions, CrossFusion generates a rich feature set that enhances
survival prediction accuracy. We validate our approach across six cancer types
from public datasets, demonstrating significant improvements over existing
state-of-the-art methods. Moreover, when coupled with domain-specific feature
extraction backbones, our method shows further gains in prognostic performance
compared to general-purpose backbones. The source code is available at:
https://github.com/RustinS/CrossFusion
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 21:34:52 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Soraki",
"Rustin",
""
],
[
"Wang",
"Huayu",
""
],
[
"Elmore",
"Joann G.",
""
],
[
"Shapiro",
"Linda",
""
]
]
| TITLE: CrossFusion: A Multi-Scale Cross-Attention Convolutional Fusion Model
for Cancer Survival Prediction
ABSTRACT: Cancer survival prediction from whole slide images (WSIs) is a challenging
task in computational pathology due to the large size, irregular shape, and
high granularity of the WSIs. These characteristics make it difficult to
capture the full spectrum of patterns, from subtle cellular abnormalities to
complex tissue interactions, which are crucial for accurate prognosis. To
address this, we propose CrossFusion, a novel multi-scale feature integration
framework that extracts and fuses information from patches across different
magnification levels. By effectively modeling both scale-specific patterns and
their interactions, CrossFusion generates a rich feature set that enhances
survival prediction accuracy. We validate our approach across six cancer types
from public datasets, demonstrating significant improvements over existing
state-of-the-art methods. Moreover, when coupled with domain-specific feature
extraction backbones, our method shows further gains in prognostic performance
compared to general-purpose backbones. The source code is available at:
https://github.com/RustinS/CrossFusion
| no_new_dataset | 0.948775 |
2503.02092 | Ayush Gaggar | Ayush Gaggar and Todd D. Murphey | Data Augmentation for NeRFs in the Low Data Limit | To be published in 2025 IEEE International Conference on Robotics and
Automation (ICRA 2025) | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Current methods based on Neural Radiance Fields fail in the low data limit,
particularly when training on incomplete scene data. Prior works augment
training data only in next-best-view applications, which lead to hallucinations
and model collapse with sparse data. In contrast, we propose adding a set of
views during training by rejection sampling from a posterior uncertainty
distribution, generated by combining a volumetric uncertainty estimator with
spatial coverage. We validate our results on partially observed scenes; on
average, our method performs 39.9% better with 87.5% less variability across
established scene reconstruction benchmarks, as compared to state-of-the-art
baselines. We further demonstrate that augmenting the training set by sampling
from any distribution leads to better, more consistent scene reconstruction in
sparse environments. This work is foundational for robotic tasks where
augmenting a dataset with informative data is critical in resource-constrained,
a priori unknown environments. Videos and source code are available at
https://murpheylab.github.io/low-data-nerf/.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 22:23:57 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Gaggar",
"Ayush",
""
],
[
"Murphey",
"Todd D.",
""
]
]
| TITLE: Data Augmentation for NeRFs in the Low Data Limit
ABSTRACT: Current methods based on Neural Radiance Fields fail in the low data limit,
particularly when training on incomplete scene data. Prior works augment
training data only in next-best-view applications, which lead to hallucinations
and model collapse with sparse data. In contrast, we propose adding a set of
views during training by rejection sampling from a posterior uncertainty
distribution, generated by combining a volumetric uncertainty estimator with
spatial coverage. We validate our results on partially observed scenes; on
average, our method performs 39.9% better with 87.5% less variability across
established scene reconstruction benchmarks, as compared to state-of-the-art
baselines. We further demonstrate that augmenting the training set by sampling
from any distribution leads to better, more consistent scene reconstruction in
sparse environments. This work is foundational for robotic tasks where
augmenting a dataset with informative data is critical in resource-constrained,
a priori unknown environments. Videos and source code are available at
https://murpheylab.github.io/low-data-nerf/.
| no_new_dataset | 0.954563 |
2503.02093 | Emam Hossain | Emam Hossain, Muhammad Hasan Ferdous, Jianwu Wang, Aneesh Subramanian,
Md Osman Gani | Correlation to Causation: A Causal Deep Learning Framework for Arctic
Sea Ice Prediction | Accepted for Publication in Causal AI for Robust Decision Making
(CARD) Workshop in the International Conference on Pervasive Computing and
Communications (PerCom 2025) | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional machine learning and deep learning techniques rely on
correlation-based learning, often failing to distinguish spurious associations
from true causal relationships, which limits robustness, interpretability, and
generalizability. To address these challenges, we propose a causality-driven
deep learning framework that integrates Multivariate Granger Causality (MVGC)
and PCMCI+ causal discovery algorithms with a hybrid deep learning
architecture. Using 43 years (1979-2021) of daily and monthly Arctic Sea Ice
Extent (SIE) and ocean-atmospheric datasets, our approach identifies causally
significant factors, prioritizes features with direct influence, reduces
feature overhead, and improves computational efficiency. Experiments
demonstrate that integrating causal features enhances the deep learning model's
predictive accuracy and interpretability across multiple lead times. Beyond SIE
prediction, the proposed framework offers a scalable solution for dynamic,
high-dimensional systems, advancing both theoretical understanding and
practical applications in predictive modeling.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 22:24:14 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Hossain",
"Emam",
""
],
[
"Ferdous",
"Muhammad Hasan",
""
],
[
"Wang",
"Jianwu",
""
],
[
"Subramanian",
"Aneesh",
""
],
[
"Gani",
"Md Osman",
""
]
]
| TITLE: Correlation to Causation: A Causal Deep Learning Framework for Arctic
Sea Ice Prediction
ABSTRACT: Traditional machine learning and deep learning techniques rely on
correlation-based learning, often failing to distinguish spurious associations
from true causal relationships, which limits robustness, interpretability, and
generalizability. To address these challenges, we propose a causality-driven
deep learning framework that integrates Multivariate Granger Causality (MVGC)
and PCMCI+ causal discovery algorithms with a hybrid deep learning
architecture. Using 43 years (1979-2021) of daily and monthly Arctic Sea Ice
Extent (SIE) and ocean-atmospheric datasets, our approach identifies causally
significant factors, prioritizes features with direct influence, reduces
feature overhead, and improves computational efficiency. Experiments
demonstrate that integrating causal features enhances the deep learning model's
predictive accuracy and interpretability across multiple lead times. Beyond SIE
prediction, the proposed framework offers a scalable solution for dynamic,
high-dimensional systems, advancing both theoretical understanding and
practical applications in predictive modeling.
| no_new_dataset | 0.948775 |
2503.02104 | Xiangrui Liu | Xiangrui Liu, Yuanyuan Zhang, Yingzhou Lu, Changchang Yin, Xiaoling
Hu, Xiaoou Liu, Lulu Chen, Sheng Wang, Alexander Rodriguez, Huaxiu Yao,
Yezhou Yang, Ping Zhang, Jintai Chen, Tianfan Fu, and Xiao Wang | Biomedical Foundation Model: A Survey | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Foundation models, first introduced in 2021, are large-scale pre-trained
models (e.g., large language models (LLMs) and vision-language models (VLMs))
that learn from extensive unlabeled datasets through unsupervised methods,
enabling them to excel in diverse downstream tasks. These models, like GPT, can
be adapted to various applications such as question answering and visual
understanding, outperforming task-specific AI models and earning their name due
to broad applicability across fields. The development of biomedical foundation
models marks a significant milestone in leveraging artificial intelligence (AI)
to understand complex biological phenomena and advance medical research and
practice. This survey explores the potential of foundation models across
diverse domains within biomedical fields, including computational biology, drug
discovery and development, clinical informatics, medical imaging, and public
health. The purpose of this survey is to inspire ongoing research in the
application of foundation models to health science.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 22:42:00 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Liu",
"Xiangrui",
""
],
[
"Zhang",
"Yuanyuan",
""
],
[
"Lu",
"Yingzhou",
""
],
[
"Yin",
"Changchang",
""
],
[
"Hu",
"Xiaoling",
""
],
[
"Liu",
"Xiaoou",
""
],
[
"Chen",
"Lulu",
""
],
[
"Wang",
"Sheng",
""
],
[
"Rodriguez",
"Alexander",
""
],
[
"Yao",
"Huaxiu",
""
],
[
"Yang",
"Yezhou",
""
],
[
"Zhang",
"Ping",
""
],
[
"Chen",
"Jintai",
""
],
[
"Fu",
"Tianfan",
""
],
[
"Wang",
"Xiao",
""
]
]
| TITLE: Biomedical Foundation Model: A Survey
ABSTRACT: Foundation models, first introduced in 2021, are large-scale pre-trained
models (e.g., large language models (LLMs) and vision-language models (VLMs))
that learn from extensive unlabeled datasets through unsupervised methods,
enabling them to excel in diverse downstream tasks. These models, like GPT, can
be adapted to various applications such as question answering and visual
understanding, outperforming task-specific AI models and earning their name due
to broad applicability across fields. The development of biomedical foundation
models marks a significant milestone in leveraging artificial intelligence (AI)
to understand complex biological phenomena and advance medical research and
practice. This survey explores the potential of foundation models across
diverse domains within biomedical fields, including computational biology, drug
discovery and development, clinical informatics, medical imaging, and public
health. The purpose of this survey is to inspire ongoing research in the
application of foundation models to health science.
| no_new_dataset | 0.946843 |
2503.02114 | Bartlomiej Surma | Bartlomiej Surma, Michael Backes, Yang Zhang | Fairness and/or Privacy on Social Graphs | null | null | null | null | cs.LG cs.CY cs.SI | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) have shown remarkable success in various
graph-based learning tasks. However, recent studies have raised concerns about
fairness and privacy issues in GNNs, highlighting the potential for biased or
discriminatory outcomes and the vulnerability of sensitive information. This
paper presents a comprehensive investigation of fairness and privacy in GNNs,
exploring the impact of various fairness-preserving measures on model
performance. We conduct experiments across diverse datasets and evaluate the
effectiveness of different fairness interventions. Our analysis considers the
trade-offs between fairness, privacy, and accuracy, providing insights into the
challenges and opportunities in achieving both fair and private graph learning.
The results highlight the importance of carefully selecting and combining
fairness-preserving measures based on the specific characteristics of the data
and the desired fairness objectives. This study contributes to a deeper
understanding of the complex interplay between fairness, privacy, and accuracy
in GNNs, paving the way for the development of more robust and ethical graph
learning models.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 22:56:32 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Surma",
"Bartlomiej",
""
],
[
"Backes",
"Michael",
""
],
[
"Zhang",
"Yang",
""
]
]
| TITLE: Fairness and/or Privacy on Social Graphs
ABSTRACT: Graph Neural Networks (GNNs) have shown remarkable success in various
graph-based learning tasks. However, recent studies have raised concerns about
fairness and privacy issues in GNNs, highlighting the potential for biased or
discriminatory outcomes and the vulnerability of sensitive information. This
paper presents a comprehensive investigation of fairness and privacy in GNNs,
exploring the impact of various fairness-preserving measures on model
performance. We conduct experiments across diverse datasets and evaluate the
effectiveness of different fairness interventions. Our analysis considers the
trade-offs between fairness, privacy, and accuracy, providing insights into the
challenges and opportunities in achieving both fair and private graph learning.
The results highlight the importance of carefully selecting and combining
fairness-preserving measures based on the specific characteristics of the data
and the desired fairness objectives. This study contributes to a deeper
understanding of the complex interplay between fairness, privacy, and accuracy
in GNNs, paving the way for the development of more robust and ethical graph
learning models.
| no_new_dataset | 0.951097 |
2503.02123 | Danial Chitnis Dr | Emmanuel A. Olowe and Danial Chitnis | TMIQ: Quantifying Test and Measurement Domain Intelligence in Large
Language Models | accepted in IEEE I2MTC 2025 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The Test and Measurement domain, known for its strict requirements for
accuracy and efficiency, is increasingly adopting Generative AI technologies to
enhance the performance of data analysis, automation, and decision-making
processes. Among these, Large Language Models (LLMs) show significant promise
for advancing automation and precision in testing. However, the evaluation of
LLMs in this specialized area remains insufficiently explored. To address this
gap, we introduce the Test and Measurement Intelligence Quotient (TMIQ), a
benchmark designed to quantitatively assess LLMs across a wide range of
electronic engineering tasks. TMIQ offers a comprehensive set of scenarios and
metrics for detailed evaluation, including SCPI command matching accuracy,
ranked response evaluation, Chain-of-Thought Reasoning (CoT), and the impact of
output formatting variations required by LLMs on performance. In testing
various LLMs, our findings indicate varying levels of proficiency, with exact
SCPI command match accuracy ranging from around 56% to 73%, and ranked matching
first-position scores achieving around 33% for the best-performing model. We
also assess token usage, cost-efficiency, and response times, identifying
trade-offs between accuracy and operational efficiency. Additionally, we
present a command-line interface (CLI) tool that enables users to generate
datasets using the same methodology, allowing for tailored assessments of LLMs.
TMIQ and the CLI tool provide a rigorous, reproducible means of evaluating LLMs
for production environments, facilitating continuous monitoring and identifying
strengths and areas for improvement, and driving innovation in their selections
for applications within the Test and Measurement industry.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 23:12:49 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Olowe",
"Emmanuel A.",
""
],
[
"Chitnis",
"Danial",
""
]
]
| TITLE: TMIQ: Quantifying Test and Measurement Domain Intelligence in Large
Language Models
ABSTRACT: The Test and Measurement domain, known for its strict requirements for
accuracy and efficiency, is increasingly adopting Generative AI technologies to
enhance the performance of data analysis, automation, and decision-making
processes. Among these, Large Language Models (LLMs) show significant promise
for advancing automation and precision in testing. However, the evaluation of
LLMs in this specialized area remains insufficiently explored. To address this
gap, we introduce the Test and Measurement Intelligence Quotient (TMIQ), a
benchmark designed to quantitatively assess LLMs across a wide range of
electronic engineering tasks. TMIQ offers a comprehensive set of scenarios and
metrics for detailed evaluation, including SCPI command matching accuracy,
ranked response evaluation, Chain-of-Thought Reasoning (CoT), and the impact of
output formatting variations required by LLMs on performance. In testing
various LLMs, our findings indicate varying levels of proficiency, with exact
SCPI command match accuracy ranging from around 56% to 73%, and ranked matching
first-position scores achieving around 33% for the best-performing model. We
also assess token usage, cost-efficiency, and response times, identifying
trade-offs between accuracy and operational efficiency. Additionally, we
present a command-line interface (CLI) tool that enables users to generate
datasets using the same methodology, allowing for tailored assessments of LLMs.
TMIQ and the CLI tool provide a rigorous, reproducible means of evaluating LLMs
for production environments, facilitating continuous monitoring and identifying
strengths and areas for improvement, and driving innovation in their selections
for applications within the Test and Measurement industry.
| no_new_dataset | 0.942612 |
2503.02127 | Qifan Fu | Qifan Fu, Xu Chen, Muhammad Asad, Shanxin Yuan, Changjae Oh and
Gregory Slabaugh | HanDrawer: Leveraging Spatial Information to Render Realistic Hands
Using a Conditional Diffusion Model in Single Stage | 9 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Although diffusion methods excel in text-to-image generation, generating
accurate hand gestures remains a major challenge, resulting in severe
artifacts, such as an incorrect number of fingers or unnatural gestures. To enable
the diffusion model to learn spatial information to improve the quality of the
hands generated, we propose HanDrawer, a module to condition the hand
generation process. Specifically, we apply graph convolutional layers to
extract the endogenous spatial structure and physical constraints implicit in
MANO hand mesh vertices. We then align and fuse these spatial features with
other modalities via cross-attention. The spatially fused features are used to
guide a single stage diffusion model denoising process for high quality
generation of the hand region. To improve the accuracy of spatial feature
fusion, we propose a Position-Preserving Zero Padding (PPZP) fusion strategy,
which ensures that the features extracted by HanDrawer are fused into the
region of interest in the relevant layers of the diffusion model. HanDrawer
learns the entire image features while paying special attention to the hand
region thanks to an additional hand reconstruction loss combined with the
denoising loss. To accurately train and evaluate our approach, we perform
careful cleansing and relabeling of the widely used HaGRID hand gesture dataset
and obtain high quality multimodal data. Quantitative and qualitative analyses
demonstrate the state-of-the-art performance of our method on the HaGRID
dataset through multiple evaluation metrics. Source code and our enhanced
dataset will be released publicly if the paper is accepted.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 23:29:33 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Fu",
"Qifan",
""
],
[
"Chen",
"Xu",
""
],
[
"Asad",
"Muhammad",
""
],
[
"Yuan",
"Shanxin",
""
],
[
"Oh",
"Changjae",
""
],
[
"Slabaugh",
"Gregory",
""
]
]
| TITLE: HanDrawer: Leveraging Spatial Information to Render Realistic Hands
Using a Conditional Diffusion Model in Single Stage
ABSTRACT: Although diffusion methods excel in text-to-image generation, generating
accurate hand gestures remains a major challenge, resulting in severe
artifacts, such as an incorrect number of fingers or unnatural gestures. To enable
the diffusion model to learn spatial information to improve the quality of the
hands generated, we propose HanDrawer, a module to condition the hand
generation process. Specifically, we apply graph convolutional layers to
extract the endogenous spatial structure and physical constraints implicit in
MANO hand mesh vertices. We then align and fuse these spatial features with
other modalities via cross-attention. The spatially fused features are used to
guide a single stage diffusion model denoising process for high quality
generation of the hand region. To improve the accuracy of spatial feature
fusion, we propose a Position-Preserving Zero Padding (PPZP) fusion strategy,
which ensures that the features extracted by HanDrawer are fused into the
region of interest in the relevant layers of the diffusion model. HanDrawer
learns the entire image features while paying special attention to the hand
region thanks to an additional hand reconstruction loss combined with the
denoising loss. To accurately train and evaluate our approach, we perform
careful cleansing and relabeling of the widely used HaGRID hand gesture dataset
and obtain high quality multimodal data. Quantitative and qualitative analyses
demonstrate the state-of-the-art performance of our method on the HaGRID
dataset through multiple evaluation metrics. Source code and our enhanced
dataset will be released publicly if the paper is accepted.
| no_new_dataset | 0.953013 |
2503.02128 | Isaac Corley | Isaac Corley, Conor Wallace, Sourav Agrawal, Burton Putrah and
Jonathan Lwowski | Aerial Infrared Health Monitoring of Solar Photovoltaic Farms at Scale | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Solar photovoltaic (PV) farms represent a major source of global renewable
energy generation, yet their true operational efficiency often remains unknown
at scale. In this paper, we present a comprehensive, data-driven framework for
large-scale airborne infrared inspection of North American solar installations.
Leveraging high-resolution thermal imagery, we construct and curate a
geographically diverse dataset encompassing thousands of PV sites, enabling
machine learning-based detection and localization of defects that are not
detectable in the visible spectrum. Our pipeline integrates advanced image
processing, georeferencing, and airborne thermal infrared anomaly detection to
provide rigorous estimates of performance losses. We highlight practical
considerations in aerial data collection, annotation methodologies, and model
deployment across a wide range of environmental and operational conditions. Our
work delivers new insights into the reliability of large-scale solar assets and
serves as a foundation for ongoing research on performance trends, predictive
maintenance, and scalable analytics in the renewable energy sector.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 23:32:21 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Corley",
"Isaac",
""
],
[
"Wallace",
"Conor",
""
],
[
"Agrawal",
"Sourav",
""
],
[
"Putrah",
"Burton",
""
],
[
"Lwowski",
"Jonathan",
""
]
]
| TITLE: Aerial Infrared Health Monitoring of Solar Photovoltaic Farms at Scale
ABSTRACT: Solar photovoltaic (PV) farms represent a major source of global renewable
energy generation, yet their true operational efficiency often remains unknown
at scale. In this paper, we present a comprehensive, data-driven framework for
large-scale airborne infrared inspection of North American solar installations.
Leveraging high-resolution thermal imagery, we construct and curate a
geographically diverse dataset encompassing thousands of PV sites, enabling
machine learning-based detection and localization of defects that are not
detectable in the visible spectrum. Our pipeline integrates advanced image
processing, georeferencing, and airborne thermal infrared anomaly detection to
provide rigorous estimates of performance losses. We highlight practical
considerations in aerial data collection, annotation methodologies, and model
deployment across a wide range of environmental and operational conditions. Our
work delivers new insights into the reliability of large-scale solar assets and
serves as a foundation for ongoing research on performance trends, predictive
maintenance, and scalable analytics in the renewable energy sector.
| new_dataset | 0.953837 |
2503.02132 | Allassan Tchangmena A Nken | Allassan Tchangmena A Nken, Susan Mckeever, Peter Corcoran, Ihsan
Ullah | Video-DPRP: A Differentially Private Approach for Visual
Privacy-Preserving Video Human Activity Recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Considerable effort has been made in privacy-preserving video human activity
recognition (HAR). Two primary approaches to ensure privacy preservation in
Video HAR are differential privacy (DP) and visual privacy. Techniques
enforcing DP during training provide strong theoretical privacy guarantees but
offer limited capabilities for visual privacy assessment. Conversely, methods
such as low-resolution transformations, data obfuscation, and adversarial
networks emphasize visual privacy but lack clear theoretical privacy
assurances. In this work, we focus on two main objectives: (1) leveraging DP
properties to develop a model-free approach for visual privacy in videos and
(2) evaluating our proposed technique using both differential privacy and
visual privacy assessments on HAR tasks. To achieve goal (1), we introduce
Video-DPRP: a Video-sample-wise Differentially Private Random Projection
framework for privacy-preserved video reconstruction for HAR. By using random
projections, noise matrices and right singular vectors derived from the
singular value decomposition of videos, Video-DPRP reconstructs DP videos using
privacy parameters ($\epsilon,\delta$) while enabling visual privacy
assessment. For goal (2), using UCF101 and HMDB51 datasets, we compare
Video-DPRP's performance on activity recognition with traditional DP methods,
and state-of-the-art (SOTA) visual privacy-preserving techniques. Additionally,
we assess its effectiveness in preserving privacy-related attributes such as
facial features, gender, and skin color, using the PA-HMDB and VISPR datasets.
Video-DPRP combines privacy-preservation from both a DP and visual privacy
perspective unlike SOTA methods that typically address only one of these
aspects.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 23:43:12 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Nken",
"Allassan Tchangmena A",
""
],
[
"Mckeever",
"Susan",
""
],
[
"Corcoran",
"Peter",
""
],
[
"Ullah",
"Ihsan",
""
]
]
| TITLE: Video-DPRP: A Differentially Private Approach for Visual
Privacy-Preserving Video Human Activity Recognition
ABSTRACT: Considerable effort has been made in privacy-preserving video human activity
recognition (HAR). Two primary approaches to ensure privacy preservation in
Video HAR are differential privacy (DP) and visual privacy. Techniques
enforcing DP during training provide strong theoretical privacy guarantees but
offer limited capabilities for visual privacy assessment. Conversely, methods
such as low-resolution transformations, data obfuscation, and adversarial
networks emphasize visual privacy but lack clear theoretical privacy
assurances. In this work, we focus on two main objectives: (1) leveraging DP
properties to develop a model-free approach for visual privacy in videos and
(2) evaluating our proposed technique using both differential privacy and
visual privacy assessments on HAR tasks. To achieve goal (1), we introduce
Video-DPRP: a Video-sample-wise Differentially Private Random Projection
framework for privacy-preserved video reconstruction for HAR. By using random
projections, noise matrices and right singular vectors derived from the
singular value decomposition of videos, Video-DPRP reconstructs DP videos using
privacy parameters ($\epsilon,\delta$) while enabling visual privacy
assessment. For goal (2), using UCF101 and HMDB51 datasets, we compare
Video-DPRP's performance on activity recognition with traditional DP methods,
and state-of-the-art (SOTA) visual privacy-preserving techniques. Additionally,
we assess its effectiveness in preserving privacy-related attributes such as
facial features, gender, and skin color, using the PA-HMDB and VISPR datasets.
Video-DPRP combines privacy-preservation from both a DP and visual privacy
perspective unlike SOTA methods that typically address only one of these
aspects.
| no_new_dataset | 0.948442 |
2503.02141 | Huthaifa I. Ashqar | Ahmad Antari, Yazan Abo-Aisheh, Jehad Shamasneh, and Huthaifa I.
Ashqar | Network Traffic Classification Using Machine Learning, Transformer, and
Large Language Models | null | null | null | null | cs.LG cs.CL cs.CR | http://creativecommons.org/licenses/by/4.0/ | This study uses various models to address network traffic classification,
categorizing traffic into web, browsing, IPSec, backup, and email. We collected
a comprehensive dataset from Arbor Edge Defender (AED) devices, comprising
30,959 observations and 19 features. Multiple models were evaluated, including
Naive Bayes, Decision Tree, Random Forest, Gradient Boosting, XGBoost, Deep
Neural Networks (DNN), Transformer, and two Large Language Models (LLMs)
including GPT-4o and Gemini with zero- and few-shot learning. Transformer and
XGBoost showed the best performance, achieving the highest accuracies of 98.95%
and 97.56%, respectively. GPT-4o and Gemini showed promising results with
few-shot learning, improving accuracy significantly from initial zero-shot
performance. While Gemini Few-Shot and GPT-4o Few-Shot performed well in
categories like Web and Email, misclassifications occurred in more complex
categories like IPSec and Backup. The study highlights the importance of model
selection, fine-tuning, and the balance between training data size and model
complexity for achieving reliable classification results.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 00:18:58 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Antari",
"Ahmad",
""
],
[
"Abo-Aisheh",
"Yazan",
""
],
[
"Shamasneh",
"Jehad",
""
],
[
"Ashqar",
"Huthaifa I.",
""
]
]
| TITLE: Network Traffic Classification Using Machine Learning, Transformer, and
Large Language Models
ABSTRACT: This study uses various models to address network traffic classification,
categorizing traffic into web, browsing, IPSec, backup, and email. We collected
a comprehensive dataset from Arbor Edge Defender (AED) devices, comprising
30,959 observations and 19 features. Multiple models were evaluated, including
Naive Bayes, Decision Tree, Random Forest, Gradient Boosting, XGBoost, Deep
Neural Networks (DNN), Transformer, and two Large Language Models (LLMs)
including GPT-4o and Gemini with zero- and few-shot learning. Transformer and
XGBoost showed the best performance, achieving the highest accuracies of 98.95%
and 97.56%, respectively. GPT-4o and Gemini showed promising results with
few-shot learning, improving accuracy significantly from initial zero-shot
performance. While Gemini Few-Shot and GPT-4o Few-Shot performed well in
categories like Web and Email, misclassifications occurred in more complex
categories like IPSec and Backup. The study highlights the importance of model
selection, fine-tuning, and the balance between training data size and model
complexity for achieving reliable classification results.
| new_dataset | 0.893263 |
2503.02144 | Huthaifa I. Ashqar | Areej Dweib, Montaser Tanina, Shehab Alawi, Mohammad Dyab, and
Huthaifa I. Ashqar | Malware Classification from Memory Dumps Using Machine Learning,
Transformers, and Large Language Models | null | null | null | null | cs.LG cs.CL cs.CR | http://creativecommons.org/licenses/by/4.0/ | This study investigates the performance of various classification models for
a malware classification task using different feature sets and data
configurations. Six models-Logistic Regression, K-Nearest Neighbors (KNN),
Support Vector Machines (SVM), Decision Trees, Random Forest (RF), and Extreme
Gradient Boosting (XGB)-were evaluated alongside two deep learning models,
Recurrent Neural Networks (RNN) and Transformers, as well as the Gemini
zero-shot and few-shot learning methods. Four feature sets were tested
including All Features, Literature Review Features, the Top 45 Features from
RF, and Down-Sampled with Top 45 Features. XGB achieved the highest accuracy of
87.42% using the Top 45 Features, outperforming all other models. RF followed
closely with 87.23% accuracy on the same feature set. In contrast, deep
learning models underperformed, with RNN achieving 66.71% accuracy and
Transformers reaching 71.59%. Down-sampling reduced performance across all
models, with XGB dropping to 81.31%. Gemini zero-shot and few-shot learning
approaches showed the lowest performance, with accuracies of 40.65% and 48.65%,
respectively. The results highlight the importance of feature selection in
improving model performance while reducing computational complexity.
Traditional models like XGB and RF demonstrated superior performance, while
deep learning and few-shot methods struggled to match their accuracy. This
study underscores the effectiveness of traditional machine learning models for
structured datasets and provides a foundation for future research into hybrid
approaches and larger datasets.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 00:24:21 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Dweib",
"Areej",
""
],
[
"Tanina",
"Montaser",
""
],
[
"Alawi",
"Shehab",
""
],
[
"Dyab",
"Mohammad",
""
],
[
"Ashqar",
"Huthaifa I.",
""
]
]
| TITLE: Malware Classification from Memory Dumps Using Machine Learning,
Transformers, and Large Language Models
ABSTRACT: This study investigates the performance of various classification models for
a malware classification task using different feature sets and data
configurations. Six models-Logistic Regression, K-Nearest Neighbors (KNN),
Support Vector Machines (SVM), Decision Trees, Random Forest (RF), and Extreme
Gradient Boosting (XGB)-were evaluated alongside two deep learning models,
Recurrent Neural Networks (RNN) and Transformers, as well as the Gemini
zero-shot and few-shot learning methods. Four feature sets were tested
including All Features, Literature Review Features, the Top 45 Features from
RF, and Down-Sampled with Top 45 Features. XGB achieved the highest accuracy of
87.42% using the Top 45 Features, outperforming all other models. RF followed
closely with 87.23% accuracy on the same feature set. In contrast, deep
learning models underperformed, with RNN achieving 66.71% accuracy and
Transformers reaching 71.59%. Down-sampling reduced performance across all
models, with XGB dropping to 81.31%. Gemini zero-shot and few-shot learning
approaches showed the lowest performance, with accuracies of 40.65% and 48.65%,
respectively. The results highlight the importance of feature selection in
improving model performance while reducing computational complexity.
Traditional models like XGB and RF demonstrated superior performance, while
deep learning and few-shot methods struggled to match their accuracy. This
study underscores the effectiveness of traditional machine learning models for
structured datasets and provides a foundation for future research into hybrid
approaches and larger datasets.
| no_new_dataset | 0.950411 |
2503.02152 | Sonia Cromp | Sonia Cromp, Satya Sai Srinath Namburi GNVV, Mohammed Alkhudhayri,
Catherine Cao, Samuel Guo, Nicholas Roberts, Frederic Sala | Tabby: Tabular Data Synthesis with Language Models | 21 pages, 8 figures | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | While advances in large language models (LLMs) have greatly improved the
quality of synthetic text data in recent years, synthesizing tabular data has
received relatively less attention. We address this disparity with Tabby, a
simple but powerful post-training modification to the standard Transformer
language model architecture, enabling its use for tabular dataset synthesis.
Tabby enables the representation of differences across columns using Gated
Mixture-of-Experts, with column-specific sets of parameters. Empirically, Tabby
results in data quality near or equal to that of real data. By pairing our
novel LLM table training technique, Plain, with Tabby, we observe up to a 44%
improvement in quality over previous methods. We also show that Tabby extends
beyond tables to more general structured data, reaching parity with real data
on a nested JSON dataset as well.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 00:32:15 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Cromp",
"Sonia",
""
],
[
"GNVV",
"Satya Sai Srinath Namburi",
""
],
[
"Alkhudhayri",
"Mohammed",
""
],
[
"Cao",
"Catherine",
""
],
[
"Guo",
"Samuel",
""
],
[
"Roberts",
"Nicholas",
""
],
[
"Sala",
"Frederic",
""
]
]
| TITLE: Tabby: Tabular Data Synthesis with Language Models
ABSTRACT: While advances in large language models (LLMs) have greatly improved the
quality of synthetic text data in recent years, synthesizing tabular data has
received relatively less attention. We address this disparity with Tabby, a
simple but powerful post-training modification to the standard Transformer
language model architecture, enabling its use for tabular dataset synthesis.
Tabby enables the representation of differences across columns using Gated
Mixture-of-Experts, with column-specific sets of parameters. Empirically, Tabby
results in data quality near or equal to that of real data. By pairing our
novel LLM table training technique, Plain, with Tabby, we observe up to a 44%
improvement in quality over previous methods. We also show that Tabby extends
beyond tables to more general structured data, reaching parity with real data
on a nested JSON dataset as well.
| no_new_dataset | 0.949763 |
2503.02156 | Stepan Mazokha | Stepan Mazokha, Fanchen Bao, George Sklivanitis, Jason O. Hallstrom | MobRFFI: Non-cooperative Device Re-identification for Mobility
Intelligence | 10 pages, 9 figures, 3 tables | null | null | null | eess.SP cs.AI cs.LG cs.NI | http://creativecommons.org/licenses/by/4.0/ | WiFi-based mobility monitoring in urban environments can provide valuable
insights into pedestrian and vehicle movements. However, MAC address
randomization introduces a significant obstacle in accurately estimating
congestion levels and path trajectories. To this end, we consider radio
frequency fingerprinting and re-identification for attributing WiFi traffic to
emitting devices without the use of MAC addresses.
We present MobRFFI, an AI-based device fingerprinting and re-identification
framework for WiFi networks that leverages an encoder deep learning model to
extract unique features based on WiFi chipset hardware impairments. It is
entirely independent of frame type. When evaluated on the WiFi fingerprinting
dataset WiSig, our approach achieves 94% and 100% device accuracy in multi-day
and single-day re-identification scenarios, respectively.
We also collect a novel dataset, MobRFFI, for granular multi-receiver WiFi
device fingerprinting evaluation. Using the dataset, we demonstrate that the
combination of fingerprints from multiple receivers boosts re-identification
performance from 81% to 100% on a single-day scenario and from 41% to 100% on a
multi-day scenario.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 00:39:50 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Mazokha",
"Stepan",
""
],
[
"Bao",
"Fanchen",
""
],
[
"Sklivanitis",
"George",
""
],
[
"Hallstrom",
"Jason O.",
""
]
]
| TITLE: MobRFFI: Non-cooperative Device Re-identification for Mobility
Intelligence
ABSTRACT: WiFi-based mobility monitoring in urban environments can provide valuable
insights into pedestrian and vehicle movements. However, MAC address
randomization introduces a significant obstacle in accurately estimating
congestion levels and path trajectories. To this end, we consider radio
frequency fingerprinting and re-identification for attributing WiFi traffic to
emitting devices without the use of MAC addresses.
We present MobRFFI, an AI-based device fingerprinting and re-identification
framework for WiFi networks that leverages an encoder deep learning model to
extract unique features based on WiFi chipset hardware impairments. It is
entirely independent of frame type. When evaluated on the WiFi fingerprinting
dataset WiSig, our approach achieves 94% and 100% device accuracy in multi-day
and single-day re-identification scenarios, respectively.
We also collect a novel dataset, MobRFFI, for granular multi-receiver WiFi
device fingerprinting evaluation. Using the dataset, we demonstrate that the
combination of fingerprints from multiple receivers boosts re-identification
performance from 81% to 100% on a single-day scenario and from 41% to 100% on a
multi-day scenario.
| new_dataset | 0.957873 |
2503.02157 | Aofei Chang | Aofei Chang, Le Huang, Parminder Bhatia, Taha Kass-Hout, Fenglong Ma,
Cao Xiao | MedHEval: Benchmarking Hallucinations and Mitigation Strategies in
Medical Large Vision-Language Models | Preprint, under review | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Vision Language Models (LVLMs) are becoming increasingly important in
the medical domain, yet Medical LVLMs (Med-LVLMs) frequently generate
hallucinations due to limited expertise and the complexity of medical
applications. Existing benchmarks fail to effectively evaluate hallucinations
based on their underlying causes and lack assessments of mitigation strategies.
To address this gap, we introduce MedHEval, a novel benchmark that
systematically evaluates hallucinations and mitigation strategies in Med-LVLMs
by categorizing them into three underlying causes: visual misinterpretation,
knowledge deficiency, and context misalignment. We construct a diverse set of
close- and open-ended medical VQA datasets with comprehensive evaluation
metrics to assess these hallucination types. We conduct extensive experiments
across 11 popular (Med)-LVLMs and evaluate 7 state-of-the-art hallucination
mitigation techniques. Results reveal that Med-LVLMs struggle with
hallucinations arising from different causes while existing mitigation methods
show limited effectiveness, especially for knowledge- and context-based errors.
These findings underscore the need for improved alignment training and
specialized mitigation strategies to enhance Med-LVLMs' reliability. MedHEval
establishes a standardized framework for evaluating and mitigating medical
hallucinations, guiding the development of more trustworthy Med-LVLMs.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 00:40:09 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Chang",
"Aofei",
""
],
[
"Huang",
"Le",
""
],
[
"Bhatia",
"Parminder",
""
],
[
"Kass-Hout",
"Taha",
""
],
[
"Ma",
"Fenglong",
""
],
[
"Xiao",
"Cao",
""
]
]
| TITLE: MedHEval: Benchmarking Hallucinations and Mitigation Strategies in
Medical Large Vision-Language Models
ABSTRACT: Large Vision Language Models (LVLMs) are becoming increasingly important in
the medical domain, yet Medical LVLMs (Med-LVLMs) frequently generate
hallucinations due to limited expertise and the complexity of medical
applications. Existing benchmarks fail to effectively evaluate hallucinations
based on their underlying causes and lack assessments of mitigation strategies.
To address this gap, we introduce MedHEval, a novel benchmark that
systematically evaluates hallucinations and mitigation strategies in Med-LVLMs
by categorizing them into three underlying causes: visual misinterpretation,
knowledge deficiency, and context misalignment. We construct a diverse set of
close- and open-ended medical VQA datasets with comprehensive evaluation
metrics to assess these hallucination types. We conduct extensive experiments
across 11 popular (Med)-LVLMs and evaluate 7 state-of-the-art hallucination
mitigation techniques. Results reveal that Med-LVLMs struggle with
hallucinations arising from different causes while existing mitigation methods
show limited effectiveness, especially for knowledge- and context-based errors.
These findings underscore the need for improved alignment training and
specialized mitigation strategies to enhance Med-LVLMs' reliability. MedHEval
establishes a standardized framework for evaluating and mitigating medical
hallucinations, guiding the development of more trustworthy Med-LVLMs.
| new_dataset | 0.904102 |
2503.02161 | Yunbo Long | Yunbo Long, Liming Xu, Alexandra Brintrup | LLM-TabFlow: Synthetic Tabular Data Generation with Inter-column Logical
Relationship Preservation | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Synthetic tabular data have widespread applications in industrial domains
such as healthcare, finance, and supply chains, owing to their potential to
protect privacy and mitigate data scarcity. However, generating realistic
synthetic tabular data while preserving inter-column logical relationships
remains a significant challenge for the existing generative models. To address
these challenges, we propose LLM-TabFlow, a novel approach that leverages Large
Language Model (LLM) reasoning to capture complex inter-column relationships
and compress tabular data, while using Score-based Diffusion to model the
distribution of the compressed data in latent space. Additionally, we introduce
an evaluation framework, which is absent in the literature, to fairly assess the
performance of synthetic tabular data generation methods in real-world
contexts. Using this framework, we conduct extensive experiments on two
real-world industrial datasets, evaluating LLM-TabFlow against five other
baseline methods, including SMOTE (an interpolation-based approach) and other
state-of-the-art generative models. Our results show that LLM-TabFlow
outperforms all baselines, fully preserving inter-column relationships while
achieving the best balance between data fidelity, utility, and privacy. This
study is the first to explicitly address inter-column relationship preservation
in synthetic tabular data generation, offering new insights for developing more
realistic and reliable tabular data generation methods.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 00:47:52 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Long",
"Yunbo",
""
],
[
"Xu",
"Liming",
""
],
[
"Brintrup",
"Alexandra",
""
]
]
| TITLE: LLM-TabFlow: Synthetic Tabular Data Generation with Inter-column Logical
Relationship Preservation
ABSTRACT: Synthetic tabular data have widespread applications in industrial domains
such as healthcare, finance, and supply chains, owing to their potential to
protect privacy and mitigate data scarcity. However, generating realistic
synthetic tabular data while preserving inter-column logical relationships
remains a significant challenge for the existing generative models. To address
these challenges, we propose LLM-TabFlow, a novel approach that leverages Large
Language Model (LLM) reasoning to capture complex inter-column relationships
and compress tabular data, while using Score-based Diffusion to model the
distribution of the compressed data in latent space. Additionally, we introduce
an evaluation framework, which is absent in the literature, to fairly assess the
performance of synthetic tabular data generation methods in real-world
contexts. Using this framework, we conduct extensive experiments on two
real-world industrial datasets, evaluating LLM-TabFlow against five other
baseline methods, including SMOTE (an interpolation-based approach) and other
state-of-the-art generative models. Our results show that LLM-TabFlow
outperforms all baselines, fully preserving inter-column relationships while
achieving the best balance between data fidelity, utility, and privacy. This
study is the first to explicitly address inter-column relationship preservation
in synthetic tabular data generation, offering new insights for developing more
realistic and reliable tabular data generation methods.
| no_new_dataset | 0.946843 |
2503.02170 | Eunsu Baek | Eunsu Baek, Sunghwan Han, Taesik Gong and Hyung-Sin Kim | Adaptive Camera Sensor for Vision Models | The International Conference on Learning Representations (ICLR 2025) | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Domain shift remains a persistent challenge in deep-learning-based computer
vision, often requiring extensive model modifications or large labeled datasets
to address. Inspired by human visual perception, which adjusts input quality
through corrective lenses rather than over-training the brain, we propose Lens,
a novel camera sensor control method that enhances model performance by
capturing high-quality images from the model's perspective rather than relying
on traditional human-centric sensor control. Lens is lightweight and adapts
sensor parameters to specific models and scenes in real-time. At its core, Lens
utilizes VisiT, a training-free, model-specific quality indicator that
evaluates individual unlabeled samples at test time using confidence scores
without additional adaptation costs. To validate Lens, we introduce ImageNet-ES
Diverse, a new benchmark dataset capturing natural perturbations from varying
sensor and lighting conditions. Extensive experiments on both ImageNet-ES and
our new ImageNet-ES Diverse show that Lens significantly improves model
accuracy across various baseline schemes for sensor control and model
modification while maintaining low latency in image captures. Lens effectively
compensates for large model size differences and integrates synergistically
with model improvement techniques. Our code and dataset are available at
github.com/Edw2n/Lens.git.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 01:20:23 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Baek",
"Eunsu",
""
],
[
"Han",
"Sunghwan",
""
],
[
"Gong",
"Taesik",
""
],
[
"Kim",
"Hyung-Sin",
""
]
]
| TITLE: Adaptive Camera Sensor for Vision Models
ABSTRACT: Domain shift remains a persistent challenge in deep-learning-based computer
vision, often requiring extensive model modifications or large labeled datasets
to address. Inspired by human visual perception, which adjusts input quality
through corrective lenses rather than over-training the brain, we propose Lens,
a novel camera sensor control method that enhances model performance by
capturing high-quality images from the model's perspective rather than relying
on traditional human-centric sensor control. Lens is lightweight and adapts
sensor parameters to specific models and scenes in real-time. At its core, Lens
utilizes VisiT, a training-free, model-specific quality indicator that
evaluates individual unlabeled samples at test time using confidence scores
without additional adaptation costs. To validate Lens, we introduce ImageNet-ES
Diverse, a new benchmark dataset capturing natural perturbations from varying
sensor and lighting conditions. Extensive experiments on both ImageNet-ES and
our new ImageNet-ES Diverse show that Lens significantly improves model
accuracy across various baseline schemes for sensor control and model
modification while maintaining low latency in image captures. Lens effectively
compensates for large model size differences and integrates synergistically
with model improvement techniques. Our code and dataset are available at
github.com/Edw2n/Lens.git.
| new_dataset | 0.960137 |
2503.02174 | Zilei Shao | Renato Lui Geh, Zilei Shao, Guy Van den Broeck | Adversarial Tokenization | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Current LLM pipelines account for only one possible tokenization for a given
string, ignoring exponentially many alternative tokenizations during training
and inference. For example, the standard Llama3 tokenization of penguin is
[p,enguin], yet [peng,uin] is another perfectly valid alternative. In this
paper, we show that despite LLMs being trained solely on one tokenization, they
still retain semantic understanding of other tokenizations, raising questions
about their implications in LLM safety. Put succinctly, we answer the following
question: can we adversarially tokenize an obviously malicious string to evade
safety and alignment restrictions? We show that not only is adversarial
tokenization an effective yet previously neglected axis of attack, but it is
also competitive against existing state-of-the-art adversarial approaches
without changing the text of the harmful request. We empirically validate this
exploit across three state-of-the-art LLMs and adversarial datasets, revealing
a previously unknown vulnerability in subword models.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 01:31:17 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Geh",
"Renato Lui",
""
],
[
"Shao",
"Zilei",
""
],
[
"Broeck",
"Guy Van den",
""
]
]
| TITLE: Adversarial Tokenization
ABSTRACT: Current LLM pipelines account for only one possible tokenization for a given
string, ignoring exponentially many alternative tokenizations during training
and inference. For example, the standard Llama3 tokenization of penguin is
[p,enguin], yet [peng,uin] is another perfectly valid alternative. In this
paper, we show that despite LLMs being trained solely on one tokenization, they
still retain semantic understanding of other tokenizations, raising questions
about their implications in LLM safety. Put succinctly, we answer the following
question: can we adversarially tokenize an obviously malicious string to evade
safety and alignment restrictions? We show that not only is adversarial
tokenization an effective yet previously neglected axis of attack, but it is
also competitive against existing state-of-the-art adversarial approaches
without changing the text of the harmful request. We empirically validate this
exploit across three state-of-the-art LLMs and adversarial datasets, revealing
a previously unknown vulnerability in subword models.
| no_new_dataset | 0.94474 |
2503.02180 | Yu Zhang | Da Wang, Yu Zhang, Kai Zhang, Junqing Li, Dengwang Li | Discrete Differential Evolution Particle Swarm Optimization Algorithm
for Energy Saving Flexible Job Shop Scheduling Problem Considering Machine
Multi States | null | null | null | null | cs.NE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the continuous deepening of low-carbon emission reduction policies, the
manufacturing industries urgently need sensible energy-saving scheduling
schemes to achieve the balance between improving production efficiency and
reducing energy consumption. In energy-saving scheduling, reasonable machine
states-switching is a key point to achieve expected goals, i.e., whether the
machines need to switch speed between different operations, and whether the
machines need to add extra setup time between different jobs. Regarding this
matter, this work proposes a novel machine multi states-based energy saving
flexible job scheduling problem (EFJSP-M), which simultaneously takes into
account machine multi speeds and setup time. To address the proposed EFJSP-M, a
discrete differential evolution particle swarm optimization algorithm
(D-DEPSO) is designed. Specifically, D-DEPSO includes a hybrid initialization
strategy to improve the initial population performance, an updating mechanism
embedded with differential evolution operators to enhance population diversity,
and a critical path variable neighborhood search strategy to expand the
solution space. At last, based on datasets DPs and MKs, the experiment results
compared with five state-of-the-art algorithms demonstrate the feasible of
EFJSP-M and the superior of D-DEPSO.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 01:40:24 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wang",
"Da",
""
],
[
"Zhang",
"Yu",
""
],
[
"Zhang",
"Kai",
""
],
[
"Li",
"Junqing",
""
],
[
"Li",
"Dengwang",
""
]
]
| TITLE: Discrete Differential Evolution Particle Swarm Optimization Algorithm
for Energy Saving Flexible Job Shop Scheduling Problem Considering Machine
Multi States
ABSTRACT: With the continuous deepening of low-carbon emission reduction policies, the
manufacturing industries urgently need sensible energy-saving scheduling
schemes to achieve a balance between improving production efficiency and
reducing energy consumption. In energy-saving scheduling, reasonable machine
state switching is a key point for achieving the expected goals, i.e., whether
the machines need to switch speed between different operations, and whether the
machines need to add extra setup time between different jobs. Regarding this
matter, this work proposes a novel machine multi-state-based energy-saving
flexible job shop scheduling problem (EFJSP-M), which simultaneously takes into
account multiple machine speeds and setup time. To address the proposed
EFJSP-M, a discrete differential evolution particle swarm optimization
algorithm (D-DEPSO) is designed. Specifically, D-DEPSO includes a hybrid
initialization strategy to improve the initial population performance, an
updating mechanism embedded with differential evolution operators to enhance
population diversity, and a critical path variable neighborhood search strategy
to expand the solution space. Finally, based on the DPs and MKs datasets, the
experimental results compared with five state-of-the-art algorithms demonstrate
the feasibility of EFJSP-M and the superiority of D-DEPSO.
| no_new_dataset | 0.944995 |
2503.02194 | Sharif S M A | S M A Sharif, Rizwan Ali Naqvi, Farman Alic, Mithun Biswas | DarkDeblur: Learning single-shot image deblurring in low-light condition | null | Expert Systems with Applications 222 (2023): 119739 | 10.1016/j.eswa.2023.119739 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Single-shot image deblurring in a low-light condition is known to be a
profoundly challenging image translation task. This study tackles the
limitations of low-light image deblurring with a learning-based approach
and proposes a novel deep network named DarkDeblurNet. The proposed
DarkDeblurNet comprises a dense-attention block and a contextual gating
mechanism in a feature pyramid structure to leverage content awareness. The
model additionally incorporates a multi-term objective function to achieve
plausible perceptual image quality while performing image deblurring in
low-light settings. The practicability of the proposed model has been verified
by fusing it into numerous computer vision applications. In addition, this
study introduces a benchmark dataset collected with actual hardware to assess
low-light image deblurring methods in a real-world setup. The experimental
results illustrate that the proposed method can outperform state-of-the-art
methods on both synthesized and real-world data for single-shot image
deblurring, even in challenging lighting environments.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 02:04:50 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Sharif",
"S M A",
""
],
[
"Naqvi",
"Rizwan Ali",
""
],
[
"Alic",
"Farman",
""
],
[
"Biswas",
"Mithun",
""
]
]
| TITLE: DarkDeblur: Learning single-shot image deblurring in low-light condition
ABSTRACT: Single-shot image deblurring in a low-light condition is known to be a
profoundly challenging image translation task. This study tackles the
limitations of low-light image deblurring with a learning-based approach
and proposes a novel deep network named DarkDeblurNet. The proposed
DarkDeblurNet comprises a dense-attention block and a contextual gating
mechanism in a feature pyramid structure to leverage content awareness. The
model additionally incorporates a multi-term objective function to achieve
plausible perceptual image quality while performing image deblurring in
low-light settings. The practicability of the proposed model has been verified
by fusing it into numerous computer vision applications. In addition, this
study introduces a benchmark dataset collected with actual hardware to assess
low-light image deblurring methods in a real-world setup. The experimental
results illustrate that the proposed method can outperform state-of-the-art
methods on both synthesized and real-world data for single-shot image
deblurring, even in challenging lighting environments.
| new_dataset | 0.964556 |
2503.02201 | Ahmed Eldawy | Ahmed El-Dawy, Amr El-Zawawi, and Mohamed El-Habrouk | MonoLite3D: Lightweight 3D Object Properties Estimation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Reliable perception of the environment plays a crucial role in enabling
efficient self-driving vehicles. Therefore, the perception system necessitates
the acquisition of comprehensive 3D data regarding the surrounding objects
within a specific time constraint, including their dimensions, spatial location
and orientation. Deep learning has gained significant popularity in perception
systems, enabling the conversion of image features captured by a camera into
meaningful semantic information. This research paper introduces MonoLite3D
network, an embedded-device friendly lightweight deep learning methodology
designed for hardware environments with limited resources. MonoLite3D network
is a cutting-edge technique that focuses on estimating multiple properties of
3D objects, encompassing their dimensions and spatial orientation, solely from
monocular images. This approach is specifically designed to meet the
requirements of resource-constrained environments, making it highly suitable
for deployment on devices with limited computational capabilities. The
experimental results validate the accuracy and efficiency of the proposed
approach on the orientation benchmark of the KITTI dataset. It achieves an
impressive score of 82.27% on the moderate class and 69.81% on the hard class,
while still meeting the real-time requirements.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 02:31:09 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"El-Dawy",
"Ahmed",
""
],
[
"El-Zawawi",
"Amr",
""
],
[
"El-Habrouk",
"Mohamed",
""
]
]
| TITLE: MonoLite3D: Lightweight 3D Object Properties Estimation
ABSTRACT: Reliable perception of the environment plays a crucial role in enabling
efficient self-driving vehicles. Therefore, the perception system necessitates
the acquisition of comprehensive 3D data regarding the surrounding objects
within a specific time constraint, including their dimensions, spatial location
and orientation. Deep learning has gained significant popularity in perception
systems, enabling the conversion of image features captured by a camera into
meaningful semantic information. This research paper introduces MonoLite3D
network, an embedded-device friendly lightweight deep learning methodology
designed for hardware environments with limited resources. MonoLite3D network
is a cutting-edge technique that focuses on estimating multiple properties of
3D objects, encompassing their dimensions and spatial orientation, solely from
monocular images. This approach is specifically designed to meet the
requirements of resource-constrained environments, making it highly suitable
for deployment on devices with limited computational capabilities. The
experimental results validate the accuracy and efficiency of the proposed
approach on the orientation benchmark of the KITTI dataset. It achieves an
impressive score of 82.27% on the moderate class and 69.81% on the hard class,
while still meeting the real-time requirements.
| no_new_dataset | 0.951233 |
2503.02206 | Zhichao Yang | Zhichao Yang, Leida Li, Pengfei Chen, Jinjian Wu and Giuseppe
Valenzise | Language-Guided Visual Perception Disentanglement for Image Quality
Assessment and Conditional Image Generation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contrastive vision-language models, such as CLIP, have demonstrated excellent
zero-shot capability across semantic recognition tasks, mainly attributed to
the training on a large-scale I&1T (one Image with one Text) dataset. This kind
of multimodal representation often blends semantic and perceptual elements,
placing a particular emphasis on semantics. However, this could be problematic
for popular tasks like image quality assessment (IQA) and conditional image
generation (CIG), which typically need fine control over perceptual and
semantic features. Motivated by the above facts, this paper presents a new
multimodal disentangled representation learning framework, which leverages
disentangled text to guide image disentanglement. To this end, we first build
an I&2T (one Image with a perceptual Text and a semantic Text) dataset, which
consists of disentangled perceptual and semantic text descriptions for an
image. Then, the disentangled text descriptions are utilized as supervisory
signals to disentangle pure perceptual representations from CLIP's original
`coarse' feature space, dubbed DeCLIP. Finally, the decoupled feature
representations are used for both image quality assessment (technical quality
and aesthetic quality) and conditional image generation. Extensive experiments
and comparisons have demonstrated the advantages of the proposed method on the
two popular tasks. The dataset, code, and model will be available.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 02:36:48 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Yang",
"Zhichao",
""
],
[
"Li",
"Leida",
""
],
[
"Chen",
"Pengfei",
""
],
[
"Wu",
"Jinjian",
""
],
[
"Valenzise",
"Giuseppe",
""
]
]
| TITLE: Language-Guided Visual Perception Disentanglement for Image Quality
Assessment and Conditional Image Generation
ABSTRACT: Contrastive vision-language models, such as CLIP, have demonstrated excellent
zero-shot capability across semantic recognition tasks, mainly attributed to
the training on a large-scale I&1T (one Image with one Text) dataset. This kind
of multimodal representation often blends semantic and perceptual elements,
placing a particular emphasis on semantics. However, this could be problematic
for popular tasks like image quality assessment (IQA) and conditional image
generation (CIG), which typically need fine control over perceptual and
semantic features. Motivated by the above facts, this paper presents a new
multimodal disentangled representation learning framework, which leverages
disentangled text to guide image disentanglement. To this end, we first build
an I&2T (one Image with a perceptual Text and a semantic Text) dataset, which
consists of disentangled perceptual and semantic text descriptions for an
image. Then, the disentangled text descriptions are utilized as supervisory
signals to disentangle pure perceptual representations from CLIP's original
`coarse' feature space, dubbed DeCLIP. Finally, the decoupled feature
representations are used for both image quality assessment (technical quality
and aesthetic quality) and conditional image generation. Extensive experiments
and comparisons have demonstrated the advantages of the proposed method on the
two popular tasks. The dataset, code, and model will be available.
| new_dataset | 0.963848 |
2503.02218 | Shuo Wang | Shuo Wang, Tong Ren, Nan Cheng, Rong Wang, Li Zhang | Time-Varying Coronary Artery Deformation: A Dynamic Skinning Framework
for Surgical Training | 24 pages,8 figures,Submitted to International Journal of Computer
Assisted Radiology and Surgery | null | null | null | cs.GR cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Purpose: This study proposes a novel anatomically-driven dynamic modeling
framework for coronary arteries using skeletal skinning weights computation,
aiming to achieve precise control over vessel deformation while maintaining
real-time performance for surgical simulation applications. Methods: We
developed a computational framework based on biharmonic energy minimization for
skinning weight calculation, incorporating volumetric discretization through
tetrahedral mesh generation. The method implements temporal sampling and
interpolation for continuous vessel deformation throughout the cardiac cycle,
with mechanical constraints and volume conservation enforcement. The framework
was validated using clinical datasets from 5 patients, comparing interpolated
deformation results against ground truth data obtained from frame-by-frame
segmentation across cardiac phases. Results: The proposed framework effectively
handled interactive vessel manipulation. Geometric accuracy evaluation showed
mean Hausdorff distance of 4.96 +- 1.78 mm and mean surface distance of 1.78 +-
0.75 mm between interpolated meshes and ground truth models. The Branch
Completeness Ratio achieved 1.82 +- 0.46, while Branch Continuity Score
maintained 0.84 +- 0.06 (scale 0-1) across all datasets. The system
demonstrated capability in supporting real-time guidewire-vessel collision
detection and contrast medium flow simulation throughout the complete coronary
tree structure. Conclusion: Our skinning weight-based methodology enhances
model interactivity and applicability while maintaining geometric accuracy. The
framework provides a more flexible technical foundation for virtual surgical
training systems, demonstrating promising potential for both clinical practice
and medical education applications. The code is available at
https://github.com/ipoirot/DynamicArtery.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 02:51:37 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wang",
"Shuo",
""
],
[
"Ren",
"Tong",
""
],
[
"Cheng",
"Nan",
""
],
[
"Wang",
"Rong",
""
],
[
"Zhang",
"Li",
""
]
]
| TITLE: Time-Varying Coronary Artery Deformation: A Dynamic Skinning Framework
for Surgical Training
ABSTRACT: Purpose: This study proposes a novel anatomically-driven dynamic modeling
framework for coronary arteries using skeletal skinning weights computation,
aiming to achieve precise control over vessel deformation while maintaining
real-time performance for surgical simulation applications. Methods: We
developed a computational framework based on biharmonic energy minimization for
skinning weight calculation, incorporating volumetric discretization through
tetrahedral mesh generation. The method implements temporal sampling and
interpolation for continuous vessel deformation throughout the cardiac cycle,
with mechanical constraints and volume conservation enforcement. The framework
was validated using clinical datasets from 5 patients, comparing interpolated
deformation results against ground truth data obtained from frame-by-frame
segmentation across cardiac phases. Results: The proposed framework effectively
handled interactive vessel manipulation. Geometric accuracy evaluation showed
mean Hausdorff distance of 4.96 +- 1.78 mm and mean surface distance of 1.78 +-
0.75 mm between interpolated meshes and ground truth models. The Branch
Completeness Ratio achieved 1.82 +- 0.46, while Branch Continuity Score
maintained 0.84 +- 0.06 (scale 0-1) across all datasets. The system
demonstrated capability in supporting real-time guidewire-vessel collision
detection and contrast medium flow simulation throughout the complete coronary
tree structure. Conclusion: Our skinning weight-based methodology enhances
model interactivity and applicability while maintaining geometric accuracy. The
framework provides a more flexible technical foundation for virtual surgical
training systems, demonstrating promising potential for both clinical practice
and medical education applications. The code is available at
https://github.com/ipoirot/DynamicArtery.
| no_new_dataset | 0.949342 |
2503.02220 | Zhihua Shen | Zhihua Shen, Siyang Chen, Han Wang, Tongsu Zhang, Xiaohu Zhang,
Xiangpeng Xu and Xia Yang | Low-Level Matters: An Efficient Hybrid Architecture for Robust
Multi-frame Infrared Small Target Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-frame infrared small target detection (IRSTD) plays a crucial role in
low-altitude and maritime surveillance. The hybrid architecture combining CNNs
and Transformers shows great promise for enhancing multi-frame IRSTD
performance. In this paper, we propose LVNet, a simple yet powerful hybrid
architecture that redefines low-level feature learning in hybrid frameworks for
multi-frame IRSTD. Our key insight is that the standard linear patch embeddings
in Vision Transformers are insufficient for capturing the scale-sensitive local
features critical to infrared small targets. To address this limitation, we
introduce a multi-scale CNN frontend that explicitly models local features by
leveraging the local spatial bias of convolution. Additionally, we design a
U-shaped video Transformer for multi-frame spatiotemporal context modeling,
effectively capturing the motion characteristics of targets. Experiments on the
publicly available datasets IRDST and NUDT-MIRSDT demonstrate that LVNet
outperforms existing state-of-the-art methods. Notably, compared to the current
best-performing method, LMAFormer, LVNet achieves an improvement of 5.63% /
18.36% in nIoU, while using only 1/221 of the parameters and 1/92 / 1/21 of
the computational cost. Ablation studies further validate the importance of
low-level representation learning in hybrid architectures. Our code and trained
models are available at https://github.com/ZhihuaShen/LVNet.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 02:53:25 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Shen",
"Zhihua",
""
],
[
"Chen",
"Siyang",
""
],
[
"Wang",
"Han",
""
],
[
"Zhang",
"Tongsu",
""
],
[
"Zhang",
"Xiaohu",
""
],
[
"Xu",
"Xiangpeng",
""
],
[
"Yang",
"Xia",
""
]
]
| TITLE: Low-Level Matters: An Efficient Hybrid Architecture for Robust
Multi-frame Infrared Small Target Detection
ABSTRACT: Multi-frame infrared small target detection (IRSTD) plays a crucial role in
low-altitude and maritime surveillance. The hybrid architecture combining CNNs
and Transformers shows great promise for enhancing multi-frame IRSTD
performance. In this paper, we propose LVNet, a simple yet powerful hybrid
architecture that redefines low-level feature learning in hybrid frameworks for
multi-frame IRSTD. Our key insight is that the standard linear patch embeddings
in Vision Transformers are insufficient for capturing the scale-sensitive local
features critical to infrared small targets. To address this limitation, we
introduce a multi-scale CNN frontend that explicitly models local features by
leveraging the local spatial bias of convolution. Additionally, we design a
U-shaped video Transformer for multi-frame spatiotemporal context modeling,
effectively capturing the motion characteristics of targets. Experiments on the
publicly available datasets IRDST and NUDT-MIRSDT demonstrate that LVNet
outperforms existing state-of-the-art methods. Notably, compared to the current
best-performing method, LMAFormer, LVNet achieves an improvement of 5.63% /
18.36% in nIoU, while using only 1/221 of the parameters and 1/92 / 1/21 of
the computational cost. Ablation studies further validate the importance of
low-level representation learning in hybrid architectures. Our code and trained
models are available at https://github.com/ZhihuaShen/LVNet.
| no_new_dataset | 0.951684 |
2503.02223 | Chao Ye | Haoyuan Li, Ziqin Ye, Yue Hao, Weiyang Lin, Chao Ye | DQO-MAP: Dual Quadrics Multi-Object mapping with Gaussian Splatting | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate object perception is essential for robotic applications such as
object navigation. In this paper, we propose DQO-MAP, a novel object-SLAM
system that seamlessly integrates object pose estimation and reconstruction. We
employ 3D Gaussian Splatting for high-fidelity object reconstruction and
leverage quadrics for precise object pose estimation. The management of both
is handled on the CPU, while optimization is performed on the GPU,
significantly improving system efficiency. By associating objects with unique
IDs, our system enables rapid object extraction from the scene. Extensive
experimental results on object reconstruction and pose estimation demonstrate
that DQO-MAP achieves outstanding performance in terms of precision,
reconstruction quality, and computational efficiency. The code and dataset are
available at: https://github.com/LiHaoy-ux/DQO-MAP.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 02:55:07 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Li",
"Haoyuan",
""
],
[
"Ye",
"Ziqin",
""
],
[
"Hao",
"Yue",
""
],
[
"Lin",
"Weiyang",
""
],
[
"Ye",
"Chao",
""
]
]
| TITLE: DQO-MAP: Dual Quadrics Multi-Object mapping with Gaussian Splatting
ABSTRACT: Accurate object perception is essential for robotic applications such as
object navigation. In this paper, we propose DQO-MAP, a novel object-SLAM
system that seamlessly integrates object pose estimation and reconstruction. We
employ 3D Gaussian Splatting for high-fidelity object reconstruction and
leverage quadrics for precise object pose estimation. The management of both
is handled on the CPU, while optimization is performed on the GPU,
significantly improving system efficiency. By associating objects with unique
IDs, our system enables rapid object extraction from the scene. Extensive
experimental results on object reconstruction and pose estimation demonstrate
that DQO-MAP achieves outstanding performance in terms of precision,
reconstruction quality, and computational efficiency. The code and dataset are
available at: https://github.com/LiHaoy-ux/DQO-MAP.
| no_new_dataset | 0.949389 |
2503.02231 | Bo Cheng | Bo Cheng, Jueqing Lu, Yuan Tian, Haifeng Zhao, Yi Chang, Lan Du | CGMatch: A Different Perspective of Semi-supervised Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semi-supervised learning (SSL) has garnered significant attention due to its
ability to leverage limited labeled data and a large amount of unlabeled data
to improve model generalization performance. Recent approaches achieve
impressive successes by combining ideas from both consistency regularization
and pseudo-labeling. However, these methods tend to underperform in the more
realistic situations with relatively scarce labeled data. We argue that this
issue arises because existing methods rely solely on the model's confidence,
making it challenging to accurately assess the model's state and identify
unlabeled examples contributing to the training phase when supervision
information is limited, especially during the early stages of model training.
In this paper, we propose a novel SSL model called CGMatch, which, for the
first time, incorporates a new metric known as Count-Gap (CG). We demonstrate
that CG is effective in discovering unlabeled examples beneficial for model
training. Along with confidence, a commonly used metric in SSL, we propose a
fine-grained dynamic selection (FDS) strategy. This strategy dynamically
divides the unlabeled dataset into three subsets with different
characteristics: easy-to-learn set, ambiguous set, and hard-to-learn set. By
selectively filtering subsets and applying corresponding regularization to the
selected subsets, we mitigate the negative impact of incorrect pseudo-labels on
model optimization and generalization. Extensive experimental results on
several common SSL benchmarks indicate the effectiveness of CGMatch especially
when the labeled data are particularly limited. Source code is available at
https://github.com/BoCheng-96/CGMatch.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 03:14:15 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Cheng",
"Bo",
""
],
[
"Lu",
"Jueqing",
""
],
[
"Tian",
"Yuan",
""
],
[
"Zhao",
"Haifeng",
""
],
[
"Chang",
"Yi",
""
],
[
"Du",
"Lan",
""
]
]
| TITLE: CGMatch: A Different Perspective of Semi-supervised Learning
ABSTRACT: Semi-supervised learning (SSL) has garnered significant attention due to its
ability to leverage limited labeled data and a large amount of unlabeled data
to improve model generalization performance. Recent approaches achieve
impressive successes by combining ideas from both consistency regularization
and pseudo-labeling. However, these methods tend to underperform in the more
realistic situations with relatively scarce labeled data. We argue that this
issue arises because existing methods rely solely on the model's confidence,
making it challenging to accurately assess the model's state and identify
unlabeled examples contributing to the training phase when supervision
information is limited, especially during the early stages of model training.
In this paper, we propose a novel SSL model called CGMatch, which, for the
first time, incorporates a new metric known as Count-Gap (CG). We demonstrate
that CG is effective in discovering unlabeled examples beneficial for model
training. Along with confidence, a commonly used metric in SSL, we propose a
fine-grained dynamic selection (FDS) strategy. This strategy dynamically
divides the unlabeled dataset into three subsets with different
characteristics: easy-to-learn set, ambiguous set, and hard-to-learn set. By
selectively filtering subsets and applying corresponding regularization to the
selected subsets, we mitigate the negative impact of incorrect pseudo-labels on
model optimization and generalization. Extensive experimental results on
several common SSL benchmarks indicate the effectiveness of CGMatch especially
when the labeled data are particularly limited. Source code is available at
https://github.com/BoCheng-96/CGMatch.
| no_new_dataset | 0.949153 |
2503.02234 | Debashis Sen | Gargi V. Pillai and Debashis Sen | Anomaly detection in non-stationary videos using time-recursive
differencing network based prediction | Copyright 2022 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works | IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1-5,
2022, Art no. 8010605 | 10.1109/LGRS.2021.3072191 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Most videos, including those captured through aerial remote sensing, are
usually non-stationary in nature, having time-varying feature statistics.
Although sophisticated reconstruction and prediction models exist for video
anomaly detection, effective handling of non-stationarity has seldom been
considered explicitly. In this paper, we propose to perform prediction using a
time-recursive differencing network followed by autoregressive moving average
estimation for video anomaly detection. The differencing network is employed to
effectively handle non-stationarity in video data during the anomaly detection.
Focusing on the prediction process, the effectiveness of the proposed approach
is demonstrated considering a simple optical flow based video feature, and by
generating qualitative and quantitative results on three aerial video datasets
and two standard anomaly detection video datasets. EER, AUC and ROC curve based
comparison with several existing methods, including the state-of-the-art, reveals
the superiority of the proposed approach.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 03:16:39 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Pillai",
"Gargi V.",
""
],
[
"Sen",
"Debashis",
""
]
]
| TITLE: Anomaly detection in non-stationary videos using time-recursive
differencing network based prediction
ABSTRACT: Most videos, including those captured through aerial remote sensing, are
usually non-stationary in nature, having time-varying feature statistics.
Although sophisticated reconstruction and prediction models exist for video
anomaly detection, effective handling of non-stationarity has seldom been
considered explicitly. In this paper, we propose to perform prediction using a
time-recursive differencing network followed by autoregressive moving average
estimation for video anomaly detection. The differencing network is employed to
effectively handle non-stationarity in video data during the anomaly detection.
Focusing on the prediction process, the effectiveness of the proposed approach
is demonstrated considering a simple optical flow based video feature, and by
generating qualitative and quantitative results on three aerial video datasets
and two standard anomaly detection video datasets. EER, AUC and ROC curve based
comparison with several existing methods, including the state-of-the-art, reveals
the superiority of the proposed approach.
| no_new_dataset | 0.951369 |
2503.02240 | Haoyang Li | Haoyang Li, Shang Wu, Xiaokang Zhang, Xinmei Huang, Jing Zhang, Fuxin
Jiang, Shuai Wang, Tieying Zhang, Jianjun Chen, Rui Shi, Hong Chen, Cuiping
Li | OmniSQL: Synthesizing High-quality Text-to-SQL Data at Scale | null | null | null | null | cs.CL cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-to-SQL, the task of translating natural language questions into SQL
queries, plays a crucial role in enabling non-experts to interact with
databases. While recent advancements in large language models (LLMs) have
significantly enhanced text-to-SQL performance, existing approaches face
notable limitations in real-world text-to-SQL applications. Prompting-based
methods often depend on closed-source LLMs, which are expensive, raise privacy
concerns, and lack customization. Fine-tuning-based methods, on the other hand,
suffer from poor generalizability due to the limited coverage of publicly
available training data. To overcome these challenges, we propose a novel and
scalable text-to-SQL data synthesis framework for automatically synthesizing
large-scale, high-quality, and diverse datasets without extensive human
intervention. Using this framework, we introduce SynSQL-2.5M, the first
million-scale text-to-SQL dataset, containing 2.5 million samples spanning over
16,000 synthetic databases. Each sample includes a database, SQL query, natural
language question, and chain-of-thought (CoT) solution. Leveraging SynSQL-2.5M,
we develop OmniSQL, a powerful open-source text-to-SQL model available in three
sizes: 7B, 14B, and 32B. Extensive evaluations across nine datasets demonstrate
that OmniSQL achieves state-of-the-art performance, matching or surpassing
leading closed-source and open-source LLMs, including GPT-4o and DeepSeek-V3,
despite its smaller size. We release all code, datasets, and models to support
further research.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 03:30:56 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Li",
"Haoyang",
""
],
[
"Wu",
"Shang",
""
],
[
"Zhang",
"Xiaokang",
""
],
[
"Huang",
"Xinmei",
""
],
[
"Zhang",
"Jing",
""
],
[
"Jiang",
"Fuxin",
""
],
[
"Wang",
"Shuai",
""
],
[
"Zhang",
"Tieying",
""
],
[
"Chen",
"Jianjun",
""
],
[
"Shi",
"Rui",
""
],
[
"Chen",
"Hong",
""
],
[
"Li",
"Cuiping",
""
]
]
| TITLE: OmniSQL: Synthesizing High-quality Text-to-SQL Data at Scale
ABSTRACT: Text-to-SQL, the task of translating natural language questions into SQL
queries, plays a crucial role in enabling non-experts to interact with
databases. While recent advancements in large language models (LLMs) have
significantly enhanced text-to-SQL performance, existing approaches face
notable limitations in real-world text-to-SQL applications. Prompting-based
methods often depend on closed-source LLMs, which are expensive, raise privacy
concerns, and lack customization. Fine-tuning-based methods, on the other hand,
suffer from poor generalizability due to the limited coverage of publicly
available training data. To overcome these challenges, we propose a novel and
scalable text-to-SQL data synthesis framework for automatically synthesizing
large-scale, high-quality, and diverse datasets without extensive human
intervention. Using this framework, we introduce SynSQL-2.5M, the first
million-scale text-to-SQL dataset, containing 2.5 million samples spanning over
16,000 synthetic databases. Each sample includes a database, SQL query, natural
language question, and chain-of-thought (CoT) solution. Leveraging SynSQL-2.5M,
we develop OmniSQL, a powerful open-source text-to-SQL model available in three
sizes: 7B, 14B, and 32B. Extensive evaluations across nine datasets demonstrate
that OmniSQL achieves state-of-the-art performance, matching or surpassing
leading closed-source and open-source LLMs, including GPT-4o and DeepSeek-V3,
despite its smaller size. We release all code, datasets, and models to support
further research.
| new_dataset | 0.709975 |
2503.02241 | Chichun Zhou | Kui Huang, Mengke Song, Shuo Ba, Ling An, Huajie Liang, Huanxi Deng,
Yang Liu, Zhenyu Zhang and Chichun Zhou | Unsupervised Waste Classification By Dual-Encoder Contrastive Learning
and Multi-Clustering Voting (DECMCV) | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Waste classification is crucial for improving processing efficiency and
reducing environmental pollution. Supervised deep learning methods are commonly
used for automated waste classification, but they rely heavily on large labeled
datasets, which are costly and inefficient to obtain. Real-world waste data
often exhibit category and style biases, such as variations in camera angles,
lighting conditions, and types of waste, which can impact the model's
performance and generalization ability. Therefore, constructing a bias-free
dataset is essential. Manual labeling is not only costly but also inefficient.
While self-supervised learning helps address data scarcity, it still depends on
some labeled data and generally results in lower accuracy compared to
supervised methods. Unsupervised methods show potential in certain cases but
typically do not perform as well as supervised models, highlighting the need
for an efficient and cost-effective unsupervised approach. This study presents
a novel unsupervised method, Dual-Encoder Contrastive Learning with
Multi-Clustering Voting (DECMCV). The approach involves using a pre-trained
ConvNeXt model for image encoding, leveraging VisionTransformer to generate
positive samples, and applying a multi-clustering voting mechanism to address
data labeling and domain shift issues. Experimental results demonstrate that
DECMCV achieves classification accuracies of 93.78% and 98.29% on the TrashNet
and Huawei Cloud datasets, respectively, outperforming or matching supervised
models. On a real-world dataset of 4,169 waste images, only 50 labeled samples
were needed to accurately label thousands, improving classification accuracy by
29.85% compared to supervised models. This method effectively addresses style
differences, enhances model generalization, and contributes to the advancement
of automated waste classification.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 03:31:01 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Huang",
"Kui",
""
],
[
"Song",
"Mengke",
""
],
[
"Ba",
"Shuo",
""
],
[
"An",
"Ling",
""
],
[
"Liang",
"Huajie",
""
],
[
"Deng",
"Huanxi",
""
],
[
"Liu",
"Yang",
""
],
[
"Zhang",
"Zhenyu",
""
],
[
"Zhou",
"Chichun",
""
]
]
| TITLE: Unsupervised Waste Classification By Dual-Encoder Contrastive Learning
and Multi-Clustering Voting (DECMCV)
ABSTRACT: Waste classification is crucial for improving processing efficiency and
reducing environmental pollution. Supervised deep learning methods are commonly
used for automated waste classification, but they rely heavily on large labeled
datasets, which are costly and inefficient to obtain. Real-world waste data
often exhibit category and style biases, such as variations in camera angles,
lighting conditions, and types of waste, which can impact the model's
performance and generalization ability. Therefore, constructing a bias-free
dataset is essential. Manual labeling is not only costly but also inefficient.
While self-supervised learning helps address data scarcity, it still depends on
some labeled data and generally results in lower accuracy compared to
supervised methods. Unsupervised methods show potential in certain cases but
typically do not perform as well as supervised models, highlighting the need
for an efficient and cost-effective unsupervised approach. This study presents
a novel unsupervised method, Dual-Encoder Contrastive Learning with
Multi-Clustering Voting (DECMCV). The approach involves using a pre-trained
ConvNeXt model for image encoding, leveraging VisionTransformer to generate
positive samples, and applying a multi-clustering voting mechanism to address
data labeling and domain shift issues. Experimental results demonstrate that
DECMCV achieves classification accuracies of 93.78% and 98.29% on the TrashNet
and Huawei Cloud datasets, respectively, outperforming or matching supervised
models. On a real-world dataset of 4,169 waste images, only 50 labeled samples
were needed to accurately label thousands, improving classification accuracy by
29.85% compared to supervised models. This method effectively addresses style
differences, enhances model generalization, and contributes to the advancement
of automated waste classification.
| no_new_dataset | 0.946695 |
2503.02242 | Yihan Zhuang | Xidan Zhang, Yihan Zhuang, Qian Guo, Haodong Yang, Xuelin Qian, Gong
Cheng, Junwei Han, Zhongling Huang | $\mathbf{\Phi}$-GAN: Physics-Inspired GAN for Generating SAR Images
Under Limited Data | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Approaches for improving generative adversarial networks (GANs) training
under a few samples have been explored for natural images. However, these
methods have limited effectiveness for synthetic aperture radar (SAR) images,
as they do not account for the unique electromagnetic scattering properties of
SAR. To remedy this, we propose a physics-inspired regularization method dubbed
$\Phi$-GAN, which incorporates the ideal point scattering center (PSC) model of
SAR with two physical consistency losses. The PSC model approximates SAR
targets using physical parameters, ensuring that $\Phi$-GAN generates SAR
images consistent with real physical properties while preventing discriminator
overfitting by focusing on PSC-based decision cues. To embed the PSC model into
GANs for end-to-end training, we introduce a physics-inspired neural module
capable of estimating the physical parameters of SAR targets efficiently. This
module retains the interpretability of the physical model and can be trained
with limited data. We propose two physical loss functions: one for the
generator, guiding it to produce SAR images with physical parameters consistent
with real ones, and one for the discriminator, enhancing its robustness by
basing decisions on PSC attributes. We evaluate $\Phi$-GAN across several
conditional GAN (cGAN) models, demonstrating state-of-the-art performance in
data-scarce scenarios on three SAR image datasets.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 03:32:11 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhang",
"Xidan",
""
],
[
"Zhuang",
"Yihan",
""
],
[
"Guo",
"Qian",
""
],
[
"Yang",
"Haodong",
""
],
[
"Qian",
"Xuelin",
""
],
[
"Cheng",
"Gong",
""
],
[
"Han",
"Junwei",
""
],
[
"Huang",
"Zhongling",
""
]
]
| TITLE: $\mathbf{\Phi}$-GAN: Physics-Inspired GAN for Generating SAR Images
Under Limited Data
ABSTRACT: Approaches for improving generative adversarial networks (GANs) training
under a few samples have been explored for natural images. However, these
methods have limited effectiveness for synthetic aperture radar (SAR) images,
as they do not account for the unique electromagnetic scattering properties of
SAR. To remedy this, we propose a physics-inspired regularization method dubbed
$\Phi$-GAN, which incorporates the ideal point scattering center (PSC) model of
SAR with two physical consistency losses. The PSC model approximates SAR
targets using physical parameters, ensuring that $\Phi$-GAN generates SAR
images consistent with real physical properties while preventing discriminator
overfitting by focusing on PSC-based decision cues. To embed the PSC model into
GANs for end-to-end training, we introduce a physics-inspired neural module
capable of estimating the physical parameters of SAR targets efficiently. This
module retains the interpretability of the physical model and can be trained
with limited data. We propose two physical loss functions: one for the
generator, guiding it to produce SAR images with physical parameters consistent
with real ones, and one for the discriminator, enhancing its robustness by
basing decisions on PSC attributes. We evaluate $\Phi$-GAN across several
conditional GAN (cGAN) models, demonstrating state-of-the-art performance in
data-scarce scenarios on three SAR image datasets.
| no_new_dataset | 0.950365 |
2503.02248 | Tong Liang | Tong Liang, Jim Davis | Making Better Mistakes in CLIP-Based Zero-Shot Classification with
Hierarchy-Aware Language Prompts | 20 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent studies are leveraging advancements in large language models (LLMs)
trained on extensive internet-crawled text data to generate textual
descriptions of downstream classes in CLIP-based zero-shot image
classification. While most of these approaches aim at improving accuracy, our
work focuses on "making better mistakes", where the mistakes' severities
are derived from the given label hierarchy of downstream tasks. Since CLIP's
image encoder is trained with language supervising signals, it implicitly
captures the hierarchical semantic relationships between different classes.
This motivates our goal of making better mistakes in zero-shot classification,
a task for which CLIP is naturally well-suited. Our approach (HAPrompts)
queries the language model to produce textual representations for given classes
as zero-shot classifiers of CLIP to perform image classification on downstream
tasks. To our knowledge, this is the first work to introduce making better
mistakes in CLIP-based zero-shot classification. Our approach outperforms the
related methods in a holistic comparison across five datasets of varying scales
with label hierarchies of different heights in our experiments. Our code and
LLM-generated image prompts:
https://github.com/ltong1130ztr/HAPrompts.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 03:54:50 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Liang",
"Tong",
""
],
[
"Davis",
"Jim",
""
]
]
| TITLE: Making Better Mistakes in CLIP-Based Zero-Shot Classification with
Hierarchy-Aware Language Prompts
ABSTRACT: Recent studies are leveraging advancements in large language models (LLMs)
trained on extensive internet-crawled text data to generate textual
descriptions of downstream classes in CLIP-based zero-shot image
classification. While most of these approaches aim at improving accuracy, our
work focuses on "making better mistakes", where the mistakes' severities
are derived from the given label hierarchy of downstream tasks. Since CLIP's
image encoder is trained with language supervising signals, it implicitly
captures the hierarchical semantic relationships between different classes.
This motivates our goal of making better mistakes in zero-shot classification,
a task for which CLIP is naturally well-suited. Our approach (HAPrompts)
queries the language model to produce textual representations for given classes
as zero-shot classifiers of CLIP to perform image classification on downstream
tasks. To our knowledge, this is the first work to introduce making better
mistakes in CLIP-based zero-shot classification. Our approach outperforms the
related methods in a holistic comparison across five datasets of varying scales
with label hierarchies of different heights in our experiments. Our code and
LLM-generated image prompts:
https://github.com/ltong1130ztr/HAPrompts.
| no_new_dataset | 0.95222 |
2503.02255 | Fanyu Wang | Fanyu Wang, Hangyu Zhu, Zhenping Xie | AxBERT: An Interpretable Chinese Spelling Correction Method Driven by
Associative Knowledge Network | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Deep learning has shown promising performance on various machine learning
tasks. Nevertheless, the uninterpretability of deep learning models severely
restricts the usage domains that require feature explanations, such as text
correction. Therefore, a novel interpretable deep learning model (named AxBERT)
is proposed for Chinese spelling correction by aligning with an associative
knowledge network (AKN). The AKN is constructed based on the co-occurrence
relations among Chinese characters, which denote an interpretable statistical
logic, in contrast with the uninterpretable logic of BERT. A translator matrix
between BERT and AKN is introduced for the alignment and regulation of the
attention component in BERT. In addition, a weight regulator is designed to
adjust the attention distributions in BERT to appropriately model the sentence
semantics. Experimental results on SIGHAN datasets demonstrate that AxBERT can
achieve extraordinary performance, especially in model precision compared to
baselines. Our interpretable analysis, together with qualitative reasoning, can
effectively illustrate the interpretability of AxBERT.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 04:09:10 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wang",
"Fanyu",
""
],
[
"Zhu",
"Hangyu",
""
],
[
"Xie",
"Zhenping",
""
]
]
| TITLE: AxBERT: An Interpretable Chinese Spelling Correction Method Driven by
Associative Knowledge Network
ABSTRACT: Deep learning has shown promising performance on various machine learning
tasks. Nevertheless, the uninterpretability of deep learning models severely
restricts the usage domains that require feature explanations, such as text
correction. Therefore, a novel interpretable deep learning model (named AxBERT)
is proposed for Chinese spelling correction by aligning with an associative
knowledge network (AKN). The AKN is constructed based on the co-occurrence
relations among Chinese characters, which denote an interpretable statistical
logic, in contrast with the uninterpretable logic of BERT. A translator matrix
between BERT and AKN is introduced for the alignment and regulation of the
attention component in BERT. In addition, a weight regulator is designed to
adjust the attention distributions in BERT to appropriately model the sentence
semantics. Experimental results on SIGHAN datasets demonstrate that AxBERT can
achieve extraordinary performance, especially in model precision compared to
baselines. Our interpretable analysis, together with qualitative reasoning, can
effectively illustrate the interpretability of AxBERT.
| no_new_dataset | 0.946745 |
2503.02259 | Hua Huang | Hua Huang, Tianshi Xu, Yuanzhe Xi, Edmond Chow | HiGP: A high-performance Python package for Gaussian Process | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Gaussian Processes (GPs) are flexible, nonparametric Bayesian models widely
used for regression and classification tasks due to their ability to capture
complex data patterns and provide uncertainty quantification (UQ). Traditional
GP implementations often face challenges in scalability and computational
efficiency, especially with large datasets. To address these challenges, HiGP,
a high-performance Python package, is designed for efficient Gaussian Process
regression (GPR) and classification (GPC) across datasets of varying sizes.
HiGP combines multiple new iterative methods to enhance the performance and
efficiency of GP computations. It implements various effective matrix-vector
(MatVec) and matrix-matrix (MatMul) multiplication strategies specifically
tailored for kernel matrices. To improve the convergence of iterative methods,
HiGP also integrates the recently developed Adaptive Factorized Nystrom (AFN)
preconditioner and employs precise formulas for computing the gradients. With a
user-friendly Python interface, HiGP seamlessly integrates with PyTorch and
other Python packages, allowing easy incorporation into existing machine
learning and data analysis workflows.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 04:17:36 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Huang",
"Hua",
""
],
[
"Xu",
"Tianshi",
""
],
[
"Xi",
"Yuanzhe",
""
],
[
"Chow",
"Edmond",
""
]
]
| TITLE: HiGP: A high-performance Python package for Gaussian Process
ABSTRACT: Gaussian Processes (GPs) are flexible, nonparametric Bayesian models widely
used for regression and classification tasks due to their ability to capture
complex data patterns and provide uncertainty quantification (UQ). Traditional
GP implementations often face challenges in scalability and computational
efficiency, especially with large datasets. To address these challenges, HiGP,
a high-performance Python package, is designed for efficient Gaussian Process
regression (GPR) and classification (GPC) across datasets of varying sizes.
HiGP combines multiple new iterative methods to enhance the performance and
efficiency of GP computations. It implements various effective matrix-vector
(MatVec) and matrix-matrix (MatMul) multiplication strategies specifically
tailored for kernel matrices. To improve the convergence of iterative methods,
HiGP also integrates the recently developed Adaptive Factorized Nystrom (AFN)
preconditioner and employs precise formulas for computing the gradients. With a
user-friendly Python interface, HiGP seamlessly integrates with PyTorch and
other Python packages, allowing easy incorporation into existing machine
learning and data analysis workflows.
| no_new_dataset | 0.939692 |
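Aside: the HiGP record above concerns Gaussian Process regression at scale. The sketch below is a generic exact-GPR computation in NumPy with an RBF kernel, shown only to make the underlying linear algebra concrete; it is not the HiGP API (which is not documented in the record), and all function and variable names are illustrative assumptions. Per the abstract above, packages like HiGP avoid such direct factorizations by using iterative methods with preconditioning.

```python
# Generic exact Gaussian Process regression sketch (RBF kernel), NumPy only.
# Not the HiGP interface; it illustrates the kernel solve and predictive
# mean/variance that scalable GP packages aim to accelerate.
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    # Squared distances between every row of A and every row of B.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gpr_predict(X, y, X_star, noise=1e-2):
    K = rbf_kernel(X, X) + noise * np.eye(len(X))        # training covariance
    L = np.linalg.cholesky(K)                            # direct O(n^3) factorization
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # alpha = K^{-1} y
    K_s = rbf_kernel(X, X_star)
    mean = K_s.T @ alpha                                 # predictive mean
    v = np.linalg.solve(L, K_s)
    var = np.diag(rbf_kernel(X_star, X_star)) - np.sum(v**2, axis=0)  # predictive variance
    return mean, var

X = np.linspace(0.0, 1.0, 50)[:, None]
y = np.sin(6.0 * X[:, 0]) + 0.1 * np.random.randn(50)
mean, var = gpr_predict(X, y, np.linspace(0.0, 1.0, 200)[:, None])
print(mean.shape, var.shape)   # (200,) (200,)
```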
2503.02261 | Zelin Li | Zelin Li, Chenwei Wang, Zhaoke Huang, Yiming MA, Cunmin Zhao,
Zhongying Zhao, Hong Yan | Volume Tells: Dual Cycle-Consistent Diffusion for 3D Fluorescence
Microscopy De-noising and Super-Resolution | Accepted on CVPR 2025 | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | 3D fluorescence microscopy is essential for understanding fundamental life
processes through long-term live-cell imaging. However, due to inherent issues
in imaging principles, it faces significant challenges including spatially
varying noise and anisotropic resolution, where the axial resolution lags
behind the lateral resolution up to 4.5 times. Meanwhile, laser power is kept
low to maintain cell viability, leading to inaccessible low-noise and
high-resolution paired ground truth (GT). To tackle these limitations, a dual
Cycle-consistent Diffusion is proposed to effectively mine intra-volume imaging
priors within 3D cell volumes in an unsupervised manner, i.e., Volume Tells
(VTCD), achieving de-noising and super-resolution (SR) simultaneously.
Specifically, a spatially iso-distributed denoiser is designed to exploit the
noise distribution consistency between adjacent low-noise and high-noise
regions within the 3D cell volume, suppressing the spatially varying noise.
Then, in light of the structural consistency of the cell volume, a cross-plane
global-propagation SR module propagates high-resolution details from the XY
plane into adjacent regions in the XZ and YZ planes, progressively enhancing
resolution across the entire 3D cell volume. Experimental results on 10 in vivo
cellular datasets demonstrate substantial improvements in both denoising and
super-resolution, with axial resolution enhanced from ~ 430 nm to ~ 90 nm.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 04:19:50 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Li",
"Zelin",
""
],
[
"Wang",
"Chenwei",
""
],
[
"Huang",
"Zhaoke",
""
],
[
"MA",
"Yiming",
""
],
[
"Zhao",
"Cunmin",
""
],
[
"Zhao",
"Zhongying",
""
],
[
"Yan",
"Hong",
""
]
]
| TITLE: Volume Tells: Dual Cycle-Consistent Diffusion for 3D Fluorescence
Microscopy De-noising and Super-Resolution
ABSTRACT: 3D fluorescence microscopy is essential for understanding fundamental life
processes through long-term live-cell imaging. However, due to inherent issues
in imaging principles, it faces significant challenges including spatially
varying noise and anisotropic resolution, where the axial resolution lags
behind the lateral resolution up to 4.5 times. Meanwhile, laser power is kept
low to maintain cell viability, leading to inaccessible low-noise and
high-resolution paired ground truth (GT). To tackle these limitations, a dual
Cycle-consistent Diffusion is proposed to effectively mine intra-volume imaging
priors within 3D cell volumes in an unsupervised manner, i.e., Volume Tells
(VTCD), achieving de-noising and super-resolution (SR) simultaneously.
Specifically, a spatially iso-distributed denoiser is designed to exploit the
noise distribution consistency between adjacent low-noise and high-noise
regions within the 3D cell volume, suppressing the spatially varying noise.
Then, in light of the structural consistency of the cell volume, a cross-plane
global-propagation SR module propagates high-resolution details from the XY
plane into adjacent regions in the XZ and YZ planes, progressively enhancing
resolution across the entire 3D cell volume. Experimental results on 10 in vivo
cellular datasets demonstrate substantial improvements in both denoising and
super-resolution, with axial resolution enhanced from ~ 430 nm to ~ 90 nm.
| no_new_dataset | 0.951006 |
2503.02269 | Yasuhiro Fujita | Yasuhiro Fujita | Experience Replay with Random Reshuffling | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Experience replay is a key component in reinforcement learning for
stabilizing learning and improving sample efficiency. Its typical
implementation samples transitions with replacement from a replay buffer. In
contrast, in supervised learning with a fixed dataset, it is a common practice
to shuffle the dataset every epoch and consume data sequentially, which is
called random reshuffling (RR). RR enjoys theoretically better convergence
properties and has been shown to outperform with-replacement sampling
empirically. To leverage the benefits of RR in reinforcement learning, we
propose sampling methods that extend RR to experience replay, both in uniform
and prioritized settings. We evaluate our sampling methods on Atari benchmarks,
demonstrating their effectiveness in deep reinforcement learning.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 04:37:22 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Fujita",
"Yasuhiro",
""
]
]
| TITLE: Experience Replay with Random Reshuffling
ABSTRACT: Experience replay is a key component in reinforcement learning for
stabilizing learning and improving sample efficiency. Its typical
implementation samples transitions with replacement from a replay buffer. In
contrast, in supervised learning with a fixed dataset, it is a common practice
to shuffle the dataset every epoch and consume data sequentially, which is
called random reshuffling (RR). RR enjoys theoretically better convergence
properties and has been shown to outperform with-replacement sampling
empirically. To leverage the benefits of RR in reinforcement learning, we
propose sampling methods that extend RR to experience replay, both in uniform
and prioritized settings. We evaluate our sampling methods on Atari benchmarks,
demonstrating their effectiveness in deep reinforcement learning.
| no_new_dataset | 0.95253 |
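Aside: the record above contrasts with-replacement sampling from a replay buffer with random reshuffling (RR). The sketch below is a minimal, editorial illustration of an RR-style buffer that consumes a shuffled permutation of its contents before reshuffling; the class and method names are assumptions, the index bookkeeping is simplified, and it does not reproduce the paper's uniform or prioritized variants.

```python
# Minimal sketch: replay buffer sampled by random reshuffling instead of
# with-replacement sampling. Simplified illustration, not the paper's method;
# it assumes the buffer is not overwritten in the middle of a pass.
import random
from collections import deque

class RRReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)
        self._order = []   # shuffled indices not yet consumed in the current pass

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        assert len(self.buffer) > 0, "cannot sample from an empty buffer"
        batch = []
        while len(batch) < batch_size:
            if not self._order:                       # start a new shuffled pass
                self._order = list(range(len(self.buffer)))
                random.shuffle(self._order)
            batch.append(self.buffer[self._order.pop()])
        return batch

buf = RRReplayBuffer(capacity=1000)
for t in range(100):
    buf.add((t, "state", "action", "reward", "next_state"))
print(len(buf.sample(32)))   # 32
```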
2503.02270 | Gargi Panda | Gargi Panda, Soumitra Kundu, Saumik Bhattacharya, Aurobinda Routray | SSNet: Saliency Prior and State Space Model-based Network for Salient
Object Detection in RGB-D Images | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Salient object detection (SOD) in RGB-D images is an essential task in
computer vision, enabling applications in scene understanding, robotics, and
augmented reality. However, existing methods struggle to capture global
dependency across modalities, lack comprehensive saliency priors from both RGB
and depth data, and are ineffective in handling low-quality depth maps. To
address these challenges, we propose SSNet, a saliency-prior and state space
model (SSM)-based network for the RGB-D SOD task. Unlike existing convolution-
or transformer-based approaches, SSNet introduces an SSM-based multi-modal
multi-scale decoder module to efficiently capture both intra- and inter-modal
global dependency with linear complexity. Specifically, we propose a
cross-modal selective scan SSM (CM-S6) mechanism, which effectively captures
global dependency between different modalities. Furthermore, we introduce a
saliency enhancement module (SEM) that integrates three saliency priors with
deep features to refine feature representation and improve the localization of
salient objects. To further address the issue of low-quality depth maps, we
propose an adaptive contrast enhancement technique that dynamically refines
depth maps, making them more suitable for the RGB-D SOD task. Extensive
quantitative and qualitative experiments on seven benchmark datasets
demonstrate that SSNet outperforms state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 04:38:36 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Panda",
"Gargi",
""
],
[
"Kundu",
"Soumitra",
""
],
[
"Bhattacharya",
"Saumik",
""
],
[
"Routray",
"Aurobinda",
""
]
]
| TITLE: SSNet: Saliency Prior and State Space Model-based Network for Salient
Object Detection in RGB-D Images
ABSTRACT: Salient object detection (SOD) in RGB-D images is an essential task in
computer vision, enabling applications in scene understanding, robotics, and
augmented reality. However, existing methods struggle to capture global
dependency across modalities, lack comprehensive saliency priors from both RGB
and depth data, and are ineffective in handling low-quality depth maps. To
address these challenges, we propose SSNet, a saliency-prior and state space
model (SSM)-based network for the RGB-D SOD task. Unlike existing convolution-
or transformer-based approaches, SSNet introduces an SSM-based multi-modal
multi-scale decoder module to efficiently capture both intra- and inter-modal
global dependency with linear complexity. Specifically, we propose a
cross-modal selective scan SSM (CM-S6) mechanism, which effectively captures
global dependency between different modalities. Furthermore, we introduce a
saliency enhancement module (SEM) that integrates three saliency priors with
deep features to refine feature representation and improve the localization of
salient objects. To further address the issue of low-quality depth maps, we
propose an adaptive contrast enhancement technique that dynamically refines
depth maps, making them more suitable for the RGB-D SOD task. Extensive
quantitative and qualitative experiments on seven benchmark datasets
demonstrate that SSNet outperforms state-of-the-art methods.
| no_new_dataset | 0.949949 |
2503.02281 | Ahmad Mohammad Saber Dr | Ahmad Mohammad Saber and Max Mauro Dias Santos and Mohammad Al
Janaideh and Amr Youssef and Deepa Kundur | A Kolmogorov-Arnold Network for Explainable Detection of Cyberattacks on
EV Chargers | Accepted for the 2025 IEEE Power & Energy Society General Meeting
(PESGM), 27-31 July 2025 Austin, TX, USA | null | null | null | cs.LG cs.CR eess.SP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The increasing adoption of Electric Vehicles (EVs) and the expansion of
charging infrastructure and their reliance on communication expose Electric
Vehicle Supply Equipment (EVSE) to cyberattacks. This paper presents a novel
Kolmogorov-Arnold Network (KAN)-based framework for detecting cyberattacks on
EV chargers using only power consumption measurements. Leveraging the KAN's
capability to model nonlinear, high-dimensional functions and its inherently
interpretable architecture, the framework effectively differentiates between
normal and malicious charging scenarios. The model is trained offline on a
comprehensive dataset containing over 100,000 cyberattack cases generated
through an experimental setup. Once trained, the KAN model can be deployed
within individual chargers for real-time detection of abnormal charging
behaviors indicative of cyberattacks. Our results demonstrate that the proposed
KAN-based approach can accurately detect cyberattacks on EV chargers with
Precision and F1-score of 99% and 92%, respectively, outperforming existing
detection methods. Additionally, the proposed KAN enables the extraction of
mathematical formulas representing its detection decisions, addressing
interpretability, a key challenge in deep learning-based cybersecurity
frameworks. This work marks a significant step toward building secure and
explainable EV charging infrastructure.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 05:06:39 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Saber",
"Ahmad Mohammad",
""
],
[
"Santos",
"Max Mauro Dias",
""
],
[
"Janaideh",
"Mohammad Al",
""
],
[
"Youssef",
"Amr",
""
],
[
"Kundur",
"Deepa",
""
]
]
| TITLE: A Kolmogorov-Arnold Network for Explainable Detection of Cyberattacks on
EV Chargers
ABSTRACT: The increasing adoption of Electric Vehicles (EVs) and the expansion of
charging infrastructure and their reliance on communication expose Electric
Vehicle Supply Equipment (EVSE) to cyberattacks. This paper presents a novel
Kolmogorov-Arnold Network (KAN)-based framework for detecting cyberattacks on
EV chargers using only power consumption measurements. Leveraging the KAN's
capability to model nonlinear, high-dimensional functions and its inherently
interpretable architecture, the framework effectively differentiates between
normal and malicious charging scenarios. The model is trained offline on a
comprehensive dataset containing over 100,000 cyberattack cases generated
through an experimental setup. Once trained, the KAN model can be deployed
within individual chargers for real-time detection of abnormal charging
behaviors indicative of cyberattacks. Our results demonstrate that the proposed
KAN-based approach can accurately detect cyberattacks on EV chargers with
Precision and F1-score of 99% and 92%, respectively, outperforming existing
detection methods. Additionally, the proposed KAN enables the extraction of
mathematical formulas representing its detection decisions, addressing
interpretability, a key challenge in deep learning-based cybersecurity
frameworks. This work marks a significant step toward building secure and
explainable EV charging infrastructure.
| new_dataset | 0.949529 |
2503.02284 | Soekun Kang | Seokun Kang, Taehwan Kim | Semi-Supervised Audio-Visual Video Action Recognition with Audio Source
Localization Guided Mixup | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Video action recognition is a challenging but important task for
understanding and discovering what the video does. However, acquiring
annotations for a video is costly, and semi-supervised learning (SSL) has been
studied to improve performance even with a small number of labeled data in the
task. Prior studies for semi-supervised video action recognition have mostly
focused on using single modality - visuals - but the video is multi-modal, so
utilizing both visuals and audio would be desirable and improve performance
further, which has not been explored well. Therefore, we propose audio-visual
SSL for video action recognition, which uses both visual and audio together,
even with quite a few labeled data, which is challenging. In addition, to
maximize the information of audio and video, we propose a novel audio source
localization-guided mixup method that considers inter-modal relations between
video and audio modalities. In experiments on UCF-51, Kinetics-400, and
VGGSound datasets, our model shows the superior performance of the proposed
semi-supervised audio-visual action recognition framework and audio source
localization-guided mixup.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 05:13:56 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Kang",
"Seokun",
""
],
[
"Kim",
"Taehwan",
""
]
]
| TITLE: Semi-Supervised Audio-Visual Video Action Recognition with Audio Source
Localization Guided Mixup
ABSTRACT: Video action recognition is a challenging but important task for
understanding and discovering what the video does. However, acquiring
annotations for a video is costly, and semi-supervised learning (SSL) has been
studied to improve performance even with a small number of labeled data in the
task. Prior studies for semi-supervised video action recognition have mostly
focused on using single modality - visuals - but the video is multi-modal, so
utilizing both visuals and audio would be desirable and improve performance
further, which has not been explored well. Therefore, we propose audio-visual
SSL for video action recognition, which uses both visual and audio together,
even with quite a few labeled data, which is challenging. In addition, to
maximize the information of audio and video, we propose a novel audio source
localization-guided mixup method that considers inter-modal relations between
video and audio modalities. In experiments on UCF-51, Kinetics-400, and
VGGSound datasets, our model shows the superior performance of the proposed
semi-supervised audio-visual action recognition framework and audio source
localization-guided mixup.
| no_new_dataset | 0.947672 |
2503.02298 | Ziyang Zeng | Ziyang Zeng, Dongyuan Li and Yuqing Yang | Towards Explainable Doctor Recommendation with Large Language Models | 12 pages, 6 figures, Journal of Biomedical and Health Informatics
(JBHI) under review | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advent of internet medicine provides patients with unprecedented
convenience in searching and communicating with doctors relevant to their
diseases and desired treatments online. However, the current doctor
recommendation systems fail to fully ensure the professionalism and
interpretability of the recommended results. In this work, we formulate doctor
recommendation as a ranking task and develop a large language model (LLM)-based
pointwise ranking framework. Our framework ranks doctors according to their
relevance to specific disease-treatment pairs in a zero-shot setting.
The advantage of our framework lies in its ability to generate precise and
explainable doctor ranking results. Additionally, we construct DrRank, a new
expertise-driven doctor ranking dataset comprising over 38 disease-treatment
pairs. Experiment results on the DrRank dataset demonstrate that our framework
significantly outperforms the strongest cross-encoder baseline, achieving a
notable gain of +5.45 in the NDCG@10 score while maintaining affordable latency
consumption. Furthermore, we comprehensively present the fairness analysis
results of our framework from three perspectives of different diseases, patient
gender, and geographical regions. Meanwhile, the interpretability of our
framework is rigorously verified by three human experts, providing further
evidence of the reliability of our proposed framework for doctor
recommendation.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 05:48:07 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zeng",
"Ziyang",
""
],
[
"Li",
"Dongyuan",
""
],
[
"Yang",
"Yuqing",
""
]
]
| TITLE: Towards Explainable Doctor Recommendation with Large Language Models
ABSTRACT: The advent of internet medicine provides patients with unprecedented
convenience in searching and communicating with doctors relevant to their
diseases and desired treatments online. However, the current doctor
recommendation systems fail to fully ensure the professionalism and
interpretability of the recommended results. In this work, we formulate doctor
recommendation as a ranking task and develop a large language model (LLM)-based
pointwise ranking framework. Our framework ranks doctors according to their
relevance to specific disease-treatment pairs in a zero-shot setting.
The advantage of our framework lies in its ability to generate precise and
explainable doctor ranking results. Additionally, we construct DrRank, a new
expertise-driven doctor ranking dataset comprising over 38 disease-treatment
pairs. Experiment results on the DrRank dataset demonstrate that our framework
significantly outperforms the strongest cross-encoder baseline, achieving a
notable gain of +5.45 in the NDCG@10 score while maintaining affordable latency
consumption. Furthermore, we comprehensively present the fairness analysis
results of our framework from three perspectives of different diseases, patient
gender, and geographical regions. Meanwhile, the interpretability of our
framework is rigorously verified by three human experts, providing further
evidence of the reliability of our proposed framework for doctor
recommendation.
| new_dataset | 0.961425 |
2503.02300 | Zhi Zheng | Ruixin Wu, Zihan Li, Jin Wang, Xiangyu Xu, Huan Yu, Zhi Zheng,
Kaixiang Huang and Guodong Lu | Diffusion-Based mmWave Radar Point Cloud Enhancement Driven by Range
Images | 8 pages, 7 figures, submitted to 2025 IROS. This work has been
submitted to the IEEE for possible publication | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Millimeter-wave (mmWave) radar has attracted significant attention in
robotics and autonomous driving. However, despite the perception stability in
harsh environments, the point cloud generated by mmWave radar is relatively
sparse while containing significant noise, which limits its further
development. Traditional mmWave radar enhancement approaches often struggle to
leverage the effectiveness of diffusion models in super-resolution, largely due
to the unnatural range-azimuth heatmap (RAH) or bird's eye view (BEV)
representation. To overcome this limitation, we propose a novel method that
pioneers the application of fusing range images with image diffusion models,
achieving accurate and dense mmWave radar point clouds that are similar to
LiDAR. Benefitting from the projection that aligns with human observation, the
range image representation of mmWave radar is close to natural images, allowing
the knowledge from pre-trained image diffusion models to be effectively
transferred, significantly improving the overall performance. Extensive
evaluations on both public datasets and self-constructed datasets demonstrate
that our approach provides substantial improvements, establishing a new
state-of-the-art performance in generating truly three-dimensional LiDAR-like
point clouds via mmWave radar.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 06:00:15 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wu",
"Ruixin",
""
],
[
"Li",
"Zihan",
""
],
[
"Wang",
"Jin",
""
],
[
"Xu",
"Xiangyu",
""
],
[
"Yu",
"Huan",
""
],
[
"Zheng",
"Zhi",
""
],
[
"Huang",
"Kaixiang",
""
],
[
"Lu",
"Guodong",
""
]
]
| TITLE: Diffusion-Based mmWave Radar Point Cloud Enhancement Driven by Range
Images
ABSTRACT: Millimeter-wave (mmWave) radar has attracted significant attention in
robotics and autonomous driving. However, despite the perception stability in
harsh environments, the point cloud generated by mmWave radar is relatively
sparse while containing significant noise, which limits its further
development. Traditional mmWave radar enhancement approaches often struggle to
leverage the effectiveness of diffusion models in super-resolution, largely due
to the unnatural range-azimuth heatmap (RAH) or bird's eye view (BEV)
representation. To overcome this limitation, we propose a novel method that
pioneers the application of fusing range images with image diffusion models,
achieving accurate and dense mmWave radar point clouds that are similar to
LiDAR. Benefitting from the projection that aligns with human observation, the
range image representation of mmWave radar is close to natural images, allowing
the knowledge from pre-trained image diffusion models to be effectively
transferred, significantly improving the overall performance. Extensive
evaluations on both public datasets and self-constructed datasets demonstrate
that our approach provides substantial improvements, establishing a new
state-of-the-art performance in generating truly three-dimensional LiDAR-like
point clouds via mmWave radar.
| new_dataset | 0.730049 |
2503.02311 | Akifumi Wachi | Kensuke Tatematsu, Akifumi Wachi | Target Return Optimizer for Multi-Game Decision Transformer | 10 pages | null | null | null | cs.LG cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Achieving autonomous agents with robust generalization capabilities across
diverse games and tasks remains one of the ultimate goals in AI research.
Recent advancements in transformer-based offline reinforcement learning,
exemplified by the MultiGame Decision Transformer [Lee et al., 2022], have
shown remarkable performance across various games or tasks. However, these
approaches depend heavily on human expertise, presenting substantial challenges
for practical deployment, particularly in scenarios with limited prior
game-specific knowledge. In this paper, we propose an algorithm called
Multi-Game Target Return Optimizer (MTRO) to autonomously determine
game-specific target returns within the Multi-Game Decision Transformer
framework using solely offline datasets. MTRO addresses the existing
limitations by automating the target return configuration process, leveraging
environmental reward information extracted from offline datasets. Notably, MTRO
does not require additional training, enabling seamless integration into
existing Multi-Game Decision Transformer architectures. Our experimental
evaluations on Atari games demonstrate that MTRO enhances the performance of RL
policies across a wide array of games, underscoring its potential to advance
the field of autonomous agent development.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 06:13:53 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Tatematsu",
"Kensuke",
""
],
[
"Wachi",
"Akifumi",
""
]
]
| TITLE: Target Return Optimizer for Multi-Game Decision Transformer
ABSTRACT: Achieving autonomous agents with robust generalization capabilities across
diverse games and tasks remains one of the ultimate goals in AI research.
Recent advancements in transformer-based offline reinforcement learning,
exemplified by the MultiGame Decision Transformer [Lee et al., 2022], have
shown remarkable performance across various games or tasks. However, these
approaches depend heavily on human expertise, presenting substantial challenges
for practical deployment, particularly in scenarios with limited prior
game-specific knowledge. In this paper, we propose an algorithm called
Multi-Game Target Return Optimizer (MTRO) to autonomously determine
game-specific target returns within the Multi-Game Decision Transformer
framework using solely offline datasets. MTRO addresses the existing
limitations by automating the target return configuration process, leveraging
environmental reward information extracted from offline datasets. Notably, MTRO
does not require additional training, enabling seamless integration into
existing Multi-Game Decision Transformer architectures. Our experimental
evaluations on Atari games demonstrate that MTRO enhances the performance of RL
policies across a wide array of games, underscoring its potential to advance
the field of autonomous agent development.
| no_new_dataset | 0.940953 |
2503.02312 | Aviv Shamsian | Aviv Shamsian, Eitan Shaar, Aviv Navon, Gal Chechik, Ethan Fetaya | Go Beyond Your Means: Unlearning with Per-Sample Gradient
Orthogonalization | Under Review | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Machine unlearning aims to remove the influence of problematic training data
after a model has been trained. The primary challenge in machine unlearning is
ensuring that the process effectively removes specified data without
compromising the model's overall performance on the remaining dataset. Many
existing machine unlearning methods address this challenge by carefully
balancing gradient ascent on the unlearn data with the gradient descent on a
retain set representing the training data. Here, we propose OrthoGrad, a novel
approach that mitigates interference between the unlearn set and the retain set
rather than competing ascent and descent processes. Our method projects the
gradient of the unlearn set onto the subspace orthogonal to all gradients in
the retain batch, effectively avoiding any gradient interference. We
demonstrate the effectiveness of OrthoGrad on multiple machine unlearning
benchmarks, including automatic speech recognition, outperforming competing
methods.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 06:14:33 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Shamsian",
"Aviv",
""
],
[
"Shaar",
"Eitan",
""
],
[
"Navon",
"Aviv",
""
],
[
"Chechik",
"Gal",
""
],
[
"Fetaya",
"Ethan",
""
]
]
| TITLE: Go Beyond Your Means: Unlearning with Per-Sample Gradient
Orthogonalization
ABSTRACT: Machine unlearning aims to remove the influence of problematic training data
after a model has been trained. The primary challenge in machine unlearning is
ensuring that the process effectively removes specified data without
compromising the model's overall performance on the remaining dataset. Many
existing machine unlearning methods address this challenge by carefully
balancing gradient ascent on the unlearn data with the gradient descent on a
retain set representing the training data. Here, we propose OrthoGrad, a novel
approach that mitigates interference between the unlearn set and the retain set
rather than competing ascent and descent processes. Our method projects the
gradient of the unlearn set onto the subspace orthogonal to all gradients in
the retain batch, effectively avoiding any gradient interference. We
demonstrate the effectiveness of OrthoGrad on multiple machine unlearning
benchmarks, including automatic speech recognition, outperforming competing
methods.
| no_new_dataset | 0.944125 |
2503.02318 | Xie Zhifei | Zhifei Xie, Mingbao Lin, Zihang Liu, Pengcheng Wu, Shuicheng Yan and
Chunyan Miao | Audio-Reasoner: Improving Reasoning Capability in Large Audio Language
Models | Technical report, in process | null | null | null | cs.SD cs.AI cs.CL cs.LG cs.MM eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in multimodal reasoning have largely overlooked the audio
modality. We introduce Audio-Reasoner, a large-scale audio language model for
deep reasoning in audio tasks. We meticulously curated a large-scale and
diverse multi-task audio dataset with simple annotations. Then, we leverage
closed-source models to conduct secondary labeling and QA generation, along with a
structured CoT process. These datasets together form a high-quality reasoning
dataset with 1.2 million reasoning-rich samples, which we name CoTA. Following
inference scaling principles, we train Audio-Reasoner on CoTA, enabling it to
achieve great logical capabilities in audio reasoning. Experiments show
state-of-the-art performance across key benchmarks, including MMAU-mini
(+25.42%), AIR-Bench chat/foundation(+14.57%/+10.13%), and MELD (+8.01%). Our
findings stress the central role of structured CoT training in advancing audio
reasoning.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 06:18:34 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Xie",
"Zhifei",
""
],
[
"Lin",
"Mingbao",
""
],
[
"Liu",
"Zihang",
""
],
[
"Wu",
"Pengcheng",
""
],
[
"Yan",
"Shuicheng",
""
],
[
"Miao",
"Chunyan",
""
]
]
| TITLE: Audio-Reasoner: Improving Reasoning Capability in Large Audio Language
Models
ABSTRACT: Recent advancements in multimodal reasoning have largely overlooked the audio
modality. We introduce Audio-Reasoner, a large-scale audio language model for
deep reasoning in audio tasks. We meticulously curated a large-scale and
diverse multi-task audio dataset with simple annotations. Then, we leverage
closed-source models to conduct secondary labeling and QA generation, along with a
structured CoT process. These datasets together form a high-quality reasoning
dataset with 1.2 million reasoning-rich samples, which we name CoTA. Following
inference scaling principles, we train Audio-Reasoner on CoTA, enabling it to
achieve great logical capabilities in audio reasoning. Experiments show
state-of-the-art performance across key benchmarks, including MMAU-mini
(+25.42%), AIR-Bench chat/foundation(+14.57%/+10.13%), and MELD (+8.01%). Our
findings stress the central role of structured CoT training in advancing audio
reasoning.
| new_dataset | 0.949389 |
2503.02322 | Jiahui Luo | Jiahui Luo, Kai Feng, Haijin Zeng, Yongyong Chen | Generative Model-Assisted Demosaicing for Cross-multispectral Cameras | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a crucial part of the spectral filter array (SFA)-based multispectral
imaging process, spectral demosaicing has exploded with the proliferation of
deep learning techniques. However, (1) hindered by the difficulty of capturing
corresponding labels for real data or simulating the practical spectral imaging
process, end-to-end networks trained in a supervised manner using simulated
data often perform poorly on real data. (2) cross-camera spectral discrepancies
make it difficult to apply pre-trained models to new cameras. (3) existing
demosaicing networks are prone to introducing visual artifacts on hard cases
due to the interpolation of unknown values. To address these issues, we propose
a hybrid supervised training method with the assistance of the self-supervised
generative model, which performs well on real data across different spectral
cameras. Specifically, our approach consists of three steps: (1) Pre-Training
step: training the end-to-end neural network on a large amount of simulated
data; (2) Pseudo-Pairing step: generating pseudo-labels of real target data
using the self-supervised generative model; (3) Fine-Tuning step: fine-tuning
the pre-trained model on the pseudo data pairs obtained in (2). To alleviate
artifacts, we propose a frequency-domain hard patch selection method that
identifies artifact-prone regions by analyzing spectral discrepancies using
Fourier transform and filtering techniques, allowing targeted fine-tuning to
enhance demosaicing performance. Finally, we propose UniSpecTest, a real-world
multispectral mosaic image dataset for testing. Ablation experiments have
demonstrated the effectiveness of each training step, and extensive experiments
on both synthetic and real datasets show that our method achieves significant
performance gains compared to state-of-the-art techniques.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 06:27:05 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Luo",
"Jiahui",
""
],
[
"Feng",
"Kai",
""
],
[
"Zeng",
"Haijin",
""
],
[
"Chen",
"Yongyong",
""
]
]
| TITLE: Generative Model-Assisted Demosaicing for Cross-multispectral Cameras
ABSTRACT: As a crucial part of the spectral filter array (SFA)-based multispectral
imaging process, spectral demosaicing has exploded with the proliferation of
deep learning techniques. However, (1) hindered by the difficulty of capturing
corresponding labels for real data or simulating the practical spectral imaging
process, end-to-end networks trained in a supervised manner using simulated
data often perform poorly on real data. (2) cross-camera spectral discrepancies
make it difficult to apply pre-trained models to new cameras. (3) existing
demosaicing networks are prone to introducing visual artifacts on hard cases
due to the interpolation of unknown values. To address these issues, we propose
a hybrid supervised training method with the assistance of the self-supervised
generative model, which performs well on real data across different spectral
cameras. Specifically, our approach consists of three steps: (1) Pre-Training
step: training the end-to-end neural network on a large amount of simulated
data; (2) Pseudo-Pairing step: generating pseudo-labels of real target data
using the self-supervised generative model; (3) Fine-Tuning step: fine-tuning
the pre-trained model on the pseudo data pairs obtained in (2). To alleviate
artifacts, we propose a frequency-domain hard patch selection method that
identifies artifact-prone regions by analyzing spectral discrepancies using
Fourier transform and filtering techniques, allowing targeted fine-tuning to
enhance demosaicing performance. Finally, we propose UniSpecTest, a real-world
multispectral mosaic image dataset for testing. Ablation experiments have
demonstrated the effectiveness of each training step, and extensive experiments
on both synthetic and real datasets show that our method achieves significant
performance gains compared to state-of-the-art techniques.
| no_new_dataset | 0.954858 |
2503.02324 | Xueliang Zhao | Xueliang Zhao, Wei Wu, Jian Guan, Lingpeng Kong | PromptCoT: Synthesizing Olympiad-level Problems for Mathematical
Reasoning in Large Language Models | Preprint | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The ability of large language models to solve complex mathematical problems
has progressed significantly, particularly for tasks requiring advanced
reasoning. However, the scarcity of sufficiently challenging problems,
particularly at the Olympiad level, hinders further advancements. In this work,
we introduce PromptCoT, a novel approach for automatically generating
high-quality Olympiad-level math problems. The proposed method synthesizes
complex problems based on mathematical concepts and the rationale behind
problem construction, emulating the thought processes of experienced problem
designers. We provide a theoretical analysis demonstrating that an optimal
rationale should maximize both the likelihood of rationale generation given the
associated concepts and the likelihood of problem generation conditioned on
both the rationale and the concepts. Our method is evaluated on standard
benchmarks including GSM8K, MATH-500, and AIME2024, where it consistently
outperforms existing problem generation methods. Furthermore, we demonstrate
that PromptCoT exhibits superior data scalability, consistently maintaining
high performance as the dataset size increases, outperforming the baselines.
The implementation is available at https://github.com/zhaoxlpku/PromptCoT.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 06:32:30 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhao",
"Xueliang",
""
],
[
"Wu",
"Wei",
""
],
[
"Guan",
"Jian",
""
],
[
"Kong",
"Lingpeng",
""
]
]
| TITLE: PromptCoT: Synthesizing Olympiad-level Problems for Mathematical
Reasoning in Large Language Models
ABSTRACT: The ability of large language models to solve complex mathematical problems
has progressed significantly, particularly for tasks requiring advanced
reasoning. However, the scarcity of sufficiently challenging problems,
particularly at the Olympiad level, hinders further advancements. In this work,
we introduce PromptCoT, a novel approach for automatically generating
high-quality Olympiad-level math problems. The proposed method synthesizes
complex problems based on mathematical concepts and the rationale behind
problem construction, emulating the thought processes of experienced problem
designers. We provide a theoretical analysis demonstrating that an optimal
rationale should maximize both the likelihood of rationale generation given the
associated concepts and the likelihood of problem generation conditioned on
both the rationale and the concepts. Our method is evaluated on standard
benchmarks including GSM8K, MATH-500, and AIME2024, where it consistently
outperforms existing problem generation methods. Furthermore, we demonstrate
that PromptCoT exhibits superior data scalability, consistently maintaining
high performance as the dataset size increases, outperforming the baselines.
The implementation is available at https://github.com/zhaoxlpku/PromptCoT.
| no_new_dataset | 0.941223 |
2503.02328 | Eun Cheol Choi | Eun Cheol Choi and Ashwin Balasubramanian and Jinhu Qi and Emilio
Ferrara | Limited Effectiveness of LLM-based Data Augmentation for COVID-19
Misinformation Stance Detection | null | null | 10.1145/3701716.3715521 | null | cs.CL cs.CY cs.HC cs.SI | http://creativecommons.org/licenses/by/4.0/ | Misinformation surrounding emerging outbreaks poses a serious societal
threat, making robust countermeasures essential. One promising approach is
stance detection (SD), which identifies whether social media posts support or
oppose misleading claims. In this work, we finetune classifiers on COVID-19
misinformation SD datasets consisting of claims and corresponding tweets.
Specifically, we test controllable misinformation generation (CMG) using large
language models (LLMs) as a method for data augmentation. While CMG
demonstrates the potential for expanding training datasets, our experiments
reveal that performance gains over traditional augmentation methods are often
minimal and inconsistent, primarily due to built-in safeguards within LLMs. We
release our code and datasets to facilitate further research on misinformation
detection and generation.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 06:38:29 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Choi",
"Eun Cheol",
""
],
[
"Balasubramanian",
"Ashwin",
""
],
[
"Qi",
"Jinhu",
""
],
[
"Ferrara",
"Emilio",
""
]
]
| TITLE: Limited Effectiveness of LLM-based Data Augmentation for COVID-19
Misinformation Stance Detection
ABSTRACT: Misinformation surrounding emerging outbreaks poses a serious societal
threat, making robust countermeasures essential. One promising approach is
stance detection (SD), which identifies whether social media posts support or
oppose misleading claims. In this work, we finetune classifiers on COVID-19
misinformation SD datasets consisting of claims and corresponding tweets.
Specifically, we test controllable misinformation generation (CMG) using large
language models (LLMs) as a method for data augmentation. While CMG
demonstrates the potential for expanding training datasets, our experiments
reveal that performance gains over traditional augmentation methods are often
minimal and inconsistent, primarily due to built-in safeguards within LLMs. We
release our code and datasets to facilitate further research on misinformation
detection and generation.
| new_dataset | 0.949342 |
2503.02333 | Vedika Gupta | Sarvesh Arora, Sarthak Arora, Deepika Kumar, Vallari Agrawal, Vedika
Gupta, Dipit Vasdev | Examining the Mental Health Impact of Misinformation on Social Media
Using a Hybrid Transformer-Based Approach | 20 pages | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social media has significantly reshaped interpersonal communication,
fostering connectivity while also enabling the proliferation of misinformation.
The unchecked spread of false narratives has profound effects on mental health,
contributing to increased stress, anxiety, and misinformation-driven paranoia.
This study presents a hybrid transformer-based approach using a RoBERTa-LSTM
classifier to detect misinformation, assess its impact on mental health, and
classify disorders linked to misinformation exposure. The proposed models
demonstrate accuracy rates of 98.4, 87.8, and 77.3 in detecting misinformation,
mental health implications, and disorder classification, respectively.
Furthermore, Pearson's Chi-Squared Test for Independence (p-value = 0.003871)
validates the direct correlation between misinformation and deteriorating
mental well-being. This study underscores the urgent need for better
misinformation management strategies to mitigate its psychological
repercussions. Future research could explore broader datasets incorporating
linguistic, demographic, and cultural variables to deepen the understanding of
misinformation-induced mental health distress.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 06:45:17 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Arora",
"Sarvesh",
""
],
[
"Arora",
"Sarthak",
""
],
[
"Kumar",
"Deepika",
""
],
[
"Agrawal",
"Vallari",
""
],
[
"Gupta",
"Vedika",
""
],
[
"Vasdev",
"Dipit",
""
]
]
| TITLE: Examining the Mental Health Impact of Misinformation on Social Media
Using a Hybrid Transformer-Based Approach
ABSTRACT: Social media has significantly reshaped interpersonal communication,
fostering connectivity while also enabling the proliferation of misinformation.
The unchecked spread of false narratives has profound effects on mental health,
contributing to increased stress, anxiety, and misinformation-driven paranoia.
This study presents a hybrid transformer-based approach using a RoBERTa-LSTM
classifier to detect misinformation, assess its impact on mental health, and
classify disorders linked to misinformation exposure. The proposed models
demonstrate accuracy rates of 98.4, 87.8, and 77.3 in detecting misinformation,
mental health implications, and disorder classification, respectively.
Furthermore, Pearson's Chi-Squared Test for Independence (p-value = 0.003871)
validates the direct correlation between misinformation and deteriorating
mental well-being. This study underscores the urgent need for better
misinformation management strategies to mitigate its psychological
repercussions. Future research could explore broader datasets incorporating
linguistic, demographic, and cultural variables to deepen the understanding of
misinformation-induced mental health distress.
| no_new_dataset | 0.945197 |
2503.02335 | Renshuang Jiang | Renshuang Jiang, Pan Dong, Zhenling Duan, Yu Shi, Xiaoxiang Fang, Yan
Ding, Jun Ma, Shuai Zhao and Zhe Jiang | Unlocking a New Rust Programming Experience: Fast and Slow Thinking with
LLMs to Conquer Undefined Behaviors | null | null | null | null | cs.SE cs.CL | http://creativecommons.org/licenses/by/4.0/ | To provide flexibility and low-level interaction capabilities, the unsafe tag
in Rust is essential in many projects, but undermines memory safety and
introduces Undefined Behaviors (UBs) that reduce safety. Eliminating these UBs
requires a deep understanding of Rust's safety rules and strong typing.
Traditional methods require depth analysis of code, which is laborious and
depends on knowledge design. The powerful semantic understanding capabilities
of LLMs offer new opportunities to solve this problem. Although existing
large-model debugging frameworks excel in semantic tasks, they are limited by fixed
processes and lack adaptive and dynamic adjustment capabilities. Inspired by the
dual-process theory of decision-making (Fast and Slow Thinking), we present an
LLM-based framework called RustBrain that automatically and flexibly minimizes
UBs in Rust projects. Fast thinking extracts features to generate solutions,
while slow thinking decomposes, verifies, and generalizes them abstractly. To
apply verification and generalization results to solution generation, enabling
dynamic adjustments and precise outputs, RustBrain integrates the two thinking
modes through a feedback mechanism. Experimental results on the Miri dataset show a
94.3% pass rate and 80.4% execution rate, improving flexibility and the safety of
Rust projects.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 06:48:45 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Jiang",
"Renshuang",
""
],
[
"Dong",
"Pan",
""
],
[
"Duan",
"Zhenling",
""
],
[
"Shi",
"Yu",
""
],
[
"Fang",
"Xiaoxiang",
""
],
[
"Ding",
"Yan",
""
],
[
"Ma",
"Jun",
""
],
[
"Zhao",
"Shuai",
""
],
[
"Jiang",
"Zhe",
""
]
]
| TITLE: Unlocking a New Rust Programming Experience: Fast and Slow Thinking with
LLMs to Conquer Undefined Behaviors
ABSTRACT: To provide flexibility and low-level interaction capabilities, the unsafe tag
in Rust is essential in many projects, but undermines memory safety and
introduces Undefined Behaviors (UBs) that reduce safety. Eliminating these UBs
requires a deep understanding of Rust's safety rules and strong typing.
Traditional methods require depth analysis of code, which is laborious and
depends on knowledge design. The powerful semantic understanding capabilities
of LLMs offer new opportunities to solve this problem. Although existing
large-model debugging frameworks excel in semantic tasks, they are limited by fixed
processes and lack adaptive and dynamic adjustment capabilities. Inspired by the
dual-process theory of decision-making (Fast and Slow Thinking), we present an
LLM-based framework called RustBrain that automatically and flexibly minimizes
UBs in Rust projects. Fast thinking extracts features to generate solutions,
while slow thinking decomposes, verifies, and generalizes them abstractly. To
apply verification and generalization results to solution generation, enabling
dynamic adjustments and precise outputs, RustBrain integrates the two thinking
modes through a feedback mechanism. Experimental results on the Miri dataset show a
94.3% pass rate and 80.4% execution rate, improving flexibility and the safety of
Rust projects.
| no_new_dataset | 0.938237 |
2503.02338 | Jisoo Hong | Jisoo Hong, Yongmin Hong, Jung-Woo Baek, Sung-Woo Kang | Enhancing the Product Quality of the Injection Process Using eXplainable
Artificial Intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The injection molding process is a traditional technique for making products
in various industries such as electronics and automobiles via solidifying
liquid resin into certain molds. Although the process is not related to
creating the main part of engines or semiconductors, this manufacturing
methodology sets the final form of the products. Recently, research has
continued to reduce the defect rate of the injection molding process. This
study proposes an optimal injection molding process control system to reduce
the defect rate of injection molding products with XAI (eXplainable Artificial
Intelligence) approaches. Boosting algorithms (XGBoost and LightGBM) are used
as tree-based classifiers for predicting whether each product is normal or
defective. The main features to control the process for improving the product
are extracted by SHapley Additive exPlanations, while the individual
conditional expectation analyzes the optimal control range of these extracted
features. To validate the methodology presented in this work, the actual
injection molding AI manufacturing dataset provided by KAMP (Korea AI
Manufacturing Platform) is employed for the case study. The results reveal that
the defect rate decreases from 1.00% (Original defect rate) to 0.21% with
XGBoost and 0.13% with LightGBM, respectively.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 06:59:01 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Hong",
"Jisoo",
""
],
[
"Hong",
"Yongmin",
""
],
[
"Baek",
"Jung-Woo",
""
],
[
"Kang",
"Sung-Woo",
""
]
]
| TITLE: Enhancing the Product Quality of the Injection Process Using eXplainable
Artificial Intelligence
ABSTRACT: The injection molding process is a traditional technique for making products
in various industries such as electronics and automobiles via solidifying
liquid resin into certain molds. Although the process is not related to
creating the main part of engines or semiconductors, this manufacturing
methodology sets the final form of the products. Recently, research has
continued to reduce the defect rate of the injection molding process. This
study proposes an optimal injection molding process control system to reduce
the defect rate of injection molding products with XAI (eXplainable Artificial
Intelligence) approaches. Boosting algorithms (XGBoost and LightGBM) are used
as tree-based classifiers for predicting whether each product is normal or
defective. The main features to control the process for improving the product
are extracted by SHapley Additive exPlanations, while the individual
conditional expectation analyzes the optimal control range of these extracted
features. To validate the methodology presented in this work, the actual
injection molding AI manufacturing dataset provided by KAMP (Korea AI
Manufacturing Platform) is employed for the case study. The results reveal that
the defect rate decreases from 1.00% (Original defect rate) to 0.21% with
XGBoost and 0.13% with LightGBM, respectively.
| no_new_dataset | 0.951188 |
2503.02341 | Zhun Mou | Zhun Mou, Bin Xia, Zhengchao Huang, Wenming Yang, Jiaya Jia | GRADEO: Towards Human-Like Evaluation for Text-to-Video Generation via
Multi-Step Reasoning | null | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent great advances in video generation models have demonstrated their
potential to produce high-quality videos, bringing challenges to effective
evaluation. Unlike human evaluation, existing automated evaluation metrics lack
high-level semantic understanding and reasoning capabilities for video, thus
making them infeasible and unexplainable. To fill this gap, we curate
GRADEO-Instruct, a multi-dimensional T2V evaluation instruction tuning dataset,
including 3.3k videos from over 10 existing video generation models and
multi-step reasoning assessments converted by 16k human annotations. We then
introduce GRADEO, one of the first specifically designed video evaluation
models, which grades AI-generated videos for explainable scores and assessments
through multi-step reasoning. Experiments show that our method aligns better
with human evaluations than existing methods. Furthermore, our benchmarking
reveals that current video generation models struggle to produce content that
aligns with human reasoning and complex real-world scenarios. The models,
datasets, and codes will be released soon.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 07:04:55 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Mou",
"Zhun",
""
],
[
"Xia",
"Bin",
""
],
[
"Huang",
"Zhengchao",
""
],
[
"Yang",
"Wenming",
""
],
[
"Jia",
"Jiaya",
""
]
]
| TITLE: GRADEO: Towards Human-Like Evaluation for Text-to-Video Generation via
Multi-Step Reasoning
ABSTRACT: Recent great advances in video generation models have demonstrated their
potential to produce high-quality videos, bringing challenges to effective
evaluation. Unlike human evaluation, existing automated evaluation metrics lack
high-level semantic understanding and reasoning capabilities for video, thus
making them infeasible and unexplainable. To fill this gap, we curate
GRADEO-Instruct, a multi-dimensional T2V evaluation instruction tuning dataset,
including 3.3k videos from over 10 existing video generation models and
multi-step reasoning assessments converted by 16k human annotations. We then
introduce GRADEO, one of the first specifically designed video evaluation
models, which grades AI-generated videos for explainable scores and assessments
through multi-step reasoning. Experiments show that our method aligns better
with human evaluations than existing methods. Furthermore, our benchmarking
reveals that current video generation models struggle to produce content that
aligns with human reasoning and complex real-world scenarios. The models,
datasets, and codes will be released soon.
| new_dataset | 0.952794 |
2503.02353 | Luobin Wang | Luobin Wang, Hongzhan Yu, Chenning Yu, Sicun Gao, Henrik Christensen | Controllable Motion Generation via Diffusion Modal Coupling | null | null | null | null | cs.RO cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion models have recently gained significant attention in robotics due
to their ability to generate multi-modal distributions of system states and
behaviors. However, a key challenge remains: ensuring precise control over the
generated outcomes without compromising realism. This is crucial for
applications such as motion planning or trajectory forecasting, where adherence
to physical constraints and task-specific objectives is essential. We propose a
novel framework that enhances controllability in diffusion models by leveraging
multi-modal prior distributions and enforcing strong modal coupling. This
allows us to initiate the denoising process directly from distinct prior modes
that correspond to different possible system behaviors, ensuring sampling to
align with the training distribution. We evaluate our approach on motion
prediction using the Waymo dataset and multi-task control in Maze2D
environments. Experimental results show that our framework outperforms both
guidance-based techniques and conditioned models with unimodal priors,
achieving superior fidelity, diversity, and controllability, even in the
absence of explicit conditioning. Overall, our approach provides a more
reliable and scalable solution for controllable motion generation in robotics.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 07:22:34 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wang",
"Luobin",
""
],
[
"Yu",
"Hongzhan",
""
],
[
"Yu",
"Chenning",
""
],
[
"Gao",
"Sicun",
""
],
[
"Christensen",
"Henrik",
""
]
]
| TITLE: Controllable Motion Generation via Diffusion Modal Coupling
ABSTRACT: Diffusion models have recently gained significant attention in robotics due
to their ability to generate multi-modal distributions of system states and
behaviors. However, a key challenge remains: ensuring precise control over the
generated outcomes without compromising realism. This is crucial for
applications such as motion planning or trajectory forecasting, where adherence
to physical constraints and task-specific objectives is essential. We propose a
novel framework that enhances controllability in diffusion models by leveraging
multi-modal prior distributions and enforcing strong modal coupling. This
allows us to initiate the denoising process directly from distinct prior modes
that correspond to different possible system behaviors, ensuring sampling to
align with the training distribution. We evaluate our approach on motion
prediction using the Waymo dataset and multi-task control in Maze2D
environments. Experimental results show that our framework outperforms both
guidance-based techniques and conditioned models with unimodal priors,
achieving superior fidelity, diversity, and controllability, even in the
absence of explicit conditioning. Overall, our approach provides a more
reliable and scalable solution for controllable motion generation in robotics.
| no_new_dataset | 0.945701 |
2503.02359 | Zhuo Li | Zhuo Li, Yuhao Du, Xiaoqi Jiao, Yiwen Guo, Yuege Feng, Xiang Wan,
Anningzhe Gao, Jinpeng Hu | Add-One-In: Incremental Sample Selection for Large Language Models via a
Choice-Based Greedy Paradigm | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Selecting high-quality and diverse training samples from extensive datasets
plays a crucial role in reducing training overhead and enhancing the
performance of Large Language Models (LLMs). However, existing studies fall
short in assessing the overall value of selected data, focusing primarily on
individual quality, and struggle to strike an effective balance between
ensuring diversity and minimizing data point traversals. Therefore, this paper
introduces a novel choice-based sample selection framework that shifts the
focus from evaluating individual sample quality to comparing the contribution
value of different samples when incorporated into the subset. Thanks to the
advanced language understanding capabilities of LLMs, we utilize LLMs to
evaluate the value of each option during the selection process. Furthermore, we
design a greedy sampling process where samples are incrementally added to the
subset, thereby improving efficiency by eliminating the need for exhaustive
traversal of the entire dataset with the limited budget. Extensive experiments
demonstrate that selected data from our method not only surpass the performance
of the full dataset but also achieve competitive results with state-of-the-art
(SOTA) studies, while requiring fewer selections. Moreover, we validate our
approach on a larger medical dataset, highlighting its practical applicability
in real-world applications.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 07:32:41 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Li",
"Zhuo",
""
],
[
"Du",
"Yuhao",
""
],
[
"Jiao",
"Xiaoqi",
""
],
[
"Guo",
"Yiwen",
""
],
[
"Feng",
"Yuege",
""
],
[
"Wan",
"Xiang",
""
],
[
"Gao",
"Anningzhe",
""
],
[
"Hu",
"Jinpeng",
""
]
]
| TITLE: Add-One-In: Incremental Sample Selection for Large Language Models via a
Choice-Based Greedy Paradigm
ABSTRACT: Selecting high-quality and diverse training samples from extensive datasets
plays a crucial role in reducing training overhead and enhancing the
performance of Large Language Models (LLMs). However, existing studies fall
short in assessing the overall value of selected data, focusing primarily on
individual quality, and struggle to strike an effective balance between
ensuring diversity and minimizing data point traversals. Therefore, this paper
introduces a novel choice-based sample selection framework that shifts the
focus from evaluating individual sample quality to comparing the contribution
value of different samples when incorporated into the subset. Thanks to the
advanced language understanding capabilities of LLMs, we utilize LLMs to
evaluate the value of each option during the selection process. Furthermore, we
design a greedy sampling process where samples are incrementally added to the
subset, thereby improving efficiency by eliminating the need for exhaustive
traversal of the entire dataset with the limited budget. Extensive experiments
demonstrate that selected data from our method not only surpass the performance
of the full dataset but also achieve competitive results with state-of-the-art
(SOTA) studies, while requiring fewer selections. Moreover, we validate our
approach on a larger medical dataset, highlighting its practical applicability
in real-world applications.
| no_new_dataset | 0.948298 |
2503.02360 | Hasan Mahmud | Husne Ara Rubaiyeat, Njayou Youssouf, Md Kamrul Hasan, Hasan Mahmud | BdSLW401: Transformer-Based Word-Level Bangla Sign Language Recognition
Using Relative Quantization Encoding (RQE) | null | null | null | null | cs.CV cs.AI cs.HC cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Sign language recognition (SLR) for low-resource languages like Bangla
suffers from signer variability, viewpoint variations, and limited annotated
datasets. In this paper, we present BdSLW401, a large-scale, multi-view,
word-level Bangla Sign Language (BdSL) dataset with 401 signs and 102,176 video
samples from 18 signers in front and lateral views. To improve
transformer-based SLR, we introduce Relative Quantization Encoding (RQE), a
structured embedding approach anchoring landmarks to physiological reference
points and quantizing motion trajectories. RQE improves attention allocation by
decreasing spatial variability, resulting in 44.3% WER reduction in WLASL100,
21.0% in SignBD-200, and significant gains in BdSLW60 and SignBD-90. However,
fixed quantization becomes insufficient on large-scale datasets (e.g.,
WLASL2000), indicating the need for adaptive encoding strategies. Further,
RQE-SF, an extended variant that stabilizes shoulder landmarks, achieves
improvements in pose consistency at the cost of small trade-offs in lateral
view recognition. The attention graphs prove that RQE improves model
interpretability by focusing on the major articulatory features (fingers,
wrists) and the more distinctive frames instead of global pose changes.
Introducing BdSLW401 and demonstrating the effectiveness of RQE-enhanced
structured embeddings, this work advances transformer-based SLR for
low-resource languages and sets a benchmark for future research in this area.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 07:34:06 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Rubaiyeat",
"Husne Ara",
""
],
[
"Youssouf",
"Njayou",
""
],
[
"Hasan",
"Md Kamrul",
""
],
[
"Mahmud",
"Hasan",
""
]
]
| TITLE: BdSLW401: Transformer-Based Word-Level Bangla Sign Language Recognition
Using Relative Quantization Encoding (RQE)
ABSTRACT: Sign language recognition (SLR) for low-resource languages like Bangla
suffers from signer variability, viewpoint variations, and limited annotated
datasets. In this paper, we present BdSLW401, a large-scale, multi-view,
word-level Bangla Sign Language (BdSL) dataset with 401 signs and 102,176 video
samples from 18 signers in front and lateral views. To improve
transformer-based SLR, we introduce Relative Quantization Encoding (RQE), a
structured embedding approach anchoring landmarks to physiological reference
points and quantizing motion trajectories. RQE improves attention allocation by
decreasing spatial variability, resulting in 44.3% WER reduction in WLASL100,
21.0% in SignBD-200, and significant gains in BdSLW60 and SignBD-90. However,
fixed quantization becomes insufficient on large-scale datasets (e.g.,
WLASL2000), indicating the need for adaptive encoding strategies. Further,
RQE-SF, an extended variant that stabilizes shoulder landmarks, achieves
improvements in pose consistency at the cost of small trade-offs in lateral
view recognition. The attention graphs prove that RQE improves model
interpretability by focusing on the major articulatory features (fingers,
wrists) and the more distinctive frames instead of global pose changes.
Introducing BdSLW401 and demonstrating the effectiveness of RQE-enhanced
structured embeddings, this work advances transformer-based SLR for
low-resource languages and sets a benchmark for future research in this area.
| no_new_dataset | 0.82741 |
2503.02374 | Haoan Jin | Haoan Jin, Jiacheng Shi, Hanhui Xu, Kenny Q. Zhu, Mengyue Wu | MedEthicEval: Evaluating Large Language Models Based on Chinese Medical
Ethics | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) demonstrate significant potential in advancing
medical applications, yet their capabilities in addressing medical ethics
challenges remain underexplored. This paper introduces MedEthicEval, a novel
benchmark designed to systematically evaluate LLMs in the domain of medical
ethics. Our framework encompasses two key components: knowledge, assessing the
models' grasp of medical ethics principles, and application, focusing on their
ability to apply these principles across diverse scenarios. To support this
benchmark, we consulted with medical ethics researchers and developed three
datasets addressing distinct ethical challenges: blatant violations of medical
ethics, priority dilemmas with clear inclinations, and equilibrium dilemmas
without obvious resolutions. MedEthicEval serves as a critical tool for
understanding LLMs' ethical reasoning in healthcare, paving the way for their
responsible and effective use in medical contexts.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 08:01:34 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Jin",
"Haoan",
""
],
[
"Shi",
"Jiacheng",
""
],
[
"Xu",
"Hanhui",
""
],
[
"Zhu",
"Kenny Q.",
""
],
[
"Wu",
"Mengyue",
""
]
]
| TITLE: MedEthicEval: Evaluating Large Language Models Based on Chinese Medical
Ethics
ABSTRACT: Large language models (LLMs) demonstrate significant potential in advancing
medical applications, yet their capabilities in addressing medical ethics
challenges remain underexplored. This paper introduces MedEthicEval, a novel
benchmark designed to systematically evaluate LLMs in the domain of medical
ethics. Our framework encompasses two key components: knowledge, assessing the
models' grasp of medical ethics principles, and application, focusing on their
ability to apply these principles across diverse scenarios. To support this
benchmark, we consulted with medical ethics researchers and developed three
datasets addressing distinct ethical challenges: blatant violations of medical
ethics, priority dilemmas with clear inclinations, and equilibrium dilemmas
without obvious resolutions. MedEthicEval serves as a critical tool for
understanding LLMs' ethical reasoning in healthcare, paving the way for their
responsible and effective use in medical contexts.
| new_dataset | 0.951233 |
2503.02375 | Jiarui Yang | Jiarui Yang, Songpengcheng Xia, Zengyuan Lai, Lan Sun, Qi Wu, Wenxian
Yu, Ling Pei | mmDEAR: mmWave Point Cloud Density Enhancement for Accurate Human Body
Reconstruction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Millimeter-wave (mmWave) radar offers robust sensing capabilities in diverse
environments, making it a highly promising solution for human body
reconstruction due to its privacy-friendly and non-intrusive nature. However,
the significant sparsity of mmWave point clouds limits the estimation accuracy.
To overcome this challenge, we propose a two-stage deep learning framework that
enhances mmWave point clouds and improves human body reconstruction accuracy.
Our method includes a mmWave point cloud enhancement module that densifies the
raw data by leveraging temporal features and a multi-stage completion network,
followed by a 2D-3D fusion module that extracts both 2D and 3D motion features
to refine SMPL parameters. The mmWave point cloud enhancement module learns the
detailed shape and posture information from 2D human masks in single-view
images. However, image-based supervision is involved only during the training
phase, and the inference relies solely on sparse point clouds to maintain
privacy. Experiments on multiple datasets demonstrate that our approach
outperforms state-of-the-art methods, with the enhanced point clouds further
improving performance when integrated into existing models.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 08:03:53 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Yang",
"Jiarui",
""
],
[
"Xia",
"Songpengcheng",
""
],
[
"Lai",
"Zengyuan",
""
],
[
"Sun",
"Lan",
""
],
[
"Wu",
"Qi",
""
],
[
"Yu",
"Wenxian",
""
],
[
"Pei",
"Ling",
""
]
]
| TITLE: mmDEAR: mmWave Point Cloud Density Enhancement for Accurate Human Body
Reconstruction
ABSTRACT: Millimeter-wave (mmWave) radar offers robust sensing capabilities in diverse
environments, making it a highly promising solution for human body
reconstruction due to its privacy-friendly and non-intrusive nature. However,
the significant sparsity of mmWave point clouds limits the estimation accuracy.
To overcome this challenge, we propose a two-stage deep learning framework that
enhances mmWave point clouds and improves human body reconstruction accuracy.
Our method includes a mmWave point cloud enhancement module that densifies the
raw data by leveraging temporal features and a multi-stage completion network,
followed by a 2D-3D fusion module that extracts both 2D and 3D motion features
to refine SMPL parameters. The mmWave point cloud enhancement module learns the
detailed shape and posture information from 2D human masks in single-view
images. However, image-based supervision is involved only during the training
phase, and the inference relies solely on sparse point clouds to maintain
privacy. Experiments on multiple datasets demonstrate that our approach
outperforms state-of-the-art methods, with the enhanced point clouds further
improving performance when integrated into existing models.
| no_new_dataset | 0.948965 |
2503.02382 | Sun Wei | Wei Sun, Qianlong Du, Fuwei Cui, Jiajun Zhang | An Efficient and Precise Training Data Construction Framework for
Process-supervised Reward Model in Mathematical Reasoning | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Enhancing the mathematical reasoning capabilities of Large Language Models
(LLMs) is of great scientific and practical significance. Researchers typically
employ process-supervised reward models (PRMs) to guide the reasoning process,
effectively improving the models' reasoning abilities. However, existing
methods for constructing process supervision training data, such as manual
annotation and per-step Monte Carlo estimation, are often costly or suffer from
poor quality. To address these challenges, this paper introduces a framework
called EpicPRM, which annotates each intermediate reasoning step based on its
quantified contribution and uses an adaptive binary search algorithm to enhance
both annotation precision and efficiency. Using this approach, we efficiently
construct a high-quality process supervision training dataset named Epic50k,
consisting of 50k annotated intermediate steps. Compared to other publicly
available datasets, the PRM trained on Epic50k demonstrates significantly
superior performance. Epic50k is available at https://github.com/xiaolizh1/EpicPRM.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 08:18:46 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Sun",
"Wei",
""
],
[
"Du",
"Qianlong",
""
],
[
"Cui",
"Fuwei",
""
],
[
"Zhang",
"Jiajun",
""
]
]
| TITLE: An Efficient and Precise Training Data Construction Framework for
Process-supervised Reward Model in Mathematical Reasoning
ABSTRACT: Enhancing the mathematical reasoning capabilities of Large Language Models
(LLMs) is of great scientific and practical significance. Researchers typically
employ process-supervised reward models (PRMs) to guide the reasoning process,
effectively improving the models' reasoning abilities. However, existing
methods for constructing process supervision training data, such as manual
annotation and per-step Monte Carlo estimation, are often costly or suffer from
poor quality. To address these challenges, this paper introduces a framework
called EpicPRM, which annotates each intermediate reasoning step based on its
quantified contribution and uses an adaptive binary search algorithm to enhance
both annotation precision and efficiency. Using this approach, we efficiently
construct a high-quality process supervision training dataset named Epic50k,
consisting of 50k annotated intermediate steps. Compared to other publicly
available datasets, the PRM trained on Epic50k demonstrates significantly
superior performance. Epic50k is available at https://github.com/xiaolizh1/EpicPRM.
| new_dataset | 0.953275 |
2503.02387 | Yifeng Xu | Yifeng Xu, Fan Zhu, Ye Li, Sebastian Ren, Xiaonan Huang, Yuhao Chen | RGBSQGrasp: Inferring Local Superquadric Primitives from Single RGB
Image for Graspability-Aware Bin Picking | 8 pages, 7 figures, In submission to IROS2025 | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bin picking is a challenging robotic task due to occlusions and physical
constraints that limit visual information for object recognition and grasping.
Existing approaches often rely on known CAD models or prior object geometries,
restricting generalization to novel or unknown objects. Other methods directly
regress grasp poses from RGB-D data without object priors, but the inherent
noise in depth sensing and the lack of object understanding make grasp
synthesis and evaluation more difficult. Superquadrics (SQ) offer a compact,
interpretable shape representation that captures the physical and graspability
understanding of objects. However, recovering them from limited viewpoints is
challenging, as existing methods rely on multiple perspectives for
near-complete point cloud reconstruction, limiting their effectiveness in
bin-picking. To address these challenges, we propose \textbf{RGBSQGrasp}, a
grasping framework that leverages superquadric shape primitives and foundation
metric depth estimation models to infer grasp poses from a monocular RGB camera
-- eliminating the need for depth sensors. Our framework integrates a
universal, cross-platform dataset generation pipeline, a foundation model-based
object point cloud estimation module, a global-local superquadric fitting
network, and an SQ-guided grasp pose sampling module. By integrating these
components, RGBSQGrasp reliably infers grasp poses through geometric reasoning,
enhancing grasp stability and adaptability to unseen objects. Real-world
robotic experiments demonstrate a 92\% grasp success rate, highlighting the
effectiveness of RGBSQGrasp in packed bin-picking environments.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 08:23:01 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Xu",
"Yifeng",
""
],
[
"Zhu",
"Fan",
""
],
[
"Li",
"Ye",
""
],
[
"Ren",
"Sebastian",
""
],
[
"Huang",
"Xiaonan",
""
],
[
"Chen",
"Yuhao",
""
]
]
| TITLE: RGBSQGrasp: Inferring Local Superquadric Primitives from Single RGB
Image for Graspability-Aware Bin Picking
ABSTRACT: Bin picking is a challenging robotic task due to occlusions and physical
constraints that limit visual information for object recognition and grasping.
Existing approaches often rely on known CAD models or prior object geometries,
restricting generalization to novel or unknown objects. Other methods directly
regress grasp poses from RGB-D data without object priors, but the inherent
noise in depth sensing and the lack of object understanding make grasp
synthesis and evaluation more difficult. Superquadrics (SQ) offer a compact,
interpretable shape representation that captures the physical and graspability
understanding of objects. However, recovering them from limited viewpoints is
challenging, as existing methods rely on multiple perspectives for
near-complete point cloud reconstruction, limiting their effectiveness in
bin-picking. To address these challenges, we propose \textbf{RGBSQGrasp}, a
grasping framework that leverages superquadric shape primitives and foundation
metric depth estimation models to infer grasp poses from a monocular RGB camera
-- eliminating the need for depth sensors. Our framework integrates a
universal, cross-platform dataset generation pipeline, a foundation model-based
object point cloud estimation module, a global-local superquadric fitting
network, and an SQ-guided grasp pose sampling module. By integrating these
components, RGBSQGrasp reliably infers grasp poses through geometric reasoning,
enhancing grasp stability and adaptability to unseen objects. Real-world
robotic experiments demonstrate a 92\% grasp success rate, highlighting the
effectiveness of RGBSQGrasp in packed bin-picking environments.
| no_new_dataset | 0.944638 |
2503.02388 | Wooju Lee | Wooju Lee, Juhye Park, Dasol Hong, Changki Sung, Youngwoo Seo, Dongwan
Kang, and Hyun Myung | PIDLoc: Cross-View Pose Optimization Network Inspired by PID Controllers | Accepted by CVPR-25 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate localization is essential for autonomous driving, but GNSS-based
methods struggle in challenging environments such as urban canyons. Cross-view
pose optimization offers an effective solution by directly estimating vehicle
pose using satellite-view images. However, existing methods primarily rely on
cross-view features at a given pose, neglecting fine-grained contexts for
precision and global contexts for robustness against large initial pose errors.
To overcome these limitations, we propose PIDLoc, a novel cross-view pose
optimization approach inspired by the proportional-integral-derivative (PID)
controller. Using RGB images and LiDAR, the PIDLoc comprises the PID branches
to model cross-view feature relationships and the spatially aware pose
estimator (SPE) to estimate the pose from these relationships. The PID branches
leverage feature differences for local context (P), aggregated feature
differences for global context (I), and gradients of feature differences for
precise pose adjustment (D) to enhance localization accuracy under large
initial pose errors. Integrated with the PID branches, the SPE captures spatial
relationships within the PID-branch features for consistent localization.
Experimental results demonstrate that the PIDLoc achieves state-of-the-art
performance in cross-view pose estimation for the KITTI dataset, reducing
position error by $37.8\%$ compared with the previous state-of-the-art.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 08:24:08 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lee",
"Wooju",
""
],
[
"Park",
"Juhye",
""
],
[
"Hong",
"Dasol",
""
],
[
"Sung",
"Changki",
""
],
[
"Seo",
"Youngwoo",
""
],
[
"Kang",
"Dongwan",
""
],
[
"Myung",
"Hyun",
""
]
]
| TITLE: PIDLoc: Cross-View Pose Optimization Network Inspired by PID Controllers
ABSTRACT: Accurate localization is essential for autonomous driving, but GNSS-based
methods struggle in challenging environments such as urban canyons. Cross-view
pose optimization offers an effective solution by directly estimating vehicle
pose using satellite-view images. However, existing methods primarily rely on
cross-view features at a given pose, neglecting fine-grained contexts for
precision and global contexts for robustness against large initial pose errors.
To overcome these limitations, we propose PIDLoc, a novel cross-view pose
optimization approach inspired by the proportional-integral-derivative (PID)
controller. Using RGB images and LiDAR, the PIDLoc comprises the PID branches
to model cross-view feature relationships and the spatially aware pose
estimator (SPE) to estimate the pose from these relationships. The PID branches
leverage feature differences for local context (P), aggregated feature
differences for global context (I), and gradients of feature differences for
precise pose adjustment (D) to enhance localization accuracy under large
initial pose errors. Integrated with the PID branches, the SPE captures spatial
relationships within the PID-branch features for consistent localization.
Experimental results demonstrate that the PIDLoc achieves state-of-the-art
performance in cross-view pose estimation for the KITTI dataset, reducing
position error by $37.8\%$ compared with the previous state-of-the-art.
| no_new_dataset | 0.945349 |
2503.02389 | Louis Mahon | Louis Mahon, Benjamin Hoffman, Logan S James, Maddie Cusimano, Masato
Hagiwara, Sarah C Woolley, Olivier Pietquin | Robust detection of overlapping bioacoustic sound events | null | null | null | null | cs.SD cs.LG eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We propose a method for accurately detecting bioacoustic sound events that is
robust to overlapping events, a common issue in domains such as ethology,
ecology and conservation. While standard methods employ a frame-based,
multi-label approach, we introduce an onset-based detection method which we
name Voxaboxen. It takes inspiration from object detection methods in computer
vision, but simultaneously takes advantage of recent advances in
self-supervised audio encoders. For each time window, Voxaboxen predicts
whether it contains the start of a vocalization and how long the vocalization
is. It also does the same in reverse, predicting whether each window contains
the end of a vocalization, and how long ago it started. The two resulting sets
of bounding boxes are then fused using a graph-matching algorithm. We also
release a new dataset designed to measure performance on detecting overlapping
vocalizations. This consists of recordings of zebra finches annotated with
temporally-strong labels and showing frequent overlaps. We test Voxaboxen on
seven existing data sets and on our new data set. We compare Voxaboxen to
natural baselines and existing sound event detection methods and demonstrate
SotA results. Further experiments show that improvements are robust to frequent
vocalization overlap.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 08:26:03 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Mahon",
"Louis",
""
],
[
"Hoffman",
"Benjamin",
""
],
[
"James",
"Logan S",
""
],
[
"Cusimano",
"Maddie",
""
],
[
"Hagiwara",
"Masato",
""
],
[
"Woolley",
"Sarah C",
""
],
[
"Pietquin",
"Olivier",
""
]
]
| TITLE: Robust detection of overlapping bioacoustic sound events
ABSTRACT: We propose a method for accurately detecting bioacoustic sound events that is
robust to overlapping events, a common issue in domains such as ethology,
ecology and conservation. While standard methods employ a frame-based,
multi-label approach, we introduce an onset-based detection method which we
name Voxaboxen. It takes inspiration from object detection methods in computer
vision, but simultaneously takes advantage of recent advances in
self-supervised audio encoders. For each time window, Voxaboxen predicts
whether it contains the start of a vocalization and how long the vocalization
is. It also does the same in reverse, predicting whether each window contains
the end of a vocalization, and how long ago it started. The two resulting sets
of bounding boxes are then fused using a graph-matching algorithm. We also
release a new dataset designed to measure performance on detecting overlapping
vocalizations. This consists of recordings of zebra finches annotated with
temporally-strong labels and showing frequent overlaps. We test Voxaboxen on
seven existing data sets and on our new data set. We compare Voxaboxen to
natural baselines and existing sound event detection methods and demonstrate
SotA results. Further experiments show that improvements are robust to frequent
vocalization overlap.
| new_dataset | 0.959307 |
2503.02397 | Adnan Ali | Adnan Ali, Jinglong Li, Huanhuan Chen, AlMotasem Bellah Al Ajlouni | A Binary Classification Social Network Dataset for Graph Machine
Learning | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Social networks have a vast range of applications with graphs. The available
benchmark datasets are citation, co-occurrence, e-commerce networks, etc., with
classes ranging from 3 to 15. However, there is no benchmark classification
social network dataset for graph machine learning. This paper fills the gap and
presents the Binary Classification Social Network Dataset (\textit{BiSND}),
designed for graph machine learning applications to predict binary classes. We
present the BiSND in \textit{tabular and graph} formats to verify its
robustness across classical and advanced machine learning. We employ a diverse
set of classifiers, including four traditional machine learning algorithms
(Decision Trees, K-Nearest Neighbour, Random Forest, XGBoost), one Deep Neural
Network (multi-layer perceptrons), one Graph Neural Network (Graph
Convolutional Network), and three state-of-the-art Graph Contrastive Learning
methods (BGRL, GRACE, DAENS). Our findings reveal that BiSND is suitable for
classification tasks, with F1-scores ranging from 67.66 to 70.15, indicating
promising avenues for future enhancements.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 08:40:42 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ali",
"Adnan",
""
],
[
"Li",
"Jinglong",
""
],
[
"Chen",
"Huanhuan",
""
],
[
"Ajlouni",
"AlMotasem Bellah Al",
""
]
]
| TITLE: A Binary Classification Social Network Dataset for Graph Machine
Learning
ABSTRACT: Social networks have a vast range of applications with graphs. The available
benchmark datasets are citation, co-occurrence, e-commerce networks, etc., with
classes ranging from 3 to 15. However, there is no benchmark classification
social network dataset for graph machine learning. This paper fills the gap and
presents the Binary Classification Social Network Dataset (\textit{BiSND}),
designed for graph machine learning applications to predict binary classes. We
present the BiSND in \textit{tabular and graph} formats to verify its
robustness across classical and advanced machine learning. We employ a diverse
set of classifiers, including four traditional machine learning algorithms
(Decision Trees, K-Nearest Neighbour, Random Forest, XGBoost), one Deep Neural
Network (multi-layer perceptrons), one Graph Neural Network (Graph
Convolutional Network), and three state-of-the-art Graph Contrastive Learning
methods (BGRL, GRACE, DAENS). Our findings reveal that BiSND is suitable for
classification tasks, with F1-scores ranging from 67.66 to 70.15, indicating
promising avenues for future enhancements.
| new_dataset | 0.962708 |
2503.02410 | Jiesi Hu | Jiesi Hu, Hanyang Peng, Yanwu Yang, Xutao Guo, Yang Shang, Pengcheng
Shi, Chenfei Ye, Ting Ma | Building 3D In-Context Learning Universal Model in Neuroimaging | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In-context learning (ICL), a type of universal model, demonstrates
exceptional generalization across a wide range of tasks without retraining by
leveraging task-specific guidance from context, making it particularly
effective for the complex demands of neuroimaging. However, existing ICL
models, which take 2D images as input, struggle to fully leverage the 3D
anatomical structures in neuroimages, leading to a lack of global awareness and
suboptimal performance. In this regard, we introduce Neuroverse3D, an ICL model
capable of performing multiple neuroimaging tasks (e.g., segmentation,
denoising, inpainting) in 3D. Neuroverse3D overcomes the large memory
consumption due to 3D inputs through adaptive parallel-sequential context
processing and a U-shape fusion strategy, allowing it to handle an unlimited
number of context images. Additionally, we propose an optimized loss to balance
multi-task training and enhance the focus on anatomical structures. Our study
incorporates 43,674 3D scans from 19 neuroimaging datasets and evaluates
Neuroverse3D on 14 diverse tasks using held-out test sets. The results
demonstrate that Neuroverse3D significantly outperforms existing ICL models and
closely matches the performance of task-specific models. The code and model
weights are publicly released at: https://github.com/jiesihu/Neu3D.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 08:51:44 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Hu",
"Jiesi",
""
],
[
"Peng",
"Hanyang",
""
],
[
"Yang",
"Yanwu",
""
],
[
"Guo",
"Xutao",
""
],
[
"Shang",
"Yang",
""
],
[
"Shi",
"Pengcheng",
""
],
[
"Ye",
"Chenfei",
""
],
[
"Ma",
"Ting",
""
]
]
| TITLE: Building 3D In-Context Learning Universal Model in Neuroimaging
ABSTRACT: In-context learning (ICL), a type of universal model, demonstrates
exceptional generalization across a wide range of tasks without retraining by
leveraging task-specific guidance from context, making it particularly
effective for the complex demands of neuroimaging. However, existing ICL
models, which take 2D images as input, struggle to fully leverage the 3D
anatomical structures in neuroimages, leading to a lack of global awareness and
suboptimal performance. In this regard, we introduce Neuroverse3D, an ICL model
capable of performing multiple neuroimaging tasks (e.g., segmentation,
denoising, inpainting) in 3D. Neuroverse3D overcomes the large memory
consumption due to 3D inputs through adaptive parallel-sequential context
processing and a U-shape fusion strategy, allowing it to handle an unlimited
number of context images. Additionally, we propose an optimized loss to balance
multi-task training and enhance the focus on anatomical structures. Our study
incorporates 43,674 3D scans from 19 neuroimaging datasets and evaluates
Neuroverse3D on 14 diverse tasks using held-out test sets. The results
demonstrate that Neuroverse3D significantly outperforms existing ICL models and
closely matches the performance of task-specific models. The code and model
weights are publicly released at: https://github.com/jiesihu/Neu3D.
| no_new_dataset | 0.945197 |
2503.02414 | Ling Gao | Ling Gao, Zhenyu Shu, Shiqing Xin | InfoGNN: End-to-end deep learning on mesh via graph neural networks | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D models are widely used in various industries, and mesh data has become an
indispensable part of 3D modeling because of its unique advantages. Mesh data
can provide an intuitive and practical expression of rich 3D information.
However, its disordered, irregular data structure and complex surface
information make it challenging to apply deep learning models directly.
Traditional mesh data processing methods often rely on mesh models with many
limitations, such as manifold requirements, which restrict their application scopes in
reality and do not fully utilize the advantages of mesh models. This paper
proposes a novel end-to-end framework for addressing the challenges associated
with deep learning in mesh models centered around graph neural networks (GNN)
and is titled InfoGNN. InfoGNN treats the mesh model as a graph, which enables
it to handle irregular mesh data efficiently. Moreover, we propose InfoConv and
InfoMP modules, which utilize the position information of the points and fully
use the static information such as face normals, dihedral angles, and dynamic
global feature information to fully use all kinds of data. In addition, InfoGNN
is an end-to-end framework, and we simplify the network design to make it more
efficient, paving the way for efficient deep learning of complex 3D models. We
conducted experiments on several publicly available datasets, and the results
show that InfoGNN achieves excellent performance in mesh classification and
segmentation tasks.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 08:58:30 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Gao",
"Ling",
""
],
[
"Shu",
"Zhenyu",
""
],
[
"Xin",
"Shiqing",
""
]
]
| TITLE: InfoGNN: End-to-end deep learning on mesh via graph neural networks
ABSTRACT: 3D models are widely used in various industries, and mesh data has become an
indispensable part of 3D modeling because of its unique advantages. Mesh data
can provide an intuitive and practical expression of rich 3D information.
However, its disordered, irregular data structure and complex surface
information make it challenging to apply deep learning models directly.
Traditional mesh data processing methods often rely on mesh models with many
limitations, such as manifold requirements, which restrict their application scopes in
reality and do not fully utilize the advantages of mesh models. This paper
proposes a novel end-to-end framework for addressing the challenges associated
with deep learning in mesh models centered around graph neural networks (GNN)
and is titled InfoGNN. InfoGNN treats the mesh model as a graph, which enables
it to handle irregular mesh data efficiently. Moreover, we propose InfoConv and
InfoMP modules, which utilize the position information of the points and fully
use the static information such as face normals, dihedral angles, and dynamic
global feature information to fully use all kinds of data. In addition, InfoGNN
is an end-to-end framework, and we simplify the network design to make it more
efficient, paving the way for efficient deep learning of complex 3D models. We
conducted experiments on several publicly available datasets, and the results
show that InfoGNN achieves excellent performance in mesh classification and
segmentation tasks.
| no_new_dataset | 0.950778 |
2503.02421 | Chrysa Pratikaki | Chrysa Pratikaki, Panagiotis Filntisis, Athanasios Katsamanis,
Anastasios Roussos and Petros Maragos | A Transformer-Based Framework for Greek Sign Language Production using
Extended Skeletal Motion Representations | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Sign Languages are the primary form of communication for Deaf communities
across the world. To break the communication barriers between the Deaf and
Hard-of-Hearing and the hearing communities, it is imperative to build systems
capable of translating the spoken language into sign language and vice versa.
Building on insights from previous research, we propose a deep learning model
for Sign Language Production (SLP), which to our knowledge is the first attempt
at Greek SLP. We tackle this task by utilizing a transformer-based architecture
that enables the translation from text input to human pose keypoints, and the
opposite. We evaluate the effectiveness of the proposed pipeline on the Greek
SL dataset Elementary23, through a series of comparative analyses and ablation
studies. Our pipeline's components, which include data-driven gloss generation,
training through video to text translation and a scheduling algorithm for
teacher forcing - auto-regressive decoding seem to actively enhance the quality
of produced SL videos.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 09:05:42 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Pratikaki",
"Chrysa",
""
],
[
"Filntisis",
"Panagiotis",
""
],
[
"Katsamanis",
"Athanasios",
""
],
[
"Roussos",
"Anastasios",
""
],
[
"Maragos",
"Petros",
""
]
]
| TITLE: A Transformer-Based Framework for Greek Sign Language Production using
Extended Skeletal Motion Representations
ABSTRACT: Sign Languages are the primary form of communication for Deaf communities
across the world. To break the communication barriers between the Deaf and
Hard-of-Hearing and the hearing communities, it is imperative to build systems
capable of translating the spoken language into sign language and vice versa.
Building on insights from previous research, we propose a deep learning model
for Sign Language Production (SLP), which to our knowledge is the first attempt
at Greek SLP. We tackle this task by utilizing a transformer-based architecture
that enables the translation from text input to human pose keypoints, and the
opposite. We evaluate the effectiveness of the proposed pipeline on the Greek
SL dataset Elementary23, through a series of comparative analyses and ablation
studies. Our pipeline's components, which include data-driven gloss generation,
training through video to text translation and a scheduling algorithm for
teacher forcing - auto-regressive decoding seem to actively enhance the quality
of produced SL videos.
| no_new_dataset | 0.941708 |
2503.02422 | Olof Mogren | Richard Lindholm, Oscar Marklund, Olof Mogren, John Martinsson | Aggregation Strategies for Efficient Annotation of Bioacoustic Sound
Events Using Active Learning | null | null | null | null | cs.SD cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The vast amounts of audio data collected in Sound Event Detection (SED)
applications require efficient annotation strategies to enable supervised
learning. Manual labeling is expensive and time-consuming, making Active
Learning (AL) a promising approach for reducing annotation effort. We introduce
Top K Entropy, a novel uncertainty aggregation strategy for AL that prioritizes
the most uncertain segments within an audio recording, instead of averaging
uncertainty across all segments. This approach enables the selection of entire
recordings for annotation, improving efficiency in sparse data scenarios. We
compare Top K Entropy to random sampling and Mean Entropy, and show that fewer
labels can lead to the same model performance, particularly in datasets with
sparse sound events. Evaluations are conducted on audio mixtures of sound
recordings from parks with meerkat, dog, and baby crying sound events,
representing real-world bioacoustic monitoring scenarios. Using Top K Entropy
for active learning, we can achieve comparable performance to training on the
fully labeled dataset with only 8% of the labels. Top K Entropy outperforms
Mean Entropy, suggesting that it is best to let the most uncertain segments
represent the uncertainty of an audio file. The findings highlight the
potential of AL for scalable annotation in audio and time-series applications,
including bioacoustics.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 09:08:33 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lindholm",
"Richard",
""
],
[
"Marklund",
"Oscar",
""
],
[
"Mogren",
"Olof",
""
],
[
"Martinsson",
"John",
""
]
]
| TITLE: Aggregation Strategies for Efficient Annotation of Bioacoustic Sound
Events Using Active Learning
ABSTRACT: The vast amounts of audio data collected in Sound Event Detection (SED)
applications require efficient annotation strategies to enable supervised
learning. Manual labeling is expensive and time-consuming, making Active
Learning (AL) a promising approach for reducing annotation effort. We introduce
Top K Entropy, a novel uncertainty aggregation strategy for AL that prioritizes
the most uncertain segments within an audio recording, instead of averaging
uncertainty across all segments. This approach enables the selection of entire
recordings for annotation, improving efficiency in sparse data scenarios. We
compare Top K Entropy to random sampling and Mean Entropy, and show that fewer
labels can lead to the same model performance, particularly in datasets with
sparse sound events. Evaluations are conducted on audio mixtures of sound
recordings from parks with meerkat, dog, and baby crying sound events,
representing real-world bioacoustic monitoring scenarios. Using Top K Entropy
for active learning, we can achieve comparable performance to training on the
fully labeled dataset with only 8% of the labels. Top K Entropy outperforms
Mean Entropy, suggesting that it is best to let the most uncertain segments
represent the uncertainty of an audio file. The findings highlight the
potential of AL for scalable annotation in audio and time-series applications,
including bioacoustics.
| no_new_dataset | 0.953837 |
2503.02441 | Matteo Brosolo | Matteo Brosolo, Vinod Puthuvath, Mauro Conti | Through the Static: Demystifying Malware Visualization via
Explainability | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by-sa/4.0/ | Security researchers grapple with the surge of malicious files, necessitating
swift identification and classification of malware strains for effective
protection. Visual classifiers and in particular Convolutional Neural Networks
(CNNs) have emerged as vital tools for this task. However, issues of robustness
and explainability, common in other high-risk domains like medicine and
autonomous vehicles, remain understudied in current literature. Although deep
learning visualization classifiers presented in research obtain great results
without the need for expert feature extraction, they have not been properly
studied in terms of their replicability. Additionally, the literature is not
clear on how these types of classifiers arrive at their answers. Our study
addresses these gaps by replicating six CNN models and exploring their
pitfalls. We employ Class Activation Maps (CAMs), like GradCAM and HiResCAM, to
assess model explainability. We evaluate the CNNs' performance and
interpretability on two standard datasets, MalImg and Big2015, and a newly
created dataset called VX-Zoo. We employ these different CAM techniques to gauge the
explainability of each of the models. With these tools, we investigate the
underlying factors contributing to different interpretations of inputs across
the different models, empowering human researchers to discern patterns crucial
for identifying distinct malware families and explain why CNN models arrive at
their conclusions. Other than highlighting the patterns found in the
interpretability study, we employ the extracted heatmaps to enhance Visual
Transformers classifiers' performance and explanation quality. This approach
yields substantial improvements in F1 score, ranging from 2% to 8%, across the
datasets compared to benchmark values.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 09:38:50 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Brosolo",
"Matteo",
""
],
[
"Puthuvath",
"Vinod",
""
],
[
"Conti",
"Mauro",
""
]
]
| TITLE: Through the Static: Demystifying Malware Visualization via
Explainability
ABSTRACT: Security researchers grapple with the surge of malicious files, necessitating
swift identification and classification of malware strains for effective
protection. Visual classifiers and in particular Convolutional Neural Networks
(CNNs) have emerged as vital tools for this task. However, issues of robustness
and explainability, common in other high-risk domains like medicine and
autonomous vehicles, remain understudied in current literature. Although deep
learning visualization classifiers presented in research obtain great results
without the need for expert feature extraction, they have not been properly
studied in terms of their replicability. Additionally, the literature is not
clear on how these types of classifiers arrive at their answers. Our study
addresses these gaps by replicating six CNN models and exploring their
pitfalls. We employ Class Activation Maps (CAMs), like GradCAM and HiResCAM, to
assess model explainability. We evaluate the CNNs' performance and
interpretability on two standard datasets, MalImg and Big2015, and a newly
created dataset called VX-Zoo. We employ these different CAM techniques to gauge the
explainability of each of the models. With these tools, we investigate the
underlying factors contributing to different interpretations of inputs across
the different models, empowering human researchers to discern patterns crucial
for identifying distinct malware families and explain why CNN models arrive at
their conclusions. Other than highlighting the patterns found in the
interpretability study, we employ the extracted heatmaps to enhance Visual
Transformers classifiers' performance and explanation quality. This approach
yields substantial improvements in F1 score, ranging from 2% to 8%, across the
datasets compared to benchmark values.
| no_new_dataset | 0.944689 |
2503.02449 | JianYu Wang | Jianyu Wang, Zhengqiao Zhao, Nicolas Dobigeon, and Jingdong Chen | Joint Tensor and Inter-View Low-Rank Recovery for Incomplete Multiview
Clustering | The paper is under review at IEEE Transactions on Knowledge and Data
Engineering | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Incomplete multiview clustering (IMVC) has gained significant attention for
its effectiveness in handling missing sample challenges across various views in
real-world multiview clustering applications. Most IMVC approaches tackle this
problem by either learning consensus representations from available views or
reconstructing missing samples using the underlying manifold structure.
However, the reconstruction of learned similarity graph tensor in prior studies
only exploits the low-tubal-rank information, neglecting the exploration of
inter-view correlations. This paper proposes a novel joint tensor and inter-view
low-rank Recovery (JTIV-LRR), framing IMVC as a joint optimization problem that
integrates incomplete similarity graph learning and tensor representation
recovery. By leveraging both intra-view and inter-view low rank information,
the method achieves robust estimation of the complete similarity graph tensor
through sparse noise removal and low-tubal-rank constraints along different
modes. Extensive experiments on both synthetic and real-world datasets
demonstrate the superiority of the proposed approach, achieving significant
improvements in clustering accuracy and robustness compared to state-of-the-art
methods.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 09:50:59 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wang",
"Jianyu",
""
],
[
"Zhao",
"Zhengqiao",
""
],
[
"Dobigeon",
"Nicolas",
""
],
[
"Chen",
"Jingdong",
""
]
]
| TITLE: Joint Tensor and Inter-View Low-Rank Recovery for Incomplete Multiview
Clustering
ABSTRACT: Incomplete multiview clustering (IMVC) has gained significant attention for
its effectiveness in handling missing sample challenges across various views in
real-world multiview clustering applications. Most IMVC approaches tackle this
problem by either learning consensus representations from available views or
reconstructing missing samples using the underlying manifold structure.
However, the reconstruction of learned similarity graph tensor in prior studies
only exploits the low-tubal-rank information, neglecting the exploration of
inter-view correlations. This paper proposes a novel joint tensor and inter-view
low-rank Recovery (JTIV-LRR), framing IMVC as a joint optimization problem that
integrates incomplete similarity graph learning and tensor representation
recovery. By leveraging both intra-view and inter-view low rank information,
the method achieves robust estimation of the complete similarity graph tensor
through sparse noise removal and low-tubal-rank constraints along different
modes. Extensive experiments on both synthetic and real-world datasets
demonstrate the superiority of the proposed approach, achieving significant
improvements in clustering accuracy and robustness compared to state-of-the-art
methods.
| no_new_dataset | 0.947817 |
2503.02450 | Yilun Qiu | Yilun Qiu, Xiaoyan Zhao, Yang Zhang, Yimeng Bai, Wenjie Wang, Hong
Cheng, Fuli Feng, Tat-Seng Chua | Measuring What Makes You Unique: Difference-Aware User Modeling for
Enhancing LLM Personalization | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personalizing Large Language Models (LLMs) has become a critical step in
facilitating their widespread application to enhance individual life
experiences. In pursuit of personalization, distilling key preference
information from an individual's historical data as instructional preference
context to customize LLM generation has emerged as a promising direction.
However, these methods face a fundamental limitation by overlooking the
inter-user comparative analysis, which is essential for identifying the
inter-user differences that truly shape preferences. To address this
limitation, we propose Difference-aware Personalization Learning (DPL), a novel
approach that emphasizes extracting inter-user differences to enhance LLM
personalization. DPL strategically selects representative users for comparison
and establishes a structured standard to extract meaningful, task-relevant
differences for customizing LLM generation. Extensive experiments on real-world
datasets demonstrate that DPL significantly enhances LLM personalization. We
release our code at https://github.com/SnowCharmQ/DPL.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 09:53:26 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Qiu",
"Yilun",
""
],
[
"Zhao",
"Xiaoyan",
""
],
[
"Zhang",
"Yang",
""
],
[
"Bai",
"Yimeng",
""
],
[
"Wang",
"Wenjie",
""
],
[
"Cheng",
"Hong",
""
],
[
"Feng",
"Fuli",
""
],
[
"Chua",
"Tat-Seng",
""
]
]
| TITLE: Measuring What Makes You Unique: Difference-Aware User Modeling for
Enhancing LLM Personalization
ABSTRACT: Personalizing Large Language Models (LLMs) has become a critical step in
facilitating their widespread application to enhance individual life
experiences. In pursuit of personalization, distilling key preference
information from an individual's historical data as instructional preference
context to customize LLM generation has emerged as a promising direction.
However, these methods face a fundamental limitation by overlooking the
inter-user comparative analysis, which is essential for identifying the
inter-user differences that truly shape preferences. To address this
limitation, we propose Difference-aware Personalization Learning (DPL), a novel
approach that emphasizes extracting inter-user differences to enhance LLM
personalization. DPL strategically selects representative users for comparison
and establishes a structured standard to extract meaningful, task-relevant
differences for customizing LLM generation. Extensive experiments on real-world
datasets demonstrate that DPL significantly enhances LLM personalization. We
release our code at https://github.com/SnowCharmQ/DPL.
| no_new_dataset | 0.946051 |
2503.02452 | Qipeng Yan | Qipeng Yan, Mingyang Sun, Lihua Zhang | 2DGS-Avatar: Animatable High-fidelity Clothed Avatar via 2D Gaussian
Splatting | ICVRV 2024 | null | null | null | cs.CV cs.MM | http://creativecommons.org/licenses/by/4.0/ | Real-time rendering of high-fidelity and animatable avatars from monocular
videos remains a challenging problem in computer vision and graphics. Over the
past few years, the Neural Radiance Field (NeRF) has made significant progress
in rendering quality but behaves poorly in run-time performance due to the low
efficiency of volumetric rendering. Recently, methods based on 3D Gaussian
Splatting (3DGS) have shown great potential in fast training and real-time
rendering. However, they still suffer from artifacts caused by inaccurate
geometry. To address these problems, we propose 2DGS-Avatar, a novel approach
based on 2D Gaussian Splatting (2DGS) for modeling animatable clothed avatars
with high-fidelity and fast training performance. Given monocular RGB videos as
input, our method generates an avatar that can be driven by poses and rendered
in real-time. Compared to 3DGS-based methods, our 2DGS-Avatar retains the
advantages of fast training and rendering while also capturing detailed,
dynamic, and photo-realistic appearances. We conduct abundant experiments on
popular datasets such as AvatarRex and THuman4.0, demonstrating impressive
performance in both qualitative and quantitative metrics.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 09:57:24 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Yan",
"Qipeng",
""
],
[
"Sun",
"Mingyang",
""
],
[
"Zhang",
"Lihua",
""
]
]
| TITLE: 2DGS-Avatar: Animatable High-fidelity Clothed Avatar via 2D Gaussian
Splatting
ABSTRACT: Real-time rendering of high-fidelity and animatable avatars from monocular
videos remains a challenging problem in computer vision and graphics. Over the
past few years, the Neural Radiance Field (NeRF) has made significant progress
in rendering quality but behaves poorly in run-time performance due to the low
efficiency of volumetric rendering. Recently, methods based on 3D Gaussian
Splatting (3DGS) have shown great potential in fast training and real-time
rendering. However, they still suffer from artifacts caused by inaccurate
geometry. To address these problems, we propose 2DGS-Avatar, a novel approach
based on 2D Gaussian Splatting (2DGS) for modeling animatable clothed avatars
with high-fidelity and fast training performance. Given monocular RGB videos as
input, our method generates an avatar that can be driven by poses and rendered
in real-time. Compared to 3DGS-based methods, our 2DGS-Avatar retains the
advantages of fast training and rendering while also capturing detailed,
dynamic, and photo-realistic appearances. We conduct abundant experiments on
popular datasets such as AvatarRex and THuman4.0, demonstrating impressive
performance in both qualitative and quantitative metrics.
| no_new_dataset | 0.951278 |
2503.02453 | Yuhao Yang | Yuhao Yang, Zhi Ji, Zhaopeng Li, Yi Li, Zhonglin Mo, Yue Ding, Kai
Chen, Zijian Zhang, Jie Li, Shuanglong Li, Lin Liu | Sparse Meets Dense: Unified Generative Recommendations with Cascaded
Sparse-Dense Representations | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative models have recently gained attention in recommendation systems by
directly predicting item identifiers from user interaction sequences. However,
existing methods suffer from significant information loss due to the separation
of stages such as quantization and sequence modeling, hindering their ability
to achieve the modeling precision and accuracy of sequential dense retrieval
techniques. Integrating generative and dense retrieval methods remains a
critical challenge. To address this, we introduce the Cascaded Organized
Bi-Represented generAtive retrieval (COBRA) framework, which innovatively
integrates sparse semantic IDs and dense vectors through a cascading process.
Our method alternates between generating these representations by first
generating sparse IDs, which serve as conditions to aid in the generation of
dense vectors. End-to-end training enables dynamic refinement of dense
representations, capturing both semantic insights and collaborative signals
from user-item interactions. During inference, COBRA employs a coarse-to-fine
strategy, starting with sparse ID generation and refining them into dense
vectors via the generative model. We further propose BeamFusion, an innovative
approach combining beam search with nearest neighbor scores to enhance
inference flexibility and recommendation diversity. Extensive experiments on
public datasets and offline tests validate our method's robustness. Online A/B
tests on a real-world advertising platform with over 200 million daily users
demonstrate substantial improvements in key metrics, highlighting COBRA's
practical advantages.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 10:00:05 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Yang",
"Yuhao",
""
],
[
"Ji",
"Zhi",
""
],
[
"Li",
"Zhaopeng",
""
],
[
"Li",
"Yi",
""
],
[
"Mo",
"Zhonglin",
""
],
[
"Ding",
"Yue",
""
],
[
"Chen",
"Kai",
""
],
[
"Zhang",
"Zijian",
""
],
[
"Li",
"Jie",
""
],
[
"Li",
"Shuanglong",
""
],
[
"Liu",
"Lin",
""
]
]
| TITLE: Sparse Meets Dense: Unified Generative Recommendations with Cascaded
Sparse-Dense Representations
ABSTRACT: Generative models have recently gained attention in recommendation systems by
directly predicting item identifiers from user interaction sequences. However,
existing methods suffer from significant information loss due to the separation
of stages such as quantization and sequence modeling, hindering their ability
to achieve the modeling precision and accuracy of sequential dense retrieval
techniques. Integrating generative and dense retrieval methods remains a
critical challenge. To address this, we introduce the Cascaded Organized
Bi-Represented generAtive retrieval (COBRA) framework, which innovatively
integrates sparse semantic IDs and dense vectors through a cascading process.
Our method alternates between generating these representations by first
generating sparse IDs, which serve as conditions to aid in the generation of
dense vectors. End-to-end training enables dynamic refinement of dense
representations, capturing both semantic insights and collaborative signals
from user-item interactions. During inference, COBRA employs a coarse-to-fine
strategy, starting with sparse ID generation and refining them into dense
vectors via the generative model. We further propose BeamFusion, an innovative
approach combining beam search with nearest neighbor scores to enhance
inference flexibility and recommendation diversity. Extensive experiments on
public datasets and offline tests validate our method's robustness. Online A/B
tests on a real-world advertising platform with over 200 million daily users
demonstrate substantial improvements in key metrics, highlighting COBRA's
practical advantages.
| no_new_dataset | 0.944074 |
2503.02456 | Tobias Buck | Tobias Buck, Berkay G\"unes, Giuseppe Viterbo, William H. Oliver, Sven
Buder | Inferring Galactic Parameters from Chemical Abundances with
Simulation-Based Inference | submitted to A&A, comments welcome, all source code to reproduce this
work can be found on GitHub under url: https://github.com/TobiBu/sbi-chempy | null | null | null | astro-ph.GA astro-ph.IM physics.comp-ph physics.data-an physics.space-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Galactic chemical abundances provide crucial insights into fundamental
galactic parameters, such as the high-mass slope of the initial mass function
(IMF) and the normalization of Type Ia supernova (SN Ia) rates. Constraining
these parameters is essential for advancing our understanding of stellar
feedback, metal enrichment, and galaxy formation processes. However,
traditional Bayesian inference techniques, such as Hamiltonian Monte Carlo
(HMC), are computationally prohibitive when applied to large datasets of modern
stellar surveys. We leverage simulation-based-inference (SBI) as a scalable,
robust, and efficient method for constraining galactic parameters from stellar
chemical abundances and demonstrate its advantages over HMC in terms of
speed, scalability, and robustness against model misspecifications. We combine
a Galactic Chemical Evolution (GCE) model, CHEMPY, with a neural network
emulator and a Neural Posterior Estimator (NPE) to train our SBI pipeline. Mock
datasets are generated using CHEMPY, including scenarios with mismatched
nucleosynthetic yields, with additional tests conducted on data from a
simulated Milky Way-like galaxy. SBI results are benchmarked against HMC-based
inference, focusing on computational performance, accuracy, and resilience to
systematic discrepancies. SBI achieves a $\sim75,600\times$ speed-up compared
to HMC, reducing inference runtime from $\gtrsim42$ hours to mere seconds for
thousands of stars. Inference on $1,000$ stars yields precise estimates for the
IMF slope ($\alpha_{\rm IMF} = -2.298 \pm 0.002$) and SN Ia normalization
($\log_{10}(N_{\rm Ia}) = -2.885 \pm 0.003$), deviating less than 0.05% from
the ground truth. SBI also demonstrates robustness to model misspecification
similar to that of HMC, recovering accurate parameters even with alternate
yield tables or data from a cosmological simulation. (shortened...)
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 10:05:58 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Buck",
"Tobias",
""
],
[
"Günes",
"Berkay",
""
],
[
"Viterbo",
"Giuseppe",
""
],
[
"Oliver",
"William H.",
""
],
[
"Buder",
"Sven",
""
]
]
| TITLE: Inferring Galactic Parameters from Chemical Abundances with
Simulation-Based Inference
ABSTRACT: Galactic chemical abundances provide crucial insights into fundamental
galactic parameters, such as the high-mass slope of the initial mass function
(IMF) and the normalization of Type Ia supernova (SN Ia) rates. Constraining
these parameters is essential for advancing our understanding of stellar
feedback, metal enrichment, and galaxy formation processes. However,
traditional Bayesian inference techniques, such as Hamiltonian Monte Carlo
(HMC), are computationally prohibitive when applied to large datasets of modern
stellar surveys. We leverage simulation-based-inference (SBI) as a scalable,
robust, and efficient method for constraining galactic parameters from stellar
chemical abundances and demonstrate its advantages over HMC in terms of
speed, scalability, and robustness against model misspecifications. We combine
a Galactic Chemical Evolution (GCE) model, CHEMPY, with a neural network
emulator and a Neural Posterior Estimator (NPE) to train our SBI pipeline. Mock
datasets are generated using CHEMPY, including scenarios with mismatched
nucleosynthetic yields, with additional tests conducted on data from a
simulated Milky Way-like galaxy. SBI results are benchmarked against HMC-based
inference, focusing on computational performance, accuracy, and resilience to
systematic discrepancies. SBI achieves a $\sim75,600\times$ speed-up compared
to HMC, reducing inference runtime from $\gtrsim42$ hours to mere seconds for
thousands of stars. Inference on $1,000$ stars yields precise estimates for the
IMF slope ($\alpha_{\rm IMF} = -2.298 \pm 0.002$) and SN Ia normalization
($\log_{10}(N_{\rm Ia}) = -2.885 \pm 0.003$), deviating less than 0.05% from
the ground truth. SBI also demonstrates robustness to model misspecification
similar to that of HMC, recovering accurate parameters even with alternate
yield tables or data from a cosmological simulation. (shortened...)
| no_new_dataset | 0.947527 |
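As background to the SBI workflow summarized in the record above, the following is a minimal illustrative sketch built on the open-source sbi toolkit rather than the authors' CHEMPY pipeline; the toy simulator, prior bounds, and observation are assumptions, and class names such as SNPE may differ across sbi versions.

import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Toy stand-in for a chemical-evolution simulator: parameters -> noisy "abundances".
def simulator(theta):
    noise = 0.05 * torch.randn(theta.shape[0], 2)
    return torch.stack([theta[:, 0] + theta[:, 1], theta[:, 0] - theta[:, 1]], dim=1) + noise

# Prior over two toy parameters (loosely standing in for an IMF slope and log N_Ia).
prior = BoxUniform(low=torch.tensor([-3.0, -4.0]), high=torch.tensor([-2.0, -2.0]))

# Simulate a training set, train a neural posterior estimator, build the amortized posterior.
theta = prior.sample((5000,))
x = simulator(theta)
inference = SNPE(prior=prior)
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()

# Amortized inference: drawing posterior samples for a new observation takes seconds.
x_obs = torch.tensor([[-5.0, 0.8]])
samples = posterior.sample((1000,), x=x_obs)
print(samples.mean(dim=0))

Once trained, the same posterior network can be reused for every star, which is what makes the reported speed-up over per-object MCMC possible.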
2503.02463 | Sohan Patnaik | Sohan Patnaik, Milan Aggarwal, Sumit Bhatia, Balaji Krishnamurthy | It Helps to Take a Second Opinion: Teaching Smaller LLMs to Deliberate
Mutually via Selective Rationale Optimisation | Accepted at ICLR 2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Very large language models (LLMs) such as GPT-4 have shown the ability to
handle complex tasks by generating and self-refining step-by-step rationales.
Smaller language models (SLMs), typically with < 13B parameters, have been
improved by using the data generated from very-large LMs through knowledge
distillation. However, various practical constraints such as API costs,
copyright, legal and ethical policies restrict using large (often opaque)
models to train smaller models for commercial use. Limited success has been
achieved at improving the ability of an SLM to explore the space of possible
rationales and evaluate them by itself through self-deliberation. To address
this, we propose COALITION, a trainable framework that facilitates interaction
between two variants of the same SLM and trains them to generate and refine
rationales optimized for the end-task. The variants exhibit different behaviors
to produce a set of diverse candidate rationales during the generation and
refinement steps. The model is then trained via Selective Rationale
Optimization (SRO) to prefer generating rationale candidates that maximize the
likelihood of producing the ground-truth answer. During inference, COALITION
employs a controller to select the suitable variant for generating and refining
the rationales. On five different datasets covering mathematical problems,
commonsense reasoning, and natural language inference, COALITION outperforms
several baselines by up to 5%. Our ablation studies reveal that
cross-communication between the two variants performs better than using the
single model to self-refine the rationales. We also demonstrate the
applicability of COALITION for LMs of varying scales (4B to 14B parameters) and
model families (Mistral, Llama, Qwen, Phi). We release the code for this work
at https://github.com/Sohanpatnaik106/coalition.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 10:17:29 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Patnaik",
"Sohan",
""
],
[
"Aggarwal",
"Milan",
""
],
[
"Bhatia",
"Sumit",
""
],
[
"Krishnamurthy",
"Balaji",
""
]
]
| TITLE: It Helps to Take a Second Opinion: Teaching Smaller LLMs to Deliberate
Mutually via Selective Rationale Optimisation
ABSTRACT: Very large language models (LLMs) such as GPT-4 have shown the ability to
handle complex tasks by generating and self-refining step-by-step rationales.
Smaller language models (SLMs), typically with < 13B parameters, have been
improved by using the data generated from very-large LMs through knowledge
distillation. However, various practical constraints such as API costs,
copyright, legal and ethical policies restrict using large (often opaque)
models to train smaller models for commercial use. Limited success has been
achieved at improving the ability of an SLM to explore the space of possible
rationales and evaluate them by itself through self-deliberation. To address
this, we propose COALITION, a trainable framework that facilitates interaction
between two variants of the same SLM and trains them to generate and refine
rationales optimized for the end-task. The variants exhibit different behaviors
to produce a set of diverse candidate rationales during the generation and
refinement steps. The model is then trained via Selective Rationale
Optimization (SRO) to prefer generating rationale candidates that maximize the
likelihood of producing the ground-truth answer. During inference, COALITION
employs a controller to select the suitable variant for generating and refining
the rationales. On five different datasets covering mathematical problems,
commonsense reasoning, and natural language inference, COALITION outperforms
several baselines by up to 5%. Our ablation studies reveal that
cross-communication between the two variants performs better than using the
single model to self-refine the rationales. We also demonstrate the
applicability of COALITION for LMs of varying scales (4B to 14B parameters) and
model families (Mistral, Llama, Qwen, Phi). We release the code for this work
at https://github.com/Sohanpatnaik106/coalition.
| no_new_dataset | 0.94887 |
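The selective rationale optimisation described in the record above, which prefers rationale candidates that make the ground-truth answer more likely, can be illustrated with a generic pairwise preference objective. This is a schematic PyTorch sketch, not the paper's exact SRO loss; the log-probability inputs and the beta temperature are assumptions.

import torch
import torch.nn.functional as F

def preference_loss(logp_preferred, logp_rejected, beta=0.1):
    # logp_preferred / logp_rejected: summed token log-probabilities (under the
    # trainable SLM) of the rationale that did / did not lead to the gold answer.
    margin = beta * (logp_preferred - logp_rejected)
    return -F.logsigmoid(margin).mean()

# Toy usage: pretend log-probabilities for a batch of four rationale pairs.
logp_good = torch.tensor([-12.3, -9.8, -15.1, -11.0], requires_grad=True)
logp_bad = torch.tensor([-13.0, -12.5, -14.9, -16.2])
loss = preference_loss(logp_good, logp_bad)
loss.backward()
print(float(loss))

Minimising this loss pushes the model to assign higher likelihood to rationales that were verified against the ground-truth answer.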
2503.02476 | Zhengyang Ji | Zhengyang Ji, Shang Gao, Li Liu, Yifan Jia, Yutao Yue | BioD2C: A Dual-level Semantic Consistency Constraint Framework for
Biomedical VQA | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biomedical visual question answering (VQA) has been widely studied and has
demonstrated significant application value and potential in fields such as
assistive medical diagnosis. Despite their success, current biomedical VQA
models perform multimodal information interaction only at the model level
within large language models (LLMs), leading to suboptimal multimodal semantic
alignment when dealing with complex tasks. To address this issue, we propose
BioD2C: a novel Dual-level Semantic Consistency Constraint Framework for
Biomedical VQA, which achieves dual-level semantic interaction alignment at
both the model and feature levels, enabling the model to adaptively learn
visual features based on the question. Specifically, we firstly integrate
textual features into visual features via an image-text fusion mechanism as
feature-level semantic interaction, obtaining visual features conditioned on
the given text; and then introduce a text-queue-based cross-modal soft semantic
loss function to further align the image semantics with the question semantics.
Specifically, in this work, we establish a new dataset, BioVGQ, to address
inherent biases in prior datasets by filtering manually-altered images and
aligning question-answer pairs with multimodal context, and train our model on
this dataset. Extensive experimental results demonstrate that BioD2C achieves
state-of-the-art (SOTA) performance across multiple downstream datasets,
showcasing its robustness, generalizability, and potential to advance
biomedical VQA research.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 10:39:42 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ji",
"Zhengyang",
""
],
[
"Gao",
"Shang",
""
],
[
"Liu",
"Li",
""
],
[
"Jia",
"Yifan",
""
],
[
"Yue",
"Yutao",
""
]
]
| TITLE: BioD2C: A Dual-level Semantic Consistency Constraint Framework for
Biomedical VQA
ABSTRACT: Biomedical visual question answering (VQA) has been widely studied and has
demonstrated significant application value and potential in fields such as
assistive medical diagnosis. Despite their success, current biomedical VQA
models perform multimodal information interaction only at the model level
within large language models (LLMs), leading to suboptimal multimodal semantic
alignment when dealing with complex tasks. To address this issue, we propose
BioD2C: a novel Dual-level Semantic Consistency Constraint Framework for
Biomedical VQA, which achieves dual-level semantic interaction alignment at
both the model and feature levels, enabling the model to adaptively learn
visual features based on the question. Specifically, we firstly integrate
textual features into visual features via an image-text fusion mechanism as
feature-level semantic interaction, obtaining visual features conditioned on
the given text; and then introduce a text-queue-based cross-modal soft semantic
loss function to further align the image semantics with the question semantics.
Specifically, in this work, we establish a new dataset, BioVGQ, to address
inherent biases in prior datasets by filtering manually-altered images and
aligning question-answer pairs with multimodal context, and train our model on
this dataset. Extensive experimental results demonstrate that BioD2C achieves
state-of-the-art (SOTA) performance across multiple downstream datasets,
showcasing its robustness, generalizability, and potential to advance
biomedical VQA research.
| new_dataset | 0.973393 |
2503.02481 | Junyi Wang | Junyi Wang, Mubai Du, Ye Wu, Yijie Li, William M. Wells III, Lauren J.
O'Donnell, Fan Zhang | A Novel Streamline-based diffusion MRI Tractography Registration Method
with Probabilistic Keypoint Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Registration of diffusion MRI tractography is an essential step for analyzing
group similarities and variations in the brain's white matter (WM).
Streamline-based registration approaches can leverage the 3D geometric
information of fiber pathways to enable spatial alignment after registration.
Existing methods usually rely on the optimization of the spatial distances to
identify the optimal transformation. However, such methods overlook point
connectivity patterns within the streamline itself, limiting their ability to
identify anatomical correspondences across tractography datasets. In this work,
we propose a novel unsupervised approach using deep learning to perform
streamline-based dMRI tractography registration. The overall idea is to
identify corresponding keypoint pairs across subjects for spatial alignment of
tractography datasets. We model tractography as point clouds to leverage the
graph connectivity along streamlines. We propose a novel keypoint detection
method for streamlines, framed as a probabilistic classification task to
identify anatomically consistent correspondences across unstructured streamline
sets. In the experiments, we compare several existing methods and show highly
effective and efficient tractography registration performance.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 10:47:10 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wang",
"Junyi",
""
],
[
"Du",
"Mubai",
""
],
[
"Wu",
"Ye",
""
],
[
"Li",
"Yijie",
""
],
[
"Wells",
"William M.",
"III"
],
[
"O'Donnell",
"Lauren J.",
""
],
[
"Zhang",
"Fan",
""
]
]
| TITLE: A Novel Streamline-based diffusion MRI Tractography Registration Method
with Probabilistic Keypoint Detection
ABSTRACT: Registration of diffusion MRI tractography is an essential step for analyzing
group similarities and variations in the brain's white matter (WM).
Streamline-based registration approaches can leverage the 3D geometric
information of fiber pathways to enable spatial alignment after registration.
Existing methods usually rely on the optimization of the spatial distances to
identify the optimal transformation. However, such methods overlook point
connectivity patterns within the streamline itself, limiting their ability to
identify anatomical correspondences across tractography datasets. In this work,
we propose a novel unsupervised approach using deep learning to perform
streamline-based dMRI tractography registration. The overall idea is to
identify corresponding keypoint pairs across subjects for spatial alignment of
tractography datasets. We model tractography as point clouds to leverage the
graph connectivity along streamlines. We propose a novel keypoint detection
method for streamlines, framed as a probabilistic classification task to
identify anatomically consistent correspondences across unstructured streamline
sets. In the experiments, we compare several existing methods and show highly
effective and efficient tractography registration performance.
| no_new_dataset | 0.955068 |
2503.02497 | Abdul Basit | Haider Asif, Abdul Basit, Nouhaila Innan, Muhammad Kashif, Alberto
Marchisio, Muhammad Shafique | PennyLang: Pioneering LLM-Based Quantum Code Generation with a Novel
PennyLane-Centric Dataset | 10 pages, 8 figures, 6 tables, submitted for review under IJCNN 2025 | null | null | null | cs.SE cs.AI quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) offer remarkable capabilities in code
generation, natural language processing, and domain-specific reasoning. Their
potential in aiding quantum software development remains underexplored,
particularly for the PennyLane framework-a leading platform for hybrid
quantum-classical computing. To address this gap, we introduce a novel,
high-quality dataset comprising 3,347 PennyLane-specific code samples of
quantum circuits and their contextual descriptions, specifically curated to
train/fine-tune LLM-based quantum code assistance. Our key contributions are
threefold: (1) the automatic creation and open-source release of a
comprehensive PennyLane dataset leveraging quantum computing textbooks,
official documentation, and open-source repositories; (2) the development of a
systematic methodology for data refinement, annotation, and formatting to
optimize LLM training efficiency; and (3) a thorough evaluation, based on a
Retrieval-Augmented Generation (RAG) framework, demonstrating the effectiveness
of our dataset in streamlining PennyLane code generation and improving quantum
development workflows. Compared to existing efforts that predominantly focus on
Qiskit, our dataset significantly broadens the spectrum of quantum frameworks
covered in AI-driven code assistance. By bridging this gap and providing
reproducible dataset-creation methodologies, we aim to advance the field of
AI-assisted quantum programming, making quantum computing more accessible to
both newcomers and experienced developers.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 11:04:35 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Asif",
"Haider",
""
],
[
"Basit",
"Abdul",
""
],
[
"Innan",
"Nouhaila",
""
],
[
"Kashif",
"Muhammad",
""
],
[
"Marchisio",
"Alberto",
""
],
[
"Shafique",
"Muhammad",
""
]
]
| TITLE: PennyLang: Pioneering LLM-Based Quantum Code Generation with a Novel
PennyLane-Centric Dataset
ABSTRACT: Large Language Models (LLMs) offer remarkable capabilities in code
generation, natural language processing, and domain-specific reasoning. Their
potential in aiding quantum software development remains underexplored,
particularly for the PennyLane framework-a leading platform for hybrid
quantum-classical computing. To address this gap, we introduce a novel,
high-quality dataset comprising 3,347 PennyLane-specific code samples of
quantum circuits and their contextual descriptions, specifically curated to
train/fine-tune LLM-based quantum code assistance. Our key contributions are
threefold: (1) the automatic creation and open-source release of a
comprehensive PennyLane dataset leveraging quantum computing textbooks,
official documentation, and open-source repositories; (2) the development of a
systematic methodology for data refinement, annotation, and formatting to
optimize LLM training efficiency; and (3) a thorough evaluation, based on a
Retrieval-Augmented Generation (RAG) framework, demonstrating the effectiveness
of our dataset in streamlining PennyLane code generation and improving quantum
development workflows. Compared to existing efforts that predominantly focus on
Qiskit, our dataset significantly broadens the spectrum of quantum frameworks
covered in AI-driven code assistance. By bridging this gap and providing
reproducible dataset-creation methodologies, we aim to advance the field of
AI-assisted quantum programming, making quantum computing more accessible to
both newcomers and experienced developers.
| new_dataset | 0.965576 |
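For readers unfamiliar with the framework targeted by the dataset above, here is a small, self-contained PennyLane snippet of the kind such a corpus might contain; the circuit is an arbitrary illustration, not a sample drawn from the dataset.

import pennylane as qml
from pennylane import numpy as np

# Two-qubit statevector simulator.
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(theta):
    # Rotate qubit 0, entangle with qubit 1, and measure Z on qubit 1;
    # the expectation value equals cos(theta) for this circuit.
    qml.RY(theta, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

theta = np.array(0.4, requires_grad=True)
print(circuit(theta))            # ~cos(0.4)
print(qml.grad(circuit)(theta))  # ~-sin(0.4) via automatic differentiation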
2503.02499 | Nathan Daniel Schiele | Nathan D. Schiele and Olga Gadyatskaya | Attack Tree Distance: a practical examination of tree difference
measurement within cyber security | This is an incomplete draft that was stolen and plagiarized. When a
completed version is finished, it will be published as a version 2 here on
arxiv | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | CONTEXT. Attack trees are a recommended threat modeling tool, but there is no
established method to compare them. OBJECTIVE. We aim to establish a method to
compare "real" attack trees, based on both the structure of the tree itself and
the meaning of the node labels. METHOD. We define four methods of comparison
(three novel and one established) and compare them to a dataset of attack trees
created from a study run on students (n = 39). These attack trees all follow
from the same scenario, but have slightly different labels. RESULTS. We find
that applying semantic similarity as a means of comparing node labels is a
valid approach. Further, we find that tree edit distance (established) and
radical distance (novel) are the most promising methods of comparison in most
circumstances. CONCLUSION. We show that these two methods are valid as means of
comparing attack trees, and suggest a novel technique for using semantic
similarity to compare node labels. We further suggest that these methods can be
used to compare attack trees in a real-world scenario, and that they can be
used to identify similar attack trees.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 11:05:07 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Schiele",
"Nathan D.",
""
],
[
"Gadyatskaya",
"Olga",
""
]
]
| TITLE: Attack Tree Distance: a practical examination of tree difference
measurement within cyber security
ABSTRACT: CONTEXT. Attack trees are a recommended threat modeling tool, but there is no
established method to compare them. OBJECTIVE. We aim to establish a method to
compare "real" attack trees, based on both the structure of the tree itself and
the meaning of the node labels. METHOD. We define four methods of comparison
(three novel and one established) and compare them to a dataset of attack trees
created from a study run on students (n = 39). These attack trees all follow
from the same scenario, but have slightly different labels. RESULTS. We find
that applying semantic similarity as a means of comparing node labels is a
valid approach. Further, we find that tree edit distance (established) and
radical distance (novel) are the most promising methods of comparison in most
circumstances. CONCLUSION. We show that these two methods are valid as means of
comparing attack trees, and suggest a novel technique for using semantic
similarity to compare node labels. We further suggest that these methods can be
used to compare attack trees in a real-world scenario, and that they can be
used to identify similar attack trees.
| new_dataset | 0.964187 |
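To make the comparison ideas in the record above concrete, the toy sketch below scores two small attack trees by combining a node-label similarity (difflib's string ratio as a cheap stand-in for semantic similarity) with a greedy recursive structural match; it is illustrative only and implements neither the paper's radical distance nor a true tree edit distance.

from difflib import SequenceMatcher

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def label_sim(a, b):
    # Cheap stand-in for semantic similarity between node labels.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def tree_sim(a, b):
    # Score = label similarity at this node, averaged with a greedy best-match
    # pairing of the children; unmatched children contribute zero.
    if not a.children and not b.children:
        return label_sim(a.label, b.label)
    scores, used = [], set()
    for ca in a.children:
        best_j, best_s = None, 0.0
        for j, cb in enumerate(b.children):
            if j in used:
                continue
            s = tree_sim(ca, cb)
            if s > best_s:
                best_j, best_s = j, s
        if best_j is not None:
            used.add(best_j)
        scores.append(best_s)
    n = max(len(a.children), len(b.children))
    child_score = sum(scores) / n if n else 0.0
    return 0.5 * (label_sim(a.label, b.label) + child_score)

t1 = Node("steal credentials", [Node("phish employee"), Node("guess password")])
t2 = Node("obtain credentials", [Node("send phishing mail"), Node("brute-force password")])
print(round(tree_sim(t1, t2), 3))

Swapping label_sim for sentence-embedding cosine similarity would move this toy closer to the semantic comparison the abstract argues for.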
2503.02508 | Xin Li | Xin Ding, Xin Li, Haotong Qin, Zhibo Chen | Q&C: When Quantization Meets Cache in Efficient Image Generation | 11 pages | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantization and cache mechanisms are typically applied individually for
efficient Diffusion Transformers (DiTs), each demonstrating notable potential
for acceleration. However, the promoting effect of combining the two mechanisms
on efficient generation remains under-explored. Through empirical
investigation, we find that the combination of quantization and cache
mechanisms for DiT is not straightforward, and two key challenges lead to
severe catastrophic performance degradation: (i) the sample efficacy of
calibration datasets in post-training quantization (PTQ) is significantly
eliminated by cache operation; (ii) the combination of the above mechanisms
introduces more severe exposure bias within sampling distribution, resulting in
amplified error accumulation in the image generation process. In this work, we
take advantage of these two acceleration mechanisms and propose a hybrid
acceleration method by tackling the above challenges, aiming to further improve
the efficiency of DiTs while maintaining excellent generation capability.
Concretely, a temporal-aware parallel clustering (TAP) is designed to
dynamically improve the sample selection efficacy for the calibration within
PTQ for different diffusion steps. A variance compensation (VC) strategy is
derived to correct the sampling distribution. It mitigates exposure bias
through an adaptive correction factor generation. Extensive experiments have
shown that our method has accelerated DiTs by 12.7x while preserving
competitive generation capability. The code will be available at
https://github.com/xinding-sys/Quant-Cache.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 11:19:02 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ding",
"Xin",
""
],
[
"Li",
"Xin",
""
],
[
"Qin",
"Haotong",
""
],
[
"Chen",
"Zhibo",
""
]
]
| TITLE: Q&C: When Quantization Meets Cache in Efficient Image Generation
ABSTRACT: Quantization and cache mechanisms are typically applied individually for
efficient Diffusion Transformers (DiTs), each demonstrating notable potential
for acceleration. However, the promoting effect of combining the two mechanisms
on efficient generation remains under-explored. Through empirical
investigation, we find that the combination of quantization and cache
mechanisms for DiT is not straightforward, and two key challenges lead to
severe catastrophic performance degradation: (i) the sample efficacy of
calibration datasets in post-training quantization (PTQ) is significantly
eliminated by cache operation; (ii) the combination of the above mechanisms
introduces more severe exposure bias within sampling distribution, resulting in
amplified error accumulation in the image generation process. In this work, we
take advantage of these two acceleration mechanisms and propose a hybrid
acceleration method by tackling the above challenges, aiming to further improve
the efficiency of DiTs while maintaining excellent generation capability.
Concretely, a temporal-aware parallel clustering (TAP) is designed to
dynamically improve the sample selection efficacy for the calibration within
PTQ for different diffusion steps. A variance compensation (VC) strategy is
derived to correct the sampling distribution. It mitigates exposure bias
through an adaptive correction factor generation. Extensive experiments have
shown that our method has accelerated DiTs by 12.7x while preserving
competitive generation capability. The code will be available at
https://github.com/xinding-sys/Quant-Cache.
| no_new_dataset | 0.945851 |
2503.02510 | Mustafa M. Abd Zaid | Mustafa Majeed Abd Zaid, Ahmed Abed Mohammed, Putra Sumari | Remote Sensing Image Classification Using Convolutional Neural Network
(CNN) and Transfer Learning Techniques | This paper is published in Journal of Computer Science, Volume 21 No.
3, 2025. It contains 635-645 pages | J. Comput. Sci., 21(3), 635-645, 2025 | 10.3844/jcssp.2025.635.645 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This study investigates the classification of aerial images depicting
transmission towers, forests, farmland, and mountains. To complete the
classification job, features are extracted from input photos using a
Convolutional Neural Network (CNN) architecture. Then, the images are
classified using Softmax. To test the model, we ran it for ten epochs using a
batch size of 90, the Adam optimizer, and a learning rate of 0.001. Both
training and assessment are conducted using a dataset that blends
self-collected pictures from Google satellite imagery with the MLRNet dataset.
The comprehensive dataset comprises 10,400 images. Our study shows that
transfer learning models, and MobileNetV2 in particular, work well for landscape
categorization. These models are good options for practical use because they
strike a good mix between precision and efficiency; our approach achieves
results with an overall accuracy of 87% on the built CNN model. Furthermore, we
reach even higher accuracies by utilizing the pretrained VGG16 and MobileNetV2
models as a starting point for transfer learning. Specifically, VGG16 achieves
an accuracy of 90% and a test loss of 0.298, while MobileNetV2 outperforms both
models with an accuracy of 96% and a test loss of 0.119; the results
demonstrate the effectiveness of employing transfer learning with MobileNetV2
for classifying transmission towers, forests, farmland, and mountains.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 11:19:18 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zaid",
"Mustafa Majeed Abd",
""
],
[
"Mohammed",
"Ahmed Abed",
""
],
[
"Sumari",
"Putra",
""
]
]
| TITLE: Remote Sensing Image Classification Using Convolutional Neural Network
(CNN) and Transfer Learning Techniques
ABSTRACT: This study investigates the classification of aerial images depicting
transmission towers, forests, farmland, and mountains. To complete the
classification job, features are extracted from input photos using a
Convolutional Neural Network (CNN) architecture. Then, the images are
classified using Softmax. To test the model, we ran it for ten epochs using a
batch size of 90, the Adam optimizer, and a learning rate of 0.001. Both
training and assessment are conducted using a dataset that blends
self-collected pictures from Google satellite imagery with the MLRNet dataset.
The comprehensive dataset comprises 10,400 images. Our study shows that
transfer learning models, and MobileNetV2 in particular, work well for landscape
categorization. These models are good options for practical use because they
strike a good mix between precision and efficiency; our approach achieves
results with an overall accuracy of 87% on the built CNN model. Furthermore, we
reach even higher accuracies by utilizing the pretrained VGG16 and MobileNetV2
models as a starting point for transfer learning. Specifically, VGG16 achieves
an accuracy of 90% and a test loss of 0.298, while MobileNetV2 outperforms both
models with an accuracy of 96% and a test loss of 0.119; the results
demonstrate the effectiveness of employing transfer learning with MobileNetV2
for classifying transmission towers, forests, farmland, and mountains.
| no_new_dataset | 0.911653 |
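A generic Keras sketch of the MobileNetV2 transfer-learning setup described above (frozen ImageNet backbone, softmax head, Adam at 1e-3, batch size 90, ten epochs); the input size, directory layout, and class count are assumptions taken from the abstract rather than any released code.

import tensorflow as tf

NUM_CLASSES = 4  # transmission towers, forests, farmland, mountains

# Frozen ImageNet backbone with a small classification head on top.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])

# Assumed directory layout: data/train/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=90)
model.fit(train_ds, epochs=10)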
2503.02534 | Hocheol Lim | Hocheol Lim, Hyein Cho, Jeonghoon Kim | SAGE-Amine: Generative Amine Design with Multi-Property Optimization for
Efficient CO2 Capture | 33 pages, 5 figures | null | null | null | cs.LG cond-mat.mtrl-sci | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Efficient CO2 capture is vital for mitigating climate change, with
amine-based solvents being widely used due to their strong reactivity with CO2.
However, optimizing key properties such as basicity, viscosity, and absorption
capacity remains challenging, as traditional methods rely on labor-intensive
experimentation and predefined chemical databases, limiting the exploration of
novel solutions. Here, SAGE-Amine was introduced, a generative modeling
approach that integrates Scoring-Assisted Generative Exploration (SAGE) with
quantitative structure-property relationship models to design new amines
tailored for CO2 capture. Unlike conventional virtual screening restricted to
existing compounds, SAGE-Amine generates novel amines by leveraging
autoregressive natural language processing models trained on amine datasets.
SAGE-Amine identified known amines for CO2 capture from scratch and
successfully performed single-property optimization, increasing basicity or
reducing viscosity or vapor pressure. Furthermore, it facilitated
multi-property optimization, simultaneously achieving high basicity with low
viscosity and vapor pressure. The 10 top-ranked amines were suggested using
SAGE-Amine and their thermodynamic properties were further assessed using
COSMO-RS simulations, confirming their potential for CO2 capture. These results
highlight the potential of generative modeling in accelerating the discovery of
amine solvents and expanding the possibilities for industrial CO2 capture
applications.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 12:02:36 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lim",
"Hocheol",
""
],
[
"Cho",
"Hyein",
""
],
[
"Kim",
"Jeonghoon",
""
]
]
| TITLE: SAGE-Amine: Generative Amine Design with Multi-Property Optimization for
Efficient CO2 Capture
ABSTRACT: Efficient CO2 capture is vital for mitigating climate change, with
amine-based solvents being widely used due to their strong reactivity with CO2.
However, optimizing key properties such as basicity, viscosity, and absorption
capacity remains challenging, as traditional methods rely on labor-intensive
experimentation and predefined chemical databases, limiting the exploration of
novel solutions. Here, SAGE-Amine was introduced, a generative modeling
approach that integrates Scoring-Assisted Generative Exploration (SAGE) with
quantitative structure-property relationship models to design new amines
tailored for CO2 capture. Unlike conventional virtual screening restricted to
existing compounds, SAGE-Amine generates novel amines by leveraging
autoregressive natural language processing models trained on amine datasets.
SAGE-Amine identified known amines for CO2 capture from scratch and
successfully performed single-property optimization, increasing basicity or
reducing viscosity or vapor pressure. Furthermore, it facilitated
multi-property optimization, simultaneously achieving high basicity with low
viscosity and vapor pressure. The 10 top-ranked amines were suggested using
SAGE-Amine and their thermodynamic properties were further assessed using
COSMO-RS simulations, confirming their potential for CO2 capture. These results
highlight the potential of generative modeling in accelerating the discovery of
amine solvents and expanding the possibilities for industrial CO2 capture
applications.
| no_new_dataset | 0.950319 |
2503.02539 | Yiyun Zhou | Yiyun Zhou, Zheqi Lv, Shengyu Zhang, Jingyuan Chen | Disentangled Knowledge Tracing for Alleviating Cognitive Bias | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | In the realm of Intelligent Tutoring System (ITS), the accurate assessment of
students' knowledge states through Knowledge Tracing (KT) is crucial for
personalized learning. However, due to data bias, $\textit{i.e.}$, the
unbalanced distribution of question groups ($\textit{e.g.}$, concepts),
conventional KT models are plagued by cognitive bias, which tends to result in
cognitive underload for overperformers and cognitive overload for
underperformers. More seriously, this bias is amplified with the exercise
recommendations by ITS. After delving into the causal relations in the KT
models, we identify the main cause as the confounder effect of students'
historical correct rate distribution over question groups on the student
representation and prediction score. Towards this end, we propose a
Disentangled Knowledge Tracing (DisKT) model, which separately models students'
familiar and unfamiliar abilities based on causal effects and eliminates the
impact of the confounder in student representation within the model.
Additionally, to shield the contradictory psychology ($\textit{e.g.}$, guessing
and mistaking) in the students' biased data, DisKT introduces a contradiction
attention mechanism. Furthermore, DisKT enhances the interpretability of the
model predictions by integrating a variant of Item Response Theory.
Experimental results on 11 benchmarks and 3 synthesized datasets with different
bias strengths demonstrate that DisKT significantly alleviates cognitive bias
and outperforms 16 baselines in evaluation accuracy.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 12:04:13 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhou",
"Yiyun",
""
],
[
"Lv",
"Zheqi",
""
],
[
"Zhang",
"Shengyu",
""
],
[
"Chen",
"Jingyuan",
""
]
]
| TITLE: Disentangled Knowledge Tracing for Alleviating Cognitive Bias
ABSTRACT: In the realm of Intelligent Tutoring System (ITS), the accurate assessment of
students' knowledge states through Knowledge Tracing (KT) is crucial for
personalized learning. However, due to data bias, $\textit{i.e.}$, the
unbalanced distribution of question groups ($\textit{e.g.}$, concepts),
conventional KT models are plagued by cognitive bias, which tends to result in
cognitive underload for overperformers and cognitive overload for
underperformers. More seriously, this bias is amplified with the exercise
recommendations by ITS. After delving into the causal relations in the KT
models, we identify the main cause as the confounder effect of students'
historical correct rate distribution over question groups on the student
representation and prediction score. Towards this end, we propose a
Disentangled Knowledge Tracing (DisKT) model, which separately models students'
familiar and unfamiliar abilities based on causal effects and eliminates the
impact of the confounder in student representation within the model.
Additionally, to shield the contradictory psychology ($\textit{e.g.}$, guessing
and mistaking) in the students' biased data, DisKT introduces a contradiction
attention mechanism. Furthermore, DisKT enhances the interpretability of the
model predictions by integrating a variant of Item Response Theory.
Experimental results on 11 benchmarks and 3 synthesized datasets with different
bias strengths demonstrate that DisKT significantly alleviates cognitive bias
and outperforms 16 baselines in evaluation accuracy.
| no_new_dataset | 0.944893 |
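The Item Response Theory variant mentioned in the record above builds on the classic logistic response model; the sketch below is the standard two-parameter logistic (2PL) form, shown for background rather than as the DisKT formulation.

import math

def irt_2pl(ability, difficulty, discrimination=1.0):
    # Probability that a student with the given ability answers the item correctly.
    return 1.0 / (1.0 + math.exp(-discrimination * (ability - difficulty)))

# A stronger student on a moderately hard item, then a weaker student on a
# more discriminating item.
print(round(irt_2pl(1.2, 0.5), 3))        # ~0.668
print(round(irt_2pl(-0.3, 0.5, 1.5), 3))  # ~0.231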
2503.02547 | Sheng Shang | Sheng Shang, Chenglong Zhao, Ruixin Zhang, Jianlong Jin, Jingyun
Zhang, Rizen Guo, Shouhong Ding, Yunsheng Wu, Yang Zhao, Wei Jia | PVTree: Realistic and Controllable Palm Vein Generation for Recognition
Tasks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Palm vein recognition is an emerging biometric technology that offers
enhanced security and privacy. However, acquiring sufficient palm vein data for
training deep learning-based recognition models is challenging due to the high
costs of data collection and privacy protection constraints. This has led to a
growing interest in generating pseudo-palm vein data using generative models.
Existing methods, however, often produce unrealistic palm vein patterns or
struggle with controlling identity and style attributes. To address these
issues, we propose a novel palm vein generation framework named PVTree. First,
the palm vein identity is defined by a complex and authentic 3D palm vascular
tree, created using an improved Constrained Constructive Optimization (CCO)
algorithm. Second, palm vein patterns of the same identity are generated by
projecting the same 3D vascular tree into 2D images from different views and
converting them into realistic images using a generative model. As a result,
PVTree satisfies the need for both identity consistency and intra-class
diversity. Extensive experiments conducted on several publicly available
datasets demonstrate that our proposed palm vein generation method surpasses
existing methods and achieves a higher TAR@FAR=1e-4 under the 1:1 Open-set
protocol. To the best of our knowledge, this is the first time that the
performance of a recognition model trained on synthetic palm vein data exceeds
that of the recognition model trained on real data, which indicates that palm
vein image generation research has a promising future.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 12:15:33 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Shang",
"Sheng",
""
],
[
"Zhao",
"Chenglong",
""
],
[
"Zhang",
"Ruixin",
""
],
[
"Jin",
"Jianlong",
""
],
[
"Zhang",
"Jingyun",
""
],
[
"Guo",
"Rizen",
""
],
[
"Ding",
"Shouhong",
""
],
[
"Wu",
"Yunsheng",
""
],
[
"Zhao",
"Yang",
""
],
[
"Jia",
"Wei",
""
]
]
| TITLE: PVTree: Realistic and Controllable Palm Vein Generation for Recognition
Tasks
ABSTRACT: Palm vein recognition is an emerging biometric technology that offers
enhanced security and privacy. However, acquiring sufficient palm vein data for
training deep learning-based recognition models is challenging due to the high
costs of data collection and privacy protection constraints. This has led to a
growing interest in generating pseudo-palm vein data using generative models.
Existing methods, however, often produce unrealistic palm vein patterns or
struggle with controlling identity and style attributes. To address these
issues, we propose a novel palm vein generation framework named PVTree. First,
the palm vein identity is defined by a complex and authentic 3D palm vascular
tree, created using an improved Constrained Constructive Optimization (CCO)
algorithm. Second, palm vein patterns of the same identity are generated by
projecting the same 3D vascular tree into 2D images from different views and
converting them into realistic images using a generative model. As a result,
PVTree satisfies the need for both identity consistency and intra-class
diversity. Extensive experiments conducted on several publicly available
datasets demonstrate that our proposed palm vein generation method surpasses
existing methods and achieves a higher TAR@FAR=1e-4 under the 1:1 Open-set
protocol. To the best of our knowledge, this is the first time that the
performance of a recognition model trained on synthetic palm vein data exceeds
that of the recognition model trained on real data, which indicates that palm
vein image generation research has a promising future.
| no_new_dataset | 0.948251 |
2503.02549 | Grzegorz Skorupko | Grzegorz Skorupko, Fotios Avgoustidis, Carlos Mart\'in-Isla, Lidia
Garrucho, Dimitri A. Kessler, Esmeralda Ruiz Pujadas, Oliver D\'iaz, Maciej
Bobowicz, Katarzyna Gwo\'zdziewicz, Xavier Bargall\'o, Paulius
Jaru\v{s}evi\v{c}ius, Kaisar Kushibar and Karim Lekadir | Federated nnU-Net for Privacy-Preserving Medical Image Segmentation | In review | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The nnU-Net framework has played a crucial role in medical image segmentation
and has become the gold standard in multitudes of applications targeting
different diseases, organs, and modalities. However, so far it has been used
primarily in a centralized approach where the data collected from hospitals are
stored in one center and used to train the nnU-Net. This centralized approach
has various limitations, such as leakage of sensitive patient information and
violation of patient privacy. Federated learning is one of the approaches to
train a segmentation model in a decentralized manner that helps preserve
patient privacy. In this paper, we propose FednnU-Net, a federated learning
extension of nnU-Net. We introduce two novel federated learning methods to the
nnU-Net framework - Federated Fingerprint Extraction (FFE) and Asymmetric
Federated Averaging (AsymFedAvg) - and experimentally show their consistent
performance for breast, cardiac and fetal segmentation using 6 datasets
representing samples from 18 institutions. Additionally, to further promote
research and deployment of decentralized training in privacy constrained
institutions, we make our plug-n-play framework public. The source-code is
available at https://github.com/faildeny/FednnUNet .
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 12:20:06 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Skorupko",
"Grzegorz",
""
],
[
"Avgoustidis",
"Fotios",
""
],
[
"Martín-Isla",
"Carlos",
""
],
[
"Garrucho",
"Lidia",
""
],
[
"Kessler",
"Dimitri A.",
""
],
[
"Pujadas",
"Esmeralda Ruiz",
""
],
[
"Díaz",
"Oliver",
""
],
[
"Bobowicz",
"Maciej",
""
],
[
"Gwoździewicz",
"Katarzyna",
""
],
[
"Bargalló",
"Xavier",
""
],
[
"Jaruševičius",
"Paulius",
""
],
[
"Kushibar",
"Kaisar",
""
],
[
"Lekadir",
"Karim",
""
]
]
| TITLE: Federated nnU-Net for Privacy-Preserving Medical Image Segmentation
ABSTRACT: The nnU-Net framework has played a crucial role in medical image segmentation
and has become the gold standard in multitudes of applications targeting
different diseases, organs, and modalities. However, so far it has been used
primarily in a centralized approach where the data collected from hospitals are
stored in one center and used to train the nnU-Net. This centralized approach
has various limitations, such as leakage of sensitive patient information and
violation of patient privacy. Federated learning is one of the approaches to
train a segmentation model in a decentralized manner that helps preserve
patient privacy. In this paper, we propose FednnU-Net, a federated learning
extension of nnU-Net. We introduce two novel federated learning methods to the
nnU-Net framework - Federated Fingerprint Extraction (FFE) and Asymmetric
Federated Averaging (AsymFedAvg) - and experimentally show their consistent
performance for breast, cardiac and fetal segmentation using 6 datasets
representing samples from 18 institutions. Additionally, to further promote
research and deployment of decentralized training in privacy constrained
institutions, we make our plug-n-play framework public. The source-code is
available at https://github.com/faildeny/FednnUNet .
| no_new_dataset | 0.947672 |
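The federated aggregation underlying frameworks like the one above can be illustrated with the textbook FedAvg update, a data-weighted mean of client model weights; this sketch is generic and is not the paper's FFE or AsymFedAvg procedure.

from typing import Dict, List
import torch

def fedavg(client_states: List[Dict[str, torch.Tensor]],
           client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    # Weighted average of client state_dicts, weights proportional to local data size.
    total = float(sum(client_sizes))
    return {
        name: sum((n / total) * state[name].float()
                  for state, n in zip(client_states, client_sizes))
        for name in client_states[0]
    }

# Toy usage with two "clients" sharing a tiny linear model.
clients = [torch.nn.Linear(4, 2) for _ in range(2)]
global_state = fedavg([c.state_dict() for c in clients], client_sizes=[120, 380])
server = torch.nn.Linear(4, 2)
server.load_state_dict(global_state)
print({k: tuple(v.shape) for k, v in global_state.items()})

In a nnU-Net-style setting the same averaging would be applied to the segmentation network's parameters after each local training round.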
2503.02558 | Han Fang | Zeqing Wang, Han Fang, Yihong Xu, Yutong Ban | Tracking-Aware Deformation Field Estimation for Non-rigid 3D
Reconstruction in Robotic Surgeries | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Minimally invasive procedures have been advanced rapidly by the robotic
laparoscopic surgery. The latter greatly assists surgeons in sophisticated and
precise operations with reduced invasiveness. Nevertheless, it is still safety
critical to be aware of even the least tissue deformation during
instrument-tissue interactions, especially in 3D space. To address this, recent
works rely on NeRF to render 2D videos from different perspectives and
eliminate occlusions. However, most of the methods fail to predict the accurate
3D shapes and associated deformation estimates robustly. Differently, we
propose Tracking-Aware Deformation Field (TADF), a novel framework which
reconstructs the 3D mesh along with the 3D tissue deformation simultaneously.
It first tracks the key points of soft tissue by a foundation vision model,
providing an accurate 2D deformation field. Then, the 2D deformation field is
smoothly incorporated with a neural implicit reconstruction network to obtain
tissue deformation in the 3D space. Finally, we experimentally demonstrate that
the proposed method provides more accurate deformation estimation compared with
other 3D neural reconstruction methods in two public datasets.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 12:33:17 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wang",
"Zeqing",
""
],
[
"Fang",
"Han",
""
],
[
"Xu",
"Yihong",
""
],
[
"Ban",
"Yutong",
""
]
]
| TITLE: Tracking-Aware Deformation Field Estimation for Non-rigid 3D
Reconstruction in Robotic Surgeries
ABSTRACT: Minimally invasive procedures have been advanced rapidly by the robotic
laparoscopic surgery. The latter greatly assists surgeons in sophisticated and
precise operations with reduced invasiveness. Nevertheless, it is still safety
critical to be aware of even the least tissue deformation during
instrument-tissue interactions, especially in 3D space. To address this, recent
works rely on NeRF to render 2D videos from different perspectives and
eliminate occlusions. However, most of the methods fail to predict the accurate
3D shapes and associated deformation estimates robustly. Differently, we
propose Tracking-Aware Deformation Field (TADF), a novel framework which
reconstructs the 3D mesh along with the 3D tissue deformation simultaneously.
It first tracks the key points of soft tissue by a foundation vision model,
providing an accurate 2D deformation field. Then, the 2D deformation field is
smoothly incorporated with a neural implicit reconstruction network to obtain
tissue deformation in the 3D space. Finally, we experimentally demonstrate that
the proposed method provides more accurate deformation estimation compared with
other 3D neural reconstruction methods in two public datasets.
| no_new_dataset | 0.94366 |
2503.02572 | Valerii Serpiva | Valerii Serpiva, Artem Lykov, Artyom Myshlyaev, Muhammad Haris Khan,
Ali Alridha Abdulkarim, Oleg Sautenkov and Dzmitry Tsetserukou | RaceVLA: VLA-based Racing Drone Navigation with Human-like Behaviour | 6 pages, 6 figures. Submitted to IROS 2025 | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | RaceVLA presents an innovative approach for autonomous racing drone
navigation by leveraging Visual-Language-Action (VLA) to emulate human-like
behavior. This research explores the integration of advanced algorithms that
enable drones to adapt their navigation strategies based on real-time
environmental feedback, mimicking the decision-making processes of human
pilots. The model, fine-tuned on a collected racing drone dataset, demonstrates
strong generalization despite the complexity of drone racing environments.
RaceVLA outperforms OpenVLA in motion (75.0 vs 60.0) and semantic
generalization (45.5 vs 36.3), benefiting from the dynamic camera and
simplified motion tasks. However, visual (79.6 vs 87.0) and physical (50.0 vs
76.7) generalization were slightly reduced due to the challenges of maneuvering
in dynamic environments with varying object sizes. RaceVLA also outperforms
RT-2 across all axes - visual (79.6 vs 52.0), motion (75.0 vs 55.0), physical
(50.0 vs 26.7), and semantic (45.5 vs 38.8), demonstrating its robustness for
real-time adjustments in complex environments. Experiments revealed an average
velocity of 1.04 m/s, with a maximum speed of 2.02 m/s, and consistent
maneuverability, demonstrating RaceVLA's ability to handle high-speed scenarios
effectively. These findings highlight the potential of RaceVLA for
high-performance navigation in competitive racing contexts. The RaceVLA
codebase, pretrained weights, and dataset are available at
https://racevla.github.io/
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 12:54:05 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Serpiva",
"Valerii",
""
],
[
"Lykov",
"Artem",
""
],
[
"Myshlyaev",
"Artyom",
""
],
[
"Khan",
"Muhammad Haris",
""
],
[
"Abdulkarim",
"Ali Alridha",
""
],
[
"Sautenkov",
"Oleg",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
]
| TITLE: RaceVLA: VLA-based Racing Drone Navigation with Human-like Behaviour
ABSTRACT: RaceVLA presents an innovative approach for autonomous racing drone
navigation by leveraging Visual-Language-Action (VLA) to emulate human-like
behavior. This research explores the integration of advanced algorithms that
enable drones to adapt their navigation strategies based on real-time
environmental feedback, mimicking the decision-making processes of human
pilots. The model, fine-tuned on a collected racing drone dataset, demonstrates
strong generalization despite the complexity of drone racing environments.
RaceVLA outperforms OpenVLA in motion (75.0 vs 60.0) and semantic
generalization (45.5 vs 36.3), benefiting from the dynamic camera and
simplified motion tasks. However, visual (79.6 vs 87.0) and physical (50.0 vs
76.7) generalization were slightly reduced due to the challenges of maneuvering
in dynamic environments with varying object sizes. RaceVLA also outperforms
RT-2 across all axes - visual (79.6 vs 52.0), motion (75.0 vs 55.0), physical
(50.0 vs 26.7), and semantic (45.5 vs 38.8), demonstrating its robustness for
real-time adjustments in complex environments. Experiments revealed an average
velocity of 1.04 m/s, with a maximum speed of 2.02 m/s, and consistent
maneuverability, demonstrating RaceVLA's ability to handle high-speed scenarios
effectively. These findings highlight the potential of RaceVLA for
high-performance navigation in competitive racing contexts. The RaceVLA
codebase, pretrained weights, and dataset are available at
https://racevla.github.io/
| new_dataset | 0.951051 |
2503.02574 | Tim Beyer | Tim Beyer, Sophie Xhonneux, Simon Geisler, Gauthier Gidel, Leo
Schwinn, Stephan G\"unnemann | LLM-Safety Evaluations Lack Robustness | null | null | null | null | cs.CR cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we argue that current safety alignment research efforts for
large language models are hindered by many intertwined sources of noise, such
as small datasets, methodological inconsistencies, and unreliable evaluation
setups. This can, at times, make it impossible to evaluate and compare attacks
and defenses fairly, thereby slowing progress. We systematically analyze the
LLM safety evaluation pipeline, covering dataset curation, optimization
strategies for automated red-teaming, response generation, and response
evaluation using LLM judges. At each stage, we identify key issues and
highlight their practical impact. We also propose a set of guidelines for
reducing noise and bias in evaluations of future attack and defense papers.
Lastly, we offer an opposing perspective, highlighting practical reasons for
existing limitations. We believe that addressing the outlined problems in
future research will improve the field's ability to generate easily comparable
results and make measurable progress.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 12:55:07 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Beyer",
"Tim",
""
],
[
"Xhonneux",
"Sophie",
""
],
[
"Geisler",
"Simon",
""
],
[
"Gidel",
"Gauthier",
""
],
[
"Schwinn",
"Leo",
""
],
[
"Günnemann",
"Stephan",
""
]
]
| TITLE: LLM-Safety Evaluations Lack Robustness
ABSTRACT: In this paper, we argue that current safety alignment research efforts for
large language models are hindered by many intertwined sources of noise, such
as small datasets, methodological inconsistencies, and unreliable evaluation
setups. This can, at times, make it impossible to evaluate and compare attacks
and defenses fairly, thereby slowing progress. We systematically analyze the
LLM safety evaluation pipeline, covering dataset curation, optimization
strategies for automated red-teaming, response generation, and response
evaluation using LLM judges. At each stage, we identify key issues and
highlight their practical impact. We also propose a set of guidelines for
reducing noise and bias in evaluations of future attack and defense papers.
Lastly, we offer an opposing perspective, highlighting practical reasons for
existing limitations. We believe that addressing the outlined problems in
future research will improve the field's ability to generate easily comparable
results and make measurable progress.
| no_new_dataset | 0.94868 |
2503.02579 | Ege \"Ozsoy | Ege \"Ozsoy, Chantal Pellegrini, Tobias Czempiel, Felix Tristram, Kun
Yuan, David Bani-Harouni, Ulrich Eck, Benjamin Busam, Matthias Keicher,
Nassir Navab | MM-OR: A Large Multimodal Operating Room Dataset for Semantic
Understanding of High-Intensity Surgical Environments | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Operating rooms (ORs) are complex, high-stakes environments requiring precise
understanding of interactions among medical staff, tools, and equipment for
enhancing surgical assistance, situational awareness, and patient safety.
Current datasets fall short in scale and realism, and do not capture the multimodal
nature of OR scenes, limiting progress in OR modeling. To this end, we
introduce MM-OR, a realistic and large-scale multimodal spatiotemporal OR
dataset, and the first dataset to enable multimodal scene graph generation.
MM-OR captures comprehensive OR scenes containing RGB-D data, detail views,
audio, speech transcripts, robotic logs, and tracking data and is annotated
with panoptic segmentations, semantic scene graphs, and downstream task labels.
Further, we propose MM2SG, the first multimodal large vision-language model for
scene graph generation, and through extensive experiments, demonstrate its
ability to effectively leverage multimodal inputs. Together, MM-OR and MM2SG
establish a new benchmark for holistic OR understanding, and open the path
towards multimodal scene analysis in complex, high-stakes environments. Our
code, and data is available at https://github.com/egeozsoy/MM-OR.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 13:00:52 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Özsoy",
"Ege",
""
],
[
"Pellegrini",
"Chantal",
""
],
[
"Czempiel",
"Tobias",
""
],
[
"Tristram",
"Felix",
""
],
[
"Yuan",
"Kun",
""
],
[
"Bani-Harouni",
"David",
""
],
[
"Eck",
"Ulrich",
""
],
[
"Busam",
"Benjamin",
""
],
[
"Keicher",
"Matthias",
""
],
[
"Navab",
"Nassir",
""
]
]
| TITLE: MM-OR: A Large Multimodal Operating Room Dataset for Semantic
Understanding of High-Intensity Surgical Environments
ABSTRACT: Operating rooms (ORs) are complex, high-stakes environments requiring precise
understanding of interactions among medical staff, tools, and equipment for
enhancing surgical assistance, situational awareness, and patient safety.
Current datasets fall short in scale and realism, and do not capture the multimodal
nature of OR scenes, limiting progress in OR modeling. To this end, we
introduce MM-OR, a realistic and large-scale multimodal spatiotemporal OR
dataset, and the first dataset to enable multimodal scene graph generation.
MM-OR captures comprehensive OR scenes containing RGB-D data, detail views,
audio, speech transcripts, robotic logs, and tracking data and is annotated
with panoptic segmentations, semantic scene graphs, and downstream task labels.
Further, we propose MM2SG, the first multimodal large vision-language model for
scene graph generation, and through extensive experiments, demonstrate its
ability to effectively leverage multimodal inputs. Together, MM-OR and MM2SG
establish a new benchmark for holistic OR understanding, and open the path
towards multimodal scene analysis in complex, high-stakes environments. Our
code, and data is available at https://github.com/egeozsoy/MM-OR.
| new_dataset | 0.967717 |
2503.02581 | Kailun Yang | Jiayi Zhao, Fei Teng, Kai Luo, Guoqiang Zhao, Zhiyong Li, Xu Zheng,
Kailun Yang | Unveiling the Potential of Segment Anything Model 2 for RGB-Thermal
Semantic Segmentation with Language Guidance | The source code will be made publicly available at
https://github.com/iAsakiT3T/SHIFNet | null | null | null | cs.CV cs.RO eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The perception capability of robotic systems relies on the richness of the
dataset. Although Segment Anything Model 2 (SAM2), trained on large datasets,
demonstrates strong perception potential in perception tasks, its inherent
training paradigm prevents it from being suitable for RGB-T tasks. To address
these challenges, we propose SHIFNet, a novel SAM2-driven Hybrid Interaction
Paradigm that unlocks the potential of SAM2 with linguistic guidance for
efficient RGB-Thermal perception. Our framework consists of two key components:
(1) Semantic-Aware Cross-modal Fusion (SACF) module that dynamically balances
modality contributions through text-guided affinity learning, overcoming SAM2's
inherent RGB bias; (2) Heterogeneous Prompting Decoder (HPD) that enhances
global semantic information through a semantic enhancement module and then
combined with category embeddings to amplify cross-modal semantic consistency.
With 32.27M trainable parameters, SHIFNet achieves state-of-the-art
segmentation performance on public benchmarks, reaching 89.8% on PST900 and
67.8% on FMB, respectively. The framework facilitates the adaptation of
pre-trained large models to RGB-T segmentation tasks, effectively mitigating
the high costs associated with data collection while endowing robotic systems
with comprehensive perception capabilities. The source code will be made
publicly available at https://github.com/iAsakiT3T/SHIFNet.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 13:04:46 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhao",
"Jiayi",
""
],
[
"Teng",
"Fei",
""
],
[
"Luo",
"Kai",
""
],
[
"Zhao",
"Guoqiang",
""
],
[
"Li",
"Zhiyong",
""
],
[
"Zheng",
"Xu",
""
],
[
"Yang",
"Kailun",
""
]
]
| TITLE: Unveiling the Potential of Segment Anything Model 2 for RGB-Thermal
Semantic Segmentation with Language Guidance
ABSTRACT: The perception capability of robotic systems relies on the richness of the
dataset. Although Segment Anything Model 2 (SAM2), trained on large datasets,
demonstrates strong potential in perception tasks, its inherent
training paradigm prevents it from being suitable for RGB-T tasks. To address
these challenges, we propose SHIFNet, a novel SAM2-driven Hybrid Interaction
Paradigm that unlocks the potential of SAM2 with linguistic guidance for
efficient RGB-Thermal perception. Our framework consists of two key components:
(1) Semantic-Aware Cross-modal Fusion (SACF) module that dynamically balances
modality contributions through text-guided affinity learning, overcoming SAM2's
inherent RGB bias; (2) Heterogeneous Prompting Decoder (HPD) that enhances
global semantic information through a semantic enhancement module and then
combines it with category embeddings to amplify cross-modal semantic consistency.
With 32.27M trainable parameters, SHIFNet achieves state-of-the-art
segmentation performance on public benchmarks, reaching 89.8% on PST900 and
67.8% on FMB, respectively. The framework facilitates the adaptation of
pre-trained large models to RGB-T segmentation tasks, effectively mitigating
the high costs associated with data collection while endowing robotic systems
with comprehensive perception capabilities. The source code will be made
publicly available at https://github.com/iAsakiT3T/SHIFNet.
| no_new_dataset | 0.951233 |
2503.02583 | Pawe{\l} Teisseyre | Pawe{\l} Teisseyre and Jan Mielniczuk | A generalized approach to label shift: the Conditional Probability Shift
Model | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many practical applications of machine learning, a discrepancy often
arises between a source distribution from which labeled training examples are
drawn and a target distribution for which only unlabeled data is observed.
Traditionally, two main scenarios have been considered to address this issue:
covariate shift (CS), where only the marginal distribution of features changes,
and label shift (LS), which involves a change in the class variable's prior
distribution. However, these frameworks do not encompass all forms of
distributional shift. This paper introduces a new setting, Conditional
Probability Shift (CPS), which captures the case when the conditional
distribution of the class variable given some specific features changes while
the distribution of remaining features given the specific features and the
class is preserved. For this scenario we present the Conditional Probability
Shift Model (CPSM) based on modeling the class variable's conditional
probabilities using multinomial regression. Since the class variable is not
observed for the target data, the parameters of the multinomial model for its
distribution are estimated using the Expectation-Maximization algorithm. The
proposed method is generic and can be combined with any probabilistic
classifier. The effectiveness of CPSM is demonstrated through experiments on
synthetic datasets and a case study using the MIMIC medical database, revealing
its superior balanced classification accuracy on the target data compared to
existing methods, particularly in situations of conditional
distribution shift and no a priori distribution shift, which are not detected by
LS-based methods.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 13:07:20 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Teisseyre",
"Paweł",
""
],
[
"Mielniczuk",
"Jan",
""
]
]
| TITLE: A generalized approach to label shift: the Conditional Probability Shift
Model
ABSTRACT: In many practical applications of machine learning, a discrepancy often
arises between a source distribution from which labeled training examples are
drawn and a target distribution for which only unlabeled data is observed.
Traditionally, two main scenarios have been considered to address this issue:
covariate shift (CS), where only the marginal distribution of features changes,
and label shift (LS), which involves a change in the class variable's prior
distribution. However, these frameworks do not encompass all forms of
distributional shift. This paper introduces a new setting, Conditional
Probability Shift (CPS), which captures the case when the conditional
distribution of the class variable given some specific features changes while
the distribution of remaining features given the specific features and the
class is preserved. For this scenario we present the Conditional Probability
Shift Model (CPSM) based on modeling the class variable's conditional
probabilities using multinomial regression. Since the class variable is not
observed for the target data, the parameters of the multinomial model for its
distribution are estimated using the Expectation-Maximization algorithm. The
proposed method is generic and can be combined with any probabilistic
classifier. The effectiveness of CPSM is demonstrated through experiments on
synthetic datasets and a case study using the MIMIC medical database, revealing
its superior balanced classification accuracy on the target data compared to
existing methods, particularly in situations of conditional
distribution shift and no a priori distribution shift, which are not detected by
LS-based methods.
| no_new_dataset | 0.949856 |
2503.02595 | Zhaoxing Gan | Zhaoxing Gan, Mengtian Li, Ruhua Chen, Zhongxia Ji, Sichen Guo,
Huanling Hu, Guangnan Ye, Zuo Hu | StageDesigner: Artistic Stage Generation for Scenography via Theater
Scripts | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this work, we introduce StageDesigner, the first comprehensive framework
for artistic stage generation using large language models combined with
layout-controlled diffusion models. Given the professional requirements of
stage scenography, StageDesigner simulates the workflows of seasoned artists to
generate immersive 3D stage scenes. Specifically, our approach is divided into
three primary modules: Script Analysis, which extracts thematic and spatial
cues from input scripts; Foreground Generation, which constructs and arranges
essential 3D objects; and Background Generation, which produces a harmonious
background aligned with the narrative atmosphere and maintains spatial
coherence by managing occlusions between foreground and background elements.
Furthermore, we introduce the StagePro-V1 dataset, a dedicated dataset with 276
unique stage scenes spanning different historical styles and annotated with
scripts, images, and detailed 3D layouts, specifically tailored for this task.
Finally, evaluations using both standard and newly proposed metrics, along with
extensive user studies, demonstrate the effectiveness of StageDesigner. Project
can be found at: https://deadsmither5.github.io/2025/01/03/StageDesigner/
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 13:17:50 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Gan",
"Zhaoxing",
""
],
[
"Li",
"Mengtian",
""
],
[
"Chen",
"Ruhua",
""
],
[
"Ji",
"Zhongxia",
""
],
[
"Guo",
"Sichen",
""
],
[
"Hu",
"Huanling",
""
],
[
"Ye",
"Guangnan",
""
],
[
"Hu",
"Zuo",
""
]
]
| TITLE: StageDesigner: Artistic Stage Generation for Scenography via Theater
Scripts
ABSTRACT: In this work, we introduce StageDesigner, the first comprehensive framework
for artistic stage generation using large language models combined with
layout-controlled diffusion models. Given the professional requirements of
stage scenography, StageDesigner simulates the workflows of seasoned artists to
generate immersive 3D stage scenes. Specifically, our approach is divided into
three primary modules: Script Analysis, which extracts thematic and spatial
cues from input scripts; Foreground Generation, which constructs and arranges
essential 3D objects; and Background Generation, which produces a harmonious
background aligned with the narrative atmosphere and maintains spatial
coherence by managing occlusions between foreground and background elements.
Furthermore, we introduce the StagePro-V1 dataset, a dedicated dataset with 276
unique stage scenes spanning different historical styles and annotated with
scripts, images, and detailed 3D layouts, specifically tailored for this task.
Finally, evaluations using both standard and newly proposed metrics, along with
extensive user studies, demonstrate the effectiveness of StageDesigner. Project
can be found at: https://deadsmither5.github.io/2025/01/03/StageDesigner/
| new_dataset | 0.955651 |
2503.02600 | Kailun Yang | Yizhou Huang, Fan Yang, Guoliang Zhu, Gen Li, Hao Shi, Yukun Zuo,
Wenrui Chen, Zhiyong Li, Kailun Yang | Resource-Efficient Affordance Grounding with Complementary Depth and
Semantic Prompts | The source code will be made publicly available at
https://github.com/DAWDSE/BiT-Align | null | null | null | cs.CV cs.RO eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Affordance refers to the functional properties that an agent perceives and
utilizes from its environment, and is key perceptual information required for
robots to perform actions. This information is rich and multimodal in nature.
Existing multimodal affordance methods face limitations in extracting useful
information, mainly due to simple structural designs, basic fusion methods, and
large model parameters, making it difficult to meet the performance
requirements for practical deployment. To address these issues, this paper
proposes the BiT-Align image-depth-text affordance mapping framework. The
framework includes a Bypass Prompt Module (BPM) and a Text Feature Guidance
(TFG) attention selection mechanism. BPM integrates the auxiliary modality
depth image directly as a prompt to the primary modality RGB image, embedding
it into the primary modality encoder without introducing additional encoders.
This reduces the model's parameter count and effectively improves functional
region localization accuracy. The TFG mechanism guides the selection and
enhancement of attention heads in the image encoder using textual features,
improving the understanding of affordance characteristics. Experimental results
demonstrate that the proposed method achieves significant performance
improvements on public AGD20K and HICO-IIF datasets. On the AGD20K dataset,
compared with the current state-of-the-art method, we achieve a 6.0%
improvement in the KLD metric, while reducing model parameters by 88.8%,
demonstrating practical application value. The source code will be made
publicly available at https://github.com/DAWDSE/BiT-Align.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 13:20:42 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Huang",
"Yizhou",
""
],
[
"Yang",
"Fan",
""
],
[
"Zhu",
"Guoliang",
""
],
[
"Li",
"Gen",
""
],
[
"Shi",
"Hao",
""
],
[
"Zuo",
"Yukun",
""
],
[
"Chen",
"Wenrui",
""
],
[
"Li",
"Zhiyong",
""
],
[
"Yang",
"Kailun",
""
]
]
| TITLE: Resource-Efficient Affordance Grounding with Complementary Depth and
Semantic Prompts
ABSTRACT: Affordance refers to the functional properties that an agent perceives and
utilizes from its environment, and is key perceptual information required for
robots to perform actions. This information is rich and multimodal in nature.
Existing multimodal affordance methods face limitations in extracting useful
information, mainly due to simple structural designs, basic fusion methods, and
large model parameters, making it difficult to meet the performance
requirements for practical deployment. To address these issues, this paper
proposes the BiT-Align image-depth-text affordance mapping framework. The
framework includes a Bypass Prompt Module (BPM) and a Text Feature Guidance
(TFG) attention selection mechanism. BPM integrates the auxiliary modality
depth image directly as a prompt to the primary modality RGB image, embedding
it into the primary modality encoder without introducing additional encoders.
This reduces the model's parameter count and effectively improves functional
region localization accuracy. The TFG mechanism guides the selection and
enhancement of attention heads in the image encoder using textual features,
improving the understanding of affordance characteristics. Experimental results
demonstrate that the proposed method achieves significant performance
improvements on public AGD20K and HICO-IIF datasets. On the AGD20K dataset,
compared with the current state-of-the-art method, we achieve a 6.0%
improvement in the KLD metric, while reducing model parameters by 88.8%,
demonstrating practical application values. The source code will be made
publicly available at https://github.com/DAWDSE/BiT-Align.
| no_new_dataset | 0.949153 |
2503.02609 | Tianyu Jia | Tianyu Jia, Zongxia Xie, Yanru Sun, Dilfira Kudrat, Qinghua Hu | Lightweight Channel-wise Dynamic Fusion Model: Non-stationary Time
Series Forecasting via Entropy Analysis | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-stationarity is an intrinsic property of real-world time series and plays
a crucial role in time series forecasting. Previous studies primarily adopt
instance normalization to attenuate the non-stationarity of original series for
better predictability. However, instance normalization that directly removes
the inherent non-stationarity can lead to three issues: (1) disrupting global
temporal dependencies, (2) ignoring channel-specific differences, and (3)
producing over-smoothed predictions. To address these issues, we theoretically
demonstrate that variance can be a valid and interpretable proxy for
quantifying non-stationarity of time series. Based on the analysis, we propose
a novel lightweight \textit{C}hannel-wise \textit{D}ynamic \textit{F}usion
\textit{M}odel (\textit{CDFM}), which selectively and dynamically recovers
intrinsic non-stationarity of the original series, while keeping the
predictability of normalized series. First, we design a Dual-Predictor Module,
which involves two branches: a Time Stationary Predictor for capturing stable
patterns and a Time Non-stationary Predictor for modeling global dynamics
patterns. Second, we propose a Fusion Weight Learner to dynamically
characterize the intrinsic non-stationary information across different samples
based on variance. Finally, we introduce a Channel Selector to selectively
recover non-stationary information from specific channels by evaluating their
non-stationarity, similarity, and distribution consistency, enabling the model
to capture relevant dynamic features and avoid overfitting. Comprehensive
experiments on seven time series datasets demonstrate the superiority and
generalization capabilities of CDFM.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 13:29:42 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Jia",
"Tianyu",
""
],
[
"Xie",
"Zongxia",
""
],
[
"Sun",
"Yanru",
""
],
[
"Kudrat",
"Dilfira",
""
],
[
"Hu",
"Qinghua",
""
]
]
| TITLE: Lightweight Channel-wise Dynamic Fusion Model: Non-stationary Time
Series Forecasting via Entropy Analysis
ABSTRACT: Non-stationarity is an intrinsic property of real-world time series and plays
a crucial role in time series forecasting. Previous studies primarily adopt
instance normalization to attenuate the non-stationarity of original series for
better predictability. However, instance normalization that directly removes
the inherent non-stationarity can lead to three issues: (1) disrupting global
temporal dependencies, (2) ignoring channel-specific differences, and (3)
producing over-smoothed predictions. To address these issues, we theoretically
demonstrate that variance can be a valid and interpretable proxy for
quantifying non-stationarity of time series. Based on the analysis, we propose
a novel lightweight \textit{C}hannel-wise \textit{D}ynamic \textit{F}usion
\textit{M}odel (\textit{CDFM}), which selectively and dynamically recovers
intrinsic non-stationarity of the original series, while keeping the
predictability of normalized series. First, we design a Dual-Predictor Module,
which involves two branches: a Time Stationary Predictor for capturing stable
patterns and a Time Non-stationary Predictor for modeling global dynamics
patterns. Second, we propose a Fusion Weight Learner to dynamically
characterize the intrinsic non-stationary information across different samples
based on variance. Finally, we introduce a Channel Selector to selectively
recover non-stationary information from specific channels by evaluating their
non-stationarity, similarity, and distribution consistency, enabling the model
to capture relevant dynamic features and avoid overfitting. Comprehensive
experiments on seven time series datasets demonstrate the superiority and
generalization capabilities of CDFM.
| no_new_dataset | 0.945901 |
2503.02614 | Yiyan Xu | Yiyan Xu, Jinghao Zhang, Alireza Salemi, Xinting Hu, Wenjie Wang, Fuli
Feng, Hamed Zamani, Xiangnan He, Tat-Seng Chua | Personalized Generation In Large Model Era: A Survey | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the era of large models, content generation is gradually shifting to
Personalized Generation (PGen), tailoring content to individual preferences and
needs. This paper presents the first comprehensive survey on PGen,
investigating existing research in this rapidly growing field. We conceptualize
PGen from a unified perspective, systematically formalizing its key components,
core objectives, and abstract workflows. Based on this unified perspective, we
propose a multi-level taxonomy, offering an in-depth review of technical
advancements, commonly used datasets, and evaluation metrics across multiple
modalities, personalized contexts, and tasks. Moreover, we envision the
potential applications of PGen and highlight open challenges and promising
directions for future exploration. By bridging PGen research across multiple
modalities, this survey serves as a valuable resource for fostering knowledge
sharing and interdisciplinary collaboration, ultimately contributing to a more
personalized digital landscape.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 13:34:19 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Xu",
"Yiyan",
""
],
[
"Zhang",
"Jinghao",
""
],
[
"Salemi",
"Alireza",
""
],
[
"Hu",
"Xinting",
""
],
[
"Wang",
"Wenjie",
""
],
[
"Feng",
"Fuli",
""
],
[
"Zamani",
"Hamed",
""
],
[
"He",
"Xiangnan",
""
],
[
"Chua",
"Tat-Seng",
""
]
]
| TITLE: Personalized Generation In Large Model Era: A Survey
ABSTRACT: In the era of large models, content generation is gradually shifting to
Personalized Generation (PGen), tailoring content to individual preferences and
needs. This paper presents the first comprehensive survey on PGen,
investigating existing research in this rapidly growing field. We conceptualize
PGen from a unified perspective, systematically formalizing its key components,
core objectives, and abstract workflows. Based on this unified perspective, we
propose a multi-level taxonomy, offering an in-depth review of technical
advancements, commonly used datasets, and evaluation metrics across multiple
modalities, personalized contexts, and tasks. Moreover, we envision the
potential applications of PGen and highlight open challenges and promising
directions for future exploration. By bridging PGen research across multiple
modalities, this survey serves as a valuable resource for fostering knowledge
sharing and interdisciplinary collaboration, ultimately contributing to a more
personalized digital landscape.
| no_new_dataset | 0.948632 |
2503.02616 | Zirun Guo | Zirun Guo, Tao Jin | Smoothing the Shift: Towards Stable Test-Time Adaptation under Complex
Multimodal Noises | Accepted at ICLR 2025 | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Test-Time Adaptation (TTA) aims to tackle distribution shifts using unlabeled
test data without access to the source data. In the context of multimodal data,
there are more complex noise patterns than unimodal data such as simultaneous
corruptions for multiple modalities and missing modalities. Besides, in
real-world applications, corruptions from different distribution shifts are
always mixed. Existing TTA methods always fail in such a multimodal scenario
because the abrupt distribution shifts will destroy the prior knowledge from
the source model, thus leading to performance degradation. To this end, we
reveal a new challenge named multimodal wild TTA. To address this challenging
problem, we propose two novel strategies: sample identification with
interquartile range Smoothing and unimodal assistance, and Mutual information
sharing (SuMi). SuMi smooths the adaptation process by interquartile range
which avoids the abrupt distribution shifts. Then, SuMi fully utilizes the
unimodal features to select low-entropy samples with rich multimodal
information for optimization. Furthermore, mutual information sharing is
introduced to align the information, reduce the discrepancies and enhance the
information utilization across different modalities. Extensive experiments on
two public datasets show the effectiveness and superiority over existing
methods under the complex noise patterns in multimodal data. Code is available
at https://github.com/zrguo/SuMi.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 13:36:16 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Guo",
"Zirun",
""
],
[
"Jin",
"Tao",
""
]
]
| TITLE: Smoothing the Shift: Towards Stable Test-Time Adaptation under Complex
Multimodal Noises
ABSTRACT: Test-Time Adaptation (TTA) aims to tackle distribution shifts using unlabeled
test data without access to the source data. In the context of multimodal data,
there are more complex noise patterns than unimodal data such as simultaneous
corruptions for multiple modalities and missing modalities. Besides, in
real-world applications, corruptions from different distribution shifts are
always mixed. Existing TTA methods always fail in such a multimodal scenario
because the abrupt distribution shifts will destroy the prior knowledge from
the source model, thus leading to performance degradation. To this end, we
reveal a new challenge named multimodal wild TTA. To address this challenging
problem, we propose two novel strategies: sample identification with
interquartile range Smoothing and unimodal assistance, and Mutual information
sharing (SuMi). SuMi smooths the adaptation process by interquartile range
which avoids the abrupt distribution shifts. Then, SuMi fully utilizes the
unimodal features to select low-entropy samples with rich multimodal
information for optimization. Furthermore, mutual information sharing is
introduced to align the information, reduce the discrepancies and enhance the
information utilization across different modalities. Extensive experiments on
two public datasets show the effectiveness and superiority over existing
methods under the complex noise patterns in multimodal data. Code is available
at https://github.com/zrguo/SuMi.
| no_new_dataset | 0.947235 |
2503.02618 | Michal Januszewski | Jan-Matthis Lueckmann, Alexander Immer, Alex Bo-Yuan Chen, Peter H.
Li, Mariela D. Petkova, Nirmala A. Iyer, Luuk Willem Hesselink, Aparna Dev,
Gudrun Ihrke, Woohyun Park, Alyson Petruncio, Aubrey Weigel, Wyatt Korff,
Florian Engert, Jeff W. Lichtman, Misha B. Ahrens, Micha{\l} Januszewski,
Viren Jain | ZAPBench: A Benchmark for Whole-Brain Activity Prediction in Zebrafish | null | null | null | null | q-bio.NC cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Data-driven benchmarks have led to significant progress in key scientific
modeling domains including weather and structural biology. Here, we introduce
the Zebrafish Activity Prediction Benchmark (ZAPBench) to measure progress on
the problem of predicting cellular-resolution neural activity throughout an
entire vertebrate brain. The benchmark is based on a novel dataset containing
4d light-sheet microscopy recordings of over 70,000 neurons in a larval
zebrafish brain, along with motion stabilized and voxel-level cell
segmentations of these data that facilitate development of a variety of
forecasting methods. Initial results from a selection of time series and
volumetric video modeling approaches achieve better performance than naive
baseline methods, but also show room for further improvement. The specific
brain used in the activity recording is also undergoing synaptic-level
anatomical mapping, which will enable future integration of detailed structural
information into forecasting methods.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 13:38:41 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lueckmann",
"Jan-Matthis",
""
],
[
"Immer",
"Alexander",
""
],
[
"Chen",
"Alex Bo-Yuan",
""
],
[
"Li",
"Peter H.",
""
],
[
"Petkova",
"Mariela D.",
""
],
[
"Iyer",
"Nirmala A.",
""
],
[
"Hesselink",
"Luuk Willem",
""
],
[
"Dev",
"Aparna",
""
],
[
"Ihrke",
"Gudrun",
""
],
[
"Park",
"Woohyun",
""
],
[
"Petruncio",
"Alyson",
""
],
[
"Weigel",
"Aubrey",
""
],
[
"Korff",
"Wyatt",
""
],
[
"Engert",
"Florian",
""
],
[
"Lichtman",
"Jeff W.",
""
],
[
"Ahrens",
"Misha B.",
""
],
[
"Januszewski",
"Michał",
""
],
[
"Jain",
"Viren",
""
]
]
| TITLE: ZAPBench: A Benchmark for Whole-Brain Activity Prediction in Zebrafish
ABSTRACT: Data-driven benchmarks have led to significant progress in key scientific
modeling domains including weather and structural biology. Here, we introduce
the Zebrafish Activity Prediction Benchmark (ZAPBench) to measure progress on
the problem of predicting cellular-resolution neural activity throughout an
entire vertebrate brain. The benchmark is based on a novel dataset containing
4d light-sheet microscopy recordings of over 70,000 neurons in a larval
zebrafish brain, along with motion stabilized and voxel-level cell
segmentations of these data that facilitate development of a variety of
forecasting methods. Initial results from a selection of time series and
volumetric video modeling approaches achieve better performance than naive
baseline methods, but also show room for further improvement. The specific
brain used in the activity recording is also undergoing synaptic-level
anatomical mapping, which will enable future integration of detailed structural
information into forecasting methods.
| new_dataset | 0.963848 |
2503.02619 | Xiaoyu Zheng | Xiaoyu Zheng, Xu Chen, Shaogang Gong, Xavier Griffin, and Greg
Slabaugh | XFMamba: Cross-Fusion Mamba for Multi-View Medical Image Classification | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Compared to single view medical image classification, using multiple views
can significantly enhance predictive accuracy as it can account for the
complementarity of each view while leveraging correlations between views.
Existing multi-view approaches typically employ separate convolutional or
transformer branches combined with simplistic feature fusion strategies.
However, these approaches inadvertently disregard essential cross-view
correlations, leading to suboptimal classification performance, and suffer from
challenges with limited receptive field (CNNs) or quadratic computational
complexity (transformers). Inspired by state space sequence models, we propose
XFMamba, a pure Mamba-based cross-fusion architecture to address the challenge
of multi-view medical image classification. XFMamba introduces a novel
two-stage fusion strategy, facilitating the learning of single-view features
and their cross-view disparity. This mechanism captures spatially long-range
dependencies in each view while enhancing seamless information transfer between
views. Results on three public datasets, MURA, CheXpert and DDSM, illustrate
the effectiveness of our approach across diverse multi-view medical image
classification tasks, showing that it outperforms existing convolution-based
and transformer-based multi-view methods. Code is available at
https://github.com/XZheng0427/XFMamba.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 13:38:58 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zheng",
"Xiaoyu",
""
],
[
"Chen",
"Xu",
""
],
[
"Gong",
"Shaogang",
""
],
[
"Griffin",
"Xavier",
""
],
[
"Slabaugh",
"Greg",
""
]
]
| TITLE: XFMamba: Cross-Fusion Mamba for Multi-View Medical Image Classification
ABSTRACT: Compared to single view medical image classification, using multiple views
can significantly enhance predictive accuracy as it can account for the
complementarity of each view while leveraging correlations between views.
Existing multi-view approaches typically employ separate convolutional or
transformer branches combined with simplistic feature fusion strategies.
However, these approaches inadvertently disregard essential cross-view
correlations, leading to suboptimal classification performance, and suffer from
challenges with limited receptive field (CNNs) or quadratic computational
complexity (transformers). Inspired by state space sequence models, we propose
XFMamba, a pure Mamba-based cross-fusion architecture to address the challenge
of multi-view medical image classification. XFMamba introduces a novel
two-stage fusion strategy, facilitating the learning of single-view features
and their cross-view disparity. This mechanism captures spatially long-range
dependencies in each view while enhancing seamless information transfer between
views. Results on three public datasets, MURA, CheXpert and DDSM, illustrate
the effectiveness of our approach across diverse multi-view medical image
classification tasks, showing that it outperforms existing convolution-based
and transformer-based multi-view methods. Code is available at
https://github.com/XZheng0427/XFMamba.
| no_new_dataset | 0.945851 |
2503.02628 | Wenxuan Liu | Wenxuan Liu, Zixuan Li, Long Bai, Yuxin Zuo, Daozhu Xu, Xiaolong Jin,
Jiafeng Guo, Xueqi Cheng | Towards Event Extraction with Massive Types: LLM-based Collaborative
Annotation and Partitioning Extraction | Work in progress | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing a general-purpose extraction system that can extract events with
massive types is a long-standing target in Event Extraction (EE). In doing so,
the challenge comes from two aspects: 1) The absence of an efficient and
effective annotation method. 2) The absence of a powerful extraction method that can
handle massive types. For the first challenge, we propose a collaborative
annotation method based on Large Language Models (LLMs). Through collaboration
among multiple LLMs, it first refines annotations of trigger words from distant
supervision and then carries out argument annotation. Next, a voting phase
consolidates the annotation preferences across different LLMs. Finally, we
create the EEMT dataset, the largest EE dataset to date, featuring over 200,000
samples, 3,465 event types, and 6,297 role types. For the second challenge, we
propose an LLM-based Partitioning EE method called LLM-PEE. To overcome the
limited context length of LLMs, LLM-PEE first recalls candidate event types and
then splits them into multiple partitions for LLMs to extract events. The
results in the supervised setting show that LLM-PEE outperforms the
state-of-the-art methods by 5.4 in event detection and 6.1 in argument
extraction. In the zero-shot setting, LLM-PEE achieves up to 12.9 improvement
compared to mainstream LLMs, demonstrating its strong generalization
capabilities.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 13:53:43 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Liu",
"Wenxuan",
""
],
[
"Li",
"Zixuan",
""
],
[
"Bai",
"Long",
""
],
[
"Zuo",
"Yuxin",
""
],
[
"Xu",
"Daozhu",
""
],
[
"Jin",
"Xiaolong",
""
],
[
"Guo",
"Jiafeng",
""
],
[
"Cheng",
"Xueqi",
""
]
]
| TITLE: Towards Event Extraction with Massive Types: LLM-based Collaborative
Annotation and Partitioning Extraction
ABSTRACT: Developing a general-purpose extraction system that can extract events with
massive types is a long-standing target in Event Extraction (EE). In doing so,
the challenge comes from two aspects: 1) The absence of an efficient and
effective annotation method. 2) The absence of a powerful extraction method that can
handle massive types. For the first challenge, we propose a collaborative
annotation method based on Large Language Models (LLMs). Through collaboration
among multiple LLMs, it first refines annotations of trigger words from distant
supervision and then carries out argument annotation. Next, a voting phase
consolidates the annotation preferences across different LLMs. Finally, we
create the EEMT dataset, the largest EE dataset to date, featuring over 200,000
samples, 3,465 event types, and 6,297 role types. For the second challenge, we
propose an LLM-based Partitioning EE method called LLM-PEE. To overcome the
limited context length of LLMs, LLM-PEE first recalls candidate event types and
then splits them into multiple partitions for LLMs to extract events. The
results in the supervised setting show that LLM-PEE outperforms the
state-of-the-art methods by 5.4 in event detection and 6.1 in argument
extraction. In the zero-shot setting, LLM-PEE achieves up to 12.9 improvement
compared to mainstream LLMs, demonstrating its strong generalization
capabilities.
| new_dataset | 0.963265 |
2503.02645 | Chungpa Lee | Chungpa Lee, Jongho Im, Joseph H.T. Kim | A Generalized Theory of Mixup for Structure-Preserving Synthetic Data | null | Proceedings of the 28th International Conference on Artificial
Intelligence and Statistics (AISTATS) 2025 | null | null | cs.LG stat.ML stat.OT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Mixup is a widely adopted data augmentation technique known for enhancing the
generalization of machine learning models by interpolating between data points.
Despite its success and popularity, limited attention has been given to
understanding the statistical properties of the synthetic data it generates. In
this paper, we delve into the theoretical underpinnings of mixup, specifically
its effects on the statistical structure of synthesized data. We demonstrate
that while mixup improves model performance, it can distort key statistical
properties such as variance, potentially leading to unintended consequences in
data synthesis. To address this, we propose a novel mixup method that
incorporates a generalized and flexible weighting scheme, better preserving the
original data's structure. Through theoretical developments, we provide
conditions under which our proposed method maintains the (co)variance and
distributional properties of the original dataset. Numerical experiments
confirm that the new approach not only preserves the statistical
characteristics of the original data but also sustains model performance across
repeated synthesis, alleviating concerns of model collapse identified in
previous research.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:28:50 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lee",
"Chungpa",
""
],
[
"Im",
"Jongho",
""
],
[
"Kim",
"Joseph H. T.",
""
]
]
| TITLE: A Generalized Theory of Mixup for Structure-Preserving Synthetic Data
ABSTRACT: Mixup is a widely adopted data augmentation technique known for enhancing the
generalization of machine learning models by interpolating between data points.
Despite its success and popularity, limited attention has been given to
understanding the statistical properties of the synthetic data it generates. In
this paper, we delve into the theoretical underpinnings of mixup, specifically
its effects on the statistical structure of synthesized data. We demonstrate
that while mixup improves model performance, it can distort key statistical
properties such as variance, potentially leading to unintended consequences in
data synthesis. To address this, we propose a novel mixup method that
incorporates a generalized and flexible weighting scheme, better preserving the
original data's structure. Through theoretical developments, we provide
conditions under which our proposed method maintains the (co)variance and
distributional properties of the original dataset. Numerical experiments
confirm that the new approach not only preserves the statistical
characteristics of the original data but also sustains model performance across
repeated synthesis, alleviating concerns of model collapse identified in
previous research.
| no_new_dataset | 0.949576 |
2503.02670 | Huiyuan Lai | Huiyuan Lai, Xiao Zhang, Malvina Nissim | Multidimensional Consistency Improves Reasoning in Language Models | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While Large language models (LLMs) have proved able to address some complex
reasoning tasks, we also know that they are highly sensitive to input
variation, which can lead to different solution paths and final answers. Answer
consistency across input variations can thus be taken as a sign of stronger
confidence. Leveraging this insight, we introduce a framework, {\em
Multidimensional Reasoning Consistency} where, focusing on math problems,
models are systematically pushed to diversify solution paths towards a final
answer, thereby testing them for answer consistency across multiple input
variations. We induce variations in (i) order of shots in prompt, (ii) problem
phrasing, and (iii) languages used. Extensive experiments on a large range of
open-source state-of-the-art LLMs of various sizes show that reasoning
consistency differs by variation dimension, and that by aggregating consistency
across dimensions, our framework consistently enhances mathematical reasoning
performance on both monolingual dataset GSM8K and multilingual dataset MGSM,
especially for smaller models.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 14:41:05 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lai",
"Huiyuan",
""
],
[
"Zhang",
"Xiao",
""
],
[
"Nissim",
"Malvina",
""
]
]
| TITLE: Multidimensional Consistency Improves Reasoning in Language Models
ABSTRACT: While Large language models (LLMs) have proved able to address some complex
reasoning tasks, we also know that they are highly sensitive to input
variation, which can lead to different solution paths and final answers. Answer
consistency across input variations can thus be taken as a sign of stronger
confidence. Leveraging this insight, we introduce a framework, {\em
Multidimensional Reasoning Consistency} where, focusing on math problems,
models are systematically pushed to diversify solution paths towards a final
answer, thereby testing them for answer consistency across multiple input
variations. We induce variations in (i) order of shots in prompt, (ii) problem
phrasing, and (iii) languages used. Extensive experiments on a large range of
open-source state-of-the-art LLMs of various sizes show that reasoning
consistency differs by variation dimension, and that by aggregating consistency
across dimensions, our framework consistently enhances mathematical reasoning
performance on both monolingual dataset GSM8K and multilingual dataset MGSM,
especially for smaller models.
| no_new_dataset | 0.945801 |
2503.02674 | Maddalena Amendola | Maddalena Amendola, Andrea Passarella, Raffaele Perego | Towards Robust Expert Finding in Community Question Answering Platforms | null | Advances in Information Retrieval, Springer Nature Switzerland,
2024, 152--168 | 10.1007/978-3-030-99739-7_30 | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | This paper introduces TUEF, a topic-oriented user-interaction model for fair
Expert Finding in Community Question Answering (CQA) platforms. The Expert
Finding task in CQA platforms involves identifying proficient users capable of
providing accurate answers to questions from the community. To this aim, TUEF
improves the robustness and credibility of the CQA platform through a more
precise Expert Finding component. The key idea of TUEF is to exploit diverse
types of information, specifically, content and social information, to identify
more precisely experts thus improving the robustness of the task. We assess
TUEF through reproducible experiments conducted on a large-scale dataset from
StackOverflow. The results consistently demonstrate that TUEF outperforms
state-of-the-art competitors while promoting transparent expert identification.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 14:46:01 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Amendola",
"Maddalena",
""
],
[
"Passarella",
"Andrea",
""
],
[
"Perego",
"Raffaele",
""
]
]
| TITLE: Towards Robust Expert Finding in Community Question Answering Platforms
ABSTRACT: This paper introduces TUEF, a topic-oriented user-interaction model for fair
Expert Finding in Community Question Answering (CQA) platforms. The Expert
Finding task in CQA platforms involves identifying proficient users capable of
providing accurate answers to questions from the community. To this aim, TUEF
improves the robustness and credibility of the CQA platform through a more
precise Expert Finding component. The key idea of TUEF is to exploit diverse
types of information, specifically, content and social information, to identify
more precisely experts thus improving the robustness of the task. We assess
TUEF through reproducible experiments conducted on a large-scale dataset from
StackOverflow. The results consistently demonstrate that TUEF outperforms
state-of-the-art competitors while promoting transparent expert identification.
| no_new_dataset | 0.95561 |
2503.02685 | Sovesh Mohapatra | Sovesh Mohapatra, Minhui Ouyang, Shufang Tan, Jianlin Guo, Lianglong
Sun, Yong He, Hao Huang | TReND: Transformer derived features and Regularized NMF for neonatal
functional network Delineation | 10 Pages, 5 figures | null | null | null | q-bio.NC cs.CV eess.SP q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Precise parcellation of functional networks (FNs) of early developing human
brain is the fundamental basis for identifying biomarker of developmental
disorders and understanding functional development. Resting-state fMRI
(rs-fMRI) enables in vivo exploration of functional changes, but adult FN
parcellations cannot be directly applied to the neonates due to incomplete
network maturation. No standardized neonatal functional atlas is currently
available. To solve this fundamental issue, we propose TReND, a novel and fully
automated self-supervised transformer-autoencoder framework that integrates
regularized nonnegative matrix factorization (RNMF) to unveil the FNs in
neonates. TReND effectively disentangles spatiotemporal features in voxel-wise
rs-fMRI data. The framework integrates confidence-adaptive masks into
transformer self-attention layers to mitigate noise influence. A
self-supervised decoder acts as a regulator to refine the encoder's latent
embeddings, which serve as reliable temporal features. For spatial coherence,
we incorporate brain surface-based geodesic distances as spatial encodings
along with functional connectivity from temporal features. The TReND clustering
approach processes these features under sparsity and smoothness constraints,
producing robust and biologically plausible parcellations. We extensively
validated our TReND framework on three different rs-fMRI datasets: simulated,
dHCP and HCP-YA against comparable traditional feature extraction and
clustering techniques. Our results demonstrated the superiority of the TReND
framework in the delineation of neonate FNs with significantly better spatial
contiguity and functional homogeneity. Collectively, we established TReND, a
novel and robust framework, for neonatal FN delineation. TReND-derived neonatal
FNs could serve as a neonatal functional atlas for perinatal populations in
health and disease.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 14:57:59 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Mohapatra",
"Sovesh",
""
],
[
"Ouyang",
"Minhui",
""
],
[
"Tan",
"Shufang",
""
],
[
"Guo",
"Jianlin",
""
],
[
"Sun",
"Lianglong",
""
],
[
"He",
"Yong",
""
],
[
"Huang",
"Hao",
""
]
]
| TITLE: TReND: Transformer derived features and Regularized NMF for neonatal
functional network Delineation
ABSTRACT: Precise parcellation of functional networks (FNs) of early developing human
brain is the fundamental basis for identifying biomarker of developmental
disorders and understanding functional development. Resting-state fMRI
(rs-fMRI) enables in vivo exploration of functional changes, but adult FN
parcellations cannot be directly applied to the neonates due to incomplete
network maturation. No standardized neonatal functional atlas is currently
available. To solve this fundamental issue, we propose TReND, a novel and fully
automated self-supervised transformer-autoencoder framework that integrates
regularized nonnegative matrix factorization (RNMF) to unveil the FNs in
neonates. TReND effectively disentangles spatiotemporal features in voxel-wise
rs-fMRI data. The framework integrates confidence-adaptive masks into
transformer self-attention layers to mitigate noise influence. A
self-supervised decoder acts as a regulator to refine the encoder's latent
embeddings, which serve as reliable temporal features. For spatial coherence,
we incorporate brain surface-based geodesic distances as spatial encodings
along with functional connectivity from temporal features. The TReND clustering
approach processes these features under sparsity and smoothness constraints,
producing robust and biologically plausible parcellations. We extensively
validated our TReND framework on three different rs-fMRI datasets: simulated,
dHCP and HCP-YA against comparable traditional feature extraction and
clustering techniques. Our results demonstrated the superiority of the TReND
framework in the delineation of neonate FNs with significantly better spatial
contiguity and functional homogeneity. Collectively, we established TReND, a
novel and robust framework, for neonatal FN delineation. TReND-derived neonatal
FNs could serve as a neonatal functional atlas for perinatal populations in
health and disease.
| no_new_dataset | 0.94743 |
2503.02687 | Miao Zhang | Miao Zhang, Sherif Abdulatif, Benedikt Loesch, Marco Altmann and Bin
Yang | Class-Aware PillarMix: Can Mixed Sample Data Augmentation Enhance 3D
Object Detection with Radar Point Clouds? | 8 pages, 6 figures, 4 tables, submitted to 2025 IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS 2025) | null | null | null | cs.CV cs.AI cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the significant effort required for data collection and annotation in
3D perception tasks, mixed sample data augmentation (MSDA) has been widely
studied to generate diverse training samples by mixing existing data. Recently,
many MSDA techniques have been developed for point clouds, but they mainly
target LiDAR data, leaving their application to radar point clouds largely
unexplored. In this paper, we examine the feasibility of applying existing MSDA
methods to radar point clouds and identify several challenges in adapting these
techniques. These obstacles stem from the radar's irregular angular
distribution, deviations from a single-sensor polar layout in multi-radar
setups, and point sparsity. To address these issues, we propose Class-Aware
PillarMix (CAPMix), a novel MSDA approach that applies MixUp at the pillar
level in 3D point clouds, guided by class labels. Unlike methods that rely a
single mix ratio to the entire sample, CAPMix assigns an independent ratio to
each pillar, boosting sample diversity. To account for the density of different
classes, we use class-specific distributions: for dense objects (e.g., large
vehicles), we skew ratios to favor points from another sample, while for sparse
objects (e.g., pedestrians), we sample more points from the original. This
class-aware mixing retains critical details and enriches each sample with new
information, ultimately generating more diverse training data. Experimental
results demonstrate that our method not only significantly boosts performance
but also outperforms existing MSDA approaches across two datasets (Bosch Street
and K-Radar). We believe that this straightforward yet effective approach will
spark further investigation into MSDA techniques for radar data.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 15:02:07 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhang",
"Miao",
""
],
[
"Abdulatif",
"Sherif",
""
],
[
"Loesch",
"Benedikt",
""
],
[
"Altmann",
"Marco",
""
],
[
"Yang",
"Bin",
""
]
]
| TITLE: Class-Aware PillarMix: Can Mixed Sample Data Augmentation Enhance 3D
Object Detection with Radar Point Clouds?
ABSTRACT: Due to the significant effort required for data collection and annotation in
3D perception tasks, mixed sample data augmentation (MSDA) has been widely
studied to generate diverse training samples by mixing existing data. Recently,
many MSDA techniques have been developed for point clouds, but they mainly
target LiDAR data, leaving their application to radar point clouds largely
unexplored. In this paper, we examine the feasibility of applying existing MSDA
methods to radar point clouds and identify several challenges in adapting these
techniques. These obstacles stem from the radar's irregular angular
distribution, deviations from a single-sensor polar layout in multi-radar
setups, and point sparsity. To address these issues, we propose Class-Aware
PillarMix (CAPMix), a novel MSDA approach that applies MixUp at the pillar
level in 3D point clouds, guided by class labels. Unlike methods that rely a
single mix ratio to the entire sample, CAPMix assigns an independent ratio to
each pillar, boosting sample diversity. To account for the density of different
classes, we use class-specific distributions: for dense objects (e.g., large
vehicles), we skew ratios to favor points from another sample, while for sparse
objects (e.g., pedestrians), we sample more points from the original. This
class-aware mixing retains critical details and enriches each sample with new
information, ultimately generating more diverse training data. Experimental
results demonstrate that our method not only significantly boosts performance
but also outperforms existing MSDA approaches across two datasets (Bosch Street
and K-Radar). We believe that this straightforward yet effective approach will
spark further investigation into MSDA techniques for radar data.
| no_new_dataset | 0.952397 |
2503.02690 | James Warner | Tristan A. Shah, Michael C. Stanley, James E. Warner | Generative Modeling of Microweather Wind Velocities for Urban Air
Mobility | 17 pages, 13 figures, published in 2025 IEEE Aerospace Conference
proceedings | null | null | null | cs.CE cs.LG physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by the pursuit of safe, reliable, and weather-tolerant urban air
mobility (UAM) solutions, this work proposes a generative modeling approach for
characterizing microweather wind velocities. Microweather, or the weather
conditions in highly localized areas, is particularly complex in urban
environments owing to the chaotic and turbulent nature of wind flows.
Furthermore, traditional means of assessing local wind fields are not generally
viable solutions for UAM applications: 1) field measurements that would rely on
permanent wind profiling systems in operational air space are not practical, 2)
physics-based models that simulate fluid dynamics at a sufficiently high
resolution are not computationally tractable, and 3) data-driven modeling
approaches that are largely deterministic ignore the inherent variability in
turbulent flows that dictates UAM reliability. Thus, advancements in predictive
capabilities are needed to help mitigate the unique operational safety risks
that microweather winds pose for smaller, lighter weight UAM aircraft.
This work aims to model microweather wind velocities in a manner that is
computationally-efficient, captures random variability, and would only require
a temporary, rather than permanent, field measurement campaign. Inspired by
recent breakthroughs in conditional generative AI such as text-to-image
generation, the proposed approach learns a probabilistic macro-to-microweather
mapping between regional weather forecasts and measured local wind velocities
using generative modeling (denoising diffusion probabilistic models, flow
matching, and Gaussian mixture models). A simple proof of concept was
implemented using a dataset comprised of local (micro) measurements from a
Sonic Detection and Ranging (SoDAR) wind profiler along with (macro) forecast
data from a nearby weather station over the same time period.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 15:03:15 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Shah",
"Tristan A.",
""
],
[
"Stanley",
"Michael C.",
""
],
[
"Warner",
"James E.",
""
]
]
| TITLE: Generative Modeling of Microweather Wind Velocities for Urban Air
Mobility
ABSTRACT: Motivated by the pursuit of safe, reliable, and weather-tolerant urban air
mobility (UAM) solutions, this work proposes a generative modeling approach for
characterizing microweather wind velocities. Microweather, or the weather
conditions in highly localized areas, is particularly complex in urban
environments owing to the chaotic and turbulent nature of wind flows.
Furthermore, traditional means of assessing local wind fields are not generally
viable solutions for UAM applications: 1) field measurements that would rely on
permanent wind profiling systems in operational air space are not practical, 2)
physics-based models that simulate fluid dynamics at a sufficiently high
resolution are not computationally tractable, and 3) data-driven modeling
approaches that are largely deterministic ignore the inherent variability in
turbulent flows that dictates UAM reliability. Thus, advancements in predictive
capabilities are needed to help mitigate the unique operational safety risks
that microweather winds pose for smaller, lighter weight UAM aircraft.
This work aims to model microweather wind velocities in a manner that is
computationally-efficient, captures random variability, and would only require
a temporary, rather than permanent, field measurement campaign. Inspired by
recent breakthroughs in conditional generative AI such as text-to-image
generation, the proposed approach learns a probabilistic macro-to-microweather
mapping between regional weather forecasts and measured local wind velocities
using generative modeling (denoising diffusion probabilistic models, flow
matching, and Gaussian mixture models). A simple proof of concept was
implemented using a dataset comprised of local (micro) measurements from a
Sonic Detection and Ranging (SoDAR) wind profiler along with (macro) forecast
data from a nearby weather station over the same time period.
| no_new_dataset | 0.941223 |