id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt
string | string | string | string | string | string | string | string | string | string | string | list | timestamp[s] | sequence | string
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2504.02222 | Liying Xu | Liying Xu and Hongliang He and Wei Han and Hanbin Huang and Siwei Feng
and Guohong Fu | APSeg: Auto-Prompt Model with Acquired and Injected Knowledge for
Nuclear Instance Segmentation and Classification | 10 pages, 3 figures | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nuclear instance segmentation and classification provide critical
quantitative foundations for digital pathology diagnosis. With the advent of
the foundational Segment Anything Model (SAM), the accuracy and efficiency of
nuclear segmentation have improved significantly. However, SAM imposes a strong
reliance on precise prompts, and its class-agnostic design renders its
classification results entirely dependent on the provided prompts. Therefore,
we focus on generating prompts with more accurate localization and
classification and propose \textbf{APSeg}, \textbf{A}uto-\textbf{P}rompt model
with acquired and injected knowledge for nuclear instance \textbf{Seg}mentation
and classification. APSeg incorporates two knowledge-aware modules: (1)
Distribution-Guided Proposal Offset Module (\textbf{DG-POM}), which learns
distribution knowledge through density map guidance, and (2) Category Knowledge
Semantic Injection Module (\textbf{CK-SIM}), which injects morphological
knowledge derived from category descriptions. We conducted extensive
experiments on the PanNuke and CoNSeP datasets, demonstrating the effectiveness
of our approach. The code will be released upon acceptance.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 02:28:51 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Xu",
"Liying",
""
],
[
"He",
"Hongliang",
""
],
[
"Han",
"Wei",
""
],
[
"Huang",
"Hanbin",
""
],
[
"Feng",
"Siwei",
""
],
[
"Fu",
"Guohong",
""
]
] | TITLE: APSeg: Auto-Prompt Model with Acquired and Injected Knowledge for
Nuclear Instance Segmentation and Classification
ABSTRACT: Nuclear instance segmentation and classification provide critical
quantitative foundations for digital pathology diagnosis. With the advent of
the foundational Segment Anything Model (SAM), the accuracy and efficiency of
nuclear segmentation have improved significantly. However, SAM imposes a strong
reliance on precise prompts, and its class-agnostic design renders its
classification results entirely dependent on the provided prompts. Therefore,
we focus on generating prompts with more accurate localization and
classification and propose \textbf{APSeg}, \textbf{A}uto-\textbf{P}rompt model
with acquired and injected knowledge for nuclear instance \textbf{Seg}mentation
and classification. APSeg incorporates two knowledge-aware modules: (1)
Distribution-Guided Proposal Offset Module (\textbf{DG-POM}), which learns
distribution knowledge through density map guidance, and (2) Category Knowledge
Semantic Injection Module (\textbf{CK-SIM}), which injects morphological
knowledge derived from category descriptions. We conducted extensive
experiments on the PanNuke and CoNSeP datasets, demonstrating the effectiveness
of our approach. The code will be released upon acceptance.
|
2504.02244 | Iroh (Xu) Cao | Xu Cao, Pranav Virupaksha, Wenqi Jia, Bolin Lai, Fiona Ryan, Sangmin
Lee, James M. Rehg | SocialGesture: Delving into Multi-person Gesture Understanding | CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Previous research in human gesture recognition has largely overlooked
multi-person interactions, which are crucial for understanding the social
context of naturally occurring gestures. This limitation in existing datasets
presents a significant challenge in aligning human gestures with other
modalities like language and speech. To address this issue, we introduce
SocialGesture, the first large-scale dataset specifically designed for
multi-person gesture analysis. SocialGesture features a diverse range of
natural scenarios and supports multiple gesture analysis tasks, including
video-based recognition and temporal localization, providing a valuable
resource for advancing the study of gesture during complex social interactions.
Furthermore, we propose a novel visual question answering (VQA) task to
benchmark vision language models' (VLMs) performance on social gesture
understanding. Our findings highlight several limitations of current gesture
recognition models, offering insights into future directions for improvement in
this field. SocialGesture is available at
huggingface.co/datasets/IrohXu/SocialGesture.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 03:21:06 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Cao",
"Xu",
""
],
[
"Virupaksha",
"Pranav",
""
],
[
"Jia",
"Wenqi",
""
],
[
"Lai",
"Bolin",
""
],
[
"Ryan",
"Fiona",
""
],
[
"Lee",
"Sangmin",
""
],
[
"Rehg",
"James M.",
""
]
] | TITLE: SocialGesture: Delving into Multi-person Gesture Understanding
ABSTRACT: Previous research in human gesture recognition has largely overlooked
multi-person interactions, which are crucial for understanding the social
context of naturally occurring gestures. This limitation in existing datasets
presents a significant challenge in aligning human gestures with other
modalities like language and speech. To address this issue, we introduce
SocialGesture, the first large-scale dataset specifically designed for
multi-person gesture analysis. SocialGesture features a diverse range of
natural scenarios and supports multiple gesture analysis tasks, including
video-based recognition and temporal localization, providing a valuable
resource for advancing the study of gesture during complex social interactions.
Furthermore, we propose a novel visual question answering (VQA) task to
benchmark vision language models' (VLMs) performance on social gesture
understanding. Our findings highlight several limitations of current gesture
recognition models, offering insights into future directions for improvement in
this field. SocialGesture is available at
huggingface.co/datasets/IrohXu/SocialGesture.
|
2504.02245 | Xiaoyu Li | Junxi Man, Yumin Lin, Xiaoyu Li | Traffic Flow Data Completion and Anomaly Diagnosis via Sparse and
Low-Rank Tensor Optimization | null | null | null | null | math.OC cs.NA math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatiotemporal traffic time series, such as traffic speed data, collected
from sensing systems are often incomplete, with considerable corruption and
large amounts of missing values. A vast amount of data conceals implicit data
structures, which poses significant challenges for data recovery issues, such
as mining the potential spatio-temporal correlations of data and identifying
abnormal data. In this paper, we propose a Tucker decomposition-based sparse
low-rank high-order tensor optimization model (TSLTO) for data imputation and
anomaly diagnosis. We decompose the traffic tensor data into low-rank and
sparse tensors, and establish a sparse low-rank high-order tensor optimization
model based on Tucker decomposition. By utilizing tools of non-smooth analysis
for tensor functions, we explore the optimality conditions of the proposed
tensor optimization model and design an ADMM optimization algorithm for solving
the model. Finally, numerical experiments are conducted on both synthetic data
and a real-world dataset: the urban traffic speed dataset of Guangzhou.
Numerical comparisons with several representative existing algorithms
demonstrate that our proposed approach achieves higher accuracy and efficiency
in traffic flow data recovery and anomaly diagnosis tasks.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 03:21:30 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Man",
"Junxi",
""
],
[
"Lin",
"Yumin",
""
],
[
"Li",
"Xiaoyu",
""
]
] | TITLE: Traffic Flow Data Completion and Anomaly Diagnosis via Sparse and
Low-Rank Tensor Optimization
ABSTRACT: Spatiotemporal traffic time series, such as traffic speed data, collected
from sensing systems are often incomplete, with considerable corruption and
large amounts of missing values. A vast amount of data conceals implicit data
structures, which poses significant challenges for data recovery issues, such
as mining the potential spatio-temporal correlations of data and identifying
abnormal data. In this paper, we propose a Tucker decomposition-based sparse
low-rank high-order tensor optimization model (TSLTO) for data imputation and
anomaly diagnosis. We decompose the traffic tensor data into low-rank and
sparse tensors, and establish a sparse low-rank high-order tensor optimization
model based on Tucker decomposition. By utilizing tools of non-smooth analysis
for tensor functions, we explore the optimality conditions of the proposed
tensor optimization model and design an ADMM optimization algorithm for solving
the model. Finally, numerical experiments are conducted on both synthetic data
and a real-world dataset: the urban traffic speed dataset of Guangzhou.
Numerical comparisons with several representative existing algorithms
demonstrate that our proposed approach achieves higher accuracy and efficiency
in traffic flow data recovery and anomaly diagnosis tasks.
|
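The low-rank-plus-sparse decomposition at the core of the TSLTO abstract above has a compact 2-D analogue. The sketch below is illustrative only, assuming a matrix slice of the traffic tensor and generic ADMM hyperparameters; it is not the paper's Tucker-based algorithm.

```python
# Minimal sketch (not the paper's TSLTO code): a 2-D robust-PCA analogue of
# the low-rank-plus-sparse split described above, solved with a simple ADMM
# loop. Variable names and hyperparameters here are illustrative assumptions.
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(M, tau):
    """Elementwise soft thresholding: proximal operator of the L1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def robust_pca(X, lam=None, mu=1.0, n_iter=100):
    """Split X into low-rank L (traffic structure) + sparse S (anomalies)."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(X.shape))
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(n_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)    # low-rank update
        S = soft(X - L + Y / mu, lam / mu)   # sparse (anomaly) update
        Y = Y + mu * (X - L - S)             # dual ascent on X = L + S
    return L, S

speed = np.random.rand(144, 30)  # e.g., 144 time slots x 30 road segments
L, S = robust_pca(speed)         # L imputes structure, S flags anomalies
```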
2504.02248 | Songran Bai | Songran Bai, Xiaolong Zheng, Daniel Dajun Zeng | CRC-SGAD: Conformal Risk Control for Supervised Graph Anomaly Detection | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Anomaly Detection (GAD) is critical in security-sensitive domains, yet
faces reliability challenges: miscalibrated confidence estimation
(underconfidence in normal nodes, overconfidence in anomalies), adversarial
vulnerability of the derived confidence scores under structural perturbations, and
limited efficacy of conventional calibration methods for sparse anomaly
patterns. Thus we propose CRC-SGAD, a framework integrating statistical risk
control into GAD via two innovations: (1) A Dual-Threshold Conformal Risk
Control mechanism that provides theoretically guaranteed bounds for both False
Negative Rate (FNR) and False Positive Rate (FPR) by providing prediction
sets; (2) A Subgraph-aware Spectral Graph Neural Calibrator (SSGNC) that
optimizes node representations through adaptive spectral filtering while
reducing the size of prediction sets via hybrid loss optimization. Experiments
on four datasets and five GAD models demonstrate statistically significant
improvements in FNR and FPR control and prediction set size. CRC-SGAD
establishes a paradigm for statistically rigorous anomaly detection in
graph-structured security applications.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 03:27:49 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Bai",
"Songran",
""
],
[
"Zheng",
"Xiaolong",
""
],
[
"Zeng",
"Daniel Dajun",
""
]
] | TITLE: CRC-SGAD: Conformal Risk Control for Supervised Graph Anomaly Detection
ABSTRACT: Graph Anomaly Detection (GAD) is critical in security-sensitive domains, yet
faces reliability challenges: miscalibrated confidence estimation
(underconfidence in normal nodes, overconfidence in anomalies), adversarial
vulnerability of the derived confidence scores under structural perturbations, and
limited efficacy of conventional calibration methods for sparse anomaly
patterns. Thus we propose CRC-SGAD, a framework integrating statistical risk
control into GAD via two innovations: (1) A Dual-Threshold Conformal Risk
Control mechanism that provides theoretically guaranteed bounds for both False
Negative Rate (FNR) and False Positive Rate (FPR) by providing prediction
sets; (2) A Subgraph-aware Spectral Graph Neural Calibrator (SSGNC) that
optimizes node representations through adaptive spectral filtering while
reducing the size of prediction sets via hybrid loss optimization. Experiments
on four datasets and five GAD models demonstrate statistically significant
improvements in FNR and FPR control and prediction set size. CRC-SGAD
establishes a paradigm for statistically rigorous anomaly detection in
graph-structured security applications.
|
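The dual-threshold prediction-set idea in the CRC-SGAD abstract can be illustrated with split-conformal calibration. The sketch below is a hedged approximation, assuming held-out calibration scores and illustrative quantile choices rather than the paper's exact procedure.

```python
# Hedged sketch of a dual-threshold conformal set, in the spirit of the
# CRC-SGAD abstract (not its actual algorithm): calibrate one threshold so
# the "anomaly" label is included often enough to bound FNR, and one for
# "normal" to bound FPR.
import numpy as np

def calibrate(scores, labels, alpha_fnr=0.1, alpha_fpr=0.1):
    """scores: P(anomaly) on a held-out calibration set; labels: 0/1."""
    anom, norm = scores[labels == 1], scores[labels == 0]
    # Include "anomaly" whenever score >= t_anom; pick t_anom so at most
    # ~alpha_fnr of true anomalies fall below it (bounding FNR).
    t_anom = np.quantile(anom, alpha_fnr)
    # Include "normal" whenever score <= t_norm, bounding FPR analogously.
    t_norm = np.quantile(norm, 1.0 - alpha_fpr)
    return t_anom, t_norm

def prediction_set(score, t_anom, t_norm):
    labels = []
    if score >= t_anom: labels.append("anomaly")
    if score <= t_norm: labels.append("normal")
    return labels or ["normal", "anomaly"]  # never return an empty set
```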
2504.02264 | Wenzhuo Liu | Wenzhuo Liu, Wenshuo Wang, Yicheng Qiao, Qiannan Guo, Jiayin Zhu,
Pengfei Li, Zilong Chen, Huiming Yang, Zhiwei Li, Lening Wang, Tiao Tan,
Huaping Liu | MMTL-UniAD: A Unified Framework for Multimodal and Multi-Task Learning
in Assistive Driving Perception | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Advanced driver assistance systems require a comprehensive understanding of
the driver's mental/physical state and traffic context but existing works often
neglect the potential benefits of joint learning between these tasks. This
paper proposes MMTL-UniAD, a unified multi-modal multi-task learning framework
that simultaneously recognizes driver behavior (e.g., looking around, talking),
driver emotion (e.g., anxiety, happiness), vehicle behavior (e.g., parking,
turning), and traffic context (e.g., traffic jam, smooth traffic). A key
challenge is avoiding negative transfer between tasks, which can impair
learning performance. To address this, we introduce two key components into the
framework: one is the multi-axis region attention network to extract global
context-sensitive features, and the other is the dual-branch multimodal
embedding to learn multimodal embeddings from both task-shared and
task-specific features. The former uses a multi-attention mechanism to extract
task-relevant features, mitigating negative transfer caused by task-unrelated
features. The latter employs a dual-branch structure to adaptively adjust
task-shared and task-specific parameters, enhancing cross-task knowledge
transfer while reducing task conflicts. We assess MMTL-UniAD on the AIDE
dataset, using a series of ablation studies, and show that it outperforms
state-of-the-art methods across all four tasks. The code is available at
https://github.com/Wenzhuo-Liu/MMTL-UniAD.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 04:23:27 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liu",
"Wenzhuo",
""
],
[
"Wang",
"Wenshuo",
""
],
[
"Qiao",
"Yicheng",
""
],
[
"Guo",
"Qiannan",
""
],
[
"Zhu",
"Jiayin",
""
],
[
"Li",
"Pengfei",
""
],
[
"Chen",
"Zilong",
""
],
[
"Yang",
"Huiming",
""
],
[
"Li",
"Zhiwei",
""
],
[
"Wang",
"Lening",
""
],
[
"Tan",
"Tiao",
""
],
[
"Liu",
"Huaping",
""
]
] | TITLE: MMTL-UniAD: A Unified Framework for Multimodal and Multi-Task Learning
in Assistive Driving Perception
ABSTRACT: Advanced driver assistance systems require a comprehensive understanding of
the driver's mental/physical state and traffic context but existing works often
neglect the potential benefits of joint learning between these tasks. This
paper proposes MMTL-UniAD, a unified multi-modal multi-task learning framework
that simultaneously recognizes driver behavior (e.g., looking around, talking),
driver emotion (e.g., anxiety, happiness), vehicle behavior (e.g., parking,
turning), and traffic context (e.g., traffic jam, smooth traffic). A key
challenge is avoiding negative transfer between tasks, which can impair
learning performance. To address this, we introduce two key components into the
framework: one is the multi-axis region attention network to extract global
context-sensitive features, and the other is the dual-branch multimodal
embedding to learn multimodal embeddings from both task-shared and
task-specific features. The former uses a multi-attention mechanism to extract
task-relevant features, mitigating negative transfer caused by task-unrelated
features. The latter employs a dual-branch structure to adaptively adjust
task-shared and task-specific parameters, enhancing cross-task knowledge
transfer while reducing task conflicts. We assess MMTL-UniAD on the AIDE
dataset, using a series of ablation studies, and show that it outperforms
state-of-the-art methods across all four tasks. The code is available at
https://github.com/Wenzhuo-Liu/MMTL-UniAD.
|
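The dual-branch task-shared/task-specific embedding described in the MMTL-UniAD abstract can be sketched as follows; the dimensions, the fusion-by-concatenation choice, and the task/class counts are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch of a dual-branch multimodal embedding: one branch learns
# task-shared features, one per-task branch learns task-specific features,
# and each task head sees their concatenation.
import torch
import torch.nn as nn

class DualBranchEmbedding(nn.Module):
    def __init__(self, in_dim, emb_dim, tasks):
        super().__init__()
        self.shared = nn.Linear(in_dim, emb_dim)              # task-shared branch
        self.specific = nn.ModuleDict(
            {t: nn.Linear(in_dim, emb_dim) for t in tasks})   # task-specific branches
        self.heads = nn.ModuleDict(
            {t: nn.Linear(2 * emb_dim, n) for t, n in tasks.items()})

    def forward(self, x):
        s = torch.relu(self.shared(x))
        return {t: self.heads[t](
                    torch.cat([s, torch.relu(self.specific[t](x))], dim=-1))
                for t in self.specific}

tasks = {"driver_behavior": 5, "driver_emotion": 4,
         "vehicle_behavior": 6, "traffic_context": 3}
model = DualBranchEmbedding(128, 64, tasks)
print({t: v.shape for t, v in model(torch.randn(2, 128)).items()})
```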
2504.02268 | Waris Gill | Waris Gill (1 and 2), Justin Cechmanek (1), Tyler Hutcherson (1),
Srijith Rajamohan (1), Jen Agarwal (1), Muhammad Ali Gulzar (2), Manvinder
Singh (1), Benoit Dion ((1) Redis, (2) Virginia Tech) | Advancing Semantic Caching for LLMs with Domain-Specific Embeddings and
Synthetic Data | Initial study on embedding fine-tuning for semantic cache. It also
explores synthetic data. Total pages are 12, including references | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | This report investigates enhancing semantic caching effectiveness by
employing specialized, fine-tuned embedding models. Semantic caching relies on
embedding similarity rather than exact key matching, presenting unique
challenges in balancing precision, query latency, and computational efficiency.
We propose leveraging smaller, domain-specific embedding models, fine-tuned
with targeted real-world and synthetically generated datasets. Our empirical
evaluations demonstrate that compact embedding models fine-tuned for just one
epoch on specialized datasets significantly surpass both state-of-the-art
open-source and proprietary alternatives in precision and recall. Moreover, we
introduce a novel synthetic data generation pipeline for the semantic cache
that mitigates the challenge of limited domain-specific annotated data, further
boosting embedding performance. Our approach effectively balances computational
overhead and accuracy, establishing a viable and efficient strategy for
practical semantic caching implementations.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 04:27:02 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Gill",
"Waris",
"",
"1 and 2"
],
[
"Cechmanek",
"Justin",
"",
"Redis"
],
[
"Hutcherson",
"Tyler",
"",
"Redis"
],
[
"Rajamohan",
"Srijith",
"",
"Redis"
],
[
"Agarwal",
"Jen",
"",
"Redis"
],
[
"Gulzar",
"Muhammad Ali",
"",
"Virginia Tech"
],
[
"Singh",
"Manvinder",
"",
"Redis"
],
[
"Dion",
"Benoit",
""
]
] | TITLE: Advancing Semantic Caching for LLMs with Domain-Specific Embeddings and
Synthetic Data
ABSTRACT: This report investigates enhancing semantic caching effectiveness by
employing specialized, fine-tuned embedding models. Semantic caching relies on
embedding similarity rather than exact key matching, presenting unique
challenges in balancing precision, query latency, and computational efficiency.
We propose leveraging smaller, domain-specific embedding models, fine-tuned
with targeted real-world and synthetically generated datasets. Our empirical
evaluations demonstrate that compact embedding models fine-tuned for just one
epoch on specialized datasets significantly surpass both state-of-the-art
open-source and proprietary alternatives in precision and recall. Moreover, we
introduce a novel synthetic data generation pipeline for the semantic cache
that mitigates the challenge of limited domain-specific annotated data, further
boosting embedding performance. Our approach effectively balances computational
overhead and accuracy, establishing a viable and efficient strategy for
practical semantic caching implementations.
|
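The semantic-cache mechanism this report builds on, embedding similarity rather than exact key matching, reduces to a nearest-neighbor lookup with a similarity threshold. A minimal sketch, assuming a caller-supplied embedding function returning unit-norm vectors and an illustrative threshold:

```python
# Minimal semantic-cache sketch: embed the query and return a cached answer
# when cosine similarity clears a threshold. The embedding function and the
# threshold value are placeholders, not the report's fine-tuned models.
import numpy as np

class SemanticCache:
    def __init__(self, embed, threshold=0.85):
        self.embed = embed          # any text -> unit-norm vector function
        self.threshold = threshold
        self.keys, self.values = [], []

    def get(self, query):
        if not self.keys:
            return None
        q = self.embed(query)
        sims = np.array(self.keys) @ q     # cosine sim for unit vectors
        best = int(np.argmax(sims))
        return self.values[best] if sims[best] >= self.threshold else None

    def put(self, query, response):
        self.keys.append(self.embed(query))
        self.values.append(response)
```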
2504.02270 | Samuel Sze | Samuel Sze and Daniele De Martini and Lars Kunze | MinkOcc: Towards real-time label-efficient semantic occupancy prediction | 8 pages | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing 3D semantic occupancy prediction models often relies on dense 3D
annotations for supervised learning, a process that is both labor and
resource-intensive, underscoring the need for label-efficient or even
label-free approaches. To address this, we introduce MinkOcc, a multi-modal 3D
semantic occupancy prediction framework for cameras and LiDARs that proposes a
two-step semi-supervised training procedure. Here, a small dataset of
explicit 3D annotations warm-starts the training process; then, the
supervision is continued by simpler-to-annotate accumulated LiDAR sweeps and
images -- semantically labelled through vision foundational models. MinkOcc
effectively utilizes these sensor-rich supervisory cues and reduces reliance on
manual labeling by 90\% while maintaining competitive accuracy. In addition,
the proposed model incorporates information from LiDAR and camera data through
early fusion and leverages sparse convolution networks for real-time
prediction. With its efficiency in both supervision and computation, we aim to
extend MinkOcc beyond curated datasets, enabling broader real-world deployment
of 3D semantic occupancy prediction in autonomous driving.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 04:31:56 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Sze",
"Samuel",
""
],
[
"De Martini",
"Daniele",
""
],
[
"Kunze",
"Lars",
""
]
] | TITLE: MinkOcc: Towards real-time label-efficient semantic occupancy prediction
ABSTRACT: Developing 3D semantic occupancy prediction models often relies on dense 3D
annotations for supervised learning, a process that is both labor and
resource-intensive, underscoring the need for label-efficient or even
label-free approaches. To address this, we introduce MinkOcc, a multi-modal 3D
semantic occupancy prediction framework for cameras and LiDARs that proposes a
two-step semi-supervised training procedure. Here, a small dataset of
explicit 3D annotations warm-starts the training process; then, the
supervision is continued by simpler-to-annotate accumulated LiDAR sweeps and
images -- semantically labelled through vision foundational models. MinkOcc
effectively utilizes these sensor-rich supervisory cues and reduces reliance on
manual labeling by 90\% while maintaining competitive accuracy. In addition,
the proposed model incorporates information from LiDAR and camera data through
early fusion and leverages sparse convolution networks for real-time
prediction. With its efficiency in both supervision and computation, we aim to
extend MinkOcc beyond curated datasets, enabling broader real-world deployment
of 3D semantic occupancy prediction in autonomous driving.
|
2504.02271 | Haozhe Yin | Haozhe Yin and Kai Wang and Wenjie Zhang and Ying Zhang and Ruijia Wu
and Xuemin Lin | Efficient Computation of Hyper-triangles on Hypergraphs | null | null | null | null | cs.DS cs.DB | http://creativecommons.org/licenses/by/4.0/ | Hypergraphs, which use hyperedges to capture groupwise interactions among
different entities, have gained increasing attention recently for their
versatility in effectively modeling real-world networks. In this paper, we
study the problem of computing hyper-triangles (formed by three fully-connected
hyperedges), which is a basic structural unit in hypergraphs. Although existing
approaches can be adopted to compute hyper-triangles by exhaustively examining
hyperedge combinations, they overlook the structural characteristics
distinguishing different hyper-triangle patterns. Consequently, these
approaches lack specificity in computing particular hyper-triangle patterns and
exhibit low efficiency. In this paper, we unveil a new formation pathway for
hyper-triangles, transitioning from hyperedges to hyperwedges before assembling
into hyper-triangles, and classify hyper-triangle patterns based on
hyperwedges. Leveraging this insight, we introduce a two-step framework to
reduce the redundant checking of hyperedge combinations. Under this framework,
we propose efficient algorithms for computing a specific pattern of
hyper-triangles. Approximate algorithms are also devised to support estimated
counting scenarios. Furthermore, we introduce a fine-grained hypergraph
clustering coefficient measurement that can reflect diverse properties of
hypergraphs based on different hyper-triangle patterns. Extensive experimental
evaluations conducted on 11 real-world datasets validate the effectiveness and
efficiency of our proposed techniques.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 04:32:37 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Yin",
"Haozhe",
""
],
[
"Wang",
"Kai",
""
],
[
"Zhang",
"Wenjie",
""
],
[
"Zhang",
"Ying",
""
],
[
"Wu",
"Ruijia",
""
],
[
"Lin",
"Xuemin",
""
]
] | TITLE: Efficient Computation of Hyper-triangles on Hypergraphs
ABSTRACT: Hypergraphs, which use hyperedges to capture groupwise interactions among
different entities, have gained increasing attention recently for their
versatility in effectively modeling real-world networks. In this paper, we
study the problem of computing hyper-triangles (formed by three fully-connected
hyperedges), which is a basic structural unit in hypergraphs. Although existing
approaches can be adopted to compute hyper-triangles by exhaustively examining
hyperedge combinations, they overlook the structural characteristics
distinguishing different hyper-triangle patterns. Consequently, these
approaches lack specificity in computing particular hyper-triangle patterns and
exhibit low efficiency. In this paper, we unveil a new formation pathway for
hyper-triangles, transitioning from hyperedges to hyperwedges before assembling
into hyper-triangles, and classify hyper-triangle patterns based on
hyperwedges. Leveraging this insight, we introduce a two-step framework to
reduce the redundant checking of hyperedge combinations. Under this framework,
we propose efficient algorithms for computing a specific pattern of
hyper-triangles. Approximate algorithms are also devised to support estimated
counting scenarios. Furthermore, we introduce a fine-grained hypergraph
clustering coefficient measurement that can reflect diverse properties of
hypergraphs based on different hyper-triangle patterns. Extensive experimental
evaluations conducted on 11 real-world datasets validate the effectiveness and
efficiency of our proposed techniques.
|
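For concreteness, the exhaustive baseline this abstract argues against can be written in a few lines, reading a hyper-triangle as three pairwise-intersecting hyperedges (one common reading of "fully-connected"); the paper's hyperwedge framework exists precisely to avoid this cubic enumeration.

```python
# Illustrative brute-force baseline: check every triple of hyperedges for
# pairwise intersection. Quadratic-to-cubic in the number of hyperedges.
from itertools import combinations

def hyper_triangles(hyperedges):
    sets = [frozenset(e) for e in hyperedges]
    return [
        (i, j, k)
        for i, j, k in combinations(range(len(sets)), 3)
        if sets[i] & sets[j] and sets[j] & sets[k] and sets[i] & sets[k]
    ]

H = [{1, 2, 3}, {3, 4}, {2, 4, 5}, {6, 7}]
print(hyper_triangles(H))  # [(0, 1, 2)]
```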
2504.02272 | Qianyu Zhou | Shaocong Long, Qianyu Zhou, Xiangtai Li, Chenhao Ying, Yunhai Tong,
Lizhuang Ma, Yuan Luo, Dacheng Tao | Generative Classifier for Domain Generalization | Code will be available at https://github.com/longshaocong/GCDG | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Domain generalization (DG) aims to improve the generalizability of computer
vision models toward distribution shifts. The mainstream DG methods focus on
learning domain invariance; however, such methods overlook the potential
inherent in domain-specific information. While the prevailing discriminative
linear classifier is tailored to domain-invariant features, it struggles
when confronted with diverse domain-specific
information, e.g., intra-class shifts, that exhibits multi-modality. To address
these issues, we explore the theoretical implications of relying on domain
invariance, revealing the crucial role of domain-specific information in
mitigating the target risk for DG. Drawing from these insights, we propose
Generative Classifier-driven Domain Generalization (GCDG), introducing a
generative paradigm for the DG classifier based on Gaussian Mixture Models
(GMMs) for each class across domains. GCDG consists of three key modules:
Heterogeneity Learning Classifier~(HLC), Spurious Correlation Blocking~(SCB),
and Diverse Component Balancing~(DCB). Concretely, HLC attempts to model the
feature distributions and thereby capture valuable domain-specific information
via GMMs. SCB identifies the neural units containing spurious correlations and
perturbs them, mitigating the risk of HLC learning spurious patterns.
Meanwhile, DCB ensures a balanced contribution of components in HLC, preventing
the underestimation or neglect of critical components. In this way, GCDG excels
in capturing the nuances of domain-specific information characterized by
diverse distributions. GCDG demonstrates the potential to reduce the target
risk and encourage flat minima, improving the generalizability. Extensive
experiments show GCDG's comparable performance on five DG benchmarks and one
face anti-spoofing dataset, seamlessly integrating into existing DG methods
with consistent improvements.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 04:38:33 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Long",
"Shaocong",
""
],
[
"Zhou",
"Qianyu",
""
],
[
"Li",
"Xiangtai",
""
],
[
"Ying",
"Chenhao",
""
],
[
"Tong",
"Yunhai",
""
],
[
"Ma",
"Lizhuang",
""
],
[
"Luo",
"Yuan",
""
],
[
"Tao",
"Dacheng",
""
]
] | TITLE: Generative Classifier for Domain Generalization
ABSTRACT: Domain generalization (DG) aims to improve the generalizability of computer
vision models toward distribution shifts. The mainstream DG methods focus on
learning domain invariance; however, such methods overlook the potential
inherent in domain-specific information. While the prevailing discriminative
linear classifier is tailored to domain-invariant features, it struggles
when confronted with diverse domain-specific
information, e.g., intra-class shifts, that exhibits multi-modality. To address
these issues, we explore the theoretical implications of relying on domain
invariance, revealing the crucial role of domain-specific information in
mitigating the target risk for DG. Drawing from these insights, we propose
Generative Classifier-driven Domain Generalization (GCDG), introducing a
generative paradigm for the DG classifier based on Gaussian Mixture Models
(GMMs) for each class across domains. GCDG consists of three key modules:
Heterogeneity Learning Classifier~(HLC), Spurious Correlation Blocking~(SCB),
and Diverse Component Balancing~(DCB). Concretely, HLC attempts to model the
feature distributions and thereby capture valuable domain-specific information
via GMMs. SCB identifies the neural units containing spurious correlations and
perturbs them, mitigating the risk of HLC learning spurious patterns.
Meanwhile, DCB ensures a balanced contribution of components in HLC, preventing
the underestimation or neglect of critical components. In this way, GCDG excels
in capturing the nuances of domain-specific information characterized by
diverse distributions. GCDG demonstrates the potential to reduce the target
risk and encourage flat minima, improving the generalizability. Extensive
experiments show GCDG's comparable performance on five DG benchmarks and one
face anti-spoofing dataset, seamlessly integrating into existing DG methods
with consistent improvements.
|
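The generative-classifier core of GCDG, one Gaussian mixture per class with classification by maximum class-conditional likelihood, can be sketched with scikit-learn. The component count and feature space below are illustrative assumptions:

```python
# Hedged sketch of the generative-classifier idea (not GCDG itself): fit one
# GMM per class on feature vectors and classify by maximum log-likelihood,
# letting multi-modal intra-class structure survive where a single linear
# boundary would not.
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMClassifier:
    def __init__(self, n_components=3):
        self.n_components = n_components
        self.gmms = {}

    def fit(self, X, y):
        for c in np.unique(y):
            gmm = GaussianMixture(n_components=self.n_components)
            gmm.fit(X[y == c])
            self.gmms[c] = gmm
        return self

    def predict(self, X):
        classes = sorted(self.gmms)
        ll = np.stack([self.gmms[c].score_samples(X) for c in classes], axis=1)
        return np.array(classes)[np.argmax(ll, axis=1)]
```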
2504.02273 | Hung Le | Hung Le, Dai Do, Dung Nguyen, and Svetha Venkatesh | Reasoning Under 1 Billion: Memory-Augmented Reinforcement Learning for
Large Language Models | preprint, 20 pages | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent advances in fine-tuning large language models (LLMs) with
reinforcement learning (RL) have shown promising improvements in complex
reasoning tasks, particularly when paired with chain-of-thought (CoT)
prompting. However, these successes have been largely demonstrated on
large-scale models with billions of parameters, where a strong pretraining
foundation ensures effective initial exploration. In contrast, RL remains
challenging for tiny LLMs with 1 billion parameters or fewer because they lack
the necessary pretraining strength to explore effectively, often leading to
suboptimal reasoning patterns. This work introduces a novel intrinsic
motivation approach that leverages episodic memory to address this challenge,
improving tiny LLMs in CoT reasoning tasks. Inspired by human memory-driven
learning, our method leverages successful reasoning patterns stored in memory
while allowing for controlled exploration to generate novel responses.
Intrinsic rewards are computed efficiently using a kNN-based episodic memory,
allowing the model to discover new reasoning strategies while quickly adapting
to effective past solutions. Experiments on fine-tuning GSM8K and AI-MO
datasets demonstrate that our approach significantly enhances smaller LLMs'
sample efficiency and generalization capability, making RL-based reasoning
improvements more accessible in low-resource settings.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 04:46:17 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Le",
"Hung",
""
],
[
"Do",
"Dai",
""
],
[
"Nguyen",
"Dung",
""
],
[
"Venkatesh",
"Svetha",
""
]
] | TITLE: Reasoning Under 1 Billion: Memory-Augmented Reinforcement Learning for
Large Language Models
ABSTRACT: Recent advances in fine-tuning large language models (LLMs) with
reinforcement learning (RL) have shown promising improvements in complex
reasoning tasks, particularly when paired with chain-of-thought (CoT)
prompting. However, these successes have been largely demonstrated on
large-scale models with billions of parameters, where a strong pretraining
foundation ensures effective initial exploration. In contrast, RL remains
challenging for tiny LLMs with 1 billion parameters or fewer because they lack
the necessary pretraining strength to explore effectively, often leading to
suboptimal reasoning patterns. This work introduces a novel intrinsic
motivation approach that leverages episodic memory to address this challenge,
improving tiny LLMs in CoT reasoning tasks. Inspired by human memory-driven
learning, our method leverages successful reasoning patterns stored in memory
while allowing for controlled exploration to generate novel responses.
Intrinsic rewards are computed efficiently using a kNN-based episodic memory,
allowing the model to discover new reasoning strategies while quickly adapting
to effective past solutions. Experiments on fine-tuning GSM8K and AI-MO
datasets demonstrate that our approach significantly enhances smaller LLMs'
sample efficiency and generalization capability, making RL-based reasoning
improvements more accessible in low-resource settings.
|
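The kNN-based episodic-memory intrinsic reward can be sketched as a novelty bonus over stored embeddings of successful reasoning traces. The distance measure, k, and the squashing function below are assumptions for illustration, not the paper's exact formulation:

```python
# Hedged sketch of a kNN episodic-memory intrinsic reward: states far from
# stored successes earn a larger exploration bonus, while states near them
# earn little, steering the policy back toward known-good patterns.
import numpy as np

class EpisodicMemory:
    def __init__(self, k=5):
        self.k = k
        self.embeddings = []  # embeddings of past successful reasoning traces

    def add(self, emb):
        self.embeddings.append(np.asarray(emb, dtype=float))

    def intrinsic_reward(self, emb):
        if len(self.embeddings) < self.k:
            return 1.0  # memory too small: reward exploration by default
        d = np.linalg.norm(np.stack(self.embeddings) - emb, axis=1)
        knn = np.sort(d)[: self.k]
        return float(knn.mean() / (1.0 + knn.mean()))  # squash into [0, 1)
```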
2504.02275 | Kuan Lu | Menghao Huo and Kuan Lu and Qiang Zhu and Zhenrui Chen | Enhancing Customer Contact Efficiency with Graph Neural Networks in
Credit Card Fraud Detection Workflow | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Credit card fraud has been a persistent issue since the last century, causing
significant financial losses to the industry. The most effective way to prevent
fraud is by contacting customers to verify suspicious transactions. However,
while these systems are designed to detect fraudulent activity, they often
mistakenly flag legitimate transactions, leading to unnecessary declines that
disrupt the user experience and erode customer trust. Frequent false positives
can frustrate customers, resulting in dissatisfaction, increased complaints,
and a diminished sense of security. To address these limitations, we propose a
fraud detection framework incorporating Relational Graph Convolutional Networks
(RGCN) to enhance the accuracy and efficiency of identifying fraudulent
transactions. By leveraging the relational structure of transaction data, our
model reduces the need for direct customer confirmation while maintaining high
detection performance. Our experiments are conducted using the IBM credit card
transaction dataset to evaluate the effectiveness of this approach.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 04:50:45 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Huo",
"Menghao",
""
],
[
"Lu",
"Kuan",
""
],
[
"Zhu",
"Qiang",
""
],
[
"Chen",
"Zhenrui",
""
]
] | TITLE: Enhancing Customer Contact Efficiency with Graph Neural Networks in
Credit Card Fraud Detection Workflow
ABSTRACT: Credit card fraud has been a persistent issue since the last century, causing
significant financial losses to the industry. The most effective way to prevent
fraud is by contacting customers to verify suspicious transactions. However,
while these systems are designed to detect fraudulent activity, they often
mistakenly flag legitimate transactions, leading to unnecessary declines that
disrupt the user experience and erode customer trust. Frequent false positives
can frustrate customers, resulting in dissatisfaction, increased complaints,
and a diminished sense of security. To address these limitations, we propose a
fraud detection framework incorporating Relational Graph Convolutional Networks
(RGCN) to enhance the accuracy and efficiency of identifying fraudulent
transactions. By leveraging the relational structure of transaction data, our
model reduces the need for direct customer confirmation while maintaining high
detection performance. Our experiments are conducted using the IBM credit card
transaction dataset to evaluate the effectiveness of this approach.
|
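A relational graph convolution, the building block behind the RGCN named above, can be sketched in plain PyTorch: one weight matrix per edge type plus a self-loop transform. The toy graph, dense adjacencies, and relation types below are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of one RGCN layer. adjs holds one (ideally row-normalized)
# adjacency matrix per relation type (e.g., same-card, same-merchant edges).
import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.rel_weights = nn.Parameter(
            torch.randn(num_relations, in_dim, out_dim) * 0.1)
        self.self_weight = nn.Linear(in_dim, out_dim)

    def forward(self, x, adjs):
        out = self.self_weight(x)                  # self-loop term
        for r, adj in enumerate(adjs):
            out = out + adj @ x @ self.rel_weights[r]  # per-relation messages
        return torch.relu(out)

x = torch.randn(6, 8)                 # 6 transactions, 8 features each
adjs = torch.rand(2, 6, 6).round()    # 2 relation types (toy, unnormalized)
print(RGCNLayer(8, 16, 2)(x, adjs).shape)  # torch.Size([6, 16])
```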
2504.02277 | Amit Rand | Amit Rand and Hadi Ibrahim | Beyond Conventional Transformers: The Medical X-ray Attention (MXA)
Block for Improved Multi-Label Diagnosis Using Knowledge Distillation | 16 pages, 4 figures, 5 tables. For supplementary material and code,
see https://github.com/Hadi-M-Ibrahim/Beyond-Conventional-Transformers/ | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Medical imaging, particularly X-ray analysis, often involves detecting
multiple conditions simultaneously within a single scan, making multi-label
classification crucial for real-world clinical applications. We present the
Medical X-ray Attention (MXA) block, a novel attention mechanism tailored
specifically to address the unique challenges of X-ray abnormality detection.
The MXA block enhances traditional Multi-Head Self Attention (MHSA) by
integrating a specialized module that efficiently captures both detailed local
information and broader global context. To the best of our knowledge, this is
the first work to propose a task-specific attention mechanism for diagnosing
chest X-rays, as well as to attempt multi-label classification using an
Efficient Vision Transformer (EfficientViT). By embedding the MXA block within
the EfficientViT architecture and employing knowledge distillation, our
proposed model significantly improves performance on the CheXpert dataset, a
widely used benchmark for multi-label chest X-ray abnormality detection. Our
approach achieves an area under the curve (AUC) of 0.85, an absolute
improvement of 0.19 compared to our baseline model's AUC of 0.66, corresponding
to an approximately 233% relative improvement over random guessing
(AUC = 0.5).
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 04:55:42 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Rand",
"Amit",
""
],
[
"Ibrahim",
"Hadi",
""
]
] | TITLE: Beyond Conventional Transformers: The Medical X-ray Attention (MXA)
Block for Improved Multi-Label Diagnosis Using Knowledge Distillation
ABSTRACT: Medical imaging, particularly X-ray analysis, often involves detecting
multiple conditions simultaneously within a single scan, making multi-label
classification crucial for real-world clinical applications. We present the
Medical X-ray Attention (MXA) block, a novel attention mechanism tailored
specifically to address the unique challenges of X-ray abnormality detection.
The MXA block enhances traditional Multi-Head Self Attention (MHSA) by
integrating a specialized module that efficiently captures both detailed local
information and broader global context. To the best of our knowledge, this is
the first work to propose a task-specific attention mechanism for diagnosing
chest X-rays, as well as to attempt multi-label classification using an
Efficient Vision Transformer (EfficientViT). By embedding the MXA block within
the EfficientViT architecture and employing knowledge distillation, our
proposed model significantly improves performance on the CheXpert dataset, a
widely used benchmark for multi-label chest X-ray abnormality detection. Our
approach achieves an area under the curve (AUC) of 0.85, an absolute
improvement of 0.19 compared to our baseline model's AUC of 0.66, corresponding
to an approximately 233% relative improvement over random guessing
(AUC = 0.5).
|
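The knowledge-distillation objective in a multi-label setting like CheXpert typically mixes a hard-label loss with a soft-label loss against the teacher's per-finding probabilities. A hedged sketch, with the temperature and mixing weight as illustrative assumptions rather than the paper's values:

```python
# Multi-label KD sketch: the student matches ground-truth labels (hard term)
# and the teacher's temperature-softened sigmoid outputs (soft term).
import torch
import torch.nn.functional as F

def multilabel_kd_loss(student_logits, teacher_logits, targets,
                       T=2.0, alpha=0.5):
    # Hard-label term: ordinary multi-label BCE against ground truth.
    hard = F.binary_cross_entropy_with_logits(student_logits, targets)
    # Soft-label term: BCE against the teacher's softened probabilities.
    soft_targets = torch.sigmoid(teacher_logits / T)
    soft = F.binary_cross_entropy_with_logits(student_logits / T, soft_targets)
    return alpha * hard + (1.0 - alpha) * soft

student = torch.randn(4, 14)   # 4 scans, 14 CheXpert findings
teacher = torch.randn(4, 14)
labels = torch.randint(0, 2, (4, 14)).float()
print(multilabel_kd_loss(student, teacher, labels))
```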
2504.02280 | Jason Zutty | YiMing Yu, Jason Zutty | LLM-Guided Evolution: An Autonomous Model Optimization for Object
Detection | null | null | null | null | cs.NE cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In machine learning, Neural Architecture Search (NAS) requires domain
knowledge of model design and a large amount of trial-and-error to achieve
promising performance. Meanwhile, evolutionary algorithms have traditionally
relied on fixed rules and pre-defined building blocks. The Large Language Model
(LLM)-Guided Evolution (GE) framework transformed this approach by
incorporating LLMs to directly modify model source code for image
classification algorithms on CIFAR data and intelligently guide mutations and
crossovers. A key element of LLM-GE is the "Evolution of Thought" (EoT)
technique, which establishes feedback loops, allowing LLMs to refine their
decisions iteratively based on how previous operations performed. In this
study, we perform NAS for object detection by improving LLM-GE to modify the
architecture of You Only Look Once (YOLO) models to enhance performance on the
KITTI dataset. Our approach intelligently adjusts the design and settings of
YOLO to find the optimal algorithms against objectives such as detection
accuracy and speed. We show that LLM-GE produced variants with significant
performance improvements, such as an increase in Mean Average Precision from
92.5% to 94.5%. This result highlights the flexibility and effectiveness of
LLM-GE on real-world challenges, offering a novel paradigm for automated
machine learning that combines LLM-driven reasoning with evolutionary
strategies.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 05:06:06 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Yu",
"YiMing",
""
],
[
"Zutty",
"Jason",
""
]
] | TITLE: LLM-Guided Evolution: An Autonomous Model Optimization for Object
Detection
ABSTRACT: In machine learning, Neural Architecture Search (NAS) requires domain
knowledge of model design and a large amount of trial-and-error to achieve
promising performance. Meanwhile, evolutionary algorithms have traditionally
relied on fixed rules and pre-defined building blocks. The Large Language Model
(LLM)-Guided Evolution (GE) framework transformed this approach by
incorporating LLMs to directly modify model source code for image
classification algorithms on CIFAR data and intelligently guide mutations and
crossovers. A key element of LLM-GE is the "Evolution of Thought" (EoT)
technique, which establishes feedback loops, allowing LLMs to refine their
decisions iteratively based on how previous operations performed. In this
study, we perform NAS for object detection by improving LLM-GE to modify the
architecture of You Only Look Once (YOLO) models to enhance performance on the
KITTI dataset. Our approach intelligently adjusts the design and settings of
YOLO to find the optimal algorithms against objectives such as detection
accuracy and speed. We show that LLM-GE produced variants with significant
performance improvements, such as an increase in Mean Average Precision from
92.5% to 94.5%. This result highlights the flexibility and effectiveness of
LLM-GE on real-world challenges, offering a novel paradigm for automated
machine learning that combines LLM-driven reasoning with evolutionary
strategies.
|
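The outer LLM-GE loop, evaluate, select, and ask an LLM to mutate model source code using Evolution-of-Thought feedback, can be sketched as below. The `llm_mutate` function is a hypothetical placeholder for an LLM client call; the population size, selection rule, and fitness function are illustrative assumptions, not the paper's procedure.

```python
# Hedged sketch of an LLM-guided evolution loop over model source code.
import random

def llm_mutate(source_code, feedback):
    """Hypothetical stand-in: prompt an LLM to rewrite `source_code`,
    conditioning on (candidate, fitness) feedback from prior generations."""
    raise NotImplementedError("wire up your LLM client here")

def evolve(population, evaluate, generations=10, survivors=4):
    # population: list of model source-code strings; evaluate: code -> fitness
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)[:survivors]
        feedback = [(c, evaluate(c)) for c in scored]  # Evolution-of-Thought loop
        children = [llm_mutate(random.choice(scored), feedback)
                    for _ in range(len(population) - survivors)]
        population = scored + children
    return max(population, key=evaluate)
```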
2504.02287 | Trung Thanh Nguyen | Trung Thanh Nguyen, Yasutomo Kawanishi, Vijay John, Takahiro Komamizu,
Ichiro Ide | MultiSensor-Home: A Wide-area Multi-modal Multi-view Dataset for Action
Recognition and Transformer-based Sensor Fusion | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Multi-modal multi-view action recognition is a rapidly growing field in
computer vision, offering significant potential for applications in
surveillance. However, current datasets often fail to address real-world
challenges such as wide-area environmental conditions, asynchronous data
streams, and the lack of frame-level annotations. Furthermore, existing methods
face difficulties in effectively modeling inter-view relationships and
enhancing spatial feature learning. In this study, we propose the Multi-modal
Multi-view Transformer-based Sensor Fusion (MultiTSF) method and introduce the
MultiSensor-Home dataset, a novel benchmark designed for comprehensive action
recognition in home environments. The MultiSensor-Home dataset features
untrimmed videos captured by distributed sensors, providing high-resolution RGB
and audio data along with detailed multi-view frame-level action labels. The
proposed MultiTSF method leverages a Transformer-based fusion mechanism to
dynamically model inter-view relationships. Furthermore, the method also
integrates an external human detection module to enhance spatial feature
learning. Experiments on MultiSensor-Home and MM-Office datasets demonstrate
the superiority of MultiTSF over the state-of-the-art methods. The quantitative
and qualitative results highlight the effectiveness of the proposed method in
advancing real-world multi-modal multi-view action recognition.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 05:23:08 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Nguyen",
"Trung Thanh",
""
],
[
"Kawanishi",
"Yasutomo",
""
],
[
"John",
"Vijay",
""
],
[
"Komamizu",
"Takahiro",
""
],
[
"Ide",
"Ichiro",
""
]
] | TITLE: MultiSensor-Home: A Wide-area Multi-modal Multi-view Dataset for Action
Recognition and Transformer-based Sensor Fusion
ABSTRACT: Multi-modal multi-view action recognition is a rapidly growing field in
computer vision, offering significant potential for applications in
surveillance. However, current datasets often fail to address real-world
challenges such as wide-area environmental conditions, asynchronous data
streams, and the lack of frame-level annotations. Furthermore, existing methods
face difficulties in effectively modeling inter-view relationships and
enhancing spatial feature learning. In this study, we propose the Multi-modal
Multi-view Transformer-based Sensor Fusion (MultiTSF) method and introduce the
MultiSensor-Home dataset, a novel benchmark designed for comprehensive action
recognition in home environments. The MultiSensor-Home dataset features
untrimmed videos captured by distributed sensors, providing high-resolution RGB
and audio data along with detailed multi-view frame-level action labels. The
proposed MultiTSF method leverages a Transformer-based fusion mechanism to
dynamically model inter-view relationships. Furthermore, the method also
integrates an external human detection module to enhance spatial feature
learning. Experiments on MultiSensor-Home and MM-Office datasets demonstrate
the superiority of MultiTSF over the state-of-the-art methods. The quantitative
and qualitative results highlight the effectiveness of the proposed method in
advancing real-world multi-modal multi-view action recognition.
|
2504.02293 | Abhijit Paul | Sharif Md. Abdullah, Abhijit Paul, Shebuti Rayana, Ahmedul Kabir,
Zarif Masud | State-of-the-Art Translation of Text-to-Gloss using mBART : A case study
of Bangla | Initial Version | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Despite a large deaf and mute population of 1.7 million, Bangla Sign Language
(BdSL) remains an understudied domain. Specifically, there are no works on the
Bangla text-to-gloss translation task. To address this gap, we begin by
addressing the dataset problem. We take inspiration from the grammatical
rule-based gloss generation used for German and American Sign Language (ASL)
and adapt it for BdSL. We also leverage an LLM to generate synthetic data, and
use back-translation and text generation for data augmentation. With the
dataset prepared, we began experimentation. We fine-tuned the pretrained
mBART-50 and mBERT-multiclass-uncased models on our dataset. We also trained
GRU, RNN and a novel seq-to-seq model with multi-head attention. We observe
significantly high performance (SacreBLEU = 79.53) when fine-tuning the
pretrained multilingual mBART-50 model from Facebook. We then explored why we
observe such high performance with mBART, and soon noticed an interesting
property: mBART was trained on shuffled and masked text data, and the gloss
form likewise exhibits shuffling. We therefore hypothesize that mBART is
inherently well suited to text-to-gloss tasks. To test this hypothesis, we
trained mBART-50 on the PHOENIX-14T benchmark and evaluated it against the
existing literature. Our mBART-50 finetune demonstrated state-of-the-art
performance on the PHOENIX-14T benchmark, far outperforming existing models on
all 6 metrics (SacreBLEU = 63.89, BLEU-1 = 55.14, BLEU-2 = 38.07, BLEU-3 =
27.13, BLEU-4 = 20.68, COMET = 0.624). Based on these results, this study
proposes a new paradigm for the text-to-gloss task using mBART models.
Additionally, our results show that the BdSL text-to-gloss task can greatly
benefit from a rule-based synthetic dataset.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 05:47:51 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Abdullah",
"Sharif Md.",
""
],
[
"Paul",
"Abhijit",
""
],
[
"Rayana",
"Shebuti",
""
],
[
"Kabir",
"Ahmedul",
""
],
[
"Masud",
"Zarif",
""
]
] | TITLE: State-of-the-Art Translation of Text-to-Gloss using mBART : A case study
of Bangla
ABSTRACT: Despite a large deaf and mute population of 1.7 million, Bangla Sign Language
(BdSL) remains an understudied domain. Specifically, there are no works on the
Bangla text-to-gloss translation task. To address this gap, we begin by
addressing the dataset problem. We take inspiration from the grammatical
rule-based gloss generation used for German and American Sign Language (ASL)
and adapt it for BdSL. We also leverage an LLM to generate synthetic data, and
use back-translation and text generation for data augmentation. With the
dataset prepared, we began experimentation. We fine-tuned the pretrained
mBART-50 and mBERT-multiclass-uncased models on our dataset. We also trained
GRU, RNN and a novel seq-to-seq model with multi-head attention. We observe
significantly high performance (SacreBLEU = 79.53) when fine-tuning the
pretrained multilingual mBART-50 model from Facebook. We then explored why we
observe such high performance with mBART, and soon noticed an interesting
property: mBART was trained on shuffled and masked text data, and the gloss
form likewise exhibits shuffling. We therefore hypothesize that mBART is
inherently well suited to text-to-gloss tasks. To test this hypothesis, we
trained mBART-50 on the PHOENIX-14T benchmark and evaluated it against the
existing literature. Our mBART-50 finetune demonstrated state-of-the-art
performance on the PHOENIX-14T benchmark, far outperforming existing models on
all 6 metrics (SacreBLEU = 63.89, BLEU-1 = 55.14, BLEU-2 = 38.07, BLEU-3 =
27.13, BLEU-4 = 20.68, COMET = 0.624). Based on these results, this study
proposes a new paradigm for the text-to-gloss task using mBART models.
Additionally, our results show that the BdSL text-to-gloss task can greatly
benefit from a rule-based synthetic dataset.
|
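Fine-tuning mBART-50 for text-to-gloss follows the standard Hugging Face seq2seq recipe. A minimal sketch, where the checkpoint name is real but the language codes, toy training pair, and hyperparameters are assumptions for illustration:

```python
# Hedged sketch of mBART-50 fine-tuning for text-to-gloss (one toy step).
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration
import torch

tok = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50", src_lang="bn_IN", tgt_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")

pairs = [("আমি ভাত খাই", "RICE I EAT")]  # (Bangla text, gloss) toy example
batch = tok([s for s, _ in pairs], text_target=[g for _, g in pairs],
            return_tensors="pt", padding=True)

optim = torch.optim.AdamW(model.parameters(), lr=5e-5)
loss = model(**batch).loss   # standard seq2seq cross-entropy on the gloss
loss.backward()
optim.step()
```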
2504.02298 | Xinyu Luo | Xinyu Luo, Kecheng Chen, Pao-Sheng Vincent Sun, Chris Xing Tian,
Arindam Basu, Haoliang Li | SPACE: SPike-Aware Consistency Enhancement for Test-Time Adaptation in
Spiking Neural Networks | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spiking Neural Networks (SNNs), as a biologically plausible alternative to
Artificial Neural Networks (ANNs), have demonstrated advantages in terms of
energy efficiency, temporal processing, and biological plausibility. However,
SNNs are highly sensitive to distribution shifts, which can significantly
degrade their performance in real-world scenarios. Traditional test-time
adaptation (TTA) methods designed for ANNs often fail to address the unique
computational dynamics of SNNs, such as sparsity and temporal spiking behavior.
To address these challenges, we propose $\textbf{SP}$ike-$\textbf{A}$ware
$\textbf{C}$onsistency $\textbf{E}$nhancement (SPACE), the first source-free
and single-instance TTA method specifically designed for SNNs. SPACE leverages
the inherent spike dynamics of SNNs to maximize the consistency of
spike-behavior-based local feature maps across augmented versions of a single
test sample, enabling robust adaptation without requiring source data. We
evaluate SPACE on multiple datasets, including CIFAR-10-C, CIFAR-100-C,
Tiny-ImageNet-C and DVS Gesture-C. Furthermore, SPACE demonstrates strong
generalization across different model architectures, achieving consistent
performance improvements on both VGG9 and ResNet11. Experimental results show
that SPACE outperforms state-of-the-art methods, highlighting its effectiveness
and robustness in real-world settings.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 06:05:05 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Luo",
"Xinyu",
""
],
[
"Chen",
"Kecheng",
""
],
[
"Sun",
"Pao-Sheng Vincent",
""
],
[
"Tian",
"Chris Xing",
""
],
[
"Basu",
"Arindam",
""
],
[
"Li",
"Haoliang",
""
]
] | TITLE: SPACE: SPike-Aware Consistency Enhancement for Test-Time Adaptation in
Spiking Neural Networks
ABSTRACT: Spiking Neural Networks (SNNs), as a biologically plausible alternative to
Artificial Neural Networks (ANNs), have demonstrated advantages in terms of
energy efficiency, temporal processing, and biological plausibility. However,
SNNs are highly sensitive to distribution shifts, which can significantly
degrade their performance in real-world scenarios. Traditional test-time
adaptation (TTA) methods designed for ANNs often fail to address the unique
computational dynamics of SNNs, such as sparsity and temporal spiking behavior.
To address these challenges, we propose $\textbf{SP}$ike-$\textbf{A}$ware
$\textbf{C}$onsistency $\textbf{E}$nhancement (SPACE), the first source-free
and single-instance TTA method specifically designed for SNNs. SPACE leverages
the inherent spike dynamics of SNNs to maximize the consistency of
spike-behavior-based local feature maps across augmented versions of a single
test sample, enabling robust adaptation without requiring source data. We
evaluate SPACE on multiple datasets, including CIFAR-10-C, CIFAR-100-C,
Tiny-ImageNet-C and DVS Gesture-C. Furthermore, SPACE demonstrates strong
generalization across different model architectures, achieving consistent
performance improvements on both VGG9 and ResNet11. Experimental results show
that SPACE outperforms state-of-the-art methods, highlighting its effectiveness
and robustness in real-world settings.
|
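The consistency objective SPACE is named for, agreement of spike-derived local feature maps across augmented views of one test sample, can be sketched as a mean-map MSE. The feature shape, number of views, and loss form below are illustrative assumptions, not the paper's exact objective:

```python
# Hedged consistency-loss sketch: pull each augmented view's spike-rate
# feature map toward the mean map over all views of the same test sample.
import torch
import torch.nn.functional as F

def consistency_loss(feature_maps):
    # feature_maps: list of (C, H, W) spike-rate maps, one per augmented view.
    mean_map = torch.stack(feature_maps).mean(dim=0)
    return sum(F.mse_loss(f, mean_map) for f in feature_maps) / len(feature_maps)

views = [torch.rand(16, 8, 8) for _ in range(4)]  # 4 augmentations, one sample
print(consistency_loss(views))
```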
2504.02302 | Wupeng Wang | Wupeng Wang, Zexu Pan, Xinke Li, Shuai Wang, Haizhou Li | Causal Self-supervised Pretrained Frontend with Predictive Code for
Speech Separation | arXiv admin note: text overlap with arXiv:2411.03085 | null | null | null | cs.SD cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Speech separation (SS) seeks to disentangle a multi-talker speech mixture
into single-talker speech streams. Although SS can be generally achieved using
offline methods, such a processing paradigm is not suitable for real-time
streaming applications. Causal separation models, which rely only on past and
present information, offer a promising solution for real-time streaming.
However, these models typically suffer from notable performance degradation due
to the absence of future context. In this paper, we introduce a novel frontend
that is designed to mitigate the mismatch between training and run-time
inference by implicitly incorporating future information into causal models
through predictive patterns. The pretrained frontend employs a transformer
decoder network with a causal convolutional encoder as the backbone and is
pretrained in a self-supervised manner with two innovative pretext tasks:
autoregressive hybrid prediction and contextual knowledge distillation. These
tasks enable the model to capture predictive patterns directly from mixtures in
a self-supervised manner. The pretrained frontend subsequently serves as a
feature extractor to generate high-quality predictive patterns. Comprehensive
evaluations on synthetic and real-world datasets validated the effectiveness of
the proposed pretrained frontend.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 06:18:30 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Wang",
"Wupeng",
""
],
[
"Pan",
"Zexu",
""
],
[
"Li",
"Xinke",
""
],
[
"Wang",
"Shuai",
""
],
[
"Li",
"Haizhou",
""
]
] | TITLE: Causal Self-supervised Pretrained Frontend with Predictive Code for
Speech Separation
ABSTRACT: Speech separation (SS) seeks to disentangle a multi-talker speech mixture
into single-talker speech streams. Although SS can be generally achieved using
offline methods, such a processing paradigm is not suitable for real-time
streaming applications. Causal separation models, which rely only on past and
present information, offer a promising solution for real-time streaming.
However, these models typically suffer from notable performance degradation due
to the absence of future context. In this paper, we introduce a novel frontend
that is designed to mitigate the mismatch between training and run-time
inference by implicitly incorporating future information into causal models
through predictive patterns. The pretrained frontend employs a transformer
decoder network with a causal convolutional encoder as the backbone and is
pretrained in a self-supervised manner with two innovative pretext tasks:
autoregressive hybrid prediction and contextual knowledge distillation. These
tasks enable the model to capture predictive patterns directly from mixtures in
a self-supervised manner. The pretrained frontend subsequently serves as a
feature extractor to generate high-quality predictive patterns. Comprehensive
evaluations on synthetic and real-world datasets validated the effectiveness of
the proposed pretrained frontend.
|
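The abstract above rests on a causal convolutional encoder, i.e., one whose output at time t never sees inputs after t. A minimal sketch of that building block follows (layer sizes are illustrative, not the paper's configuration); causality is obtained by padding only the past side of the signal.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """Causal 1-D convolution: left-pad so output at time t depends
    only on inputs <= t, as required for streaming separation."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                        # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))  # pad the past only
        return self.conv(x)

enc = nn.Sequential(CausalConv1d(1, 64, 5), nn.ReLU(), CausalConv1d(64, 64, 5))
mix = torch.randn(2, 1, 16000)                   # 1 s of a 16 kHz mixture
print(enc(mix).shape)                            # torch.Size([2, 64, 16000])
```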
2504.02312 | Jiayang Xu | Xiaoda Yang, Jiayang Xu, Kaixuan Luan, Xinyu Zhan, Hongshun Qiu,
Shijun Shi, Hao Li, Shuai Yang, Li Zhang, Checheng Yu, Cewu Lu, Lixin Yang | OmniCam: Unified Multimodal Video Generation via Camera Control | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Camera control, which achieves diverse visual effects by changing camera
position and pose, has attracted widespread attention. However, existing
methods face challenges such as complex interaction and limited control
capabilities. To address these issues, we present OmniCam, a unified multimodal
camera control framework. Leveraging large language models and video diffusion
models, OmniCam generates spatio-temporally consistent videos. It supports
various combinations of input modalities: the user can provide text or video
with expected trajectory as camera path guidance, and image or video as content
reference, enabling precise control over camera motion. To facilitate the
training of OmniCam, we introduce the OmniTr dataset, which contains a large
collection of high-quality long-sequence trajectories, videos, and
corresponding descriptions. Experimental results demonstrate that our model
achieves state-of-the-art performance in high-quality camera-controlled video
generation across various metrics.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 06:38:30 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Yang",
"Xiaoda",
""
],
[
"Xu",
"Jiayang",
""
],
[
"Luan",
"Kaixuan",
""
],
[
"Zhan",
"Xinyu",
""
],
[
"Qiu",
"Hongshun",
""
],
[
"Shi",
"Shijun",
""
],
[
"Li",
"Hao",
""
],
[
"Yang",
"Shuai",
""
],
[
"Zhang",
"Li",
""
],
[
"Yu",
"Checheng",
""
],
[
"Lu",
"Cewu",
""
],
[
"Yang",
"Lixin",
""
]
] | TITLE: OmniCam: Unified Multimodal Video Generation via Camera Control
ABSTRACT: Camera control, which achieves diverse visual effects by changing camera
position and pose, has attracted widespread attention. However, existing
methods face challenges such as complex interaction and limited control
capabilities. To address these issues, we present OmniCam, a unified multimodal
camera control framework. Leveraging large language models and video diffusion
models, OmniCam generates spatio-temporally consistent videos. It supports
various combinations of input modalities: the user can provide text or video
with expected trajectory as camera path guidance, and image or video as content
reference, enabling precise control over camera motion. To facilitate the
training of OmniCam, we introduce the OmniTr dataset, which contains a large
collection of high-quality long-sequence trajectories, videos, and
corresponding descriptions. Experimental results demonstrate that our model
achieves state-of-the-art performance in high-quality camera-controlled video
generation across various metrics.
|
2504.02313 | Zhuoran Tan | Zhuoran Tan, Christos Anagnostopoulos, Jeremy Singer | Distributed Temporal Graph Learning with Provenance for APT Detection in
Supply Chains | This paper has been accepted at 45th IEEE International Conference on
Distributed Computing Systems | null | null | null | cs.CR cs.DC | http://creativecommons.org/licenses/by/4.0/ | The cyber supply chain, encompassing digital assets, software, and hardware, has
become an essential component of modern Information and Communications
Technology (ICT) provisioning. However, the growing inter-dependencies have
introduced numerous attack vectors, making supply chains a prime target for
exploitation. In particular, advanced persistent threats (APTs) frequently
leverage supply chain vulnerabilities (SCVs) as entry points, benefiting from
their inherent stealth. Current defense strategies primarily focus on prevention
through blockchain for integrity assurance or detection using plain-text source
code analysis in open-source software (OSS). However, these approaches overlook
scenarios where source code is unavailable and fail to address detection and
defense during runtime. To bridge this gap, we propose a novel approach that
integrates multi-source data, constructs a comprehensive dynamic provenance
graph, and detects APT behavior in real time using temporal graph learning.
Given the lack of tailored datasets in both industry and academia, we also aim
to simulate a custom dataset by replaying real-world supply chain exploits with
multi-source monitoring.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 06:42:26 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Tan",
"Zhuoran",
""
],
[
"Anagnostopoulos",
"Christos",
""
],
[
"Singer",
"Jeremy",
""
]
] | TITLE: Distributed Temporal Graph Learning with Provenance for APT Detection in
Supply Chains
ABSTRACT: The cyber supply chain, encompassing digital assets, software, and hardware, has
become an essential component of modern Information and Communications
Technology (ICT) provisioning. However, the growing inter-dependencies have
introduced numerous attack vectors, making supply chains a prime target for
exploitation. In particular, advanced persistent threats (APTs) frequently
leverage supply chain vulnerabilities (SCVs) as entry points, benefiting from
their inherent stealth. Current defense strategies primarily focus on prevention
through blockchain for integrity assurance or detection using plain-text source
code analysis in open-source software (OSS). However, these approaches overlook
scenarios where source code is unavailable and fail to address detection and
defense during runtime. To bridge this gap, we propose a novel approach that
integrates multi-source data, constructs a comprehensive dynamic provenance
graph, and detects APT behavior in real time using temporal graph learning.
Given the lack of tailored datasets in both industry and academia, we also aim
to simulate a custom dataset by replaying real-world supply chain exploits with
multi-source monitoring.
|
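The core data structure the abstract above builds on is a dynamic provenance graph: a time-ordered stream of subject-object interaction edges assembled from heterogeneous logs. A small sketch of that construction follows (event field names are assumptions); the resulting edge stream is what a temporal graph learning model would consume.

```python
from dataclasses import dataclass

@dataclass
class ProvEdge:
    src: str       # e.g., a process such as "pid:1234"
    dst: str       # e.g., a file or socket
    relation: str  # e.g., "read", "write", "exec", "connect"
    t: float       # event timestamp

def build_temporal_graph(events):
    """Assemble a dynamic provenance graph from multi-source log events;
    temporal ordering matters when tracing stealthy APT behavior."""
    edges = [ProvEdge(e["subject"], e["object"], e["action"], e["ts"])
             for e in events]
    edges.sort(key=lambda e: e.t)
    return edges

events = [
    {"subject": "pid:1234", "object": "/tmp/payload.bin", "action": "write", "ts": 10.0},
    {"subject": "pid:1234", "object": "10.0.0.5:443", "action": "connect", "ts": 12.5},
]
print(build_temporal_graph(events))
```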
2504.02317 | Hezhe Qiao | Ye Su, Hezhe Qiao, Di Wu, Yuwen Chen, Lin Chen | Temporal Gaussian Copula For Clinical Multivariate Time Series Data
Imputation | Accepted in BIBM2024 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The imputation of multivariate time series (MTS) is particularly
challenging since MTS data typically contain irregular patterns of missing
values due to various factors such as instrument failures, interference from
irrelevant data, and privacy regulations. Existing statistical methods and deep
learning methods have shown promising results in time series imputation. In
this paper, we propose a Temporal Gaussian Copula Model (TGC) for three-order
MTS imputation. The key idea is to leverage the Gaussian Copula to explore the
cross-variable and temporal relationships based on the latent Gaussian
representation. Subsequently, we employ an Expectation-Maximization (EM)
algorithm to improve robustness in managing data with varying missing rates.
Comprehensive experiments were conducted on three real-world MTS datasets. The
results demonstrate that our TGC substantially outperforms the state-of-the-art
imputation methods. Additionally, the TGC model exhibits stronger robustness to
the varying missing ratios in the test dataset. Our code is available at
https://github.com/MVL-Lab/TGC-MTS.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 06:44:05 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Su",
"Ye",
""
],
[
"Qiao",
"Hezhe",
""
],
[
"Wu",
"Di",
""
],
[
"Chen",
"Yuwen",
""
],
[
"Chen",
"Lin",
""
]
] | TITLE: Temporal Gaussian Copula For Clinical Multivariate Time Series Data
Imputation
ABSTRACT: The imputation of multivariate time series (MTS) is particularly
challenging since MTS data typically contain irregular patterns of missing
values due to various factors such as instrument failures, interference from
irrelevant data, and privacy regulations. Existing statistical methods and deep
learning methods have shown promising results in time series imputation. In
this paper, we propose a Temporal Gaussian Copula Model (TGC) for three-order
MTS imputation. The key idea is to leverage the Gaussian Copula to explore the
cross-variable and temporal relationships based on the latent Gaussian
representation. Subsequently, we employ an Expectation-Maximization (EM)
algorithm to improve robustness in managing data with varying missing rates.
Comprehensive experiments were conducted on three real-world MTS datasets. The
results demonstrate that our TGC substantially outperforms the state-of-the-art
imputation methods. Additionally, the TGC model exhibits stronger robustness to
the varying missing ratios in the test dataset. Our code is available at
https://github.com/MVL-Lab/TGC-MTS.
|
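To make the latent-Gaussian idea in the TGC abstract above concrete, here is a toy Gaussian-copula imputer: each variable is mapped to a latent Gaussian through its empirical CDF, a covariance is fitted, and missing entries are filled with the conditional mean in latent space, mapped back through the observed quantiles. This is a sketch of the copula mechanism only; it omits the paper's temporal modeling and EM procedure.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_impute(X):
    """Toy Gaussian-copula imputation for a 2-D array X with NaNs."""
    n, d = X.shape
    Z = np.full((n, d), np.nan)
    observed = [X[~np.isnan(X[:, j]), j] for j in range(d)]
    for j in range(d):
        obs = ~np.isnan(X[:, j])
        ranks = X[obs, j].argsort().argsort() + 1
        Z[obs, j] = norm.ppf(ranks / (obs.sum() + 1))   # probit of empirical CDF
    C = np.ma.cov(np.ma.masked_invalid(Z), rowvar=False).filled(0)
    C += 1e-6 * np.eye(d)                               # regularize
    X_hat = X.copy()
    for i in range(n):
        m, o = np.isnan(X[i]), ~np.isnan(X[i])
        if m.any() and o.any():
            # Conditional mean of missing latents given observed latents.
            z = C[np.ix_(m, o)] @ np.linalg.solve(C[np.ix_(o, o)], Z[i, o])
            for k, j in enumerate(np.where(m)[0]):
                X_hat[i, j] = np.quantile(observed[j], norm.cdf(z[k]))
    return X_hat
```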
2504.02318 | Samuel Clarke | Samuel Clarke, Suzannah Wistreich, Yanjie Ze, Jiajun Wu | X-Capture: An Open-Source Portable Device for Multi-Sensory Learning | Project page: https://xcapture.github.io/ | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding objects through multiple sensory modalities is fundamental to
human perception, enabling cross-sensory integration and richer comprehension.
For AI and robotic systems to replicate this ability, access to diverse,
high-quality multi-sensory data is critical. Existing datasets are often
limited by their focus on controlled environments, simulated objects, or
restricted modality pairings. We introduce X-Capture, an open-source, portable,
and cost-effective device for real-world multi-sensory data collection, capable
of capturing correlated RGBD images, tactile readings, and impact audio. With a
build cost under $1,000, X-Capture democratizes the creation of multi-sensory
datasets, requiring only consumer-grade tools for assembly. Using X-Capture, we
curate a sample dataset of 3,000 total points on 500 everyday objects from
diverse, real-world environments, offering both richness and variety. Our
experiments demonstrate the value of both the quantity and the sensory breadth
of our data for both pretraining and fine-tuning multi-modal representations
for object-centric tasks such as cross-sensory retrieval and reconstruction.
X-Capture lays the groundwork for advancing human-like sensory representations
in AI, emphasizing scalability, accessibility, and real-world applicability.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 06:44:25 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Clarke",
"Samuel",
""
],
[
"Wistreich",
"Suzannah",
""
],
[
"Ze",
"Yanjie",
""
],
[
"Wu",
"Jiajun",
""
]
] | TITLE: X-Capture: An Open-Source Portable Device for Multi-Sensory Learning
ABSTRACT: Understanding objects through multiple sensory modalities is fundamental to
human perception, enabling cross-sensory integration and richer comprehension.
For AI and robotic systems to replicate this ability, access to diverse,
high-quality multi-sensory data is critical. Existing datasets are often
limited by their focus on controlled environments, simulated objects, or
restricted modality pairings. We introduce X-Capture, an open-source, portable,
and cost-effective device for real-world multi-sensory data collection, capable
of capturing correlated RGBD images, tactile readings, and impact audio. With a
build cost under $1,000, X-Capture democratizes the creation of multi-sensory
datasets, requiring only consumer-grade tools for assembly. Using X-Capture, we
curate a sample dataset of 3,000 total points on 500 everyday objects from
diverse, real-world environments, offering both richness and variety. Our
experiments demonstrate the value of both the quantity and the sensory breadth
of our data for both pretraining and fine-tuning multi-modal representations
for object-centric tasks such as cross-sensory retrieval and reconstruction.
X-Capture lays the groundwork for advancing human-like sensory representations
in AI, emphasizing scalability, accessibility, and real-world applicability.
|
2504.02322 | Zhuoran Tan | Zhuoran Tan, Qiyuan Wang, Christos Anagnostopoulos, Shameem P.
Parambath, Jeremy Singer, Sam Temple | Distributed Log-driven Anomaly Detection System based on Evolving
Decision Making | This paper has been accepted at 45th IEEE International Conference on
Distributed Computing Systems | null | null | null | cs.CR cs.DC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Effective anomaly detection from logs is crucial for enhancing cybersecurity
defenses by enabling the early identification of threats. Despite advances in
anomaly detection, existing systems often fall short in areas such as
post-detection validation, scalability, and effective maintenance. These
limitations not only hinder the detection of new threats but also impair
overall system performance. To address these challenges, we propose CEDLog, a
novel practical framework that integrates Elastic Weight Consolidation (EWC)
for continual learning and implements distributed computing for scalable
processing by combining Apache Airflow and Dask. In CEDLog, anomalies are
detected through the synthesis of Multi-layer Perceptron (MLP) and Graph
Convolutional Networks (GCNs) using critical features present in event logs.
Through comparisons with update strategies on large-scale datasets, we
demonstrate the strengths of CEDLog, showcasing efficient updates and low false
positives.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 06:50:30 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Tan",
"Zhuoran",
""
],
[
"Wang",
"Qiyuan",
""
],
[
"Anagnostopoulos",
"Christos",
""
],
[
"Parambath",
"Shameem P.",
""
],
[
"Singer",
"Jeremy",
""
],
[
"Temple",
"Sam",
""
]
] | TITLE: Distributed Log-driven Anomaly Detection System based on Evolving
Decision Making
ABSTRACT: Effective anomaly detection from logs is crucial for enhancing cybersecurity
defenses by enabling the early identification of threats. Despite advances in
anomaly detection, existing systems often fall short in areas such as
post-detection validation, scalability, and effective maintenance. These
limitations not only hinder the detection of new threats but also impair
overall system performance. To address these challenges, we propose CEDLog, a
novel practical framework that integrates Elastic Weight Consolidation (EWC)
for continual learning and implements distributed computing for scalable
processing by combining Apache Airflow and Dask. In CEDLog, anomalies are
detected through the synthesis of Multi-layer Perceptron (MLP) and Graph
Convolutional Networks (GCNs) using critical features present in event logs.
Through comparisons with update strategies on large-scale datasets, we
demonstrate the strengths of CEDLog, showcasing efficient updates and low false
positives.
|
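The continual-learning ingredient named in the CEDLog abstract above, Elastic Weight Consolidation, has a standard quadratic form; a sketch of that regularizer follows (the standard formulation, not CEDLog's exact code). Parameters that carried high Fisher information for earlier log data are penalized for drifting during updates on new logs.

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1.0):
    """EWC regularizer: lam/2 * sum_i F_i * (theta_i - theta*_i)^2,
    where F_i is the (diagonal) Fisher information from the old task."""
    loss = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return lam / 2.0 * loss

# Usage during an update on new log data (lam is a tunable strength):
#   total_loss = task_loss + ewc_penalty(model, fisher, old_params, lam=100.0)
```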
2504.02327 | Weibin Liao | Weibin Liao, Xin Gao, Tianyu Jia, Rihong Qiu, Yifan Zhu, Yang Lin, Xu
Chu, Junfeng Zhao, Yasha Wang | LearNAT: Learning NL2SQL with AST-guided Task Decomposition for Large
Language Models | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural Language to SQL (NL2SQL) has emerged as a critical task for enabling
seamless interaction with databases. Recent advancements in Large Language
Models (LLMs) have demonstrated remarkable performance in this domain. However,
existing NL2SQL methods predominantly rely on closed-source LLMs leveraging
prompt engineering, while open-source models typically require fine-tuning to
acquire domain-specific knowledge. Despite these efforts, open-source LLMs
struggle with complex NL2SQL tasks due to the indirect expression of user query
objectives and the semantic gap between user queries and database schemas.
Inspired by the application of reinforcement learning in mathematical
problem-solving to encourage step-by-step reasoning in LLMs, we propose LearNAT
(Learning NL2SQL with AST-guided Task Decomposition), a novel framework that
improves the performance of open-source LLMs on complex NL2SQL tasks through
task decomposition and reinforcement learning. LearNAT introduces three key
components: (1) a Decomposition Synthesis Procedure that leverages Abstract
Syntax Trees (ASTs) to guide efficient search and pruning strategies for task
decomposition, (2) Margin-aware Reinforcement Learning, which employs
fine-grained step-level optimization via DPO with AST margins, and (3) Adaptive
Demonstration Reasoning, a mechanism for dynamically selecting relevant
examples to enhance decomposition capabilities. Extensive experiments on two
benchmark datasets, Spider and BIRD, demonstrate that LearNAT enables a
7B-parameter open-source LLM to achieve performance comparable to GPT-4, while
offering improved efficiency and accessibility.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 06:59:44 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liao",
"Weibin",
""
],
[
"Gao",
"Xin",
""
],
[
"Jia",
"Tianyu",
""
],
[
"Qiu",
"Rihong",
""
],
[
"Zhu",
"Yifan",
""
],
[
"Lin",
"Yang",
""
],
[
"Chu",
"Xu",
""
],
[
"Zhao",
"Junfeng",
""
],
[
"Wang",
"Yasha",
""
]
] | TITLE: LearNAT: Learning NL2SQL with AST-guided Task Decomposition for Large
Language Models
ABSTRACT: Natural Language to SQL (NL2SQL) has emerged as a critical task for enabling
seamless interaction with databases. Recent advancements in Large Language
Models (LLMs) have demonstrated remarkable performance in this domain. However,
existing NL2SQL methods predominantly rely on closed-source LLMs leveraging
prompt engineering, while open-source models typically require fine-tuning to
acquire domain-specific knowledge. Despite these efforts, open-source LLMs
struggle with complex NL2SQL tasks due to the indirect expression of user query
objectives and the semantic gap between user queries and database schemas.
Inspired by the application of reinforcement learning in mathematical
problem-solving to encourage step-by-step reasoning in LLMs, we propose LearNAT
(Learning NL2SQL with AST-guided Task Decomposition), a novel framework that
improves the performance of open-source LLMs on complex NL2SQL tasks through
task decomposition and reinforcement learning. LearNAT introduces three key
components: (1) a Decomposition Synthesis Procedure that leverages Abstract
Syntax Trees (ASTs) to guide efficient search and pruning strategies for task
decomposition, (2) Margin-aware Reinforcement Learning, which employs
fine-grained step-level optimization via DPO with AST margins, and (3) Adaptive
Demonstration Reasoning, a mechanism for dynamically selecting relevant
examples to enhance decomposition capabilities. Extensive experiments on two
benchmark datasets, Spider and BIRD, demonstrate that LearNAT enables a
7B-parameter open-source LLM to achieve performance comparable to GPT-4, while
offering improved efficiency and accessibility.
|
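The "DPO with AST margins" component in the LearNAT abstract above admits a compact loss; the sketch below shows the generic margin-aware DPO form, with the AST-derived margins stood in by a plain `margins` tensor (an assumption, since the paper's margin computation is not reproduced here).

```python
import torch
import torch.nn.functional as F

def margin_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, margins, beta=0.1):
    """Margin-aware DPO: preferred decomposition steps (w) must beat
    dispreferred ones (l) by at least the given margin in implicit reward."""
    logits = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l)) - margins
    return -F.logsigmoid(logits).mean()

# logp_* are summed token log-probs of chosen/rejected steps under the
# policy and the frozen reference model; margins would grow with AST distance.
loss = margin_dpo_loss(torch.tensor([-5.0]), torch.tensor([-9.0]),
                       torch.tensor([-6.0]), torch.tensor([-8.0]),
                       margins=torch.tensor([0.5]))
print(loss)
```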
2504.02335 | Seif Mzoughi Msc | Seif Mzoughi and Mohamed Elshafeia and Foutse Khomh | Evaluating and Enhancing Segmentation Model Robustness with Metamorphic
Testing | null | null | null | null | cs.CV cs.SE | http://creativecommons.org/licenses/by/4.0/ | Image segmentation is critical for applications such as medical imaging,
augmented reality, and video surveillance. However, segmentation models often
lack robustness, making them vulnerable to adversarial perturbations from
subtle image distortions. In this work, we propose SegRMT, a metamorphic
testing approach that leverages genetic algorithms (GA) to optimize sequences
of spatial and spectral transformations while preserving image fidelity via a
predefined PSNR threshold. Using the Cityscapes dataset, our method generates
adversarial examples that effectively challenge the DeepLabV3 segmentation
model. Our experiments show that SegRMT reduces DeepLabV3's mean Intersection
over Union (mIoU) to 6.4%, outperforming other adversarial baselines that
decrease mIoU to between 8.5% and 21.7%. Furthermore, when used for adversarial
training, SegRMT boosts model performance, achieving mIoU improvements up to
73% on dedicated adversarial datasets and increasing cross-adversarial mIoU to
53.8%, compared to only 2%-10% for other methods. These findings demonstrate
that SegRMT not only simulates realistic image distortions but also enhances
the robustness of segmentation models, making it a valuable tool for ensuring
reliable performance in safety-critical applications.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 07:15:45 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Mzoughi",
"Seif",
""
],
[
"Elshafeia",
"Mohamed",
""
],
[
"Khomh",
"Foutse",
""
]
] | TITLE: Evaluating and Enhancing Segmentation Model Robustness with Metamorphic
Testing
ABSTRACT: Image segmentation is critical for applications such as medical imaging,
augmented reality, and video surveillance. However, segmentation models often
lack robustness, making them vulnerable to adversarial perturbations from
subtle image distortions. In this work, we propose SegRMT, a metamorphic
testing approach that leverages genetic algorithms (GA) to optimize sequences
of spatial and spectral transformations while preserving image fidelity via a
predefined PSNR threshold. Using the Cityscapes dataset, our method generates
adversarial examples that effectively challenge the DeepLabV3 segmentation
model. Our experiments show that SegRMT reduces DeepLabV3's mean Intersection
over Union (mIoU) to 6.4%, outperforming other adversarial baselines that
decrease mIoU to between 8.5% and 21.7%. Furthermore, when used for adversarial
training, SegRMT boosts model performance, achieving mIoU improvements up to
73% on dedicated adversarial datasets and increasing cross-adversarial mIoU to
53.8%, compared to only 2%-10% for other methods. These findings demonstrate
that SegRMT not only simulates realistic image distortions but also enhances
the robustness of segmentation models, making it a valuable tool for ensuring
reliable performance in safety-critical applications.
|
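The SegRMT abstract above pairs a genetic algorithm with a PSNR fidelity constraint; a skeleton of that search loop follows (an illustrative re-creation, not the released code). `transforms` is a list of image-to-image callables and `fitness` returns the segmentation model's mIoU on the distorted image, which the GA minimizes subject to the PSNR floor.

```python
import random
import numpy as np

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def ga_search(image, transforms, fitness, psnr_min=25.0, pop=20, gens=30):
    """Evolve transform sequences that hurt the model (low mIoU) while
    the PSNR constraint keeps the image faithful to the original."""
    def apply(seq):
        out = image
        for t in seq:
            out = transforms[t](out)
        return out
    def score(seq):  # infeasible sequences get infinite (worst) score
        out = apply(seq)
        return fitness(out) if psnr(image, out) >= psnr_min else float("inf")
    population = [[random.randrange(len(transforms)) for _ in range(4)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=score)                 # lower mIoU is "fitter"
        parents = population[: pop // 2]
        children = [p[:2] + q[2:] for p, q in zip(parents, reversed(parents))]
        for c in children:                         # point mutation
            c[random.randrange(4)] = random.randrange(len(transforms))
        population = parents + children
    return apply(min(population, key=score))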
2504.02345 | Masakazu Yoshimura | Masakazu Yoshimura, Junji Otsuka, Radu Berdan, Takeshi Ohashi | SemiISP/SemiIE: Semi-Supervised Image Signal Processor and Image
Enhancement Leveraging One-to-Many Mapping sRGB-to-RAW | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | DNN-based methods have been successful in Image Signal Processor (ISP) and
image enhancement (IE) tasks. However, the cost of creating training data for
these tasks is considerably higher than for other tasks, making it difficult to
prepare large-scale datasets. Also, creating personalized ISP and IE with
minimal training data can lead to new value streams since preferred image
quality varies depending on the person and use case. While semi-supervised
learning could be a potential solution in such cases, it has rarely been
utilized for these tasks. In this paper, we realize semi-supervised learning
for ISP and IE leveraging a RAW image reconstruction (sRGB-to-RAW) method.
Although existing sRGB-to-RAW methods can generate pseudo-RAW image datasets
that improve the accuracy of RAW-based high-level computer vision tasks such as
object detection, their quality is not sufficient for ISP and IE tasks that
require precise image quality definition. Therefore, we also propose a
sRGB-to-RAW method that can improve the image quality of these tasks. The
proposed semi-supervised learning with the proposed sRGB-to-RAW method
successfully improves the image quality of various models on various datasets.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 07:28:16 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Yoshimura",
"Masakazu",
""
],
[
"Otsuka",
"Junji",
""
],
[
"Berdan",
"Radu",
""
],
[
"Ohashi",
"Takeshi",
""
]
] | TITLE: SemiISP/SemiIE: Semi-Supervised Image Signal Processor and Image
Enhancement Leveraging One-to-Many Mapping sRGB-to-RAW
ABSTRACT: DNN-based methods have been successful in Image Signal Processor (ISP) and
image enhancement (IE) tasks. However, the cost of creating training data for
these tasks is considerably higher than for other tasks, making it difficult to
prepare large-scale datasets. Also, creating personalized ISP and IE with
minimal training data can lead to new value streams since preferred image
quality varies depending on the person and use case. While semi-supervised
learning could be a potential solution in such cases, it has rarely been
utilized for these tasks. In this paper, we realize semi-supervised learning
for ISP and IE leveraging a RAW image reconstruction (sRGB-to-RAW) method.
Although existing sRGB-to-RAW methods can generate pseudo-RAW image datasets
that improve the accuracy of RAW-based high-level computer vision tasks such as
object detection, their quality is not sufficient for ISP and IE tasks that
require precise image quality definition. Therefore, we also propose a
sRGB-to-RAW method that can improve the image quality of these tasks. The
proposed semi-supervised learning with the proposed sRGB-to-RAW method
successfully improves the image quality of various models on various datasets.
|
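In outline, the semi-supervised recipe the SemiISP/SemiIE abstract above describes alternates real and pseudo supervision; a rough sketch follows (reconstructed from the abstract, with all callables hypothetical): unlabeled sRGB images are converted to pseudo-RAW by the sRGB-to-RAW model, yielding extra (RAW, sRGB) pairs for the ISP alongside the few real pairs.

```python
def semi_supervised_isp_epoch(isp, srgb2raw, labeled, unlabeled, step):
    """One training epoch: supervised updates on real (RAW, sRGB) pairs,
    then pseudo-supervised updates on generated pseudo-RAW pairs."""
    for raw, srgb in labeled:
        step(isp, raw, srgb)          # ordinary supervised update
    for srgb in unlabeled:
        pseudo_raw = srgb2raw(srgb)   # one-to-many mapping; sample one RAW
        step(isp, pseudo_raw, srgb)   # pseudo-supervised update
```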
2504.02349 | Artyom Gadetsky | Artyom Gadetsky, Andrei Atanov, Yulun Jiang, Zhitong Gao, Ghazal
Hosseini Mighan, Amir Zamir, Maria Brbic | Large (Vision) Language Models are Unsupervised In-Context Learners | ICLR 2025 camera-ready | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in large language and vision-language models have enabled
zero-shot inference, allowing models to solve new tasks without task-specific
training. Various adaptation techniques such as prompt engineering, In-Context
Learning (ICL), and supervised fine-tuning can further enhance the model's
performance on a downstream task, but they require substantial manual effort to
construct effective prompts or labeled examples. In this work, we introduce a
joint inference framework for fully unsupervised adaptation, eliminating the
need for manual prompt engineering and labeled examples. Unlike zero-shot
inference, which makes independent predictions, the joint inference makes
predictions simultaneously for all inputs in a given task. Since direct joint
inference involves computationally expensive optimization, we develop efficient
approximation techniques, leading to two unsupervised adaptation methods:
unsupervised fine-tuning and unsupervised ICL. We demonstrate the effectiveness
of our methods across diverse tasks and models, including language-only
Llama-3.1 on natural language processing tasks, reasoning-oriented Qwen2.5-Math
on grade school math problems, vision-language OpenFlamingo on vision tasks,
and the API-only access GPT-4o model on massive multi-discipline tasks. Our
experiments demonstrate substantial improvements over the standard zero-shot
approach, including 39% absolute improvement on the challenging GSM8K math
reasoning dataset. Remarkably, despite being fully unsupervised, our framework
often performs on par with supervised approaches that rely on ground truth
labels.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 07:33:02 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Gadetsky",
"Artyom",
""
],
[
"Atanov",
"Andrei",
""
],
[
"Jiang",
"Yulun",
""
],
[
"Gao",
"Zhitong",
""
],
[
"Mighan",
"Ghazal Hosseini",
""
],
[
"Zamir",
"Amir",
""
],
[
"Brbic",
"Maria",
""
]
] | TITLE: Large (Vision) Language Models are Unsupervised In-Context Learners
ABSTRACT: Recent advances in large language and vision-language models have enabled
zero-shot inference, allowing models to solve new tasks without task-specific
training. Various adaptation techniques such as prompt engineering, In-Context
Learning (ICL), and supervised fine-tuning can further enhance the model's
performance on a downstream task, but they require substantial manual effort to
construct effective prompts or labeled examples. In this work, we introduce a
joint inference framework for fully unsupervised adaptation, eliminating the
need for manual prompt engineering and labeled examples. Unlike zero-shot
inference, which makes independent predictions, joint inference makes
predictions simultaneously for all inputs in a given task. Since direct joint
inference involves computationally expensive optimization, we develop efficient
approximation techniques, leading to two unsupervised adaptation methods:
unsupervised fine-tuning and unsupervised ICL. We demonstrate the effectiveness
of our methods across diverse tasks and models, including language-only
Llama-3.1 on natural language processing tasks, reasoning-oriented Qwen2.5-Math
on grade school math problems, vision-language OpenFlamingo on vision tasks,
and the API-only access GPT-4o model on massive multi-discipline tasks. Our
experiments demonstrate substantial improvements over the standard zero-shot
approach, including 39% absolute improvement on the challenging GSM8K math
reasoning dataset. Remarkably, despite being fully unsupervised, our framework
often performs on par with supervised approaches that rely on ground truth
labels.
|
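One way to picture the unsupervised ICL idea in the abstract above is the rough sketch below (not the authors' exact approximation of joint inference): label every input zero-shot, then re-predict each input with the model's own most confident pseudo-labeled examples in the prompt, so predictions are no longer independent. `model.predict` and its `prefix` argument are hypothetical.

```python
def unsupervised_icl(model, inputs, n_demo=8):
    """Self-generated in-context demonstrations from zero-shot predictions."""
    zero_shot = [model.predict(x) for x in inputs]          # (label, confidence)
    demos = sorted(zip(inputs, zero_shot),
                   key=lambda p: p[1][1], reverse=True)[:n_demo]
    prompt_prefix = "".join(f"Input: {x}\nOutput: {y}\n\n"
                            for x, (y, _) in demos)
    # Second pass: condition every prediction on the shared pseudo-demos.
    return [model.predict(x, prefix=prompt_prefix)[0] for x in inputs]
```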
2504.02356 | Janghyun Kim | Janghyun Kim, Minseong Kweon, Jinsun Park, Ukcheol Shin | All-day Depth Completion via Thermal-LiDAR Fusion | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Depth completion, which estimates dense depth from sparse LiDAR and RGB
images, has demonstrated outstanding performance in well-lit conditions.
However, due to the limitations of RGB sensors, existing methods often struggle
to achieve reliable performance in harsh environments, such as heavy rain and
low-light conditions. Furthermore, we observe that ground truth depth maps
often suffer from large missing measurements in adverse weather conditions such
as heavy rain, leading to insufficient supervision. In contrast, thermal
cameras are known for providing clear and reliable visibility in such
conditions, yet research on thermal-LiDAR depth completion remains
underexplored. Moreover, the characteristics of thermal images, such as
blurriness, low contrast, and noise, bring unclear depth boundary problems. To
address these challenges, we first evaluate the feasibility and robustness of
thermal-LiDAR depth completion across diverse lighting (e.g., well-lit,
low-light), weather (e.g., clear-sky, rainy), and environment (e.g., indoor,
outdoor) conditions, by conducting extensive benchmarks on the MS$^2$ and ViViD
datasets. In addition, we propose a framework that utilizes COntrastive
learning and Pseudo-Supervision (COPS) to enhance depth boundary clarity and
improve completion accuracy by leveraging a depth foundation model in two key
ways. First, COPS enforces a depth-aware contrastive loss between different
depth points by mining positive and negative samples using a monocular depth
foundation model to sharpen depth boundaries. Second, it mitigates the issue of
incomplete supervision from ground truth depth maps by leveraging foundation
model predictions as dense depth priors. We also provide in-depth analyses of
the key challenges in thermal-LiDAR depth completion to aid in understanding
the task and encourage future research.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 07:45:03 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Kim",
"Janghyun",
""
],
[
"Kweon",
"Minseong",
""
],
[
"Park",
"Jinsun",
""
],
[
"Shin",
"Ukcheol",
""
]
] | TITLE: All-day Depth Completion via Thermal-LiDAR Fusion
ABSTRACT: Depth completion, which estimates dense depth from sparse LiDAR and RGB
images, has demonstrated outstanding performance in well-lit conditions.
However, due to the limitations of RGB sensors, existing methods often struggle
to achieve reliable performance in harsh environments, such as heavy rain and
low-light conditions. Furthermore, we observe that ground truth depth maps
often suffer from large missing measurements in adverse weather conditions such
as heavy rain, leading to insufficient supervision. In contrast, thermal
cameras are known for providing clear and reliable visibility in such
conditions, yet research on thermal-LiDAR depth completion remains
underexplored. Moreover, the characteristics of thermal images, such as
blurriness, low contrast, and noise, bring unclear depth boundary problems. To
address these challenges, we first evaluate the feasibility and robustness of
thermal-LiDAR depth completion across diverse lighting (e.g., well-lit,
low-light), weather (e.g., clear-sky, rainy), and environment (e.g., indoor,
outdoor) conditions, by conducting extensive benchmarks on the MS$^2$ and ViViD
datasets. In addition, we propose a framework that utilizes COntrastive
learning and Pseudo-Supervision (COPS) to enhance depth boundary clarity and
improve completion accuracy by leveraging a depth foundation model in two key
ways. First, COPS enforces a depth-aware contrastive loss between different
depth points by mining positive and negative samples using a monocular depth
foundation model to sharpen depth boundaries. Second, it mitigates the issue of
incomplete supervision from ground truth depth maps by leveraging foundation
model predictions as dense depth priors. We also provide in-depth analyses of
the key challenges in thermal-LiDAR depth completion to aid in understanding
the task and encourage future research.
|
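The depth-aware contrastive loss in the COPS framework above can be pictured as an InfoNCE objective over pixel embeddings; a simplified sketch follows (one negative per anchor). The anchor/positive/negative index triplets are assumed to be mined beforehand with a monocular depth foundation model, so positives share the anchor's depth and negatives do not.

```python
import torch
import torch.nn.functional as F

def depth_contrastive_loss(feat, anchor, pos, neg, tau=0.07):
    """Binary InfoNCE over pixel embeddings: pull each anchor toward its
    depth-consistent positive, push it from its depth-inconsistent negative."""
    f = F.normalize(feat, dim=1)                   # (N, D) pixel embeddings
    a, p, n = f[anchor], f[pos], f[neg]
    pos_sim = (a * p).sum(-1) / tau
    neg_sim = (a * n).sum(-1) / tau
    return -torch.log(pos_sim.exp() / (pos_sim.exp() + neg_sim.exp())).mean()
```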
2504.02357 | Xiaolei Li | Xiaolei Li, Jialun Cao, Yepang Liu, Shing-Chi Cheung, Hailong Wang | ReuseDroid: A VLM-empowered Android UI Test Migrator Boosted by Active
Feedback | 13 pages, 5 figures | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | GUI testing is an essential quality assurance process in mobile app
development. However, the creation and maintenance of GUI tests for mobile apps
are resource-intensive and costly. Recognizing that many apps share similar
functionalities, researchers have proposed various techniques to migrate GUI
tests from one app to another with similar features. For example, some
techniques employ mapping-based approaches to align the GUI elements traversed
by the tests of a source app to those present in the target app. Other test
migration techniques have also been proposed to leverage large language models
(LLMs) by adapting the GUI tasks in source tests. However, these techniques are
ineffective in dealing with different operational logic between the source and
target apps. The semantics of GUI elements may not be correctly inferred due to
the missing analysis of these operational flows. In this work, we propose REUSEDROID, a
novel multiagent framework for GUI test migration empowered by Large
Vision-Language Models (VLMs). REUSEDROID is powered by multiple VLM-based
agents, each tackling a stage of the test migration process by leveraging the
relevant visual and textual information embedded in GUI pages. An insight of
REUSEDROID is to migrate tests based only on the core logic shared across
similar apps, while their entire operational logic could differ. We evaluate
REUSEDROID on LinPro, a new test migration dataset that consists of 578
migration tasks for 39 popular apps across 4 categories. The experimental
result shows that REUSEDROID can successfully migrate 90.3% of the migration
tasks, outperforming the best mapping-based and LLM-based baselines by 318.1%
and 109.1%, respectively.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 07:45:09 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Li",
"Xiaolei",
""
],
[
"Cao",
"Jialun",
""
],
[
"Liu",
"Yepang",
""
],
[
"Cheung",
"Shing-Chi",
""
],
[
"Wang",
"Hailong",
""
]
] | TITLE: ReuseDroid: A VLM-empowered Android UI Test Migrator Boosted by Active
Feedback
ABSTRACT: GUI testing is an essential quality assurance process in mobile app
development. However, the creation and maintenance of GUI tests for mobile apps
are resource-intensive and costly. Recognizing that many apps share similar
functionalities, researchers have proposed various techniques to migrate GUI
tests from one app to another with similar features. For example, some
techniques employ mapping-based approaches to align the GUI elements traversed
by the tests of a source app to those present in the target app. Other test
migration techniques have also been proposed to leverage large language models
(LLMs) by adapting the GUI tasks in source tests. However, these techniques are
ineffective in dealing with different operational logic between the source and
target apps. The semantics of GUI elements may not be correctly inferred due to
the missing analysis of these operational flows. In this work, we propose REUSEDROID, a
novel multiagent framework for GUI test migration empowered by Large
Vision-Language Models (VLMs). REUSEDROID is powered by multiple VLM-based
agents, each tackling a stage of the test migration process by leveraging the
relevant visual and textual information embedded in GUI pages. An insight of
REUSEDROID is to migrate tests based only on the core logic shared across
similar apps, while their entire operational logic could differ. We evaluate
REUSEDROID on LinPro, a new test migration dataset that consists of 578
migration tasks for 39 popular apps across 4 categories. The experimental
result shows that REUSEDROID can successfully migrate 90.3% of the migration
tasks, outperforming the best mapping-based and LLM-based baselines by 318.1%
and 109.1%, respectively.
|
2504.02362 | Wang Haodian | Haodian Wang, Long Peng, Yuejin Sun, Zengyu Wan, Yang Wang and Yang
Cao | Brightness Perceiving for Recursive Low-Light Image Enhancement | null | IEEE Transactions on Artificial Intelligence Vol 5, no. 6,
3034--3045 (2023) | 10.1109/TAI.2023.3339092 | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the wide dynamic range in real low-light scenes, there will be large
differences in the degree of contrast degradation and detail blurring of
captured images, making it difficult for existing end-to-end methods to enhance
low-light images to normal exposure. To address the above issue, we decompose
low-light image enhancement into a recursive enhancement task and propose a
brightness-perceiving-based recursive enhancement framework for high dynamic
range low-light image enhancement. Specifically, our recursive enhancement
framework consists of two parallel sub-networks: Adaptive Contrast and Texture
enhancement network (ACT-Net) and Brightness Perception network (BP-Net). The
ACT-Net is proposed to adaptively enhance image contrast and details under the
guidance of the brightness adjustment branch and gradient adjustment branch,
which are proposed to perceive the degradation degree of contrast and details
in low-light images. To adaptively enhance images captured under different
brightness levels, BP-Net is proposed to control the number of recursive
enhancement iterations of ACT-Net by exploring the image brightness
distribution properties.
Finally, in order to coordinate ACT-Net and BP-Net, we design a novel
unsupervised training strategy to facilitate the training procedure. To further
validate the effectiveness of the proposed method, we construct a new dataset
with a broader brightness distribution by mixing three low-light datasets.
Compared with eleven existing representative methods, the proposed method
achieves new SOTA performance on six reference and no reference metrics.
Specifically, the proposed method improves the PSNR by 0.9 dB compared to the
existing SOTA method.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 07:53:33 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Wang",
"Haodian",
""
],
[
"Peng",
"Long",
""
],
[
"Sun",
"Yuejin",
""
],
[
"Wan",
"Zengyu",
""
],
[
"Wang",
"Yang",
""
],
[
"Cao",
"Yang",
""
]
] | TITLE: Brightness Perceiving for Recursive Low-Light Image Enhancement
ABSTRACT: Due to the wide dynamic range in real low-light scenes, there will be large
differences in the degree of contrast degradation and detail blurring of
captured images, making it difficult for existing end-to-end methods to enhance
low-light images to normal exposure. To address the above issue, we decompose
low-light image enhancement into a recursive enhancement task and propose a
brightness-perceiving-based recursive enhancement framework for high dynamic
range low-light image enhancement. Specifically, our recursive enhancement
framework consists of two parallel sub-networks: Adaptive Contrast and Texture
enhancement network (ACT-Net) and Brightness Perception network (BP-Net). The
ACT-Net is proposed to adaptively enhance image contrast and details under the
guidance of the brightness adjustment branch and gradient adjustment branch,
which are proposed to perceive the degradation degree of contrast and details
in low-light images. To adaptively enhance images captured under different
brightness levels, BP-Net is proposed to control the number of recursive
enhancement iterations of ACT-Net by exploring the image brightness
distribution properties.
Finally, in order to coordinate ACT-Net and BP-Net, we design a novel
unsupervised training strategy to facilitate the training procedure. To further
validate the effectiveness of the proposed method, we construct a new dataset
with a broader brightness distribution by mixing three low-light datasets.
Compared with eleven existing representative methods, the proposed method
achieves new SOTA performance on six reference and no reference metrics.
Specifically, the proposed method improves the PSNR by 0.9 dB compared to the
existing SOTA method.
|
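The control flow the abstract above describes, with BP-Net deciding how many ACT-Net passes an image needs, reduces to a short loop; the sketch below is reconstructed from the abstract (the scalar BP-Net output and the 0.5 stopping threshold are assumptions, not the paper's values).

```python
def recursive_enhance(img, act_net, bp_net, max_iters=5):
    """Recursive low-light enhancement: BP-Net perceives the brightness
    distribution; ACT-Net applies one contrast/texture pass per iteration."""
    for _ in range(max_iters):
        need_more = bp_net(img)        # assumed scalar in [0, 1]
        if float(need_more) < 0.5:     # image judged bright enough -> stop
            break
        img = act_net(img)
    return img
```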
2504.02367 | Zhendong Cao | Zhendong Cao, Lei Wang | CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design | 8 pages, 6 figures | null | null | null | cond-mat.mtrl-sci cs.LG physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement fine-tuning has instrumental enhanced the instruction-following
and reasoning abilities of large language models. In this work, we explore the
applications of reinforcement fine-tuning to the autoregressive
transformer-based materials generative model CrystalFormer (arXiv:2403.15734)
using discriminative machine learning models such as interatomic potentials and
property prediction models. By optimizing reward signals-such as energy above
the convex hull and material property figures of merit-reinforcement
fine-tuning infuses knowledge from discriminative models into generative
models. The resulting model, CrystalFormer-RL, shows enhanced stability in
generated crystals and successfully discovers crystals with desirable yet
conflicting material properties, such as substantial dielectric constant and
band gap simultaneously. Notably, we observe that reinforcement fine-tuning
not only enables the property-guided novel material design ability of the
generative pre-trained model but also unlocks property-driven material
retrieval from the unsupervised pre-training dataset. Leveraging rewards from
discriminative models to fine-tune materials generative models opens an
exciting gateway to the synergies of the machine learning ecosystem for
materials.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 07:59:30 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Cao",
"Zhendong",
""
],
[
"Wang",
"Lei",
""
]
] | TITLE: CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design
ABSTRACT: Reinforcement fine-tuning has been instrumental in enhancing the instruction-following
and reasoning abilities of large language models. In this work, we explore the
applications of reinforcement fine-tuning to the autoregressive
transformer-based materials generative model CrystalFormer (arXiv:2403.15734)
using discriminative machine learning models such as interatomic potentials and
property prediction models. By optimizing reward signals-such as energy above
the convex hull and material property figures of merit-reinforcement
fine-tuning infuses knowledge from discriminative models into generative
models. The resulting model, CrystalFormer-RL, shows enhanced stability in
generated crystals and successfully discovers crystals with desirable yet
conflicting material properties, such as substantial dielectric constant and
band gap simultaneously. Notably, we observe that reinforcement fine-tuning
not only enables the property-guided novel material design ability of the
generative pre-trained model but also unlocks property-driven material
retrieval from the unsupervised pre-training dataset. Leveraging rewards from
discriminative models to fine-tune materials generative models opens an
exciting gateway to the synergies of the machine learning ecosystem for
materials.
|
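The reward-driven fine-tuning loop in the CrystalFormer-RL abstract above can be sketched as a plain REINFORCE step with a mean baseline (a generic setup, not the paper's exact recipe): sample crystals from the autoregressive model, score them with a discriminative model such as an interatomic potential, and push up the log-likelihood of high-reward samples. `model.sample` is a hypothetical API returning samples with their log-probabilities.

```python
import torch

def reinforce_step(model, optimizer, reward_fn, batch_size=8):
    """One reward fine-tuning step for an autoregressive generative model."""
    samples, logps = model.sample(batch_size)        # hypothetical API
    rewards = torch.tensor([reward_fn(s) for s in samples])
    advantage = rewards - rewards.mean()             # baseline for variance reduction
    loss = -(advantage * logps).mean()               # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return rewards.mean().item()
```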
2504.02377 | Zhelin Xu | Zhelin Xu, Shuhei Yamamoto, Hideo Joho | Research Paper Recommender System by Considering Users' Information
Seeking Behaviors | 9 pages, 5 figures, accepted as a full paper at IJCNN 2025 | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid growth of scientific publications, researchers need to spend
more time and effort searching for papers that align with their research
interests. To address this challenge, paper recommendation systems have been
developed to help researchers effectively identify relevant papers. One of
the leading approaches to paper recommendation is the content-based filtering
method. Traditional content-based filtering methods recommend relevant papers
to users based on the overall similarity of papers. However, these approaches
do not take into account the information seeking behaviors that users commonly
employ when searching for literature. Such behaviors include not only
evaluating the overall similarity among papers, but also focusing on specific
sections, such as the method section, to ensure that the approach aligns with
the user's interests. In this paper, we propose a content-based filtering
recommendation method that takes this information seeking behavior into
account. Specifically, in addition to considering the overall content of a
paper, our approach also takes into account three specific sections
(background, method, and results) and assigns weights to them to better reflect
user preferences. We conduct offline evaluations on the publicly available DBLP
dataset, and the results demonstrate that the proposed method outperforms six
baseline methods in terms of precision, recall, F1-score, MRR, and MAP.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 08:11:58 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Xu",
"Zhelin",
""
],
[
"Yamamoto",
"Shuhei",
""
],
[
"Joho",
"Hideo",
""
]
] | TITLE: Research Paper Recommender System by Considering Users' Information
Seeking Behaviors
ABSTRACT: With the rapid growth of scientific publications, researchers need to spend
more time and effort searching for papers that align with their research
interests. To address this challenge, paper recommendation systems have been
developed to help researchers effectively identify relevant papers. One of
the leading approaches to paper recommendation is the content-based filtering
method. Traditional content-based filtering methods recommend relevant papers
to users based on the overall similarity of papers. However, these approaches
do not take into account the information seeking behaviors that users commonly
employ when searching for literature. Such behaviors include not only
evaluating the overall similarity among papers, but also focusing on specific
sections, such as the method section, to ensure that the approach aligns with
the user's interests. In this paper, we propose a content-based filtering
recommendation method that takes this information seeking behavior into
account. Specifically, in addition to considering the overall content of a
paper, our approach also takes into account three specific sections
(background, method, and results) and assigns weights to them to better reflect
user preferences. We conduct offline evaluations on the publicly available DBLP
dataset, and the results demonstrate that the proposed method outperforms six
baseline methods in terms of precision, recall, F1-score, MRR, and MAP.
|
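The section-weighted scoring the abstract above proposes reduces to a weighted sum of per-section similarities; a sketch follows (the weight values shown are illustrative, not the paper's tuned settings). Each paper and query is represented by one embedding for the overall text and one per section.

```python
import numpy as np

def weighted_paper_score(query_vecs, paper_vecs, weights):
    """Section-weighted content-based score: combine cosine similarities
    of the overall text and of the background, method, and results
    sections, reflecting how users actually read when seeking papers."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    sections = ["overall", "background", "method", "results"]
    return sum(weights[s] * cos(query_vecs[s], paper_vecs[s]) for s in sections)

# Illustrative weights emphasizing the method section:
w = {"overall": 0.4, "background": 0.1, "method": 0.35, "results": 0.15}
```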
2504.02382 | Yudi Sang | Yudi Sang, Yanzhen Liu, Sutuke Yibulayimu, Yunning Wang, Benjamin D.
Killeen, Mingxu Liu, Ping-Cheng Ku, Ole Johannsen, Karol Gotkowski,
Maximilian Zenk, Klaus Maier-Hein, Fabian Isensee, Peiyan Yue, Yi Wang,
Haidong Yu, Zhaohong Pan, Yutong He, Xiaokun Liang, Daiqi Liu, Fuxin Fan,
Artur Jurgas, Andrzej Skalski, Yuxi Ma, Jing Yang, Szymon P{\l}otka, Rafa{\l}
Litka, Gang Zhu, Yingchun Song, Mathias Unberath, Mehran Armand, Dan Ruan, S.
Kevin Zhou, Qiyong Cao, Chunpeng Zhao, Xinbao Wu, and Yu Wang | Benchmark of Segmentation Techniques for Pelvic Fracture in CT and
X-ray: Summary of the PENGWIN 2024 Challenge | PENGWIN 2024 Challenge Report | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The segmentation of pelvic fracture fragments in CT and X-ray images is
crucial for trauma diagnosis, surgical planning, and intraoperative guidance.
However, accurately and efficiently delineating the bone fragments remains a
significant challenge due to complex anatomy and imaging limitations. The
PENGWIN challenge, organized as a MICCAI 2024 satellite event, aimed to advance
automated fracture segmentation by benchmarking state-of-the-art algorithms on
these complex tasks. A diverse dataset of 150 CT scans was collected from
multiple clinical centers, and a large set of simulated X-ray images was
generated using the DeepDRR method. Final submissions from 16 teams worldwide
were evaluated under a rigorous multi-metric testing scheme. The top-performing
CT algorithm achieved an average fragment-wise intersection over union (IoU) of
0.930, demonstrating satisfactory accuracy. However, in the X-ray task, the
best algorithm attained an IoU of 0.774, highlighting the greater challenges
posed by overlapping anatomical structures. Beyond the quantitative evaluation,
the challenge revealed methodological diversity in algorithm design. Variations
in instance representation, such as primary-secondary classification versus
boundary-core separation, led to differing segmentation strategies. Despite
promising results, the challenge also exposed inherent uncertainties in
fragment definition, particularly in cases of incomplete fractures. These
findings suggest that interactive segmentation approaches, integrating human
decision-making with task-relevant information, may be essential for improving
model reliability and clinical applicability.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 08:19:36 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Sang",
"Yudi",
""
],
[
"Liu",
"Yanzhen",
""
],
[
"Yibulayimu",
"Sutuke",
""
],
[
"Wang",
"Yunning",
""
],
[
"Killeen",
"Benjamin D.",
""
],
[
"Liu",
"Mingxu",
""
],
[
"Ku",
"Ping-Cheng",
""
],
[
"Johannsen",
"Ole",
""
],
[
"Gotkowski",
"Karol",
""
],
[
"Zenk",
"Maximilian",
""
],
[
"Maier-Hein",
"Klaus",
""
],
[
"Isensee",
"Fabian",
""
],
[
"Yue",
"Peiyan",
""
],
[
"Wang",
"Yi",
""
],
[
"Yu",
"Haidong",
""
],
[
"Pan",
"Zhaohong",
""
],
[
"He",
"Yutong",
""
],
[
"Liang",
"Xiaokun",
""
],
[
"Liu",
"Daiqi",
""
],
[
"Fan",
"Fuxin",
""
],
[
"Jurgas",
"Artur",
""
],
[
"Skalski",
"Andrzej",
""
],
[
"Ma",
"Yuxi",
""
],
[
"Yang",
"Jing",
""
],
[
"Płotka",
"Szymon",
""
],
[
"Litka",
"Rafał",
""
],
[
"Zhu",
"Gang",
""
],
[
"Song",
"Yingchun",
""
],
[
"Unberath",
"Mathias",
""
],
[
"Armand",
"Mehran",
""
],
[
"Ruan",
"Dan",
""
],
[
"Zhou",
"S. Kevin",
""
],
[
"Cao",
"Qiyong",
""
],
[
"Zhao",
"Chunpeng",
""
],
[
"Wu",
"Xinbao",
""
],
[
"Wang",
"Yu",
""
]
] | TITLE: Benchmark of Segmentation Techniques for Pelvic Fracture in CT and
X-ray: Summary of the PENGWIN 2024 Challenge
ABSTRACT: The segmentation of pelvic fracture fragments in CT and X-ray images is
crucial for trauma diagnosis, surgical planning, and intraoperative guidance.
However, accurately and efficiently delineating the bone fragments remains a
significant challenge due to complex anatomy and imaging limitations. The
PENGWIN challenge, organized as a MICCAI 2024 satellite event, aimed to advance
automated fracture segmentation by benchmarking state-of-the-art algorithms on
these complex tasks. A diverse dataset of 150 CT scans was collected from
multiple clinical centers, and a large set of simulated X-ray images was
generated using the DeepDRR method. Final submissions from 16 teams worldwide
were evaluated under a rigorous multi-metric testing scheme. The top-performing
CT algorithm achieved an average fragment-wise intersection over union (IoU) of
0.930, demonstrating satisfactory accuracy. However, in the X-ray task, the
best algorithm attained an IoU of 0.774, highlighting the greater challenges
posed by overlapping anatomical structures. Beyond the quantitative evaluation,
the challenge revealed methodological diversity in algorithm design. Variations
in instance representation, such as primary-secondary classification versus
boundary-core separation, led to differing segmentation strategies. Despite
promising results, the challenge also exposed inherent uncertainties in
fragment definition, particularly in cases of incomplete fractures. These
findings suggest that interactive segmentation approaches, integrating human
decision-making with task-relevant information, may be essential for improving
model reliability and clinical applicability.
|
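The headline PENGWIN metric above, average fragment-wise IoU, is straightforward to compute once fragments are matched; the sketch below matches fragments by shared label id for simplicity, whereas the challenge used a rigorous multi-metric scheme.

```python
import numpy as np

def fragment_wise_iou(pred, gt):
    """Average IoU over bone fragments in integer-labeled masks,
    where identical label ids in pred and gt denote the same fragment."""
    ious = []
    for lbl in np.unique(gt):
        if lbl == 0:                        # skip background
            continue
        g, p = gt == lbl, pred == lbl
        inter = np.logical_and(g, p).sum()
        union = np.logical_or(g, p).sum()
        ious.append(inter / union if union else 1.0)
    return float(np.mean(ious)) if ious else 1.0
```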
2504.02386 | Kim Sung-Bin | Kim Sung-Bin, Jeongsoo Choi, Puyuan Peng, Joon Son Chung, Tae-Hyun Oh,
David Harwath | VoiceCraft-Dub: Automated Video Dubbing with Neural Codec Language
Models | https://voicecraft-dub.github.io/ | null | null | null | cs.CV eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present VoiceCraft-Dub, a novel approach for automated video dubbing that
synthesizes high-quality speech from text and facial cues. This task has broad
applications in filmmaking, multimedia creation, and assisting voice-impaired
individuals. Building on the success of Neural Codec Language Models (NCLMs)
for speech synthesis, our method extends their capabilities by incorporating
video features, ensuring that synthesized speech is time-synchronized and
expressively aligned with facial movements while preserving natural prosody. To
inject visual cues, we design adapters to align facial features with the NCLM
token space and introduce audio-visual fusion layers to merge audio-visual
information within the NCLM framework. Additionally, we curate CelebV-Dub, a
new dataset of expressive, real-world videos specifically designed for
automated video dubbing. Extensive experiments show that our model achieves
high-quality, intelligible, and natural speech synthesis with accurate lip
synchronization, outperforming existing methods in human perception and
performing favorably in objective evaluations. We also adapt VoiceCraft-Dub for
the video-to-speech task, demonstrating its versatility for various
applications.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 08:24:47 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Sung-Bin",
"Kim",
""
],
[
"Choi",
"Jeongsoo",
""
],
[
"Peng",
"Puyuan",
""
],
[
"Chung",
"Joon Son",
""
],
[
"Oh",
"Tae-Hyun",
""
],
[
"Harwath",
"David",
""
]
] | TITLE: VoiceCraft-Dub: Automated Video Dubbing with Neural Codec Language
Models
ABSTRACT: We present VoiceCraft-Dub, a novel approach for automated video dubbing that
synthesizes high-quality speech from text and facial cues. This task has broad
applications in filmmaking, multimedia creation, and assisting voice-impaired
individuals. Building on the success of Neural Codec Language Models (NCLMs)
for speech synthesis, our method extends their capabilities by incorporating
video features, ensuring that synthesized speech is time-synchronized and
expressively aligned with facial movements while preserving natural prosody. To
inject visual cues, we design adapters to align facial features with the NCLM
token space and introduce audio-visual fusion layers to merge audio-visual
information within the NCLM framework. Additionally, we curate CelebV-Dub, a
new dataset of expressive, real-world videos specifically designed for
automated video dubbing. Extensive experiments show that our model achieves
high-quality, intelligible, and natural speech synthesis with accurate lip
synchronization, outperforming existing methods in human perception and
performing favorably in objective evaluations. We also adapt VoiceCraft-Dub for
the video-to-speech task, demonstrating its versatility for various
applications.
|
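The adapter mechanism in the VoiceCraft-Dub abstract above, which maps facial features into the NCLM token space, can be sketched as a small projection network (dimensions and the fusion site are assumptions, not the paper's configuration); the projected frames would then be merged with audio tokens by the audio-visual fusion layers.

```python
import torch
import torch.nn as nn

class FaceAdapter(nn.Module):
    """Minimal adapter projecting per-frame facial features into an
    NCLM's token embedding space."""
    def __init__(self, face_dim=512, token_dim=1024):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(face_dim, token_dim), nn.GELU(),
                                  nn.Linear(token_dim, token_dim))

    def forward(self, face_feats):         # (batch, frames, face_dim)
        return self.proj(face_feats)       # (batch, frames, token_dim)

adapter = FaceAdapter()
print(adapter(torch.randn(2, 75, 512)).shape)   # torch.Size([2, 75, 1024])
```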
2504.02403 | Max M\"uller-Eberstein | Max M\"uller-Eberstein, Mike Zhang, Elisa Bassignana, Peter Brunsgaard
Trolle and Rob van der Goot | DaKultur: Evaluating the Cultural Awareness of Language Models for
Danish with Native Speakers | Accepted at C3NLP at NAACL | null | null | null | cs.CL cs.CY cs.HC | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have seen widespread societal adoption. However,
while they are able to interact with users in languages beyond English, they
have been shown to lack cultural awareness, providing anglocentric or
inappropriate responses for underrepresented language communities. To
investigate this gap and disentangle linguistic versus cultural proficiency, we
conduct the first cultural evaluation study for the mid-resource language of
Danish, in which native speakers prompt different models to solve tasks
requiring cultural awareness. Our analysis of the resulting 1,038 interactions
from 63 demographically diverse participants highlights open challenges to
cultural adaptation: Particularly, how currently employed automatically
translated data are insufficient to train or measure cultural adaptation, and
how training on native-speaker data can more than double response acceptance
rates. We release our study data as DaKultur - the first native Danish cultural
awareness dataset.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 08:52:42 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Müller-Eberstein",
"Max",
""
],
[
"Zhang",
"Mike",
""
],
[
"Bassignana",
"Elisa",
""
],
[
"Trolle",
"Peter Brunsgaard",
""
],
[
"van der Goot",
"Rob",
""
]
] | TITLE: DaKultur: Evaluating the Cultural Awareness of Language Models for
Danish with Native Speakers
ABSTRACT: Large Language Models (LLMs) have seen widespread societal adoption. However,
while they are able to interact with users in languages beyond English, they
have been shown to lack cultural awareness, providing anglocentric or
inappropriate responses for underrepresented language communities. To
investigate this gap and disentangle linguistic versus cultural proficiency, we
conduct the first cultural evaluation study for the mid-resource language of
Danish, in which native speakers prompt different models to solve tasks
requiring cultural awareness. Our analysis of the resulting 1,038 interactions
from 63 demographically diverse participants highlights open challenges to
cultural adaptation: Particularly, how currently employed automatically
translated data are insufficient to train or measure cultural adaptation, and
how training on native-speaker data can more than double response acceptance
rates. We release our study data as DaKultur - the first native Danish cultural
awareness dataset.
|
2504.02404 | Xiang Feng | Xiang Feng, Wentao Jiang, Zengmao Wang, Yong Luo, Pingbo Xu, Baosheng
Yu, Hua Jin, Bo Du, Jing Zhang | AnesBench: Multi-Dimensional Evaluation of LLM Reasoning in
Anesthesiology | 23 pages, 9 figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The application of large language models (LLMs) in the medical field has
gained significant attention, yet their reasoning capabilities in more
specialized domains like anesthesiology remain underexplored. In this paper, we
systematically evaluate the reasoning capabilities of LLMs in anesthesiology
and analyze key factors influencing their performance. To this end, we
introduce AnesBench, a cross-lingual benchmark designed to assess
anesthesiology-related reasoning across three levels: factual retrieval (System
1), hybrid reasoning (System 1.x), and complex decision-making (System 2).
Through extensive experiments, we first explore how model characteristics,
including model scale, Chain of Thought (CoT) length, and language
transferability, affect reasoning performance. Then, we further evaluate the
effectiveness of different training strategies, leveraging our curated
anesthesiology-related dataset, including continuous pre-training (CPT) and
supervised fine-tuning (SFT). Additionally, we investigate how test-time
reasoning techniques, such as Best-of-N sampling and beam search,
influence reasoning performance, and assess the impact of reasoning-enhanced
model distillation, specifically DeepSeek-R1. We will publicly release
AnesBench, along with our CPT and SFT training datasets and evaluation code at
https://github.com/MiliLab/AnesBench.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 08:54:23 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Feng",
"Xiang",
""
],
[
"Jiang",
"Wentao",
""
],
[
"Wang",
"Zengmao",
""
],
[
"Luo",
"Yong",
""
],
[
"Xu",
"Pingbo",
""
],
[
"Yu",
"Baosheng",
""
],
[
"Jin",
"Hua",
""
],
[
"Du",
"Bo",
""
],
[
"Zhang",
"Jing",
""
]
] | TITLE: AnesBench: Multi-Dimensional Evaluation of LLM Reasoning in
Anesthesiology
ABSTRACT: The application of large language models (LLMs) in the medical field has
gained significant attention, yet their reasoning capabilities in more
specialized domains like anesthesiology remain underexplored. In this paper, we
systematically evaluate the reasoning capabilities of LLMs in anesthesiology
and analyze key factors influencing their performance. To this end, we
introduce AnesBench, a cross-lingual benchmark designed to assess
anesthesiology-related reasoning across three levels: factual retrieval (System
1), hybrid reasoning (System 1.x), and complex decision-making (System 2).
Through extensive experiments, we first explore how model characteristics,
including model scale, Chain of Thought (CoT) length, and language
transferability, affect reasoning performance. Then, we further evaluate the
effectiveness of different training strategies, leveraging our curated
anesthesiology-related dataset, including continuous pre-training (CPT) and
supervised fine-tuning (SFT). Additionally, we investigate how test-time
reasoning techniques, such as Best-of-N sampling and beam search,
influence reasoning performance, and assess the impact of reasoning-enhanced
model distillation, specifically DeepSeek-R1. We will publicly release
AnesBench, along with our CPT and SFT training datasets and evaluation code at
https://github.com/MiliLab/AnesBench.
|
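The AnesBench record above evaluates Best-of-N sampling as a test-time reasoning technique. A generic, self-contained sketch of Best-of-N follows; the toy generator and scorer are stand-ins for an LLM and a verifier, not anything from the paper.

```python
# Generic Best-of-N sampling, as named in the AnesBench abstract.
# The generator/scorer below are toy stand-ins, not the paper's models.
import random

def best_of_n(generate, score, n: int = 8):
    """Draw n candidate answers and keep the highest-scoring one."""
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score)

# Toy example: "answers" are numbers, the verifier prefers values near 42.
random.seed(0)
toy_generate = lambda: random.uniform(0, 100)
toy_score = lambda x: -abs(x - 42)            # higher is better
print(best_of_n(toy_generate, toy_score, n=16))
```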
2504.02408 | Naomi Silverstein | Naomi Silverstein, Efrat Leibowitz, Ron Beloosesky, Haim Azhari | Translation of Fetal Brain Ultrasound Images into Pseudo-MRI Images
using Artificial Intelligence | 13 pages, 7 figures | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ultrasound is a widely accessible and cost-effective medical imaging tool
commonly used for prenatal evaluation of the fetal brain. However, it has
limitations, particularly in the third trimester, where the complexity of the
fetal brain requires high image quality for extracting quantitative data. In
contrast, magnetic resonance imaging (MRI) offers superior image quality and
tissue differentiation but is less available, expensive, and requires
time-consuming acquisition. Thus, transforming ultrasonic images into an
MRI-mimicking display may be advantageous and allow better tissue anatomy
presentation. To address this goal, we have examined the use of artificial
intelligence, implementing a diffusion model renowned for generating
high-quality images. The proposed method, termed "Dual Diffusion Imposed
Correlation" (DDIC), leverages a diffusion-based translation methodology,
assuming a shared latent space between ultrasound and MRI domains. Model
training was performed using the "HC18" dataset for ultrasound and the "CRL
fetal brain atlas" along with the "FeTA" datasets for MRI. The generated
pseudo-MRI images provide notable improvements in visual discrimination of
brain tissue, especially in the lateral ventricles and the Sylvian fissure,
characterized by enhanced contrast clarity. Improvement was demonstrated in
mutual information, peak signal-to-noise ratio, Fr\'echet Inception Distance,
and contrast-to-noise ratio. Findings from these evaluations indicate
statistically significant superior performance of the DDIC compared to other
translation methodologies. In addition, a Medical Opinion Test was conducted
with 5 gynecologists. The results demonstrated display improvement in 81% of
the tested images. In conclusion, the presented pseudo-MRI images hold the
potential for streamlining diagnosis and enhancing clinical outcomes through
improved representation.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 08:59:33 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Silverstein",
"Naomi",
""
],
[
"Leibowitz",
"Efrat",
""
],
[
"Beloosesky",
"Ron",
""
],
[
"Azhari",
"Haim",
""
]
] | TITLE: Translation of Fetal Brain Ultrasound Images into Pseudo-MRI Images
using Artificial Intelligence
ABSTRACT: Ultrasound is a widely accessible and cost-effective medical imaging tool
commonly used for prenatal evaluation of the fetal brain. However, it has
limitations, particularly in the third trimester, where the complexity of the
fetal brain requires high image quality for extracting quantitative data. In
contrast, magnetic resonance imaging (MRI) offers superior image quality and
tissue differentiation but is less available, expensive, and requires
time-consuming acquisition. Thus, transforming ultrasonic images into an
MRI-mimicking display may be advantageous and allow better tissue anatomy
presentation. To address this goal, we have examined the use of artificial
intelligence, implementing a diffusion model renowned for generating
high-quality images. The proposed method, termed "Dual Diffusion Imposed
Correlation" (DDIC), leverages a diffusion-based translation methodology,
assuming a shared latent space between ultrasound and MRI domains. Model
training was performed using the "HC18" dataset for ultrasound and the "CRL
fetal brain atlas" along with the "FeTA" datasets for MRI. The generated
pseudo-MRI images provide notable improvements in visual discrimination of
brain tissue, especially in the lateral ventricles and the Sylvian fissure,
characterized by enhanced contrast clarity. Improvement was demonstrated in
mutual information, peak signal-to-noise ratio, Fr\'echet Inception Distance,
and contrast-to-noise ratio. Findings from these evaluations indicate
statistically significant superior performance of the DDIC compared to other
translation methodologies. In addition, a Medical Opinion Test was conducted
with 5 gynecologists. The results demonstrated display improvement in 81% of
the tested images. In conclusion, the presented pseudo-MRI images hold the
potential for streamlining diagnosis and enhancing clinical outcomes through
improved representation.
|
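The abstract above only says that DDIC is diffusion-based and assumes a shared latent space between the ultrasound and MRI domains. The sketch below illustrates the general family it belongs to (SDEdit-style translation: partially noise the source image, then denoise with a target-domain model); the schedule, starting timestep, and stub denoiser are assumptions, not the DDIC algorithm.

```python
# Generic diffusion-based domain translation sketch (SDEdit-style), an
# illustration of the *family* of methods the DDIC abstract refers to.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def eps_predictor(x, t):
    """Stand-in for a trained MRI-domain noise-prediction network."""
    return np.zeros_like(x)                    # placeholder

def translate(x_src, t_start=400, rng=np.random.default_rng(0)):
    # Forward-noise the source image to timestep t_start (shared latent).
    eps = rng.standard_normal(x_src.shape)
    x = np.sqrt(alpha_bar[t_start]) * x_src + np.sqrt(1 - alpha_bar[t_start]) * eps
    # Reverse diffusion with the target-domain denoiser.
    for t in range(t_start, 0, -1):
        eps_hat = eps_predictor(x, t)
        x = (x - (1 - alphas[t]) / np.sqrt(1 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 1:
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

print(translate(np.zeros((64, 64))).shape)     # (64, 64)
```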
2504.02416 | Peifu Liu | Peifu Liu, Huiyan Bai, Tingfa Xu, Jihui Wang, Huan Chen, Jianan Li | Hyperspectral Remote Sensing Images Salient Object Detection: The First
Benchmark Dataset and Baseline | Accepted by TGRS 2025 | null | 10.1109/TGRS.2025.3558189 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The objective of hyperspectral remote sensing image salient object detection
(HRSI-SOD) is to identify objects or regions that exhibit distinct spectrum
contrasts with the background. This area holds significant promise for
practical applications; however, progress has been limited by a notable
scarcity of dedicated datasets and methodologies. To bridge this gap and
stimulate further research, we introduce the first HRSI-SOD dataset, termed
HRSSD, which includes 704 hyperspectral images and 5327 pixel-level annotated
salient objects. The HRSSD dataset poses substantial challenges for salient
object detection algorithms due to large scale variation, diverse
foreground-background relations, and multi-salient objects. Additionally, we
propose an innovative and efficient baseline model for HRSI-SOD, termed the
Deep Spectral Saliency Network (DSSN). The core of DSSN is the Cross-level
Saliency Assessment Block, which performs pixel-wise attention and evaluates
the contributions of multi-scale similarity maps at each spatial location,
effectively reducing erroneous responses in cluttered regions and emphasizing
salient regions across scales. Additionally, the High-resolution Fusion Module
combines a bottom-up fusion strategy and learned spatial upsampling to leverage
the strengths of multi-scale saliency maps, ensuring accurate localization of
small objects. Experiments on the HRSSD dataset robustly validate the
superiority of DSSN, underscoring the critical need for specialized datasets
and methodologies in this domain. Further evaluations on the HSOD-BIT and
HS-SOD datasets demonstrate the generalizability of the proposed method. The
dataset and source code are publicly available at
https://github.com/laprf/HRSSD.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 09:12:42 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liu",
"Peifu",
""
],
[
"Bai",
"Huiyan",
""
],
[
"Xu",
"Tingfa",
""
],
[
"Wang",
"Jihui",
""
],
[
"Chen",
"Huan",
""
],
[
"Li",
"Jianan",
""
]
] | TITLE: Hyperspectral Remote Sensing Images Salient Object Detection: The First
Benchmark Dataset and Baseline
ABSTRACT: The objective of hyperspectral remote sensing image salient object detection
(HRSI-SOD) is to identify objects or regions that exhibit distinct spectrum
contrasts with the background. This area holds significant promise for
practical applications; however, progress has been limited by a notable
scarcity of dedicated datasets and methodologies. To bridge this gap and
stimulate further research, we introduce the first HRSI-SOD dataset, termed
HRSSD, which includes 704 hyperspectral images and 5327 pixel-level annotated
salient objects. The HRSSD dataset poses substantial challenges for salient
object detection algorithms due to large scale variation, diverse
foreground-background relations, and multi-salient objects. Additionally, we
propose an innovative and efficient baseline model for HRSI-SOD, termed the
Deep Spectral Saliency Network (DSSN). The core of DSSN is the Cross-level
Saliency Assessment Block, which performs pixel-wise attention and evaluates
the contributions of multi-scale similarity maps at each spatial location,
effectively reducing erroneous responses in cluttered regions and emphasizing
salient regions across scales. Additionally, the High-resolution Fusion Module
combines a bottom-up fusion strategy and learned spatial upsampling to leverage
the strengths of multi-scale saliency maps, ensuring accurate localization of
small objects. Experiments on the HRSSD dataset robustly validate the
superiority of DSSN, underscoring the critical need for specialized datasets
and methodologies in this domain. Further evaluations on the HSOD-BIT and
HS-SOD datasets demonstrate the generalizability of the proposed method. The
dataset and source code are publicly available at
https://github.com/laprf/HRSSD.
|
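For intuition on the Cross-level Saliency Assessment idea in the record above (per-pixel weighting of multi-scale similarity maps), here is a minimal sketch; the softmax weighting and shapes are illustrative assumptions rather than the DSSN design.

```python
# Minimal sketch of weighting multi-scale similarity maps per pixel,
# loosely inspired by the Cross-level Saliency Assessment idea above.
import torch
import torch.nn.functional as F

def fuse_similarity_maps(sim_maps: torch.Tensor) -> torch.Tensor:
    """sim_maps: (S, H, W) similarity maps from S scales.
    A softmax over the scale axis gives per-pixel attention weights,
    so each location picks the scales it trusts most."""
    weights = F.softmax(sim_maps, dim=0)       # (S, H, W), sums to 1 per pixel
    return (weights * sim_maps).sum(dim=0)     # (H, W) fused saliency

sal = fuse_similarity_maps(torch.randn(4, 32, 32))
print(sal.shape)                               # torch.Size([32, 32])
```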
2504.02417 | Lili Liang | Lili Liang, Guanglu Sun | Leveraging Static Relationships for Intra-Type and Inter-Type Message
Passing in Video Question Answering | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Video Question Answering (VideoQA) is an important research direction in the
field of artificial intelligence, enabling machines to understand video content
and perform reasoning and answering based on natural language questions.
Although methods based on static relationship reasoning have made certain
progress, there are still deficiencies in the accuracy of static relationship
recognition and representation, and they have not fully utilized the static
relationship information in videos for in-depth reasoning and analysis.
Therefore, this paper proposes a reasoning method for intra-type and inter-type
message passing based on static relationships. This method constructs a dual
graph for intra-type message passing reasoning and builds a heterogeneous graph
based on static relationships for inter-type message passing reasoning. The
intra-type message passing reasoning model captures the neighborhood
information of targets and relationships related to the question in the dual
graph, updating the dual graph to obtain intra-type clues for answering the
question. The inter-type message passing reasoning model captures the
neighborhood information of targets and relationships from different categories
related to the question in the heterogeneous graph, updating the heterogeneous
graph to obtain inter-type clues for answering the question. Finally, the
answers are inferred by combining the intra-type and inter-type clues based on
static relationships. Experimental results on the ANetQA and Next-QA datasets
demonstrate the effectiveness of this method.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 09:14:41 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liang",
"Lili",
""
],
[
"Sun",
"Guanglu",
""
]
] | TITLE: Leveraging Static Relationships for Intra-Type and Inter-Type Message
Passing in Video Question Answering
ABSTRACT: Video Question Answering (VideoQA) is an important research direction in the
field of artificial intelligence, enabling machines to understand video content
and perform reasoning and answering based on natural language questions.
Although methods based on static relationship reasoning have made certain
progress, there are still deficiencies in the accuracy of static relationship
recognition and representation, and they have not fully utilized the static
relationship information in videos for in-depth reasoning and analysis.
Therefore, this paper proposes a reasoning method for intra-type and inter-type
message passing based on static relationships. This method constructs a dual
graph for intra-type message passing reasoning and builds a heterogeneous graph
based on static relationships for inter-type message passing reasoning. The
intra-type message passing reasoning model captures the neighborhood
information of targets and relationships related to the question in the dual
graph, updating the dual graph to obtain intra-type clues for answering the
question. The inter-type message passing reasoning model captures the
neighborhood information of targets and relationships from different categories
related to the question in the heterogeneous graph, updating the heterogeneous
graph to obtain inter-type clues for answering the question. Finally, the
answers are inferred by combining the intra-type and inter-type clues based on
static relationships. Experimental results on the ANetQA and Next-QA datasets
demonstrate the effectiveness of this method.
|
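Both reasoning models in the record above reduce to rounds of message passing over a graph. A bare-bones sketch of one such round follows; the mean-aggregation update is a generic illustration, not the paper's intra-/inter-type operators.

```python
# One round of message passing on a small graph, the basic operation
# behind the intra-/inter-type reasoning described above.
import numpy as np

def message_pass(node_feats: np.ndarray, adj: np.ndarray) -> np.ndarray:
    """node_feats: (N, D); adj: (N, N) binary adjacency (no self loops)."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    messages = adj @ node_feats / deg          # mean of neighbor features
    return 0.5 * node_feats + 0.5 * messages   # blend self and neighborhood

feats = np.eye(4)                              # 4 nodes, one-hot features
adj = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
print(message_pass(feats, adj))
```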
2504.02437 | Wenjing Ke | Renwu Li, Wenjing Ke, Dong Li, Lu Tian, Emad Barsoum | MonoGS++: Fast and Accurate Monocular RGB Gaussian SLAM | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present MonoGS++, a novel fast and accurate Simultaneous Localization and
Mapping (SLAM) method that leverages 3D Gaussian representations and operates
solely on RGB inputs. While previous 3D Gaussian Splatting (GS)-based methods
largely depended on depth sensors, our approach reduces the hardware dependency
and only requires RGB input, leveraging online visual odometry (VO) to generate
sparse point clouds in real-time. To reduce redundancy and enhance the quality
of 3D scene reconstruction, we implemented a series of methodological
enhancements in 3D Gaussian mapping. Firstly, we introduced dynamic 3D Gaussian
insertion to avoid adding redundant Gaussians in previously well-reconstructed
areas. Secondly, we introduced a clarity-enhancing Gaussian densification
module and planar regularization to handle texture-less areas and flat surfaces
better. We achieved precise camera tracking results on both the synthetic
Replica and real-world TUM-RGBD datasets, comparable to those of the
state-of-the-art. Additionally, our method realized a significant 5.57x
improvement in frames per second (fps) over the previous state-of-the-art,
MonoGS.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 09:51:51 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Li",
"Renwu",
""
],
[
"Ke",
"Wenjing",
""
],
[
"Li",
"Dong",
""
],
[
"Tian",
"Lu",
""
],
[
"Barsoum",
"Emad",
""
]
] | TITLE: MonoGS++: Fast and Accurate Monocular RGB Gaussian SLAM
ABSTRACT: We present MonoGS++, a novel fast and accurate Simultaneous Localization and
Mapping (SLAM) method that leverages 3D Gaussian representations and operates
solely on RGB inputs. While previous 3D Gaussian Splatting (GS)-based methods
largely depended on depth sensors, our approach reduces the hardware dependency
and only requires RGB input, leveraging online visual odometry (VO) to generate
sparse point clouds in real-time. To reduce redundancy and enhance the quality
of 3D scene reconstruction, we implemented a series of methodological
enhancements in 3D Gaussian mapping. Firstly, we introduced dynamic 3D Gaussian
insertion to avoid adding redundant Gaussians in previously well-reconstructed
areas. Secondly, we introduced a clarity-enhancing Gaussian densification
module and planar regularization to handle texture-less areas and flat surfaces
better. We achieved precise camera tracking results on both the synthetic
Replica and real-world TUM-RGBD datasets, comparable to those of the
state-of-the-art. Additionally, our method realized a significant 5.57x
improvement in frames per second (fps) over the previous state-of-the-art,
MonoGS.
|
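The dynamic 3D Gaussian insertion described above amounts to suppressing new Gaussians in already well-reconstructed areas. A possible density test is sketched below; the nearest-neighbor criterion and threshold are assumptions, since the abstract does not specify the rule.

```python
# Sketch of "don't insert new Gaussians where the map is already dense":
# keep only candidate points whose nearest existing Gaussian is farther
# than a threshold. Not the paper's actual insertion criterion.
import numpy as np
from scipy.spatial import cKDTree

def filter_new_points(existing: np.ndarray, candidates: np.ndarray,
                      min_dist: float = 0.05) -> np.ndarray:
    tree = cKDTree(existing)                    # existing Gaussian centers
    d, _ = tree.query(candidates)               # distance to nearest center
    return candidates[d > min_dist]             # insert only in sparse areas

rng = np.random.default_rng(0)
existing = rng.random((500, 3))
candidates = rng.random((50, 3))
print(len(filter_new_points(existing, candidates)))
```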
2504.02454 | Changshuo Wang | Changshuo Wang and Shuting He and Xiang Fang and Meiqing Wu and
Siew-Kei Lam and Prayag Tiwari | Taylor Series-Inspired Local Structure Fitting Network for Few-shot
Point Cloud Semantic Segmentation | null | AAAI 2025 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Few-shot point cloud semantic segmentation aims to accurately segment
"unseen" new categories in point cloud scenes using limited labeled data.
However, pretraining-based methods not only introduce excessive time overhead
but also overlook the local structure representation among irregular point
clouds. To address these issues, we propose a pretraining-free local structure
fitting network for few-shot point cloud semantic segmentation, named
TaylorSeg. Specifically, inspired by Taylor series, we treat the local
structure representation of irregular point clouds as a polynomial fitting
problem and propose a novel local structure fitting convolution, called
TaylorConv. This convolution learns the low-order basic information and
high-order refined information of point clouds from explicit encoding of local
geometric structures. Then, using TaylorConv as the basic component, we
construct two variants of TaylorSeg: a non-parametric TaylorSeg-NN and a
parametric TaylorSeg-PN. The former can achieve performance comparable to
existing parametric models without pretraining. For the latter, we equip it
with an Adaptive Push-Pull (APP) module to mitigate the feature distribution
differences between the query set and the support set. Extensive experiments
validate the effectiveness of the proposed method. Notably, under the 2-way
1-shot setting, TaylorSeg-PN achieves improvements of +2.28% and +4.37% mIoU on
the S3DIS and ScanNet datasets respectively, compared to the previous
state-of-the-art methods. Our code is available at
https://github.com/changshuowang/TaylorSeg.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 10:19:06 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Wang",
"Changshuo",
""
],
[
"He",
"Shuting",
""
],
[
"Fang",
"Xiang",
""
],
[
"Wu",
"Meiqing",
""
],
[
"Lam",
"Siew-Kei",
""
],
[
"Tiwari",
"Prayag",
""
]
] | TITLE: Taylor Series-Inspired Local Structure Fitting Network for Few-shot
Point Cloud Semantic Segmentation
ABSTRACT: Few-shot point cloud semantic segmentation aims to accurately segment
"unseen" new categories in point cloud scenes using limited labeled data.
However, pretraining-based methods not only introduce excessive time overhead
but also overlook the local structure representation among irregular point
clouds. To address these issues, we propose a pretraining-free local structure
fitting network for few-shot point cloud semantic segmentation, named
TaylorSeg. Specifically, inspired by Taylor series, we treat the local
structure representation of irregular point clouds as a polynomial fitting
problem and propose a novel local structure fitting convolution, called
TaylorConv. This convolution learns the low-order basic information and
high-order refined information of point clouds from explicit encoding of local
geometric structures. Then, using TaylorConv as the basic component, we
construct two variants of TaylorSeg: a non-parametric TaylorSeg-NN and a
parametric TaylorSeg-PN. The former can achieve performance comparable to
existing parametric models without pretraining. For the latter, we equip it
with an Adaptive Push-Pull (APP) module to mitigate the feature distribution
differences between the query set and the support set. Extensive experiments
validate the effectiveness of the proposed method. Notably, under the 2-way
1-shot setting, TaylorSeg-PN achieves improvements of +2.28% and +4.37% mIoU on
the S3DIS and ScanNet datasets respectively, compared to the previous
state-of-the-art methods. Our code is available at
https://github.com/changshuowang/TaylorSeg.
|
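The record above frames local point-cloud structure as polynomial (Taylor-style) fitting. The sketch below shows that underlying idea with a plain least-squares fit of a second-order surface over a point neighborhood; TaylorConv itself is a learned layer and is not reproduced here.

```python
# Sketch of treating local structure as polynomial (Taylor-style) fitting:
# fit z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2 over a neighborhood
# by least squares. Illustrates the fitting idea only.
import numpy as np

def fit_local_taylor(neighbors: np.ndarray) -> np.ndarray:
    """neighbors: (K, 3) points, centered on the query point."""
    x, y, z = neighbors[:, 0], neighbors[:, 1], neighbors[:, 2]
    A = np.stack([np.ones_like(x), x, y, x*x, x*y, y*y], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs     # low-order terms ~ coarse shape, high-order ~ detail

rng = np.random.default_rng(1)
xy = rng.uniform(-0.1, 0.1, size=(32, 2))
z = 0.5 * xy[:, 0] ** 2 + 0.01 * rng.standard_normal(32)  # noisy paraboloid
print(fit_local_taylor(np.column_stack([xy, z])).round(2))
```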
2504.02458 | Liangbo Ning | Liangbo Ning, Wenqi Fan, Qing Li | Retrieval-Augmented Purifier for Robust LLM-Empowered Recommendation | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Large Language Model (LLM)-empowered recommender systems have
revolutionized personalized recommendation frameworks and attracted extensive
attention. Despite the remarkable success, existing LLM-empowered RecSys have
been demonstrated to be highly vulnerable to minor perturbations. To mitigate
the negative impact of such vulnerabilities, one potential solution is to
employ collaborative signals based on item-item co-occurrence to purify the
malicious collaborative knowledge from the user's historical interactions
inserted by attackers. On the other hand, due to the capabilities to expand
insufficient internal knowledge of LLMs, Retrieval-Augmented Generation (RAG)
techniques provide unprecedented opportunities to enhance the robustness of
LLM-empowered recommender systems by introducing external collaborative
knowledge. Therefore, in this paper, we propose a novel framework (RETURN) by
retrieving external collaborative signals to purify the poisoned user profiles
and enhance the robustness of LLM-empowered RecSys in a plug-and-play manner.
Specifically, retrieval-augmented perturbation positioning is proposed to
identify potential perturbations within the users' historical sequences by
retrieving external knowledge from collaborative item graphs. After that, we
further retrieve the collaborative knowledge to cleanse the perturbations by
using either deletion or replacement strategies and introduce a robust ensemble
recommendation strategy to generate final robust predictions. Extensive
experiments on three real-world datasets demonstrate the effectiveness of the
proposed RETURN.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 10:22:30 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Ning",
"Liangbo",
""
],
[
"Fan",
"Wenqi",
""
],
[
"Li",
"Qing",
""
]
] | TITLE: Retrieval-Augmented Purifier for Robust LLM-Empowered Recommendation
ABSTRACT: Recently, Large Language Model (LLM)-empowered recommender systems have
revolutionized personalized recommendation frameworks and attracted extensive
attention. Despite the remarkable success, existing LLM-empowered RecSys have
been demonstrated to be highly vulnerable to minor perturbations. To mitigate
the negative impact of such vulnerabilities, one potential solution is to
employ collaborative signals based on item-item co-occurrence to purify the
malicious collaborative knowledge from the user's historical interactions
inserted by attackers. On the other hand, due to the capabilities to expand
insufficient internal knowledge of LLMs, Retrieval-Augmented Generation (RAG)
techniques provide unprecedented opportunities to enhance the robustness of
LLM-empowered recommender systems by introducing external collaborative
knowledge. Therefore, in this paper, we propose a novel framework (RETURN) by
retrieving external collaborative signals to purify the poisoned user profiles
and enhance the robustness of LLM-empowered RecSys in a plug-and-play manner.
Specifically, retrieval-augmented perturbation positioning is proposed to
identify potential perturbations within the users' historical sequences by
retrieving external knowledge from collaborative item graphs. After that, we
further retrieve the collaborative knowledge to cleanse the perturbations by
using either deletion or replacement strategies and introduce a robust ensemble
recommendation strategy to generate final robust predictions. Extensive
experiments on three real-world datasets demonstrate the effectiveness of the
proposed RETURN.
|
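To make the purification idea above concrete: a crude stand-in flags items whose co-occurrence with the rest of the profile is anomalously low. The z-score rule and threshold below are assumptions for illustration, not RETURN's retrieval-augmented positioning.

```python
# Sketch of spotting likely injected items in a user history: score each
# item by its mean co-occurrence with the rest of the profile and drop
# outliers.
import numpy as np

def purify_profile(history, cooc: np.ndarray, z_thresh: float = -1.5):
    """history: list of item ids; cooc: (I, I) co-occurrence counts."""
    scores = np.array([
        np.mean([cooc[i, j] for j in history if j != i]) for i in history
    ])
    z = (scores - scores.mean()) / (scores.std() + 1e-8)
    return [i for i, zi in zip(history, z) if zi > z_thresh]

cooc = np.array([[0, 9, 8, 0],
                 [9, 0, 7, 1],
                 [8, 7, 0, 0],
                 [0, 1, 0, 0]], dtype=float)
print(purify_profile([0, 1, 2, 3], cooc))      # item 3 is flagged and removed
```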
2504.02463 | Vladimir Slaykovskiy | Vladimir Slaykovskiy, Maksim Zvegintsev, Yury Sakhonchyk, Hrachik
Ajamian | Evaluating AI Recruitment Sourcing Tools by Human Preference | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study introduces a benchmarking methodology designed to evaluate the
performance of AI-driven recruitment sourcing tools. We created and utilized a
dataset to perform a comparative analysis of search results generated by
leading AI-based solutions, LinkedIn Recruiter, and our proprietary system,
Pearch.ai. Human experts assessed the relevance of the returned candidates, and
an Elo rating system was applied to quantitatively measure each tool's
comparative performance. Our findings indicate that AI-driven recruitment
sourcing tools consistently outperform LinkedIn Recruiter in candidate
relevance, with Pearch.ai achieving the highest performance scores.
Furthermore, we found a strong alignment between AI-based evaluations and human
judgments, highlighting the potential for advanced AI technologies to
substantially enhance talent acquisition effectiveness. Code and supporting
data are publicly available at
https://github.com/vslaykovsky/ai-sourcing-benchmark
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 10:33:43 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Slaykovskiy",
"Vladimir",
""
],
[
"Zvegintsev",
"Maksim",
""
],
[
"Sakhonchyk",
"Yury",
""
],
[
"Ajamian",
"Hrachik",
""
]
] | TITLE: Evaluating AI Recruitment Sourcing Tools by Human Preference
ABSTRACT: This study introduces a benchmarking methodology designed to evaluate the
performance of AI-driven recruitment sourcing tools. We created and utilized a
dataset to perform a comparative analysis of search results generated by
leading AI-based solutions, LinkedIn Recruiter, and our proprietary system,
Pearch.ai. Human experts assessed the relevance of the returned candidates, and
an Elo rating system was applied to quantitatively measure each tool's
comparative performance. Our findings indicate that AI-driven recruitment
sourcing tools consistently outperform LinkedIn Recruiter in candidate
relevance, with Pearch.ai achieving the highest performance scores.
Furthermore, we found a strong alignment between AI-based evaluations and human
judgments, highlighting the potential for advanced AI technologies to
substantially enhance talent acquisition effectiveness. Code and supporting
data are publicly available at
https://github.com/vslaykovsky/ai-sourcing-benchmark
|
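The record above scores tools with an Elo rating system over pairwise human judgments. The standard Elo update it refers to looks like this (generic formula; the K-factor and initialization are conventional defaults, not the authors' settings):

```python
# Standard Elo update from pairwise human preferences.

def elo_update(r_a: float, r_b: float, a_wins: bool, k: float = 32.0):
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    score_a = 1.0 if a_wins else 0.0
    r_a += k * (score_a - expected_a)
    r_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a, r_b

# Three judgments where tool A's candidates were preferred twice.
ratings = {"A": 1000.0, "B": 1000.0}
for a_wins in (True, True, False):
    ratings["A"], ratings["B"] = elo_update(ratings["A"], ratings["B"], a_wins)
print(ratings)
```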
2504.02464 | Ruixiao Zhang | Ruixiao Zhang, Runwei Guan, Xiangyu Chen, Adam Prugel-Bennett, Xiaohao
Cai | CornerPoint3D: Look at the Nearest Corner Instead of the Center | arXiv admin note: substantial text overlap with arXiv:2407.04061 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | 3D object detection aims to predict object centers, dimensions, and rotations
from LiDAR point clouds. Despite its simplicity, LiDAR captures only the near
side of objects, making center-based detectors prone to poor localization
accuracy in cross-domain tasks with varying point distributions. Meanwhile,
existing evaluation metrics designed for single-domain assessment also suffer
from overfitting due to dataset-specific size variations. A key question
arises: Do we really need models to maintain excellent performance in the
entire 3D bounding boxes after being applied across domains? In practice, one of
our main focuses is on preventing collisions between vehicles and other
obstacles, especially in cross-domain scenarios where correctly predicting the
sizes is much more difficult. To address these issues, we rethink cross-domain
3D object detection from a practical perspective. We propose two new metrics
that evaluate a model's ability to detect objects' closer surfaces to the LiDAR
sensor. Additionally, we introduce EdgeHead, a refinement head that guides
models to focus more on learnable closer surfaces, significantly improving
cross-domain performance under both our new and traditional BEV/3D metrics.
Furthermore, we argue that predicting the nearest corner rather than the object
center enhances robustness. We propose a novel 3D object detector, coined as
CornerPoint3D, which is built upon CenterPoint and uses heatmaps to supervise
the learning and detection of the nearest corner of each object. Our proposed
methods realize a balanced trade-off between the detection quality of entire
bounding boxes and the locating accuracy of closer surfaces to the LiDAR
sensor, outperforming the traditional center-based detector CenterPoint in
multiple cross-domain tasks and providing a more practically reasonable and
robust cross-domain 3D object detection solution.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 10:33:43 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zhang",
"Ruixiao",
""
],
[
"Guan",
"Runwei",
""
],
[
"Chen",
"Xiangyu",
""
],
[
"Prugel-Bennett",
"Adam",
""
],
[
"Cai",
"Xiaohao",
""
]
] | TITLE: CornerPoint3D: Look at the Nearest Corner Instead of the Center
ABSTRACT: 3D object detection aims to predict object centers, dimensions, and rotations
from LiDAR point clouds. Despite its simplicity, LiDAR captures only the near
side of objects, making center-based detectors prone to poor localization
accuracy in cross-domain tasks with varying point distributions. Meanwhile,
existing evaluation metrics designed for single-domain assessment also suffer
from overfitting due to dataset-specific size variations. A key question
arises: Do we really need models to maintain excellent performance in the
entire 3D bounding boxes after being applied across domains? In practice, one of
our main focuses is on preventing collisions between vehicles and other
obstacles, especially in cross-domain scenarios where correctly predicting the
sizes is much more difficult. To address these issues, we rethink cross-domain
3D object detection from a practical perspective. We propose two new metrics
that evaluate a model's ability to detect objects' closer surfaces to the LiDAR
sensor. Additionally, we introduce EdgeHead, a refinement head that guides
models to focus more on learnable closer surfaces, significantly improving
cross-domain performance under both our new and traditional BEV/3D metrics.
Furthermore, we argue that predicting the nearest corner rather than the object
center enhances robustness. We propose a novel 3D object detector, coined as
CornerPoint3D, which is built upon CenterPoint and uses heatmaps to supervise
the learning and detection of the nearest corner of each object. Our proposed
methods realize a balanced trade-off between the detection quality of entire
bounding boxes and the locating accuracy of closer surfaces to the LiDAR
sensor, outperforming the traditional center-based detector CenterPoint in
multiple cross-domain tasks and providing a more practically reasonable and
robust cross-domain 3D object detection solution.
|
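The prediction target argued for above, the object corner nearest the sensor, is easy to state in code. A small geometry helper follows under the common (center, size, yaw) box convention; it is illustrative and unrelated to the paper's detector internals.

```python
# Computing the corner of a yaw-rotated 3D box nearest to the LiDAR
# origin -- the prediction target the abstract argues for.
import numpy as np

def nearest_corner(center, size, yaw):
    """center: (3,), size: (l, w, h), yaw: rotation about z. Sensor at origin."""
    l, w, h = size
    # Eight corners in the box frame.
    signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                                   for sy in (-1, 1) for sz in (-1, 1)])
    corners = signs * np.array([l, w, h]) / 2.0
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    world = corners @ R.T + np.asarray(center)
    return world[np.argmin(np.linalg.norm(world, axis=1))]

print(nearest_corner(center=[10.0, 2.0, 0.0], size=(4.0, 2.0, 1.5), yaw=0.3))
```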
2504.02477 | Xiaofeng Han | Xiaofeng Han, Shunpeng Chen, Zenghuang Fu, Zhe Feng, Lue Fan, Dong An,
Changwei Wang, Li Guo, Weiliang Meng, Xiaopeng Zhang, Rongtao Xu, Shibiao Xu | Multimodal Fusion and Vision-Language Models: A Survey for Robot Vision | 27 pages, 11 figures, survey paper submitted to Information Fusion | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robot vision has greatly benefited from advancements in multimodal fusion
techniques and vision-language models (VLMs). We systematically review the
applications of multimodal fusion in key robotic vision tasks, including
semantic scene understanding, simultaneous localization and mapping (SLAM), 3D
object detection, navigation and localization, and robot manipulation. We
compare VLMs based on large language models (LLMs) with traditional multimodal
fusion methods, analyzing their advantages, limitations, and synergies.
Additionally, we conduct an in-depth analysis of commonly used datasets,
evaluating their applicability and challenges in real-world robotic scenarios.
Furthermore, we identify critical research challenges such as cross-modal
alignment, efficient fusion strategies, real-time deployment, and domain
adaptation, and propose future research directions, including self-supervised
learning for robust multimodal representations, transformer-based fusion
architectures, and scalable multimodal frameworks. Through a comprehensive
review, comparative analysis, and forward-looking discussion, we provide a
valuable reference for advancing multimodal perception and interaction in
robotic vision. A comprehensive list of studies in this survey is available at
https://github.com/Xiaofeng-Han-Res/MF-RV.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 10:53:07 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Han",
"Xiaofeng",
""
],
[
"Chen",
"Shunpeng",
""
],
[
"Fu",
"Zenghuang",
""
],
[
"Feng",
"Zhe",
""
],
[
"Fan",
"Lue",
""
],
[
"An",
"Dong",
""
],
[
"Wang",
"Changwei",
""
],
[
"Guo",
"Li",
""
],
[
"Meng",
"Weiliang",
""
],
[
"Zhang",
"Xiaopeng",
""
],
[
"Xu",
"Rongtao",
""
],
[
"Xu",
"Shibiao",
""
]
] | TITLE: Multimodal Fusion and Vision-Language Models: A Survey for Robot Vision
ABSTRACT: Robot vision has greatly benefited from advancements in multimodal fusion
techniques and vision-language models (VLMs). We systematically review the
applications of multimodal fusion in key robotic vision tasks, including
semantic scene understanding, simultaneous localization and mapping (SLAM), 3D
object detection, navigation and localization, and robot manipulation. We
compare VLMs based on large language models (LLMs) with traditional multimodal
fusion methods, analyzing their advantages, limitations, and synergies.
Additionally, we conduct an in-depth analysis of commonly used datasets,
evaluating their applicability and challenges in real-world robotic scenarios.
Furthermore, we identify critical research challenges such as cross-modal
alignment, efficient fusion strategies, real-time deployment, and domain
adaptation, and propose future research directions, including self-supervised
learning for robust multimodal representations, transformer-based fusion
architectures, and scalable multimodal frameworks. Through a comprehensive
review, comparative analysis, and forward-looking discussion, we provide a
valuable reference for advancing multimodal perception and interaction in
robotic vision. A comprehensive list of studies in this survey is available at
https://github.com/Xiaofeng-Han-Res/MF-RV.
|
2504.02486 | Mara Graziani Miss | Mara Graziani, Antonio Foncubierta, Dimitrios Christofidellis, Irina
Espejo-Morales, Malina Molnar, Marvin Alberts, Matteo Manica and Jannis Born | We Need Improved Data Curation and Attribution in AI for Scientific
Discovery | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As the interplay between human-generated and synthetic data evolves, new
challenges arise in scientific discovery concerning the integrity of the data
and the stability of the models. In this work, we examine the role of synthetic
data as opposed to that of real experimental data for scientific research. Our
analyses indicate that nearly three-quarters of experimental datasets available
on open-access platforms have relatively low adoption rates, opening new
opportunities to enhance their discoverability and usability by automated
methods. Additionally, we observe an increasing difficulty in distinguishing
synthetic from real experimental data. We propose supplementing ongoing efforts
in automating synthetic data detection by increasing the focus on watermarking
real experimental data, thereby strengthening data traceability and integrity.
Our estimates suggest that watermarking even less than half of the real-world
data generated annually could help sustain model robustness, while promoting a
balanced integration of synthetic and human-generated content.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 11:07:52 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Graziani",
"Mara",
""
],
[
"Foncubierta",
"Antonio",
""
],
[
"Christofidellis",
"Dimitrios",
""
],
[
"Espejo-Morales",
"Irina",
""
],
[
"Molnar",
"Malina",
""
],
[
"Alberts",
"Marvin",
""
],
[
"Manica",
"Matteo",
""
],
[
"Born",
"Jannis",
""
]
] | TITLE: We Need Improved Data Curation and Attribution in AI for Scientific
Discovery
ABSTRACT: As the interplay between human-generated and synthetic data evolves, new
challenges arise in scientific discovery concerning the integrity of the data
and the stability of the models. In this work, we examine the role of synthetic
data as opposed to that of real experimental data for scientific research. Our
analyses indicate that nearly three-quarters of experimental datasets available
on open-access platforms have relatively low adoption rates, opening new
opportunities to enhance their discoverability and usability by automated
methods. Additionally, we observe an increasing difficulty in distinguishing
synthetic from real experimental data. We propose supplementing ongoing efforts
in automating synthetic data detection by increasing the focus on watermarking
real experimental data, thereby strengthening data traceability and integrity.
Our estimates suggest that watermarking even less than half of the real-world
data generated annually could help sustain model robustness, while promoting a
balanced integration of synthetic and human-generated content.
|
2504.02494 | Faisal Mohammad | Faisal Mohammad, Duksan Ryu | Semiconductor Wafer Map Defect Classification with Tiny Vision
Transformers | null | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Semiconductor wafer defect classification is critical for ensuring high
precision and yield in manufacturing. Traditional CNN-based models often
struggle with class imbalances and recognition of the multiple overlapping
defect types in wafer maps. To address these challenges, we propose ViT-Tiny, a
lightweight Vision Transformer (ViT) framework optimized for wafer defect
classification. Trained on the WM-38k dataset. ViT-Tiny outperforms its
ViT-Base counterpart and state-of-the-art (SOTA) models, such as MSF-Trans and
CNN-based architectures. Through extensive ablation studies, we determine that
a patch size of 16 provides optimal performance. ViT-Tiny achieves an F1-score
of 98.4%, surpassing MSF-Trans by 2.94% in four-defect classification,
improving recall by 2.86% in two-defect classification, and increasing
precision by 3.13% in three-defect classification. Additionally, it
demonstrates enhanced robustness under limited labeled data conditions, making
it a computationally efficient and reliable solution for real-world
semiconductor defect detection.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 11:18:00 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Mohammad",
"Faisal",
""
],
[
"Ryu",
"Duksan",
""
]
] | TITLE: Semiconductor Wafer Map Defect Classification with Tiny Vision
Transformers
ABSTRACT: Semiconductor wafer defect classification is critical for ensuring high
precision and yield in manufacturing. Traditional CNN-based models often
struggle with class imbalances and recognition of the multiple overlapping
defect types in wafer maps. To address these challenges, we propose ViT-Tiny, a
lightweight Vision Transformer (ViT) framework optimized for wafer defect
classification. Trained on the WM-38k dataset, ViT-Tiny outperforms its
ViT-Base counterpart and state-of-the-art (SOTA) models, such as MSF-Trans and
CNN-based architectures. Through extensive ablation studies, we determine that
a patch size of 16 provides optimal performance. ViT-Tiny achieves an F1-score
of 98.4%, surpassing MSF-Trans by 2.94% in four-defect classification,
improving recall by 2.86% in two-defect classification, and increasing
precision by 3.13% in three-defect classification. Additionally, it
demonstrates enhanced robustness under limited labeled data conditions, making
it a computationally efficient and reliable solution for real-world
semiconductor defect detection.
|
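The ablation above settles on a patch size of 16. For reference, this is the standard ViT patch-embedding front end at that size; the channel counts and single-channel wafer-map input are illustrative assumptions:

```python
# Standard ViT patch embedding at patch size 16, implemented as a strided
# convolution. Illustrative dimensions, not the paper's code.
import torch
import torch.nn as nn

patch_embed = nn.Conv2d(in_channels=1, out_channels=192,
                        kernel_size=16, stride=16)      # 1-channel wafer map
wafer = torch.randn(8, 1, 64, 64)                       # (B, C, H, W)
tokens = patch_embed(wafer).flatten(2).transpose(1, 2)  # (B, 16 tokens, 192)
print(tokens.shape)                                     # torch.Size([8, 16, 192])
```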
2504.02496 | Jiuniu Wang | Jiuniu Wang, Wenjia Xu, Qingzhong Wang, Antoni B. Chan | Group-based Distinctive Image Captioning with Memory Difference Encoding
and Attention | 20 pages. arXiv admin note: substantial text overlap with
arXiv:2108.09151 | International Journal of Computer Vision, 2024 | null | null | cs.CV cs.MM | http://creativecommons.org/publicdomain/zero/1.0/ | Recent advances in image captioning have focused on enhancing accuracy by
substantially increasing the dataset and model size. While conventional
captioning models exhibit high performance on established metrics such as BLEU,
CIDEr, and SPICE, the capability of captions to distinguish the target image
from other similar images is under-explored. To generate distinctive captions,
a few pioneers employed contrastive learning or re-weighted the ground-truth
captions. However, these approaches often overlook the relationships among
objects in a similar image group (e.g., items or properties within the same
album or fine-grained events). In this paper, we introduce a novel approach to
enhance the distinctiveness of image captions, namely Group-based Differential
Distinctive Captioning Method, which visually compares each image with other
images in one similar group and highlights the uniqueness of each image. In
particular, we introduce a Group-based Differential Memory Attention (GDMA)
module, designed to identify and emphasize object features in an image that are
uniquely distinguishable within its image group, i.e., those exhibiting low
similarity with objects in other images. This mechanism ensures that such
unique object features are prioritized during caption generation for the image,
thereby enhancing the distinctiveness of the resulting captions. To further
refine this process, we select distinctive words from the ground-truth captions
to guide both the language decoder and the GDMA module. Additionally, we
propose a new evaluation metric, the Distinctive Word Rate (DisWordRate), to
quantitatively assess caption distinctiveness. Quantitative results indicate
that the proposed method significantly improves the distinctiveness of several
baseline models, and achieves state-of-the-art performance on distinctiveness
while not excessively sacrificing accuracy...
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 11:19:51 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Wang",
"Jiuniu",
""
],
[
"Xu",
"Wenjia",
""
],
[
"Wang",
"Qingzhong",
""
],
[
"Chan",
"Antoni B.",
""
]
] | TITLE: Group-based Distinctive Image Captioning with Memory Difference Encoding
and Attention
ABSTRACT: Recent advances in image captioning have focused on enhancing accuracy by
substantially increasing the dataset and model size. While conventional
captioning models exhibit high performance on established metrics such as BLEU,
CIDEr, and SPICE, the capability of captions to distinguish the target image
from other similar images is under-explored. To generate distinctive captions,
a few pioneers employed contrastive learning or re-weighted the ground-truth
captions. However, these approaches often overlook the relationships among
objects in a similar image group (e.g., items or properties within the same
album or fine-grained events). In this paper, we introduce a novel approach to
enhance the distinctiveness of image captions, namely Group-based Differential
Distinctive Captioning Method, which visually compares each image with other
images in one similar group and highlights the uniqueness of each image. In
particular, we introduce a Group-based Differential Memory Attention (GDMA)
module, designed to identify and emphasize object features in an image that are
uniquely distinguishable within its image group, i.e., those exhibiting low
similarity with objects in other images. This mechanism ensures that such
unique object features are prioritized during caption generation for the image,
thereby enhancing the distinctiveness of the resulting captions. To further
refine this process, we select distinctive words from the ground-truth captions
to guide both the language decoder and the GDMA module. Additionally, we
propose a new evaluation metric, the Distinctive Word Rate (DisWordRate), to
quantitatively assess caption distinctiveness. Quantitative results indicate
that the proposed method significantly improves the distinctiveness of several
baseline models, and achieves state-of-the-art performance on distinctiveness
while not excessively sacrificing accuracy...
|
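The record above introduces a Distinctive Word Rate (DisWordRate) metric, but the abstract does not define it. One plausible reading is sketched below, the fraction of caption words that match the target's ground truth and no other caption in the group; treat this as an assumption, not the paper's definition.

```python
# A plausible reading of a "distinctive word rate" -- an assumption for
# illustration, not the paper's actual DisWordRate definition.

def dis_word_rate(caption: str, target_gt: str, group_gts: list[str]) -> float:
    words = caption.lower().split()
    target_vocab = set(target_gt.lower().split())
    other_vocab = set(w for gt in group_gts for w in gt.lower().split())
    distinctive = [w for w in words if w in target_vocab and w not in other_vocab]
    return len(distinctive) / max(len(words), 1)

print(dis_word_rate("a red vintage car on gravel",
                    target_gt="a red vintage car parked on a gravel road",
                    group_gts=["a blue car on a street", "a car in a garage"]))
# 0.5: "red", "vintage", "gravel" are distinctive among 6 words
```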
2504.02511 | Yafei Shen | Yafei Shen, Huan-Fei Ma, Ling Yang | Analytical Discovery of Manifold with Machine Learning | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding low-dimensional structures within high-dimensional data is
crucial for visualization, interpretation, and denoising in complex datasets.
Despite the advancements in manifold learning techniques, key challenges-such
as limited global insight and the lack of interpretable analytical
descriptions-remain unresolved. In this work, we introduce a novel framework,
GAMLA (Global Analytical Manifold Learning using Auto-encoding). GAMLA employs
a two-round training process within an auto-encoding framework to derive both
character and complementary representations for the underlying manifold. With
the character representation, the manifold is represented by a parametric
function which unfold the manifold to provide a global coordinate. While with
the complementary representation, an approximate explicit manifold description
is developed, offering a global and analytical representation of smooth
manifolds underlying high-dimensional datasets. This enables the analytical
derivation of geometric properties such as curvature and normal vectors.
Moreover, we find the two representations together decompose the whole latent
space and can thus characterize the local spatial structure surrounding the
manifold, proving particularly effective in anomaly detection and
categorization. Through extensive experiments on benchmark datasets and
real-world applications, GAMLA demonstrates its ability to achieve
computational efficiency and interpretability while providing precise geometric
and structural insights. This framework bridges the gap between data-driven
manifold learning and analytical geometry, presenting a versatile tool for
exploring the intrinsic properties of complex data sets.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 11:53:00 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Shen",
"Yafei",
""
],
[
"Ma",
"Huan-Fei",
""
],
[
"Yang",
"Ling",
""
]
] | TITLE: Analytical Discovery of Manifold with Machine Learning
ABSTRACT: Understanding low-dimensional structures within high-dimensional data is
crucial for visualization, interpretation, and denoising in complex datasets.
Despite the advancements in manifold learning techniques, key challenges-such
as limited global insight and the lack of interpretable analytical
descriptions-remain unresolved. In this work, we introduce a novel framework,
GAMLA (Global Analytical Manifold Learning using Auto-encoding). GAMLA employs
a two-round training process within an auto-encoding framework to derive both
character and complementary representations for the underlying manifold. With
the character representation, the manifold is represented by a parametric
function that unfolds it to provide a global coordinate. With
the complementary representation, an approximate explicit manifold description
is developed, offering a global and analytical representation of smooth
manifolds underlying high-dimensional datasets. This enables the analytical
derivation of geometric properties such as curvature and normal vectors.
Moreover, we find the two representations together decompose the whole latent
space and can thus characterize the local spatial structure surrounding the
manifold, proving particularly effective in anomaly detection and
categorization. Through extensive experiments on benchmark datasets and
real-world applications, GAMLA demonstrates its ability to achieve
computational efficiency and interpretability while providing precise geometric
and structural insights. This framework bridges the gap between data-driven
manifold learning and analytical geometry, presenting a versatile tool for
exploring the intrinsic properties of complex data sets.
|
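The record above notes that an analytical manifold description enables deriving normals and curvature. As a worked illustration of the normals part: for an implicit description g(x) = 0, the unit normal is the normalized gradient of g, shown here for a toy sphere with finite differences (GAMLA's learned description is not reproduced).

```python
# Normals from an (approximate) analytical manifold description g(x) = 0:
# the unit normal is grad(g) / |grad(g)|, computed here numerically.
import numpy as np

def normal_at(g, x, eps: float = 1e-5) -> np.ndarray:
    grad = np.array([
        (g(x + eps * e) - g(x - eps * e)) / (2 * eps)
        for e in np.eye(len(x))
    ])
    return grad / np.linalg.norm(grad)

g_sphere = lambda x: np.dot(x, x) - 1.0        # unit sphere: |x|^2 - 1 = 0
p = np.array([1.0, 0.0, 0.0])
print(normal_at(g_sphere, p))                  # ~ [1, 0, 0]
```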
2504.02512 | Emad Bahrami | Emad Bahrami, Olga Zatsarynna, Gianpiero Francesca, Juergen Gall | Towards Generalizing Temporal Action Segmentation to Unseen Views | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | While there has been substantial progress in temporal action segmentation,
the challenge to generalize to unseen views remains unaddressed. Hence, we
define a protocol for unseen view action segmentation where camera views for
evaluating the model are unavailable during training. This includes changing
from top-frontal views to a side view or even more challenging from exocentric
to egocentric views. Furthermore, we present an approach for temporal action
segmentation that tackles this challenge. Our approach leverages a shared
representation at both the sequence and segment levels to reduce the impact of
view differences during training. We achieve this by introducing a sequence
loss and an action loss, which together facilitate consistent video and action
representations across different views. The evaluation on the Assembly101,
IkeaASM, and EgoExoLearn datasets demonstrates significant improvements, with a
12.8% increase in F1@50 for unseen exocentric views and a substantial 54%
improvement for unseen egocentric views.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 11:53:59 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Bahrami",
"Emad",
""
],
[
"Zatsarynna",
"Olga",
""
],
[
"Francesca",
"Gianpiero",
""
],
[
"Gall",
"Juergen",
""
]
] | TITLE: Towards Generalizing Temporal Action Segmentation to Unseen Views
ABSTRACT: While there has been substantial progress in temporal action segmentation,
the challenge to generalize to unseen views remains unaddressed. Hence, we
define a protocol for unseen view action segmentation where camera views for
evaluating the model are unavailable during training. This includes changing
from top-frontal views to a side view or even more challenging from exocentric
to egocentric views. Furthermore, we present an approach for temporal action
segmentation that tackles this challenge. Our approach leverages a shared
representation at both the sequence and segment levels to reduce the impact of
view differences during training. We achieve this by introducing a sequence
loss and an action loss, which together facilitate consistent video and action
representations across different views. The evaluation on the Assembly101,
IkeaASM, and EgoExoLearn datasets demonstrates significant improvements, with a
12.8% increase in F1@50 for unseen exocentric views and a substantial 54%
improvement for unseen egocentric views.
|
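The sequence loss above encourages consistent video-level representations across views. A minimal stand-in is a cosine consistency loss between embeddings of the same video from two views; the loss form and dimensions below are illustrative assumptions, not the paper's sequence/action losses.

```python
# A minimal cross-view consistency ("sequence") loss: pull embeddings of
# the same video from two views together via cosine similarity.
import torch
import torch.nn.functional as F

def sequence_consistency_loss(z_view_a: torch.Tensor,
                              z_view_b: torch.Tensor) -> torch.Tensor:
    """z_*: (B, D) sequence-level embeddings of the same videos."""
    return (1.0 - F.cosine_similarity(z_view_a, z_view_b, dim=1)).mean()

za, zb = torch.randn(4, 128), torch.randn(4, 128)
print(sequence_consistency_loss(za, zb))        # scalar in [0, 2]
```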
2504.02515 | Nedko Savov | Nedko Savov, Naser Kazemi, Mohammad Mahdi, Danda Pani Paudel, Xi Wang,
Luc Van Gool | Exploration-Driven Generative Interactive Environments | Accepted at CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Modern world models require costly and time-consuming collection of large
video datasets with action demonstrations by people or by environment-specific
agents. To simplify training, we focus on using many virtual environments for
inexpensive, automatically collected interaction data. Genie, a recent
multi-environment world model, demonstrates simulation abilities of many
environments with shared behavior. Unfortunately, training their model requires
expensive demonstrations. Therefore, we propose a training framework merely
using a random agent in virtual environments. While the model trained in this
manner exhibits good controls, it is limited by the random exploration
possibilities. To address this limitation, we propose AutoExplore Agent - an
exploration agent that entirely relies on the uncertainty of the world model,
delivering diverse data from which it can learn the best. Our agent is fully
independent of environment-specific rewards and thus adapts easily to new
environments. With this approach, the pretrained multi-environment model can
quickly adapt to new environments, achieving improved video fidelity and
controllability. To automatically obtain large-scale interaction datasets
for pretraining, we group environments with similar behavior and controls. To
this end, we annotate the behavior and controls of 974 virtual environments - a
dataset that we name RetroAct. For building our model, we first create an open
implementation of Genie - GenieRedux and apply enhancements and adaptations in
our version GenieRedux-G. Our code and data are available at
https://github.com/insait-institute/GenieRedux.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 12:01:41 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Savov",
"Nedko",
""
],
[
"Kazemi",
"Naser",
""
],
[
"Mahdi",
"Mohammad",
""
],
[
"Paudel",
"Danda Pani",
""
],
[
"Wang",
"Xi",
""
],
[
"Van Gool",
"Luc",
""
]
] | TITLE: Exploration-Driven Generative Interactive Environments
ABSTRACT: Modern world models require costly and time-consuming collection of large
video datasets with action demonstrations by people or by environment-specific
agents. To simplify training, we focus on using many virtual environments for
inexpensive, automatically collected interaction data. Genie, a recent
multi-environment world model, demonstrates simulation abilities of many
environments with shared behavior. Unfortunately, training their model requires
expensive demonstrations. Therefore, we propose a training framework merely
using a random agent in virtual environments. While the model trained in this
manner exhibits good controls, it is limited by the random exploration
possibilities. To address this limitation, we propose AutoExplore Agent - an
exploration agent that entirely relies on the uncertainty of the world model,
delivering diverse data from which it can learn best. Our agent is fully
independent of environment-specific rewards and thus adapts easily to new
environments. With this approach, the pretrained multi-environment model can
quickly adapt to new environments achieving video fidelity and controllability
improvement. In order to obtain automatically large-scale interaction datasets
for pretraining, we group environments with similar behavior and controls. To
this end, we annotate the behavior and controls of 974 virtual environments - a
dataset that we name RetroAct. For building our model, we first create an open
implementation of Genie - GenieRedux and apply enhancements and adaptations in
our version GenieRedux-G. Our code and data are available at
https://github.com/insait-institute/GenieRedux.
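A toy sketch of the kind of uncertainty-driven action selection AutoExplore describes, using ensemble disagreement as the uncertainty proxy; the proxy, shapes, and predictors below are assumptions, not taken from the paper:

```python
import numpy as np

def choose_action(ensemble_predict, obs, actions):
    """Pick the action whose predicted next frame the model ensemble
    disagrees on most -- variance as a stand-in for world-model uncertainty."""
    scores = []
    for a in actions:
        preds = ensemble_predict(obs, a)         # (n_models, H, W, C)
        scores.append(preds.var(axis=0).mean())  # disagreement = variance
    return actions[int(np.argmax(scores))]

# Toy stand-in: an "ensemble" of 3 random predictors over 8x8x3 frames.
rng = np.random.default_rng(0)
fake_predict = lambda obs, a: rng.normal(size=(3, 8, 8, 3)) * (a + 1)
print(choose_action(fake_predict, None, [0, 1, 2]))
```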
|
2504.02517 | Yash Kulthe | Yash Kulthe, Andrew Gilbert, John Collomosse | MultiNeRF: Multiple Watermark Embedding for Neural Radiance Fields | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present MultiNeRF, a 3D watermarking method that embeds multiple uniquely
keyed watermarks within images rendered by a single Neural Radiance Field
(NeRF) model, whilst maintaining high visual quality. Our approach extends the
TensoRF NeRF model by incorporating a dedicated watermark grid alongside the
existing geometry and appearance grids. This extension ensures higher watermark
capacity without entangling watermark signals with scene content. We propose a
FiLM-based conditional modulation mechanism that dynamically activates
watermarks based on input identifiers, allowing multiple independent watermarks
to be embedded and extracted without requiring model retraining. MultiNeRF is
validated on the NeRF-Synthetic and LLFF datasets, with statistically
significant improvements in robust capacity without compromising rendering
quality. By generalizing single-watermark NeRF methods into a flexible
multi-watermarking framework, MultiNeRF provides a scalable solution for 3D
content attribution.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 12:06:04 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Kulthe",
"Yash",
""
],
[
"Gilbert",
"Andrew",
""
],
[
"Collomosse",
"John",
""
]
] | TITLE: MultiNeRF: Multiple Watermark Embedding for Neural Radiance Fields
ABSTRACT: We present MultiNeRF, a 3D watermarking method that embeds multiple uniquely
keyed watermarks within images rendered by a single Neural Radiance Field
(NeRF) model, whilst maintaining high visual quality. Our approach extends the
TensoRF NeRF model by incorporating a dedicated watermark grid alongside the
existing geometry and appearance grids. This extension ensures higher watermark
capacity without entangling watermark signals with scene content. We propose a
FiLM-based conditional modulation mechanism that dynamically activates
watermarks based on input identifiers, allowing multiple independent watermarks
to be embedded and extracted without requiring model retraining. MultiNeRF is
validated on the NeRF-Synthetic and LLFF datasets, with statistically
significant improvements in robust capacity without compromising rendering
quality. By generalizing single-watermark NeRF methods into a flexible
multi-watermarking framework, MultiNeRF provides a scalable solution for 3D
content attribution.
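The FiLM-based conditional modulation described above can be pictured with a short PyTorch sketch; the embedding-table design and feature shapes are illustrative assumptions rather than MultiNeRF's exact architecture:

```python
import torch
import torch.nn as nn

class WatermarkFiLM(nn.Module):
    """FiLM-style modulation: a watermark identifier selects per-channel
    scale (gamma) and shift (beta) applied to intermediate features."""
    def __init__(self, num_watermarks, channels):
        super().__init__()
        self.gamma = nn.Embedding(num_watermarks, channels)
        self.beta = nn.Embedding(num_watermarks, channels)
        nn.init.ones_(self.gamma.weight)   # start as the identity transform
        nn.init.zeros_(self.beta.weight)

    def forward(self, feats, wm_id):
        # feats: (B, C); wm_id: (B,) integer watermark identifiers.
        return self.gamma(wm_id) * feats + self.beta(wm_id)

film = WatermarkFiLM(num_watermarks=4, channels=64)
print(film(torch.randn(2, 64), torch.tensor([0, 3])).shape)  # (2, 64)
```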
|
2504.02519 | Matthias Dr\"uppel | Christian Alexander Holz, Christian Bader, Markus Enzweiler, Matthias
Dr\"uppel | Data-Driven Object Tracking: Integrating Modular Neural Networks into a
Kalman Framework | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper presents novel Machine Learning (ML) methodologies for
Multi-Object Tracking (MOT), specifically designed to meet the increasing
complexity and precision demands of Advanced Driver Assistance Systems (ADAS).
We introduce three Neural Network (NN) models that address key challenges in
MOT: (i) the Single-Prediction Network (SPENT) for trajectory prediction, (ii)
the Single-Association Network (SANT) for mapping individual Sensor Object (SO)
to existing tracks, and (iii) the Multi-Association Network (MANTa) for
associating multiple SOs to multiple tracks. These models are seamlessly
integrated into a traditional Kalman Filter (KF) framework, maintaining the
system's modularity by replacing relevant components without disrupting the
overall architecture. Importantly, all three networks are designed to be run in
a real-time, embedded environment. Each network contains less than 50k trainable
parameters. Our evaluation, conducted on the public KITTI tracking dataset,
demonstrates significant improvements in tracking performance. SPENT reduces
the Root Mean Square Error (RMSE) by 50% compared to a standard KF, while SANT
and MANTa achieve up to 95% accuracy in sensor object-to-track assignments.
These results underscore the effectiveness of incorporating task-specific NNs
into traditional tracking systems, boosting performance and robustness while
preserving modularity, maintainability, and interpretability.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 12:13:38 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Holz",
"Christian Alexander",
""
],
[
"Bader",
"Christian",
""
],
[
"Enzweiler",
"Markus",
""
],
[
"Drüppel",
"Matthias",
""
]
] | TITLE: Data-Driven Object Tracking: Integrating Modular Neural Networks into a
Kalman Framework
ABSTRACT: This paper presents novel Machine Learning (ML) methodologies for
Multi-Object Tracking (MOT), specifically designed to meet the increasing
complexity and precision demands of Advanced Driver Assistance Systems (ADAS).
We introduce three Neural Network (NN) models that address key challenges in
MOT: (i) the Single-Prediction Network (SPENT) for trajectory prediction, (ii)
the Single-Association Network (SANT) for mapping individual Sensor Object (SO)
to existing tracks, and (iii) the Multi-Association Network (MANTa) for
associating multiple SOs to multiple tracks. These models are seamlessly
integrated into a traditional Kalman Filter (KF) framework, maintaining the
system's modularity by replacing relevant components without disrupting the
overall architecture. Importantly, all three networks are designed to be run in
a real-time, embedded environment. Each network contains less than 50k trainable
parameters. Our evaluation, conducted on the public KITTI tracking dataset,
demonstrates significant improvements in tracking performance. SPENT reduces
the Root Mean Square Error (RMSE) by 50% compared to a standard KF, while SANT
and MANTa achieve up to 95% accuracy in sensor object-to-track assignments.
These results underscore the effectiveness of incorporating task-specific NNs
into traditional tracking systems, boosting performance and robustness while
preserving modularity, maintainability, and interpretability.
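One way to picture the hybrid design is a standard Kalman measurement update whose prediction step is supplied by a learned model; the sketch below uses a hypothetical stand-in predictor and toy noise settings, not the paper's networks:

```python
import numpy as np

def kf_update(x_pred, P_pred, z, H, R):
    """Standard Kalman measurement update; only the *prediction* step is
    assumed to be replaced by a learned model (a SPENT-like network)."""
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)       # corrected state
    P = (np.eye(len(x)) - K @ H) @ P_pred   # corrected covariance
    return x, P

# Hypothetical learned predictor standing in for the motion model F @ x.
nn_predict = lambda x: x + np.array([1.0, 0.0])

x, P = np.zeros(2), np.eye(2)
H, R = np.eye(2), 0.1 * np.eye(2)
x_pred = nn_predict(x)               # NN replaces the linear predict step
P_pred = P + 0.01 * np.eye(2)        # simple process-noise inflation
x, P = kf_update(x_pred, P_pred, np.array([1.1, 0.05]), H, R)
print(x)
```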
|
2504.02522 | Fatemeh Behrad | Fatemeh Behrad, Tinne Tuytelaars, Johan Wagemans | Charm: The Missing Piece in ViT fine-tuning for Image Aesthetic
Assessment | CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The capacity of Vision transformers (ViTs) to handle variable-sized inputs is
often constrained by computational complexity and batch processing limitations.
Consequently, ViTs are typically trained on small, fixed-size images obtained
through downscaling or cropping. While reducing computational burden, these
methods result in significant information loss, negatively affecting tasks like
image aesthetic assessment. We introduce Charm, a novel tokenization approach
that preserves Composition, High-resolution, Aspect Ratio, and Multi-scale
information simultaneously. Charm prioritizes high-resolution details in
specific regions while downscaling others, enabling shorter fixed-size input
sequences for ViTs while incorporating essential information. Charm is designed
to be compatible with pre-trained ViTs and their learned positional embeddings.
By providing multiscale input and introducing variety to input tokens, Charm
improves ViT performance and generalizability for image aesthetic assessment.
We avoid cropping or changing the aspect ratio to further preserve information.
Extensive experiments demonstrate significant performance improvements on
various image aesthetic and quality assessment datasets (up to 8.1 %) using a
lightweight ViT backbone. Code and pre-trained models are available at
https://github.com/FBehrad/Charm.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 12:19:04 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Behrad",
"Fatemeh",
""
],
[
"Tuytelaars",
"Tinne",
""
],
[
"Wagemans",
"Johan",
""
]
] | TITLE: Charm: The Missing Piece in ViT fine-tuning for Image Aesthetic
Assessment
ABSTRACT: The capacity of Vision transformers (ViTs) to handle variable-sized inputs is
often constrained by computational complexity and batch processing limitations.
Consequently, ViTs are typically trained on small, fixed-size images obtained
through downscaling or cropping. While reducing computational burden, these
methods result in significant information loss, negatively affecting tasks like
image aesthetic assessment. We introduce Charm, a novel tokenization approach
that preserves Composition, High-resolution, Aspect Ratio, and Multi-scale
information simultaneously. Charm prioritizes high-resolution details in
specific regions while downscaling others, enabling shorter fixed-size input
sequences for ViTs while incorporating essential information. Charm is designed
to be compatible with pre-trained ViTs and their learned positional embeddings.
By providing multiscale input and introducing variety to input tokens, Charm
improves ViT performance and generalizability for image aesthetic assessment.
We avoid cropping or changing the aspect ratio to further preserve information.
Extensive experiments demonstrate significant performance improvements on
various image aesthetic and quality assessment datasets (up to 8.1 %) using a
lightweight ViT backbone. Code and pre-trained models are available at
https://github.com/FBehrad/Charm.
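A toy sketch of a Charm-like tokenizer: keep a few "important" patches at full resolution and downscale the rest. The variance-based saliency score and the 2x downscale are placeholder choices, not the paper's method:

```python
import torch
import torch.nn.functional as F

def charm_like_tokens(img, patch=32, keep=4):
    """Toy multiscale tokenizer: keep the `keep` highest-variance patches
    at full resolution and downscale the rest 2x (illustrative only)."""
    B, C, H, W = img.shape
    patches = F.unfold(img, patch, stride=patch).transpose(1, 2)  # (B, N, C*p*p)
    saliency = patches.var(dim=-1)                 # crude per-patch saliency
    idx = saliency.argsort(dim=-1, descending=True)
    idx = idx[..., None].expand(-1, -1, patches.shape[-1])
    hi = torch.gather(patches, 1, idx[:, :keep])   # full-resolution tokens
    lo = torch.gather(patches, 1, idx[:, keep:])   # patches to downscale
    lo = F.interpolate(lo.reshape(-1, C, patch, patch), scale_factor=0.5,
                       mode="bilinear")
    return hi, lo

hi, lo = charm_like_tokens(torch.randn(1, 3, 128, 128))
print(hi.shape, lo.shape)  # (1, 4, 3072) and (12, 3, 16, 16)
```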
|
2504.02524 | Yunhao Lv | Yunhao Lv and Lingyu Chen and Jian Wang and Yangxi Li and Fang Chen | SelfMedHPM: Self Pre-training With Hard Patches Mining Masked
Autoencoders For Medical Image Segmentation | arXiv admin note: text overlap with arXiv:2304.05919 by other authors | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In recent years, deep learning methods such as convolutional neural network
(CNN) and transformers have made significant progress in CT multi-organ
segmentation. However, CT multi-organ segmentation methods based on masked
image modeling (MIM) are very limited. While there are already methods using
MAE for the CT multi-organ segmentation task, we believe that the existing methods do not
identify the most difficult areas to reconstruct. To this end, we propose a MIM
self-training framework with hard patches mining masked autoencoders for CT
multi-organ segmentation tasks (selfMedHPM). The method performs ViT
self-pretraining on the training set of the target data and introduces an
auxiliary loss predictor, which first predicts the patch loss and determines
the location of the next mask. SelfMedHPM outperforms various
competitive methods in abdominal CT multi-organ segmentation and body CT
multi-organ segmentation. We have validated the performance of our method on
the Multi Atlas Labeling Beyond The Cranial Vault (BTCV) dataset for abdomen
mult-organ segmentation and the SinoMed Whole Body (SMWB) dataset for body
multi-organ segmentation tasks.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 12:28:21 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Lv",
"Yunhao",
""
],
[
"Chen",
"Lingyu",
""
],
[
"Wang",
"Jian",
""
],
[
"Li",
"Yangxi",
""
],
[
"Chen",
"Fang",
""
]
] | TITLE: SelfMedHPM: Self Pre-training With Hard Patches Mining Masked
Autoencoders For Medical Image Segmentation
ABSTRACT: In recent years, deep learning methods such as convolutional neural network
(CNN) and transformers have made significant progress in CT multi-organ
segmentation. However, CT multi-organ segmentation methods based on masked
image modeling (MIM) are very limited. There are already methods using MAE for
CT multi-organ segmentation task, we believe that the existing methods do not
identify the most difficult areas to reconstruct. To this end, we propose a MIM
self-training framework with hard patches mining masked autoencoders for CT
multi-organ segmentation tasks (selfMedHPM). The method performs ViT
self-pretraining on the training set of the target data and introduces an
auxiliary loss predictor, which first predicts the patch loss and determines
the location of the next mask. SelfMedHPM outperforms various
competitive methods in abdominal CT multi-organ segmentation and body CT
multi-organ segmentation. We have validated the performance of our method on
the Multi Atlas Labeling Beyond The Cranial Vault (BTCV) dataset for abdomen
multi-organ segmentation and the SinoMed Whole Body (SMWB) dataset for body
multi-organ segmentation tasks.
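The hard-patches-mining idea, masking the patches an auxiliary head predicts to be hardest to reconstruct, can be sketched as follows; the mask ratio and shapes are illustrative assumptions:

```python
import torch

def hard_patch_mask(pred_patch_loss, mask_ratio=0.75):
    """Given per-patch loss predictions from an auxiliary head, mask the
    patches predicted to be hardest to reconstruct (sketch of the idea)."""
    B, N = pred_patch_loss.shape
    n_mask = int(N * mask_ratio)
    idx = pred_patch_loss.argsort(dim=-1, descending=True)  # hardest first
    mask = torch.zeros(B, N, dtype=torch.bool)
    mask.scatter_(1, idx[:, :n_mask], True)  # True = masked for reconstruction
    return mask

losses = torch.rand(2, 196)                    # e.g. a 14x14 ViT patch grid
print(hard_patch_mask(losses).float().mean())  # ~0.75 of patches masked
```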
|
2504.02529 | Nick Pepper | Amy Hodgkin, Nick Pepper, Marc Thomas | Probabilistic Simulation of Aircraft Descent via a Hybrid Physics-Data
Approach | null | null | null | null | eess.SY cs.SY | http://creativecommons.org/licenses/by/4.0/ | This paper presents a method for generating probabilistic descent
trajectories in simulations of real-world airspace. A dataset of 116,066
trajectories harvested from Mode S radar returns in UK airspace was used to
train and test the model. Thirteen aircraft types with varying performance
characteristics were investigated. It was found that the error in the mean
prediction of time to reach the bottom of descent for the proposed method was
less than that of the Base of Aircraft Data (BADA) model by a factor of 10.
Furthermore, the method was capable of generating a range of trajectories that
were similar to the held-out test dataset when analysed in distribution. The
proposed method is hybrid, with aircraft drag and calibrated airspeed functions
generated probabilistically to parameterise the BADA equations, ensuring the
physical plausibility of generated trajectories.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 12:33:48 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Hodgkin",
"Amy",
""
],
[
"Pepper",
"Nick",
""
],
[
"Thomas",
"Marc",
""
]
] | TITLE: Probabilistic Simulation of Aircraft Descent via a Hybrid Physics-Data
Approach
ABSTRACT: This paper presents a method for generating probabilistic descent
trajectories in simulations of real-world airspace. A dataset of 116,066
trajectories harvested from Mode S radar returns in UK airspace was used to
train and test the model. Thirteen aircraft types with varying performance
characteristics were investigated. It was found that the error in the mean
prediction of time to reach the bottom of descent for the proposed method was
less than that of the Base of Aircraft Data (BADA) model by a factor of 10.
Furthermore, the method was capable of generating a range of trajectories that
were similar to the held-out test dataset when analysed in distribution. The
proposed method is hybrid, with aircraft drag and calibrated airspeed functions
generated probabilistically to parameterise the BADA equations, ensuring the
physical plausibility of generated trajectories.
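A minimal Monte Carlo sketch of generating probabilistic descent times; the Gaussian rate-of-descent prior below is a crude stand-in for the paper's probabilistic drag/calibrated-airspeed parameterisation of the BADA equations, and every number is made up:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_descent_times(n=1000, alt_top=35000.0, alt_bottom=10000.0):
    """Monte Carlo sketch: sample a rate-of-descent per run and integrate
    altitude down to the bottom of descent. All numbers are placeholders."""
    rods = rng.normal(2200.0, 300.0, size=n)   # ft/min, hypothetical prior
    rods = np.clip(rods, 800.0, None)          # keep physically plausible
    return (alt_top - alt_bottom) / rods       # minutes to bottom of descent

t = sample_descent_times()
print(f"mean {t.mean():.1f} min, 5-95% [{np.percentile(t, 5):.1f}, "
      f"{np.percentile(t, 95):.1f}] min")
```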
|
2504.02534 | Mykola Lavreniuk | Mykola Lavreniuk, Nataliia Kussul, Andrii Shelestov, Bohdan Yailymov,
Yevhenii Salii, Volodymyr Kuzin, Zoltan Szantoi | Delineate Anything: Resolution-Agnostic Field Boundary Delineation on
Satellite Imagery | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The accurate delineation of agricultural field boundaries from satellite
imagery is vital for land management and crop monitoring. However, current
methods face challenges due to limited dataset sizes, resolution discrepancies,
and diverse environmental conditions. We address this by reformulating the task
as instance segmentation and introducing the Field Boundary Instance
Segmentation - 22M dataset (FBIS-22M), a large-scale, multi-resolution dataset
comprising 672,909 high-resolution satellite image patches (ranging from 0.25 m
to 10 m) and 22,926,427 instance masks of individual fields, significantly
narrowing the gap between agricultural datasets and those in other computer
vision domains. We further propose Delineate Anything, an instance segmentation
model trained on our new FBIS-22M dataset. Our proposed model sets a new
state-of-the-art, achieving a substantial improvement of 88.5% in mAP@0.5 and
103% in mAP@0.5:0.95 over existing methods, while also demonstrating
significantly faster inference and strong zero-shot generalization across
diverse image resolutions and unseen geographic regions. Code, pre-trained
models, and the FBIS-22M dataset are available at
https://lavreniuk.github.io/Delineate-Anything.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 12:37:04 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Lavreniuk",
"Mykola",
""
],
[
"Kussul",
"Nataliia",
""
],
[
"Shelestov",
"Andrii",
""
],
[
"Yailymov",
"Bohdan",
""
],
[
"Salii",
"Yevhenii",
""
],
[
"Kuzin",
"Volodymyr",
""
],
[
"Szantoi",
"Zoltan",
""
]
] | TITLE: Delineate Anything: Resolution-Agnostic Field Boundary Delineation on
Satellite Imagery
ABSTRACT: The accurate delineation of agricultural field boundaries from satellite
imagery is vital for land management and crop monitoring. However, current
methods face challenges due to limited dataset sizes, resolution discrepancies,
and diverse environmental conditions. We address this by reformulating the task
as instance segmentation and introducing the Field Boundary Instance
Segmentation - 22M dataset (FBIS-22M), a large-scale, multi-resolution dataset
comprising 672,909 high-resolution satellite image patches (ranging from 0.25 m
to 10 m) and 22,926,427 instance masks of individual fields, significantly
narrowing the gap between agricultural datasets and those in other computer
vision domains. We further propose Delineate Anything, an instance segmentation
model trained on our new FBIS-22M dataset. Our proposed model sets a new
state-of-the-art, achieving a substantial improvement of 88.5% in mAP@0.5 and
103% in mAP@0.5:0.95 over existing methods, while also demonstrating
significantly faster inference and strong zero-shot generalization across
diverse image resolutions and unseen geographic regions. Code, pre-trained
models, and the FBIS-22M dataset are available at
https://lavreniuk.github.io/Delineate-Anything.
|
2504.02545 | Bo-Kai Ruan | Bo-Kai Ruan, Hong-Han Shuai | MAD: Makeup All-in-One with Cross-Domain Diffusion Model | Project page: https://basiclab.github.io/MAD | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Existing makeup techniques often require designing multiple models to handle
different inputs and align features across domains for different makeup tasks,
e.g., beauty filter, makeup transfer, and makeup removal, leading to increased
complexity. Another limitation is the absence of text-guided makeup try-on,
which is more user-friendly without needing reference images. In this study, we
make the first attempt to use a single model for various makeup tasks.
Specifically, we formulate different makeup tasks as cross-domain translations
and leverage a cross-domain diffusion model to accomplish all tasks. Unlike
existing methods that rely on separate encoder-decoder configurations or
cycle-based mechanisms, we propose using different domain embeddings to
facilitate domain control. This allows for seamless domain switching by merely
changing embeddings with a single model, thereby reducing the reliance on
additional modules for different tasks. Moreover, to support precise
text-to-makeup applications, we introduce the MT-Text dataset by extending the
MT dataset with textual annotations, advancing the practicality of makeup
technologies.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 12:52:31 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Ruan",
"Bo-Kai",
""
],
[
"Shuai",
"Hong-Han",
""
]
] | TITLE: MAD: Makeup All-in-One with Cross-Domain Diffusion Model
ABSTRACT: Existing makeup techniques often require designing multiple models to handle
different inputs and align features across domains for different makeup tasks,
e.g., beauty filter, makeup transfer, and makeup removal, leading to increased
complexity. Another limitation is the absence of text-guided makeup try-on,
which is more user-friendly without needing reference images. In this study, we
make the first attempt to use a single model for various makeup tasks.
Specifically, we formulate different makeup tasks as cross-domain translations
and leverage a cross-domain diffusion model to accomplish all tasks. Unlike
existing methods that rely on separate encoder-decoder configurations or
cycle-based mechanisms, we propose using different domain embeddings to
facilitate domain control. This allows for seamless domain switching by merely
changing embeddings with a single model, thereby reducing the reliance on
additional modules for different tasks. Moreover, to support precise
text-to-makeup applications, we introduce the MT-Text dataset by extending the
MT dataset with textual annotations, advancing the practicality of makeup
technologies.
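Task switching via domain embeddings can be sketched in a few lines; the conditioning scheme below (adding a learned domain vector to the time embedding) is an assumed simplification of how such a diffusion model might be conditioned, not MAD's actual network:

```python
import torch
import torch.nn as nn

class DomainConditioner(nn.Module):
    """Sketch: one denoiser conditioned on a learned domain embedding, so
    switching tasks (e.g. transfer / removal / filter) is switching ids."""
    def __init__(self, num_domains, dim):
        super().__init__()
        self.domain_emb = nn.Embedding(num_domains, dim)

    def forward(self, noisy_latent, t_emb, domain_id):
        cond = t_emb + self.domain_emb(domain_id)  # fuse time + domain info
        # Toy "denoising" step: broadcast the condition over spatial dims.
        return noisy_latent + cond[:, :, None, None]

cond = DomainConditioner(num_domains=3, dim=8)
out = cond(torch.randn(2, 8, 4, 4), torch.randn(2, 8), torch.tensor([0, 2]))
print(out.shape)  # torch.Size([2, 8, 4, 4])
```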
|
2504.02555 | Hesong Li | Hesong Li and Ziqi Wu and Ruiwen Shao and Tao Zhang and Ying Fu | Noise Calibration and Spatial-Frequency Interactive Network for STEM
Image Enhancement | Acceped by CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scanning Transmission Electron Microscopy (STEM) enables the observation of
atomic arrangements at sub-angstrom resolution, allowing for atomically
resolved analysis of the physical and chemical properties of materials.
However, due to the effects of noise, electron beam damage, sample thickness,
etc, obtaining satisfactory atomic-level images is often challenging. Enhancing
STEM images can reveal clearer structural details of materials. Nonetheless,
existing STEM image enhancement methods usually overlook unique features in the
frequency domain, and existing datasets lack realism and generality. To resolve
these issues, in this paper, we develop noise calibration, data synthesis, and
enhancement methods for STEM images. We first present a STEM noise calibration
method, which is used to synthesize more realistic STEM images. The parameters
of background noise, scan noise, and pointwise noise are obtained by
statistical analysis and fitting of real STEM images containing atoms. Then we
use these parameters to develop a more general dataset that considers both
regular and random atomic arrangements and includes both HAADF and BF mode
images. Finally, we design a spatial-frequency interactive network for STEM
image enhancement, which can explore the information in the frequency domain
formed by the periodicity of atomic arrangement. Experimental results show that
our data is closer to real STEM images and achieves better enhancement
performances together with our network. Code will be available at
https://github.com/HeasonLee/SFIN.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 13:11:57 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Li",
"Hesong",
""
],
[
"Wu",
"Ziqi",
""
],
[
"Shao",
"Ruiwen",
""
],
[
"Zhang",
"Tao",
""
],
[
"Fu",
"Ying",
""
]
] | TITLE: Noise Calibration and Spatial-Frequency Interactive Network for STEM
Image Enhancement
ABSTRACT: Scanning Transmission Electron Microscopy (STEM) enables the observation of
atomic arrangements at sub-angstrom resolution, allowing for atomically
resolved analysis of the physical and chemical properties of materials.
However, due to the effects of noise, electron beam damage, sample thickness,
etc., obtaining satisfactory atomic-level images is often challenging. Enhancing
STEM images can reveal clearer structural details of materials. Nonetheless,
existing STEM image enhancement methods usually overlook unique features in the
frequency domain, and existing datasets lack realism and generality. To resolve
these issues, in this paper, we develop noise calibration, data synthesis, and
enhancement methods for STEM images. We first present a STEM noise calibration
method, which is used to synthesize more realistic STEM images. The parameters
of background noise, scan noise, and pointwise noise are obtained by
statistical analysis and fitting of real STEM images containing atoms. Then we
use these parameters to develop a more general dataset that considers both
regular and random atomic arrangements and includes both HAADF and BF mode
images. Finally, we design a spatial-frequency interactive network for STEM
image enhancement, which can explore the information in the frequency domain
formed by the periodicity of atomic arrangement. Experimental results show that
our data is closer to real STEM images and achieves better enhancement
performances together with our network. Code will be available at
https://github.com/HeasonLee/SFIN.
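A toy version of the three-component noise synthesis described above; the parameter values here are placeholders, whereas the paper fits them statistically to real STEM images:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_stem(clean, bg_sigma=0.05, scan_sigma=0.03, point_rate=0.01):
    """Apply three noise components to a clean lattice image: background
    (Gaussian), scan (per-row jitter), and pointwise (bright outliers)."""
    noisy = clean + rng.normal(0.0, bg_sigma, clean.shape)     # background
    noisy += rng.normal(0.0, scan_sigma, (clean.shape[0], 1))  # scan noise
    spikes = rng.random(clean.shape) < point_rate              # pointwise
    noisy[spikes] = noisy.max()
    return np.clip(noisy, 0.0, 1.0)

# A sinusoidal "lattice" as a stand-in for a clean atomic-resolution image.
clean = np.clip(np.sin(np.linspace(0, 8 * np.pi, 64))[None] * 0.5 + 0.5,
                0, 1).repeat(64, 0)
print(synthesize_stem(clean).shape)  # (64, 64)
```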
|
2504.02558 | Andrei Dumitriu | Andrei Dumitriu, Florin Tatui, Florin Miron, Radu Tudor Ionescu, Radu
Timofte | Rip Current Segmentation: A Novel Benchmark and YOLOv8 Baseline Results | Accepted at CVPR 2023 NTIRE Workshop | 2023 IEEE/CVF Conference on Computer Vision and Pattern
Recognition Workshops (CVPRW), pp. 1261-1271, June 2023 | 10.1109/CVPRW59228.2023.00133 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rip currents are the leading cause of fatal accidents and injuries on many
beaches worldwide, emphasizing the importance of automatically detecting these
hazardous surface water currents. In this paper, we address a novel task: rip
current instance segmentation. We introduce a comprehensive dataset containing
$2,466$ images with newly created polygonal annotations for instance
segmentation, used for training and validation. Additionally, we present a
novel dataset comprising $17$ drone videos (about $24K$ frames)
captured at $30 FPS$, annotated with both polygons for instance segmentation
and bounding boxes for object detection, employed for testing purposes. We
train various versions of YOLOv8 for instance segmentation on static images and
assess their performance on the test dataset (videos). The best results were
achieved by the YOLOv8-nano model (runnable on a portable device), with an
mAP50 of $88.94%$ on the validation dataset and $81.21%$ macro average on the
test dataset. The results provide a baseline for future research in rip current
segmentation. Our work contributes to the existing literature by introducing a
detailed, annotated dataset, and training a deep learning model for instance
segmentation of rip currents. The code, training details and the annotated
dataset are made publicly available at https://github.com/Irikos/rip_currents.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 13:14:16 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Dumitriu",
"Andrei",
""
],
[
"Tatui",
"Florin",
""
],
[
"Miron",
"Florin",
""
],
[
"Ionescu",
"Radu Tudor",
""
],
[
"Timofte",
"Radu",
""
]
] | TITLE: Rip Current Segmentation: A Novel Benchmark and YOLOv8 Baseline Results
ABSTRACT: Rip currents are the leading cause of fatal accidents and injuries on many
beaches worldwide, emphasizing the importance of automatically detecting these
hazardous surface water currents. In this paper, we address a novel task: rip
current instance segmentation. We introduce a comprehensive dataset containing
$2,466$ images with newly created polygonal annotations for instance
segmentation, used for training and validation. Additionally, we present a
novel dataset comprising $17$ drone videos (about $24K$ frames)
captured at $30 FPS$, annotated with both polygons for instance segmentation
and bounding boxes for object detection, employed for testing purposes. We
train various versions of YOLOv8 for instance segmentation on static images and
assess their performance on the test dataset (videos). The best results were
achieved by the YOLOv8-nano model (runnable on a portable device), with an
mAP50 of $88.94%$ on the validation dataset and $81.21%$ macro average on the
test dataset. The results provide a baseline for future research in rip current
segmentation. Our work contributes to the existing literature by introducing a
detailed, annotated dataset, and training a deep learning model for instance
segmentation of rip currents. The code, training details and the annotated
dataset are made publicly available at https://github.com/Irikos/rip_currents.
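Reproducing a baseline like this with the ultralytics package might look roughly as follows; the dataset YAML and image path are placeholders for the released annotations, and the training settings are assumptions:

```python
# pip install ultralytics
from ultralytics import YOLO

# Start from the nano segmentation checkpoint, matching the paper's
# best-performing (portable) model size.
model = YOLO("yolov8n-seg.pt")

# 'rip_currents.yaml' is a placeholder dataset config pointing at the
# released images and polygon annotations; adjust paths to your copy.
model.train(data="rip_currents.yaml", epochs=100, imgsz=640)

# Run instance segmentation on a test frame (placeholder file name).
results = model.predict("beach_frame.jpg")
print(results[0].masks)
```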
|
2504.02560 | Yongqi Zhai | Yongqi Zhai, Luyang Tang, Wei Jiang, Jiayu Yang, Ronggang Wang | L-LBVC: Long-Term Motion Estimation and Prediction for Learned
Bi-Directional Video Compression | Accepted to 2025 Data Compression Conference (DCC) | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, learned video compression (LVC) has shown superior performance
under low-delay configuration. However, the performance of learned
bi-directional video compression (LBVC) still lags behind traditional
bi-directional coding. The performance gap mainly arises from inaccurate
long-term motion estimation and prediction of distant frames, especially in
large motion scenes. To solve these two critical problems, this paper proposes
a novel LBVC framework, namely L-LBVC. Firstly, we propose an adaptive motion
estimation module that can handle both short-term and long-term motions.
Specifically, we directly estimate the optical flows for adjacent frames and
non-adjacent frames with small motions. For non-adjacent frames with large
motions, we recursively accumulate local flows between adjacent frames to
estimate long-term flows. Secondly, we propose an adaptive motion prediction
module that can largely reduce the bit cost for motion coding. To improve the
accuracy of long-term motion prediction, we adaptively downsample reference
frames during testing to match the motion ranges observed during training.
Experiments show that our L-LBVC significantly outperforms previous
state-of-the-art LVC methods and even surpasses VVC (VTM) on some test datasets
under random access configuration.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 13:15:45 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zhai",
"Yongqi",
""
],
[
"Tang",
"Luyang",
""
],
[
"Jiang",
"Wei",
""
],
[
"Yang",
"Jiayu",
""
],
[
"Wang",
"Ronggang",
""
]
] | TITLE: L-LBVC: Long-Term Motion Estimation and Prediction for Learned
Bi-Directional Video Compression
ABSTRACT: Recently, learned video compression (LVC) has shown superior performance
under low-delay configuration. However, the performance of learned
bi-directional video compression (LBVC) still lags behind traditional
bi-directional coding. The performance gap mainly arises from inaccurate
long-term motion estimation and prediction of distant frames, especially in
large motion scenes. To solve these two critical problems, this paper proposes
a novel LBVC framework, namely L-LBVC. Firstly, we propose an adaptive motion
estimation module that can handle both short-term and long-term motions.
Specifically, we directly estimate the optical flows for adjacent frames and
non-adjacent frames with small motions. For non-adjacent frames with large
motions, we recursively accumulate local flows between adjacent frames to
estimate long-term flows. Secondly, we propose an adaptive motion prediction
module that can largely reduce the bit cost for motion coding. To improve the
accuracy of long-term motion prediction, we adaptively downsample reference
frames during testing to match the motion ranges observed during training.
Experiments show that our L-LBVC significantly outperforms previous
state-of-the-art LVC methods and even surpasses VVC (VTM) on some test datasets
under random access configuration.
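Recursively accumulating adjacent-frame flows into a long-term flow can be sketched with backward warping; the composition rule below is a generic flow-chaining formulation, not necessarily the exact one used in L-LBVC:

```python
import torch
import torch.nn.functional as F

def warp(flow, prev_flow):
    """Backward-warp `prev_flow` by `flow` (both (B, 2, H, W), in pixels)."""
    B, _, H, W = flow.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), 0).float().unsqueeze(0) + flow
    grid[:, 0] = 2 * grid[:, 0] / (W - 1) - 1   # normalise x to [-1, 1]
    grid[:, 1] = 2 * grid[:, 1] / (H - 1) - 1   # normalise y to [-1, 1]
    return F.grid_sample(prev_flow, grid.permute(0, 2, 3, 1),
                         align_corners=True)

def accumulate(flows):
    """Chain adjacent-frame flows into one long-range flow:
    total <- f_i + warp(total by f_i), applied recursively."""
    total = flows[0]
    for f in flows[1:]:
        total = f + warp(f, total)
    return total

flows = [torch.randn(1, 2, 32, 32) * 0.5 for _ in range(3)]
print(accumulate(flows).shape)  # torch.Size([1, 2, 32, 32])
```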
|
2504.02577 | Erik Arakelyan | Erik Arakelyan | Reasoning Inconsistencies and How to Mitigate Them in Deep Learning | PhD thesis | null | null | null | cs.AI cs.CL cs.LG cs.LO | http://creativecommons.org/licenses/by/4.0/ | The recent advancements in Deep Learning models and techniques have led to
significant strides in performance across diverse tasks and modalities.
However, while the overall capabilities of models show promising growth, our
understanding of their internal reasoning processes remains limited,
particularly concerning systematic inconsistencies or error patterns of
logical or inferential flaws. These inconsistencies may manifest as
contradictory outputs, failure to generalize across similar tasks, or erroneous
conclusions in specific contexts. Even detecting and measuring such reasoning
discrepancies is challenging, as they may arise from opaque internal
procedures, biases and imbalances in training data, or the inherent complexity
of the task. Without effective methods to detect, measure, and mitigate these
errors, there is a risk of deploying models that are biased, exploitable, or
logically unreliable. This thesis aims to address these issues by producing
novel methods for deep learning models that reason over knowledge graphs,
natural language, and images. The thesis contributes two techniques for
detecting and quantifying predictive inconsistencies originating from opaque
internal procedures in natural language and image processing models. To
mitigate inconsistencies from biases in training data, this thesis presents a
data efficient sampling method to improve fairness and performance and a
synthetic dataset generation approach in low resource scenarios. Finally, the
thesis offers two techniques to optimize the models for complex reasoning
tasks. These methods enhance model performance while allowing for more faithful
and interpretable exploration and exploitation during inference. Critically,
this thesis provides a comprehensive framework to improve the robustness,
fairness, and interpretability of deep learning models across diverse tasks and
modalities.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 13:40:55 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Arakelyan",
"Erik",
""
]
] | TITLE: Reasoning Inconsistencies and How to Mitigate Them in Deep Learning
ABSTRACT: The recent advancements in Deep Learning models and techniques have led to
significant strides in performance across diverse tasks and modalities.
However, while the overall capabilities of models show promising growth, our
understanding of their internal reasoning processes remains limited,
particularly concerning systematic inconsistencies or error patterns of
logical or inferential flaws. These inconsistencies may manifest as
contradictory outputs, failure to generalize across similar tasks, or erroneous
conclusions in specific contexts. Even detecting and measuring such reasoning
discrepancies is challenging, as they may arise from opaque internal
procedures, biases and imbalances in training data, or the inherent complexity
of the task. Without effective methods to detect, measure, and mitigate these
errors, there is a risk of deploying models that are biased, exploitable, or
logically unreliable. This thesis aims to address these issues by producing
novel methods for deep learning models that reason over knowledge graphs,
natural language, and images. The thesis contributes two techniques for
detecting and quantifying predictive inconsistencies originating from opaque
internal procedures in natural language and image processing models. To
mitigate inconsistencies from biases in training data, this thesis presents a
data efficient sampling method to improve fairness and performance and a
synthetic dataset generation approach in low resource scenarios. Finally, the
thesis offers two techniques to optimize the models for complex reasoning
tasks. These methods enhance model performance while allowing for more faithful
and interpretable exploration and exploitation during inference. Critically,
this thesis provides a comprehensive framework to improve the robustness,
fairness, and interpretability of deep learning models across diverse tasks and
modalities.
|
2504.02590 | Kepu Zhang | Kepu Zhang, Guofu Xie, Weijie Yu, Mingyue Xu, Xu Tang, Yaxin Li, Jun
Xu | LexPam: Legal Procedure Awareness-Guided Mathematical Reasoning | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The legal mathematical reasoning ability of LLMs is crucial when applying
them to real-world scenarios, as it directly affects the credibility of the
LLM. While existing legal LLMs can perform general judicial question answering,
their legal mathematical reasoning capabilities have not been trained.
Open-domain reasoning models, though able to generate detailed calculation
steps, do not follow the reasoning logic required for legal scenarios.
Additionally, there is currently a lack of legal mathematical reasoning
datasets to help validate and enhance LLMs' reasoning abilities in legal
contexts. To address these issues, we propose the first Chinese legal
Mathematical Reasoning Dataset, LexNum, which includes three common legal
mathematical reasoning scenarios: economic compensation, work injury
compensation, and traffic accident compensation. Based on LexNum, we tested the
performance of existing legal LLMs and reasoning LLMs, and introduced LexPam, a
reinforcement learning algorithm guided by legal procedural awareness to train
LLMs, enhancing their mathematical reasoning abilities in legal scenarios.
Experiments on tasks in the three legal scenarios show that the performance of
existing legal LLMs and reasoning models in legal mathematical reasoning tasks
is unsatisfactory. LexPam can enhance the LLM's ability in these tasks.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 13:54:53 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zhang",
"Kepu",
""
],
[
"Xie",
"Guofu",
""
],
[
"Yu",
"Weijie",
""
],
[
"Xu",
"Mingyue",
""
],
[
"Tang",
"Xu",
""
],
[
"Li",
"Yaxin",
""
],
[
"Xu",
"Jun",
""
]
] | TITLE: LexPam: Legal Procedure Awareness-Guided Mathematical Reasoning
ABSTRACT: The legal mathematical reasoning ability of LLMs is crucial when applying
them to real-world scenarios, as it directly affects the credibility of the
LLM. While existing legal LLMs can perform general judicial question answering,
their legal mathematical reasoning capabilities have not been trained.
Open-domain reasoning models, though able to generate detailed calculation
steps, do not follow the reasoning logic required for legal scenarios.
Additionally, there is currently a lack of legal mathematical reasoning
datasets to help validate and enhance LLMs' reasoning abilities in legal
contexts. To address these issues, we propose the first Chinese legal
Mathematical Reasoning Dataset, LexNum, which includes three common legal
mathematical reasoning scenarios: economic compensation, work injury
compensation, and traffic accident compensation. Based on LexNum, we tested the
performance of existing legal LLMs and reasoning LLMs, and introduced LexPam, a
reinforcement learning algorithm guided by legal procedural awareness to train
LLMs, enhancing their mathematical reasoning abilities in legal scenarios.
Experiments on tasks in the three legal scenarios show that the performance of
existing legal LLMs and reasoning models in legal mathematical reasoning tasks
is unsatisfactory. LexPam can enhance the LLM's ability in these tasks.
|
2504.02602 | Talha Meraj | Abdul Rehman, Talha Meraj, Aiman Mahmood Minhas, Ayisha Imran, Mohsen
Ali, Waqas Sultani, Mubarak Shah | Leveraging Sparse Annotations for Leukemia Diagnosis on the Large
Leukemia Dataset | Under Review | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Leukemia is the 10th most frequently diagnosed cancer and one of the leading
causes of cancer-related deaths worldwide. Realistic analysis of leukemia
requires White Blood Cell (WBC) localization, classification, and
morphological assessment. Despite deep learning advances in medical imaging,
leukemia analysis lacks a large, diverse multi-task dataset, while existing
small datasets lack domain diversity, limiting real world applicability. To
overcome dataset challenges, we present a large scale WBC dataset named Large
Leukemia Dataset (LLD) and novel methods for detecting WBC with their
attributes. Our contribution here is threefold. First, we present a large-scale
Leukemia dataset collected through Peripheral Blood Films (PBF) from several
patients, using multiple microscopes, cameras, and magnifications.
To enhance diagnosis explainability and medical expert acceptance, each
leukemia cell is annotated at 100x with 7 morphological attributes, ranging
from Cell Size to Nuclear Shape. Secondly, we propose a multi-task model that
not only detects WBCs but also predicts their attributes, providing an
interpretable and clinically meaningful solution. Third, we propose a method
for WBC detection with attribute analysis using sparse annotations. This
approach reduces the annotation burden on hematologists, requiring them to mark
only a small area within the field of view. Our method enables the model to
leverage the entire field of view rather than just the annotated regions,
enhancing learning efficiency and diagnostic accuracy. From diagnosis
explainability to overcoming domain shift challenges, presented datasets could
be used for many challenging aspects of microscopic image analysis. The
datasets, code, and demo are available at:
https://im.itu.edu.pk/sparse-leukemiaattri/
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 14:04:02 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Rehman",
"Abdul",
""
],
[
"Meraj",
"Talha",
""
],
[
"Minhas",
"Aiman Mahmood",
""
],
[
"Imran",
"Ayisha",
""
],
[
"Ali",
"Mohsen",
""
],
[
"Sultani",
"Waqas",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: Leveraging Sparse Annotations for Leukemia Diagnosis on the Large
Leukemia Dataset
ABSTRACT: Leukemia is the 10th most frequently diagnosed cancer and one of the leading
causes of cancer-related deaths worldwide. Realistic analysis of leukemia
requires White Blood Cell (WBC) localization, classification, and
morphological assessment. Despite deep learning advances in medical imaging,
leukemia analysis lacks a large, diverse multi-task dataset, while existing
small datasets lack domain diversity, limiting real world applicability. To
overcome dataset challenges, we present a large scale WBC dataset named Large
Leukemia Dataset (LLD) and novel methods for detecting WBC with their
attributes. Our contribution here is threefold. First, we present a large-scale
Leukemia dataset collected through Peripheral Blood Films (PBF) from several
patients, using multiple microscopes, cameras, and magnifications.
To enhance diagnosis explainability and medical expert acceptance, each
leukemia cell is annotated at 100x with 7 morphological attributes, ranging
from Cell Size to Nuclear Shape. Secondly, we propose a multi-task model that
not only detects WBCs but also predicts their attributes, providing an
interpretable and clinically meaningful solution. Third, we propose a method
for WBC detection with attribute analysis using sparse annotations. This
approach reduces the annotation burden on hematologists, requiring them to mark
only a small area within the field of view. Our method enables the model to
leverage the entire field of view rather than just the annotated regions,
enhancing learning efficiency and diagnostic accuracy. From diagnosis
explainability to overcoming domain shift challenges, presented datasets could
be used for many challenging aspects of microscopic image analysis. The
datasets, code, and demo are available at:
https://im.itu.edu.pk/sparse-leukemiaattri/
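The sparse-annotation training idea, supervising only inside the small marked region while the network still sees the full field of view, can be sketched as a masked loss; the shapes and the cross-entropy choice are assumptions, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def sparse_region_loss(pred_logits, target, annotated_mask):
    """Compute the supervised loss only inside the small annotated region,
    while the forward pass still covers the full field of view (sketch)."""
    # pred_logits: (B, C, H, W); target: (B, H, W); annotated_mask: (B, H, W)
    per_pixel = F.cross_entropy(pred_logits, target, reduction="none")
    return (per_pixel * annotated_mask).sum() / annotated_mask.sum().clamp(min=1)

pred = torch.randn(1, 5, 64, 64)
tgt = torch.randint(0, 5, (1, 64, 64))
mask = torch.zeros(1, 64, 64, dtype=torch.bool)
mask[:, :16, :16] = True          # hematologist marked a small area only
print(sparse_region_loss(pred, tgt, mask))
```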
|
2504.02604 | Hedi Naouara | Hedi Naouara, Jean-Pierre Lorr\'e, J\'er\^ome Louradour | LinTO Audio and Textual Datasets to Train and Evaluate Automatic Speech
Recognition in Tunisian Arabic Dialect | null | null | null | null | cs.CL cs.SD eess.AS | http://creativecommons.org/licenses/by-sa/4.0/ | Developing Automatic Speech Recognition (ASR) systems for Tunisian Arabic
Dialect is challenging due to the dialect's linguistic complexity and the
scarcity of annotated speech datasets. To address these challenges, we propose
the LinTO audio and textual datasets -- comprehensive resources that capture
phonological and lexical features of Tunisian Arabic Dialect. These datasets
include a variety of texts from numerous sources and real-world audio samples
featuring diverse speakers and code-switching between Tunisian Arabic Dialect
and English or French. By providing high-quality audio paired with precise
transcriptions, the LinTO audio and textual datasets aim to provide qualitative
material to build and benchmark ASR systems for the Tunisian Arabic Dialect.
Keywords -- Tunisian Arabic Dialect, Speech-to-Text, Low-Resource Languages,
Audio Data Augmentation
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 14:05:56 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Naouara",
"Hedi",
""
],
[
"Lorré",
"Jean-Pierre",
""
],
[
"Louradour",
"Jérôme",
""
]
] | TITLE: LinTO Audio and Textual Datasets to Train and Evaluate Automatic Speech
Recognition in Tunisian Arabic Dialect
ABSTRACT: Developing Automatic Speech Recognition (ASR) systems for Tunisian Arabic
Dialect is challenging due to the dialect's linguistic complexity and the
scarcity of annotated speech datasets. To address these challenges, we propose
the LinTO audio and textual datasets -- comprehensive resources that capture
phonological and lexical features of Tunisian Arabic Dialect. These datasets
include a variety of texts from numerous sources and real-world audio samples
featuring diverse speakers and code-switching between Tunisian Arabic Dialect
and English or French. By providing high-quality audio paired with precise
transcriptions, the LinTO audio and textual datasets aim to provide qualitative
material to build and benchmark ASR systems for the Tunisian Arabic Dialect.
Keywords -- Tunisian Arabic Dialect, Speech-to-Text, Low-Resource Languages,
Audio Data Augmentation
|
2504.02605 | Daoguang Zan | Daoguang Zan and Zhirong Huang and Wei Liu and Hanwu Chen and Linhao
Zhang and Shulin Xin and Lu Chen and Qi Liu and Xiaojian Zhong and Aoyan Li
and Siyao Liu and Yongsheng Xiao and Liangqiang Chen and Yuyu Zhang and Jing
Su and Tianyu Liu and Rui Long and Kai Shen and Liang Xiang | Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving | null | null | null | null | cs.SE cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | The task of issue resolving is to modify a codebase to generate a patch that
addresses a given issue. However, existing benchmarks, such as SWE-bench, focus
almost exclusively on Python, making them insufficient for evaluating Large
Language Models (LLMs) across diverse software ecosystems. To address this, we
introduce a multilingual issue-resolving benchmark, called Multi-SWE-bench,
covering Java, TypeScript, JavaScript, Go, Rust, C, and C++. It includes a
total of 1,632 high-quality instances, which were carefully annotated from
2,456 candidates by 68 expert annotators, ensuring that the benchmark can
provide an accurate and reliable evaluation. Based on Multi-SWE-bench, we
evaluate a series of state-of-the-art models using three representative methods
(Agentless, SWE-agent, and OpenHands) and present a comprehensive analysis with
key empirical insights. In addition, we launch a Multi-SWE-RL open-source
community, aimed at building large-scale reinforcement learning (RL) training
datasets for issue-resolving tasks. As an initial contribution, we release a
set of 4,723 well-structured instances spanning seven programming languages,
laying a solid foundation for RL research in this domain. More importantly, we
open-source our entire data production pipeline, along with detailed tutorials,
encouraging the open-source community to continuously contribute and expand the
dataset. We envision our Multi-SWE-bench and the ever-growing Multi-SWE-RL
community as catalysts for advancing RL toward its full potential, bringing us
one step closer to the dawn of AGI.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 14:06:17 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zan",
"Daoguang",
""
],
[
"Huang",
"Zhirong",
""
],
[
"Liu",
"Wei",
""
],
[
"Chen",
"Hanwu",
""
],
[
"Zhang",
"Linhao",
""
],
[
"Xin",
"Shulin",
""
],
[
"Chen",
"Lu",
""
],
[
"Liu",
"Qi",
""
],
[
"Zhong",
"Xiaojian",
""
],
[
"Li",
"Aoyan",
""
],
[
"Liu",
"Siyao",
""
],
[
"Xiao",
"Yongsheng",
""
],
[
"Chen",
"Liangqiang",
""
],
[
"Zhang",
"Yuyu",
""
],
[
"Su",
"Jing",
""
],
[
"Liu",
"Tianyu",
""
],
[
"Long",
"Rui",
""
],
[
"Shen",
"Kai",
""
],
[
"Xiang",
"Liang",
""
]
] | TITLE: Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving
ABSTRACT: The task of issue resolving is to modify a codebase to generate a patch that
addresses a given issue. However, existing benchmarks, such as SWE-bench, focus
almost exclusively on Python, making them insufficient for evaluating Large
Language Models (LLMs) across diverse software ecosystems. To address this, we
introduce a multilingual issue-resolving benchmark, called Multi-SWE-bench,
covering Java, TypeScript, JavaScript, Go, Rust, C, and C++. It includes a
total of 1,632 high-quality instances, which were carefully annotated from
2,456 candidates by 68 expert annotators, ensuring that the benchmark can
provide an accurate and reliable evaluation. Based on Multi-SWE-bench, we
evaluate a series of state-of-the-art models using three representative methods
(Agentless, SWE-agent, and OpenHands) and present a comprehensive analysis with
key empirical insights. In addition, we launch a Multi-SWE-RL open-source
community, aimed at building large-scale reinforcement learning (RL) training
datasets for issue-resolving tasks. As an initial contribution, we release a
set of 4,723 well-structured instances spanning seven programming languages,
laying a solid foundation for RL research in this domain. More importantly, we
open-source our entire data production pipeline, along with detailed tutorials,
encouraging the open-source community to continuously contribute and expand the
dataset. We envision our Multi-SWE-bench and the ever-growing Multi-SWE-RL
community as catalysts for advancing RL toward its full potential, bringing us
one step closer to the dawn of AGI.
|
2504.02606 | Jonas Teufel | Jonas Teufel, Annika Leinweber, Pascal Friederich | Improving Counterfactual Truthfulness for Molecular Property Prediction
through Uncertainty Quantification | 24 pages, 5 figures, 4 tables, accepted at the 3rd xAI World
Conference | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Explainable AI (xAI) interventions aim to improve interpretability for
complex black-box models, not only to improve user trust but also as a means to
extract scientific insights from high-performing predictive systems. In
molecular property prediction, counterfactual explanations offer a way to
understand predictive behavior by highlighting which minimal perturbations in
the input molecular structure cause the greatest deviation in the predicted
property. However, such explanations only allow for meaningful scientific
insights if they reflect the distribution of the true underlying property -- a
feature we define as counterfactual truthfulness. To increase this
truthfulness, we propose the integration of uncertainty estimation techniques
to filter counterfactual candidates with high predicted uncertainty. Through
computational experiments with synthetic and real-world datasets, we
demonstrate that traditional uncertainty estimation methods, such as ensembles
and mean-variance estimation, can already substantially reduce the average
prediction error and increase counterfactual truthfulness, especially for
out-of-distribution settings. Our results highlight the importance and
potential impact of incorporating uncertainty estimation into explainability
methods, especially considering the relatively high effectiveness of low-effort
interventions like model ensembles.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 14:07:30 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Teufel",
"Jonas",
""
],
[
"Leinweber",
"Annika",
""
],
[
"Friederich",
"Pascal",
""
]
] | TITLE: Improving Counterfactual Truthfulness for Molecular Property Prediction
through Uncertainty Quantification
ABSTRACT: Explainable AI (xAI) interventions aim to improve interpretability for
complex black-box models, not only to improve user trust but also as a means to
extract scientific insights from high-performing predictive systems. In
molecular property prediction, counterfactual explanations offer a way to
understand predictive behavior by highlighting which minimal perturbations in
the input molecular structure cause the greatest deviation in the predicted
property. However, such explanations only allow for meaningful scientific
insights if they reflect the distribution of the true underlying property -- a
feature we define as counterfactual truthfulness. To increase this
truthfulness, we propose the integration of uncertainty estimation techniques
to filter counterfactual candidates with high predicted uncertainty. Through
computational experiments with synthetic and real-world datasets, we
demonstrate that traditional uncertainty estimation methods, such as ensembles
and mean-variance estimation, can already substantially reduce the average
prediction error and increase counterfactual truthfulness, especially for
out-of-distribution settings. Our results highlight the importance and
potential impact of incorporating uncertainty estimation into explainability
methods, especially considering the relatively high effectiveness of low-effort
interventions like model ensembles.
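Filtering counterfactual candidates by predictive uncertainty can be sketched with an ensemble-variance gate; the threshold and the toy predictors below are illustrative assumptions:

```python
import numpy as np

def filter_counterfactuals(candidates, ensemble, max_std=0.15):
    """Keep only counterfactual candidates on which an ensemble of property
    predictors agrees; high disagreement suggests an untruthful candidate."""
    kept = []
    for x in candidates:
        preds = np.array([m(x) for m in ensemble])
        if preds.std() <= max_std:
            kept.append((x, preds.mean()))
    return kept

# Toy ensemble: three "property predictors" with different bias offsets.
rng = np.random.default_rng(1)
ensemble = [lambda x, b=rng.normal(0.0, s): float(x.sum() + b)
            for s in (0.05, 0.1, 0.5)]
candidates = [rng.normal(size=4) for _ in range(5)]
print(len(filter_counterfactuals(candidates, ensemble)))
```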
|
2504.02615 | Shahid Shafi Dar | Aman Singh, Shahid Shafi Dar, Ranveer Singh, and Nagendra Kumar | A Hybrid Similarity-Aware Graph Neural Network with Transformer for Node
Classification | null | null | 10.1016/j.eswa.2025.127292 | null | cs.SI | http://creativecommons.org/licenses/by/4.0/ | Node classification has gained significant importance in graph deep learning
with real-world applications such as recommendation systems, drug discovery,
and citation networks. Graph Convolutional Networks and Graph Transformers have
achieved superior performance in node classification tasks. However, the key
concern with Graph Convolutional Networks is over-squashing, which limits their
ability to capture long-range dependencies in the network. Additionally, Graph
Transformers face scalability challenges, making it difficult to process large
graphs efficiently. To address this, we propose a novel framework, A Hybrid
SImilarity-Aware Graph Neural Network with Transformer for Node Classification
(SIGNNet), which capitalizes on local and global structural information and
enhances the model's capability to effectively capture fine-grained
relationships and broader contextual patterns within the graph structure. The
proposed method leverages Graph Convolutional Networks alongside a score-based
mechanism to effectively capture local and global node interactions while
addressing the limitations of over-squashing. Our proposed method employs a
novel Personalized PageRank-based node sampling method to address scalability
issues by generating subgraphs of nodes. Additionally, SIGNNet incorporates a
novel attention mechanism, Structure-Aware Multi-Head Attention (SA-MHA), which
integrates node structural information for informed attention weighting,
enabling the model to prioritize nodes based on topological significance.
Extensive experiments demonstrate the significant improvements achieved by the
proposed method over existing state-of-the-art methods, with average accuracy
gains of 6.03%, 5.47%, 4.78%, 19.10%, 19.61%, 7.22%, 19.54%, and 14.94% on
Cora, Citeseer, CS, Wisconsin, Texas, Actor, Cornell and Chameleon datasets,
respectively.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 14:14:37 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Singh",
"Aman",
""
],
[
"Dar",
"Shahid Shafi",
""
],
[
"Singh",
"Ranveer",
""
],
[
"Kumar",
"Nagendra",
""
]
] | TITLE: A Hybrid Similarity-Aware Graph Neural Network with Transformer for Node
Classification
ABSTRACT: Node classification has gained significant importance in graph deep learning
with real-world applications such as recommendation systems, drug discovery,
and citation networks. Graph Convolutional Networks and Graph Transformers have
achieved superior performance in node classification tasks. However, the key
concern with Graph Convolutional Networks is over-squashing, which limits their
ability to capture long-range dependencies in the network. Additionally, Graph
Transformers face scalability challenges, making it difficult to process large
graphs efficiently. To address this, we propose a novel framework, A Hybrid
SImilarity-Aware Graph Neural Network with Transformer for Node Classification
(SIGNNet), which capitalizes on local and global structural information and
enhances the model's capability to capture fine-grained relationships and
broader contextual patterns within the graph structure. The
proposed method leverages Graph Convolutional Networks alongside a score-based
mechanism to effectively capture local and global node interactions while
addressing the limitations of over-squashing. Our proposed method employs a
novel Personalized PageRank-based node sampling method to address scalability
issues by generating subgraphs of nodes. Additionally, SIGNNet incorporates a
novel attention mechanism, Structure-Aware Multi-Head Attention (SA-MHA), which
integrates node structural information for informed attention weighting,
enabling the model to prioritize nodes based on topological significance.
Extensive experiments demonstrate the significant improvements achieved by the
proposed method over existing state-of-the-art methods, with average accuracy
gains of 6.03%, 5.47%, 4.78%, 19.10%, 19.61%, 7.22%, 19.54%, and 14.94% on
Cora, Citeseer, CS, Wisconsin, Texas, Actor, Cornell and Chameleon datasets,
respectively.
|
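The Personalized PageRank-based subgraph sampling described above can be sketched with a plain power iteration. The dense adjacency matrix, restart probability, and subgraph size below are placeholder choices for illustration, not SIGNNet's exact configuration.

```python
import numpy as np

def personalized_pagerank(adj, seed, alpha=0.15, iters=50):
    """Power iteration for Personalized PageRank scores around one seed node.
    adj: dense (n, n) adjacency matrix; seed: index of the anchor node."""
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    trans = adj / deg                      # row-stochastic transition matrix
    restart = np.zeros(n); restart[seed] = 1.0
    scores = restart.copy()
    for _ in range(iters):
        scores = (1 - alpha) * trans.T @ scores + alpha * restart
    return scores

def sample_subgraph(adj, seed, k=32):
    """Return the k nodes with the highest PPR score w.r.t. the seed node,
    i.e., a localized subgraph that a scalable model can process."""
    scores = personalized_pagerank(adj, seed)
    return np.argsort(-scores)[:k]
```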
2504.02617 | Jiehong Lin | Lihua Liu, Jiehong Lin, Zhenxin Liu, Kui Jia | PicoPose: Progressive Pixel-to-Pixel Correspondence Learning for Novel
Object Pose Estimation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Novel object pose estimation from RGB images presents a significant challenge
for zero-shot generalization, as it involves estimating the relative 6D
transformation between an RGB observation and a CAD model of an object that was
not seen during training. In this paper, we introduce PicoPose, a novel
framework designed to tackle this task using a three-stage pixel-to-pixel
correspondence learning process. Firstly, PicoPose matches features from the
RGB observation with those from rendered object templates, identifying the
best-matched template and establishing coarse correspondences. Secondly,
PicoPose smooths the correspondences by globally regressing a 2D affine
transformation, including in-plane rotation, scale, and 2D translation, from
the coarse correspondence map. Thirdly, PicoPose applies the affine
transformation to the feature map of the best-matched template and learns
correspondence offsets within local regions to achieve fine-grained
correspondences. By progressively refining the correspondences, PicoPose
significantly improves the accuracy of object poses computed via PnP/RANSAC.
PicoPose achieves state-of-the-art performance on the seven core datasets of
the BOP benchmark, demonstrating exceptional generalization to novel objects
represented by CAD models or object reference images. Code and models are
available at https://github.com/foollh/PicoPose.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 14:16:41 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liu",
"Lihua",
""
],
[
"Lin",
"Jiehong",
""
],
[
"Liu",
"Zhenxin",
""
],
[
"Jia",
"Kui",
""
]
] | TITLE: PicoPose: Progressive Pixel-to-Pixel Correspondence Learning for Novel
Object Pose Estimation
ABSTRACT: Novel object pose estimation from RGB images presents a significant challenge
for zero-shot generalization, as it involves estimating the relative 6D
transformation between an RGB observation and a CAD model of an object that was
not seen during training. In this paper, we introduce PicoPose, a novel
framework designed to tackle this task using a three-stage pixel-to-pixel
correspondence learning process. Firstly, PicoPose matches features from the
RGB observation with those from rendered object templates, identifying the
best-matched template and establishing coarse correspondences. Secondly,
PicoPose smooths the correspondences by globally regressing a 2D affine
transformation, including in-plane rotation, scale, and 2D translation, from
the coarse correspondence map. Thirdly, PicoPose applies the affine
transformation to the feature map of the best-matched template and learns
correspondence offsets within local regions to achieve fine-grained
correspondences. By progressively refining the correspondences, PicoPose
significantly improves the accuracy of object poses computed via PnP/RANSAC.
PicoPose achieves state-of-the-art performance on the seven core datasets of
the BOP benchmark, demonstrating exceptional generalization to novel objects
represented by CAD models or object reference images. Code and models are
available at https://github.com/foollh/PicoPose.
|
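The second stage described above, warping the best-matched template with a globally regressed 2D affine transformation, corresponds to a standard grid-sample operation. The sketch below assumes 0-dim tensors for the pose parameters and a single-image batch; it illustrates the affine warp only, not PicoPose's regression network.

```python
import torch
import torch.nn.functional as F

def warp_feature_map(feat, angle, scale, tx, ty):
    """Warp a template feature map with an in-plane rotation, scale, and
    2D translation, as between the coarse and fine matching stages.
    feat: (N, C, H, W) tensor; angle in radians; tx, ty in normalized coords;
    all pose arguments are 0-dim tensors."""
    cos, sin = torch.cos(angle), torch.sin(angle)
    theta = torch.stack([
        torch.stack([scale * cos, -scale * sin, tx]),
        torch.stack([scale * sin,  scale * cos, ty]),
    ]).unsqueeze(0).expand(feat.size(0), 2, 3)      # (N, 2, 3) affine matrix
    grid = F.affine_grid(theta, feat.shape, align_corners=False)
    return F.grid_sample(feat, grid, align_corners=False)

feat = torch.randn(1, 64, 32, 32)
out = warp_feature_map(feat, *(torch.tensor(v) for v in (0.3, 1.1, 0.05, -0.02)))
```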
2504.02647 | Feng Gao | Feng Gao, Miao Fu, Jingchao Cao, Junyu Dong, Qian Du | Adaptive Frequency Enhancement Network for Remote Sensing Image Semantic
Segmentation | Accepted by IEEE TGRS 2025 | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Semantic segmentation of high-resolution remote sensing images plays a
crucial role in land-use monitoring and urban planning. Recent remarkable
progress in deep learning-based methods makes it possible to generate
satisfactory segmentation results. However, existing methods still face
challenges in adapting network parameters to various land cover distributions
and enhancing the interaction between spatial and frequency domain features. To
address these challenges, we propose the Adaptive Frequency Enhancement Network
(AFENet), which integrates two key components: the Adaptive Frequency and
Spatial feature Interaction Module (AFSIM) and the Selective feature Fusion
Module (SFM). AFSIM dynamically separates and modulates high- and low-frequency
features according to the content of the input image. It adaptively generates
two masks to separate high- and low-frequency components, thereby providing
optimal details and contextual supplementary information for ground object
feature representation. SFM selectively fuses global context and local detailed
features to enhance the network's representation capability. Hence, the
interactions between frequency and spatial features are further enhanced.
Extensive experiments on three publicly available datasets demonstrate that the
proposed AFENet outperforms state-of-the-art methods. In addition, we also
validate the effectiveness of AFSIM and SFM in managing diverse land cover
types and complex scenarios. Our codes are available at
https://github.com/oucailab/AFENet.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 14:42:49 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Gao",
"Feng",
""
],
[
"Fu",
"Miao",
""
],
[
"Cao",
"Jingchao",
""
],
[
"Dong",
"Junyu",
""
],
[
"Du",
"Qian",
""
]
] | TITLE: Adaptive Frequency Enhancement Network for Remote Sensing Image Semantic
Segmentation
ABSTRACT: Semantic segmentation of high-resolution remote sensing images plays a
crucial role in land-use monitoring and urban planning. Recent remarkable
progress in deep learning-based methods makes it possible to generate
satisfactory segmentation results. However, existing methods still face
challenges in adapting network parameters to various land cover distributions
and enhancing the interaction between spatial and frequency domain features. To
address these challenges, we propose the Adaptive Frequency Enhancement Network
(AFENet), which integrates two key components: the Adaptive Frequency and
Spatial feature Interaction Module (AFSIM) and the Selective feature Fusion
Module (SFM). AFSIM dynamically separates and modulates high- and low-frequency
features according to the content of the input image. It adaptively generates
two masks to separate high- and low-frequency components, thereby providing
optimal details and contextual supplementary information for ground object
feature representation. SFM selectively fuses global context and local detailed
features to enhance the network's representation capability. Hence, the
interactions between frequency and spatial features are further enhanced.
Extensive experiments on three publicly available datasets demonstrate that the
proposed AFENet outperforms state-of-the-art methods. In addition, we also
validate the effectiveness of AFSIM and SFM in managing diverse land cover
types and complex scenarios. Our codes are available at
https://github.com/oucailab/AFENet.
|
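A minimal sketch of the frequency-separation step is given below. AFSIM generates its two masks adaptively from image content; for illustration this uses a fixed radial cutoff in the Fourier domain, which is an assumption rather than the module's actual design.

```python
import torch

def split_frequencies(x, cutoff=0.25):
    """Split an image batch into low- and high-frequency components with a
    radial mask in the Fourier domain (a fixed-mask stand-in for AFSIM's
    adaptively generated masks). x: (N, C, H, W) real tensor."""
    freq = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    H, W = x.shape[-2:]
    yy = torch.linspace(-0.5, 0.5, H).view(-1, 1).expand(H, W)
    xx = torch.linspace(-0.5, 0.5, W).view(1, -1).expand(H, W)
    low_mask = ((yy ** 2 + xx ** 2).sqrt() <= cutoff).to(x.dtype)
    low = torch.fft.ifft2(torch.fft.ifftshift(freq * low_mask, dim=(-2, -1))).real
    high = x - low
    return low, high
```

The low branch then carries contextual information and the high branch carries edge detail, which is the split the interaction module operates on.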
2504.02653 | Max Heinz Herkersdorf | Max Herkersdorf and Oliver Nelles | Online and Offline Space-Filling Input Design for Nonlinear System
Identification: A Receding Horizon Control-Based Approach | null | null | null | null | eess.SY cs.SY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The effectiveness of data-driven techniques heavily depends on the input
signal used to generate the estimation data. However, a significant research
gap exists in the field of input design for nonlinear dynamic system
identification. In particular, existing methods largely overlook the
minimization of the generalization error, i.e., model inaccuracies in regions
not covered by the estimation dataset. This work addresses this gap by
proposing an input design method that embeds a novel optimality criterion
within a receding horizon control (RHC)-based optimization framework. The
distance-based optimality criterion induces a space-filling design within a
user-defined region of interest in a surrogate model's input space, requiring
only minimal prior knowledge. Additionally, the method is applicable both
online, where model parameters are continuously updated based on process
observations, and offline, where a fixed model is employed. The space-filling
performance of the proposed strategy is evaluated on an artificial example and
compared to state-of-the-art methods, demonstrating superior efficiency in
exploring process operating spaces.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 14:50:52 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Herkersdorf",
"Max",
""
],
[
"Nelles",
"Oliver",
""
]
] | TITLE: Online and Offline Space-Filling Input Design for Nonlinear System
Identification: A Receding Horizon Control-Based Approach
ABSTRACT: The effectiveness of data-driven techniques heavily depends on the input
signal used to generate the estimation data. However, a significant research
gap exists in the field of input design for nonlinear dynamic system
identification. In particular, existing methods largely overlook the
minimization of the generalization error, i.e., model inaccuracies in regions
not covered by the estimation dataset. This work addresses this gap by
proposing an input design method that embeds a novel optimality criterion
within a receding horizon control (RHC)-based optimization framework. The
distance-based optimality criterion induces a space-filling design within a
user-defined region of interest in a surrogate model's input space, requiring
only minimal prior knowledge. Additionally, the method is applicable both
online, where model parameters are continuously updated based on process
observations, and offline, where a fixed model is employed. The space-filling
performance of the proposed strategy is evaluated on an artificial example and
compared to state-of-the-art methods, demonstrating superior efficiency in
exploring process operating spaces.
|
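The distance-based, space-filling criterion can be illustrated with a greedy one-step version: score each candidate input by its distance to the nearest point already visited, and pick the farthest. The receding-horizon optimization in the paper looks several steps ahead and respects dynamics; this sketch only conveys the criterion itself.

```python
import numpy as np

def space_filling_score(candidate, visited):
    """Distance-based criterion: the score of a candidate input point is its
    distance to the nearest already-visited point, so maximizing it pushes
    the design toward unexplored regions of the (surrogate) input space."""
    return np.min(np.linalg.norm(visited - candidate, axis=1))

def greedy_input_design(grid, n_points):
    """Greedy one-step stand-in for the receding-horizon optimization:
    repeatedly pick the grid point farthest from everything chosen so far."""
    chosen = [grid[0]]
    for _ in range(n_points - 1):
        scores = [space_filling_score(g, np.array(chosen)) for g in grid]
        chosen.append(grid[int(np.argmax(scores))])
    return np.array(chosen)
```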
2504.02671 | Zishuo Liu | Zishuo Liu, Carlos Rabat Villarreal, Mostafa Rahgouy, Amit Das, Zheng
Zhang, Chang Ren, Dongji Feng | LLM for Complex Reasoning Task: An Exploratory Study in Fermi Problems | 7 pages,7 tables, 5 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Fermi Problems (FPs) are mathematical reasoning tasks that require human-like
logic and numerical reasoning. Unlike other reasoning questions, FPs often
involve real-world impracticalities or ambiguous concepts, making them
challenging even for humans to solve. Despite advancements in AI, particularly
with large language models (LLMs) in various reasoning tasks, FPs remain
relatively under-explored. This work conducted an exploratory study to examine
the capabilities and limitations of LLMs in solving FPs. We first evaluated the
overall performance of three advanced LLMs using a publicly available FP
dataset. We designed prompts according to the recently proposed TELeR taxonomy,
including a zero-shot scenario. Results indicated that all three LLMs achieved
a fp_score (range between 0 - 1) below 0.5, underscoring the inherent
difficulty of these reasoning tasks. To further investigate, we categorized FPs
into standard and specific questions, hypothesizing that LLMs would perform
better on standard questions, which are characterized by clarity and
conciseness, than on specific ones. Comparative experiments confirmed this
hypothesis, demonstrating that LLMs performed better on standard FPs in terms
of both accuracy and efficiency.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 15:13:36 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liu",
"Zishuo",
""
],
[
"Villarreal",
"Carlos Rabat",
""
],
[
"Rahgouy",
"Mostafa",
""
],
[
"Das",
"Amit",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Ren",
"Chang",
""
],
[
"Feng",
"Dongji",
""
]
] | TITLE: LLM for Complex Reasoning Task: An Exploratory Study in Fermi Problems
ABSTRACT: Fermi Problems (FPs) are mathematical reasoning tasks that require human-like
logic and numerical reasoning. Unlike other reasoning questions, FPs often
involve real-world impracticalities or ambiguous concepts, making them
challenging even for humans to solve. Despite advancements in AI, particularly
with large language models (LLMs) in various reasoning tasks, FPs remain
relatively under-explored. This work conducted an exploratory study to examine
the capabilities and limitations of LLMs in solving FPs. We first evaluated the
overall performance of three advanced LLMs using a publicly available FP
dataset. We designed prompts according to the recently proposed TELeR taxonomy,
including a zero-shot scenario. Results indicated that all three LLMs achieved
a fp_score (range between 0 - 1) below 0.5, underscoring the inherent
difficulty of these reasoning tasks. To further investigate, we categorized FPs
into standard and specific questions, hypothesizing that LLMs would perform
better on standard questions, which are characterized by clarity and
conciseness, than on specific ones. Comparative experiments confirmed this
hypothesis, demonstrating that LLMs performed better on standard FPs in terms
of both accuracy and efficiency.
|
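The abstract does not define fp_score; Fermi-style answers are commonly graded by order-of-magnitude closeness, so the following is a hypothetical stand-in that maps a prediction and a gold answer into [0, 1]. The exact fp_score used in the paper may differ.

```python
import math

def order_of_magnitude_score(pred, gold):
    """Hypothetical fp_score-like metric in [0, 1]: full credit within one
    order of magnitude of the gold answer, decaying linearly to zero at
    three orders of magnitude. (Not the paper's exact definition.)"""
    if pred <= 0 or gold <= 0:
        return 0.0
    gap = abs(math.log10(pred) - math.log10(gold))
    return max(0.0, min(1.0, (3.0 - gap) / 2.0))

print(order_of_magnitude_score(1e7, 2e8))   # ~0.85: off by just over one order
```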
2504.02674 | Jacqueline Rowe Ms | Jacqueline Rowe, Edward Gow-Smith, Mark Hepple | Limitations of Religious Data and the Importance of the Target Domain:
Towards Machine Translation for Guinea-Bissau Creole | 9 pages, 5 figures, 7 tables. To be published in Proceedings of the
8th Workshop on Technologies for Machine Translation of Low-Resource
Languages (NAACL 2025) | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We introduce a new dataset for machine translation of Guinea-Bissau Creole
(Kiriol), comprising around 40 thousand parallel sentences to English and
Portuguese. This dataset is made up of predominantly religious data (from the
Bible and texts from the Jehovah's Witnesses), but also a small amount of
general domain data (from a dictionary). This mirrors the typical resource
availability of many low-resource languages. We train a number of
transformer-based models to investigate how to improve domain transfer from
religious data to a more general domain. We find that adding even 300 sentences
from the target domain when training substantially improves the translation
performance, highlighting the importance and need for data collection for
low-resource languages, even at a small scale. We additionally find that
Portuguese-to-Kiriol translation models perform better on average than other
source and target language pairs, and investigate how this relates to the
morphological complexity of the languages involved and the degree of lexical
overlap between creoles and lexifiers. Overall, we hope our work will stimulate
research into Kiriol and into how machine translation might better support
creole languages in general.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 15:14:19 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Rowe",
"Jacqueline",
""
],
[
"Gow-Smith",
"Edward",
""
],
[
"Hepple",
"Mark",
""
]
] | TITLE: Limitations of Religious Data and the Importance of the Target Domain:
Towards Machine Translation for Guinea-Bissau Creole
ABSTRACT: We introduce a new dataset for machine translation of Guinea-Bissau Creole
(Kiriol), comprising around 40 thousand parallel sentences to English and
Portuguese. This dataset is made up of predominantly religious data (from the
Bible and texts from the Jehovah's Witnesses), but also a small amount of
general domain data (from a dictionary). This mirrors the typical resource
availability of many low-resource languages. We train a number of
transformer-based models to investigate how to improve domain transfer from
religious data to a more general domain. We find that adding even 300 sentences
from the target domain when training substantially improves the translation
performance, highlighting the importance and need for data collection for
low-resource languages, even at a small scale. We additionally find that
Portuguese-to-Kiriol translation models perform better on average than other
source and target language pairs, and investigate how this relates to the
morphological complexity of the languages involved and the degree of lexical
overlap between creoles and lexifiers. Overall, we hope our work will stimulate
research into Kiriol and into how machine translation might better support
creole languages in general.
|
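The key intervention reported above, adding roughly 300 target-domain sentence pairs to an otherwise religious-domain training set, is a simple data-mixing step. The helper below is a sketch under that reading; the corpus variable names are illustrative.

```python
import random

def mix_training_data(religious_pairs, general_pairs, n_general=300, seed=0):
    """Augment a predominantly religious parallel corpus with a small number
    of general-domain sentence pairs, mirroring the finding that even ~300
    in-domain sentences substantially improve domain transfer."""
    rng = random.Random(seed)
    sample = rng.sample(general_pairs, min(n_general, len(general_pairs)))
    mixed = list(religious_pairs) + sample
    rng.shuffle(mixed)
    return mixed
```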
2504.02685 | Ivan Sevillano-Garc\'ia | Iv\'an Sevillano-Garc\'ia, Juli\'an Luengo, Francisco Herrera | STOOD-X methodology: using statistical nonparametric test for OOD
Detection Large-Scale datasets enhanced with explainability | 18 pages, 7 Figures | null | null | null | cs.LG cs.AI cs.HC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Out-of-Distribution (OOD) detection is a critical task in machine learning,
particularly in safety-sensitive applications where model failures can have
serious consequences. However, current OOD detection methods often suffer from
restrictive distributional assumptions, limited scalability, and a lack of
interpretability. To address these challenges, we propose STOOD-X, a two-stage
methodology that combines a Statistical nonparametric Test for OOD Detection
with eXplainability enhancements. In the first stage, STOOD-X uses
feature-space distances and a Wilcoxon-Mann-Whitney test to identify OOD
samples without assuming a specific feature distribution. In the second stage,
it generates user-friendly, concept-based visual explanations that reveal the
features driving each decision, aligning with the BLUE XAI paradigm. Through
extensive experiments on benchmark datasets and multiple architectures, STOOD-X
achieves competitive performance against state-of-the-art post hoc OOD
detectors, particularly in high-dimensional and complex settings. In addition,
its explainability framework enables human oversight, bias detection, and model
debugging, fostering trust and collaboration between humans and AI systems. The
STOOD-X methodology therefore offers a robust, explainable, and scalable
solution for real-world OOD detection tasks.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 15:26:03 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Sevillano-García",
"Iván",
""
],
[
"Luengo",
"Julián",
""
],
[
"Herrera",
"Francisco",
""
]
] | TITLE: STOOD-X methodology: using statistical nonparametric test for OOD
Detection Large-Scale datasets enhanced with explainability
ABSTRACT: Out-of-Distribution (OOD) detection is a critical task in machine learning,
particularly in safety-sensitive applications where model failures can have
serious consequences. However, current OOD detection methods often suffer from
restrictive distributional assumptions, limited scalability, and a lack of
interpretability. To address these challenges, we propose STOOD-X, a two-stage
methodology that combines a Statistical nonparametric Test for OOD Detection
with eXplainability enhancements. In the first stage, STOOD-X uses
feature-space distances and a Wilcoxon-Mann-Whitney test to identify OOD
samples without assuming a specific feature distribution. In the second stage,
it generates user-friendly, concept-based visual explanations that reveal the
features driving each decision, aligning with the BLUE XAI paradigm. Through
extensive experiments on benchmark datasets and multiple architectures, STOOD-X
achieves competitive performance against state-of-the-art post hoc OOD
detectors, particularly in high-dimensional and complex settings. In addition,
its explainability framework enables human oversight, bias detection, and model
debugging, fostering trust and collaboration between humans and AI systems. The
STOOD-X methodology therefore offers a robust, explainable, and scalable
solution for real-world OOD detection tasks.
|
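The first stage of STOOD-X reduces to a nonparametric two-sample test on feature-space distances. A schematic version using SciPy's Wilcoxon-Mann-Whitney implementation is sketched below; the one-sided alternative, the distance construction, and the threshold are plausible assumptions rather than the paper's exact protocol.

```python
from scipy.stats import mannwhitneyu

def stood_like_test(test_dists, in_dists, alpha=0.05):
    """Flag a sample as OOD when its feature-space distances to the training
    set are stochastically larger than those of held-out in-distribution
    samples, per a one-sided Wilcoxon-Mann-Whitney test.

    test_dists: distances from one test sample to its training neighbors
    in_dists:   reference distances collected from in-distribution data
    """
    stat, pvalue = mannwhitneyu(test_dists, in_dists, alternative="greater")
    return pvalue < alpha   # True -> reject "same distribution" -> OOD
```

Because the test is rank-based, no parametric assumption is placed on the feature distribution, which is the point of the method.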
2504.02698 | Tianchi Lu | Shengrui XU and Tianchi Lu and Zikun Wang and Jixiu Zhai and Jingwan
Wang | SCMPPI: Supervised Contrastive Multimodal Framework for Predicting
Protein-Protein Interactions | 19 pages,11 figures,conference | null | null | null | cs.LG cs.AI q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein-Protein Interaction (PPI) prediction is a key task in uncovering
cellular functional networks and disease mechanisms. However, traditional
experimental methods are time-consuming and costly, and existing computational
models face challenges in cross-modal feature fusion, robustness, and
false-negative suppression. In this paper, we propose a novel supervised
contrastive multimodal framework, SCMPPI, for PPI prediction. By integrating
protein sequence features (AAC, DPC, CKSAAP-ESMC) with PPI network topology
information (Node2Vec graph embedding), and combining an improved supervised
contrastive learning strategy, SCMPPI significantly enhances PPI prediction
performance. For the PPI task, SCMPPI introduces a negative sample filtering
mechanism and modifies the contrastive loss function, effectively optimizing
multimodal features. Experiments on eight benchmark datasets, including yeast,
human, and H.pylori, show that SCMPPI outperforms existing state-of-the-art
methods (such as DF-PPI and TAGPPI) in key metrics such as accuracy (98.01%)
and AUC (99.62%), and demonstrates strong generalization in cross-species
prediction (AUC > 99% on multi-species datasets). Furthermore, SCMPPI has been
successfully applied to CD9 networks, the Wnt pathway, and cancer-specific
networks, providing a reliable tool for disease target discovery. This
framework also offers a new paradigm for multimodal biological information
fusion and contrastive learning in collaborative optimization for various
combined predictions.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 15:34:02 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"XU",
"Shengrui",
""
],
[
"Lu",
"Tianchi",
""
],
[
"Wang",
"Zikun",
""
],
[
"Zhai",
"Jixiu",
""
],
[
"Wang",
"Jingwan",
""
]
] | TITLE: SCMPPI: Supervised Contrastive Multimodal Framework for Predicting
Protein-Protein Interactions
ABSTRACT: Protein-Protein Interaction (PPI) prediction is a key task in uncovering
cellular functional networks and disease mechanisms. However, traditional
experimental methods are time-consuming and costly, and existing computational
models face challenges in cross-modal feature fusion, robustness, and
false-negative suppression. In this paper, we propose a novel supervised
contrastive multimodal framework, SCMPPI, for PPI prediction. By integrating
protein sequence features (AAC, DPC, CKSAAP-ESMC) with PPI network topology
information (Node2Vec graph embedding), and combining an improved supervised
contrastive learning strategy, SCMPPI significantly enhances PPI prediction
performance. For the PPI task, SCMPPI introduces a negative sample filtering
mechanism and modifies the contrastive loss function, effectively optimizing
multimodal features. Experiments on eight benchmark datasets, including yeast,
human, and H.pylori, show that SCMPPI outperforms existing state-of-the-art
methods (such as DF-PPI and TAGPPI) in key metrics such as accuracy (98.01%)
and AUC (99.62%), and demonstrates strong generalization in cross-species
prediction (AUC > 99% on multi-species datasets). Furthermore, SCMPPI has been
successfully applied to CD9 networks, the Wnt pathway, and cancer-specific
networks, providing a reliable tool for disease target discovery. This
framework also offers a new paradigm for multimodal biological information
fusion and contrastive learning in collaborative optimization for various
combined predictions.
|
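Two of the sequence features named above, AAC and DPC, are standard composition descriptors and can be computed directly; a minimal sketch follows. Handling of non-standard residues and normalization details are assumptions, and the CKSAAP-ESMC and Node2Vec branches are not shown.

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac(seq):
    """Amino Acid Composition: 20-dim frequency vector of residues."""
    n = max(len(seq), 1)
    return [seq.count(a) / n for a in AMINO_ACIDS]

def dpc(seq):
    """Dipeptide Composition: 400-dim frequency vector of residue pairs."""
    pairs = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]
    n = max(len(seq) - 1, 1)
    counts = {p: 0 for p in pairs}
    for i in range(len(seq) - 1):
        counts[seq[i:i + 2]] = counts.get(seq[i:i + 2], 0) + 1
    return [counts[p] / n for p in pairs]
```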
2504.02704 | Ilham Qasse | Ilham Qasse, Mohammad Hamdaqa, and Bj\"orn {\TH}\'or J\'onsson | EvoChain: A Framework for Tracking and Visualizing Smart Contract
Evolution | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Tracking the evolution of smart contracts is challenging due to their
immutable nature and complex upgrade mechanisms. We introduce EvoChain, a
comprehensive framework and dataset designed to track and visualize smart
contract evolution. Building upon data from our previous empirical study,
EvoChain models contract relationships using a Neo4j graph database and
provides an interactive web interface for exploration. The framework consists
of a data layer, an API layer, and a user interface layer. EvoChain allows
stakeholders to analyze contract histories, upgrade paths, and associated
vulnerabilities by leveraging these components. Our dataset encompasses
approximately 1.3 million upgradeable proxies and nearly 15,000 historical
versions, enhancing transparency and trust in blockchain ecosystems by
providing an accessible platform for understanding smart contract evolution.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 15:41:48 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Qasse",
"Ilham",
""
],
[
"Hamdaqa",
"Mohammad",
""
],
[
"Jónsson",
"Björn Þór",
""
]
] | TITLE: EvoChain: A Framework for Tracking and Visualizing Smart Contract
Evolution
ABSTRACT: Tracking the evolution of smart contracts is challenging due to their
immutable nature and complex upgrade mechanisms. We introduce EvoChain, a
comprehensive framework and dataset designed to track and visualize smart
contract evolution. Building upon data from our previous empirical study,
EvoChain models contract relationships using a Neo4j graph database and
provides an interactive web interface for exploration. The framework consists
of a data layer, an API layer, and a user interface layer. EvoChain allows
stakeholders to analyze contract histories, upgrade paths, and associated
vulnerabilities by leveraging these components. Our dataset encompasses
approximately 1.3 million upgradeable proxies and nearly 15,000 historical
versions, enhancing transparency and trust in blockchain ecosystems by
providing an accessible platform for understanding smart contract evolution.
|
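Since EvoChain models contract relationships in Neo4j, querying an upgrade path is a variable-length graph traversal. The sketch below uses the official Neo4j Python driver; the connection details and the schema (a `Contract` label, an `UPGRADED_TO` relationship, an `address` property) are illustrative assumptions, as the abstract does not specify EvoChain's schema.

```python
from neo4j import GraphDatabase

# Hypothetical connection and schema; adjust to the actual EvoChain database.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def upgrade_history(address):
    """Return the chains of historical versions for one upgradeable proxy."""
    query = (
        "MATCH path = (c:Contract {address: $address})-[:UPGRADED_TO*]->(v:Contract) "
        "RETURN [n IN nodes(path) | n.address] AS versions"
    )
    with driver.session() as session:
        return [record["versions"] for record in session.run(query, address=address)]
```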
2504.02708 | Nikhil Verma | Nikhil Verma, Manasa Bharadwaj | The Hidden Space of Safety: Understanding Preference-Tuned LLMs in
Multilingual context | 14 pages, 11 Figures, 2 Tables, currently under review at ACL 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Alignment tuning has enabled large language models to excel in reasoning,
instruction-following, and minimizing harmful generations. However, despite
their widespread deployment, these models exhibit a monolingual bias, raising
concerns about the effectiveness of alignment across languages. Current
alignment methods predominantly focus on English, leaving it unclear how
alignment mechanisms generalize to multilingual settings. To address this, we
conduct a systematic analysis of distributional shifts in the embedding space
of LLMs before and after alignment, uncovering its impact on model behavior
across diverse languages. We leverage the alignment-induced separation in
safety space as a quantitative tool to measure how alignment enforces safety
constraints. Our study evaluates seven LLMs using balanced toxicity datasets
and parallel text-detoxification benchmarks, revealing substantial disparities
in the latent representation space between high-resource and low-resource
languages. These findings underscore the need for language-specific fine-tuning
to ensure fair, reliable and robust multilingual alignment. Our insights
provide a foundation for developing truly safe multilingual LLMs, emphasizing
the urgency of addressing alignment gaps in underrepresented languages.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 15:46:46 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Verma",
"Nikhil",
""
],
[
"Bharadwaj",
"Manasa",
""
]
] | TITLE: The Hidden Space of Safety: Understanding Preference-Tuned LLMs in
Multilingual context
ABSTRACT: Alignment tuning has enabled large language models to excel in reasoning,
instruction-following, and minimizing harmful generations. However, despite
their widespread deployment, these models exhibit a monolingual bias, raising
concerns about the effectiveness of alignment across languages. Current
alignment methods predominantly focus on English, leaving it unclear how
alignment mechanisms generalize to multilingual settings. To address this, we
conduct a systematic analysis of distributional shifts in the embedding space
of LLMs before and after alignment, uncovering its impact on model behavior
across diverse languages. We leverage the alignment-induced separation in
safety space as a quantitative tool to measure how alignment enforces safety
constraints. Our study evaluates seven LLMs using balanced toxicity datasets
and parallel text-detoxification benchmarks, revealing substantial disparities
in the latent representation space between high-resource and low-resource
languages. These findings underscore the need for language-specific fine-tuning
to ensure fair, reliable and robust multilingual alignment. Our insights
provide a foundation for developing truly safe multilingual LLMs, emphasizing
the urgency of addressing alignment gaps in underrepresented languages.
|
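One plausible way to quantify the alignment-induced separation in "safety space" described above is a centroid-distance score over safe versus toxic prompt embeddings, normalized by within-class spread. This is a sketch of the general idea, not the paper's specific measure.

```python
import numpy as np

def safety_separation(safe_emb, toxic_emb):
    """Separation score: distance between the centroids of safe vs. toxic
    prompt embeddings, normalized by the average within-class spread.
    Larger values indicate stronger alignment-induced separation."""
    mu_s, mu_t = safe_emb.mean(axis=0), toxic_emb.mean(axis=0)
    between = np.linalg.norm(mu_s - mu_t)
    within = (np.linalg.norm(safe_emb - mu_s, axis=1).mean()
              + np.linalg.norm(toxic_emb - mu_t, axis=1).mean()) / 2
    return between / max(within, 1e-8)
```

Comparing this score for high-resource and low-resource languages, before and after alignment, is the kind of contrast the study draws.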
2504.02724 | Sammy Christen | Sammy Christen, David M\"uller, Agon Serifi, Ruben Grandia, Georg
Wiedebach, Michael A. Hopkins, Espen Knoop, Moritz B\"acher | Autonomous Human-Robot Interaction via Operator Imitation | null | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Teleoperated robotic characters can perform expressive interactions with
humans, relying on the operators' experience and social intuition. In this
work, we propose to create autonomous interactive robots, by training a model
to imitate operator data. Our model is trained on a dataset of human-robot
interactions, where an expert operator is asked to vary the interactions and
mood of the robot, while the operator commands as well as the pose of the human
and robot are recorded. Our approach learns to predict continuous operator
commands through a diffusion process and discrete commands through a
classifier, all unified within a single transformer architecture. We evaluate
the resulting model in simulation and with a user study on the real system. We
show that our method enables simple autonomous human-robot interactions that
are comparable to the expert-operator baseline, and that users can recognize
the different robot moods as generated by our model. Finally, we demonstrate a
zero-shot transfer of our model onto a different robotic platform with the same
operator interface.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 16:06:44 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Christen",
"Sammy",
""
],
[
"Müller",
"David",
""
],
[
"Serifi",
"Agon",
""
],
[
"Grandia",
"Ruben",
""
],
[
"Wiedebach",
"Georg",
""
],
[
"Hopkins",
"Michael A.",
""
],
[
"Knoop",
"Espen",
""
],
[
"Bächer",
"Moritz",
""
]
] | TITLE: Autonomous Human-Robot Interaction via Operator Imitation
ABSTRACT: Teleoperated robotic characters can perform expressive interactions with
humans, relying on the operators' experience and social intuition. In this
work, we propose to create autonomous interactive robots by training a model
to imitate operator data. Our model is trained on a dataset of human-robot
interactions, where an expert operator is asked to vary the interactions and
mood of the robot, while the operator commands as well as the pose of the human
and robot are recorded. Our approach learns to predict continuous operator
commands through a diffusion process and discrete commands through a
classifier, all unified within a single transformer architecture. We evaluate
the resulting model in simulation and with a user study on the real system. We
show that our method enables simple autonomous human-robot interactions that
are comparable to the expert-operator baseline, and that users can recognize
the different robot moods as generated by our model. Finally, we demonstrate a
zero-shot transfer of our model onto a different robotic platform with the same
operator interface.
|
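The unified architecture above, one transformer with a diffusion head for continuous operator commands and a classifier head for discrete ones, can be sketched schematically in PyTorch. All dimensions, the pooling, and the layer counts below are placeholders; the actual model and its conditioning are not specified at this level in the abstract.

```python
import torch
import torch.nn as nn

class OperatorPolicy(nn.Module):
    """Schematic two-head policy: a shared transformer encodes human and
    robot pose tokens; one head predicts the denoising target for continuous
    commands (diffusion), the other emits logits for discrete commands."""
    def __init__(self, d_model=256, n_cont=8, n_discrete=12):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.noise_head = nn.Linear(d_model, n_cont)      # diffusion epsilon
        self.cmd_head = nn.Linear(d_model, n_discrete)    # discrete logits

    def forward(self, tokens):                # tokens: (B, T, d_model)
        h = self.encoder(tokens)
        pooled = h.mean(dim=1)
        return self.noise_head(pooled), self.cmd_head(pooled)
```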
2504.02730 | Hui Zhang | Hui Zhang, Qinglin Zhao, Mengchu Zhou, Li Feng | HQViT: Hybrid Quantum Vision Transformer for Image Classification | 13 pages, 8 figures | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformer-based architectures have revolutionized the landscape of deep
learning. In the computer vision domain, the Vision Transformer demonstrates remarkable
performance on par with or even surpassing that of convolutional neural
networks. However, the quadratic computational complexity of its self-attention
mechanism poses challenges for classical computing, making model training with
high-dimensional input data, e.g., images, particularly expensive. To address
such limitations, we propose a Hybrid Quantum Vision Transformer (HQViT) that
leverages the principles of quantum computing to accelerate model training
while enhancing model performance. HQViT introduces whole-image processing with
amplitude encoding to better preserve global image information without
additional positional encoding. By leveraging quantum computation on the most
critical steps and selectively handling other components in a classical way, we
lower the cost of quantum resources for HQViT. The qubit requirement is
minimized to $O(\log_2 N)$ and the number of parameterized quantum gates is only
$O(\log_2 d)$, making it well-suited for Noisy Intermediate-Scale Quantum
devices. By offloading the computationally intensive attention coefficient
matrix calculation to the quantum framework, HQViT reduces the classical
computational load by $O(T^2d)$. Extensive experiments across various computer
vision datasets demonstrate that HQViT outperforms existing models, achieving a
maximum improvement of up to $10.9\%$ (on the MNIST 10-classification task)
over the state of the art. This work highlights the great potential to combine
quantum and classical computing to cope with complex image classification
tasks.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 16:13:34 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zhang",
"Hui",
""
],
[
"Zhao",
"Qinglin",
""
],
[
"Zhou",
"Mengchu",
""
],
[
"Feng",
"Li",
""
]
] | TITLE: HQViT: Hybrid Quantum Vision Transformer for Image Classification
ABSTRACT: Transformer-based architectures have revolutionized the landscape of deep
learning. In the computer vision domain, the Vision Transformer demonstrates remarkable
performance on par with or even surpassing that of convolutional neural
networks. However, the quadratic computational complexity of its self-attention
mechanism poses challenges for classical computing, making model training with
high-dimensional input data, e.g., images, particularly expensive. To address
such limitations, we propose a Hybrid Quantum Vision Transformer (HQViT) that
leverages the principles of quantum computing to accelerate model training
while enhancing model performance. HQViT introduces whole-image processing with
amplitude encoding to better preserve global image information without
additional positional encoding. By leveraging quantum computation on the most
critical steps and selectively handling other components in a classical way, we
lower the cost of quantum resources for HQViT. The qubit requirement is
minimized to $O(\log_2 N)$ and the number of parameterized quantum gates is only
$O(\log_2 d)$, making it well-suited for Noisy Intermediate-Scale Quantum
devices. By offloading the computationally intensive attention coefficient
matrix calculation to the quantum framework, HQViT reduces the classical
computational load by $O(T^2d)$. Extensive experiments across various computer
vision datasets demonstrate that HQViT outperforms existing models, achieving a
maximum improvement of up to $10.9\%$ (on the MNIST 10-classification task)
over the state of the art. This work highlights the great potential to combine
quantum and classical computing to cope with complex image classification
tasks.
|
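The $O(\log_2 N)$ qubit claim follows directly from amplitude encoding: an $N$-pixel image, padded to the next power of two and L2-normalized, is the state vector of $\lceil \log_2 N \rceil$ qubits. A classical sketch of the encoding step (state preparation on hardware is a separate matter) is shown below.

```python
import math
import numpy as np

def amplitude_encode(image):
    """Amplitude encoding of a flattened image: pad to the next power of
    two and L2-normalize; the result is the state vector of ceil(log2 N)
    qubits, matching the O(log2 N) qubit requirement in the abstract."""
    x = np.asarray(image, dtype=float).ravel()
    n_qubits = math.ceil(math.log2(len(x)))
    padded = np.zeros(2 ** n_qubits)
    padded[:len(x)] = x
    state = padded / np.linalg.norm(padded)
    return state, n_qubits

state, q = amplitude_encode(np.random.rand(28, 28))   # MNIST-sized input
print(q)   # 10 qubits suffice for 784 pixels
```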
2504.02733 | Lisa Alazraki | Aryan Agrawal, Lisa Alazraki, Shahin Honarvar, Marek Rei | Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study | Building Trust Workshop, ICLR 2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) are highly vulnerable to input perturbations, as
even a small prompt change may result in a substantially different output.
Existing methods to enhance LLM robustness are primarily focused on perturbed
data samples, whereas improving resiliency to perturbations of task-level
instructions has remained relatively underexplored. In this work, we focus on
character- and word-level edits of task-specific instructions, which
substantially degrade downstream performance. We experiment with a variety of
techniques to enhance the robustness of LLMs, including self-denoising and
representation alignment, testing different models (Llama 3 and Flan-T5),
datasets (CoLA, QNLI, SST-2) and instructions (both task-oriented and
role-oriented). We find that, on average, self-denoising -- whether performed
by a frozen LLM or a fine-tuned model -- achieves substantially higher
performance gains than alternative strategies, including more complex baselines
such as ensembling and supervised methods.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 16:17:56 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Agrawal",
"Aryan",
""
],
[
"Alazraki",
"Lisa",
""
],
[
"Honarvar",
"Shahin",
""
],
[
"Rei",
"Marek",
""
]
] | TITLE: Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study
ABSTRACT: Large Language Models (LLMs) are highly vulnerable to input perturbations, as
even a small prompt change may result in a substantially different output.
Existing methods to enhance LLM robustness are primarily focused on perturbed
data samples, whereas improving resiliency to perturbations of task-level
instructions has remained relatively underexplored. In this work, we focus on
character- and word-level edits of task-specific instructions, which
substantially degrade downstream performance. We experiment with a variety of
techniques to enhance the robustness of LLMs, including self-denoising and
representation alignment, testing different models (Llama 3 and Flan-T5),
datasets (CoLA, QNLI, SST-2) and instructions (both task-oriented and
role-oriented). We find that, on average, self-denoising -- whether performed
by a frozen LLM or a fine-tuned model -- achieves substantially higher
performance gains than alternative strategies, including more complex baselines
such as ensembling and supervised methods.
|
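Self-denoising, the best-performing strategy above, amounts to asking an LLM to restore a corrupted task instruction before executing it. The prompt wording below is illustrative; the paper's exact templates may differ.

```python
def self_denoise_prompt(perturbed_instruction):
    """Build a denoising request for a frozen LLM: ask it to restore a
    corrupted task instruction before the instruction is executed."""
    return (
        "The following task instruction contains character- or word-level "
        "typos. Rewrite it as the clean instruction you believe was "
        f"intended, and nothing else:\n\n{perturbed_instruction}"
    )

# The cleaned instruction returned by the model would then replace the
# perturbed one in the downstream classification prompt.
print(self_denoise_prompt("Clasify the sentimnt of the fllowing sentence."))
```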
2504.02735 | Manh Pham Hung | Manh Pham Hung, Matthew Yiwen Ho, Yiming Zhang, Dimitris Spathis,
Aaqib Saeed, and Dong Ma | Pushing the Limit of PPG Sensing in Sedentary Conditions by Addressing
Poor Skin-sensor Contact | null | null | null | null | cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Photoplethysmography (PPG) is a widely used non-invasive technique for
monitoring cardiovascular health and various physiological parameters on
consumer and medical devices. While motion artifacts are well-known challenges
in dynamic settings, suboptimal skin-sensor contact in sedentary conditions - a
critical issue often overlooked in existing literature - can distort PPG signal
morphology, leading to the loss or shift of essential waveform features and
therefore degrading sensing performance. In this work, we propose CP-PPG, a
novel approach that transforms Contact Pressure-distorted PPG signals into ones
with the ideal morphology. CP-PPG incorporates a novel data collection
approach, a well-crafted signal processing pipeline, and an advanced deep
adversarial model trained with a custom PPG-aware loss function. We validated
CP-PPG through comprehensive evaluations, including 1) morphology
transformation performance on our self-collected dataset, 2) downstream
physiological monitoring performance on public datasets, and 3) in-the-wild
performance. Extensive experiments demonstrate substantial and consistent
improvements in signal fidelity (Mean Absolute Error: 0.09, 40% improvement
over the original signal) as well as downstream performance across all
evaluations in Heart Rate (HR), Heart Rate Variability (HRV), Respiration Rate
(RR), and Blood Pressure (BP) estimation (on average, 21% improvement in HR;
41-46% in HRV; 6% in RR; and 4-5% in BP). These findings highlight the critical
importance of addressing skin-sensor contact issues for accurate and dependable
PPG-based physiological monitoring. Furthermore, CP-PPG can serve as a generic,
plug-in API to enhance PPG signal quality.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 16:22:15 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Hung",
"Manh Pham",
""
],
[
"Ho",
"Matthew Yiwen",
""
],
[
"Zhang",
"Yiming",
""
],
[
"Spathis",
"Dimitris",
""
],
[
"Saeed",
"Aaqib",
""
],
[
"Ma",
"Dong",
""
]
] | TITLE: Pushing the Limit of PPG Sensing in Sedentary Conditions by Addressing
Poor Skin-sensor Contact
ABSTRACT: Photoplethysmography (PPG) is a widely used non-invasive technique for
monitoring cardiovascular health and various physiological parameters on
consumer and medical devices. While motion artifacts are well-known challenges
in dynamic settings, suboptimal skin-sensor contact in sedentary conditions - a
critical issue often overlooked in existing literature - can distort PPG signal
morphology, leading to the loss or shift of essential waveform features and
therefore degrading sensing performance. In this work, we propose CP-PPG, a
novel approach that transforms Contact Pressure-distorted PPG signals into ones
with the ideal morphology. CP-PPG incorporates a novel data collection
approach, a well-crafted signal processing pipeline, and an advanced deep
adversarial model trained with a custom PPG-aware loss function. We validated
CP-PPG through comprehensive evaluations, including 1) morphology
transformation performance on our self-collected dataset, 2) downstream
physiological monitoring performance on public datasets, and 3) in-the-wild
performance. Extensive experiments demonstrate substantial and consistent
improvements in signal fidelity (Mean Absolute Error: 0.09, 40% improvement
over the original signal) as well as downstream performance across all
evaluations in Heart Rate (HR), Heart Rate Variability (HRV), Respiration Rate
(RR), and Blood Pressure (BP) estimation (on average, 21% improvement in HR;
41-46% in HRV; 6% in RR; and 4-5% in BP). These findings highlight the critical
importance of addressing skin-sensor contact issues for accurate and dependable
PPG-based physiological monitoring. Furthermore, CP-PPG can serve as a generic,
plug-in API to enhance PPG signal quality.
|
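The abstract mentions a custom PPG-aware loss without defining it; one plausible morphology-sensitive form combines a waveform MAE with an MAE on the first difference, so peak timing and notch shape are penalized as well as amplitude. The sketch below is that assumption, not the paper's loss.

```python
import torch.nn.functional as F

def ppg_aware_loss(pred, target, lambda_deriv=0.5):
    """Hypothetical morphology-sensitive reconstruction loss: MAE on the
    waveform plus MAE on its first difference, so systolic peaks and notch
    timing contribute to the penalty. pred, target: (B, T) tensors."""
    amp = F.l1_loss(pred, target)
    deriv = F.l1_loss(pred[:, 1:] - pred[:, :-1],
                      target[:, 1:] - target[:, :-1])
    return amp + lambda_deriv * deriv
```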
2504.02747 | Pradyumn Goyal | Pradyumn Goyal, Dmitry Petrov, Sheldon Andrews, Yizhak Ben-Shabat,
Hsueh-Ti Derek Liu, Evangelos Kalogerakis | GEOPARD: Geometric Pretraining for Articulation Prediction in 3D Shapes | null | null | null | null | cs.GR | http://creativecommons.org/licenses/by/4.0/ | We present GEOPARD, a transformer-based architecture for predicting
articulation from a single static snapshot of a 3D shape. The key idea of our
method is a pretraining strategy that allows our transformer to learn plausible
candidate articulations for 3D shapes based on a geometric-driven search
without manual articulation annotation. The search automatically discovers
physically valid part motions that do not cause detachments or collisions with
other shape parts. Our experiments indicate that this geometric pretraining
strategy, along with carefully designed choices in our transformer
architecture, yields state-of-the-art results in articulation inference in the
PartNet-Mobility dataset.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 16:35:17 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Goyal",
"Pradyumn",
""
],
[
"Petrov",
"Dmitry",
""
],
[
"Andrews",
"Sheldon",
""
],
[
"Ben-Shabat",
"Yizhak",
""
],
[
"Liu",
"Hsueh-Ti Derek",
""
],
[
"Kalogerakis",
"Evangelos",
""
]
] | TITLE: GEOPARD: Geometric Pretraining for Articulation Prediction in 3D Shapes
ABSTRACT: We present GEOPARD, a transformer-based architecture for predicting
articulation from a single static snapshot of a 3D shape. The key idea of our
method is a pretraining strategy that allows our transformer to learn plausible
candidate articulations for 3D shapes based on a geometry-driven search
without manual articulation annotation. The search automatically discovers
physically valid part motions that do not cause detachments or collisions with
other shape parts. Our experiments indicate that this geometric pretraining
strategy, along with carefully designed choices in our transformer
architecture, yields state-of-the-art results in articulation inference in the
PartNet-Mobility dataset.
|
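The geometric search above must reject part motions that cause detachment or collision. A crude point-cloud version of such a validity check is sketched below, using Rodrigues' rotation and nearest-neighbor distances; GEOPARD's actual tests are presumably more careful, so treat the thresholds and the distance-based collision proxy as assumptions.

```python
import numpy as np

def is_valid_rotation(part_pts, rest_pts, axis_point, axis_dir, angle, tol=0.02):
    """Schematic validity check for a candidate hinge motion: rotate a part's
    point cloud about an axis and require (a) no interpenetration with the
    rest of the shape and (b) continued proximity to it (no detachment)."""
    d = axis_dir / np.linalg.norm(axis_dir)
    # Rodrigues' rotation of the part's points about the axis through axis_point
    p = part_pts - axis_point
    rot = (p * np.cos(angle)
           + np.cross(d, p) * np.sin(angle)
           + np.outer(p @ d, d) * (1 - np.cos(angle)))
    moved = rot + axis_point
    # nearest-neighbor distances from the moved part to the static remainder
    dists = np.min(np.linalg.norm(moved[:, None, :] - rest_pts[None, :, :], axis=-1), axis=1)
    no_collision = dists.min() > tol          # not sunk into other parts
    attached = dists.min() < 10 * tol         # still near the shape
    return no_collision and attached
```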
2504.02775 | Yoon Gyo Jung | Yoon Gyo Jung, Jaewoo Park, Jaeho Yoon, Kuan-Chuan Peng, Wonchul Kim,
Andrew Beng Jin Teoh, Octavia Camps | TailedCore: Few-Shot Sampling for Unsupervised Long-Tail Noisy Anomaly
Detection | Accepted to CVPR2025 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We aim to solve unsupervised anomaly detection in a practical challenging
environment where the normal dataset is both contaminated with defective
regions and its product class distribution is tailed but unknown. We observe
that existing models suffer from a tail-versus-noise trade-off: if a model
is robust against pixel noise, then its performance deteriorates on tail class
samples, and vice versa. To mitigate the issue, we handle the tail class and
noise samples independently. To this end, we propose TailSampler, a novel class
size predictor that estimates the class cardinality of samples based on a
symmetric assumption on the class-wise distribution of embedding similarities.
TailSampler can be utilized to sample the tail class samples exclusively,
allowing to handle them separately. Based on these facets, we build a
memory-based anomaly detection model TailedCore, whose memory both well
captures tail class information and is noise-robust. We extensively validate
the effectiveness of TailedCore on the unsupervised long-tail noisy anomaly
detection setting, and show that TailedCore outperforms the state-of-the-art in
most settings.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 17:14:57 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Jung",
"Yoon Gyo",
""
],
[
"Park",
"Jaewoo",
""
],
[
"Yoon",
"Jaeho",
""
],
[
"Peng",
"Kuan-Chuan",
""
],
[
"Kim",
"Wonchul",
""
],
[
"Teoh",
"Andrew Beng Jin",
""
],
[
"Camps",
"Octavia",
""
]
] | TITLE: TailedCore: Few-Shot Sampling for Unsupervised Long-Tail Noisy Anomaly
Detection
ABSTRACT: We aim to solve unsupervised anomaly detection in a practical challenging
environment where the normal dataset is both contaminated with defective
regions and its product class distribution is tailed but unknown. We observe
that existing models suffer from a tail-versus-noise trade-off: if a model
is robust against pixel noise, then its performance deteriorates on tail class
samples, and vice versa. To mitigate the issue, we handle the tail class and
noise samples independently. To this end, we propose TailSampler, a novel class
size predictor that estimates the class cardinality of samples based on a
symmetric assumption on the class-wise distribution of embedding similarities.
TailSampler can be utilized to sample the tail class samples exclusively,
allowing to handle them separately. Based on these facets, we build a
memory-based anomaly detection model TailedCore, whose memory both well
captures tail class information and is noise-robust. We extensively validate
the effectiveness of TailedCore on the unsupervised long-tail noisy anomaly
detection setting, and show that TailedCore outperforms the state-of-the-art in
most settings.
|
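The sampling idea behind TailSampler, estimating each sample's class cardinality from embedding similarities and isolating the smallest classes, can be conveyed with a rough neighbor-counting proxy. TailSampler's symmetry-based estimator is more involved; the threshold and counting rule below are placeholders.

```python
import numpy as np

def estimate_class_sizes(emb, sim_threshold=0.7):
    """Rough stand-in for TailSampler: count, per sample, how many other
    samples look like it (cosine similarity above a threshold) and treat
    that count as a class-cardinality proxy."""
    x = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = x @ x.T
    return (sim > sim_threshold).sum(axis=1)   # includes self

def tail_indices(emb, max_size=20):
    """Indices of samples whose estimated class size marks them as tail,
    so they can be routed past the noise-robust branch and handled separately."""
    sizes = estimate_class_sizes(emb)
    return np.where(sizes <= max_size)[0]
```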
2504.02778 | Vincent Gbouna Zakka Mr | Vincent Gbouna Zakka, Luis J. Manso, Zhuangzhuang Dai | Multi-Head Adaptive Graph Convolution Network for Sparse Point
Cloud-Based Human Activity Recognition | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Human activity recognition is increasingly vital for supporting independent
living, particularly for the elderly and those in need of assistance. Domestic
service robots with monitoring capabilities can enhance safety and provide
essential support. Although image-based methods have advanced considerably in
the past decade, their adoption remains limited by concerns over privacy and
sensitivity to low-light or dark conditions. As an alternative, millimetre-wave
(mmWave) radar can produce point cloud data which is privacy-preserving.
However, processing the sparse and noisy point clouds remains a long-standing
challenge. While graph-based methods and attention mechanisms show promise,
they predominantly rely on "fixed" kernels, that is, kernels applied uniformly
across all neighbourhoods, highlighting the need for adaptive approaches that
can dynamically adjust their kernels to the specific geometry of each local
neighbourhood in point cloud data. To overcome this limitation, we introduce an
adaptive approach within the graph convolutional framework. Instead of a single
shared weight function, our Multi-Head Adaptive Kernel (MAK) module generates
multiple dynamic kernels, each capturing different aspects of the local feature
space. By progressively refining local features while maintaining global
spatial context, our method enables convolution kernels to adapt to varying
local features. Experimental results on benchmark datasets confirm the
effectiveness of our approach, achieving state-of-the-art performance in human
activity recognition. Our source code is made publicly available at:
https://github.com/Gbouna/MAK-GCN
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 17:19:20 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zakka",
"Vincent Gbouna",
""
],
[
"Manso",
"Luis J.",
""
],
[
"Dai",
"Zhuangzhuang",
""
]
] | TITLE: Multi-Head Adaptive Graph Convolution Network for Sparse Point
Cloud-Based Human Activity Recognition
ABSTRACT: Human activity recognition is increasingly vital for supporting independent
living, particularly for the elderly and those in need of assistance. Domestic
service robots with monitoring capabilities can enhance safety and provide
essential support. Although image-based methods have advanced considerably in
the past decade, their adoption remains limited by concerns over privacy and
sensitivity to low-light or dark conditions. As an alternative, millimetre-wave
(mmWave) radar can produce point cloud data which is privacy-preserving.
However, processing the sparse and noisy point clouds remains a long-standing
challenge. While graph-based methods and attention mechanisms show promise,
they predominantly rely on "fixed" kernels, that is, kernels applied uniformly
across all neighbourhoods, highlighting the need for adaptive approaches that
can dynamically adjust their kernels to the specific geometry of each local
neighbourhood in point cloud data. To overcome this limitation, we introduce an
adaptive approach within the graph convolutional framework. Instead of a single
shared weight function, our Multi-Head Adaptive Kernel (MAK) module generates
multiple dynamic kernels, each capturing different aspects of the local feature
space. By progressively refining local features while maintaining global
spatial context, our method enables convolution kernels to adapt to varying
local features. Experimental results on benchmark datasets confirm the
effectiveness of our approach, achieving state-of-the-art performance in human
activity recognition. Our source code is made publicly available at:
https://github.com/Gbouna/MAK-GCN
|
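A schematic PyTorch reading of the MAK idea follows: rather than one shared weight function, a small network generates several dynamic kernels per neighborhood from each neighbor's relative position, and the heads are aggregated. The shapes and the linear kernel generator are placeholders, not the paper's exact module.

```python
import torch
import torch.nn as nn

class MultiHeadAdaptiveKernel(nn.Module):
    """Generate multiple dynamic kernels per point-cloud neighborhood from
    relative neighbor geometry, then apply and aggregate them."""
    def __init__(self, in_dim, out_dim, heads=4):
        super().__init__()
        self.heads = heads
        # maps relative neighbor offsets (3D) to per-head kernel weights
        self.kernel_gen = nn.Linear(3, heads * in_dim * out_dim)
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, feats, rel_pos):
        # feats: (N, K, in_dim) neighbor features; rel_pos: (N, K, 3)
        N, K, _ = feats.shape
        w = self.kernel_gen(rel_pos).view(N, K, self.heads, self.in_dim, self.out_dim)
        out = torch.einsum("nki,nkhio->nkho", feats, w)   # apply dynamic kernels
        return out.mean(dim=1).reshape(N, self.heads * self.out_dim)
```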
2504.02782 | Zhiyuan Yan | Zhiyuan Yan, Junyan Ye, Weijia Li, Zilong Huang, Shenghai Yuan,
Xiangyang He, Kaiqing Lin, Jun He, Conghui He, Li Yuan | GPT-ImgEval: A Comprehensive Benchmark for Diagnosing GPT4o in Image
Generation | null | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | The recent breakthroughs in OpenAI's GPT4o model have demonstrated
surprisingly good capabilities in image generation and editing, resulting in
significant excitement in the community. This technical report presents the
first-look evaluation benchmark (named GPT-ImgEval), quantitatively and
qualitatively diagnosing GPT-4o's performance across three critical dimensions:
(1) generation quality, (2) editing proficiency, and (3) world
knowledge-informed semantic synthesis. Across all three tasks, GPT-4o
demonstrates strong performance, significantly surpassing existing methods in
both image generation control and output quality, while also showcasing
exceptional knowledge reasoning capabilities. Furthermore, based on the
GPT-4o's generated data, we propose a classification-model-based approach to
investigate the underlying architecture of GPT-4o, where our empirical results
suggest the model consists of an auto-regressive (AR) backbone combined with a
diffusion-based head for image decoding, rather than the VAR-like
architectures. We also provide a complete speculation on GPT-4o's overall
architecture. In addition, we conduct a series of analyses to identify and
visualize GPT-4o's specific limitations and the synthetic artifacts commonly
observed in its image generation. We also present a comparative study of
multi-round image editing between GPT-4o and Gemini 2.0 Flash, and discuss the
safety implications of GPT-4o's outputs, particularly their detectability by
existing image forensic models. We hope that our work can offer valuable
insight and provide a reliable benchmark to guide future research, foster
reproducibility, and accelerate innovation in the field of image generation and
beyond. The codes and datasets used for evaluating GPT-4o can be found at
https://github.com/PicoTrex/GPT-ImgEval.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 17:23:16 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Yan",
"Zhiyuan",
""
],
[
"Ye",
"Junyan",
""
],
[
"Li",
"Weijia",
""
],
[
"Huang",
"Zilong",
""
],
[
"Yuan",
"Shenghai",
""
],
[
"He",
"Xiangyang",
""
],
[
"Lin",
"Kaiqing",
""
],
[
"He",
"Jun",
""
],
[
"He",
"Conghui",
""
],
[
"Yuan",
"Li",
""
]
] | TITLE: GPT-ImgEval: A Comprehensive Benchmark for Diagnosing GPT4o in Image
Generation
ABSTRACT: The recent breakthroughs in OpenAI's GPT4o model have demonstrated
surprisingly good capabilities in image generation and editing, resulting in
significant excitement in the community. This technical report presents the
first-look evaluation benchmark (named GPT-ImgEval), quantitatively and
qualitatively diagnosing GPT-4o's performance across three critical dimensions:
(1) generation quality, (2) editing proficiency, and (3) world
knowledge-informed semantic synthesis. Across all three tasks, GPT-4o
demonstrates strong performance, significantly surpassing existing methods in
both image generation control and output quality, while also showcasing
exceptional knowledge reasoning capabilities. Furthermore, based on the
GPT-4o's generated data, we propose a classification-model-based approach to
investigate the underlying architecture of GPT-4o, where our empirical results
suggest that the model consists of an auto-regressive (AR) component combined
with a diffusion-based head for image decoding, rather than a VAR-like
architecture. We also provide a detailed speculation on GPT-4o's overall
architecture. In addition, we conduct a series of analyses to identify and
visualize GPT-4o's specific limitations and the synthetic artifacts commonly
observed in its image generation. We also present a comparative study of
multi-round image editing between GPT-4o and Gemini 2.0 Flash, and discuss the
safety implications of GPT-4o's outputs, particularly their detectability by
existing image forensic models. We hope that our work can offer valuable
insight and provide a reliable benchmark to guide future research, foster
reproducibility, and accelerate innovation in the field of image generation and
beyond. The codes and datasets used for evaluating GPT-4o can be found at
https://github.com/PicoTrex/GPT-ImgEval.
|
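The classification-model-based probe named in the GPT-ImgEval abstract is only described at a high level. One plausible reading, sketched below, is to train a binary classifier on outputs of generators whose decoder family is known (AR vs. diffusion) and tally its votes on GPT-4o images; the folder layout, ResNet-18 backbone, and hyperparameters are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical layout: train/ar/*.png and train/diffusion/*.png hold images
# from generators with known decoders; gpt4o/unknown/*.png holds GPT-4o
# outputs to be probed.
tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("train", transform=tf)
probe_set = datasets.ImageFolder("gpt4o", transform=tf)

model = models.resnet18(weights=None, num_classes=2)   # 0 = ar, 1 = diffusion
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for x, y in DataLoader(train_set, batch_size=32, shuffle=True):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# A high diffusion-class vote rate on GPT-4o images is the kind of evidence
# the abstract cites for an AR backbone with a diffusion decoding head.
model.eval()
votes = []
with torch.no_grad():
    for x, _ in DataLoader(probe_set, batch_size=32):
        votes.append(model(x).argmax(dim=1))
print("diffusion vote rate:", torch.cat(votes).float().mean().item())
```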
2504.02792 | Chuning Zhu | Chuning Zhu, Raymond Yu, Siyuan Feng, Benjamin Burchfiel, Paarth Shah,
and Abhishek Gupta | Unified World Models: Coupling Video and Action Diffusion for
Pretraining on Large Robotic Datasets | null | null | null | null | cs.RO cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Imitation learning has emerged as a promising approach towards building
generalist robots. However, scaling imitation learning for large robot
foundation models remains challenging due to its reliance on high-quality
expert demonstrations. Meanwhile, large amounts of video data depicting a wide
range of environments and diverse behaviors are readily available. This data
provides a rich source of information about real-world dynamics and
agent-environment interactions. Leveraging this data directly for imitation
learning, however, has proven difficult due to the lack of action annotation
required for most contemporary methods. In this work, we present Unified World
Models (UWM), a framework that allows for leveraging both video and action data
for policy learning. Specifically, a UWM integrates an action diffusion process
and a video diffusion process within a unified transformer architecture, where
independent diffusion timesteps govern each modality. We show that by simply
controlling each diffusion timestep, UWM can flexibly represent a policy, a
forward dynamics model, an inverse dynamics model, and a video generator. Through simulated
and real-world experiments, we show that: (1) UWM enables effective pretraining
on large-scale multitask robot datasets with both dynamics and action
predictions, resulting in more generalizable and robust policies than imitation
learning, (2) UWM naturally facilitates learning from action-free video data
through independent control of modality-specific diffusion timesteps, further
improving the performance of finetuned policies. Our results suggest that UWM
offers a promising step toward harnessing large, heterogeneous datasets for
scalable robot learning, and provides a simple unification between the often
disparate paradigms of imitation learning and world modeling. Videos and code
are available at https://weirdlabuw.github.io/uwm/.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 17:38:59 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zhu",
"Chuning",
""
],
[
"Yu",
"Raymond",
""
],
[
"Feng",
"Siyuan",
""
],
[
"Burchfiel",
"Benjamin",
""
],
[
"Shah",
"Paarth",
""
],
[
"Gupta",
"Abhishek",
""
]
] | TITLE: Unified World Models: Coupling Video and Action Diffusion for
Pretraining on Large Robotic Datasets
ABSTRACT: Imitation learning has emerged as a promising approach towards building
generalist robots. However, scaling imitation learning for large robot
foundation models remains challenging due to its reliance on high-quality
expert demonstrations. Meanwhile, large amounts of video data depicting a wide
range of environments and diverse behaviors are readily available. This data
provides a rich source of information about real-world dynamics and
agent-environment interactions. Leveraging this data directly for imitation
learning, however, has proven difficult due to the lack of action annotation
required for most contemporary methods. In this work, we present Unified World
Models (UWM), a framework that allows for leveraging both video and action data
for policy learning. Specifically, a UWM integrates an action diffusion process
and a video diffusion process within a unified transformer architecture, where
independent diffusion timesteps govern each modality. We show that by simply
controlling each diffusion timestep, UWM can flexibly represent a policy, a
forward dynamics model, an inverse dynamics model, and a video generator. Through simulated
and real-world experiments, we show that: (1) UWM enables effective pretraining
on large-scale multitask robot datasets with both dynamics and action
predictions, resulting in more generalizable and robust policies than imitation
learning, (2) UWM naturally facilitates learning from action-free video data
through independent control of modality-specific diffusion timesteps, further
improving the performance of finetuned policies. Our results suggest that UWM
offers a promising step toward harnessing large, heterogeneous datasets for
scalable robot learning, and provides a simple unification between the often
disparate paradigms of imitation learning and world modeling. Videos and code
are available at https://weirdlabuw.github.io/uwm/.
|
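UWM's key mechanism is one diffusion timestep per modality, and the choice of timesteps alone selects which of the four behaviors the shared transformer expresses. A schematic sketch of that selection logic follows; the model interface and horizon are assumptions, not the paper's code.

```python
import torch

T = 1000  # diffusion horizon; an assumption, not UWM's actual schedule

def uwm_call(model, video, action, t_video, t_action):
    """One denoising call of a hypothetical unified world model: the
    transformer sees noisy video tokens, noisy action tokens, and a separate
    diffusion timestep per modality (this interface is our assumption)."""
    return model(video, action, t_video=t_video, t_action=t_action)

# The four behaviors from the abstract fall out of the timestep choice:
#   policy:           clean video (t=0),  fully noised actions (t=T)
#   forward dynamics: clean actions,      noised future video
#   inverse dynamics: clean video,        noised actions
#   video generation: both modalities noised
def as_policy(model, obs_video, action_noise):
    t0 = torch.zeros(1, dtype=torch.long)
    tT = torch.full((1,), T, dtype=torch.long)
    return uwm_call(model, obs_video, action_noise, t_video=t0, t_action=tT)
```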
2504.02797 | Agon Serifi | Prashanth Chandran, Agon Serifi, Markus Gross, Moritz B\"acher | Spline-based Transformers | null | European Conference on Computer Vision (ECCV 2024) | 10.1007/978-3-031-73016-0_1 | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Spline-based Transformers, a novel class of Transformer models
that eliminate the need for positional encoding. Inspired by workflows using
splines in computer animation, our Spline-based Transformers embed an input
sequence of elements as a smooth trajectory in latent space. Overcoming
drawbacks of positional encoding such as sequence length extrapolation,
Spline-based Transformers also provide a novel way for users to interact with
transformer latent spaces by directly manipulating the latent control points to
create new latent trajectories and sequences. We demonstrate the superior
performance of our approach in comparison to conventional positional encoding
on a variety of datasets, ranging from synthetic 2D to large-scale real-world
datasets of images, 3D shapes, and animations.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 17:42:07 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Chandran",
"Prashanth",
""
],
[
"Serifi",
"Agon",
""
],
[
"Gross",
"Markus",
""
],
[
"Bächer",
"Moritz",
""
]
] | TITLE: Spline-based Transformers
ABSTRACT: We introduce Spline-based Transformers, a novel class of Transformer models
that eliminate the need for positional encoding. Inspired by workflows using
splines in computer animation, our Spline-based Transformers embed an input
sequence of elements as a smooth trajectory in latent space. Overcoming
drawbacks of positional encoding such as sequence length extrapolation,
Spline-based Transformers also provide a novel way for users to interact with
transformer latent spaces by directly manipulating the latent control points to
create new latent trajectories and sequences. We demonstrate the superior
performance of our approach in comparison to conventional positional encoding
on a variety of datasets, ranging from synthetic 2D to large-scale real-world
datasets of images, 3D shapes, and animations.
|
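The Spline-based Transformer abstract replaces positional encoding with a smooth latent trajectory defined by control points. A minimal sketch of that idea, using a cubic Bezier curve evaluated at normalized token positions, is below; the curve family, number of control points, and shapes are assumptions rather than the paper's exact construction.

```python
import torch
import torch.nn as nn

class SplineLatent(nn.Module):
    """Sketch: embed a length-L sequence as a smooth latent trajectory.

    Learned control points define a cubic Bezier curve in latent space; each
    token is placed at its normalized position on the curve, replacing
    additive positional encodings.
    """

    def __init__(self, dim, n_ctrl=4):
        super().__init__()
        self.ctrl = nn.Parameter(torch.randn(n_ctrl, dim) * 0.02)

    def forward(self, seq_len):
        t = torch.linspace(0, 1, seq_len).unsqueeze(1)       # (L, 1)
        p0, p1, p2, p3 = self.ctrl                           # cubic Bezier
        curve = ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                 + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)  # (L, dim)
        return curve

# Tokens ride on the trajectory instead of receiving sinusoidal encodings:
#   x = token_embeddings + SplineLatent(dim)(L)
# and editing the control points directly yields new latent trajectories.
```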
2504.02799 | Anita Rau | Anita Rau, Mark Endo, Josiah Aklilu, Jaewoo Heo, Khaled Saab, Alberto
Paderno, Jeffrey Jopling, F. Christopher Holsinger, Serena Yeung-Levy | Systematic Evaluation of Large Vision-Language Models for Surgical
Artificial Intelligence | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Vision-Language Models offer a new paradigm for AI-driven image
understanding, enabling models to perform tasks without task-specific training.
This flexibility holds particular promise across medicine, where
expert-annotated data is scarce. Yet, VLMs' practical utility in
intervention-focused domains--especially surgery, where decision-making is
subjective and clinical scenarios are variable--remains uncertain. Here, we
present a comprehensive analysis of 11 state-of-the-art VLMs across 17 key
visual understanding tasks in surgical AI--from anatomy recognition to skill
assessment--using 13 datasets spanning laparoscopic, robotic, and open
procedures. In our experiments, VLMs demonstrate promising generalizability, at
times outperforming supervised models when deployed outside their training
setting. In-context learning, incorporating examples during testing, boosted
performance up to three-fold, suggesting adaptability as a key strength. Still,
tasks requiring spatial or temporal reasoning remained difficult. Beyond
surgery, our findings offer insights into VLMs' potential for tackling complex
and dynamic scenarios in clinical and broader real-world applications.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 17:42:56 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Rau",
"Anita",
""
],
[
"Endo",
"Mark",
""
],
[
"Aklilu",
"Josiah",
""
],
[
"Heo",
"Jaewoo",
""
],
[
"Saab",
"Khaled",
""
],
[
"Paderno",
"Alberto",
""
],
[
"Jopling",
"Jeffrey",
""
],
[
"Holsinger",
"F. Christopher",
""
],
[
"Yeung-Levy",
"Serena",
""
]
] | TITLE: Systematic Evaluation of Large Vision-Language Models for Surgical
Artificial Intelligence
ABSTRACT: Large Vision-Language Models offer a new paradigm for AI-driven image
understanding, enabling models to perform tasks without task-specific training.
This flexibility holds particular promise across medicine, where
expert-annotated data is scarce. Yet, VLMs' practical utility in
intervention-focused domains--especially surgery, where decision-making is
subjective and clinical scenarios are variable--remains uncertain. Here, we
present a comprehensive analysis of 11 state-of-the-art VLMs across 17 key
visual understanding tasks in surgical AI--from anatomy recognition to skill
assessment--using 13 datasets spanning laparoscopic, robotic, and open
procedures. In our experiments, VLMs demonstrate promising generalizability, at
times outperforming supervised models when deployed outside their training
setting. In-context learning, incorporating examples during testing, boosted
performance up to three-fold, suggesting adaptability as a key strength. Still,
tasks requiring spatial or temporal reasoning remained difficult. Beyond
surgery, our findings offer insights into VLMs' potential for tackling complex
and dynamic scenarios in clinical and broader real-world applications.
|
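The surgical-VLM study above reports up to three-fold gains from in-context examples at test time, but the prompt format is not specified in the abstract. A generic sketch of assembling such a few-shot multimodal prompt follows, using the common OpenAI-style chat schema purely as an assumption.

```python
import base64

def encode(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

def few_shot_messages(examples, query_image, question):
    """Assemble a few-shot multimodal prompt (OpenAI-style schema assumed).

    `examples` is a list of (image_path, answer) pairs shown before the real
    query, mirroring the in-context setup the abstract evaluates.
    """
    msgs = [{"role": "system",
             "content": "You are a surgical vision assistant."}]
    for img, ans in examples:
        msgs.append({"role": "user", "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {
                "url": f"data:image/png;base64,{encode(img)}"}}]})
        msgs.append({"role": "assistant", "content": ans})
    msgs.append({"role": "user", "content": [
        {"type": "text", "text": question},
        {"type": "image_url", "image_url": {
            "url": f"data:image/png;base64,{encode(query_image)}"}}]})
    return msgs
```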
2504.02801 | Jay Paranjape | Jay N. Paranjape, Celso de Melo, Vishal M. Patel | F-ViTA: Foundation Model Guided Visible to Thermal Translation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Thermal imaging is crucial for scene understanding, particularly in low-light
and nighttime conditions. However, collecting large thermal datasets is costly
and labor-intensive due to the specialized equipment required for infrared
image capture. To address this challenge, researchers have explored
visible-to-thermal image translation. Most existing methods rely on Generative
Adversarial Networks (GANs) or Diffusion Models (DMs), treating the task as a
style transfer problem. As a result, these approaches attempt to learn both the
modality distribution shift and underlying physical principles from limited
training data. In this paper, we propose F-ViTA, a novel approach that
leverages the general world knowledge embedded in foundation models to guide
the diffusion process for improved translation. Specifically, we condition an
InstructPix2Pix Diffusion Model with zero-shot masks and labels from foundation
models such as SAM and Grounded DINO. This allows the model to learn meaningful
correlations between scene objects and their thermal signatures in infrared
imagery. Extensive experiments on five public datasets demonstrate that F-ViTA
outperforms state-of-the-art (SOTA) methods. Furthermore, our model generalizes
well to out-of-distribution (OOD) scenarios and can generate Long-Wave Infrared
(LWIR), Mid-Wave Infrared (MWIR), and Near-Infrared (NIR) translations from the
same visible image. Code: https://github.com/JayParanjape/F-ViTA/tree/master.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 17:47:06 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Paranjape",
"Jay N.",
""
],
[
"de Melo",
"Celso",
""
],
[
"Patel",
"Vishal M.",
""
]
] | TITLE: F-ViTA: Foundation Model Guided Visible to Thermal Translation
ABSTRACT: Thermal imaging is crucial for scene understanding, particularly in low-light
and nighttime conditions. However, collecting large thermal datasets is costly
and labor-intensive due to the specialized equipment required for infrared
image capture. To address this challenge, researchers have explored
visible-to-thermal image translation. Most existing methods rely on Generative
Adversarial Networks (GANs) or Diffusion Models (DMs), treating the task as a
style transfer problem. As a result, these approaches attempt to learn both the
modality distribution shift and underlying physical principles from limited
training data. In this paper, we propose F-ViTA, a novel approach that
leverages the general world knowledge embedded in foundation models to guide
the diffusion process for improved translation. Specifically, we condition an
InstructPix2Pix Diffusion Model with zero-shot masks and labels from foundation
models such as SAM and Grounded DINO. This allows the model to learn meaningful
correlations between scene objects and their thermal signatures in infrared
imagery. Extensive experiments on five public datasets demonstrate that F-ViTA
outperforms state-of-the-art (SOTA) methods. Furthermore, our model generalizes
well to out-of-distribution (OOD) scenarios and can generate Long-Wave Infrared
(LWIR), Mid-Wave Infrared (MWIR), and Near-Infrared (NIR) translations from the
same visible image. Code: https://github.com/JayParanjape/F-ViTA/tree/master.
|
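F-ViTA conditions an InstructPix2Pix-style diffusion model on zero-shot masks (e.g., from SAM) and labels (e.g., from Grounded DINO). How exactly those signals are injected is not specified in the abstract, so the sketch below folds the masks into an extra input channel and the labels into the edit instruction as one plausible, assumed scheme.

```python
import numpy as np

def build_conditioning(rgb, masks, labels):
    """Sketch of foundation-model guidance in the spirit of F-ViTA.

    rgb:    (H, W, 3) uint8 visible image.
    masks:  list of (H, W) zero-shot instance masks.
    labels: class names for the masks. The channel-concat and instruction
    template are our assumptions, not the paper's conditioning design.
    """
    sem = np.zeros(rgb.shape[:2], dtype=np.float32)
    for i, m in enumerate(masks, start=1):
        sem[m > 0] = i / max(len(masks), 1)        # index-coded semantic map
    cond_image = np.concatenate([rgb / 255.0, sem[..., None]], axis=-1)
    instruction = ("Translate this visible image to thermal; scene contains: "
                   + ", ".join(sorted(set(labels))))
    return cond_image, instruction
```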
2504.02807 | Fan Zhou | Fan Zhou, Zengzhi Wang, Nikhil Ranjan, Zhoujun Cheng, Liping Tang,
Guowei He, Zhengzhong Liu, Eric P. Xing | MegaMath: Pushing the Limits of Open Math Corpora | 26 pages, 15 figures, 22 tables | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mathematical reasoning is a cornerstone of human intelligence and a key
benchmark for advanced capabilities in large language models (LLMs). However,
the research community still lacks an open, large-scale, high-quality corpus
tailored to the demands of math-centric LLM pre-training. We present MegaMath,
an open dataset curated from diverse, math-focused sources through the following
practices: (1) Revisiting web data: We re-extracted mathematical documents from
Common Crawl with math-oriented HTML optimizations, fasttext-based filtering
and deduplication, all for acquiring higher-quality data on the Internet. (2)
Recalling Math-related code data: We identified high-quality math-related code
from the large code training corpus Stack-V2, further enhancing data diversity.
(3) Exploring Synthetic data: We synthesized QA-style text, math-related code,
and interleaved text-code blocks from web data or code data. By integrating
these strategies and validating their effectiveness through extensive
ablations, MegaMath delivers 371B tokens with the largest quantity and top
quality among existing open math pre-training datasets.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 17:52:07 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zhou",
"Fan",
""
],
[
"Wang",
"Zengzhi",
""
],
[
"Ranjan",
"Nikhil",
""
],
[
"Cheng",
"Zhoujun",
""
],
[
"Tang",
"Liping",
""
],
[
"He",
"Guowei",
""
],
[
"Liu",
"Zhengzhong",
""
],
[
"Xing",
"Eric P.",
""
]
] | TITLE: MegaMath: Pushing the Limits of Open Math Corpora
ABSTRACT: Mathematical reasoning is a cornerstone of human intelligence and a key
benchmark for advanced capabilities in large language models (LLMs). However,
the research community still lacks an open, large-scale, high-quality corpus
tailored to the demands of math-centric LLM pre-training. We present MegaMath,
an open dataset curated from diverse, math-focused sources through the following
practices: (1) Revisiting web data: We re-extracted mathematical documents from
Common Crawl with math-oriented HTML optimizations, fasttext-based filtering
and deduplication, all for acquiring higher-quality data on the Internet. (2)
Recalling Math-related code data: We identified high-quality math-related code
from the large code training corpus Stack-V2, further enhancing data diversity.
(3) Exploring Synthetic data: We synthesized QA-style text, math-related code,
and interleaved text-code blocks from web data or code data. By integrating
these strategies and validating their effectiveness through extensive
ablations, MegaMath delivers 371B tokens with the largest quantity and top
quality among existing open math pre-training datasets.
|
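MegaMath's web pipeline names two concrete steps, fastText-based filtering and deduplication. A small sketch of that pattern with off-the-shelf tools is below; the classifier file, label name, and thresholds are placeholders, not MegaMath's released artifacts.

```python
import fasttext                              # pip install fasttext
from datasketch import MinHash, MinHashLSH   # pip install datasketch

# Placeholder: a binary fastText model trained offline to score "mathiness".
clf = fasttext.load_model("math_filter.bin")

def is_math(doc, threshold=0.8):
    labels, probs = clf.predict(doc.replace("\n", " "))
    return labels[0] == "__label__math" and probs[0] >= threshold

def dedup(docs, threshold=0.8, num_perm=128):
    """Drop near-duplicates with MinHash LSH over word shingles."""
    lsh = MinHashLSH(threshold=threshold, num_perm=num_perm)
    kept = []
    for i, doc in enumerate(docs):
        mh = MinHash(num_perm=num_perm)
        for tok in set(doc.split()):
            mh.update(tok.encode("utf-8"))
        if not lsh.query(mh):                # nothing similar kept so far
            lsh.insert(str(i), mh)
            kept.append(doc)
    return kept

docs = ["Theorem 2 follows by induction on n ...", "Buy cheap tickets now!"]
corpus = [d for d in dedup(docs) if is_math(d)]
```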
2504.02810 | Haowei Lin | Haowei Lin and Xiangyu Wang and Ruilin Yan and Baizhou Huang and
Haotian Ye and Jianhua Zhu and Zihao Wang and James Zou and Jianzhu Ma and
Yitao Liang | Generative Evaluation of Complex Reasoning in Large Language Models | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With powerful large language models (LLMs) demonstrating superhuman reasoning
capabilities, a critical question arises: Do LLMs genuinely reason, or do they
merely recall answers from their extensive, web-scraped training datasets?
Publicly released benchmarks inevitably become contaminated once incorporated
into subsequent LLM training sets, undermining their reliability as faithful
assessments. To address this, we introduce KUMO, a generative evaluation
framework designed specifically for assessing reasoning in LLMs. KUMO
synergistically combines LLMs with symbolic engines to dynamically produce
diverse, multi-turn reasoning tasks that are partially observable and
adjustable in difficulty. Through an automated pipeline, KUMO continuously
generates novel tasks across open-ended domains, compelling models to
demonstrate genuine generalization rather than memorization. We evaluated 23
state-of-the-art LLMs on 5,000 tasks across 100 domains created by KUMO,
benchmarking their reasoning abilities against university students. Our
findings reveal that many LLMs have surpassed university-level performance
on easy reasoning tasks, and reasoning-scaled LLMs reach university-level
performance on complex reasoning challenges. Moreover, LLM performance on KUMO
tasks correlates strongly with results on newly released real-world reasoning
benchmarks, underscoring KUMO's value as a robust, enduring assessment tool for
genuine LLM reasoning capabilities.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 17:54:18 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Lin",
"Haowei",
""
],
[
"Wang",
"Xiangyu",
""
],
[
"Yan",
"Ruilin",
""
],
[
"Huang",
"Baizhou",
""
],
[
"Ye",
"Haotian",
""
],
[
"Zhu",
"Jianhua",
""
],
[
"Wang",
"Zihao",
""
],
[
"Zou",
"James",
""
],
[
"Ma",
"Jianzhu",
""
],
[
"Liang",
"Yitao",
""
]
] | TITLE: Generative Evaluation of Complex Reasoning in Large Language Models
ABSTRACT: With powerful large language models (LLMs) demonstrating superhuman reasoning
capabilities, a critical question arises: Do LLMs genuinely reason, or do they
merely recall answers from their extensive, web-scraped training datasets?
Publicly released benchmarks inevitably become contaminated once incorporated
into subsequent LLM training sets, undermining their reliability as faithful
assessments. To address this, we introduce KUMO, a generative evaluation
framework designed specifically for assessing reasoning in LLMs. KUMO
synergistically combines LLMs with symbolic engines to dynamically produce
diverse, multi-turn reasoning tasks that are partially observable and
adjustable in difficulty. Through an automated pipeline, KUMO continuously
generates novel tasks across open-ended domains, compelling models to
demonstrate genuine generalization rather than memorization. We evaluated 23
state-of-the-art LLMs on 5,000 tasks across 100 domains created by KUMO,
benchmarking their reasoning abilities against university students. Our
findings reveal that many LLMs have surpassed university-level performance
on easy reasoning tasks, and reasoning-scaled LLMs reach university-level
performance on complex reasoning challenges. Moreover, LLM performance on KUMO
tasks correlates strongly with results on newly released real-world reasoning
benchmarks, underscoring KUMO's value as a robust, enduring assessment tool for
genuine LLM reasoning capabilities.
|
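KUMO's central design is that a symbolic engine generates fresh task instances with verifiable ground truth while the LLM under test only ever sees the rendered question, so training-set contamination cannot help. The toy below illustrates just that pattern; KUMO's actual engine, domains, and multi-turn protocol are far richer.

```python
import random

def make_task(rng):
    """Toy stand-in for a symbolic engine: a fresh, verifiable deduction."""
    items = rng.sample(["alpha", "beta", "gamma", "delta"], k=3)
    hidden = rng.choice(items)
    clues = " ".join(f"It is not {x}." for x in items if x != hidden)
    question = f"One of {items} is the answer. {clues} Which is it?"
    return question, hidden

def evaluate(model_fn, n=100, seed=0):
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        q, gold = make_task(rng)
        correct += gold in model_fn(q).lower()   # symbolic answer check
    return correct / n

# evaluate(lambda q: call_llm(q))  # call_llm is a placeholder client
```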
2504.02812 | Van Nguyen Nguyen | Van Nguyen Nguyen, Stephen Tyree, Andrew Guo, Mederic Fourmy, Anas
Gouda, Taeyeop Lee, Sungphill Moon, Hyeontae Son, Lukas Ranftl, Jonathan
Tremblay, Eric Brachmann, Bertram Drost, Vincent Lepetit, Carsten Rother,
Stan Birchfield, Jiri Matas, Yann Labbe, Martin Sundermeyer, Tomas Hodan | BOP Challenge 2024 on Model-Based and Model-Free 6D Object Pose
Estimation | arXiv admin note: text overlap with arXiv:2403.09799 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the evaluation methodology, datasets and results of the BOP
Challenge 2024, the sixth in a series of public competitions organized to
capture the state of the art in 6D object pose estimation and related tasks. In
2024, our goal was to transition BOP from lab-like setups to real-world
scenarios. First, we introduced new model-free tasks, where no 3D object models
are available and methods need to onboard objects just from provided reference
videos. Second, we defined a new, more practical 6D object detection task where
identities of objects visible in a test image are not provided as input. Third,
we introduced new BOP-H3 datasets recorded with high-resolution sensors and
AR/VR headsets, closely resembling real-world scenarios. BOP-H3 include 3D
models and onboarding videos to support both model-based and model-free tasks.
Participants competed on seven challenge tracks, each defined by a task, object
onboarding setup, and dataset group. Notably, the best 2024 method for
model-based 6D localization of unseen objects (FreeZeV2.1) achieves 22% higher
accuracy on BOP-Classic-Core than the best 2023 method (GenFlow), and is only
4% behind the best 2023 method for seen objects (GPose2023) although being
significantly slower (24.9 vs 2.7s per image). A more practical 2024 method for
this task is Co-op which takes only 0.8s per image and is 25X faster and 13%
more accurate than GenFlow. Methods have a similar ranking on 6D detection as
on 6D localization but higher run time. On model-based 2D detection of unseen
objects, the best 2024 method (MUSE) achieves 21% relative improvement compared
to the best 2023 method (CNOS). However, the 2D detection accuracy for unseen
objects is still noticeably (-53%) behind the accuracy for seen objects
(GDet2023). The online evaluation system stays open and is available at
http://bop.felk.cvut.cz/
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 17:55:19 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Nguyen",
"Van Nguyen",
""
],
[
"Tyree",
"Stephen",
""
],
[
"Guo",
"Andrew",
""
],
[
"Fourmy",
"Mederic",
""
],
[
"Gouda",
"Anas",
""
],
[
"Lee",
"Taeyeop",
""
],
[
"Moon",
"Sungphill",
""
],
[
"Son",
"Hyeontae",
""
],
[
"Ranftl",
"Lukas",
""
],
[
"Tremblay",
"Jonathan",
""
],
[
"Brachmann",
"Eric",
""
],
[
"Drost",
"Bertram",
""
],
[
"Lepetit",
"Vincent",
""
],
[
"Rother",
"Carsten",
""
],
[
"Birchfield",
"Stan",
""
],
[
"Matas",
"Jiri",
""
],
[
"Labbe",
"Yann",
""
],
[
"Sundermeyer",
"Martin",
""
],
[
"Hodan",
"Tomas",
""
]
] | TITLE: BOP Challenge 2024 on Model-Based and Model-Free 6D Object Pose
Estimation
ABSTRACT: We present the evaluation methodology, datasets and results of the BOP
Challenge 2024, the sixth in a series of public competitions organized to
capture the state of the art in 6D object pose estimation and related tasks. In
2024, our goal was to transition BOP from lab-like setups to real-world
scenarios. First, we introduced new model-free tasks, where no 3D object models
are available and methods need to onboard objects just from provided reference
videos. Second, we defined a new, more practical 6D object detection task where
identities of objects visible in a test image are not provided as input. Third,
we introduced new BOP-H3 datasets recorded with high-resolution sensors and
AR/VR headsets, closely resembling real-world scenarios. BOP-H3 include 3D
models and onboarding videos to support both model-based and model-free tasks.
Participants competed on seven challenge tracks, each defined by a task, object
onboarding setup, and dataset group. Notably, the best 2024 method for
model-based 6D localization of unseen objects (FreeZeV2.1) achieves 22% higher
accuracy on BOP-Classic-Core than the best 2023 method (GenFlow), and is only
4% behind the best 2023 method for seen objects (GPose2023) although being
significantly slower (24.9 vs 2.7s per image). A more practical 2024 method for
this task is Co-op which takes only 0.8s per image and is 25X faster and 13%
more accurate than GenFlow. Methods have a similar ranking on 6D detection as
on 6D localization but higher run time. On model-based 2D detection of unseen
objects, the best 2024 method (MUSE) achieves 21% relative improvement compared
to the best 2023 method (CNOS). However, the 2D detection accuracy for unseen
objects is still noticeably (-53%) behind the accuracy for seen objects
(GDet2023). The online evaluation system stays open and is available at
http://bop.felk.cvut.cz/
|
2504.02819 | Yuexi Du | Yuexi Du, Jiazhen Zhang, Nicha C. Dvornek, John A. Onofrey | GMR-Conv: An Efficient Rotation and Reflection Equivariant Convolution
Kernel Using Gaussian Mixture Rings | null | null | null | null | cs.CV cs.AI eess.IV eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symmetry, where certain features remain invariant under geometric
transformations, can often serve as a powerful prior in designing convolutional
neural networks (CNNs). While conventional CNNs inherently support
translational equivariance, extending this property to rotation and reflection
has proven challenging, often forcing a compromise between equivariance,
efficiency, and information loss. In this work, we introduce Gaussian Mixture
Ring Convolution (GMR-Conv), an efficient convolution kernel that smoothly
enforces radial symmetry using a mixture of Gaussian-weighted rings. This design
mitigates discretization errors of circular kernels, thereby preserving robust
rotation and reflection equivariance without incurring computational overhead.
We further optimize both the space and speed efficiency of GMR-Conv via a novel
parameterization and computation strategy, allowing larger kernels at an
acceptable cost. Extensive experiments on eight classification and one
segmentation datasets demonstrate that GMR-Conv not only matches conventional
CNNs' performance but can also surpass it in applications with orientation-less
data. GMR-Conv is also shown to be more robust and efficient than the
state-of-the-art equivariant learning methods. Our work provides inspiring
empirical evidence that carefully applied radial symmetry can alleviate the
challenges of information loss, marking a promising advance in equivariant
network architectures. The code is available at
https://github.com/XYPB/GMR-Conv.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 17:58:18 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Du",
"Yuexi",
""
],
[
"Zhang",
"Jiazhen",
""
],
[
"Dvornek",
"Nicha C.",
""
],
[
"Onofrey",
"John A.",
""
]
] | TITLE: GMR-Conv: An Efficient Rotation and Reflection Equivariant Convolution
Kernel Using Gaussian Mixture Rings
ABSTRACT: Symmetry, where certain features remain invariant under geometric
transformations, can often serve as a powerful prior in designing convolutional
neural networks (CNNs). While conventional CNNs inherently support
translational equivariance, extending this property to rotation and reflection
has proven challenging, often forcing a compromise between equivariance,
efficiency, and information loss. In this work, we introduce Gaussian Mixture
Ring Convolution (GMR-Conv), an efficient convolution kernel that smoothly
enforces radial symmetry using a mixture of Gaussian-weighted rings. This design
mitigates discretization errors of circular kernels, thereby preserving robust
rotation and reflection equivariance without incurring computational overhead.
We further optimize both the space and speed efficiency of GMR-Conv via a novel
parameterization and computation strategy, allowing larger kernels at an
acceptable cost. Extensive experiments on eight classification and one
segmentation datasets demonstrate that GMR-Conv not only matches conventional
CNNs' performance but can also surpass it in applications with orientation-less
data. GMR-Conv is also shown to be more robust and efficient than the
state-of-the-art equivariant learning methods. Our work provides inspiring
empirical evidence that carefully applied radial symmetry can alleviate the
challenges of information loss, marking a promising advance in equivariant
network architectures. The code is available at
https://github.com/XYPB/GMR-Conv.
|
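The GMR-Conv kernel construction is concrete enough to sketch: each learned weight scales one Gaussian-weighted ring, and because the resulting 2D kernel depends only on the distance from the center, it is rotation- and reflection-equivariant up to discretization. Ring count and sigma below are assumptions, not the paper's exact parameterization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GMRConv2d(nn.Module):
    """Sketch of a Gaussian-mixture-ring convolution layer."""

    def __init__(self, in_ch, out_ch, ksize=7, n_rings=4, sigma=0.7):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, n_rings) * 0.1)
        r = ksize // 2
        ys, xs = torch.meshgrid(torch.arange(-r, r + 1).float(),
                                torch.arange(-r, r + 1).float(), indexing="ij")
        radius = torch.sqrt(xs ** 2 + ys ** 2)               # (k, k)
        centers = torch.linspace(0, r, n_rings)              # ring radii
        basis = torch.exp(-(radius[None] - centers[:, None, None]) ** 2
                          / (2 * sigma ** 2))                # (n_rings, k, k)
        # Normalize each ring so the learned weights set its contribution.
        self.register_buffer("basis",
                             basis / basis.sum(dim=(1, 2), keepdim=True))

    def forward(self, x):
        # Expand ring weights into a full radially symmetric 2D kernel.
        kernel = torch.einsum("oir,rkl->oikl", self.weight, self.basis)
        return F.conv2d(x, kernel, padding=kernel.shape[-1] // 2)
```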
2504.02823 | Muzammal Naseer | Divya Velayudhan, Abdelfatah Ahmed, Mohamad Alansari, Neha Gour,
Abderaouf Behouch, Taimur Hassan, Syed Talal Wasim, Nabil Maalej, Muzammal
Naseer, Juergen Gall, Mohammed Bennamoun, Ernesto Damiani, Naoufel Werghi | STING-BEE: Towards Vision-Language Model for Real-World X-ray Baggage
Security Inspection | Accepted at CVPR 2025 | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Advancements in Computer-Aided Screening (CAS) systems are essential for
improving the detection of security threats in X-ray baggage scans. However,
current datasets are limited in representing real-world, sophisticated threats
and concealment tactics, and existing approaches are constrained by a
closed-set paradigm with predefined labels. To address these challenges, we
introduce STCray, the first multimodal X-ray baggage security dataset,
comprising 46,642 image-caption paired scans across 21 threat categories,
generated using an X-ray scanner for airport security. STCray is meticulously
developed with our specialized protocol that ensures domain-aware, coherent
captions, which lead to multi-modal instruction-following data in X-ray
baggage security. This allows us to train a domain-aware visual AI assistant
named STING-BEE that supports a range of vision-language tasks, including scene
comprehension, referring threat localization, visual grounding, and visual
question answering (VQA), establishing novel baselines for multi-modal learning
in X-ray baggage security. Further, STING-BEE shows state-of-the-art
generalization in cross-domain settings. Code, data, and models are available
at https://divs1159.github.io/STING-BEE/.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 17:59:12 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Velayudhan",
"Divya",
""
],
[
"Ahmed",
"Abdelfatah",
""
],
[
"Alansari",
"Mohamad",
""
],
[
"Gour",
"Neha",
""
],
[
"Behouch",
"Abderaouf",
""
],
[
"Hassan",
"Taimur",
""
],
[
"Wasim",
"Syed Talal",
""
],
[
"Maalej",
"Nabil",
""
],
[
"Naseer",
"Muzammal",
""
],
[
"Gall",
"Juergen",
""
],
[
"Bennamoun",
"Mohammed",
""
],
[
"Damiani",
"Ernesto",
""
],
[
"Werghi",
"Naoufel",
""
]
] | TITLE: STING-BEE: Towards Vision-Language Model for Real-World X-ray Baggage
Security Inspection
ABSTRACT: Advancements in Computer-Aided Screening (CAS) systems are essential for
improving the detection of security threats in X-ray baggage scans. However,
current datasets are limited in representing real-world, sophisticated threats
and concealment tactics, and existing approaches are constrained by a
closed-set paradigm with predefined labels. To address these challenges, we
introduce STCray, the first multimodal X-ray baggage security dataset,
comprising 46,642 image-caption paired scans across 21 threat categories,
generated using an X-ray scanner for airport security. STCray is meticulously
developed with our specialized protocol that ensures domain-aware, coherent
captions, which lead to multi-modal instruction-following data in X-ray
baggage security. This allows us to train a domain-aware visual AI assistant
named STING-BEE that supports a range of vision-language tasks, including scene
comprehension, referring threat localization, visual grounding, and visual
question answering (VQA), establishing novel baselines for multi-modal learning
in X-ray baggage security. Further, STING-BEE shows state-of-the-art
generalization in cross-domain settings. Code, data, and models are available
at https://divs1159.github.io/STING-BEE/.
|
2504.02828 | Jinqi Luo | Jinqi Luo, Tianjiao Ding, Kwan Ho Ryan Chan, Hancheng Min, Chris
Callison-Burch, Ren\'e Vidal | Concept Lancet: Image Editing with Compositional Representation
Transplant | Accepted in CVPR 2025. Project page at
https://peterljq.github.io/project/colan | null | null | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Diffusion models are widely used for image editing tasks. Existing editing
methods often design a representation manipulation procedure by curating an
edit direction in the text embedding or score space. However, such a procedure
faces a key challenge: overestimating the edit strength harms visual
consistency while underestimating it fails the editing task. Notably, each
source image may require a different editing strength, and it is costly to
search for an appropriate strength via trial-and-error. To address this
challenge, we propose Concept Lancet (CoLan), a zero-shot plug-and-play
framework for principled representation manipulation in diffusion-based image
editing. At inference time, we decompose the source input in the latent (text
embedding or diffusion score) space as a sparse linear combination of the
representations of the collected visual concepts. This allows us to accurately
estimate the presence of concepts in each image, which informs the edit. Based
on the editing task (replace/add/remove), we perform a customized concept
transplant process to impose the corresponding editing direction. To
sufficiently model the concept space, we curate a conceptual representation
dataset, CoLan-150K, which contains diverse descriptions and scenarios of
visual terms and phrases for the latent dictionary. Experiments on multiple
diffusion-based image editing baselines show that methods equipped with CoLan
achieve state-of-the-art performance in editing effectiveness and consistency
preservation.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 17:59:58 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Luo",
"Jinqi",
""
],
[
"Ding",
"Tianjiao",
""
],
[
"Chan",
"Kwan Ho Ryan",
""
],
[
"Min",
"Hancheng",
""
],
[
"Callison-Burch",
"Chris",
""
],
[
"Vidal",
"René",
""
]
] | TITLE: Concept Lancet: Image Editing with Compositional Representation
Transplant
ABSTRACT: Diffusion models are widely used for image editing tasks. Existing editing
methods often design a representation manipulation procedure by curating an
edit direction in the text embedding or score space. However, such a procedure
faces a key challenge: overestimating the edit strength harms visual
consistency while underestimating it fails the editing task. Notably, each
source image may require a different editing strength, and it is costly to
search for an appropriate strength via trial-and-error. To address this
challenge, we propose Concept Lancet (CoLan), a zero-shot plug-and-play
framework for principled representation manipulation in diffusion-based image
editing. At inference time, we decompose the source input in the latent (text
embedding or diffusion score) space as a sparse linear combination of the
representations of the collected visual concepts. This allows us to accurately
estimate the presence of concepts in each image, which informs the edit. Based
on the editing task (replace/add/remove), we perform a customized concept
transplant process to impose the corresponding editing direction. To
sufficiently model the concept space, we curate a conceptual representation
dataset, CoLan-150K, which contains diverse descriptions and scenarios of
visual terms and phrases for the latent dictionary. Experiments on multiple
diffusion-based image editing baselines show that methods equipped with CoLan
achieve state-of-the-art performance in editing effectiveness and consistency
preservation.
|
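CoLan's decompose-then-transplant step is described precisely enough to sketch: solve a sparse code for the source latent over a dictionary of concept representations, then move the coefficient of the concept being removed onto the one being added. The lasso solver and coefficient swap below are our assumptions; the CoLan-150K dictionary and the score-space variant are omitted.

```python
import numpy as np
from sklearn.linear_model import Lasso

def colan_style_edit(z_src, dictionary, names, remove, add, alpha=0.01):
    """Sketch of sparse decomposition + concept transplant.

    z_src:      (d,) latent (e.g., text-embedding) vector of the source.
    dictionary: (n_concepts, d) matrix of concept representations.
    """
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(dictionary.T, z_src)          # z ~ dictionary.T @ code
    code = coder.coef_.copy()
    i, j = names.index(remove), names.index(add)
    code[j] += code[i]                      # transplant the estimated
    code[i] = 0.0                           #   presence of the old concept
    return dictionary.T @ code              # edited latent
```

The sparse code doubles as the "presence estimate" the abstract mentions: concepts absent from the image simply receive near-zero coefficients, so the edit strength adapts per image instead of being hand-tuned.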
2208.03486 | Emilie Mathian | E. Mathian, H. Liu, L. Fernandez-Cuesta, D. Samaras, M. Foll, L. Chen | HaloAE: A HaloNet-based Local Transformer Auto-Encoder for Anomaly
Detection and Localization | 21 pages, 6 figures, rejected to ECCV 2023 | In Proceedings of the 18th International Joint Conference on
Computer Vision, Imaging and Computer Graphics Theory and Applications
(VISIGRAPP 2023) - Volume 5: VISAPP; ISBN 978-989-758-634-7; ISSN 2184-4321,
SciTePress, pages 325-337 | 10.5220/0011865900003417 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Unsupervised anomaly detection and localization is a crucial task as it is
impossible to collect and label all possible anomalies. Many studies have
emphasized the importance of integrating local and global information to
achieve accurate segmentation of anomalies. To this end, there has been a
growing interest in Transformer, which allows modeling long-range content
interactions. However, global interactions through self attention are generally
too expensive for most image scales. In this study, we introduce HaloAE, the
first auto-encoder based on a local 2D version of Transformer with HaloNet.
With HaloAE, we have created a hybrid model that combines convolution and local
2D block-wise self-attention layers and jointly performs anomaly detection and
segmentation through a single model. We achieved competitive results on the
MVTec dataset, suggesting that vision models incorporating Transformer could
benefit from a local computation of the self-attention operation, and pave the
way for other applications.
| [
{
"version": "v1",
"created": "Sat, 6 Aug 2022 09:52:32 GMT"
},
{
"version": "v2",
"created": "Sun, 21 Aug 2022 09:28:20 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Sep 2022 13:37:53 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Mathian",
"E.",
""
],
[
"Liu",
"H.",
""
],
[
"Fernandez-Cuesta",
"L.",
""
],
[
"Samaras",
"D.",
""
],
[
"Foll",
"M.",
""
],
[
"Chen",
"L.",
""
]
] | TITLE: HaloAE: A HaloNet-based Local Transformer Auto-Encoder for Anomaly
Detection and Localization
ABSTRACT: Unsupervised anomaly detection and localization is a crucial task as it is
impossible to collect and label all possible anomalies. Many studies have
emphasized the importance of integrating local and global information to
achieve accurate segmentation of anomalies. To this end, there has been a
growing interest in Transformer, which allows modeling long-range content
interactions. However, global interactions through self attention are generally
too expensive for most image scales. In this study, we introduce HaloAE, the
first auto-encoder based on a local 2D version of Transformer with HaloNet.
With HaloAE, we have created a hybrid model that combines convolution and local
2D block-wise self-attention layers and jointly performs anomaly detection and
segmentation through a single model. We achieved competitive results on the
MVTec dataset, suggesting that vision models incorporating Transformer could
benefit from a local computation of the self-attention operation, and pave the
way for other applications.
|
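The "local 2D block-wise self-attention" HaloAE builds on can be sketched compactly: queries come from non-overlapping blocks, while keys and values come from the same blocks enlarged by a halo, keeping attention local and cheap. The single-head version below (projections omitted, H and W assumed divisible by the block size) is an illustration of the mechanism, not HaloAE's training code.

```python
import torch
import torch.nn.functional as F

def halo_attention(x, block=4, halo=1):
    """Single-head HaloNet-style local 2D self-attention (sketch)."""
    B, C, H, W = x.shape
    b, h = block, halo
    q = F.unfold(x, kernel_size=b, stride=b)                  # (B, C*b*b, n)
    kv = F.unfold(x, kernel_size=b + 2 * h, stride=b, padding=h)
    n = q.shape[-1]
    q = q.view(B, C, b * b, n).permute(0, 3, 2, 1)            # (B, n, b2, C)
    kv = kv.view(B, C, (b + 2 * h) ** 2, n).permute(0, 3, 2, 1)
    attn = torch.softmax(q @ kv.transpose(-1, -2) / C ** 0.5, dim=-1)
    out = (attn @ kv).permute(0, 3, 2, 1).reshape(B, C * b * b, n)
    return F.fold(out, output_size=(H, W), kernel_size=b, stride=b)
```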
2208.14161 | Yuhang Liu | Yuhang Liu, Zhen Zhang, Dong Gong, Mingming Gong, Biwei Huang, Anton
van den Hengel, Kun Zhang, Javen Qinfeng Shi | Latent Covariate Shift: Unlocking Partial Identifiability for
Multi-Source Domain Adaptation | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-source domain adaptation (MSDA) addresses the challenge of learning a
label prediction function for an unlabeled target domain by leveraging both the
labeled data from multiple source domains and the unlabeled data from the
target domain. Conventional MSDA approaches often rely on covariate shift or
conditional shift paradigms, which assume a consistent label distribution
across domains. However, this assumption proves limiting in practical scenarios
where label distributions do vary across domains, diminishing its applicability
in real-world settings. For example, animals from different regions exhibit
diverse characteristics due to varying diets and genetics.
Motivated by this, we propose a novel paradigm called latent covariate shift
(LCS), which introduces significantly greater variability and adaptability
across domains. Notably, it provides a theoretical assurance for recovering the
latent cause of the label variable, which we refer to as the latent content
variable. Within this new paradigm, we present an intricate causal generative
model by introducing latent noises across domains, along with a latent content
variable and a latent style variable to achieve more nuanced rendering of
observational data. We demonstrate that the latent content variable can be
identified up to block identifiability due to its versatile yet distinct causal
structure. We anchor our theoretical insights into a novel MSDA method, which
learns the label distribution conditioned on the identifiable latent content
variable, thereby accommodating more substantial distribution shifts. The
proposed approach showcases exceptional performance and efficacy on both
simulated and real-world datasets.
| [
{
"version": "v1",
"created": "Tue, 30 Aug 2022 11:25:15 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Sep 2022 07:19:36 GMT"
},
{
"version": "v3",
"created": "Sun, 31 Mar 2024 23:09:38 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Apr 2025 23:47:59 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Liu",
"Yuhang",
""
],
[
"Zhang",
"Zhen",
""
],
[
"Gong",
"Dong",
""
],
[
"Gong",
"Mingming",
""
],
[
"Huang",
"Biwei",
""
],
[
"Hengel",
"Anton van den",
""
],
[
"Zhang",
"Kun",
""
],
[
"Shi",
"Javen Qinfeng",
""
]
] | TITLE: Latent Covariate Shift: Unlocking Partial Identifiability for
Multi-Source Domain Adaptation
ABSTRACT: Multi-source domain adaptation (MSDA) addresses the challenge of learning a
label prediction function for an unlabeled target domain by leveraging both the
labeled data from multiple source domains and the unlabeled data from the
target domain. Conventional MSDA approaches often rely on covariate shift or
conditional shift paradigms, which assume a consistent label distribution
across domains. However, this assumption proves limiting in practical scenarios
where label distributions do vary across domains, diminishing its applicability
in real-world settings. For example, animals from different regions exhibit
diverse characteristics due to varying diets and genetics.
Motivated by this, we propose a novel paradigm called latent covariate shift
(LCS), which introduces significantly greater variability and adaptability
across domains. Notably, it provides a theoretical assurance for recovering the
latent cause of the label variable, which we refer to as the latent content
variable. Within this new paradigm, we present an intricate causal generative
model by introducing latent noises across domains, along with a latent content
variable and a latent style variable to achieve more nuanced rendering of
observational data. We demonstrate that the latent content variable can be
identified up to block identifiability due to its versatile yet distinct causal
structure. We anchor our theoretical insights into a novel MSDA method, which
learns the label distribution conditioned on the identifiable latent content
variable, thereby accommodating more substantial distribution shifts. The
proposed approach showcases exceptional performance and efficacy on both
simulated and real-world datasets.
|
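The latent covariate shift setting can be made concrete with a toy data-generating process: a latent content variable causes the label, a domain-dependent latent style variable affects only the observation, and the label distribution shifts across domains because the content distribution does. The sketch below is only an illustration of that causal structure; the paper's actual graph and identifiability conditions are richer.

```python
import numpy as np

def sample_domain(n, domain_shift, rng):
    """Toy generative process in the spirit of latent covariate shift."""
    z_c = rng.normal(loc=domain_shift, size=(n, 2))    # content, shifts p(y)
    y = (z_c.sum(axis=1) > 0).astype(int)              # y caused by content
    z_s = rng.normal(scale=0.5 + domain_shift, size=(n, 2))  # style/nuisance
    x = (np.tanh(np.concatenate([z_c, z_s], axis=1))
         + rng.normal(0, 0.1, (n, 4)))                 # mixed observation
    return x, y

rng = np.random.default_rng(0)
src1 = sample_domain(500, 0.0, rng)
src2 = sample_domain(500, 0.8, rng)
tgt = sample_domain(500, 1.5, rng)   # unlabeled at training time in MSDA
```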
2209.12675 | Guillermo Carbajal | Guillermo Carbajal, Patricia Vitoria, Jos\'e Lezama, and Pablo Mus\'e | Assessing the Role of Datasets in the Generalization of Motion
Deblurring Methods to Real Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Successfully training end-to-end deep networks for real motion deblurring
requires datasets of sharp/blurred image pairs that are realistic and diverse
enough to achieve generalization to real blurred images. Obtaining such
datasets remains a challenging task. In this paper, we first review the
limitations of existing deblurring benchmark datasets and analyze the
underlying causes for deblurring networks' lack of generalization to blurry
images in the wild. Based on this analysis, we propose an efficient procedural
methodology to generate sharp/blurred image pairs based on a simple yet
effective model. This allows for generating virtually unlimited diverse
training pairs mimicking realistic blur properties. We demonstrate the
effectiveness of the proposed dataset by training existing deblurring
architectures on the simulated pairs and performing cross-dataset evaluation on
three standard datasets of real blurred images. When training with the proposed
method, we observed superior generalization performance for the ultimate task
of deblurring real motion-blurred photos of dynamic scenes.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2022 13:20:35 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 22:22:09 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Carbajal",
"Guillermo",
""
],
[
"Vitoria",
"Patricia",
""
],
[
"Lezama",
"José",
""
],
[
"Musé",
"Pablo",
""
]
] | TITLE: Assessing the Role of Datasets in the Generalization of Motion
Deblurring Methods to Real Images
ABSTRACT: Successfully training end-to-end deep networks for real motion deblurring
requires datasets of sharp/blurred image pairs that are realistic and diverse
enough to achieve generalization to real blurred images. Obtaining such
datasets remains a challenging task. In this paper, we first review the
limitations of existing deblurring benchmark datasets and analyze the
underlying causes for deblurring networks' lack of generalization to blurry
images in the wild. Based on this analysis, we propose an efficient procedural
methodology to generate sharp/blurred image pairs based on a simple yet
effective model. This allows for generating virtually unlimited diverse
training pairs mimicking realistic blur properties. We demonstrate the
effectiveness of the proposed dataset by training existing deblurring
architectures on the simulated pairs and performing cross-dataset evaluation on
three standard datasets of real blurred images. When training with the proposed
method, we observed superior generalization performance for the ultimate task
of deblurring real motion-blurred photos of dynamic scenes.
|
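The deblurring paper's procedural methodology for sharp/blurred pairs can be approximated in a few lines: rasterize a random camera-shake trajectory into a blur kernel, convolve, and add mild sensor noise. All parameters below are our choices for illustration, not the paper's generation model.

```python
import numpy as np
from scipy.ndimage import convolve

def random_motion_kernel(size=31, steps=200, rng=None):
    """Random shake trajectory accumulated on a grid, then normalized."""
    rng = rng or np.random.default_rng()
    pos, vel = np.zeros(2), np.zeros(2)
    k = np.zeros((size, size))
    for _ in range(steps):
        vel += rng.normal(size=2) * 0.3                # jitter the velocity
        pos = np.clip(pos + vel, -size // 2 + 1, size // 2 - 1)
        iy, ix = (pos + size // 2).astype(int)
        k[iy, ix] += 1.0
    return k / k.sum()

def make_pair(sharp, rng=None):
    """Return (sharp, blurred) for a float image in [0, 1], shape (H, W, 3)."""
    rng = rng or np.random.default_rng()
    kernel = random_motion_kernel(rng=rng)
    blurred = np.stack([convolve(sharp[..., c], kernel, mode="reflect")
                        for c in range(sharp.shape[-1])], axis=-1)
    return sharp, np.clip(blurred + rng.normal(0, 0.01, blurred.shape), 0, 1)
```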
2210.04745 | Luca Leuzzi | Daniele Ancora, Matteo Negri, Antonio Gianfrate, Dimitris
Trypogeorgos, Lorenzo Dominici, Daniele Sanvitto, Federico Ricci-Tersenghi,
Luca Leuzzi | Low-power multi-mode fiber projector overcomes shallow neural networks
classifiers | 12 pages, 8 figures | Phys. Rev. Applied 21, 064027 (2024) | 10.1103/PhysRevApplied.21.0640 | null | physics.optics physics.app-ph physics.data-an stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the domain of disordered photonics, the characterization of optically
opaque materials for light manipulation and imaging is a primary aim. Among
various complex devices, multi-mode optical fibers stand out as cost-effective
and easy-to-handle tools, making them attractive for several tasks. In this
context, we cast these fibers into random hardware projectors, transforming an
input dataset into a higher dimensional speckled image set. The goal of our
study is to demonstrate that using such randomized data for classification by
training a single logistic regression layer improves accuracy compared to
training on direct raw images. Interestingly, we found that the classification
accuracy achieved is higher than that obtained with the standard transmission
matrix model, a widely accepted tool for describing light transmission through
disordered devices. We conjecture that the reason for such improved performance
could be due to the fact that the hardware classifier operates in a flatter
region of the loss landscape when trained on fiber data, which aligns with the
current theory of deep neural networks. These findings suggest that the class
of random projections operated by multi-mode fibers generalize better to
previously unseen data, positioning them as promising tools for
optically-assisted neural networks. With this study, we aim to
contribute to advancing the knowledge and practical utilization of these
versatile instruments, which may play a significant role in shaping the future
of neuromorphic machine learning.
| [
{
"version": "v1",
"created": "Mon, 10 Oct 2022 14:55:02 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 14:40:26 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 16:17:19 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ancora",
"Daniele",
""
],
[
"Negri",
"Matteo",
""
],
[
"Gianfrate",
"Antonio",
""
],
[
"Trypogeorgos",
"Dimitris",
""
],
[
"Dominici",
"Lorenzo",
""
],
[
"Sanvitto",
"Daniele",
""
],
[
"Ricci-Tersenghi",
"Federico",
""
],
[
"Leuzzi",
"Luca",
""
]
] | TITLE: Low-power multi-mode fiber projector overcomes shallow neural networks
classifiers
ABSTRACT: In the domain of disordered photonics, the characterization of optically
opaque materials for light manipulation and imaging is a primary aim. Among
various complex devices, multi-mode optical fibers stand out as cost-effective
and easy-to-handle tools, making them attractive for several tasks. In this
context, we cast these fibers into random hardware projectors, transforming an
input dataset into a higher dimensional speckled image set. The goal of our
study is to demonstrate that using such randomized data for classification by
training a single logistic regression layer improves accuracy compared to
training on direct raw images. Interestingly, we found that the classification
accuracy achieved is higher than that obtained with the standard transmission
matrix model, a widely accepted tool for describing light transmission through
disordered devices. We conjecture that the reason for such improved performance
could be due to the fact that the hardware classifier operates in a flatter
region of the loss landscape when trained on fiber data, which aligns with the
current theory of deep neural networks. These findings suggest that the class
of random projections operated by multi-mode fibers generalize better to
previously unseen data, positioning them as promising tools for
optically-assisted neural networks. With this study, we aim to
contribute to advancing the knowledge and practical utilization of these
versatile instruments, which may play a significant role in shaping the future
of neuromorphic machine learning.
|
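The fiber experiment has a simple numerical stand-in: a fixed random complex transmission matrix projects inputs to speckle intensities, and a single logistic regression layer is trained on top. The sketch below mimics the comparison in software only (the paper's point is a hardware projector); dataset and output dimension are our choices.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)              # (1797, 64) raw images
d_out = 512                                      # speckle "pixels" (assumed)
T = (rng.normal(size=(d_out, X.shape[1]))
     + 1j * rng.normal(size=(d_out, X.shape[1]))) / np.sqrt(2 * X.shape[1])
speckle = np.abs(X @ T.T) ** 2                   # camera records |T x|^2

for name, feats in [("raw pixels", X), ("speckle projection", speckle)]:
    Xtr, Xte, ytr, yte = train_test_split(feats, y, random_state=0)
    acc = LogisticRegression(max_iter=2000).fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: test accuracy = {acc:.3f}")
```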
2304.07983 | Sofiane Tanji | Sofiane Tanji and Andrea Della Vecchia and Fran\c{c}ois Glineur and
Silvia Villa | Snacks: a fast large-scale kernel SVM solver | 6 pages | null | 10.23919/ECC57647.2023.10178323 | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kernel methods provide a powerful framework for nonparametric learning. They
are based on kernel functions and allow learning in a rich functional space
while applying linear statistical learning tools, such as Ridge Regression or
Support Vector Machines. However, standard kernel methods suffer from a
quadratic time and memory complexity in the number of data points and thus have
limited applications in large-scale learning. In this paper, we propose Snacks,
a new large-scale solver for Kernel Support Vector Machines. Specifically,
Snacks relies on a Nystr\"om approximation of the kernel matrix and an
accelerated variant of the stochastic subgradient method. We demonstrate,
through a detailed empirical evaluation, that it competes with other
SVM solvers on a variety of benchmark datasets.
| [
{
"version": "v1",
"created": "Mon, 17 Apr 2023 04:19:20 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Tanji",
"Sofiane",
""
],
[
"Della Vecchia",
"Andrea",
""
],
[
"Glineur",
"François",
""
],
[
"Villa",
"Silvia",
""
]
] | TITLE: Snacks: a fast large-scale kernel SVM solver
ABSTRACT: Kernel methods provide a powerful framework for nonparametric learning. They
are based on kernel functions and allow learning in a rich functional space
while applying linear statistical learning tools, such as Ridge Regression or
Support Vector Machines. However, standard kernel methods suffer from a
quadratic time and memory complexity in the number of data points and thus have
limited applications in large-scale learning. In this paper, we propose Snacks,
a new large-scale solver for Kernel Support Vector Machines. Specifically,
Snacks relies on a Nystr\"om approximation of the kernel matrix and an
accelerated variant of the stochastic subgradient method. We demonstrate,
through a detailed empirical evaluation, that it competes with other
SVM solvers on a variety of benchmark datasets.
|
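The Snacks recipe (Nyström approximation of the kernel matrix plus a stochastic subgradient solver) can be approximated end to end with stock scikit-learn components, which is useful to see the structure even though it does not reproduce the authors' accelerated solver.

```python
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# Nystroem maps inputs to an explicit low-rank kernel feature space; the
# hinge-loss SGD on top plays the role of the stochastic subgradient step.
X, y = make_classification(n_samples=20000, n_features=50, random_state=0)
svm = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.05, n_components=300, random_state=0),
    SGDClassifier(loss="hinge", alpha=1e-5, random_state=0),
)
svm.fit(X, y)
print("train accuracy:", svm.score(X, y))
```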
2305.00645 | Qifan Wang | Qifan Wang, Shujie Cui, Lei Zhou, Ye Dong, Jianli Bai, Yun Sing Koh
and Giovanni Russello | GTree: GPU-Friendly Privacy-preserving Decision Tree Training and
Inference | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision tree (DT) is a widely used machine learning model due to its
versatility, speed, and interpretability. However, for privacy-sensitive
applications, outsourcing DT training and inference to cloud platforms raises
concerns about data privacy. Researchers have developed privacy-preserving
approaches for DT training and inference using cryptographic primitives, such
as Secure Multi-Party Computation (MPC). While these approaches have shown
progress, they still suffer from heavy computation and communication overheads.
A few recent works employ Graphics Processing Units (GPUs) to improve the
performance of MPC-protected deep learning. This raises a natural question:
\textit{can MPC-protected DT training and inference be accelerated by GPU?}
We present GTree, the first scheme that uses GPU to accelerate MPC-protected
secure DT training and inference. GTree is built across 3 parties who securely
and jointly perform each step of DT training and inference with GPU. Each MPC
protocol in GTree is designed in a GPU-friendly version. The performance
evaluation shows that GTree achieves ${\thicksim}11{\times}$ and
${\thicksim}21{\times}$ improvements in training SPECT and Adult datasets,
compared to the prior most efficient CPU-based work. For inference, GTree shows
its superior efficiency when the DT has less than 10 levels, which is
$126\times$ faster than the prior most efficient work when inferring $10^4$
instances with a tree of 7 levels. GTree also achieves a stronger security
guarantee than prior solutions, which only leaks the tree depth and size of
data samples while prior solutions also leak the tree structure. With
\textit{oblivious array access}, the access pattern on GPU is also protected.
| [
{
"version": "v1",
"created": "Mon, 1 May 2023 03:35:43 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Aug 2024 15:35:12 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 21:33:59 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Wang",
"Qifan",
""
],
[
"Cui",
"Shujie",
""
],
[
"Zhou",
"Lei",
""
],
[
"Dong",
"Ye",
""
],
[
"Bai",
"Jianli",
""
],
[
"Koh",
"Yun Sing",
""
],
[
"Russello",
"Giovanni",
""
]
] | TITLE: GTree: GPU-Friendly Privacy-preserving Decision Tree Training and
Inference
ABSTRACT: Decision tree (DT) is a widely used machine learning model due to its
versatility, speed, and interpretability. However, for privacy-sensitive
applications, outsourcing DT training and inference to cloud platforms raises
concerns about data privacy. Researchers have developed privacy-preserving
approaches for DT training and inference using cryptographic primitives, such
as Secure Multi-Party Computation (MPC). While these approaches have shown
progress, they still suffer from heavy computation and communication overheads.
A few recent works employ Graphics Processing Units (GPUs) to improve the
performance of MPC-protected deep learning. This raises a natural question:
\textit{can MPC-protected DT training and inference be accelerated by GPU?}
We present GTree, the first scheme that uses GPU to accelerate MPC-protected
secure DT training and inference. GTree is built across 3 parties who securely
and jointly perform each step of DT training and inference with GPU. Each MPC
protocol in GTree is designed in a GPU-friendly version. The performance
evaluation shows that GTree achieves ${\thicksim}11{\times}$ and
${\thicksim}21{\times}$ improvements in training SPECT and Adult datasets,
compared to the prior most efficient CPU-based work. For inference, GTree shows
its superior efficiency when the DT has less than 10 levels, which is
$126\times$ faster than the prior most efficient work when inferring $10^4$
instances with a tree of 7 levels. GTree also achieves a stronger security
guarantee than prior solutions, which only leaks the tree depth and size of
data samples while prior solutions also leak the tree structure. With
\textit{oblivious array access}, the access pattern on GPU is also protected.
|
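GTree's 3-party MPC setting rests on secret sharing, where each party holds one share and linear operations are local. The toy below shows only that additive-sharing building block over a ring; GTree's actual GPU-friendly protocols for comparisons, training steps, and oblivious array access are not covered.

```python
import numpy as np

P = 2 ** 32  # ring modulus (assumed); GTree's exact protocols differ

def share(x, rng):
    """Split a secret into 3 additive shares over Z_{2^32}."""
    s0, s1 = rng.integers(0, P, size=2)
    return np.array([s0, s1, (x - s0 - s1) % P], dtype=np.uint64)

def reveal(shares):
    return int(shares.sum() % P)

rng = np.random.default_rng(1)
a, b = share(17, rng), share(25, rng)
# Addition is local: each party adds its own shares, no communication.
assert reveal((a + b) % P) == 42
```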