id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
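The rows below follow the schema above. As an aside, a minimal loading sketch is shown here for readers who want to iterate over such records programmatically; it assumes a local JSON Lines export named `arxiv_subset.jsonl` (a hypothetical file name, since the dataset's hosting location is not given in this excerpt).

```python
import json

def load_records(path: str = "arxiv_subset.jsonl"):
    """Yield one metadata record per line, following the schema above.

    Sketch only: "arxiv_subset.jsonl" is a hypothetical local export of this
    dataset in JSON Lines form; adjust the path/format to however you obtained it.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

for rec in load_records():
    # authors_parsed is a sequence of [last_name, first_name, suffix] triples.
    names = [f"{first} {last}".strip() for last, first, _ in rec["authors_parsed"]]
    print(rec["id"], "|", rec["title"].replace("\n", " "), "|", ", ".join(names))
```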
2503.18595 | Chengxiang Huang | Chengxiang Huang, Yake Wei, Zequn Yang, Di Hu | Adaptive Unimodal Regulation for Balanced Multimodal Information
Acquisition | 10 pages, 16 figures, CVPR2025 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sensory training during the early ages is vital for human development.
Inspired by this cognitive phenomenon, we observe that the early training stage
is also important for the multimodal learning process, where dataset
information is rapidly acquired. We refer to this stage as the prime learning
window. However, based on our observation, this prime learning window in
multimodal learning is often dominated by information-sufficient modalities,
which in turn suppresses the information acquisition of
information-insufficient modalities. To address this issue, we propose
Information Acquisition Regulation (InfoReg), a method designed to balance
information acquisition among modalities. Specifically, InfoReg slows down the
information acquisition process of information-sufficient modalities during the
prime learning window, which could promote information acquisition of
information-insufficient modalities. This regulation enables a more balanced
learning process and improves the overall performance of the multimodal
network. Experiments show that InfoReg outperforms related multimodal
imbalanced methods across various datasets, achieving superior model
performance. The code is available at
https://github.com/GeWu-Lab/InfoReg_CVPR2025.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 11:52:57 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Huang",
"Chengxiang",
""
],
[
"Wei",
"Yake",
""
],
[
"Yang",
"Zequn",
""
],
[
"Hu",
"Di",
""
]
] | TITLE: Adaptive Unimodal Regulation for Balanced Multimodal Information
Acquisition
ABSTRACT: Sensory training during the early ages is vital for human development.
Inspired by this cognitive phenomenon, we observe that the early training stage
is also important for the multimodal learning process, where dataset
information is rapidly acquired. We refer to this stage as the prime learning
window. However, based on our observation, this prime learning window in
multimodal learning is often dominated by information-sufficient modalities,
which in turn suppresses the information acquisition of
information-insufficient modalities. To address this issue, we propose
Information Acquisition Regulation (InfoReg), a method designed to balance
information acquisition among modalities. Specifically, InfoReg slows down the
information acquisition process of information-sufficient modalities during the
prime learning window, which could promote information acquisition of
information-insufficient modalities. This regulation enables a more balanced
learning process and improves the overall performance of the multimodal
network. Experiments show that InfoReg outperforms related multimodal
imbalanced methods across various datasets, achieving superior model
performance. The code is available at
https://github.com/GeWu-Lab/InfoReg_CVPR2025.
|
2503.18617 | Tomasz R\'o\.za\'nski | Tomasz R\'o\.za\'nski, Yuan-Sen Ting | Scaling Laws for Emulation of Stellar Spectra | 25 pages, 11 figures, submitted to OJA | null | null | null | astro-ph.IM astro-ph.SR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Neural network-based emulators for the inference of stellar parameters and
elemental abundances represent an increasingly popular methodology in modern
spectroscopic surveys. However, these approaches are often constrained by their
emulation precision and domain transfer capabilities. Greater generalizability
has previously been achieved only with significantly larger model
architectures, as demonstrated by Transformer-based models in natural language
processing. This observation aligns with neural scaling laws, where model
performance predictably improves with increased model size, computational
resources allocated to model training, and training data volume. In this study,
we demonstrate that these scaling laws also apply to Transformer-based spectral
emulators in astronomy. Building upon our previous work with TransformerPayne
and incorporating Maximum Update Parametrization techniques from natural
language models, we provide training guidelines for scaling models to achieve
optimal performance. Our results show that within the explored parameter space,
clear scaling relationships emerge. These findings suggest that optimal
computational resource allocation requires balanced scaling. Specifically,
given a tenfold increase in training compute, achieving an optimal seven-fold
reduction in mean squared error necessitates an approximately 2.5-fold increase
in dataset size and a 3.8-fold increase in model size. This study establishes a
foundation for developing spectral foundational models with enhanced domain
transfer capabilities.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 12:20:24 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Różański",
"Tomasz",
""
],
[
"Ting",
"Yuan-Sen",
""
]
] | TITLE: Scaling Laws for Emulation of Stellar Spectra
ABSTRACT: Neural network-based emulators for the inference of stellar parameters and
elemental abundances represent an increasingly popular methodology in modern
spectroscopic surveys. However, these approaches are often constrained by their
emulation precision and domain transfer capabilities. Greater generalizability
has previously been achieved only with significantly larger model
architectures, as demonstrated by Transformer-based models in natural language
processing. This observation aligns with neural scaling laws, where model
performance predictably improves with increased model size, computational
resources allocated to model training, and training data volume. In this study,
we demonstrate that these scaling laws also apply to Transformer-based spectral
emulators in astronomy. Building upon our previous work with TransformerPayne
and incorporating Maximum Update Parametrization techniques from natural
language models, we provide training guidelines for scaling models to achieve
optimal performance. Our results show that within the explored parameter space,
clear scaling relationships emerge. These findings suggest that optimal
computational resource allocation requires balanced scaling. Specifically,
given a tenfold increase in training compute, achieving an optimal seven-fold
reduction in mean squared error necessitates an approximately 2.5-fold increase
in dataset size and a 3.8-fold increase in model size. This study establishes a
foundation for developing spectral foundational models with enhanced domain
transfer capabilities.
|
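As a quick sanity check on the allocation numbers quoted in the abstract above (a 10-fold compute increase paired with roughly 2.5x more data, a 3.8x larger model, and a ~7-fold MSE reduction), the sketch below back-derives illustrative power-law exponents from those three ratios. The exponents and the extrapolation are for illustration only; they are not taken from the paper.

```python
import math

# Back-derive illustrative exponents from the ratios quoted in the abstract:
# 10x compute ~ 2.5x data and 3.8x model size, with ~7x lower mean squared error.
COMPUTE_FACTOR = 10.0
alpha_data = math.log(2.5) / math.log(COMPUTE_FACTOR)    # ~0.40
alpha_model = math.log(3.8) / math.log(COMPUTE_FACTOR)   # ~0.58 (0.40 + 0.58 ~ 1,
                                                         # consistent with compute ~ data x params)
alpha_error = -math.log(7.0) / math.log(COMPUTE_FACTOR)  # ~ -0.85

def allocate(compute_multiplier: float) -> dict:
    """Illustrative extrapolation of data/model scaling and the expected MSE factor."""
    return {
        "data_x": compute_multiplier ** alpha_data,
        "model_x": compute_multiplier ** alpha_model,
        "mse_x": compute_multiplier ** alpha_error,
    }

print(allocate(10.0))   # ~{'data_x': 2.5, 'model_x': 3.8, 'mse_x': 0.14}
print(allocate(100.0))  # naive extrapolation one order of magnitude further
```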
2503.18623 | Deepayan Das | Deepayan Das, Davide Talon, Yiming Wang, Massimiliano Mancini, Elisa
Ricci | Training-Free Personalization via Retrieval and Reasoning on
Fingerprints | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision Language Models (VLMs) have led to major improvements in multimodal
reasoning, yet they still struggle to understand user-specific concepts.
Existing personalization methods address this limitation but heavily rely on
training procedures that can be either costly or unpleasant for individual
users. We depart from existing work, and for the first time explore the
training-free setting in the context of personalization. We propose a novel
method, Retrieval and Reasoning for Personalization (R2P), leveraging internal
knowledge of VLMs. First, we leverage VLMs to extract the concept fingerprint,
i.e., key attributes uniquely defining the concept within its semantic class.
When a query arrives, the most similar fingerprints are retrieved and scored
via chain-of-thought-reasoning. To reduce the risk of hallucinations, the
scores are validated through cross-modal verification at the attribute level:
in case of a discrepancy between the scores, R2P refines the concept
association via pairwise multimodal matching, where the retrieved fingerprints
and their images are directly compared with the query. We validate R2P on two
publicly available benchmarks and a newly introduced dataset, Personal Concepts
with Visual Ambiguity (PerVA), for concept identification highlighting
challenges in visual ambiguity. R2P consistently outperforms state-of-the-art
approaches on various downstream tasks across all benchmarks. Code will be
available upon acceptance.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 12:36:24 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Das",
"Deepayan",
""
],
[
"Talon",
"Davide",
""
],
[
"Wang",
"Yiming",
""
],
[
"Mancini",
"Massimiliano",
""
],
[
"Ricci",
"Elisa",
""
]
] | TITLE: Training-Free Personalization via Retrieval and Reasoning on
Fingerprints
ABSTRACT: Vision Language Models (VLMs) have led to major improvements in multimodal
reasoning, yet they still struggle to understand user-specific concepts.
Existing personalization methods address this limitation but heavily rely on
training procedures that can be either costly or unpleasant for individual
users. We depart from existing work, and for the first time explore the
training-free setting in the context of personalization. We propose a novel
method, Retrieval and Reasoning for Personalization (R2P), leveraging internal
knowledge of VLMs. First, we leverage VLMs to extract the concept fingerprint,
i.e., key attributes uniquely defining the concept within its semantic class.
When a query arrives, the most similar fingerprints are retrieved and scored
via chain-of-thought-reasoning. To reduce the risk of hallucinations, the
scores are validated through cross-modal verification at the attribute level:
in case of a discrepancy between the scores, R2P refines the concept
association via pairwise multimodal matching, where the retrieved fingerprints
and their images are directly compared with the query. We validate R2P on two
publicly available benchmarks and a newly introduced dataset, Personal Concepts
with Visual Ambiguity (PerVA), for concept identification highlighting
challenges in visual ambiguity. R2P consistently outperforms state-of-the-art
approaches on various downstream tasks across all benchmarks. Code will be
available upon acceptance.
|
2503.18626 | Junqiao Fan | Junqiao Fan, Yunjiao Zhou, Min Chang Jordan Ren and Jianfei Yang | Generative Dataset Distillation using Min-Max Diffusion Model | The paper is accepted as the ECCV2024 workshop paper and achieved
second place in the generative track of The First Dataset Distillation
Challenge of ECCV2024, https://www.dd-challenge.com/#/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we address the problem of generative dataset distillation that
utilizes generative models to synthesize images. The generator may produce any
number of images under a preserved evaluation time. In this work, we leverage
the popular diffusion model as the generator to compute a surrogate dataset,
boosted by a min-max loss to control the dataset's diversity and
representativeness during training. However, the diffusion model is
time-consuming when generating images, as it requires an iterative generation
process. We observe a critical trade-off between the number of image samples
and the image quality controlled by the diffusion steps and propose Diffusion
Step Reduction to achieve optimal performance. This paper details our
comprehensive method and its performance. Our model achieved $2^{nd}$ place in
the generative track of \href{https://www.dd-challenge.com/#/}{The First
Dataset Distillation Challenge of ECCV2024}, demonstrating its superior
performance.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 12:41:40 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Fan",
"Junqiao",
""
],
[
"Zhou",
"Yunjiao",
""
],
[
"Ren",
"Min Chang Jordan",
""
],
[
"Yang",
"Jianfei",
""
]
] | TITLE: Generative Dataset Distillation using Min-Max Diffusion Model
ABSTRACT: In this paper, we address the problem of generative dataset distillation that
utilizes generative models to synthesize images. The generator may produce any
number of images under a preserved evaluation time. In this work, we leverage
the popular diffusion model as the generator to compute a surrogate dataset,
boosted by a min-max loss to control the dataset's diversity and
representativeness during training. However, the diffusion model is
time-consuming when generating images, as it requires an iterative generation
process. We observe a critical trade-off between the number of image samples
and the image quality controlled by the diffusion steps and propose Diffusion
Step Reduction to achieve optimal performance. This paper details our
comprehensive method and its performance. Our model achieved $2^{nd}$ place in
the generative track of \href{https://www.dd-challenge.com/#/}{The First
Dataset Distillation Challenge of ECCV2024}, demonstrating its superior
performance.
|
2503.18629 | Philipp Spitzer | Arne Grobr\"ugge, Niklas K\"uhl, Gerhard Satzger, Philipp Spitzer | Towards Human-Understandable Multi-Dimensional Concept Discovery | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Concept-based eXplainable AI (C-XAI) aims to overcome the limitations of
traditional saliency maps by converting pixels into human-understandable
concepts that are consistent across an entire dataset. A crucial aspect of
C-XAI is completeness, which measures how well a set of concepts explains a
model's decisions. Among C-XAI methods, Multi-Dimensional Concept Discovery
(MCD) effectively improves completeness by breaking down the CNN latent space
into distinct and interpretable concept subspaces. However, MCD's explanations
can be difficult for humans to understand, raising concerns about their
practical utility. To address this, we propose Human-Understandable
Multi-dimensional Concept Discovery (HU-MCD). HU-MCD uses the Segment Anything
Model for concept identification and implements a CNN-specific input masking
technique to reduce noise introduced by traditional masking methods. These
changes to MCD, paired with the completeness relation, enable HU-MCD to enhance
concept understandability while maintaining explanation faithfulness. Our
experiments, including human subject studies, show that HU-MCD provides more
precise and reliable explanations than existing C-XAI methods. The code is
available at https://github.com/grobruegge/hu-mcd.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 12:45:52 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Grobrügge",
"Arne",
""
],
[
"Kühl",
"Niklas",
""
],
[
"Satzger",
"Gerhard",
""
],
[
"Spitzer",
"Philipp",
""
]
] | TITLE: Towards Human-Understandable Multi-Dimensional Concept Discovery
ABSTRACT: Concept-based eXplainable AI (C-XAI) aims to overcome the limitations of
traditional saliency maps by converting pixels into human-understandable
concepts that are consistent across an entire dataset. A crucial aspect of
C-XAI is completeness, which measures how well a set of concepts explains a
model's decisions. Among C-XAI methods, Multi-Dimensional Concept Discovery
(MCD) effectively improves completeness by breaking down the CNN latent space
into distinct and interpretable concept subspaces. However, MCD's explanations
can be difficult for humans to understand, raising concerns about their
practical utility. To address this, we propose Human-Understandable
Multi-dimensional Concept Discovery (HU-MCD). HU-MCD uses the Segment Anything
Model for concept identification and implements a CNN-specific input masking
technique to reduce noise introduced by traditional masking methods. These
changes to MCD, paired with the completeness relation, enable HU-MCD to enhance
concept understandability while maintaining explanation faithfulness. Our
experiments, including human subject studies, show that HU-MCD provides more
precise and reliable explanations than existing C-XAI methods. The code is
available at https://github.com/grobruegge/hu-mcd.
|
2503.18634 | Sebasti\'an Andr\'es Cajas Ord\'o\~nez | Sebasti\'an A. Cajas Ord\'o\~nez, Jaydeep Samanta, Andr\'es L.
Su\'arez-Cetrulo, and Ricardo Sim\'on Carbajo | Adaptive Machine Learning for Resource-Constrained Environments | 17 pages, 11 figures, accepted at DELTA 2024 (Workshop on Discovering
Drift Phenomena in Evolving Landscapes), co-located with ACM SIGKDD 2024.
This preprint has not undergone peer review. The Version of Record is
available at https://doi.org/10.1007/978-3-031-82346-6_1 | Discovering Drift Phenomena in Evolving Landscapes, Lecture Notes
in Computer Science, LNCS 15013, Springer, 2025, pp. 3-19 | 10.1007/978-3-031-82346-6_1 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Internet of Things is an example domain where data is perpetually
generated in ever-increasing quantities, reflecting the proliferation of
connected devices and the formation of continuous data streams over time.
Consequently, ad-hoc, cost-effective machine learning solutions must adapt to
this evolving data influx. This study tackles the task of offloading in small
gateways, a task exacerbated by their dynamic availability over
time. An approach leveraging CPU utilization metrics using online and continual
machine learning techniques is proposed to predict gateway availability. These
methods are compared to popular machine learning algorithms and a recent
time-series foundation model, Lag-Llama, for fine-tuned and zero-shot setups.
Their performance is benchmarked on a dataset of CPU utilization measurements
over time from an IoT gateway and focuses on model metrics such as prediction
errors, training and inference times, and memory consumption. Our primary
objective is to study new efficient ways to predict CPU performance in IoT
environments. Across various scenarios, our findings highlight that ensemble
and online methods offer promising results for this task in terms of accuracy
while maintaining a low resource footprint.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 12:52:26 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ordóñez",
"Sebastián A. Cajas",
""
],
[
"Samanta",
"Jaydeep",
""
],
[
"Suárez-Cetrulo",
"Andrés L.",
""
],
[
"Carbajo",
"Ricardo Simón",
""
]
] | TITLE: Adaptive Machine Learning for Resource-Constrained Environments
ABSTRACT: The Internet of Things is an example domain where data is perpetually
generated in ever-increasing quantities, reflecting the proliferation of
connected devices and the formation of continuous data streams over time.
Consequently, ad-hoc, cost-effective machine learning solutions must adapt to
this evolving data influx. This study tackles the task of offloading in small
gateways, a task exacerbated by their dynamic availability over
time. An approach leveraging CPU utilization metrics using online and continual
machine learning techniques is proposed to predict gateway availability. These
methods are compared to popular machine learning algorithms and a recent
time-series foundation model, Lag-Llama, for fine-tuned and zero-shot setups.
Their performance is benchmarked on a dataset of CPU utilization measurements
over time from an IoT gateway and focuses on model metrics such as prediction
errors, training and inference times, and memory consumption. Our primary
objective is to study new efficient ways to predict CPU performance in IoT
environments. Across various scenarios, our findings highlight that ensemble
and online methods offer promising results for this task in terms of accuracy
while maintaining a low resource footprint.
|
2503.18635 | Congcong Bian | Hui Li, Congcong Bian, Zeyang Zhang, Xiaoning Song, Xi Li and Xiao-Jun
Wu | OCCO: LVM-guided Infrared and Visible Image Fusion Framework based on
Object-aware and Contextual COntrastive Learning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Image fusion is a crucial technique in the field of computer vision, and its
goal is to generate high-quality fused images and improve the performance of
downstream tasks. However, existing fusion methods struggle to balance these
two factors. Achieving high quality in fused images may result in lower
performance in downstream visual tasks, and vice versa. To address this
drawback, a novel LVM (large vision model)-guided fusion framework with
Object-aware and Contextual COntrastive learning is proposed, termed as OCCO.
The pre-trained LVM is utilized to provide semantic guidance, allowing the
network to focus solely on fusion tasks while emphasizing learning salient
semantic features in the form of contrastive learning. Additionally, a novel
feature interaction fusion network is also designed to resolve information
conflicts in fusion images caused by modality differences. By learning the
distinction between positive samples and negative samples in the latent feature
space (contextual space), the integrity of target information in the fused image is
improved, thereby benefiting downstream performance. Finally, compared with
eight state-of-the-art methods on four datasets, the effectiveness of the
proposed method is validated, and exceptional performance is also demonstrated
on downstream visual tasks.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 12:57:23 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Hui",
""
],
[
"Bian",
"Congcong",
""
],
[
"Zhang",
"Zeyang",
""
],
[
"Song",
"Xiaoning",
""
],
[
"Li",
"Xi",
""
],
[
"Wu",
"Xiao-Jun",
""
]
] | TITLE: OCCO: LVM-guided Infrared and Visible Image Fusion Framework based on
Object-aware and Contextual COntrastive Learning
ABSTRACT: Image fusion is a crucial technique in the field of computer vision, and its
goal is to generate high-quality fused images and improve the performance of
downstream tasks. However, existing fusion methods struggle to balance these
two factors. Achieving high quality in fused images may result in lower
performance in downstream visual tasks, and vice versa. To address this
drawback, a novel LVM (large vision model)-guided fusion framework with
Object-aware and Contextual COntrastive learning is proposed, termed as OCCO.
The pre-trained LVM is utilized to provide semantic guidance, allowing the
network to focus solely on fusion tasks while emphasizing learning salient
semantic features in the form of contrastive learning. Additionally, a novel
feature interaction fusion network is also designed to resolve information
conflicts in fusion images caused by modality differences. By learning the
distinction between positive samples and negative samples in the latent feature
space (contextual space), the integrity of target information in the fused image is
improved, thereby benefiting downstream performance. Finally, compared with
eight state-of-the-art methods on four datasets, the effectiveness of the
proposed method is validated, and exceptional performance is also demonstrated
on downstream visual tasks.
|
2503.18637 | Nina Shvetsova | Nina Shvetsova, Arsha Nagrani, Bernt Schiele, Hilde Kuehne, Christian
Rupprecht | Unbiasing through Textual Descriptions: Mitigating Representation Bias
in Video Benchmarks | To be published at CVPR 2025, project webpage
https://utd-project.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new "Unbiased through Textual Description (UTD)" video benchmark
based on unbiased subsets of existing video classification and retrieval
datasets to enable a more robust assessment of video understanding
capabilities. Namely, we tackle the problem that current video benchmarks may
suffer from different representation biases, e.g., object bias or single-frame
bias, where mere recognition of objects or utilization of only a single frame
is sufficient for correct prediction. We leverage VLMs and LLMs to analyze and
debias benchmarks from such representation biases. Specifically, we generate
frame-wise textual descriptions of videos, filter them for specific information
(e.g. only objects) and leverage them to examine representation biases across
three dimensions: 1) concept bias - determining if a specific concept (e.g.,
objects) alone suffices for prediction; 2) temporal bias - assessing if temporal
information contributes to prediction; and 3) common sense vs. dataset bias -
evaluating whether zero-shot reasoning or dataset correlations contribute to
prediction. We conduct a systematic analysis of 12 popular video classification
and retrieval datasets and create new object-debiased test splits for these
datasets. Moreover, we benchmark 30 state-of-the-art video models on original
and debiased splits and analyze biases in the models. To facilitate the future
development of more robust video understanding benchmarks and models, we
release: "UTD-descriptions", a dataset with our rich structured descriptions
for each dataset, and "UTD-splits", a dataset of object-debiased test splits.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 13:00:25 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Shvetsova",
"Nina",
""
],
[
"Nagrani",
"Arsha",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Kuehne",
"Hilde",
""
],
[
"Rupprecht",
"Christian",
""
]
] | TITLE: Unbiasing through Textual Descriptions: Mitigating Representation Bias
in Video Benchmarks
ABSTRACT: We propose a new "Unbiased through Textual Description (UTD)" video benchmark
based on unbiased subsets of existing video classification and retrieval
datasets to enable a more robust assessment of video understanding
capabilities. Namely, we tackle the problem that current video benchmarks may
suffer from different representation biases, e.g., object bias or single-frame
bias, where mere recognition of objects or utilization of only a single frame
is sufficient for correct prediction. We leverage VLMs and LLMs to analyze and
debias benchmarks from such representation biases. Specifically, we generate
frame-wise textual descriptions of videos, filter them for specific information
(e.g. only objects) and leverage them to examine representation biases across
three dimensions: 1) concept bias - determining if a specific concept (e.g.,
objects) alone suffices for prediction; 2) temporal bias - assessing if temporal
information contributes to prediction; and 3) common sense vs. dataset bias -
evaluating whether zero-shot reasoning or dataset correlations contribute to
prediction. We conduct a systematic analysis of 12 popular video classification
and retrieval datasets and create new object-debiased test splits for these
datasets. Moreover, we benchmark 30 state-of-the-art video models on original
and debiased splits and analyze biases in the models. To facilitate the future
development of more robust video understanding benchmarks and models, we
release: "UTD-descriptions", a dataset with our rich structured descriptions
for each dataset, and "UTD-splits", a dataset of object-debiased test splits.
|
2503.18640 | Jingwei Huang | Haoran Wang, Jingwei Huang, Lu Yang, Tianchen Deng, Gaojing Zhang, and
Mingrui Li | LLGS: Unsupervised Gaussian Splatting for Image Enhancement and
Reconstruction in Pure Dark Environment | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | 3D Gaussian Splatting has shown remarkable capabilities in novel view
rendering tasks and exhibits significant potential for multi-view
optimization. However, the original 3D Gaussian Splatting lacks color
representation for inputs in low-light environments. Simply using enhanced
images as inputs would lead to issues with multi-view consistency, and current
single-view enhancement systems rely on pre-trained data, lacking scene
generalization. These problems limit the application of 3D Gaussian Splatting
in low-light conditions in the field of robotics, including high-fidelity
modeling and feature matching. To address these challenges, we propose an
unsupervised multi-view stereoscopic system based on Gaussian Splatting, called
Low-Light Gaussian Splatting (LLGS). This system aims to enhance images in
low-light environments while reconstructing the scene. Our method introduces a
decomposable Gaussian representation called M-Color, which separately
characterizes color information for targeted enhancement. Furthermore, we
propose an unsupervised optimization method with zero-knowledge priors, using
direction-based enhancement to ensure multi-view consistency. Experiments
conducted on real-world datasets demonstrate that our system outperforms
state-of-the-art methods in both low-light enhancement and 3D Gaussian
Splatting.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 13:05:05 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wang",
"Haoran",
""
],
[
"Huang",
"Jingwei",
""
],
[
"Yang",
"Lu",
""
],
[
"Deng",
"Tianchen",
""
],
[
"Zhang",
"Gaojing",
""
],
[
"Li",
"Mingrui",
""
]
] | TITLE: LLGS: Unsupervised Gaussian Splatting for Image Enhancement and
Reconstruction in Pure Dark Environment
ABSTRACT: 3D Gaussian Splatting has shown remarkable capabilities in novel view
rendering tasks and exhibits significant potential for multi-view
optimization. However, the original 3D Gaussian Splatting lacks color
representation for inputs in low-light environments. Simply using enhanced
images as inputs would lead to issues with multi-view consistency, and current
single-view enhancement systems rely on pre-trained data, lacking scene
generalization. These problems limit the application of 3D Gaussian Splatting
in low-light conditions in the field of robotics, including high-fidelity
modeling and feature matching. To address these challenges, we propose an
unsupervised multi-view stereoscopic system based on Gaussian Splatting, called
Low-Light Gaussian Splatting (LLGS). This system aims to enhance images in
low-light environments while reconstructing the scene. Our method introduces a
decomposable Gaussian representation called M-Color, which separately
characterizes color information for targeted enhancement. Furthermore, we
propose an unsupervised optimization method with zero-knowledge priors, using
direction-based enhancement to ensure multi-view consistency. Experiments
conducted on real-world datasets demonstrate that our system outperforms
state-of-the-art methods in both low-light enhancement and 3D Gaussian
Splatting.
|
2503.18642 | Taejin Jeong | Taejin Jeong, Joohyeok Kim, Jaehoon Joo, Yeonwoo Jung, Hyeonmin Kim,
Seong Jae Hwang | Rethinking Glaucoma Calibration: Voting-Based Binocular and Metadata
Integration | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Glaucoma is an incurable ophthalmic disease that damages the optic nerve,
leads to vision loss, and ranks among the leading causes of blindness
worldwide. Diagnosing glaucoma typically involves fundus photography, optical
coherence tomography (OCT), and visual field testing. However, the high cost of
OCT often leads to reliance on fundus photography and visual field testing,
both of which exhibit inherent inter-observer variability. This stems from
glaucoma being a multifaceted disease that is influenced by various factors. As a
result, glaucoma diagnosis is highly subjective, emphasizing the necessity of
calibration, which aligns predicted probabilities with actual disease
likelihood. Proper calibration is essential to prevent overdiagnosis or
misdiagnosis, which are critical concerns for high-risk diseases. Although AI
has significantly improved diagnostic accuracy, overconfidence in models has
worsened calibration performance. Recent studies have begun focusing on calibration
for glaucoma. Nevertheless, previous studies have not fully considered glaucoma's
systemic nature and the high subjectivity in its diagnostic process. To
overcome these limitations, we propose V-ViT (Voting-based ViT), a novel
framework that enhances calibration by incorporating disease-specific
characteristics. V-ViT integrates binocular data and metadata, reflecting the
multi-faceted nature of glaucoma diagnosis. Additionally, we introduce an MC
dropout-based Voting System to address high subjectivity. Our approach achieves
state-of-the-art performance across all metrics, including accuracy,
demonstrating that our proposed methods are effective in addressing calibration
issues. We validate our method using a custom dataset including binocular data.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 13:09:47 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Jeong",
"Taejin",
""
],
[
"Kim",
"Joohyeok",
""
],
[
"Joo",
"Jaehoon",
""
],
[
"Jung",
"Yeonwoo",
""
],
[
"Kim",
"Hyeonmin",
""
],
[
"Hwang",
"Seong Jae",
""
]
] | TITLE: Rethinking Glaucoma Calibration: Voting-Based Binocular and Metadata
Integration
ABSTRACT: Glaucoma is an incurable ophthalmic disease that damages the optic nerve,
leads to vision loss, and ranks among the leading causes of blindness
worldwide. Diagnosing glaucoma typically involves fundus photography, optical
coherence tomography (OCT), and visual field testing. However, the high cost of
OCT often leads to reliance on fundus photography and visual field testing,
both of which exhibit inherent inter-observer variability. This stems from
glaucoma being a multifaceted disease that is influenced by various factors. As a
result, glaucoma diagnosis is highly subjective, emphasizing the necessity of
calibration, which aligns predicted probabilities with actual disease
likelihood. Proper calibration is essential to prevent overdiagnosis or
misdiagnosis, which are critical concerns for high-risk diseases. Although AI
has significantly improved diagnostic accuracy, overconfidence in models has
worsened calibration performance. Recent studies have begun focusing on calibration
for glaucoma. Nevertheless, previous studies have not fully considered glaucoma's
systemic nature and the high subjectivity in its diagnostic process. To
overcome these limitations, we propose V-ViT (Voting-based ViT), a novel
framework that enhances calibration by incorporating disease-specific
characteristics. V-ViT integrates binocular data and metadata, reflecting the
multi-faceted nature of glaucoma diagnosis. Additionally, we introduce an MC
dropout-based Voting System to address high subjectivity. Our approach achieves
state-of-the-art performance across all metrics, including accuracy,
demonstrating that our proposed methods are effective in addressing calibration
issues. We validate our method using a custom dataset including binocular data.
|
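The abstract above names an MC dropout-based voting system but gives no implementation detail. Below is a generic sketch of how MC-dropout voting is commonly realized in PyTorch (keep dropout layers active at inference, run several stochastic forward passes, then average and vote); it is not the authors' V-ViT code, and the function and variable names are placeholders.

```python
import torch

def mc_dropout_vote(model: torch.nn.Module, x: torch.Tensor, n_passes: int = 20):
    """Generic MC-dropout voting sketch (not the paper's V-ViT implementation).

    Averages softmax outputs over several stochastic forward passes and reports
    a simple disagreement score as an uncertainty proxy.
    """
    model.eval()
    # Re-enable only dropout modules so each forward pass is stochastic.
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_passes)]
        )
    mean_probs = probs.mean(dim=0)               # soft vote / calibrated score
    prediction = mean_probs.argmax(dim=-1)       # hard label
    uncertainty = probs.var(dim=0).mean(dim=-1)  # per-sample disagreement
    return prediction, mean_probs, uncertainty
```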
2503.18671 | Yihan Chen | Yihan Chen, Wenfei Yang, Huan Ren, Shifeng Zhang, Tianzhu Zhang, Feng
Wu | Structure-Aware Correspondence Learning for Relative Pose Estimation | CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relative pose estimation provides a promising way for achieving
object-agnostic pose estimation. Despite the success of existing 3D
correspondence-based methods, the reliance on explicit feature matching suffers
from small overlaps in visible regions and unreliable feature estimation for
invisible regions. Inspired by humans' ability to assemble two object parts
that have small or no overlapping regions by considering object structure, we
propose a novel Structure-Aware Correspondence Learning method for Relative
Pose Estimation, which consists of two key modules. First, a structure-aware
keypoint extraction module is designed to locate a set of keypoints that can
represent the structure of objects with different shapes and appearance, under
the guidance of a keypoint based image reconstruction loss. Second, a
structure-aware correspondence estimation module is designed to model the
intra-image and inter-image relationships between keypoints to extract
structure-aware features for correspondence estimation. By jointly leveraging
these two modules, the proposed method can naturally estimate 3D-3D
correspondences for unseen objects without explicit feature matching for
precise relative pose estimation. Experimental results on the CO3D, Objaverse
and LineMOD datasets demonstrate that the proposed method significantly
outperforms prior methods, i.e., with a 5.7{\deg} reduction in mean angular error
on the CO3D dataset.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 13:43:44 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Yihan",
""
],
[
"Yang",
"Wenfei",
""
],
[
"Ren",
"Huan",
""
],
[
"Zhang",
"Shifeng",
""
],
[
"Zhang",
"Tianzhu",
""
],
[
"Wu",
"Feng",
""
]
] | TITLE: Structure-Aware Correspondence Learning for Relative Pose Estimation
ABSTRACT: Relative pose estimation provides a promising way for achieving
object-agnostic pose estimation. Despite the success of existing 3D
correspondence-based methods, the reliance on explicit feature matching suffers
from small overlaps in visible regions and unreliable feature estimation for
invisible regions. Inspired by humans' ability to assemble two object parts
that have small or no overlapping regions by considering object structure, we
propose a novel Structure-Aware Correspondence Learning method for Relative
Pose Estimation, which consists of two key modules. First, a structure-aware
keypoint extraction module is designed to locate a set of keypoints that can
represent the structure of objects with different shapes and appearance, under
the guidance of a keypoint based image reconstruction loss. Second, a
structure-aware correspondence estimation module is designed to model the
intra-image and inter-image relationships between keypoints to extract
structure-aware features for correspondence estimation. By jointly leveraging
these two modules, the proposed method can naturally estimate 3D-3D
correspondences for unseen objects without explicit feature matching for
precise relative pose estimation. Experimental results on the CO3D, Objaverse
and LineMOD datasets demonstrate that the proposed method significantly
outperforms prior methods, i.e., with a 5.7{\deg} reduction in mean angular error
on the CO3D dataset.
|
2503.18674 | Edoardo De Matteis | Edoardo De Matteis, Matteo Migliarini, Alessio Sampieri, Indro
Spinelli and Fabio Galasso | Human Motion Unlearning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We introduce the task of human motion unlearning to prevent the synthesis of
toxic animations while preserving the general text-to-motion generative
performance. Unlearning toxic motions is challenging as those can be generated
from explicit text prompts and from implicit toxic combinations of safe motions
(e.g., ``kicking" is ``loading and swinging a leg"). We propose the first
motion unlearning benchmark by filtering toxic motions from the large and
recent text-to-motion datasets of HumanML3D and Motion-X. We propose baselines,
by adapting state-of-the-art image unlearning techniques to process
spatio-temporal signals. Finally, we propose a novel motion unlearning model
based on Latent Code Replacement, which we dub LCR. LCR is training-free and
suitable to the discrete latent spaces of state-of-the-art text-to-motion
diffusion models. LCR is simple and consistently outperforms baselines
qualitatively and quantitatively. Project page:
\href{https://www.pinlab.org/hmu}{https://www.pinlab.org/hmu}.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 13:46:27 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"De Matteis",
"Edoardo",
""
],
[
"Migliarini",
"Matteo",
""
],
[
"Sampieri",
"Alessio",
""
],
[
"Spinelli",
"Indro",
""
],
[
"Galasso",
"Fabio",
""
]
] | TITLE: Human Motion Unlearning
ABSTRACT: We introduce the task of human motion unlearning to prevent the synthesis of
toxic animations while preserving the general text-to-motion generative
performance. Unlearning toxic motions is challenging as those can be generated
from explicit text prompts and from implicit toxic combinations of safe motions
(e.g., ``kicking" is ``loading and swinging a leg"). We propose the first
motion unlearning benchmark by filtering toxic motions from the large and
recent text-to-motion datasets of HumanML3D and Motion-X. We propose baselines,
by adapting state-of-the-art image unlearning techniques to process
spatio-temporal signals. Finally, we propose a novel motion unlearning model
based on Latent Code Replacement, which we dub LCR. LCR is training-free and
suitable to the discrete latent spaces of state-of-the-art text-to-motion
diffusion models. LCR is simple and consistently outperforms baselines
qualitatively and quantitatively. Project page:
\href{https://www.pinlab.org/hmu}{https://www.pinlab.org/hmu}.
|
2503.18688 | Yinan Zhang | Yinan Zhang, Huiqi Hu, Xuan Zhou | SynchroStore: A Cost-Based Fine-Grained Incremental Compaction for
Hybrid Workloads | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study proposes a novel storage engine, SynchroStore, designed to address
the inefficiency of update operations in columnar storage systems based on
Log-Structured Merge Trees (LSM-Trees) under hybrid workload scenarios. While
columnar storage formats demonstrate significant query performance advantages
when handling large-scale datasets, traditional columnar storage systems face
challenges such as high update complexity and poor real-time performance in
data-intensive applications. SynchroStore introduces an incremental row storage
mechanism and a fine-grained row-to-column transformation and compaction
strategy, effectively balancing data update efficiency and query performance.
The storage system employs an in-memory row storage structure to support
efficient update operations, and the data is converted to a columnar format
after freezing to support high-performance read operations. The core
innovations of SynchroStore are reflected in the following aspects: (1) the
organic combination of incremental row storage and columnar storage; (2) a
fine-grained row-to-column transformation and compaction mechanism; (3) a
cost-based scheduling strategy. These innovative features allow SynchroStore to
leverage background computational resources for row-to-column transformation
and compaction operations, while ensuring query performance is unaffected, thus
effectively solving the update performance bottleneck of columnar storage under
hybrid workloads. Experimental evaluation results show that, compared to
existing columnar storage systems like DuckDB, SynchroStore exhibits
significant advantages in update performance under hybrid workloads.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 13:57:43 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Yinan",
""
],
[
"Hu",
"Huiqi",
""
],
[
"Zhou",
"Xuan",
""
]
] | TITLE: SynchroStore: A Cost-Based Fine-Grained Incremental Compaction for
Hybrid Workloads
ABSTRACT: This study proposes a novel storage engine, SynchroStore, designed to address
the inefficiency of update operations in columnar storage systems based on
Log-Structured Merge Trees (LSM-Trees) under hybrid workload scenarios. While
columnar storage formats demonstrate significant query performance advantages
when handling large-scale datasets, traditional columnar storage systems face
challenges such as high update complexity and poor real-time performance in
data-intensive applications. SynchroStore introduces an incremental row storage
mechanism and a fine-grained row-to-column transformation and compaction
strategy, effectively balancing data update efficiency and query performance.
The storage system employs an in-memory row storage structure to support
efficient update operations, and the data is converted to a columnar format
after freezing to support high-performance read operations. The core
innovations of SynchroStore are reflected in the following aspects: (1) the
organic combination of incremental row storage and columnar storage; (2) a
fine-grained row-to-column transformation and compaction mechanism; (3) a
cost-based scheduling strategy. These innovative features allow SynchroStore to
leverage background computational resources for row-to-column transformation
and compaction operations, while ensuring query performance is unaffected, thus
effectively solving the update performance bottleneck of columnar storage under
hybrid workloads. Experimental evaluation results show that, compared to
existing columnar storage systems like DuckDB, SynchroStore exhibits
significant advantages in update performance under hybrid workloads.
|
2503.18703 | Guanglu Dong | Guanglu Dong, Tianheng Zheng, Yuanzhouhan Cao, Linbo Qing, Chao Ren | Channel Consistency Prior and Self-Reconstruction Strategy Based
Unsupervised Image Deraining | Accepted to CVPR2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recently, deep image deraining models based on paired datasets have made a
series of remarkable progress. However, they cannot be well applied in
real-world applications due to the difficulty of obtaining real paired datasets
and the poor generalization performance. In this paper, we propose a novel
Channel Consistency Prior and Self-Reconstruction Strategy Based Unsupervised
Image Deraining framework, CSUD, to tackle the aforementioned challenges.
During training with unpaired data, CSUD is capable of generating high-quality
pseudo clean and rainy image pairs which are used to enhance the performance of
deraining network. Specifically, to preserve more image background details
while transferring rain streaks from rainy images to the unpaired clean images,
we propose a novel Channel Consistency Loss (CCLoss) by introducing the Channel
Consistency Prior (CCP) of rain streaks into training process, thereby ensuring
that the generated pseudo rainy images closely resemble the real ones.
Furthermore, we propose a novel Self-Reconstruction (SR) strategy to alleviate
the redundant information transfer problem of the generator, further improving
the deraining performance and the generalization capability of our method.
Extensive experiments on multiple synthetic and real-world datasets demonstrate
that the deraining performance of CSUD surpasses other state-of-the-art
unsupervised methods and CSUD exhibits superior generalization capability.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 14:15:48 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Dong",
"Guanglu",
""
],
[
"Zheng",
"Tianheng",
""
],
[
"Cao",
"Yuanzhouhan",
""
],
[
"Qing",
"Linbo",
""
],
[
"Ren",
"Chao",
""
]
] | TITLE: Channel Consistency Prior and Self-Reconstruction Strategy Based
Unsupervised Image Deraining
ABSTRACT: Recently, deep image deraining models based on paired datasets have made a
series of remarkable progress. However, they cannot be well applied in
real-world applications due to the difficulty of obtaining real paired datasets
and the poor generalization performance. In this paper, we propose a novel
Channel Consistency Prior and Self-Reconstruction Strategy Based Unsupervised
Image Deraining framework, CSUD, to tackle the aforementioned challenges.
During training with unpaired data, CSUD is capable of generating high-quality
pseudo clean and rainy image pairs which are used to enhance the performance of
deraining network. Specifically, to preserve more image background details
while transferring rain streaks from rainy images to the unpaired clean images,
we propose a novel Channel Consistency Loss (CCLoss) by introducing the Channel
Consistency Prior (CCP) of rain streaks into the training process, thereby ensuring
that the generated pseudo rainy images closely resemble the real ones.
Furthermore, we propose a novel Self-Reconstruction (SR) strategy to alleviate
the redundant information transfer problem of the generator, further improving
the deraining performance and the generalization capability of our method.
Extensive experiments on multiple synthetic and real-world datasets demonstrate
that the deraining performance of CSUD surpasses other state-of-the-art
unsupervised methods and CSUD exhibits superior generalization capability.
|
2503.18705 | Min H. Kim | Inseung Hwang, Kiseok Choi, Hyunho Ha, Min H. Kim | Benchmarking Burst Super-Resolution for Polarization Images: Noise
Dataset and Analysis | null | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Snapshot polarization imaging calculates polarization states from linearly
polarized subimages. To achieve this, a polarization camera employs a double
Bayer-patterned sensor to capture both color and polarization. It demonstrates
low light efficiency and low spatial resolution, resulting in increased noise
and compromised polarization measurements. Although burst super-resolution
effectively reduces noise and enhances spatial resolution, applying it to
polarization imaging poses challenges due to the lack of tailored datasets and
reliable ground truth noise statistics. To address these issues, we introduce
PolarNS and PolarBurstSR, two innovative datasets developed specifically for
polarization imaging. PolarNS provides characterization of polarization noise
statistics, facilitating thorough analysis, while PolarBurstSR functions as a
benchmark for burst super-resolution in polarization images. These datasets,
collected under various real-world conditions, enable comprehensive evaluation.
Additionally, we present a model for analyzing polarization noise to quantify
noise propagation, tested on a large dataset captured in a darkroom
environment. As part of our application, we compare the latest burst
super-resolution models, highlighting the advantages of training tailored to
polarization compared to RGB-based methods. This work establishes a benchmark
for polarization burst super-resolution and offers critical insights into noise
propagation, thereby enhancing polarization image reconstruction.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 14:17:18 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hwang",
"Inseung",
""
],
[
"Choi",
"Kiseok",
""
],
[
"Ha",
"Hyunho",
""
],
[
"Kim",
"Min H.",
""
]
] | TITLE: Benchmarking Burst Super-Resolution for Polarization Images: Noise
Dataset and Analysis
ABSTRACT: Snapshot polarization imaging calculates polarization states from linearly
polarized subimages. To achieve this, a polarization camera employs a double
Bayer-patterned sensor to capture both color and polarization. It demonstrates
low light efficiency and low spatial resolution, resulting in increased noise
and compromised polarization measurements. Although burst super-resolution
effectively reduces noise and enhances spatial resolution, applying it to
polarization imaging poses challenges due to the lack of tailored datasets and
reliable ground truth noise statistics. To address these issues, we introduce
PolarNS and PolarBurstSR, two innovative datasets developed specifically for
polarization imaging. PolarNS provides characterization of polarization noise
statistics, facilitating thorough analysis, while PolarBurstSR functions as a
benchmark for burst super-resolution in polarization images. These datasets,
collected under various real-world conditions, enable comprehensive evaluation.
Additionally, we present a model for analyzing polarization noise to quantify
noise propagation, tested on a large dataset captured in a darkroom
environment. As part of our application, we compare the latest burst
super-resolution models, highlighting the advantages of training tailored to
polarization compared to RGB-based methods. This work establishes a benchmark
for polarization burst super-resolution and offers critical insights into noise
propagation, thereby enhancing polarization image reconstruction.
|
2503.18709 | Boqi Chen Mr. | Boqi Chen, C\'edric Vincent-Cuaz, Lydia A. Schoenpflug, Manuel
Madeira, Lisa Fournier, Vaishnavi Subramanian, Sonali Andani, Samuel
Ruiperez-Campillo, Julia E. Vogt, Rapha\"elle Luisier, Dorina Thanou, Viktor
H. Koelzer, Pascal Frossard, Gabriele Campanella, Gunnar R\"atsch | Revisiting Automatic Data Curation for Vision Foundation Models in
Digital Pathology | MICCAI 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision foundation models (FMs) are accelerating the development of digital
pathology algorithms and transforming biomedical research. These models learn,
in a self-supervised manner, to represent histological features in highly
heterogeneous tiles extracted from whole-slide images (WSIs) of real-world
patient samples. The performance of these FMs is significantly influenced by
the size, diversity, and balance of the pre-training data. However, data
selection has been primarily guided by expert knowledge at the WSI level,
focusing on factors such as disease classification and tissue types, while
largely overlooking the granular details available at the tile level. In this
paper, we investigate the potential of unsupervised automatic data curation at
the tile-level, taking into account 350 million tiles. Specifically, we apply
hierarchical clustering trees to pre-extracted tile embeddings, allowing us to
sample balanced datasets uniformly across the embedding space of the pretrained
FM. We further identify that these datasets are subject to a trade-off between size
and balance, potentially compromising the quality of representations learned by
FMs, and propose tailored batch sampling strategies to mitigate this effect. We
demonstrate the effectiveness of our method through improved performance on a
diverse range of clinically relevant downstream tasks.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 14:23:48 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Boqi",
""
],
[
"Vincent-Cuaz",
"Cédric",
""
],
[
"Schoenpflug",
"Lydia A.",
""
],
[
"Madeira",
"Manuel",
""
],
[
"Fournier",
"Lisa",
""
],
[
"Subramanian",
"Vaishnavi",
""
],
[
"Andani",
"Sonali",
""
],
[
"Ruiperez-Campillo",
"Samuel",
""
],
[
"Vogt",
"Julia E.",
""
],
[
"Luisier",
"Raphaëlle",
""
],
[
"Thanou",
"Dorina",
""
],
[
"Koelzer",
"Viktor H.",
""
],
[
"Frossard",
"Pascal",
""
],
[
"Campanella",
"Gabriele",
""
],
[
"Rätsch",
"Gunnar",
""
]
] | TITLE: Revisiting Automatic Data Curation for Vision Foundation Models in
Digital Pathology
ABSTRACT: Vision foundation models (FMs) are accelerating the development of digital
pathology algorithms and transforming biomedical research. These models learn,
in a self-supervised manner, to represent histological features in highly
heterogeneous tiles extracted from whole-slide images (WSIs) of real-world
patient samples. The performance of these FMs is significantly influenced by
the size, diversity, and balance of the pre-training data. However, data
selection has been primarily guided by expert knowledge at the WSI level,
focusing on factors such as disease classification and tissue types, while
largely overlooking the granular details available at the tile level. In this
paper, we investigate the potential of unsupervised automatic data curation at
the tile-level, taking into account 350 million tiles. Specifically, we apply
hierarchical clustering trees to pre-extracted tile embeddings, allowing us to
sample balanced datasets uniformly across the embedding space of the pretrained
FM. We further identify that these datasets are subject to a trade-off between size
and balance, potentially compromising the quality of representations learned by
FMs, and propose tailored batch sampling strategies to mitigate this effect. We
demonstrate the effectiveness of our method through improved performance on a
diverse range of clinically relevant downstream tasks.
|
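The curation step described in the abstract above (hierarchical clustering of pre-extracted tile embeddings, then uniform sampling across the embedding space) can be illustrated with a small generic sketch. The clustering backend, cluster count, and per-cluster budget here are stand-ins chosen for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_balanced_sample(embeddings: np.ndarray, n_clusters: int = 20,
                            per_cluster: int = 50, seed: int = 0) -> np.ndarray:
    """Illustrative sketch: cluster tile embeddings hierarchically, then draw an
    (approximately) equal number of tiles from each cluster for a balanced subset."""
    rng = np.random.default_rng(seed)
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(embeddings)
    picked = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        take = min(per_cluster, idx.size)  # small clusters contribute all they have
        picked.append(rng.choice(idx, size=take, replace=False))
    return np.concatenate(picked)

# Toy example with random vectors standing in for pre-extracted tile embeddings.
emb = np.random.default_rng(1).normal(size=(2000, 64)).astype(np.float32)
subset_idx = cluster_balanced_sample(emb)
print(subset_idx.shape)  # up to (1000,) here; a far smaller fraction of the corpus in practice
```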
2503.18711 | Michelle Jou | Thomas Sugg, Kyle O'Brien, Lekh Poudel, Alex Dumouchelle, Michelle
Jou, Marc Bosch, Deva Ramanan, Srinivasa Narasimhan, Shubham Tulsiani | Accenture-NVS1: A Novel View Synthesis Dataset | 6 pages, 7 figures | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces ACC-NVS1, a specialized dataset designed for research
on Novel View Synthesis specifically for airborne and ground imagery. Data for
ACC-NVS1 was collected in Austin, TX and Pittsburgh, PA in 2023 and 2024. The
collection encompasses six diverse real-world scenes captured from both
airborne and ground cameras, resulting in a total of 148,000 images. ACC-NVS1
addresses challenges such as varying altitudes and transient objects. This
dataset is intended to supplement existing datasets, providing additional
resources for comprehensive research, rather than serving as a benchmark.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 14:24:08 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Sugg",
"Thomas",
""
],
[
"O'Brien",
"Kyle",
""
],
[
"Poudel",
"Lekh",
""
],
[
"Dumouchelle",
"Alex",
""
],
[
"Jou",
"Michelle",
""
],
[
"Bosch",
"Marc",
""
],
[
"Ramanan",
"Deva",
""
],
[
"Narasimhan",
"Srinivasa",
""
],
[
"Tulsiani",
"Shubham",
""
]
] | TITLE: Accenture-NVS1: A Novel View Synthesis Dataset
ABSTRACT: This paper introduces ACC-NVS1, a specialized dataset designed for research
on Novel View Synthesis specifically for airborne and ground imagery. Data for
ACC-NVS1 was collected in Austin, TX and Pittsburgh, PA in 2023 and 2024. The
collection encompasses six diverse real-world scenes captured from both
airborne and ground cameras, resulting in a total of 148,000 images. ACC-NVS1
addresses challenges such as varying altitudes and transient objects. This
dataset is intended to supplement existing datasets, providing additional
resources for comprehensive research, rather than serving as a benchmark.
|
2503.18712 | Mackenzie Mathis | Shaokai Ye, Haozhe Qi, Alexander Mathis, Mackenzie W. Mathis | LLaVAction: evaluating and training multi-modal large language models
for action recognition | https://github.com/AdaptiveMotorControlLab/LLaVAction | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Understanding human behavior requires measuring behavioral actions. Due to
its complexity, behavior is best mapped onto a rich, semantic structure such as
language. The recent development of multi-modal large language models (MLLMs)
is a promising candidate for a wide range of action understanding tasks. In
this work, we focus on evaluating and then improving MLLMs to perform action
recognition. We reformulate EPIC-KITCHENS-100, one of the largest and most
challenging egocentric action datasets, to the form of video multiple question
answering (EPIC-KITCHENS-100-MQA). We show that when we sample difficult
incorrect answers as distractors, leading MLLMs struggle to recognize the
correct actions. We propose a series of methods that greatly improve the MLLMs'
ability to perform action recognition, achieving state-of-the-art on both the
EPIC-KITCHENS-100 validation set, as well as outperforming GPT-4o by 21 points
in accuracy on EPIC-KITCHENS-100-MQA. Lastly, we show improvements on other
action-related video benchmarks such as EgoSchema, PerceptionTest,
LongVideoBench, VideoMME and MVBench, suggesting that MLLMs are a promising
path forward for complex action tasks. Code and models are available at:
https://github.com/AdaptiveMotorControlLab/LLaVAction.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 14:24:17 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ye",
"Shaokai",
""
],
[
"Qi",
"Haozhe",
""
],
[
"Mathis",
"Alexander",
""
],
[
"Mathis",
"Mackenzie W.",
""
]
] | TITLE: LLaVAction: evaluating and training multi-modal large language models
for action recognition
ABSTRACT: Understanding human behavior requires measuring behavioral actions. Due to
its complexity, behavior is best mapped onto a rich, semantic structure such as
language. The recent development of multi-modal large language models (MLLMs)
is a promising candidate for a wide range of action understanding tasks. In
this work, we focus on evaluating and then improving MLLMs to perform action
recognition. We reformulate EPIC-KITCHENS-100, one of the largest and most
challenging egocentric action datasets, to the form of video multiple question
answering (EPIC-KITCHENS-100-MQA). We show that when we sample difficult
incorrect answers as distractors, leading MLLMs struggle to recognize the
correct actions. We propose a series of methods that greatly improve the MLLMs'
ability to perform action recognition, achieving state-of-the-art on both the
EPIC-KITCHENS-100 validation set, as well as outperforming GPT-4o by 21 points
in accuracy on EPIC-KITCHENS-100-MQA. Lastly, we show improvements on other
action-related video benchmarks such as EgoSchema, PerceptionTest,
LongVideoBench, VideoMME and MVBench, suggesting that MLLMs are a promising
path forward for complex action tasks. Code and models are available at:
https://github.com/AdaptiveMotorControlLab/LLaVAction.
|
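The EPIC-KITCHENS-100-MQA reformulation described above hinges on sampling difficult incorrect answers as distractors. The abstract does not specify how difficulty is scored, so the sketch below assumes cosine similarity between pre-computed text embeddings of action labels; the labels and embeddings are hypothetical.

```python
import numpy as np

def build_mqa_item(correct: str, candidates: list[str], emb: dict[str, np.ndarray],
                   n_distractors: int = 4, seed: int = 0) -> dict:
    """Build one multiple-choice question whose wrong options are the actions
    most similar to the correct one, rather than random picks."""
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    pool = [c for c in candidates if c != correct]
    pool.sort(key=lambda c: cos(emb[c], emb[correct]), reverse=True)
    options = [correct] + pool[:n_distractors]
    np.random.default_rng(seed).shuffle(options)
    return {"options": options, "answer": options.index(correct)}

# Toy example with made-up action labels and random embeddings.
actions = ["cut onion", "slice onion", "chop carrot", "wash pan", "open fridge", "peel onion"]
embs = {a: np.random.default_rng(i).standard_normal(32) for i, a in enumerate(actions)}
print(build_mqa_item("cut onion", actions, embs, n_distractors=3))
```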
2503.18719 | Cong Liu | Cong Liu, Liang Hou, Mingwu Zheng, Xin Tao, Pengfei Wan, Di Zhang, Kun
Gai | Boosting Resolution Generalization of Diffusion Transformers with
Randomized Positional Encodings | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Resolution generalization in image generation tasks enables the production of
higher-resolution images with lower training resolution overhead. However, a
significant challenge in resolution generalization, particularly in the widely
used Diffusion Transformers, lies in the mismatch between the positional
encodings encountered during testing and those used during training. While
existing methods have employed techniques such as interpolation, extrapolation,
or their combinations, none have fully resolved this issue. In this paper, we
propose a novel two-dimensional randomized positional encodings (RPE-2D)
framework that focuses on learning positional order of image patches instead of
the specific distances between them, enabling seamless high- and low-resolution
image generation without requiring high- and low-resolution image training.
Specifically, RPE-2D independently selects positions over a broader range along
both the horizontal and vertical axes, ensuring that all position encodings are
trained during the inference phase, thus improving resolution generalization.
Additionally, we propose a random data augmentation technique to enhance the
modeling of position order. To address the issue of image cropping caused by
the augmentation, we introduce corresponding micro-conditioning to enable the
model to perceive the specific cropping patterns. On the ImageNet dataset, our
proposed RPE-2D achieves state-of-the-art resolution generalization
performance, outperforming existing competitive methods when trained at a
resolution of $256 \times 256$ and inferred at $384 \times 384$ and $512 \times
512$, as well as when scaling from $512 \times 512$ to $768 \times 768$ and
$1024 \times 1024$. It also exhibits outstanding capabilities in
low-resolution image generation, multi-stage training acceleration and
multi-resolution inheritance.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 14:30:38 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Cong",
""
],
[
"Hou",
"Liang",
""
],
[
"Zheng",
"Mingwu",
""
],
[
"Tao",
"Xin",
""
],
[
"Wan",
"Pengfei",
""
],
[
"Zhang",
"Di",
""
],
[
"Gai",
"Kun",
""
]
] | TITLE: Boosting Resolution Generalization of Diffusion Transformers with
Randomized Positional Encodings
ABSTRACT: Resolution generalization in image generation tasks enables the production of
higher-resolution images with lower training resolution overhead. However, a
significant challenge in resolution generalization, particularly in the widely
used Diffusion Transformers, lies in the mismatch between the positional
encodings encountered during testing and those used during training. While
existing methods have employed techniques such as interpolation, extrapolation,
or their combinations, none have fully resolved this issue. In this paper, we
propose a novel two-dimensional randomized positional encodings (RPE-2D)
framework that focuses on learning positional order of image patches instead of
the specific distances between them, enabling seamless high- and low-resolution
image generation without requiring high- and low-resolution image training.
Specifically, RPE-2D independently selects positions over a broader range along
both the horizontal and vertical axes, ensuring that all position encodings are
trained during the inference phase, thus improving resolution generalization.
Additionally, we propose a random data augmentation technique to enhance the
modeling of position order. To address the issue of image cropping caused by
the augmentation, we introduce corresponding micro-conditioning to enable the
model to perceive the specific cropping patterns. On the ImageNet dataset, our
proposed RPE-2D achieves state-of-the-art resolution generalization
performance, outperforming existing competitive methods when trained at a
resolution of $256 \times 256$ and inferred at $384 \times 384$ and $512 \times
512$, as well as when scaling from $512 \times 512$ to $768 \times 768$ and
$1024 \times 1024$. It also exhibits outstanding capabilities in
low-resolution image generation, multi-stage training acceleration and
multi-resolution inheritance.
|
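RPE-2D, per the record above, trains on position indices drawn independently over a broader range along each axis so the model learns patch order rather than fixed distances. A rough sketch of such an index sampler follows; the inference-time choice of evenly spread indices and the `max_len` parameter are assumptions, and the lookup into a learnable embedding table is omitted.

```python
import torch

def sample_grid_positions(grid_h: int, grid_w: int, max_len: int,
                          training: bool = True) -> tuple[torch.Tensor, torch.Tensor]:
    """Row/column position indices for a grid of image patches.

    During training, indices are drawn per axis from a range larger than the
    grid and sorted, so only their relative order is informative.
    """
    if training:
        rows = torch.sort(torch.randperm(max_len)[:grid_h]).values
        cols = torch.sort(torch.randperm(max_len)[:grid_w]).values
    else:
        # Deterministic, evenly spread indices covering the same trained range.
        rows = torch.linspace(0, max_len - 1, grid_h).long()
        cols = torch.linspace(0, max_len - 1, grid_w).long()
    return rows, cols

# 16x16 patch grid at train time, 24x24 at test time, same index range.
print(sample_grid_positions(16, 16, max_len=64)[0])
print(sample_grid_positions(24, 24, max_len=64, training=False)[0])
```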
2503.18730 | Hongkuan Zhou | Hongkuan Zhou, Stefan Schmid, Yicong Li, Lavdim Halilaj, Xiangtong
Yao, Wei cao | Predicting the Road Ahead: A Knowledge Graph based Foundation Model for
Scene Understanding in Autonomous Driving | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The autonomous driving field has seen remarkable advancements in various
topics, such as object recognition, trajectory prediction, and motion planning.
However, current approaches face limitations in effectively comprehending the
complex evolutions of driving scenes over time. This paper proposes FM4SU, a
novel methodology for training a symbolic foundation model (FM) for scene
understanding in autonomous driving. It leverages knowledge graphs (KGs) to
capture sensory observation along with domain knowledge such as road topology,
traffic rules, or complex interactions between traffic participants. A bird's
eye view (BEV) symbolic representation is extracted from the KG for each
driving scene, including the spatio-temporal information among the objects
across the scenes. The BEV representation is serialized into a sequence of
tokens and given to pre-trained language models (PLMs) for learning an inherent
understanding of the co-occurrence among driving scene elements and generating
predictions on the next scenes. We conducted a number of experiments using the
nuScenes dataset and KG in various scenarios. The results demonstrate that
fine-tuned models achieve significantly higher accuracy in all tasks. The
fine-tuned T5 model achieved a next scene prediction accuracy of 86.7%. This
paper concludes that FM4SU offers a promising foundation for developing more
comprehensive models for scene understanding in autonomous driving.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 14:38:25 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhou",
"Hongkuan",
""
],
[
"Schmid",
"Stefan",
""
],
[
"Li",
"Yicong",
""
],
[
"Halilaj",
"Lavdim",
""
],
[
"Yao",
"Xiangtong",
""
],
[
"cao",
"Wei",
""
]
] | TITLE: Predicting the Road Ahead: A Knowledge Graph based Foundation Model for
Scene Understanding in Autonomous Driving
ABSTRACT: The autonomous driving field has seen remarkable advancements in various
topics, such as object recognition, trajectory prediction, and motion planning.
However, current approaches face limitations in effectively comprehending the
complex evolutions of driving scenes over time. This paper proposes FM4SU, a
novel methodology for training a symbolic foundation model (FM) for scene
understanding in autonomous driving. It leverages knowledge graphs (KGs) to
capture sensory observation along with domain knowledge such as road topology,
traffic rules, or complex interactions between traffic participants. A bird's
eye view (BEV) symbolic representation is extracted from the KG for each
driving scene, including the spatio-temporal information among the objects
across the scenes. The BEV representation is serialized into a sequence of
tokens and given to pre-trained language models (PLMs) for learning an inherent
understanding of the co-occurrence among driving scene elements and generating
predictions on the next scenes. We conducted a number of experiments using the
nuScenes dataset and KG in various scenarios. The results demonstrate that
fine-tuned models achieve significantly higher accuracy in all tasks. The
fine-tuned T5 model achieved a next scene prediction accuracy of 86.7%. This
paper concludes that FM4SU offers a promising foundation for developing more
comprehensive models for scene understanding in autonomous driving.
|
2503.18738 | Shaoting Zhu | Chengbo Yuan, Suraj Joshi, Shaoting Zhu, Hang Su, Hang Zhao, Yang Gao | RoboEngine: Plug-and-Play Robot Data Augmentation with Semantic Robot
Segmentation and Background Generation | Project Page: https://roboengine.github.io/ | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Visual augmentation has become a crucial technique for enhancing the visual
robustness of imitation learning. However, existing methods are often limited
by prerequisites such as camera calibration or the need for controlled
environments (e.g., green screen setups). In this work, we introduce
RoboEngine, the first plug-and-play visual robot data augmentation toolkit. For
the first time, users can effortlessly generate physics- and task-aware robot
scenes with just a few lines of code. To achieve this, we present a novel robot
scene segmentation dataset, a generalizable high-quality robot segmentation
model, and a fine-tuned background generation model, which together form the
core components of the out-of-the-box toolkit. Using RoboEngine, we demonstrate
the ability to generalize robot manipulation tasks across six entirely new
scenes, based solely on demonstrations collected from a single scene, achieving
a more than 200% performance improvement compared to the no-augmentation
baseline. All datasets, model weights, and the toolkit will be publicly
released.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 14:46:14 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yuan",
"Chengbo",
""
],
[
"Joshi",
"Suraj",
""
],
[
"Zhu",
"Shaoting",
""
],
[
"Su",
"Hang",
""
],
[
"Zhao",
"Hang",
""
],
[
"Gao",
"Yang",
""
]
] | TITLE: RoboEngine: Plug-and-Play Robot Data Augmentation with Semantic Robot
Segmentation and Background Generation
ABSTRACT: Visual augmentation has become a crucial technique for enhancing the visual
robustness of imitation learning. However, existing methods are often limited
by prerequisites such as camera calibration or the need for controlled
environments (e.g., green screen setups). In this work, we introduce
RoboEngine, the first plug-and-play visual robot data augmentation toolkit. For
the first time, users can effortlessly generate physics- and task-aware robot
scenes with just a few lines of code. To achieve this, we present a novel robot
scene segmentation dataset, a generalizable high-quality robot segmentation
model, and a fine-tuned background generation model, which together form the
core components of the out-of-the-box toolkit. Using RoboEngine, we demonstrate
the ability to generalize robot manipulation tasks across six entirely new
scenes, based solely on demonstrations collected from a single scene, achieving
a more than 200% performance improvement compared to the no-augmentation
baseline. All datasets, model weights, and the toolkit will be publicly
released.
|
2503.18742 | Jiaming Zhang | Sebastian Tewes, Yufan Chen, Omar Moured, Jiaming Zhang, Rainer
Stiefelhagen | SFDLA: Source-Free Document Layout Analysis | The benchmark, models, and code will be publicly available at
https://github.com/s3setewe/sfdla-DLAdapter | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Document Layout Analysis (DLA) is a fundamental task in document
understanding. However, existing DLA and adaptation methods often require
access to large-scale source data and target labels. These requirements severely
limit their real-world applicability, particularly in privacy-sensitive and
resource-constrained domains, such as financial statements, medical records,
and proprietary business documents. According to our observation, directly
transferring source-domain fine-tuned models on target domains often results in
a significant performance drop (Avg. -32.64%). In this work, we introduce
Source-Free Document Layout Analysis (SFDLA), aiming to adapt a pre-trained
source DLA model to an unlabeled target domain, without access to any source
data. To address this challenge, we establish the first SFDLA benchmark,
covering three major DLA datasets for geometric- and content-aware adaptation.
Furthermore, we propose Document Layout Analysis Adapter (DLAdapter), a novel
framework that is designed to improve source-free adaptation across document
domains. Our method achieves a +4.21% improvement over the source-only baseline
and a +2.26% gain over existing source-free methods from PubLayNet to
DocLayNet. We believe this work will inspire the DLA community to further
investigate source-free document understanding. To support future research of
the community, the benchmark, models, and code will be publicly available at
https://github.com/s3setewe/sfdla-DLAdapter.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 14:50:28 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Tewes",
"Sebastian",
""
],
[
"Chen",
"Yufan",
""
],
[
"Moured",
"Omar",
""
],
[
"Zhang",
"Jiaming",
""
],
[
"Stiefelhagen",
"Rainer",
""
]
] | TITLE: SFDLA: Source-Free Document Layout Analysis
ABSTRACT: Document Layout Analysis (DLA) is a fundamental task in document
understanding. However, existing DLA and adaptation methods often require
access to large-scale source data and target labels. These requirements severely
limit their real-world applicability, particularly in privacy-sensitive and
resource-constrained domains, such as financial statements, medical records,
and proprietary business documents. According to our observation, directly
transferring source-domain fine-tuned models on target domains often results in
a significant performance drop (Avg. -32.64%). In this work, we introduce
Source-Free Document Layout Analysis (SFDLA), aiming to adapt a pre-trained
source DLA model to an unlabeled target domain, without access to any source
data. To address this challenge, we establish the first SFDLA benchmark,
covering three major DLA datasets for geometric- and content-aware adaptation.
Furthermore, we propose Document Layout Analysis Adapter (DLAdapter), a novel
framework that is designed to improve source-free adaptation across document
domains. Our method achieves a +4.21% improvement over the source-only baseline
and a +2.26% gain over existing source-free methods from PubLayNet to
DocLayNet. We believe this work will inspire the DLA community to further
investigate source-free document understanding. To support future research of
the community, the benchmark, models, and code will be publicly available at
https://github.com/s3setewe/sfdla-DLAdapter.
|
2503.18746 | Yifei Zhang | Yifei Zhang, Chang Liu, Jin Wei, Xiaomeng Yang, Yu Zhou, Can Ma,
Xiangyang Ji | Linguistics-aware Masked Image Modeling for Self-supervised Scene Text
Recognition | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text images are unique in their dual nature, encompassing both visual and
linguistic information. The visual component encompasses structural and
appearance-based features, while the linguistic dimension incorporates
contextual and semantic elements. In scenarios with degraded visual quality,
linguistic patterns serve as crucial supplements for comprehension,
highlighting the necessity of integrating both aspects for robust scene text
recognition (STR). Contemporary STR approaches often use language models or
semantic reasoning modules to capture linguistic features, typically requiring
large-scale annotated datasets. Self-supervised learning, which lacks
annotations, presents challenges in disentangling linguistic features related
to the global context. Typically, sequence contrastive learning emphasizes the
alignment of local features, while masked image modeling (MIM) tends to exploit
local structures to reconstruct visual patterns, resulting in limited
linguistic knowledge. In this paper, we propose a Linguistics-aware Masked
Image Modeling (LMIM) approach, which channels the linguistic information into
the decoding process of MIM through a separate branch. Specifically, we design
a linguistics alignment module to extract vision-independent features as
linguistic guidance using inputs with different visual appearances. As features
extend beyond mere visual structures, LMIM must consider the global context to
achieve reconstruction. Extensive experiments on various benchmarks
quantitatively demonstrate our state-of-the-art performance, and attention
visualizations qualitatively show the simultaneous capture of both visual and
linguistic information.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 14:53:35 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Yifei",
""
],
[
"Liu",
"Chang",
""
],
[
"Wei",
"Jin",
""
],
[
"Yang",
"Xiaomeng",
""
],
[
"Zhou",
"Yu",
""
],
[
"Ma",
"Can",
""
],
[
"Ji",
"Xiangyang",
""
]
] | TITLE: Linguistics-aware Masked Image Modeling for Self-supervised Scene Text
Recognition
ABSTRACT: Text images are unique in their dual nature, encompassing both visual and
linguistic information. The visual component encompasses structural and
appearance-based features, while the linguistic dimension incorporates
contextual and semantic elements. In scenarios with degraded visual quality,
linguistic patterns serve as crucial supplements for comprehension,
highlighting the necessity of integrating both aspects for robust scene text
recognition (STR). Contemporary STR approaches often use language models or
semantic reasoning modules to capture linguistic features, typically requiring
large-scale annotated datasets. Self-supervised learning, which lacks
annotations, presents challenges in disentangling linguistic features related
to the global context. Typically, sequence contrastive learning emphasizes the
alignment of local features, while masked image modeling (MIM) tends to exploit
local structures to reconstruct visual patterns, resulting in limited
linguistic knowledge. In this paper, we propose a Linguistics-aware Masked
Image Modeling (LMIM) approach, which channels the linguistic information into
the decoding process of MIM through a separate branch. Specifically, we design
a linguistics alignment module to extract vision-independent features as
linguistic guidance using inputs with different visual appearances. As features
extend beyond mere visual structures, LMIM must consider the global context to
achieve reconstruction. Extensive experiments on various benchmarks
quantitatively demonstrate our state-of-the-art performance, and attention
visualizations qualitatively show the simultaneous capture of both visual and
linguistic information.
|
2503.18751 | Wesley Scivetti | Wesley Scivetti and Nathan Schneider | Construction Identification and Disambiguation Using BERT: A Case Study
of NPN | 8 pages, ACL long-paper format (preprint) | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Construction Grammar hypothesizes that knowledge of a language consists
chiefly of knowledge of form-meaning pairs (''constructions'') that include
vocabulary, general grammar rules, and even idiosyncratic patterns. Recent work
has shown that transformer language models represent at least some
constructional patterns, including ones where the construction is rare overall.
In this work, we probe BERT's representation of the form and meaning of a minor
construction of English, the NPN (noun-preposition-noun) construction --
exhibited in such expressions as face to face and day to day -- which is known
to be polysemous. We construct a benchmark dataset of semantically annotated
corpus instances (including distractors that superficially resemble the
construction). With this dataset, we train and evaluate probing classifiers.
They achieve decent discrimination of the construction from distractors, as
well as sense disambiguation among true instances of the construction,
revealing that BERT embeddings carry indications of the construction's
semantics. Moreover, artificially permuting the word order of true construction
instances causes them to be rejected, indicating sensitivity to matters of
form. We conclude that BERT does latently encode at least some knowledge of the
NPN construction going beyond a surface syntactic pattern and lexical cues.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 14:59:39 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Scivetti",
"Wesley",
""
],
[
"Schneider",
"Nathan",
""
]
] | TITLE: Construction Identification and Disambiguation Using BERT: A Case Study
of NPN
ABSTRACT: Construction Grammar hypothesizes that knowledge of a language consists
chiefly of knowledge of form-meaning pairs (''constructions'') that include
vocabulary, general grammar rules, and even idiosyncratic patterns. Recent work
has shown that transformer language models represent at least some
constructional patterns, including ones where the construction is rare overall.
In this work, we probe BERT's representation of the form and meaning of a minor
construction of English, the NPN (noun-preposition-noun) construction --
exhibited in such expressions as face to face and day to day -- which is known
to be polysemous. We construct a benchmark dataset of semantically annotated
corpus instances (including distractors that superficially resemble the
construction). With this dataset, we train and evaluate probing classifiers.
They achieve decent discrimination of the construction from distractors, as
well as sense disambiguation among true instances of the construction,
revealing that BERT embeddings carry indications of the construction's
semantics. Moreover, artificially permuting the word order of true construction
instances causes them to be rejected, indicating sensitivity to matters of
form. We conclude that BERT does latently encode at least some knowledge of the
NPN construction going beyond a surface syntactic pattern and lexical cues.
|
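The probing setup in the record above trains lightweight classifiers on BERT representations to separate true NPN instances from distractors and to disambiguate senses. Below is a minimal sketch of such a probe; mean-pooling the whole sentence is a simplification (the paper presumably uses span-level features for the NPN expression), and the two toy examples are invented for illustration.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def sentence_embedding(text: str) -> np.ndarray:
    """Mean-pooled final-layer BERT states, a simple stand-in for a span
    representation of the candidate NPN expression."""
    enc = tok(text, return_tensors="pt", truncation=True)
    return bert(**enc).last_hidden_state.mean(dim=1).squeeze(0).numpy()

# Invented toy examples: 1 = true NPN instance, 0 = superficially similar distractor.
texts = ["They stood face to face.", "He turned his face to the window."]
labels = [1, 0]

X = np.stack([sentence_embedding(t) for t in texts])
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print(probe.predict(X))
```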
2503.18755 | Ryo Fujii | Nathan Darjana, Ryo Fujii, Hideo Saito, Hiroki Kajita | EgoSurgery-HTS: A Dataset for Egocentric Hand-Tool Segmentation in Open
Surgery Videos | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Egocentric open-surgery videos capture rich, fine-grained details essential
for accurately modeling surgical procedures and human behavior in the operating
room. A detailed, pixel-level understanding of hands and surgical tools is
crucial for interpreting a surgeon's actions and intentions. We introduce
EgoSurgery-HTS, a new dataset with pixel-wise annotations and a benchmark suite
for segmenting surgical tools, hands, and interacting tools in egocentric
open-surgery videos. Specifically, we provide a labeled dataset for (1) tool
instance segmentation of 14 distinct surgical tools, (2) hand instance
segmentation, and (3) hand-tool segmentation to label hands and the tools they
manipulate. Using EgoSurgery-HTS, we conduct extensive evaluations of
state-of-the-art segmentation methods and demonstrate significant improvements
in the accuracy of hand and hand-tool segmentation in egocentric open-surgery
videos compared to existing datasets. The dataset will be released at
https://github.com/Fujiry0/EgoSurgery.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 15:04:32 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Darjana",
"Nathan",
""
],
[
"Fujii",
"Ryo",
""
],
[
"Saito",
"Hideo",
""
],
[
"Kajita",
"Hiroki",
""
]
] | TITLE: EgoSurgery-HTS: A Dataset for Egocentric Hand-Tool Segmentation in Open
Surgery Videos
ABSTRACT: Egocentric open-surgery videos capture rich, fine-grained details essential
for accurately modeling surgical procedures and human behavior in the operating
room. A detailed, pixel-level understanding of hands and surgical tools is
crucial for interpreting a surgeon's actions and intentions. We introduce
EgoSurgery-HTS, a new dataset with pixel-wise annotations and a benchmark suite
for segmenting surgical tools, hands, and interacting tools in egocentric
open-surgery videos. Specifically, we provide a labeled dataset for (1) tool
instance segmentation of 14 distinct surgical tools, (2) hand instance
segmentation, and (3) hand-tool segmentation to label hands and the tools they
manipulate. Using EgoSurgery-HTS, we conduct extensive evaluations of
state-of-the-art segmentation methods and demonstrate significant improvements
in the accuracy of hand and hand-tool segmentation in egocentric open-surgery
videos compared to existing datasets. The dataset will be released at
https://github.com/Fujiry0/EgoSurgery.
|
2503.18759 | Wenchao Xie | Wenchao Xie, Jiawei Xu, Zheng Peng, Qingsong Wang | Efficient QR-Based CP Decomposition Acceleration via Dimension Tree and
Extrapolation | null | null | null | null | math.NA cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The canonical polyadic (CP) decomposition is one of the most widely used
tensor decomposition techniques. The conventional CP decomposition algorithm
combines alternating least squares (ALS) with the normal equation. However, the
normal equation is susceptible to numerical ill-conditioning, which can
adversely affect the decomposition results. To mitigate this issue, ALS
combined with QR decomposition has been proposed as a more numerically stable
alternative. Although this method enhances stability, its iterative process
involves tensor-times-matrix (TTM) operations, which typically result in higher
computational costs. To reduce this cost, we propose branch reutilization of
dimension tree, which increases the reuse of intermediate tensors and reduces
the number of TTM operations. This strategy achieves a $33\%$ reduction in
computational complexity for third and fourth order tensors. Additionally, we
introduce a specialized extrapolation method in CP-ALS-QR algorithm, leveraging
the unique structure of the matrix $\mathbf{Q}_0$ to further enhance
convergence. By integrating both techniques, we develop a novel CP
decomposition algorithm that significantly improves efficiency. Numerical
experiments on five real-world datasets show that our proposed algorithm
reduces iteration costs and enhances fitting accuracy compared to the CP-ALS-QR
algorithm.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 15:07:37 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xie",
"Wenchao",
""
],
[
"Xu",
"Jiawei",
""
],
[
"Peng",
"Zheng",
""
],
[
"Wang",
"Qingsong",
""
]
] | TITLE: Efficient QR-Based CP Decomposition Acceleration via Dimension Tree and
Extrapolation
ABSTRACT: The canonical polyadic (CP) decomposition is one of the most widely used
tensor decomposition techniques. The conventional CP decomposition algorithm
combines alternating least squares (ALS) with the normal equation. However, the
normal equation is susceptible to numerical ill-conditioning, which can
adversely affect the decomposition results. To mitigate this issue, ALS
combined with QR decomposition has been proposed as a more numerically stable
alternative. Although this method enhances stability, its iterative process
involves tensor-times-matrix (TTM) operations, which typically result in higher
computational costs. To reduce this cost, we propose branch reutilization of
dimension tree, which increases the reuse of intermediate tensors and reduces
the number of TTM operations. This strategy achieves a $33\%$ reduction in
computational complexity for third and fourth order tensors. Additionally, we
introduce a specialized extrapolation method in CP-ALS-QR algorithm, leveraging
the unique structure of the matrix $\mathbf{Q}_0$ to further enhance
convergence. By integrating both techniques, we develop a novel CP
decomposition algorithm that significantly improves efficiency. Numerical
experiments on five real-world datasets show that our proposed algorithm
reduces iteration costs and enhances fitting accuracy compared to the CP-ALS-QR
algorithm.
|
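For context on the baseline that the record above accelerates: the sketch below is a plain QR-based least-squares update of one CP factor for a third-order tensor, without the paper's dimension-tree branch reuse or extrapolation. The unfolding convention and the smoke test are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.linalg import khatri_rao

def cp_als_qr_update(X, B, C):
    """QR-based least-squares update of the mode-1 factor A for a 3rd-order
    tensor X with current factors B (J x R) and C (K x R).

    Solves min_A || X_(1) - A (C ⊙ B)^T ||_F via a thin QR factorisation of
    the Khatri-Rao product, avoiding the normal equations.
    """
    I, J, K = X.shape
    X1 = X.reshape(I, J * K, order="F")   # mode-1 unfolding (j fastest, k slowest)
    KR = khatri_rao(C, B)                 # rows ordered to match X1's columns
    Q, Rmat = np.linalg.qr(KR)            # thin QR
    return np.linalg.solve(Rmat, Q.T @ X1.T).T

# Smoke test: with the true B and C, the update recovers A exactly.
rng = np.random.default_rng(0)
I, J, K, R = 6, 5, 4, 3
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
X = np.einsum("ir,jr,kr->ijk", A, B, C)
print(np.allclose(cp_als_qr_update(X, B, C), A))  # True
```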
2503.18760 | Nick McKenna | Nick McKenna, Xinnuo Xu, Jack Williams, Nick Wilson, Benjamin Van
Durme, Christian Poelitz | Synthetic Function Demonstrations Improve Generation in Low-Resource
Programming Languages | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | A key consideration when training an LLM is whether the target language is
more or less resourced, whether this is English compared to Welsh, or Python
compared to Excel. Typical training data for programming languages consist of
real program demonstrations coupled with human-written comments. Here we
present novel approaches to the creation of such data for low resource
programming languages. We generate fully-synthetic, textbook-quality
demonstrations of common library functions in an example domain of Excel
formulas, using a teacher model. We then finetune an underperforming student
model, and show improvement on 2 question-answering datasets recast into the
Excel domain. We show advantages of finetuning over standard, off-the-shelf RAG
approaches, which can offer only modest improvement due to the unfamiliar
target domain.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 15:09:03 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"McKenna",
"Nick",
""
],
[
"Xu",
"Xinnuo",
""
],
[
"Williams",
"Jack",
""
],
[
"Wilson",
"Nick",
""
],
[
"Van Durme",
"Benjamin",
""
],
[
"Poelitz",
"Christian",
""
]
] | TITLE: Synthetic Function Demonstrations Improve Generation in Low-Resource
Programming Languages
ABSTRACT: A key consideration when training an LLM is whether the target language is
more or less resourced, whether this is English compared to Welsh, or Python
compared to Excel. Typical training data for programming languages consist of
real program demonstrations coupled with human-written comments. Here we
present novel approaches to the creation of such data for low resource
programming languages. We generate fully-synthetic, textbook-quality
demonstrations of common library functions in an example domain of Excel
formulas, using a teacher model. We then finetune an underperforming student
model, and show improvement on 2 question-answering datasets recast into the
Excel domain. We show advantages of finetuning over standard, off-the-shelf RAG
approaches, which can offer only modest improvement due to the unfamiliar
target domain.
|
2503.18792 | Wenyue Hua | Jingwen Cheng, Kshitish Ghate, Wenyue Hua, William Yang Wang, Hong
Shen, Fei Fang | REALM: A Dataset of Real-World LLM Use Cases | 9 pages, 5 figures | null | null | null | cs.HC cs.AI cs.CL cs.CY | http://creativecommons.org/licenses/by/4.0/ | Large Language Models, such as the GPT series, have driven significant
industrial applications, leading to economic and societal transformations.
However, a comprehensive understanding of their real-world applications remains
limited. To address this, we introduce REALM, a dataset of over 94,000 LLM use
cases collected from Reddit and news articles. REALM captures two key
dimensions: the diverse applications of LLMs and the demographics of their
users. It categorizes LLM applications and explores how users' occupations
relate to the types of applications they use. By integrating real-world data,
REALM offers insights into LLM adoption across different domains, providing a
foundation for future research on their evolving societal roles. A dedicated
dashboard https://realm-e7682.web.app/ presents the data.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 15:39:25 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Cheng",
"Jingwen",
""
],
[
"Ghate",
"Kshitish",
""
],
[
"Hua",
"Wenyue",
""
],
[
"Wang",
"William Yang",
""
],
[
"Shen",
"Hong",
""
],
[
"Fang",
"Fei",
""
]
] | TITLE: REALM: A Dataset of Real-World LLM Use Cases
ABSTRACT: Large Language Models, such as the GPT series, have driven significant
industrial applications, leading to economic and societal transformations.
However, a comprehensive understanding of their real-world applications remains
limited. To address this, we introduce REALM, a dataset of over 94,000 LLM use
cases collected from Reddit and news articles. REALM captures two key
dimensions: the diverse applications of LLMs and the demographics of their
users. It categorizes LLM applications and explores how users' occupations
relate to the types of applications they use. By integrating real-world data,
REALM offers insights into LLM adoption across different domains, providing a
foundation for future research on their evolving societal roles. A dedicated
dashboard https://realm-e7682.web.app/ presents the data.
|
2503.18797 | Jean-Fran\c{c}ois Muzy | Roberta Baggio, Killian Pujol, Florian Pantillon, Dominique Lambert,
Jean-Baptiste Filippi and Jean-Fran\c{c}ois Muzy | Local wind speed forecasting at short time horizons relying on both
Numerical Weather Prediction and observations from surrounding station | 19 pages, 12 figures, 4 tables | null | null | null | physics.ao-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study presents a hybrid neural network model for short-term (1-6 hours
ahead) surface wind speed forecasting, combining Numerical Weather Prediction
(NWP) with observational data from ground weather stations. It relies on the
MeteoNet dataset, which includes data from global (ARPEGE) and regional (AROME)
NWP models of the French weather service and meteorological observations from
ground stations in the French Mediterranean. The proposed neural network
architecture integrates recent past station observations (over the last few hours)
and AROME and ARPEGE predictions on a small subgrid around the target location.
The model is designed to provide both deterministic and probabilistic
forecasts, with the latter predicting the parameters of a suitable probability
distribution that notably allows us to capture extreme wind events. Our results
demonstrate that the hybrid model significantly outperforms baseline methods,
including raw NWP predictions, persistence models, and linear regression,
across all forecast horizons. For instance, the model reduces RMSE by up to 30\%
compared to AROME predictions. Probabilistic forecasting further enhances
performance, particularly for extreme quantiles, by estimating conditional
quantiles rather than relying solely on the conditional mean. Fine-tuning the
model for specific stations, such as those in the Mediterranean island of
Corsica, further improves forecasting accuracy. Our study highlights the
importance of integrating multiple data sources and probabilistic approaches to
improve short-term wind speed forecasting. It defines an effective approach,
even in complex terrain like Corsica, where localized wind variations are
significant.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 15:42:03 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Baggio",
"Roberta",
""
],
[
"Pujol",
"Killian",
""
],
[
"Pantillon",
"Florian",
""
],
[
"Lambert",
"Dominique",
""
],
[
"Filippi",
"Jean-Baptiste",
""
],
[
"Muzy",
"Jean-François",
""
]
] | TITLE: Local wind speed forecasting at short time horizons relying on both
Numerical Weather Prediction and observations from surrounding station
ABSTRACT: This study presents a hybrid neural network model for short-term (1-6 hours
ahead) surface wind speed forecasting, combining Numerical Weather Prediction
(NWP) with observational data from ground weather stations. It relies on the
MeteoNet dataset, which includes data from global (ARPEGE) and regional (AROME)
NWP models of the French weather service and meteorological observations from
ground stations in the French Mediterranean. The proposed neural network
architecture integrates recent past station observations (over the last few hours)
and AROME and ARPEGE predictions on a small subgrid around the target location.
The model is designed to provide both deterministic and probabilistic
forecasts, with the latter predicting the parameters of a suitable probability
distribution that notably allows us to capture extreme wind events. Our results
demonstrate that the hybrid model significantly outperforms baseline methods,
including raw NWP predictions, persistence models, and linear regression,
across all forecast horizons. For instance, the model reduces RMSE by up to 30\%
compared to AROME predictions. Probabilistic forecasting further enhances
performance, particularly for extreme quantiles, by estimating conditional
quantiles rather than relying solely on the conditional mean. Fine-tuning the
model for specific stations, such as those in the Mediterranean island of
Corsica, further improves forecasting accuracy. Our study highlights the
importance of integrating multiple data sources and probabilistic approaches to
improve short-term wind speed forecasting. It defines an effective approach,
even in complex terrain like Corsica, where localized wind variations are
significant.
|
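The probabilistic side of the forecasting model above is evaluated through conditional quantiles. One common way to train such outputs is the pinball (quantile) loss sketched below; the abstract instead fits a parametric distribution, so treat this as an illustrative alternative with assumed quantile levels.

```python
import torch

def pinball_loss(pred: torch.Tensor, target: torch.Tensor, quantiles: torch.Tensor) -> torch.Tensor:
    """Average pinball (quantile) loss.

    pred:      (batch, n_quantiles) predicted wind-speed quantiles.
    target:    (batch,) observed wind speeds.
    quantiles: (n_quantiles,) levels in (0, 1), e.g. [0.05, 0.5, 0.95].
    """
    diff = target.unsqueeze(1) - pred                       # (batch, n_quantiles)
    return torch.maximum(quantiles * diff, (quantiles - 1.0) * diff).mean()

# Under-predicting the 0.95 quantile of an extreme observation is penalised
# much more heavily than under-predicting the median.
q = torch.tensor([0.05, 0.50, 0.95])
print(pinball_loss(torch.tensor([[2.0, 5.0, 9.0]]), torch.tensor([12.0]), q))
```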
2503.18799 | Vivek Vrujlal Vekariya | Vivek Vekariya, Mojdeh Golagha, Andrea Stocco and Alexander Pretschner | Latent Space Class Dispersion: Effective Test Data Quality Assessment
for DNNs | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by-sa/4.0/ | High-quality test datasets are crucial for assessing the reliability of Deep
Neural Networks (DNNs). Mutation testing evaluates test dataset quality based
on their ability to uncover injected faults in DNNs as measured by mutation
score (MS). At the same time, its high computational cost motivates researchers
to seek alternative test adequacy criteria. We propose Latent Space Class
Dispersion (LSCD), a novel metric to quantify the quality of test datasets for
DNNs. It measures the degree of dispersion within a test dataset as observed in
the latent space of a DNN. Our empirical study shows that LSCD reveals and
quantifies deficiencies in the test dataset of three popular benchmarks
pertaining to image classification tasks using DNNs. Corner cases generated
using automated fuzzing were found to help enhance fault detection and improve
the overall quality of the original test sets, as measured by MS and LSCD. Our
experiments revealed a high positive correlation (0.87) between LSCD and MS,
significantly higher than the one achieved by the well-studied Distance-based
Surprise Coverage (0.25). These results were obtained from 129 mutants
generated through pre-training mutation operators, with statistical
significance and a high validity of corner cases. These observations suggest
that LSCD can serve as a cost-effective alternative to expensive mutation
testing, eliminating the need to generate mutant models while offering
comparably valuable insights into test dataset quality for DNNs.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 15:45:50 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Vekariya",
"Vivek",
""
],
[
"Golagha",
"Mojdeh",
""
],
[
"Stocco",
"Andrea",
""
],
[
"Pretschner",
"Alexander",
""
]
] | TITLE: Latent Space Class Dispersion: Effective Test Data Quality Assessment
for DNNs
ABSTRACT: High-quality test datasets are crucial for assessing the reliability of Deep
Neural Networks (DNNs). Mutation testing evaluates test dataset quality based
on their ability to uncover injected faults in DNNs as measured by mutation
score (MS). At the same time, its high computational cost motivates researchers
to seek alternative test adequacy criteria. We propose Latent Space Class
Dispersion (LSCD), a novel metric to quantify the quality of test datasets for
DNNs. It measures the degree of dispersion within a test dataset as observed in
the latent space of a DNN. Our empirical study shows that LSCD reveals and
quantifies deficiencies in the test dataset of three popular benchmarks
pertaining to image classification tasks using DNNs. Corner cases generated
using automated fuzzing were found to help enhance fault detection and improve
the overall quality of the original test sets, as measured by MS and LSCD. Our
experiments revealed a high positive correlation (0.87) between LSCD and MS,
significantly higher than the one achieved by the well-studied Distance-based
Surprise Coverage (0.25). These results were obtained from 129 mutants
generated through pre-training mutation operators, with statistical
significance and a high validity of corner cases. These observations suggest
that LSCD can serve as a cost-effective alternative to expensive mutation
testing, eliminating the need to generate mutant models while offering
comparably valuable insights into test dataset quality for DNNs.
|
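The abstract above defines LSCD only informally, as the degree of dispersion of a test set in a DNN's latent space. One plausible reading, sketched below as an assumption rather than the paper's exact formula, is the mean distance of latent vectors to their class centroid, averaged over classes.

```python
import numpy as np

def latent_space_class_dispersion(latents: np.ndarray, labels: np.ndarray) -> float:
    """Mean distance of latent vectors to their class centroid, averaged over classes.

    latents: (n_samples, dim) activations from a chosen layer of the DNN under test.
    labels:  (n_samples,) class ids of the test inputs.
    """
    per_class = []
    for c in np.unique(labels):
        Z = latents[labels == c]
        per_class.append(np.linalg.norm(Z - Z.mean(axis=0), axis=1).mean())
    return float(np.mean(per_class))
```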
2503.18802 | Monan Zhou Dr | Monan Zhou and Shenyang Xu and Zhaorui Liu and Zhaowen Wang and Feng
Yu and Wei Li and Baoqiang Han | CCMusic: An Open and Diverse Database for Chinese Music Information
Retrieval Research | 17 pages, 18 figures | Transactions of the International Society for Music Information
Retrieval, 2025, 8(1), 22-38 | 10.5334/tismir.194 | null | cs.IR cs.SD | http://creativecommons.org/licenses/by/4.0/ | Data are crucial in various computer-related fields, including music
information retrieval (MIR), an interdisciplinary area bridging computer
science and music. This paper introduces CCMusic, an open and diverse database
comprising multiple datasets specifically designed for tasks related to Chinese
music, highlighting our focus on this culturally rich domain. The database
integrates both published and unpublished datasets, with steps taken such as
data cleaning, label refinement, and data structure unification to ensure data
consistency and create ready-to-use versions. We conduct benchmark evaluations
for all datasets using a unified evaluation framework developed specifically
for this purpose. This publicly available framework supports both
classification and detection tasks, ensuring standardized and reproducible
results across all datasets. The database is hosted on HuggingFace and
ModelScope, two open and multifunctional data and model hosting platforms,
ensuring ease of accessibility and usability.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 15:47:21 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhou",
"Monan",
""
],
[
"Xu",
"Shenyang",
""
],
[
"Liu",
"Zhaorui",
""
],
[
"Wang",
"Zhaowen",
""
],
[
"Yu",
"Feng",
""
],
[
"Li",
"Wei",
""
],
[
"Han",
"Baoqiang",
""
]
] | TITLE: CCMusic: An Open and Diverse Database for Chinese Music Information
Retrieval Research
ABSTRACT: Data are crucial in various computer-related fields, including music
information retrieval (MIR), an interdisciplinary area bridging computer
science and music. This paper introduces CCMusic, an open and diverse database
comprising multiple datasets specifically designed for tasks related to Chinese
music, highlighting our focus on this culturally rich domain. The database
integrates both published and unpublished datasets, with steps taken such as
data cleaning, label refinement, and data structure unification to ensure data
consistency and create ready-to-use versions. We conduct benchmark evaluations
for all datasets using a unified evaluation framework developed specifically
for this purpose. This publicly available framework supports both
classification and detection tasks, ensuring standardized and reproducible
results across all datasets. The database is hosted on HuggingFace and
ModelScope, two open and multifunctional data and model hosting platforms,
ensuring ease of accessibility and usability.
|
2503.18812 | Stamos Katsigiannis | Shrikant Malviya, Neelanjan Bhowmik, Stamos Katsigiannis | SKDU at De-Factify 4.0: Vision Transformer with Data Augmentation for
AI-Generated Image Detection | De-Factify 4.0 workshop at the 39th Annual AAAI Conference on
Artificial Intelligence (AAAI 2025) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The aim of this work is to explore the potential of pre-trained
vision-language models, e.g. Vision Transformers (ViT), enhanced with advanced
data augmentation strategies for the detection of AI-generated images. Our
approach leverages a fine-tuned ViT model trained on the Defactify-4.0 dataset,
which includes images generated by state-of-the-art models such as Stable
Diffusion 2.1, Stable Diffusion XL, Stable Diffusion 3, DALL-E 3, and
MidJourney. We employ perturbation techniques like flipping, rotation, Gaussian
noise injection, and JPEG compression during training to improve model
robustness and generalisation. The experimental results demonstrate that our
ViT-based pipeline achieves state-of-the-art performance, significantly
outperforming competing methods on both validation and test datasets.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 15:53:54 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Malviya",
"Shrikant",
""
],
[
"Bhowmik",
"Neelanjan",
""
],
[
"Katsigiannis",
"Stamos",
""
]
] | TITLE: SKDU at De-Factify 4.0: Vision Transformer with Data Augmentation for
AI-Generated Image Detection
ABSTRACT: The aim of this work is to explore the potential of pre-trained
vision-language models, e.g. Vision Transformers (ViT), enhanced with advanced
data augmentation strategies for the detection of AI-generated images. Our
approach leverages a fine-tuned ViT model trained on the Defactify-4.0 dataset,
which includes images generated by state-of-the-art models such as Stable
Diffusion 2.1, Stable Diffusion XL, Stable Diffusion 3, DALL-E 3, and
MidJourney. We employ perturbation techniques like flipping, rotation, Gaussian
noise injection, and JPEG compression during training to improve model
robustness and generalisation. The experimental results demonstrate that our
ViT-based pipeline achieves state-of-the-art performance, significantly
outperforming competing methods on both validation and test datasets.
|
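The augmentation recipe in the record above lists flipping, rotation, Gaussian noise injection, and JPEG compression. A small PIL/NumPy sketch of such a perturbation pipeline is given below; the rotation range, noise level, and JPEG quality range are illustrative, not the authors' hyperparameters.

```python
import io
import random
import numpy as np
from PIL import Image, ImageOps

def perturb(img: Image.Image) -> Image.Image:
    """Random flip, small rotation, Gaussian noise, and JPEG re-compression."""
    img = img.convert("RGB")
    if random.random() < 0.5:
        img = ImageOps.mirror(img)                      # horizontal flip
    img = img.rotate(random.uniform(-15, 15))           # small random rotation

    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0.0, 5.0, arr.shape)        # Gaussian noise injection
    img = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

    buf = io.BytesIO()                                   # JPEG compression artefacts
    img.save(buf, format="JPEG", quality=random.randint(40, 90))
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```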
2503.18814 | Jacopo De Berardinis | Jacopo de Berardinis, Lorenzo Porcaro, Albert Mero\~no-Pe\~nuela,
Angelo Cangelosi, Tess Buckley | Towards Responsible AI Music: an Investigation of Trustworthy Features
for Creative Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative AI is radically changing the creative arts, by fundamentally
transforming the way we create and interact with cultural artefacts. While
offering unprecedented opportunities for artistic expression and
commercialisation, this technology also raises ethical, societal, and legal
concerns. Key among these are the potential displacement of human creativity,
copyright infringement stemming from vast training datasets, and the lack of
transparency, explainability, and fairness mechanisms. As generative systems
become pervasive in this domain, responsible design is crucial. Whilst previous
work has tackled isolated aspects of generative systems (e.g., transparency,
evaluation, data), we take a comprehensive approach, grounding these efforts
within the Ethics Guidelines for Trustworthy Artificial Intelligence produced
by the High-Level Expert Group on AI appointed by the European Commission - a
framework for designing responsible AI systems across seven macro requirements.
Focusing on generative music AI, we illustrate how these requirements can be
contextualised for the field, addressing trustworthiness across multiple
dimensions and integrating insights from the existing literature. We further
propose a roadmap for operationalising these contextualised requirements,
emphasising interdisciplinary collaboration and stakeholder engagement. Our
work provides a foundation for designing and evaluating responsible music
generation systems, calling for collaboration among AI experts, ethicists,
legal scholars, and artists. This manuscript is accompanied by a website:
https://amresearchlab.github.io/raim-framework/.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 15:54:47 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"de Berardinis",
"Jacopo",
""
],
[
"Porcaro",
"Lorenzo",
""
],
[
"Meroño-Peñuela",
"Albert",
""
],
[
"Cangelosi",
"Angelo",
""
],
[
"Buckley",
"Tess",
""
]
] | TITLE: Towards Responsible AI Music: an Investigation of Trustworthy Features
for Creative Systems
ABSTRACT: Generative AI is radically changing the creative arts, by fundamentally
transforming the way we create and interact with cultural artefacts. While
offering unprecedented opportunities for artistic expression and
commercialisation, this technology also raises ethical, societal, and legal
concerns. Key among these are the potential displacement of human creativity,
copyright infringement stemming from vast training datasets, and the lack of
transparency, explainability, and fairness mechanisms. As generative systems
become pervasive in this domain, responsible design is crucial. Whilst previous
work has tackled isolated aspects of generative systems (e.g., transparency,
evaluation, data), we take a comprehensive approach, grounding these efforts
within the Ethics Guidelines for Trustworthy Artificial Intelligence produced
by the High-Level Expert Group on AI appointed by the European Commission - a
framework for designing responsible AI systems across seven macro requirements.
Focusing on generative music AI, we illustrate how these requirements can be
contextualised for the field, addressing trustworthiness across multiple
dimensions and integrating insights from the existing literature. We further
propose a roadmap for operationalising these contextualised requirements,
emphasising interdisciplinary collaboration and stakeholder engagement. Our
work provides a foundation for designing and evaluating responsible music
generation systems, calling for collaboration among AI experts, ethicists,
legal scholars, and artists. This manuscript is accompanied by a website:
https://amresearchlab.github.io/raim-framework/.
|
2503.18817 | Jeonghyeon Kim | Jeonghyeon Kim and Sangheum Hwang | Enhanced OoD Detection through Cross-Modal Alignment of Multi-Modal
Representations | CVPR 2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Prior research on out-of-distribution detection (OoDD) has primarily focused
on single-modality models. Recently, with the advent of large-scale pretrained
vision-language models such as CLIP, OoDD methods utilizing such multi-modal
representations through zero-shot and prompt learning strategies have emerged.
However, these methods typically involve either freezing the pretrained weights
or only partially tuning them, which can be suboptimal for downstream datasets.
In this paper, we highlight that multi-modal fine-tuning (MMFT) can achieve
notable OoDD performance. Despite some recent works demonstrating the impact of
fine-tuning methods for OoDD, there remains significant potential for
performance improvement. We investigate the limitation of na\"ive fine-tuning
methods, examining why they fail to fully leverage the pretrained knowledge.
Our empirical analysis suggests that this issue could stem from the modality
gap within in-distribution (ID) embeddings. To address this, we propose a
training objective that enhances cross-modal alignment by regularizing the
distances between image and text embeddings of ID data. This adjustment helps
in better utilizing pretrained textual information by aligning similar
semantics from different modalities (i.e., text and image) more closely in the
hyperspherical representation space. We theoretically demonstrate that the
proposed regularization corresponds to the maximum likelihood estimation of an
energy-based model on a hypersphere. Utilizing ImageNet-1k OoD benchmark
datasets, we show that our method, combined with post-hoc OoDD approaches
leveraging pretrained knowledge (e.g., NegLabel), significantly outperforms
existing methods, achieving state-of-the-art OoDD performance and leading ID
accuracy.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 16:00:21 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Kim",
"Jeonghyeon",
""
],
[
"Hwang",
"Sangheum",
""
]
] | TITLE: Enhanced OoD Detection through Cross-Modal Alignment of Multi-Modal
Representations
ABSTRACT: Prior research on out-of-distribution detection (OoDD) has primarily focused
on single-modality models. Recently, with the advent of large-scale pretrained
vision-language models such as CLIP, OoDD methods utilizing such multi-modal
representations through zero-shot and prompt learning strategies have emerged.
However, these methods typically involve either freezing the pretrained weights
or only partially tuning them, which can be suboptimal for downstream datasets.
In this paper, we highlight that multi-modal fine-tuning (MMFT) can achieve
notable OoDD performance. Despite some recent works demonstrating the impact of
fine-tuning methods for OoDD, there remains significant potential for
performance improvement. We investigate the limitation of na\"ive fine-tuning
methods, examining why they fail to fully leverage the pretrained knowledge.
Our empirical analysis suggests that this issue could stem from the modality
gap within in-distribution (ID) embeddings. To address this, we propose a
training objective that enhances cross-modal alignment by regularizing the
distances between image and text embeddings of ID data. This adjustment helps
in better utilizing pretrained textual information by aligning similar
semantics from different modalities (i.e., text and image) more closely in the
hyperspherical representation space. We theoretically demonstrate that the
proposed regularization corresponds to the maximum likelihood estimation of an
energy-based model on a hypersphere. Utilizing ImageNet-1k OoD benchmark
datasets, we show that our method, combined with post-hoc OoDD approaches
leveraging pretrained knowledge (e.g., NegLabel), significantly outperforms
existing methods, achieving state-of-the-art OoDD performance and leading ID
accuracy.
|
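As an illustration of the cross-modal alignment objective described in the record above, the Python sketch below shows one plausible form of the regularizer: a cosine-distance penalty between L2-normalized image and text embeddings of paired in-distribution samples on the unit hypersphere. The function name, weighting, and usage are assumptions for illustration only, not the authors' implementation.

import torch
import torch.nn.functional as F

def alignment_regularizer(image_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    # image_emb, text_emb: (batch, dim) embeddings of paired in-distribution samples.
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    # Cosine distance between each image embedding and its paired text embedding;
    # minimizing this pulls the two modalities together and shrinks the modality gap.
    return (1.0 - (img * txt).sum(dim=-1)).mean()

# Hypothetical usage during multi-modal fine-tuning:
# loss = finetune_loss + 0.1 * alignment_regularizer(img_feats, txt_feats)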
2503.18830 | Zhengxian Wu | Zhengxian Wu, Chuanrui Zhang, Hangrui Xu, Peng Jiao, Haoqian Wang | DAGait: Generalized Skeleton-Guided Data Alignment for Gait Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gait recognition is emerging as a promising and innovative area within the
field of computer vision, widely applied to remote person identification.
Although existing gait recognition methods have achieved substantial success in
controlled laboratory datasets, their performance often declines significantly
when transitioning to wild datasets. We argue that the performance gap can be
primarily attributed to the spatio-temporal distribution inconsistencies
present in wild datasets, where subjects appear at varying angles, positions,
and distances across the frames. To achieve accurate gait recognition in the
wild, we propose a skeleton-guided silhouette alignment strategy, which uses
prior knowledge of the skeletons to perform affine transformations on the
corresponding silhouettes. To the best of our knowledge, this is the first study
to explore the impact of data alignment on gait recognition. We conducted
extensive experiments across multiple datasets and network architectures, and
the results demonstrate the significant advantages of our proposed alignment
strategy. Specifically, on the challenging Gait3D dataset, our method achieved
an average performance improvement of 7.9% across all evaluated networks.
Furthermore, our method achieves substantial improvements on cross-domain
datasets, with accuracy improvements of up to 24.0%.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 16:08:21 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wu",
"Zhengxian",
""
],
[
"Zhang",
"Chuanrui",
""
],
[
"Xu",
"Hangrui",
""
],
[
"Jiao",
"Peng",
""
],
[
"Wang",
"Haoqian",
""
]
] | TITLE: DAGait: Generalized Skeleton-Guided Data Alignment for Gait Recognition
ABSTRACT: Gait recognition is emerging as a promising and innovative area within the
field of computer vision, widely applied to remote person identification.
Although existing gait recognition methods have achieved substantial success in
controlled laboratory datasets, their performance often declines significantly
when transitioning to wild datasets. We argue that the performance gap can be
primarily attributed to the spatio-temporal distribution inconsistencies
present in wild datasets, where subjects appear at varying angles, positions,
and distances across the frames. To achieve accurate gait recognition in the
wild, we propose a skeleton-guided silhouette alignment strategy, which uses
prior knowledge of the skeletons to perform affine transformations on the
corresponding silhouettes. To the best of our knowledge, this is the first study
to explore the impact of data alignment on gait recognition. We conducted
extensive experiments across multiple datasets and network architectures, and
the results demonstrate the significant advantages of our proposed alignment
strategy. Specifically, on the challenging Gait3D dataset, our method achieved
an average performance improvement of 7.9% across all evaluated networks.
Furthermore, our method achieves substantial improvements on cross-domain
datasets, with accuracy improvements of up to 24.0%.
|
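The skeleton-guided alignment described in record 2503.18830 above can be pictured with the following OpenCV sketch, which estimates an affine transform from detected skeleton keypoints to a canonical reference pose and warps the silhouette accordingly. Keypoint formats and the canonical pose are assumptions for illustration, not the paper's actual pipeline.

import cv2
import numpy as np

def align_silhouette(silhouette: np.ndarray,
                     keypoints: np.ndarray,
                     canonical_keypoints: np.ndarray) -> np.ndarray:
    # silhouette: (H, W) uint8 mask; keypoints and canonical_keypoints: (K, 2) arrays of (x, y).
    M, _ = cv2.estimateAffinePartial2D(keypoints.astype(np.float32),
                                       canonical_keypoints.astype(np.float32))
    h, w = silhouette.shape[:2]
    # Warp so the subject appears at a consistent position, scale, and rotation across frames.
    return cv2.warpAffine(silhouette, M, (w, h), flags=cv2.INTER_NEAREST)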
2503.18836 | Bo Zhou | Yuxuan Zhang, Jinkui Hao, Bo Zhou | Dual-domain Multi-path Self-supervised Diffusion Model for Accelerated
MRI Reconstruction | 10 pages, 8 figures, 5 tables | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Magnetic resonance imaging (MRI) is a vital diagnostic tool, but its
inherently long acquisition times reduce clinical efficiency and patient
comfort. Recent advancements in deep learning, particularly diffusion models,
have improved accelerated MRI reconstruction. However, existing diffusion
models' training often relies on fully sampled data, the models incur high
computational costs, and they often lack uncertainty estimation, limiting their
clinical applicability. To overcome these challenges, we propose a novel
framework, called Dual-domain Multi-path Self-supervised Diffusion Model
(DMSM), that integrates a self-supervised dual-domain diffusion model training
scheme, a lightweight hybrid attention network for the reconstruction diffusion
model, and a multi-path inference strategy, to enhance reconstruction accuracy,
efficiency, and explainability. Unlike traditional diffusion-based models, DMSM
eliminates the dependency on training from fully sampled data, making it more
practical for real-world clinical settings. We evaluated DMSM on two human MRI
datasets, demonstrating that it achieves favorable performance over several
supervised and self-supervised baselines, particularly in preserving fine
anatomical structures and suppressing artifacts under high acceleration
factors. Additionally, our model generates uncertainty maps that correlate
reasonably well with reconstruction errors, offering valuable clinically
interpretable guidance and potentially enhancing diagnostic confidence.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 16:10:51 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Yuxuan",
""
],
[
"Hao",
"Jinkui",
""
],
[
"Zhou",
"Bo",
""
]
] | TITLE: Dual-domain Multi-path Self-supervised Diffusion Model for Accelerated
MRI Reconstruction
ABSTRACT: Magnetic resonance imaging (MRI) is a vital diagnostic tool, but its
inherently long acquisition times reduce clinical efficiency and patient
comfort. Recent advancements in deep learning, particularly diffusion models,
have improved accelerated MRI reconstruction. However, existing diffusion
models' training often relies on fully sampled data, the models incur high
computational costs, and they often lack uncertainty estimation, limiting their
clinical applicability. To overcome these challenges, we propose a novel
framework, called Dual-domain Multi-path Self-supervised Diffusion Model
(DMSM), that integrates a self-supervised dual-domain diffusion model training
scheme, a lightweight hybrid attention network for the reconstruction diffusion
model, and a multi-path inference strategy, to enhance reconstruction accuracy,
efficiency, and explainability. Unlike traditional diffusion-based models, DMSM
eliminates the dependency on training from fully sampled data, making it more
practical for real-world clinical settings. We evaluated DMSM on two human MRI
datasets, demonstrating that it achieves favorable performance over several
supervised and self-supervised baselines, particularly in preserving fine
anatomical structures and suppressing artifacts under high acceleration
factors. Additionally, our model generates uncertainty maps that correlate
reasonably well with reconstruction errors, offering valuable clinically
interpretable guidance and potentially enhancing diagnostic confidence.
|
2503.18841 | Xuan Li | Xuan Li, Yuting Peng, Xiaoxuan Sun, Yifei Duan, Zhou Fang, Tengda Tang | Unsupervised Detection of Fraudulent Transactions in E-commerce Using
Contrastive Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid development of e-commerce, e-commerce platforms are facing an
increasing number of fraud threats. Effectively identifying and preventing
these fraudulent activities has become a critical research problem. Traditional
fraud detection methods typically rely on supervised learning, which requires
large amounts of labeled data. However, such data is often difficult to obtain,
and the continuous evolution of fraudulent activities further reduces the
adaptability and effectiveness of traditional methods. To address this issue,
this study proposes an unsupervised e-commerce fraud detection algorithm based
on SimCLR. The algorithm leverages the contrastive learning framework to
effectively detect fraud by learning the underlying representations of
transaction data in an unlabeled setting. Experimental results on the eBay
platform dataset show that the proposed algorithm outperforms traditional
unsupervised methods such as K-means, Isolation Forest, and Autoencoders in
terms of accuracy, precision, recall, and F1 score, demonstrating strong fraud
detection capabilities. The results confirm that the SimCLR-based unsupervised
fraud detection method has broad application prospects in e-commerce platform
security, improving both detection accuracy and robustness. In the future, with
the increasing scale and diversity of datasets, the model's performance will
continue to improve, and it could be integrated with real-time monitoring
systems to provide more efficient security for e-commerce platforms.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 16:14:16 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Xuan",
""
],
[
"Peng",
"Yuting",
""
],
[
"Sun",
"Xiaoxuan",
""
],
[
"Duan",
"Yifei",
""
],
[
"Fang",
"Zhou",
""
],
[
"Tang",
"Tengda",
""
]
] | TITLE: Unsupervised Detection of Fraudulent Transactions in E-commerce Using
Contrastive Learning
ABSTRACT: With the rapid development of e-commerce, e-commerce platforms are facing an
increasing number of fraud threats. Effectively identifying and preventing
these fraudulent activities has become a critical research problem. Traditional
fraud detection methods typically rely on supervised learning, which requires
large amounts of labeled data. However, such data is often difficult to obtain,
and the continuous evolution of fraudulent activities further reduces the
adaptability and effectiveness of traditional methods. To address this issue,
this study proposes an unsupervised e-commerce fraud detection algorithm based
on SimCLR. The algorithm leverages the contrastive learning framework to
effectively detect fraud by learning the underlying representations of
transaction data in an unlabeled setting. Experimental results on the eBay
platform dataset show that the proposed algorithm outperforms traditional
unsupervised methods such as K-means, Isolation Forest, and Autoencoders in
terms of accuracy, precision, recall, and F1 score, demonstrating strong fraud
detection capabilities. The results confirm that the SimCLR-based unsupervised
fraud detection method has broad application prospects in e-commerce platform
security, improving both detection accuracy and robustness. In the future, with
the increasing scale and diversity of datasets, the model's performance will
continue to improve, and it could be integrated with real-time monitoring
systems to provide more efficient security for e-commerce platforms.
|
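For record 2503.18841 above, SimCLR-style training typically rests on the NT-Xent contrastive loss; a minimal PyTorch sketch is given below, assuming two augmented views of a batch of transaction feature vectors have already been encoded and projected. This is the standard SimCLR loss, not code from the paper.

import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    # z1, z2: (batch, dim) projections of two augmentations of the same transactions.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)            # (2B, d)
    sim = z @ z.t() / temperature                                  # pairwise cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    # The positive for sample i is its other view, located at index i + n (or i - n).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)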
2503.18856 | Paul Villoutreix | Daniel Lepe-Soltero, Thierry Arti\`eres, Ana\"is Baudot, Paul
Villoutreix | MODIS: Multi-Omics Data Integration for Small and Unpaired Datasets | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | A key challenge today lies in the ability to efficiently handle multi-omics
data since such multimodal data may provide a more comprehensive overview of
the underlying processes in a system. Yet it comes with challenges: multi-omics
data are most often unpaired and only partially labeled; moreover, only small
amounts of data are available in some situations, such as rare diseases. We
propose MODIS which stands for Multi-Omics Data Integration for Small and
unpaired datasets, a semi-supervised approach to account for these particular
settings. MODIS learns a probabilistic coupling of heterogeneous data
modalities and learns a shared latent space where modalities are aligned. We
rely on artificial data to build controlled experiments to explore how much
supervision is needed for an accurate alignment of modalities, and how our
approach enables dealing with new conditions for which few data are available.
The code is available at https://github.com/VILLOUTREIXLab/MODIS.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 16:33:11 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lepe-Soltero",
"Daniel",
""
],
[
"Artières",
"Thierry",
""
],
[
"Baudot",
"Anaïs",
""
],
[
"Villoutreix",
"Paul",
""
]
] | TITLE: MODIS: Multi-Omics Data Integration for Small and Unpaired Datasets
ABSTRACT: A key challenge today lies in the ability to efficiently handle multi-omics
data since such multimodal data may provide a more comprehensive overview of
the underlying processes in a system. Yet it comes with challenges: multi-omics
data are most often unpaired and only partially labeled; moreover, only small
amounts of data are available in some situations, such as rare diseases. We
propose MODIS which stands for Multi-Omics Data Integration for Small and
unpaired datasets, a semi-supervised approach to account for these particular
settings. MODIS learns a probabilistic coupling of heterogeneous data
modalities and learns a shared latent space where modalities are aligned. We
rely on artificial data to build controlled experiments to explore how much
supervision is needed for an accurate alignment of modalities, and how our
approach enables dealing with new conditions for which few data are available.
The code is available at https://github.com/VILLOUTREIXLab/MODIS.
|
2503.18862 | Tobias Holmes | DeShin Hwa, Tobias Holmes and Klaus Drechsler | Exploring the Integration of Key-Value Attention Into Pure and Hybrid
Transformers for Semantic Segmentation | 6 pages, 3 figures, Preprint. Final version published in:
Bildverarbeitung f\"ur die Medizin 2025, Springer. DOI:
https://doi.org/10.1007/978-3-658-47422-5_71 | Bildverarbeitung f\"ur die Medizin 2025. BVM 2025. Informatik
aktuell. Springer Vieweg, Wiesbaden, pp 305-310 | 10.1007/978-3-658-47422-5_71 | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | While CNNs were long considered state of the art for image processing, the
introduction of Transformer architectures has challenged this position. While
achieving excellent results in image classification and segmentation,
Transformers remain inherently reliant on large training datasets and remain
computationally expensive. A newly introduced Transformer derivative named KV
Transformer shows promising results in synthetic, NLP, and image classification
tasks, while reducing complexity and memory usage. This is especially conducive
to use cases where local inference is required, such as medical screening
applications. We endeavoured to further evaluate the merit of KV Transformers
on semantic segmentation tasks, specifically in the domain of medical imaging.
By directly comparing traditional and KV variants of the same base
architectures, we provide further insight into the practical tradeoffs of
reduced model complexity. We observe a notable reduction in parameter count and
multiply accumulate operations, while achieving similar performance from most
of the KV variant models when directly compared to their QKV implementation.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 16:38:31 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hwa",
"DeShin",
""
],
[
"Holmes",
"Tobias",
""
],
[
"Drechsler",
"Klaus",
""
]
] | TITLE: Exploring the Integration of Key-Value Attention Into Pure and Hybrid
Transformers for Semantic Segmentation
ABSTRACT: While CNNs were long considered state of the art for image processing, the
introduction of Transformer architectures has challenged this position. While
achieving excellent results in image classification and segmentation,
Transformers remain inherently reliant on large training datasets and remain
computationally expensive. A newly introduced Transformer derivative named KV
Transformer shows promising results in synthetic, NLP, and image classification
tasks, while reducing complexity and memory usage. This is especially conducive
to use cases where local inference is required, such as medical screening
applications. We endeavoured to further evaluate the merit of KV Transformers
on semantic segmentation tasks, specifically in the domain of medical imaging.
By directly comparing traditional and KV variants of the same base
architectures, we provide further insight into the practical tradeoffs of
reduced model complexity. We observe a notable reduction in parameter count and
multiply accumulate operations, while achieving similar performance from most
of the KV variant models when directly compared to their QKV implementation.
|
2503.18872 | Gongwei Chen | Yanda Chen, Gongwei Chen, Miao Zhang, Weili Guan, Liqiang Nie | Curriculum Coarse-to-Fine Selection for High-IPC Dataset Distillation | Accepted by CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dataset distillation (DD) excels in synthesizing a small number of images per
class (IPC) but struggles to maintain its effectiveness in high-IPC settings.
Recent works on dataset distillation demonstrate that combining distilled and
real data can mitigate the effectiveness decay. However, our analysis of the
combination paradigm reveals that the current one-shot and independent
selection mechanism induces an incompatibility issue between distilled and real
images. To address this issue, we introduce a novel curriculum coarse-to-fine
selection (CCFS) method for efficient high-IPC dataset distillation. CCFS
employs a curriculum selection framework for real data selection, where we
leverage a coarse-to-fine strategy to select appropriate real data based on the
current synthetic dataset in each curriculum. Extensive experiments validate
CCFS, surpassing the state-of-the-art by +6.6\% on CIFAR-10, +5.8\% on
CIFAR-100, and +3.4\% on Tiny-ImageNet under high-IPC settings. Notably, CCFS
achieves 60.2\% test accuracy on ResNet-18 with a 20\% compression ratio of
Tiny-ImageNet, closely matching full-dataset training with only 0.3\%
degradation. Code: https://github.com/CYDaaa30/CCFS.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 16:47:40 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Yanda",
""
],
[
"Chen",
"Gongwei",
""
],
[
"Zhang",
"Miao",
""
],
[
"Guan",
"Weili",
""
],
[
"Nie",
"Liqiang",
""
]
] | TITLE: Curriculum Coarse-to-Fine Selection for High-IPC Dataset Distillation
ABSTRACT: Dataset distillation (DD) excels in synthesizing a small number of images per
class (IPC) but struggles to maintain its effectiveness in high-IPC settings.
Recent works on dataset distillation demonstrate that combining distilled and
real data can mitigate the effectiveness decay. However, our analysis of the
combination paradigm reveals that the current one-shot and independent
selection mechanism induces an incompatibility issue between distilled and real
images. To address this issue, we introduce a novel curriculum coarse-to-fine
selection (CCFS) method for efficient high-IPC dataset distillation. CCFS
employs a curriculum selection framework for real data selection, where we
leverage a coarse-to-fine strategy to select appropriate real data based on the
current synthetic dataset in each curriculum. Extensive experiments validate
CCFS, surpassing the state-of-the-art by +6.6\% on CIFAR-10, +5.8\% on
CIFAR-100, and +3.4\% on Tiny-ImageNet under high-IPC settings. Notably, CCFS
achieves 60.2\% test accuracy on ResNet-18 with a 20\% compression ratio of
Tiny-ImageNet, closely matching full-dataset training with only 0.3\%
degradation. Code: https://github.com/CYDaaa30/CCFS.
|
2503.18880 | Arda Senocak | Hyeonggon Ryu, Seongyu Kim, Joon Son Chung, Arda Senocak | Seeing Speech and Sound: Distinguishing and Locating Audios in Visual
Scenes | CVPR 2025 | null | null | null | cs.CV cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | We present a unified model capable of simultaneously grounding both spoken
language and non-speech sounds within a visual scene, addressing key
limitations in current audio-visual grounding models. Existing approaches are
typically limited to handling either speech or non-speech sounds independently,
or at best, together but sequentially without mixing. This limitation prevents
them from capturing the complexity of real-world audio sources that are often
mixed. Our approach introduces a 'mix-and-separate' framework with audio-visual
alignment objectives that jointly learn correspondence and disentanglement
using mixed audio. Through these objectives, our model learns to produce
distinct embeddings for each audio type, enabling effective disentanglement and
grounding across mixed audio sources. Additionally, we created a new dataset to
evaluate simultaneous grounding of mixed audio sources, demonstrating that our
model outperforms prior methods. Our approach also achieves comparable or
better performance in standard segmentation and cross-modal retrieval tasks,
highlighting the benefits of our mix-and-separate approach.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 16:56:04 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ryu",
"Hyeonggon",
""
],
[
"Kim",
"Seongyu",
""
],
[
"Chung",
"Joon Son",
""
],
[
"Senocak",
"Arda",
""
]
] | TITLE: Seeing Speech and Sound: Distinguishing and Locating Audios in Visual
Scenes
ABSTRACT: We present a unified model capable of simultaneously grounding both spoken
language and non-speech sounds within a visual scene, addressing key
limitations in current audio-visual grounding models. Existing approaches are
typically limited to handling either speech or non-speech sounds independently,
or at best, together but sequentially without mixing. This limitation prevents
them from capturing the complexity of real-world audio sources that are often
mixed. Our approach introduces a 'mix-and-separate' framework with audio-visual
alignment objectives that jointly learn correspondence and disentanglement
using mixed audio. Through these objectives, our model learns to produce
distinct embeddings for each audio type, enabling effective disentanglement and
grounding across mixed audio sources. Additionally, we created a new dataset to
evaluate simultaneous grounding of mixed audio sources, demonstrating that our
model outperforms prior methods. Our approach also achieves comparable or
better performance in standard segmentation and cross-modal retrieval tasks,
highlighting the benefits of our mix-and-separate approach.
|
2503.18897 | Thomas Chabal | Thomas Chabal, Shizhe Chen, Jean Ponce, Cordelia Schmid | Online 3D Scene Reconstruction Using Neural Object Priors | 3DV 2025. Project page:
https://www.di.ens.fr/willow/research/online-scene-reconstruction/ | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | This paper addresses the problem of reconstructing a scene online at the
level of objects given an RGB-D video sequence. While current object-aware
neural implicit representations hold promise, they are limited in online
reconstruction efficiency and shape completion. Our main contributions to
alleviate the above limitations are twofold. First, we propose a feature grid
interpolation mechanism to continuously update grid-based object-centric neural
implicit representations as new object parts are revealed. Second, we construct
an object library with previously mapped objects in advance and leverage the
corresponding shape priors to initialize geometric object models in new videos,
subsequently completing them with novel views as well as synthesized past views
to avoid losing original object details. Extensive experiments on synthetic
environments from the Replica dataset, real-world ScanNet sequences and videos
captured in our laboratory demonstrate that our approach outperforms
state-of-the-art neural implicit models for this task in terms of
reconstruction accuracy and completeness.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 17:09:36 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chabal",
"Thomas",
""
],
[
"Chen",
"Shizhe",
""
],
[
"Ponce",
"Jean",
""
],
[
"Schmid",
"Cordelia",
""
]
] | TITLE: Online 3D Scene Reconstruction Using Neural Object Priors
ABSTRACT: This paper addresses the problem of reconstructing a scene online at the
level of objects given an RGB-D video sequence. While current object-aware
neural implicit representations hold promise, they are limited in online
reconstruction efficiency and shape completion. Our main contributions to
alleviate the above limitations are twofold. First, we propose a feature grid
interpolation mechanism to continuously update grid-based object-centric neural
implicit representations as new object parts are revealed. Second, we construct
an object library with previously mapped objects in advance and leverage the
corresponding shape priors to initialize geometric object models in new videos,
subsequently completing them with novel views as well as synthesized past views
to avoid losing original object details. Extensive experiments on synthetic
environments from the Replica dataset, real-world ScanNet sequences and videos
captured in our laboratory demonstrate that our approach outperforms
state-of-the-art neural implicit models for this task in terms of
reconstruction accuracy and completeness.
|
2503.18903 | Moussa Kassem Sbeyti | Moussa Kassem Sbeyti and Nadja Klein and Azarm Nowzad and Fikret
Sivrikaya and Sahin Albayrak | Building Blocks for Robust and Effective Semi-Supervised Real-World
Object Detection | Accepted to Transactions on Machine Learning Research (TMLR).
OpenReview: https://openreview.net/forum?id=vRYt8QLKqK | Transactions on Machine Learning Research, 2025 | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Semi-supervised object detection (SSOD) based on pseudo-labeling
significantly reduces dependence on large labeled datasets by effectively
leveraging both labeled and unlabeled data. However, real-world applications of
SSOD often face critical challenges, including class imbalance, label noise,
and labeling errors. We present an in-depth analysis of SSOD under real-world
conditions, uncovering causes of suboptimal pseudo-labeling and key trade-offs
between label quality and quantity. Based on our findings, we propose four
building blocks that can be seamlessly integrated into an SSOD framework. Rare
Class Collage (RCC): a data augmentation method that enhances the
representation of rare classes by creating collages of rare objects. Rare Class
Focus (RCF): a stratified batch sampling strategy that ensures a more balanced
representation of all classes during training. Ground Truth Label Correction
(GLC): a label refinement method that identifies and corrects false, missing,
and noisy ground truth labels by leveraging the consistency of teacher model
predictions. Pseudo-Label Selection (PLS): a selection method for removing
low-quality pseudo-labeled images, guided by a novel metric estimating the
missing detection rate while accounting for class rarity. We validate our
methods through comprehensive experiments on autonomous driving datasets,
resulting in up to 6% increase in SSOD performance. Overall, our investigation
and novel, data-centric, and broadly applicable building blocks enable robust
and effective SSOD in complex, real-world scenarios. Code is available at
https://mos-ks.github.io/publications.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 17:15:24 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Sbeyti",
"Moussa Kassem",
""
],
[
"Klein",
"Nadja",
""
],
[
"Nowzad",
"Azarm",
""
],
[
"Sivrikaya",
"Fikret",
""
],
[
"Albayrak",
"Sahin",
""
]
] | TITLE: Building Blocks for Robust and Effective Semi-Supervised Real-World
Object Detection
ABSTRACT: Semi-supervised object detection (SSOD) based on pseudo-labeling
significantly reduces dependence on large labeled datasets by effectively
leveraging both labeled and unlabeled data. However, real-world applications of
SSOD often face critical challenges, including class imbalance, label noise,
and labeling errors. We present an in-depth analysis of SSOD under real-world
conditions, uncovering causes of suboptimal pseudo-labeling and key trade-offs
between label quality and quantity. Based on our findings, we propose four
building blocks that can be seamlessly integrated into an SSOD framework. Rare
Class Collage (RCC): a data augmentation method that enhances the
representation of rare classes by creating collages of rare objects. Rare Class
Focus (RCF): a stratified batch sampling strategy that ensures a more balanced
representation of all classes during training. Ground Truth Label Correction
(GLC): a label refinement method that identifies and corrects false, missing,
and noisy ground truth labels by leveraging the consistency of teacher model
predictions. Pseudo-Label Selection (PLS): a selection method for removing
low-quality pseudo-labeled images, guided by a novel metric estimating the
missing detection rate while accounting for class rarity. We validate our
methods through comprehensive experiments on autonomous driving datasets,
resulting in up to 6% increase in SSOD performance. Overall, our investigation
and novel, data-centric, and broadly applicable building blocks enable robust
and effective SSOD in complex, real-world scenarios. Code is available at
https://mos-ks.github.io/publications.
|
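As a rough illustration of the Rare Class Focus (RCF) idea in record 2503.18903 above, class-balanced batch sampling can be approximated in PyTorch with a WeightedRandomSampler whose per-image weight is driven by the rarest class the image contains. The weighting rule here is an assumption for illustration, not the authors' exact strategy.

import numpy as np
from torch.utils.data import WeightedRandomSampler

def rare_class_sampler(image_class_ids, class_counts, num_samples):
    # image_class_ids: list of lists, the class ids present in each image.
    # class_counts: dict mapping class id to its instance count in the dataset.
    weights = []
    for classes in image_class_ids:
        # Weight each image by the inverse frequency of its rarest class,
        # so images containing rare classes are drawn more often.
        rarest = min(class_counts[c] for c in classes) if classes else max(class_counts.values())
        weights.append(1.0 / rarest)
    weights = np.asarray(weights, dtype=np.float64)
    return WeightedRandomSampler(weights.tolist(), num_samples=num_samples, replacement=True)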
2503.18928 | Sabah Shahnoor Anis | Sabah Shahnoor Anis, Devin M. Kellis, Kris Ford Kaigler, Marlene A.
Wilson, Christian O'Reilly | A Reliable and Efficient Detection Pipeline for Rodent Ultrasonic
Vocalizations | Accepted for publication in the proceedings of the 7th International
Conference on Advances in Signal Processing and Artificial Intelligence
(ASPAI' 2025), 8-10 April 2025, Innsbruck, Austria | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Analyzing ultrasonic vocalizations (USVs) is crucial for understanding
rodents' affective states and social behaviors, but the manual analysis is
time-consuming and prone to errors. Automated USV detection systems have been
developed to address these challenges. Yet, these systems often rely on machine
learning and fail to generalize effectively to new datasets. To tackle these
shortcomings, we introduce ContourUSV, an efficient automated system for
detecting USVs from audio recordings. Our pipeline includes spectrogram
generation, cleaning, pre-processing, contour detection, post-processing, and
evaluation against manual annotations. To ensure robustness and reliability, we
compared ContourUSV with three state-of-the-art systems using an existing
open-access USV dataset (USVSEG) and a second dataset we are releasing publicly
along with this paper. On average, across the two datasets, ContourUSV
outperformed the other three systems with a 1.51x improvement in precision,
1.17x in recall, 1.80x in F1 score, and 1.49x in specificity while achieving an
average speedup of 117.07x.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 17:50:49 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Anis",
"Sabah Shahnoor",
""
],
[
"Kellis",
"Devin M.",
""
],
[
"Kaigler",
"Kris Ford",
""
],
[
"Wilson",
"Marlene A.",
""
],
[
"O'Reilly",
"Christian",
""
]
] | TITLE: A Reliable and Efficient Detection Pipeline for Rodent Ultrasonic
Vocalizations
ABSTRACT: Analyzing ultrasonic vocalizations (USVs) is crucial for understanding
rodents' affective states and social behaviors, but the manual analysis is
time-consuming and prone to errors. Automated USV detection systems have been
developed to address these challenges. Yet, these systems often rely on machine
learning and fail to generalize effectively to new datasets. To tackle these
shortcomings, we introduce ContourUSV, an efficient automated system for
detecting USVs from audio recordings. Our pipeline includes spectrogram
generation, cleaning, pre-processing, contour detection, post-processing, and
evaluation against manual annotations. To ensure robustness and reliability, we
compared ContourUSV with three state-of-the-art systems using an existing
open-access USV dataset (USVSEG) and a second dataset we are releasing publicly
along with this paper. On average, across the two datasets, ContourUSV
outperformed the other three systems with a 1.51x improvement in precision,
1.17x in recall, 1.80x in F1 score, and 1.49x in specificity while achieving an
average speedup of 117.07x.
|
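The detection pipeline in record 2503.18928 above (spectrogram generation, cleaning, contour detection) can be sketched with standard SciPy and OpenCV calls as below; thresholds and parameters are illustrative guesses, not the published ContourUSV settings.

import cv2
import numpy as np
from scipy.signal import spectrogram

def detect_usv_contours(audio: np.ndarray, fs: int, db_threshold: float = -60.0):
    # Compute a spectrogram of the recording (audio: 1-D float array, fs: sample rate in Hz).
    f, t, sxx = spectrogram(audio, fs=fs, nperseg=512, noverlap=384)
    sxx_db = 10.0 * np.log10(sxx + 1e-12)
    # Threshold the spectrogram and extract connected contours as candidate calls.
    mask = (sxx_db > db_threshold).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Each contour's bounding box gives an approximate time-frequency extent of a vocalization.
    return [cv2.boundingRect(c) for c in contours]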
2503.18933 | Enrico Pallotta | Enrico Pallotta, Sina Mokhtarzadeh Azar, Shuai Li, Olga Zatsarynna,
Juergen Gall | SyncVP: Joint Diffusion for Synchronous Multi-Modal Video Prediction | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Predicting future video frames is essential for decision-making systems, yet
RGB frames alone often lack the information needed to fully capture the
underlying complexities of the real world. To address this limitation, we
propose a multi-modal framework for Synchronous Video Prediction (SyncVP) that
incorporates complementary data modalities, enhancing the richness and accuracy
of future predictions. SyncVP builds on pre-trained modality-specific diffusion
models and introduces an efficient spatio-temporal cross-attention module to
enable effective information sharing across modalities. We evaluate SyncVP on
standard benchmark datasets, such as Cityscapes and BAIR, using depth as an
additional modality. We furthermore demonstrate its generalization to other
modalities on SYNTHIA with semantic information and ERA5-Land with climate
data. Notably, SyncVP achieves state-of-the-art performance, even in scenarios
where only one modality is present, demonstrating its robustness and potential
for a wide range of applications.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 17:53:44 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Pallotta",
"Enrico",
""
],
[
"Azar",
"Sina Mokhtarzadeh",
""
],
[
"Li",
"Shuai",
""
],
[
"Zatsarynna",
"Olga",
""
],
[
"Gall",
"Juergen",
""
]
] | TITLE: SyncVP: Joint Diffusion for Synchronous Multi-Modal Video Prediction
ABSTRACT: Predicting future video frames is essential for decision-making systems, yet
RGB frames alone often lack the information needed to fully capture the
underlying complexities of the real world. To address this limitation, we
propose a multi-modal framework for Synchronous Video Prediction (SyncVP) that
incorporates complementary data modalities, enhancing the richness and accuracy
of future predictions. SyncVP builds on pre-trained modality-specific diffusion
models and introduces an efficient spatio-temporal cross-attention module to
enable effective information sharing across modalities. We evaluate SyncVP on
standard benchmark datasets, such as Cityscapes and BAIR, using depth as an
additional modality. We furthermore demonstrate its generalization to other
modalities on SYNTHIA with semantic information and ERA5-Land with climate
data. Notably, SyncVP achieves state-of-the-art performance, even in scenarios
where only one modality is present, demonstrating its robustness and potential
for a wide range of applications.
|
2503.18944 | Karim Abou Zeid | Karim Abou Zeid, Kadir Yilmaz, Daan de Geus, Alexander Hermans, David
Adrian, Timm Linder, Bastian Leibe | DINO in the Room: Leveraging 2D Foundation Models for 3D Segmentation | Project page at https://vision.rwth-aachen.de/DITR | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision foundation models (VFMs) trained on large-scale image datasets provide
high-quality features that have significantly advanced 2D visual recognition.
However, their potential in 3D vision remains largely untapped, despite the
common availability of 2D images alongside 3D point cloud datasets. While
significant research has been dedicated to 2D-3D fusion, recent
state-of-the-art 3D methods predominantly focus on 3D data, leaving the
integration of VFMs into 3D models underexplored. In this work, we challenge
this trend by introducing DITR, a simple yet effective approach that extracts
2D foundation model features, projects them to 3D, and finally injects them
into a 3D point cloud segmentation model. DITR achieves state-of-the-art
results on both indoor and outdoor 3D semantic segmentation benchmarks. To
enable the use of VFMs even when images are unavailable during inference, we
further propose to distill 2D foundation models into a 3D backbone as a
pretraining task. By initializing the 3D backbone with knowledge distilled from
2D VFMs, we create a strong basis for downstream 3D segmentation tasks,
ultimately boosting performance across various datasets.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 17:59:11 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zeid",
"Karim Abou",
""
],
[
"Yilmaz",
"Kadir",
""
],
[
"de Geus",
"Daan",
""
],
[
"Hermans",
"Alexander",
""
],
[
"Adrian",
"David",
""
],
[
"Linder",
"Timm",
""
],
[
"Leibe",
"Bastian",
""
]
] | TITLE: DINO in the Room: Leveraging 2D Foundation Models for 3D Segmentation
ABSTRACT: Vision foundation models (VFMs) trained on large-scale image datasets provide
high-quality features that have significantly advanced 2D visual recognition.
However, their potential in 3D vision remains largely untapped, despite the
common availability of 2D images alongside 3D point cloud datasets. While
significant research has been dedicated to 2D-3D fusion, recent
state-of-the-art 3D methods predominantly focus on 3D data, leaving the
integration of VFMs into 3D models underexplored. In this work, we challenge
this trend by introducing DITR, a simple yet effective approach that extracts
2D foundation model features, projects them to 3D, and finally injects them
into a 3D point cloud segmentation model. DITR achieves state-of-the-art
results on both indoor and outdoor 3D semantic segmentation benchmarks. To
enable the use of VFMs even when images are unavailable during inference, we
further propose to distill 2D foundation models into a 3D backbone as a
pretraining task. By initializing the 3D backbone with knowledge distilled from
2D VFMs, we create a strong basis for downstream 3D segmentation tasks,
ultimately boosting performance across various datasets.
|
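For record 2503.18944 above, the step of projecting 2D foundation-model features onto 3D points can be pictured with the following NumPy sketch, which projects camera-frame points through a pinhole intrinsics matrix and samples a 2D feature map at the projected pixels. This is a generic projection routine, not the DITR implementation.

import numpy as np

def lift_2d_features(points_cam: np.ndarray, feat_map: np.ndarray, K: np.ndarray) -> np.ndarray:
    # points_cam: (N, 3) points in camera coordinates; feat_map: (H, W, C); K: (3, 3) intrinsics.
    uvw = (K @ points_cam.T).T
    uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)
    h, w, _ = feat_map.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    feats = feat_map[v, u].copy()                  # nearest-neighbour sampling, shape (N, C)
    feats[points_cam[:, 2] <= 0] = 0.0             # points behind the camera get zero features
    return feats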
2503.18947 | Jae Joong Lee | Jae Joong Lee, Bedrich Benes, Raymond A. Yeh | Tuning-Free Amodal Segmentation via the Occlusion-Free Bias of
Inpainting Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Amodal segmentation aims to predict segmentation masks for both the visible
and occluded regions of an object. Most existing works formulate this as a
supervised learning problem, requiring manually annotated amodal masks or
synthetic training data. Consequently, their performance depends on the quality
of the datasets, which often lack diversity and scale. This work introduces a
tuning-free approach that repurposes pretrained diffusion-based inpainting
models for amodal segmentation. Our approach is motivated by the
"occlusion-free bias" of inpainting models, i.e., the inpainted objects tend to
be complete objects without occlusions. Specifically, we reconstruct the
occluded regions of an object via inpainting and then apply segmentation, all
without additional training or fine-tuning. Experiments on five datasets
demonstrate the generalizability and robustness of our approach. On average,
our approach achieves 5.3% more accurate masks over the state-of-the-art.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 17:59:56 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lee",
"Jae Joong",
""
],
[
"Benes",
"Bedrich",
""
],
[
"Yeh",
"Raymond A.",
""
]
] | TITLE: Tuning-Free Amodal Segmentation via the Occlusion-Free Bias of
Inpainting Models
ABSTRACT: Amodal segmentation aims to predict segmentation masks for both the visible
and occluded regions of an object. Most existing works formulate this as a
supervised learning problem, requiring manually annotated amodal masks or
synthetic training data. Consequently, their performance depends on the quality
of the datasets, which often lack diversity and scale. This work introduces a
tuning-free approach that repurposes pretrained diffusion-based inpainting
models for amodal segmentation. Our approach is motivated by the
"occlusion-free bias" of inpainting models, i.e., the inpainted objects tend to
be complete objects without occlusions. Specifically, we reconstruct the
occluded regions of an object via inpainting and then apply segmentation, all
without additional training or fine-tuning. Experiments on five datasets
demonstrate the generalizability and robustness of our approach. On average,
our approach achieves 5.3% more accurate masks over the state-of-the-art.
|
2109.13479 | Arun Sharma PhD | Arun K. Sharma and Nishchal K. Verma | Knowledge Transfer based Evolutionary Deep Neural Network for
Intelligent Fault Diagnosis | Submitted to IEEE Transactions on Sustainable Computing | null | null | null | eess.SP cs.AI cs.SY eess.SY math.OC | http://creativecommons.org/licenses/by/4.0/ | A faster response with commendable accuracy in intelligent systems is
essential for the reliability and smooth operations of industrial machines. Two
main challenges affect the design of such intelligent systems: (i) the
selection of a suitable model and (ii) domain adaptation if there is a
continuous change in operating conditions. Therefore, we propose an
evolutionary Net2Net transformation (EvoN2N) that finds the best suitable DNN
architecture with limited availability of labeled data samples. A Net2Net
transformation-based quick learning algorithm is used within the evolutionary
framework of the non-dominated sorting genetic algorithm II (NSGA-II) to obtain
the best DNN architecture; the algorithm transfers knowledge from one generation
to the next for faster fitness evaluation. The proposed framework can obtain the best model for
intelligent fault diagnosis without a long and time-consuming search process.
The proposed framework has been validated on the Case Western Reserve
University dataset, the Paderborn University dataset, and the gearbox fault
detection dataset under different operating conditions. The best models
obtained are capable of demonstrating an excellent diagnostic performance and
classification accuracy of almost up to 100% for most of the operating
conditions.
| [
{
"version": "v1",
"created": "Tue, 28 Sep 2021 04:31:23 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Feb 2022 12:45:24 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Dec 2024 05:50:39 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Feb 2025 06:43:51 GMT"
},
{
"version": "v5",
"created": "Fri, 21 Mar 2025 11:54:41 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Sharma",
"Arun K.",
""
],
[
"Verma",
"Nishchal K.",
""
]
] | TITLE: Knowledge Transfer based Evolutionary Deep Neural Network for
Intelligent Fault Diagnosis
ABSTRACT: A faster response with commendable accuracy in intelligent systems is
essential for the reliability and smooth operations of industrial machines. Two
main challenges affect the design of such intelligent systems: (i) the
selection of a suitable model and (ii) domain adaptation if there is a
continuous change in operating conditions. Therefore, we propose an
evolutionary Net2Net transformation (EvoN2N) that finds the best suitable DNN
architecture with limited availability of labeled data samples. A Net2Net
transformation-based quick learning algorithm is used within the evolutionary
framework of the non-dominated sorting genetic algorithm II (NSGA-II) to obtain
the best DNN architecture; the algorithm transfers knowledge from one generation
to the next for faster fitness evaluation. The proposed framework can obtain the best model for
intelligent fault diagnosis without a long and time-consuming search process.
The proposed framework has been validated on the Case Western Reserve
University dataset, the Paderborn University dataset, and the gearbox fault
detection dataset under different operating conditions. The best models
obtained are capable of demonstrating an excellent diagnostic performance and
classification accuracy of almost up to 100% for most of the operating
conditions.
|
2209.14790 | Ryan Cory-Wright | Ryan Cory-Wright, Jean Pauphilet | Sparse PCA With Multiple Components | Updated version with improved algorithmics and a new section
containing a generalization of the Gershgorin circle theorem; comments or
suggestions welcome | null | null | null | math.OC cs.LG math.ST stat.ML stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparse Principal Component Analysis (sPCA) is a cardinal technique for
obtaining combinations of features, or principal components (PCs), that explain
the variance of high-dimensional datasets in an interpretable manner. This
involves solving a sparsity and orthogonality constrained convex maximization
problem, which is extremely computationally challenging. Most existing works
address sparse PCA via methods-such as iteratively computing one sparse PC and
deflating the covariance matrix-that do not guarantee the orthogonality, let
alone the optimality, of the resulting solution when we seek multiple mutually
orthogonal PCs. We challenge this status by reformulating the orthogonality
conditions as rank constraints and optimizing over the sparsity and rank
constraints simultaneously. We design tight semidefinite relaxations to supply
high-quality upper bounds, which we strengthen via additional second-order cone
inequalities when each PC's individual sparsity is specified. Further, we
derive a combinatorial upper bound on the maximum amount of variance explained
as a function of the support. We exploit these relaxations and bounds to
propose exact methods and rounding mechanisms that, together, obtain solutions
with a bound gap on the order of 0%-15% for real-world datasets with p = 100s
or 1000s of features and r \in {2, 3} components. Numerically, our algorithms
match (and sometimes surpass) the best performing methods in terms of fraction
of variance explained and systematically return PCs that are sparse and
orthogonal. In contrast, we find that existing methods like deflation return
solutions that violate the orthogonality constraints, even when the data is
generated according to sparse orthogonal PCs. Altogether, our approach solves
sparse PCA problems with multiple components to certifiable (near) optimality
in a practically tractable fashion.
| [
{
"version": "v1",
"created": "Thu, 29 Sep 2022 13:57:18 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Oct 2023 16:10:09 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Mar 2025 14:52:20 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Cory-Wright",
"Ryan",
""
],
[
"Pauphilet",
"Jean",
""
]
] | TITLE: Sparse PCA With Multiple Components
ABSTRACT: Sparse Principal Component Analysis (sPCA) is a cardinal technique for
obtaining combinations of features, or principal components (PCs), that explain
the variance of high-dimensional datasets in an interpretable manner. This
involves solving a sparsity and orthogonality constrained convex maximization
problem, which is extremely computationally challenging. Most existing works
address sparse PCA via methods-such as iteratively computing one sparse PC and
deflating the covariance matrix-that do not guarantee the orthogonality, let
alone the optimality, of the resulting solution when we seek multiple mutually
orthogonal PCs. We challenge this status by reformulating the orthogonality
conditions as rank constraints and optimizing over the sparsity and rank
constraints simultaneously. We design tight semidefinite relaxations to supply
high-quality upper bounds, which we strengthen via additional second-order cone
inequalities when each PC's individual sparsity is specified. Further, we
derive a combinatorial upper bound on the maximum amount of variance explained
as a function of the support. We exploit these relaxations and bounds to
propose exact methods and rounding mechanisms that, together, obtain solutions
with a bound gap on the order of 0%-15% for real-world datasets with p = 100s
or 1000s of features and r \in {2, 3} components. Numerically, our algorithms
match (and sometimes surpass) the best performing methods in terms of fraction
of variance explained and systematically return PCs that are sparse and
orthogonal. In contrast, we find that existing methods like deflation return
solutions that violate the orthogonality constraints, even when the data is
generated according to sparse orthogonal PCs. Altogether, our approach solves
sparse PCA problems with multiple components to certifiable (near) optimality
in a practically tractable fashion.
|
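For reference against record 2209.14790 above, the multi-component sparse PCA problem is commonly written as the following formulation (a standard statement of the problem; the paper's exact constraints may differ):

\begin{equation*}
\max_{U \in \mathbb{R}^{p \times r}} \; \operatorname{tr}\!\left(U^{\top} \Sigma U\right)
\quad \text{s.t.} \quad U^{\top} U = I_r, \qquad
\|U_{\cdot j}\|_0 \le k_j \;\; \text{for } j = 1, \dots, r,
\end{equation*}
% where \Sigma is the sample covariance matrix, the columns of U are the r sparse
% principal components, and k_j bounds each component's support size.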
2302.03086 | Branton DeMoss | Branton DeMoss, Paul Duckworth, Jakob Foerster, Nick Hawes, Ingmar
Posner | DITTO: Offline Imitation Learning with World Models | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For imitation learning algorithms to scale to real-world challenges, they
must handle high-dimensional observations, offline learning, and policy-induced
covariate-shift. We propose DITTO, an offline imitation learning algorithm
which addresses all three of these problems. DITTO optimizes a novel distance
metric in the latent space of a learned world model: First, we train a world
model on all available trajectory data, then, the imitation agent is unrolled
from expert start states in the learned model, and penalized for its latent
divergence from the expert dataset over multiple time steps. We optimize this
multi-step latent divergence using standard reinforcement learning algorithms,
which provably induces imitation learning, and empirically achieves
state-of-the-art performance and sample efficiency on a range of Atari
environments from pixels, without any online environment access. We also adapt
other standard imitation learning algorithms to the world model setting, and
show that this considerably improves their performance. Our results show how
creative use of world models can lead to a simple, robust, and
highly-performant policy-learning framework.
| [
{
"version": "v1",
"created": "Mon, 6 Feb 2023 19:41:18 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 12:00:05 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"DeMoss",
"Branton",
""
],
[
"Duckworth",
"Paul",
""
],
[
"Foerster",
"Jakob",
""
],
[
"Hawes",
"Nick",
""
],
[
"Posner",
"Ingmar",
""
]
] | TITLE: DITTO: Offline Imitation Learning with World Models
ABSTRACT: For imitation learning algorithms to scale to real-world challenges, they
must handle high-dimensional observations, offline learning, and policy-induced
covariate-shift. We propose DITTO, an offline imitation learning algorithm
which addresses all three of these problems. DITTO optimizes a novel distance
metric in the latent space of a learned world model: First, we train a world
model on all available trajectory data, then, the imitation agent is unrolled
from expert start states in the learned model, and penalized for its latent
divergence from the expert dataset over multiple time steps. We optimize this
multi-step latent divergence using standard reinforcement learning algorithms,
which provably induces imitation learning, and empirically achieves
state-of-the-art performance and sample efficiency on a range of Atari
environments from pixels, without any online environment access. We also adapt
other standard imitation learning algorithms to the world model setting, and
show that this considerably improves their performance. Our results show how
creative use of world models can lead to a simple, robust, and
highly-performant policy-learning framework.
|
2310.04901 | Samet Hicsonmez | Samet Hicsonmez, Nermin Samet, Fidan Samet, Oguz Bakir, Emre Akbas,
Pinar Duygulu | WAIT: Feature Warping for Animation to Illustration video Translation
using GANs | Accepted to Neurocomputing | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore a new domain for video-to-video translation.
Motivated by the availability of animation movies that are adapted from
illustrated books for children, we aim to stylize these videos with the style
of the original illustrations. Current state-of-the-art video-to-video
translation models rely on having a video sequence or a single style image to
stylize an input video. We introduce a new problem for video stylizing where an
unordered set of images are used. This is a challenging task for two reasons:
i) we do not have the advantage of temporal consistency as in video sequences;
ii) it is more difficult to obtain consistent styles for video frames from a
set of unordered images compared to using a single image. Most of the
video-to-video translation methods are built on an image-to-image translation
model, and integrate additional networks such as optical flow, or temporal
predictors to capture temporal relations. These additional networks make the
model training and inference complicated and slow down the process. To ensure
temporal coherency in video-to-video style transfer, we propose a new generator
network with feature warping layers which overcomes the limitations of the
previous methods. We show the effectiveness of our method on three datasets
both qualitatively and quantitatively. Code and pretrained models are available
at https://github.com/giddyyupp/wait.
| [
{
"version": "v1",
"created": "Sat, 7 Oct 2023 19:45:24 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 11:48:35 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Hicsonmez",
"Samet",
""
],
[
"Samet",
"Nermin",
""
],
[
"Samet",
"Fidan",
""
],
[
"Bakir",
"Oguz",
""
],
[
"Akbas",
"Emre",
""
],
[
"Duygulu",
"Pinar",
""
]
] | TITLE: WAIT: Feature Warping for Animation to Illustration video Translation
using GANs
ABSTRACT: In this paper, we explore a new domain for video-to-video translation.
Motivated by the availability of animation movies that are adapted from
illustrated books for children, we aim to stylize these videos with the style
of the original illustrations. Current state-of-the-art video-to-video
translation models rely on having a video sequence or a single style image to
stylize an input video. We introduce a new problem for video stylizing where an
unordered set of images are used. This is a challenging task for two reasons:
i) we do not have the advantage of temporal consistency as in video sequences;
ii) it is more difficult to obtain consistent styles for video frames from a
set of unordered images compared to using a single image. Most of the
video-to-video translation methods are built on an image-to-image translation
model, and integrate additional networks such as optical flow, or temporal
predictors to capture temporal relations. These additional networks make the
model training and inference complicated and slow down the process. To ensure
temporal coherency in video-to-video style transfer, we propose a new generator
network with feature warping layers which overcomes the limitations of the
previous methods. We show the effectiveness of our method on three datasets
both qualitatively and quantitatively. Code and pretrained models are available
at https://github.com/giddyyupp/wait.
|
2310.08848 | Huili Cai | Huili Cai, Xiang Zhang and Xiaofeng Liu | Semi-Supervised End-To-End Contrastive Learning For Time Series
Classification | Submitted to NeurIPS 2023 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series classification is a critical task in various domains, such as
finance, healthcare, and sensor data analysis. Unsupervised contrastive
learning has garnered significant interest in learning effective
representations from time series data with limited labels. The prevalent
approach in existing contrastive learning methods consists of two separate
stages: pre-training the encoder on unlabeled datasets and fine-tuning the
well-trained model on a small-scale labeled dataset. However, such two-stage
approaches suffer from several shortcomings, such as the inability of
unsupervised pre-training contrastive loss to directly affect downstream
fine-tuning classifiers, and the lack of exploiting the classification loss
which is guided by valuable ground truth. In this paper, we propose an
end-to-end model called SLOTS (Semi-supervised Learning fOr Time
clasSification). SLOTS receives semi-labeled datasets, comprising a large
number of unlabeled samples and a small proportion of labeled samples, and maps
them to an embedding space through an encoder. We calculate not only the
unsupervised contrastive loss but also measure the supervised contrastive loss
on the samples with ground truth. The learned embeddings are fed into a
classifier, and the classification loss is calculated using the available true
labels. The unsupervised, supervised contrastive losses and classification loss
are jointly used to optimize the encoder and classifier. We evaluate SLOTS by
comparing it with ten state-of-the-art methods across five datasets. The
results demonstrate that SLOTS is a simple yet effective framework. When
compared to the two-stage framework, our end-to-end SLOTS utilizes the same
input data, consumes a similar computational cost, but delivers significantly
improved performance. We release code and datasets at
https://anonymous.4open.science/r/SLOTS-242E.
| [
{
"version": "v1",
"created": "Fri, 13 Oct 2023 04:22:21 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 03:06:40 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Cai",
"Huili",
""
],
[
"Zhang",
"Xiang",
""
],
[
"Liu",
"Xiaofeng",
""
]
] | TITLE: Semi-Supervised End-To-End Contrastive Learning For Time Series
Classification
ABSTRACT: Time series classification is a critical task in various domains, such as
finance, healthcare, and sensor data analysis. Unsupervised contrastive
learning has garnered significant interest in learning effective
representations from time series data with limited labels. The prevalent
approach in existing contrastive learning methods consists of two separate
stages: pre-training the encoder on unlabeled datasets and fine-tuning the
well-trained model on a small-scale labeled dataset. However, such two-stage
approaches suffer from several shortcomings, such as the inability of
unsupervised pre-training contrastive loss to directly affect downstream
fine-tuning classifiers, and the lack of exploiting the classification loss
which is guided by valuable ground truth. In this paper, we propose an
end-to-end model called SLOTS (Semi-supervised Learning fOr Time
clasSification). SLOTS receives semi-labeled datasets, comprising a large
number of unlabeled samples and a small proportion of labeled samples, and maps
them to an embedding space through an encoder. We calculate not only the
unsupervised contrastive loss but also measure the supervised contrastive loss
on the samples with ground truth. The learned embeddings are fed into a
classifier, and the classification loss is calculated using the available true
labels. The unsupervised, supervised contrastive losses and classification loss
are jointly used to optimize the encoder and classifier. We evaluate SLOTS by
comparing it with ten state-of-the-art methods across five datasets. The
results demonstrate that SLOTS is a simple yet effective framework. When
compared to the two-stage framework, our end-to-end SLOTS utilizes the same
input data, consumes a similar computational cost, but delivers significantly
improved performance. We release code and datasets at
https://anonymous.4open.science/r/SLOTS-242E.
|
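The SLOTS record above describes jointly optimizing an unsupervised contrastive loss, a supervised contrastive loss on labeled samples, and a classification loss in one end-to-end pass. The sketch below is only a hedged illustration of that kind of joint objective, not the authors' released code: the 1D-CNN encoder, the jitter augmentation, the NT-Xent contrastive term, and the class count are assumptions, and the supervised contrastive term (the same pattern with label-defined positives) is omitted for brevity.

```python
# Minimal, illustrative joint semi-supervised objective (assumptions noted above).
import torch
import torch.nn.functional as F
from torch import nn

class Encoder(nn.Module):
    def __init__(self, in_ch=1, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, emb_dim))

    def forward(self, x):                          # x: (N, C, T)
        return F.normalize(self.net(x), dim=-1)

def nt_xent(z1, z2, tau=0.5):
    """Unsupervised contrastive loss; positives sit N rows apart."""
    z = torch.cat([z1, z2], dim=0)                 # (2N, D)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float("-inf"))              # drop self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

encoder, clf = Encoder(), nn.Linear(64, 5)         # 5 classes is an assumption
opt = torch.optim.Adam(list(encoder.parameters()) + list(clf.parameters()), lr=1e-3)

x_all = torch.randn(32, 1, 128)                    # unlabeled + labeled series
x_lab, y_lab = torch.randn(8, 1, 128), torch.randint(0, 5, (8,))

# Two augmented views for the contrastive term (jitter as a stand-in augmentation).
z1 = encoder(x_all + 0.01 * torch.randn_like(x_all))
z2 = encoder(x_all)
loss = nt_xent(z1, z2) + F.cross_entropy(clf(encoder(x_lab)), y_lab)
loss.backward()                                    # one backward pass updates
opt.step()                                         # encoder and classifier jointly
```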
2311.17978 | Kevin Klein | Kevin Klein, Antoine Muller, Alyssa Wohde, Alexander V. Gorelik,
Volker Heyd, Ralf L\"ammel, Yoan Diekmann, Maxime Brami | AutArch: An AI-assisted workflow for object detection and automated
recording in archaeological catalogues | null | null | null | null | cs.CV cs.GR cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The context of this paper is the creation of large uniform archaeological
datasets from heterogeneous published resources, such as find catalogues - with
the help of AI and Big Data. The paper is concerned with the challenge of
consistent assemblages of archaeological data. We cannot simply combine
existing records, as they differ in terms of quality and recording standards.
Thus, records have to be recreated from published archaeological illustrations.
This is only a viable path with the help of automation. The contribution of
this paper is a new workflow for collecting data from archaeological find
catalogues available as legacy resources, such as archaeological drawings and
photographs in large unsorted PDF files; the workflow relies on custom software
(AutArch) supporting image processing, object detection, and interactive means
of validating and adjusting automatically retrieved data. We integrate
artificial intelligence (AI) in terms of neural networks for object detection
and classification into the workflow, thereby speeding up, automating, and
standardising data collection. Objects commonly found in archaeological
catalogues - such as graves, skeletons, ceramics, ornaments, stone tools and
maps - are detected. Those objects are spatially related and analysed to
extract real-life attributes, such as the size and orientation of graves based
on the north arrow and the scale. We also automate recording of geometric
whole-outlines through contour detection, as an alternative to landmark-based
geometric morphometrics. Detected objects, contours, and other automatically
retrieved data can be manually validated and adjusted. We use third millennium
BC Europe (encompassing cultures such as 'Corded Ware' and 'Bell Beaker', and
their burial practices) as a 'testing ground' and for evaluation purposes; this
includes a user study for the workflow and the AutArch software.
| [
{
"version": "v1",
"created": "Wed, 29 Nov 2023 17:24:04 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Feb 2024 14:04:05 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Mar 2025 10:15:21 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Klein",
"Kevin",
""
],
[
"Muller",
"Antoine",
""
],
[
"Wohde",
"Alyssa",
""
],
[
"Gorelik",
"Alexander V.",
""
],
[
"Heyd",
"Volker",
""
],
[
"Lämmel",
"Ralf",
""
],
[
"Diekmann",
"Yoan",
""
],
[
"Brami",
"Maxime",
""
]
] | TITLE: AutArch: An AI-assisted workflow for object detection and automated
recording in archaeological catalogues
ABSTRACT: The context of this paper is the creation of large uniform archaeological
datasets from heterogeneous published resources, such as find catalogues - with
the help of AI and Big Data. The paper is concerned with the challenge of
consistent assemblages of archaeological data. We cannot simply combine
existing records, as they differ in terms of quality and recording standards.
Thus, records have to be recreated from published archaeological illustrations.
This is only a viable path with the help of automation. The contribution of
this paper is a new workflow for collecting data from archaeological find
catalogues available as legacy resources, such as archaeological drawings and
photographs in large unsorted PDF files; the workflow relies on custom software
(AutArch) supporting image processing, object detection, and interactive means
of validating and adjusting automatically retrieved data. We integrate
artificial intelligence (AI) in terms of neural networks for object detection
and classification into the workflow, thereby speeding up, automating, and
standardising data collection. Objects commonly found in archaeological
catalogues - such as graves, skeletons, ceramics, ornaments, stone tools and
maps - are detected. Those objects are spatially related and analysed to
extract real-life attributes, such as the size and orientation of graves based
on the north arrow and the scale. We also automate recording of geometric
whole-outlines through contour detection, as an alternative to landmark-based
geometric morphometrics. Detected objects, contours, and other automatically
retrieved data can be manually validated and adjusted. We use third millennium
BC Europe (encompassing cultures such as 'Corded Ware' and 'Bell Beaker', and
their burial practices) as a 'testing ground' and for evaluation purposes; this
includes a user study for the workflow and the AutArch software.
|
2312.00508 | Ruitong Liu | Ruitong Liu, Yanbin Wang, Zhenhao Guo, Haitao Xu, Zhan Qin, Wenrui Ma,
Fan Zhang | TransURL: Improving malicious URL detection with multi-layer Transformer
encoding and multi-scale pyramid features | 19 pages, 7 figures | Computer Networks 253 (2024) 110707 | 10.1016/j.comnet.2024.11070 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning progress is advancing the detection of malicious URLs.
However, advanced Transformers applied to URLs face difficulties in extracting
local information, character-level details, and structural relationships. To
address these challenges, we propose a novel approach for malicious URL
detection, named TransURL. This method is implemented by co-training the
character-aware Transformer with three feature modules: Multi-Layer Encoding,
Multi-Scale Feature Learning, and Spatial Pyramid Attention. This specialized
Transformer enables TransURL to extract embeddings with character-level
information from URL token sequences, with the three modules aiding the fusion
of multi-layer Transformer encodings and the capture of multi-scale local
details and structural relationships. The proposed method is evaluated across
several challenging scenarios, including class imbalance learning,
multi-classification, cross-dataset testing, and adversarial sample attacks.
Experimental results demonstrate a significant improvement compared to previous
methods. For instance, it achieved a peak F1-score improvement of 40% in
class-imbalanced scenarios and surpassed the best baseline by 14.13% in
accuracy for adversarial attack scenarios. Additionally, a case study
demonstrated that our method accurately identified all 30 active malicious web
pages, whereas two previous state-of-the-art methods missed 4 and 7 malicious
web pages, respectively. The codes and data are available at:
https://github.com/Vul-det/TransURL/.
| [
{
"version": "v1",
"created": "Fri, 1 Dec 2023 11:27:00 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Dec 2023 16:46:54 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Mar 2025 13:48:59 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Liu",
"Ruitong",
""
],
[
"Wang",
"Yanbin",
""
],
[
"Guo",
"Zhenhao",
""
],
[
"Xu",
"Haitao",
""
],
[
"Qin",
"Zhan",
""
],
[
"Ma",
"Wenrui",
""
],
[
"Zhang",
"Fan",
""
]
] | TITLE: TransURL: Improving malicious URL detection with multi-layer Transformer
encoding and multi-scale pyramid features
ABSTRACT: Machine learning progress is advancing the detection of malicious URLs.
However, advanced Transformers applied to URLs face difficulties in extracting
local information, character-level details, and structural relationships. To
address these challenges, we propose a novel approach for malicious URL
detection, named TransURL. This method is implemented by co-training the
character-aware Transformer with three feature modules: Multi-Layer Encoding,
Multi-Scale Feature Learning, and Spatial Pyramid Attention. This specialized
Transformer enables TransURL to extract embeddings with character-level
information from URL token sequences, with the three modules aiding the fusion
of multi-layer Transformer encodings and the capture of multi-scale local
details and structural relationships. The proposed method is evaluated across
several challenging scenarios, including class imbalance learning,
multi-classification, cross-dataset testing, and adversarial sample attacks.
Experimental results demonstrate a significant improvement compared to previous
methods. For instance, it achieved a peak F1-score improvement of 40% in
class-imbalanced scenarios and surpassed the best baseline by 14.13% in
accuracy for adversarial attack scenarios. Additionally, a case study
demonstrated that our method accurately identified all 30 active malicious web
pages, whereas two previous state-of-the-art methods missed 4 and 7 malicious
web pages, respectively. The codes and data are available at:
https://github.com/Vul-det/TransURL/.
|
2401.09258 | Yinuo Zhao | Yinuo Zhao, Kun Wu, Tianjiao Yi, Zhiyuan Xu, Xiaozhu Ju, Zhengping
Che, Chi Harold Liu, Jian Tang | Efficient Training of Generalizable Visuomotor Policies via
Control-Aware Augmentation | null | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Improving generalization is one key challenge in embodied AI, where obtaining
large-scale datasets across diverse scenarios is costly. Traditional weak
augmentations, such as cropping and flipping, are insufficient for improving a
model's performance in new environments. Existing data augmentation methods
often disrupt task-relevant information in images, potentially degrading
performance. To overcome these challenges, we introduce EAGLE, an efficient
training framework for generalizable visuomotor policies that improves upon
existing methods by (1) enhancing generalization by applying augmentation only
to control-related regions identified through a self-supervised control-aware
mask and (2) improving training stability and efficiency by distilling
knowledge from an expert to a visuomotor student policy, which is then deployed
to unseen environments without further fine-tuning. Comprehensive experiments
on three domains, including the DMControl Generalization Benchmark, the
enhanced Robot Manipulation Distraction Benchmark, and a long-sequential
drawer-opening task, validate the effectiveness of our method.
| [
{
"version": "v1",
"created": "Wed, 17 Jan 2024 15:05:00 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 08:19:55 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zhao",
"Yinuo",
""
],
[
"Wu",
"Kun",
""
],
[
"Yi",
"Tianjiao",
""
],
[
"Xu",
"Zhiyuan",
""
],
[
"Ju",
"Xiaozhu",
""
],
[
"Che",
"Zhengping",
""
],
[
"Liu",
"Chi Harold",
""
],
[
"Tang",
"Jian",
""
]
] | TITLE: Efficient Training of Generalizable Visuomotor Policies via
Control-Aware Augmentation
ABSTRACT: Improving generalization is one key challenge in embodied AI, where obtaining
large-scale datasets across diverse scenarios is costly. Traditional weak
augmentations, such as cropping and flipping, are insufficient for improving a
model's performance in new environments. Existing data augmentation methods
often disrupt task-relevant information in images, potentially degrading
performance. To overcome these challenges, we introduce EAGLE, an efficient
training framework for generalizable visuomotor policies that improves upon
existing methods by (1) enhancing generalization by applying augmentation only
to control-related regions identified through a self-supervised control-aware
mask and (2) improving training stability and efficiency by distilling
knowledge from an expert to a visuomotor student policy, which is then deployed
to unseen environments without further fine-tuning. Comprehensive experiments
on three domains, including the DMControl Generalization Benchmark, the
enhanced Robot Manipulation Distraction Benchmark, and a long-sequential
drawer-opening task, validate the effectiveness of our method.
|
2401.10090 | Yunpeng Gong | Yunpeng Gong and Zhun Zhong and Yansong Qu and Zhiming Luo and
Rongrong Ji and Min Jiang | Cross-Modality Perturbation Synergy Attack for Person Re-identification | Accepted at the Thirty-eighth Annual Conference on Neural Information
Processing Systems (https://openreview.net/forum?id=LONd7ACEjy) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In recent years, there has been significant research focusing on addressing
security concerns in single-modal person re-identification (ReID) systems that
are based on RGB images. However, the safety of cross-modality scenarios, which
are more commonly encountered in practical applications involving images
captured by infrared cameras, has not received adequate attention. The main
challenge in cross-modality ReID lies in effectively dealing with visual
differences between different modalities. For instance, infrared images are
typically grayscale, unlike visible images that contain color information.
Existing attack methods have primarily focused on the characteristics of the
visible image modality, overlooking the features of other modalities and the
variations in data distribution among different modalities. This oversight can
potentially undermine the effectiveness of these methods in image retrieval
across diverse modalities. This study represents the first exploration into the
security of cross-modality ReID models and proposes a universal perturbation
attack specifically designed for cross-modality ReID. This attack optimizes
perturbations by leveraging gradients from diverse modality data, thereby
disrupting the discriminator and reinforcing the differences between
modalities. We conducted experiments on three widely used cross-modality
datasets, namely RegDB, SYSU, and LLCM. The results not only demonstrate the
effectiveness of our method but also provide insights for future improvements
in the robustness of cross-modality ReID systems. The code will be available at
https://github.com/finger-monkey/cmps__attack.
| [
{
"version": "v1",
"created": "Thu, 18 Jan 2024 15:56:23 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Jan 2024 03:31:49 GMT"
},
{
"version": "v3",
"created": "Fri, 11 Oct 2024 06:56:39 GMT"
},
{
"version": "v4",
"created": "Sun, 20 Oct 2024 14:41:28 GMT"
},
{
"version": "v5",
"created": "Tue, 22 Oct 2024 03:48:13 GMT"
},
{
"version": "v6",
"created": "Fri, 21 Mar 2025 07:20:14 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Gong",
"Yunpeng",
""
],
[
"Zhong",
"Zhun",
""
],
[
"Qu",
"Yansong",
""
],
[
"Luo",
"Zhiming",
""
],
[
"Ji",
"Rongrong",
""
],
[
"Jiang",
"Min",
""
]
] | TITLE: Cross-Modality Perturbation Synergy Attack for Person Re-identification
ABSTRACT: In recent years, there has been significant research focusing on addressing
security concerns in single-modal person re-identification (ReID) systems that
are based on RGB images. However, the safety of cross-modality scenarios, which
are more commonly encountered in practical applications involving images
captured by infrared cameras, has not received adequate attention. The main
challenge in cross-modality ReID lies in effectively dealing with visual
differences between different modalities. For instance, infrared images are
typically grayscale, unlike visible images that contain color information.
Existing attack methods have primarily focused on the characteristics of the
visible image modality, overlooking the features of other modalities and the
variations in data distribution among different modalities. This oversight can
potentially undermine the effectiveness of these methods in image retrieval
across diverse modalities. This study represents the first exploration into the
security of cross-modality ReID models and proposes a universal perturbation
attack specifically designed for cross-modality ReID. This attack optimizes
perturbations by leveraging gradients from diverse modality data, thereby
disrupting the discriminator and reinforcing the differences between
modalities. We conducted experiments on three widely used cross-modality
datasets, namely RegDB, SYSU, and LLCM. The results not only demonstrate the
effectiveness of our method but also provide insights for future improvements
in the robustness of cross-modality ReID systems. The code will be available at
https://github.com/finger-monkey/cmps__attack.
|
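As a rough illustration of the cross-modality attack idea summarized in the record above (a single universal perturbation optimized with gradients pooled from visible and infrared batches), here is a hedged toy sketch. The embedding network, image sizes, cosine-similarity objective, and epsilon budget are all assumptions; the actual attack's optimization details differ.

```python
# Toy universal-perturbation loop pooling gradients across two modalities (illustrative only).
import torch
import torch.nn.functional as F
from torch import nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 32, 128))   # stand-in ReID embedder
for p in model.parameters():
    p.requires_grad_(False)                                         # only delta is optimized

delta = torch.zeros(1, 3, 64, 32, requires_grad=True)               # universal perturbation
opt = torch.optim.Adam([delta], lr=1e-2)
eps = 8 / 255                                                        # imperceptibility budget

for step in range(100):
    rgb = torch.rand(16, 3, 64, 32)          # visible-light batch
    ir = torch.rand(16, 3, 64, 32)           # infrared batch (grayscale replicated to 3 ch)
    loss = 0.0
    for batch in (rgb, ir):                  # pool gradients across both modalities
        clean = model(batch).detach()
        adv = model((batch + delta).clamp(0, 1))
        loss = loss + F.cosine_similarity(adv, clean).mean()   # minimize: push embeddings apart
    opt.zero_grad()
    loss.backward()
    opt.step()
    delta.data.clamp_(-eps, eps)             # project back into the epsilon ball
```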
2401.11652 | Chu Myaet Thwal | Chu Myaet Thwal, Minh N.H. Nguyen, Ye Lin Tun, Seong Tae Kim, My T.
Thai, Choong Seon Hong | OnDev-LCT: On-Device Lightweight Convolutional Transformers towards
federated learning | Published in Neural Networks | null | 10.1016/j.neunet.2023.11.044 | null | cs.CV cs.AI cs.CC cs.DC cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Federated learning (FL) has emerged as a promising approach to
collaboratively train machine learning models across multiple edge devices
while preserving privacy. The success of FL hinges on the efficiency of
participating models and their ability to handle the unique challenges of
distributed learning. While several variants of Vision Transformer (ViT) have
shown great potential as alternatives to modern convolutional neural networks
(CNNs) for centralized training, the unprecedented size and higher
computational demands hinder their deployment on resource-constrained edge
devices, challenging their widespread application in FL. Since client devices
in FL typically have limited computing resources and communication bandwidth,
models intended for such devices must strike a balance between model size,
computational efficiency, and the ability to adapt to the diverse and non-IID
data distributions encountered in FL. To address these challenges, we propose
OnDev-LCT: Lightweight Convolutional Transformers for On-Device vision tasks
with limited training data and resources. Our models incorporate image-specific
inductive biases through the LCT tokenizer by leveraging efficient depthwise
separable convolutions in residual linear bottleneck blocks to extract local
features, while the multi-head self-attention (MHSA) mechanism in the LCT
encoder implicitly facilitates capturing global representations of images.
Extensive experiments on benchmark image datasets indicate that our models
outperform existing lightweight vision models while having fewer parameters and
lower computational demands, making them suitable for FL scenarios with data
heterogeneity and communication bottlenecks.
| [
{
"version": "v1",
"created": "Mon, 22 Jan 2024 02:17:36 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Thwal",
"Chu Myaet",
""
],
[
"Nguyen",
"Minh N. H.",
""
],
[
"Tun",
"Ye Lin",
""
],
[
"Kim",
"Seong Tae",
""
],
[
"Thai",
"My T.",
""
],
[
"Hong",
"Choong Seon",
""
]
] | TITLE: OnDev-LCT: On-Device Lightweight Convolutional Transformers towards
federated learning
ABSTRACT: Federated learning (FL) has emerged as a promising approach to
collaboratively train machine learning models across multiple edge devices
while preserving privacy. The success of FL hinges on the efficiency of
participating models and their ability to handle the unique challenges of
distributed learning. While several variants of Vision Transformer (ViT) have
shown great potential as alternatives to modern convolutional neural networks
(CNNs) for centralized training, the unprecedented size and higher
computational demands hinder their deployment on resource-constrained edge
devices, challenging their widespread application in FL. Since client devices
in FL typically have limited computing resources and communication bandwidth,
models intended for such devices must strike a balance between model size,
computational efficiency, and the ability to adapt to the diverse and non-IID
data distributions encountered in FL. To address these challenges, we propose
OnDev-LCT: Lightweight Convolutional Transformers for On-Device vision tasks
with limited training data and resources. Our models incorporate image-specific
inductive biases through the LCT tokenizer by leveraging efficient depthwise
separable convolutions in residual linear bottleneck blocks to extract local
features, while the multi-head self-attention (MHSA) mechanism in the LCT
encoder implicitly facilitates capturing global representations of images.
Extensive experiments on benchmark image datasets indicate that our models
outperform existing lightweight vision models while having fewer parameters and
lower computational demands, making them suitable for FL scenarios with data
heterogeneity and communication bottlenecks.
|
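The OnDev-LCT abstract above attributes its efficiency to depthwise separable convolutions inside residual linear bottleneck blocks. The block below is a generic MobileNetV2-style sketch of that pattern, offered only to illustrate why the parameter count stays small; it is not the OnDev-LCT implementation, and the channel and expansion numbers are assumptions.

```python
# Generic residual linear-bottleneck block with a depthwise separable convolution.
import torch
from torch import nn

class LinearBottleneck(nn.Module):
    def __init__(self, ch, expansion=4):
        super().__init__()
        hidden = ch * expansion
        self.block = nn.Sequential(
            nn.Conv2d(ch, hidden, 1, bias=False),              # 1x1 expand
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1,
                      groups=hidden, bias=False),               # 3x3 depthwise
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, ch, 1, bias=False),                # 1x1 linear projection
            nn.BatchNorm2d(ch))

    def forward(self, x):
        return x + self.block(x)                                 # residual connection

x = torch.randn(2, 32, 56, 56)
print(LinearBottleneck(32)(x).shape)   # torch.Size([2, 32, 56, 56])
```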
2402.05868 | Yongfeng Zhang | Sam Lin, Wenyue Hua, Zhenting Wang, Mingyu Jin, Lizhou Fan, Yongfeng
Zhang | EmojiPrompt: Generative Prompt Obfuscation for Privacy-Preserving
Communication with Cloud-based LLMs | Accepted to the 2025 Annual Conference of the Nations of the Americas
Chapter of the Association for Computational Linguistics (NAACL 2025) | null | null | null | cs.CL cs.AI cs.CR cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloud-based Large Language Models (LLMs) such as ChatGPT have become
increasingly integral to daily operations. Nevertheless, they also introduce
privacy concerns: firstly, numerous studies underscore the risks to user
privacy posed by jailbreaking cloud-based LLMs; secondly, the LLM service
providers have access to all user data, which deters individuals from
confidently utilizing such services. To address such concerns, we propose a
simple yet effective paradigm, EmojiPrompt, to protect user privacy. At its
core, EmojiPrompt performs generative transformation, obfuscating private data
within prompts with linguistic and non-linguistic elements before submitting
them to cloud-based LLMs. We evaluate EmojiPrompt's performance across 8
datasets from various domains. We also propose simulated inference attacks to
assess EmojiPrompt's ability to preserve user privacy. The results demonstrate
that EmojiPrompt effectively obfuscates user private data, while largely
maintaining, or even enhancing, performances compared to the unobfuscated
version. Furthermore, EmojiPrompt's atomic-level obfuscation allows it to
function exclusively with cloud-based LLMs. For source code, please refer to:
https://github.com/agiresearch/EmojiCrypt.
| [
{
"version": "v1",
"created": "Thu, 8 Feb 2024 17:57:11 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Feb 2024 16:26:14 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 20:15:22 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Lin",
"Sam",
""
],
[
"Hua",
"Wenyue",
""
],
[
"Wang",
"Zhenting",
""
],
[
"Jin",
"Mingyu",
""
],
[
"Fan",
"Lizhou",
""
],
[
"Zhang",
"Yongfeng",
""
]
] | TITLE: EmojiPrompt: Generative Prompt Obfuscation for Privacy-Preserving
Communication with Cloud-based LLMs
ABSTRACT: Cloud-based Large Language Models (LLMs) such as ChatGPT have become
increasingly integral to daily operations. Nevertheless, they also introduce
privacy concerns: firstly, numerous studies underscore the risks to user
privacy posed by jailbreaking cloud-based LLMs; secondly, the LLM service
providers have access to all user data, which deters individuals from
confidently utilizing such services. To address such concerns, we propose a
simple yet effective paradigm, EmojiPrompt, to protect user privacy. At its
core, EmojiPrompt performs generative transformation, obfuscating private data
within prompts with linguistic and non-linguistic elements before submitting
them to cloud-based LLMs. We evaluate EmojiPrompt's performance across 8
datasets from various domains. We also propose simulated inference attacks to
assess EmojiPrompt's ability to preserve user privacy. The results demonstrate
that EmojiPrompt effectively obfuscates user private data, while largely
maintaining, or even enhancing, performances compared to the unobfuscated
version. Furthermore, EmojiPrompt's atomic-level obfuscation allows it to
function exclusively with cloud-based LLMs. For source code, please refer to:
https://github.com/agiresearch/EmojiCrypt.
|
2402.05935 | Renrui Zhang | Dongyang Liu, Renrui Zhang, Longtian Qiu, Siyuan Huang, Weifeng Lin,
Shitian Zhao, Shijie Geng, Ziyi Lin, Peng Jin, Kaipeng Zhang, Wenqi Shao,
Chao Xu, Conghui He, Junjun He, Hao Shao, Pan Lu, Hongsheng Li, Yu Qiao, Peng
Gao | SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large
Language Models | Accepted by ICML 2024. Code and models are released at
https://github.com/Alpha-VLLM/LLaMA2-Accessory | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose SPHINX-X, an extensive Multimodality Large Language Model (MLLM)
series developed upon SPHINX. To improve the architecture and training
efficiency, we modify the SPHINX framework by removing redundant visual
encoders, bypassing fully-padded sub-images with skip tokens, and simplifying
multi-stage training into a one-stage all-in-one paradigm. To fully unleash the
potential of MLLMs, we assemble a comprehensive multi-domain and multimodal
dataset covering publicly available resources in language, vision, and
vision-language tasks. We further enrich this collection with our curated OCR
intensive and Set-of-Mark datasets, extending the diversity and generality. By
training over different base LLMs including TinyLlama1.1B, InternLM2-7B,
LLaMA2-13B, and Mixtral8x7B, we obtain a spectrum of MLLMs that vary in
parameter size and multilingual capabilities. Comprehensive benchmarking
reveals a strong correlation between the multi-modal performance and the data
and parameter scales. Code and models are released at
https://github.com/Alpha-VLLM/LLaMA2-Accessory
| [
{
"version": "v1",
"created": "Thu, 8 Feb 2024 18:59:48 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jun 2024 07:59:03 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Mar 2025 10:19:01 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Liu",
"Dongyang",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Qiu",
"Longtian",
""
],
[
"Huang",
"Siyuan",
""
],
[
"Lin",
"Weifeng",
""
],
[
"Zhao",
"Shitian",
""
],
[
"Geng",
"Shijie",
""
],
[
"Lin",
"Ziyi",
""
],
[
"Jin",
"Peng",
""
],
[
"Zhang",
"Kaipeng",
""
],
[
"Shao",
"Wenqi",
""
],
[
"Xu",
"Chao",
""
],
[
"He",
"Conghui",
""
],
[
"He",
"Junjun",
""
],
[
"Shao",
"Hao",
""
],
[
"Lu",
"Pan",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Qiao",
"Yu",
""
],
[
"Gao",
"Peng",
""
]
] | TITLE: SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large
Language Models
ABSTRACT: We propose SPHINX-X, an extensive Multimodality Large Language Model (MLLM)
series developed upon SPHINX. To improve the architecture and training
efficiency, we modify the SPHINX framework by removing redundant visual
encoders, bypassing fully-padded sub-images with skip tokens, and simplifying
multi-stage training into a one-stage all-in-one paradigm. To fully unleash the
potential of MLLMs, we assemble a comprehensive multi-domain and multimodal
dataset covering publicly available resources in language, vision, and
vision-language tasks. We further enrich this collection with our curated OCR
intensive and Set-of-Mark datasets, extending the diversity and generality. By
training over different base LLMs including TinyLlama1.1B, InternLM2-7B,
LLaMA2-13B, and Mixtral8x7B, we obtain a spectrum of MLLMs that vary in
parameter size and multilingual capabilities. Comprehensive benchmarking
reveals a strong correlation between the multi-modal performance and the data
and parameter scales. Code and models are released at
https://github.com/Alpha-VLLM/LLaMA2-Accessory
|
2402.10079 | Hamed Haghighi Mr | Hamed Haghighi, Xiaomeng Wang, Hao Jing, and Mehrdad Dianati | Data-driven Camera and Lidar Simulation Models for Autonomous Driving: A
Review from Generative Models to Volume Renderers | To be published in IEEE Transactions on Intelligent Vehicles | null | null | null | cs.CV cs.GR cs.LG cs.RO | http://creativecommons.org/licenses/by/4.0/ | Perception sensors, particularly camera and Lidar, are key elements of
Autonomous Driving Systems (ADS) that enable them to comprehend their
surroundings to make informed driving and control decisions. Therefore, developing
realistic simulation models for these sensors is essential for conducting
effective simulation-based testing of ADS. Moreover, the rise of deep
learning-based perception models has increased the utility of sensor simulation
models for synthesising diverse training datasets. The traditional sensor
simulation models rely on computationally expensive physics-based algorithms,
specifically in complex systems such as ADS. Hence, the current potential
resides in data-driven approaches, fuelled by the exceptional performance of
deep generative models in capturing high-dimensional data distribution and
volume renderers in accurately representing scenes. This paper reviews the
current state-of-the-art data-driven camera and Lidar simulation models and
their evaluation methods. It explores a spectrum of models from the novel
perspective of generative models and volume renderers. Generative models are
discussed in terms of their input-output types, while volume renderers are
categorised based on their input encoding. Finally, the paper illustrates
commonly used evaluation techniques for assessing sensor simulation models and
highlights the existing research gaps in the area.
| [
{
"version": "v1",
"created": "Mon, 29 Jan 2024 16:56:17 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 14:13:38 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Haghighi",
"Hamed",
""
],
[
"Wang",
"Xiaomeng",
""
],
[
"Jing",
"Hao",
""
],
[
"Dianati",
"Mehrdad",
""
]
] | TITLE: Data-driven Camera and Lidar Simulation Models for Autonomous Driving: A
Review from Generative Models to Volume Renderers
ABSTRACT: Perception sensors, particularly camera and Lidar, are key elements of
Autonomous Driving Systems (ADS) that enable them to comprehend their
surroundings to make informed driving and control decisions. Therefore, developing
realistic simulation models for these sensors is essential for conducting
effective simulation-based testing of ADS. Moreover, the rise of deep
learning-based perception models has increased the utility of sensor simulation
models for synthesising diverse training datasets. The traditional sensor
simulation models rely on computationally expensive physics-based algorithms,
specifically in complex systems such as ADS. Hence, the current potential
resides in data-driven approaches, fuelled by the exceptional performance of
deep generative models in capturing high-dimensional data distribution and
volume renderers in accurately representing scenes. This paper reviews the
current state-of-the-art data-driven camera and Lidar simulation models and
their evaluation methods. It explores a spectrum of models from the novel
perspective of generative models and volume renderers. Generative models are
discussed in terms of their input-output types, while volume renderers are
categorised based on their input encoding. Finally, the paper illustrates
commonly used evaluation techniques for assessing sensor simulation models and
highlights the existing research gaps in the area.
|
2403.06586 | Michele Fiori | Luca Arrotta, Claudio Bettini, Gabriele Civitarese, Michele Fiori | ContextGPT: Infusing LLMs Knowledge into Neuro-Symbolic Activity
Recognition Models | null | null | 10.1145/3675094.3679000 | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Context-aware Human Activity Recognition (HAR) is a hot research area in
mobile computing, and the most effective solutions in the literature are based
on supervised deep learning models. However, the actual deployment of these
systems is limited by the scarcity of labeled data that is required for
training. Neuro-Symbolic AI (NeSy) provides an interesting research direction
to mitigate this issue, by infusing common-sense knowledge about human
activities and the contexts in which they can be performed into HAR deep
learning classifiers. Existing NeSy methods for context-aware HAR rely on
knowledge encoded in logic-based models (e.g., ontologies) whose design,
implementation, and maintenance to capture new activities and contexts require
significant human engineering efforts, technical knowledge, and domain
expertise. Recent works show that pre-trained Large Language Models (LLMs)
effectively encode common-sense knowledge about human activities. In this work,
we propose ContextGPT: a novel prompt engineering approach to retrieve from
LLMs common-sense knowledge about the relationship between human activities and
the context in which they are performed. Unlike ontologies, ContextGPT requires
limited human effort and expertise. An extensive evaluation carried out on two
public datasets shows how a NeSy model obtained by infusing common-sense
knowledge from ContextGPT is effective in data scarcity scenarios, leading to
similar (and sometimes better) recognition rates than logic-based approaches
with a fraction of the effort.
| [
{
"version": "v1",
"created": "Mon, 11 Mar 2024 10:32:23 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 18:38:58 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Arrotta",
"Luca",
""
],
[
"Bettini",
"Claudio",
""
],
[
"Civitarese",
"Gabriele",
""
],
[
"Fiori",
"Michele",
""
]
] | TITLE: ContextGPT: Infusing LLMs Knowledge into Neuro-Symbolic Activity
Recognition Models
ABSTRACT: Context-aware Human Activity Recognition (HAR) is a hot research area in
mobile computing, and the most effective solutions in the literature are based
on supervised deep learning models. However, the actual deployment of these
systems is limited by the scarcity of labeled data that is required for
training. Neuro-Symbolic AI (NeSy) provides an interesting research direction
to mitigate this issue, by infusing common-sense knowledge about human
activities and the contexts in which they can be performed into HAR deep
learning classifiers. Existing NeSy methods for context-aware HAR rely on
knowledge encoded in logic-based models (e.g., ontologies) whose design,
implementation, and maintenance to capture new activities and contexts require
significant human engineering efforts, technical knowledge, and domain
expertise. Recent works show that pre-trained Large Language Models (LLMs)
effectively encode common-sense knowledge about human activities. In this work,
we propose ContextGPT: a novel prompt engineering approach to retrieve from
LLMs common-sense knowledge about the relationship between human activities and
the context in which they are performed. Unlike ontologies, ContextGPT requires
limited human effort and expertise. An extensive evaluation carried out on two
public datasets shows how a NeSy model obtained by infusing common-sense
knowledge from ContextGPT is effective in data scarcity scenarios, leading to
similar (and sometimes better) recognition rates than logic-based approaches
with a fraction of the effort.
|
2403.09974 | Xialei Liu | Enguang Wang, Zhimao Peng, Zhengyuan Xie, Fei Yang, Xialei Liu,
Ming-Ming Cheng | GET: Unlocking the Multi-modal Potential of CLIP for Generalized
Category Discovery | CVPR 2025 | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Given unlabelled datasets containing both old and new categories, generalized
category discovery (GCD) aims to accurately discover new classes while
correctly classifying old classes. Current GCD methods only use a single visual
modality of information, resulting in a poor classification of visually similar
classes. As a different modality, text information can provide complementary
discriminative information, which motivates us to introduce it into the GCD
task. However, the lack of class names for unlabelled data makes it impractical
to utilize text information. To tackle this challenging problem, in this paper,
we propose a Text Embedding Synthesizer (TES) to generate pseudo text
embeddings for unlabelled samples. Specifically, our TES leverages the property
that CLIP can generate aligned vision-language features, converting visual
embeddings into tokens of the CLIP's text encoder to generate pseudo text
embeddings. Besides, we employ a dual-branch framework: through the joint
learning and instance consistency of different modality branches, visual and
semantic information mutually enhance each other, promoting the interaction and
fusion of visual and text knowledge. Our method unlocks the multi-modal
potentials of CLIP and outperforms the baseline methods by a large margin on
all GCD benchmarks, achieving new state-of-the-art. Our code is available at:
https://github.com/enguangW/GET.
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2024 02:40:13 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Jul 2024 08:20:56 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Mar 2025 01:50:55 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Wang",
"Enguang",
""
],
[
"Peng",
"Zhimao",
""
],
[
"Xie",
"Zhengyuan",
""
],
[
"Yang",
"Fei",
""
],
[
"Liu",
"Xialei",
""
],
[
"Cheng",
"Ming-Ming",
""
]
] | TITLE: GET: Unlocking the Multi-modal Potential of CLIP for Generalized
Category Discovery
ABSTRACT: Given unlabelled datasets containing both old and new categories, generalized
category discovery (GCD) aims to accurately discover new classes while
correctly classifying old classes. Current GCD methods only use a single visual
modality of information, resulting in a poor classification of visually similar
classes. As a different modality, text information can provide complementary
discriminative information, which motivates us to introduce it into the GCD
task. However, the lack of class names for unlabelled data makes it impractical
to utilize text information. To tackle this challenging problem, in this paper,
we propose a Text Embedding Synthesizer (TES) to generate pseudo text
embeddings for unlabelled samples. Specifically, our TES leverages the property
that CLIP can generate aligned vision-language features, converting visual
embeddings into tokens of the CLIP's text encoder to generate pseudo text
embeddings. Besides, we employ a dual-branch framework: through the joint
learning and instance consistency of different modality branches, visual and
semantic information mutually enhance each other, promoting the interaction and
fusion of visual and text knowledge. Our method unlocks the multi-modal
potentials of CLIP and outperforms the baseline methods by a large margin on
all GCD benchmarks, achieving new state-of-the-art. Our code is available at:
https://github.com/enguangW/GET.
|
2403.10346 | George Yiasemis | George Yiasemis, Jan-Jakob Sonke, Jonas Teuwen | End-to-end Adaptive Dynamic Subsampling and Reconstruction for Cardiac
MRI | 38 pages, 26 figures, 2 tables | null | null | null | eess.IV cs.CV physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | $\textbf{Background:}$ Accelerating dynamic MRI is vital for advancing
clinical applications and improving patient comfort. Commonly, deep learning
(DL) methods for accelerated dynamic MRI reconstruction typically rely on
uniformly applying non-adaptive predetermined or random subsampling patterns
across all temporal frames of the dynamic acquisition. This approach fails to
exploit temporal correlations or optimize subsampling on a case-by-case basis.
$\textbf{Purpose:}$ To develop an end-to-end approach for adaptive dynamic
MRI subsampling and reconstruction, capable of generating customized sampling
patterns while maximizing reconstruction quality.
$\textbf{Methods:}$ We introduce the End-to-end Adaptive Dynamic Sampling and
Reconstruction (E2E-ADS-Recon) for MRI framework, which integrates an adaptive
dynamic sampler (ADS) that adapts the acquisition trajectory to each case for a
given acceleration factor with a state-of-the-art dynamic reconstruction
network, vSHARP, for reconstructing the adaptively sampled data into a dynamic
image. The ADS can produce either frame-specific patterns or unified patterns
applied to all temporal frames. E2E-ADS-Recon is evaluated under both
frame-specific and unified 1D or 2D sampling settings, using dynamic cine
cardiac MRI data and compared with vSHARP models employing standard subsampling
trajectories, as well as pipelines where ADS was replaced by parameterized
samplers optimized for dataset-specific schemes.
$\textbf{Results:}$ E2E-ADS-Recon exhibited superior reconstruction quality,
especially at high accelerations, in terms of standard quantitative metrics
(SSIM, pSNR, NMSE).
$\textbf{Conclusion:}$ The proposed framework improves reconstruction
quality, highlighting the importance of case-specific subsampling optimization
in dynamic MRI applications.
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2024 14:31:35 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 16:26:49 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Yiasemis",
"George",
""
],
[
"Sonke",
"Jan-Jakob",
""
],
[
"Teuwen",
"Jonas",
""
]
] | TITLE: End-to-end Adaptive Dynamic Subsampling and Reconstruction for Cardiac
MRI
ABSTRACT: $\textbf{Background:}$ Accelerating dynamic MRI is vital for advancing
clinical applications and improving patient comfort. Commonly, deep learning
(DL) methods for accelerated dynamic MRI reconstruction typically rely on
uniformly applying non-adaptive predetermined or random subsampling patterns
across all temporal frames of the dynamic acquisition. This approach fails to
exploit temporal correlations or optimize subsampling on a case-by-case basis.
$\textbf{Purpose:}$ To develop an end-to-end approach for adaptive dynamic
MRI subsampling and reconstruction, capable of generating customized sampling
patterns while maximizing reconstruction quality.
$\textbf{Methods:}$ We introduce the End-to-end Adaptive Dynamic Sampling and
Reconstruction (E2E-ADS-Recon) for MRI framework, which integrates an adaptive
dynamic sampler (ADS) that adapts the acquisition trajectory to each case for a
given acceleration factor with a state-of-the-art dynamic reconstruction
network, vSHARP, for reconstructing the adaptively sampled data into a dynamic
image. The ADS can produce either frame-specific patterns or unified patterns
applied to all temporal frames. E2E-ADS-Recon is evaluated under both
frame-specific and unified 1D or 2D sampling settings, using dynamic cine
cardiac MRI data and compared with vSHARP models employing standard subsampling
trajectories, as well as pipelines where ADS was replaced by parameterized
samplers optimized for dataset-specific schemes.
$\textbf{Results:}$ E2E-ADS-Recon exhibited superior reconstruction quality,
especially at high accelerations, in terms of standard quantitative metrics
(SSIM, pSNR, NMSE).
$\textbf{Conclusion:}$ The proposed framework improves reconstruction
quality, highlighting the importance of case-specific subsampling optimization
in dynamic MRI applications.
|
2404.04138 | Rebeca Gonzalez Suarez | Olga Sunneborn Gudnadottir, Axel Gall\'en, Giulia Ripellino, Jochen
Jens Heinrich, Raazesh Sainudiin, Rebeca Gonzalez Suarez | Sparks in the Dark | 13 pages, 6 figures | SciPost Phys. 18, 080 (2025) | 10.21468/SciPostPhys.18.3.080 | null | hep-ex physics.data-an | http://creativecommons.org/licenses/by/4.0/ | This study presents a novel method for the definition of signal regions in
searches for new physics at collider experiments, specifically those conducted
at CERN's Large Hadron Collider. By leveraging multi-dimensional histograms with
precise arithmetic and utilizing the SparkDensityTree library, it is possible
to identify high-density regions within the available phase space, potentially
improving sensitivity to very small signals. Inspired by an ongoing search for
dark mesons at the ATLAS experiment, CMS open data is used for this
proof-of-concept intentionally targeting an already excluded signal. Several
signal regions are defined based on density estimates of signal and background.
These preliminary regions align well with the physical properties of the signal
while effectively rejecting background events. While not explored in this work,
this method is also scalable, which makes it ideal for large datasets such as
those expected at the high-luminosity upgrade of the LHC. Finally, this method
is flexible and can be easily extended, promising a boost to the signal region
definition process for new physics searches at colliders.
| [
{
"version": "v1",
"created": "Fri, 5 Apr 2024 14:37:30 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Oct 2024 13:07:21 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Gudnadottir",
"Olga Sunneborn",
""
],
[
"Gallén",
"Axel",
""
],
[
"Ripellino",
"Giulia",
""
],
[
"Heinrich",
"Jochen Jens",
""
],
[
"Sainudiin",
"Raazesh",
""
],
[
"Suarez",
"Rebeca Gonzalez",
""
]
] | TITLE: Sparks in the Dark
ABSTRACT: This study presents a novel method for the definition of signal regions in
searches for new physics at collider experiments, specifically those conducted
at CERN's Large Hadron Collider. By leveraging multi-dimensional histograms with
precise arithmetic and utilizing the SparkDensityTree library, it is possible
to identify high-density regions within the available phase space, potentially
improving sensitivity to very small signals. Inspired by an ongoing search for
dark mesons at the ATLAS experiment, CMS open data is used for this
proof-of-concept intentionally targeting an already excluded signal. Several
signal regions are defined based on density estimates of signal and background.
These preliminary regions align well with the physical properties of the signal
while effectively rejecting background events. While not explored in this work,
this method is also scalable, which makes it ideal for large datasets such as
those expected at the high-luminosity upgrade of the LHC. Finally, this method
is flexible and can be easily extended, promising a boost to the signal region
definition process for new physics searches at colliders.
|
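To make the density-based signal-region idea in the record above concrete, here is a hedged toy sketch using a plain NumPy 2-D histogram in place of the SparkDensityTree estimator: bins with the highest estimated signal-to-background density ratio are kept as a candidate region. The Gaussian "signal", uniform "background", binning, and quantile cut are assumptions made purely for illustration.

```python
# Toy density-based signal-region selection over a 2-D phase space.
import numpy as np

rng = np.random.default_rng(1)
signal = rng.normal(loc=[1.0, -1.0], scale=0.3, size=(5000, 2))
background = rng.uniform(low=-3, high=3, size=(50000, 2))

edges = [np.linspace(-3, 3, 31)] * 2
h_sig, _, _ = np.histogram2d(signal[:, 0], signal[:, 1], bins=edges)
h_bkg, _, _ = np.histogram2d(background[:, 0], background[:, 1], bins=edges)

density_ratio = h_sig / np.maximum(h_bkg, 1.0)
region = density_ratio > np.quantile(density_ratio, 0.99)   # keep the top-1% densest bins
print(f"selected {region.sum()} of {region.size} bins, "
      f"capturing {h_sig[region].sum() / h_sig.sum():.1%} of signal, "
      f"{h_bkg[region].sum() / h_bkg.sum():.2%} of background")
```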
2404.14719 | Ruitong Liu | Ruitong Liu, Yanbin Wang, Haitao Xu, Jianguo Sun, Fan Zhang, Peiyue
Li, Zhenhao Guo | Vul-LMGNNs: Fusing language models and online-distilled graph neural
networks for code vulnerability detection | 16 pages, 7 figures | Information Fusion 115 (2025) 102748 | 10.1016/j.inffus.2024.102748 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Code Language Models (codeLMs) and Graph Neural Networks (GNNs) are widely
used in code vulnerability detection. However, GNNs often rely on aggregating
information from adjacent nodes, limiting structural information propagation
across layers. While codeLMs can supplement GNNs with semantic information,
existing integration methods underexplore their collaborative potential. To
address these challenges, we propose Vul-LMGNNs, integrating pre-trained
codeLMs with GNNs to enable cross-layer propagation of semantic and structural
information. Vul-LMGNNs leverage Code Property Graphs (CPGs) to incorporate
syntax, control flow, and data dependencies, using gated GNNs for structural
extraction. An online knowledge distillation (KD) mechanism allows a student
GNN to capture structural information from a trained counterpart via
alternating training. Additionally, an "implicit-explicit" joint training
framework leverages codeLMs to initialize embeddings and propagate code
semantics. In the explicit phase, it performs late fusion via linear
interpolation. Evaluations on real-world vulnerability datasets show Vul-LMGNNs
outperform 17 state-of-the-art approaches. Source code is available at:
https://github.com/Vul-LMGNN/vul-LMGNN.
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2024 03:48:18 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 13:29:30 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Liu",
"Ruitong",
""
],
[
"Wang",
"Yanbin",
""
],
[
"Xu",
"Haitao",
""
],
[
"Sun",
"Jianguo",
""
],
[
"Zhang",
"Fan",
""
],
[
"Li",
"Peiyue",
""
],
[
"Guo",
"Zhenhao",
""
]
] | TITLE: Vul-LMGNNs: Fusing language models and online-distilled graph neural
networks for code vulnerability detection
ABSTRACT: Code Language Models (codeLMs) and Graph Neural Networks (GNNs) are widely
used in code vulnerability detection. However, GNNs often rely on aggregating
information from adjacent nodes, limiting structural information propagation
across layers. While codeLMs can supplement GNNs with semantic information,
existing integration methods underexplore their collaborative potential. To
address these challenges, we propose Vul-LMGNNs, integrating pre-trained
codeLMs with GNNs to enable cross-layer propagation of semantic and structural
information. Vul-LMGNNs leverage Code Property Graphs (CPGs) to incorporate
syntax, control flow, and data dependencies, using gated GNNs for structural
extraction. An online knowledge distillation (KD) mechanism allows a student
GNN to capture structural information from a trained counterpart via
alternating training. Additionally, an "implicit-explicit" joint training
framework leverages codeLMs to initialize embeddings and propagate code
semantics. In the explicit phase, it performs late fusion via linear
interpolation. Evaluations on real-world vulnerability datasets show Vul-LMGNNs
outperform 17 state-of-the-art approaches. Source code is available at:
https://github.com/Vul-LMGNN/vul-LMGNN.
|
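The Vul-LMGNNs entry above mentions an online knowledge-distillation mechanism between GNN learners. The snippet below is a generic, hedged sketch of a temperature-scaled distillation loss (Hinton-style) combined with a task loss; plain MLPs stand in for the gated GNNs and code-property-graph inputs, and the temperature and weighting values are assumptions.

```python
# Generic distillation objective: soft KL term against a counterpart plus hard task loss.
import torch
import torch.nn.functional as F
from torch import nn

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

teacher = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x, y = torch.randn(16, 128), torch.randint(0, 2, (16,))   # toy code embeddings and labels
with torch.no_grad():
    t_logits = teacher(x)                                  # counterpart's predictions
loss = kd_loss(student(x), t_logits, y)
loss.backward()
opt.step()
```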
2405.08101 | Ben Moews | G. Ibikunle, B. Moews, D. Muravyev, K. Rzayev | Data-driven measures of high-frequency trading | 78 pages, 6 figures, 17 tables | null | null | null | q-fin.CP cs.LG | http://creativecommons.org/licenses/by/4.0/ | High-frequency trading (HFT) accounts for almost half of equity trading
volume, yet it is not identified in public data. We develop novel data-driven
measures of HFT activity that separate strategies that supply and demand
liquidity. We train machine learning models to predict HFT activity observed in
a proprietary dataset using concurrent public intraday data. Once trained on
the dataset, these models generate HFT measures for the entire U.S. stock
universe from 2010 to 2023. Our measures outperform conventional proxies, which
struggle to capture HFT's time dynamics. We further validate them using shocks
to HFT activity, including latency arbitrage, exchange speed bumps, and data
feed upgrades. Finally, our measures reveal how HFT affects fundamental
information acquisition. Liquidity-supplying HFTs improve price discovery
around earnings announcements while liquidity-demanding strategies impede it.
| [
{
"version": "v1",
"created": "Mon, 13 May 2024 18:28:39 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jan 2025 15:57:52 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Mar 2025 17:31:44 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Ibikunle",
"G.",
""
],
[
"Moews",
"B.",
""
],
[
"Muravyev",
"D.",
""
],
[
"Rzayev",
"K.",
""
]
] | TITLE: Data-driven measures of high-frequency trading
ABSTRACT: High-frequency trading (HFT) accounts for almost half of equity trading
volume, yet it is not identified in public data. We develop novel data-driven
measures of HFT activity that separate strategies that supply and demand
liquidity. We train machine learning models to predict HFT activity observed in
a proprietary dataset using concurrent public intraday data. Once trained on
the dataset, these models generate HFT measures for the entire U.S. stock
universe from 2010 to 2023. Our measures outperform conventional proxies, which
struggle to capture HFT's time dynamics. We further validate them using shocks
to HFT activity, including latency arbitrage, exchange speed bumps, and data
feed upgrades. Finally, our measures reveal how HFT affects fundamental
information acquisition. Liquidity-supplying HFTs improve price discovery
around earnings announcements while liquidity-demanding strategies impede it.
|
2405.18281 | Antonios Valkanas | Antonios Valkanas, Boris N. Oreshkin, Mark Coates | MODL: Multilearner Online Deep Learning | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Online deep learning tackles the challenge of learning from data streams by
balancing two competing goals: fast learning and deep learning. However,
existing research primarily emphasizes deep learning solutions, which are more
adept at handling the ``deep'' aspect than the ``fast'' aspect of online
learning. In this work, we introduce an alternative paradigm through a hybrid
multilearner approach. We begin by developing a fast online logistic regression
learner, which operates without relying on backpropagation. It leverages
closed-form recursive updates of model parameters, efficiently addressing the
fast learning component of the online learning challenge. This approach is
further integrated with a cascaded multilearner design, where shallow and deep
learners are co-trained in a cooperative, synergistic manner to solve the
online learning problem. We demonstrate that this approach achieves
state-of-the-art performance on standard online learning datasets. We make our
code available: https://github.com/AntonValk/MODL
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 15:34:33 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 03:21:40 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Valkanas",
"Antonios",
""
],
[
"Oreshkin",
"Boris N.",
""
],
[
"Coates",
"Mark",
""
]
] | TITLE: MODL: Multilearner Online Deep Learning
ABSTRACT: Online deep learning tackles the challenge of learning from data streams by
balancing two competing goals: fast learning and deep learning. However,
existing research primarily emphasizes deep learning solutions, which are more
adept at handling the ``deep'' aspect than the ``fast'' aspect of online
learning. In this work, we introduce an alternative paradigm through a hybrid
multilearner approach. We begin by developing a fast online logistic regression
learner, which operates without relying on backpropagation. It leverages
closed-form recursive updates of model parameters, efficiently addressing the
fast learning component of the online learning challenge. This approach is
further integrated with a cascaded multilearner design, where shallow and deep
learners are co-trained in a cooperative, synergistic manner to solve the
online learning problem. We demonstrate that this approach achieves
state-of-the-art performance on standard online learning datasets. We make our
code available: https://github.com/AntonValk/MODL
|
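The MODL abstract above highlights a backpropagation-free logistic-regression learner with closed-form recursive parameter updates. One classical way to obtain such updates is an online Newton / recursive-least-squares step with a Sherman-Morrison update of the inverse curvature matrix, sketched below; this illustrates the general idea only and is not claimed to be MODL's exact update rule.

```python
# Online logistic regression with closed-form recursive (Newton/RLS-style) updates.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OnlineLogReg:
    def __init__(self, dim, lam=1.0):
        self.w = np.zeros(dim)
        self.P = np.eye(dim) / lam        # running inverse-curvature estimate

    def partial_fit(self, x, y):          # x: (dim,), y in {0, 1}
        p = sigmoid(self.w @ x)
        r = max(p * (1.0 - p), 1e-6)      # per-sample curvature
        Px = self.P @ x
        # Sherman-Morrison rank-1 update of the inverse Hessian approximation
        self.P -= np.outer(Px, Px) * (r / (1.0 + r * (x @ Px)))
        self.w -= self.P @ ((p - y) * x)  # closed-form Newton-style step, no backprop
        return p

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
model = OnlineLogReg(dim=5)
for _ in range(2000):                     # stream of samples
    x = rng.normal(size=5)
    y = float(rng.random() < sigmoid(w_true @ x))
    model.partial_fit(x, y)
print(np.round(model.w, 2), np.round(w_true, 2))
```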
2406.05132 | Jianing Yang | Jianing Yang, Xuweiyi Chen, Nikhil Madaan, Madhavan Iyengar, Shengyi
Qian, David F. Fouhey, Joyce Chai | 3D-GRAND: A Million-Scale Dataset for 3D-LLMs with Better Grounding and
Less Hallucination | CVPR 2025. Project website: https://3d-grand.github.io | null | null | null | cs.CV cs.AI cs.CL cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The integration of language and 3D perception is crucial for embodied agents
and robots that comprehend and interact with the physical world. While large
language models (LLMs) have demonstrated impressive language understanding and
generation capabilities, their adaptation to 3D environments (3D-LLMs) remains
in its early stages. A primary challenge is a lack of large-scale datasets with
dense grounding between language and 3D scenes. We introduce 3D-GRAND, a
pioneering large-scale dataset comprising 40,087 household scenes paired with
6.2 million densely-grounded scene-language instructions. Our results show that
instruction tuning with 3D-GRAND significantly enhances grounding capabilities
and reduces hallucinations in 3D-LLMs. As part of our contributions, we propose
a comprehensive benchmark 3D-POPE to systematically evaluate hallucination in
3D-LLMs, enabling fair comparisons of models. Our experiments highlight a
scaling effect between dataset size and 3D-LLM performance, emphasizing the
importance of large-scale 3D-text datasets for embodied AI research. Our
results demonstrate early signals for effective sim-to-real transfer,
indicating that models trained on large synthetic data can perform well on
real-world 3D scans. Through 3D-GRAND and 3D-POPE, we aim to equip the embodied
AI community with resources and insights to lead to more reliable and
better-grounded 3D-LLMs. Project website: https://3d-grand.github.io
| [
{
"version": "v1",
"created": "Fri, 7 Jun 2024 17:59:59 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Jun 2024 17:59:58 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 23:06:14 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Yang",
"Jianing",
""
],
[
"Chen",
"Xuweiyi",
""
],
[
"Madaan",
"Nikhil",
""
],
[
"Iyengar",
"Madhavan",
""
],
[
"Qian",
"Shengyi",
""
],
[
"Fouhey",
"David F.",
""
],
[
"Chai",
"Joyce",
""
]
] | TITLE: 3D-GRAND: A Million-Scale Dataset for 3D-LLMs with Better Grounding and
Less Hallucination
ABSTRACT: The integration of language and 3D perception is crucial for embodied agents
and robots that comprehend and interact with the physical world. While large
language models (LLMs) have demonstrated impressive language understanding and
generation capabilities, their adaptation to 3D environments (3D-LLMs) remains
in its early stages. A primary challenge is a lack of large-scale datasets with
dense grounding between language and 3D scenes. We introduce 3D-GRAND, a
pioneering large-scale dataset comprising 40,087 household scenes paired with
6.2 million densely-grounded scene-language instructions. Our results show that
instruction tuning with 3D-GRAND significantly enhances grounding capabilities
and reduces hallucinations in 3D-LLMs. As part of our contributions, we propose
a comprehensive benchmark 3D-POPE to systematically evaluate hallucination in
3D-LLMs, enabling fair comparisons of models. Our experiments highlight a
scaling effect between dataset size and 3D-LLM performance, emphasizing the
importance of large-scale 3D-text datasets for embodied AI research. Our
results demonstrate early signals for effective sim-to-real transfer,
indicating that models trained on large synthetic data can perform well on
real-world 3D scans. Through 3D-GRAND and 3D-POPE, we aim to equip the embodied
AI community with resources and insights to lead to more reliable and
better-grounded 3D-LLMs. Project website: https://3d-grand.github.io
|
2406.09396 | Jongwoo Park | Jongwoo Park, Kanchana Ranasinghe, Kumara Kahatapitiya, Wonjeong Ryu,
Donghyun Kim, Michael S. Ryoo | Too Many Frames, Not All Useful: Efficient Strategies for Long-Form
Video QA | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-form videos that span across wide temporal intervals are highly
information redundant and contain multiple distinct events or entities that are
often loosely related. Therefore, when performing long-form video question
answering (LVQA), all information necessary to generate a correct response can
often be contained within a small subset of frames. Recent literature explores the
use of large language models (LLMs) in LVQA benchmarks, achieving exceptional
performance, while relying on vision language models (VLMs) to convert all
visual content within videos into natural language. Such VLMs often
independently caption a large number of frames uniformly sampled from long
videos, which is not efficient and can mostly be redundant. Questioning these
decision choices, we explore optimal strategies for key-frame selection that
can significantly reduce these redundancies, namely Hierarchical Keyframe
Selector. Our proposed framework, LVNet, achieves state-of-the-art performance
at a comparable caption scale across three benchmark LVQA datasets: EgoSchema,
NExT-QA, and IntentQA, while also demonstrating a strong performance on videos
up to an hour long in VideoMME. Our code will be released publicly. The code
can be found at https://github.com/jongwoopark7978/LVNet.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2024 17:59:16 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jun 2024 17:50:22 GMT"
},
{
"version": "v3",
"created": "Tue, 24 Sep 2024 00:57:54 GMT"
},
{
"version": "v4",
"created": "Sat, 21 Dec 2024 05:14:39 GMT"
},
{
"version": "v5",
"created": "Fri, 21 Mar 2025 03:42:27 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Park",
"Jongwoo",
""
],
[
"Ranasinghe",
"Kanchana",
""
],
[
"Kahatapitiya",
"Kumara",
""
],
[
"Ryu",
"Wonjeong",
""
],
[
"Kim",
"Donghyun",
""
],
[
"Ryoo",
"Michael S.",
""
]
] | TITLE: Too Many Frames, Not All Useful: Efficient Strategies for Long-Form
Video QA
ABSTRACT: Long-form videos that span across wide temporal intervals are highly
information redundant and contain multiple distinct events or entities that are
often loosely related. Therefore, when performing long-form video question
answering (LVQA), all information necessary to generate a correct response can
often be contained within a small subset of frames. Recent literature explores the
use of large language models (LLMs) in LVQA benchmarks, achieving exceptional
performance, while relying on vision language models (VLMs) to convert all
visual content within videos into natural language. Such VLMs often
independently caption a large number of frames uniformly sampled from long
videos, which is not efficient and can mostly be redundant. Questioning these
decision choices, we explore optimal strategies for key-frame selection that
can significantly reduce these redundancies, namely Hierarchical Keyframe
Selector. Our proposed framework, LVNet, achieves state-of-the-art performance
at a comparable caption scale across three benchmark LVQA datasets: EgoSchema,
NExT-QA, and IntentQA, while also demonstrating a strong performance on videos
up to an hour long in VideoMME. Our code will be released publicly. The code
can be found at https://github.com/jongwoopark7978/LVNet.
|
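The LVNet record above argues that only a small subset of frames is needed for long-form video QA, but it does not detail the stages of its Hierarchical Keyframe Selector. The sketch below shows only the generic idea of relevance-based keyframe selection (rank frame embeddings against a question embedding and keep the top-k in temporal order); it is an assumption-laden stand-in, not the paper's method.

```python
import torch
import torch.nn.functional as F

def select_keyframes(frame_embs, question_emb, k=8):
    """frame_embs: (N, D) per-frame embeddings; question_emb: (D,) query embedding."""
    frame_embs = F.normalize(frame_embs, dim=-1)
    question_emb = F.normalize(question_emb, dim=-1)
    scores = frame_embs @ question_emb            # cosine similarity per frame
    topk = torch.topk(scores, k=min(k, frame_embs.size(0))).indices
    return torch.sort(topk).values                # keep temporal order

# Toy usage: caption only the selected frames instead of all sampled frames.
frames = torch.randn(900, 512)    # e.g. one embedding per uniformly sampled frame
question = torch.randn(512)
keyframe_ids = select_keyframes(frames, question, k=8)
```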
2406.09782 | Runze Liu | Runze Liu, Dongchen Zhu, Guanghui Zhang, Lei Wang, and Jiamao Li | Self-supervised Monocular Depth Estimation Based on Hierarchical
Feature-Guided Diffusion | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-supervised monocular depth estimation has received widespread attention
because of its capability to train without ground truth. In real-world
scenarios, the images may be blurry or noisy due to the influence of weather
conditions and inherent limitations of the camera. Therefore, it is
particularly important to develop a robust depth estimation model. Benefiting
from the training strategies of generative networks, generative-based methods
often exhibit enhanced robustness. In light of this, we employ the
generative-based diffusion model with a unique denoising training process for
self-supervised monocular depth estimation. Additionally, to further enhance
the robustness of the diffusion model, we probe into the influence of
perturbations on image features and propose a hierarchical feature-guided
denoising module. Furthermore, we explore the implicit depth within
reprojection and design an implicit depth consistency loss. This loss function
is not interfered with by the other subnetwork and can be targeted to constrain
the depth estimation network and ensure the scale consistency of depth within a
video sequence. We conduct experiments on the KITTI and Make3D datasets. The
results indicate that our approach stands out among generative-based models,
while also showcasing remarkable robustness.
| [
{
"version": "v1",
"created": "Fri, 14 Jun 2024 07:31:20 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 13:23:31 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Liu",
"Runze",
""
],
[
"Zhu",
"Dongchen",
""
],
[
"Zhang",
"Guanghui",
""
],
[
"Wang",
"Lei",
""
],
[
"Li",
"Jiamao",
""
]
] | TITLE: Self-supervised Monocular Depth Estimation Based on Hierarchical
Feature-Guided Diffusion
ABSTRACT: Self-supervised monocular depth estimation has received widespread attention
because of its capability to train without ground truth. In real-world
scenarios, the images may be blurry or noisy due to the influence of weather
conditions and inherent limitations of the camera. Therefore, it is
particularly important to develop a robust depth estimation model. Benefiting
from the training strategies of generative networks, generative-based methods
often exhibit enhanced robustness. In light of this, we employ the
generative-based diffusion model with a unique denoising training process for
self-supervised monocular depth estimation. Additionally, to further enhance
the robustness of the diffusion model, we probe into the influence of
perturbations on image features and propose a hierarchical feature-guided
denoising module. Furthermore, we explore the implicit depth within
reprojection and design an implicit depth consistency loss. This loss function
is not interfered with by the other subnetwork and can be targeted to constrain
the depth estimation network and ensure the scale consistency of depth within a
video sequence. We conduct experiments on the KITTI and Make3D datasets. The
results indicate that our approach stands out among generative-based models,
while also showcasing remarkable robustness.
|
2406.12082 | Anna Susmelj | Anna Susmelj, Mael Macuglia, Nata\v{s}a Tagasovska, Reto Sutter,
Sebastiano Caprara, Jean-Philippe Thiran, Ender Konukoglu | Uncertainty modeling for fine-tuned implicit functions | null | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Implicit functions such as Neural Radiance Fields (NeRFs), occupancy
networks, and signed distance functions (SDFs) have become pivotal in computer
vision for reconstructing detailed object shapes from sparse views. Achieving
optimal performance with these models can be challenging due to the extreme
sparsity of inputs and distribution shifts induced by data corruptions. To this
end, large, noise-free synthetic datasets can serve as shape priors to help
models fill in gaps, but the resulting reconstructions must be approached with
caution. Uncertainty estimation is crucial for assessing the quality of these
reconstructions, particularly in identifying areas where the model is uncertain
about the parts it has inferred from the prior. In this paper, we introduce
Dropsembles, a novel method for uncertainty estimation in tuned implicit
functions. We demonstrate the efficacy of our approach through a series of
experiments, starting with toy examples and progressing to a real-world
scenario. Specifically, we train a Convolutional Occupancy Network on synthetic
anatomical data and test it on low-resolution MRI segmentations of the lumbar
spine. Our results show that Dropsembles achieve the accuracy and calibration
levels of deep ensembles but with significantly less computational cost.
| [
{
"version": "v1",
"created": "Mon, 17 Jun 2024 20:46:18 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 15:06:41 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Susmelj",
"Anna",
""
],
[
"Macuglia",
"Mael",
""
],
[
"Tagasovska",
"Nataša",
""
],
[
"Sutter",
"Reto",
""
],
[
"Caprara",
"Sebastiano",
""
],
[
"Thiran",
"Jean-Philippe",
""
],
[
"Konukoglu",
"Ender",
""
]
] | TITLE: Uncertainty modeling for fine-tuned implicit functions
ABSTRACT: Implicit functions such as Neural Radiance Fields (NeRFs), occupancy
networks, and signed distance functions (SDFs) have become pivotal in computer
vision for reconstructing detailed object shapes from sparse views. Achieving
optimal performance with these models can be challenging due to the extreme
sparsity of inputs and distribution shifts induced by data corruptions. To this
end, large, noise-free synthetic datasets can serve as shape priors to help
models fill in gaps, but the resulting reconstructions must be approached with
caution. Uncertainty estimation is crucial for assessing the quality of these
reconstructions, particularly in identifying areas where the model is uncertain
about the parts it has inferred from the prior. In this paper, we introduce
Dropsembles, a novel method for uncertainty estimation in tuned implicit
functions. We demonstrate the efficacy of our approach through a series of
experiments, starting with toy examples and progressing to a real-world
scenario. Specifically, we train a Convolutional Occupancy Network on synthetic
anatomical data and test it on low-resolution MRI segmentations of the lumbar
spine. Our results show that Dropsembles achieve the accuracy and calibration
levels of deep ensembles but with significantly less computational cost.
|
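The Dropsembles record above promises ensemble-level uncertainty at lower cost, but the abstract does not spell out the algorithm. As a rough illustration of the dropout-plus-ensemble family of ideas its name alludes to, the sketch below uses Monte Carlo dropout over an occupancy-style head; every name and dimension is made up, and this is not the paper's procedure.

```python
import torch
import torch.nn as nn

class TinyOccupancyHead(nn.Module):
    def __init__(self, in_dim=32, hidden=64, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    """Keep dropout active at test time and aggregate stochastic forward passes."""
    model.train()  # enables dropout; in a real setup, freeze batch-norm statistics
    preds = torch.stack([model(x) for _ in range(n_samples)], dim=0)
    return preds.mean(0), preds.std(0)  # predictive mean and an uncertainty proxy

model = TinyOccupancyHead()
features = torch.randn(128, 32)       # e.g. per-point features from an encoder
mean_occ, occ_uncertainty = mc_dropout_predict(model, features)
```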
2406.12719 | Kushal Raj Bhandari | Kushal Raj Bhandari, Sixue Xing, Soham Dan, Jianxi Gao | On the Robustness of Language Models for Tabular Question Answering | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs), already shown to ace various text comprehension
tasks, have also remarkably been shown to tackle table comprehension tasks
without specific training. While previous research has explored LLM
capabilities with tabular dataset tasks, our study assesses the influence of
\textit{in-context learning}, \textit{model scale}, \textit{instruction
tuning}, and \textit{domain biases} on Tabular Question Answering (TQA). We
evaluate the robustness of LLMs on Wikipedia-based \textbf{WTQ}, financial
report-based \textbf{TAT-QA}, and scientific claims-based \textbf{SCITAB}, TQA
datasets, focusing on their ability to interpret tabular data under various
augmentations and perturbations robustly. Our findings indicate that
instructions significantly enhance performance, with recent models exhibiting
greater robustness over earlier versions. However, data contamination and
practical reliability issues persist, especially with \textbf{WTQ}. We
highlight the need for improved methodologies, including structure-aware
self-attention mechanisms and better handling of domain-specific tabular data,
to develop more reliable LLMs for table comprehension.
| [
{
"version": "v1",
"created": "Tue, 18 Jun 2024 15:41:15 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 00:31:06 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Bhandari",
"Kushal Raj",
""
],
[
"Xing",
"Sixue",
""
],
[
"Dan",
"Soham",
""
],
[
"Gao",
"Jianxi",
""
]
] | TITLE: On the Robustness of Language Models for Tabular Question Answering
ABSTRACT: Large Language Models (LLMs), already shown to ace various text comprehension
tasks, have also remarkably been shown to tackle table comprehension tasks
without specific training. While previous research has explored LLM
capabilities with tabular dataset tasks, our study assesses the influence of
\textit{in-context learning}, \textit{model scale}, \textit{instruction
tuning}, and \textit{domain biases} on Tabular Question Answering (TQA). We
evaluate the robustness of LLMs on Wikipedia-based \textbf{WTQ}, financial
report-based \textbf{TAT-QA}, and scientific claims-based \textbf{SCITAB}, TQA
datasets, focusing on their ability to interpret tabular data under various
augmentations and perturbations robustly. Our findings indicate that
instructions significantly enhance performance, with recent models exhibiting
greater robustness over earlier versions. However, data contamination and
practical reliability issues persist, especially with \textbf{WTQ}. We
highlight the need for improved methodologies, including structure-aware
self-attention mechanisms and better handling of domain-specific tabular data,
to develop more reliable LLMs for table comprehension.
|
2406.17382 | Matej Hoffmann Ph.D. | Filipe Gama, Matej Misar, Lukas Navara, Sergiu T. Popescu, Matej
Hoffmann | Automatic infant 2D pose estimation from videos: comparing seven deep
neural network methods | 34 pages, 7 figures, 20 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Automatic markerless estimation of infant posture and motion from ordinary
videos carries great potential for movement studies "in the wild", facilitating
understanding of motor development and massively increasing the chances of
early diagnosis of disorders. There is rapid development of human pose
estimation methods in computer vision thanks to advances in deep learning and
machine learning. However, these methods are trained on datasets that feature
adults in different contexts. This work tests and compares seven popular
methods (AlphaPose, DeepLabCut/DeeperCut, Detectron2, HRNet,
MediaPipe/BlazePose, OpenPose, and ViTPose) on videos of infants in supine
position and in more complex settings. Surprisingly, all methods except
DeepLabCut and MediaPipe have competitive performance without additional
finetuning, with ViTPose performing best. Next to standard performance metrics
(average precision and recall), we introduce errors expressed in the
neck-mid-hip (torso length) ratio and additionally study missed and redundant
detections, and the reliability of the internal confidence ratings of the
different methods, which are relevant for downstream tasks. Among the networks
with competitive performance, only AlphaPose could run close to real time (27
fps) on our machine. We provide documented Docker containers or instructions
for all the methods we used, our analysis scripts, and the processed data at
https://hub.docker.com/u/humanoidsctu and https://osf.io/x465b/.
| [
{
"version": "v1",
"created": "Tue, 25 Jun 2024 08:58:53 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Jun 2024 14:59:18 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Mar 2025 11:23:11 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Gama",
"Filipe",
""
],
[
"Misar",
"Matej",
""
],
[
"Navara",
"Lukas",
""
],
[
"Popescu",
"Sergiu T.",
""
],
[
"Hoffmann",
"Matej",
""
]
] | TITLE: Automatic infant 2D pose estimation from videos: comparing seven deep
neural network methods
ABSTRACT: Automatic markerless estimation of infant posture and motion from ordinary
videos carries great potential for movement studies "in the wild", facilitating
understanding of motor development and massively increasing the chances of
early diagnosis of disorders. There is rapid development of human pose
estimation methods in computer vision thanks to advances in deep learning and
machine learning. However, these methods are trained on datasets that feature
adults in different contexts. This work tests and compares seven popular
methods (AlphaPose, DeepLabCut/DeeperCut, Detectron2, HRNet,
MediaPipe/BlazePose, OpenPose, and ViTPose) on videos of infants in supine
position and in more complex settings. Surprisingly, all methods except
DeepLabCut and MediaPipe have competitive performance without additional
finetuning, with ViTPose performing best. Next to standard performance metrics
(average precision and recall), we introduce errors expressed in the
neck-mid-hip (torso length) ratio and additionally study missed and redundant
detections, and the reliability of the internal confidence ratings of the
different methods, which are relevant for downstream tasks. Among the networks
with competitive performance, only AlphaPose could run close to real time (27
fps) on our machine. We provide documented Docker containers or instructions
for all the methods we used, our analysis scripts, and the processed data at
https://hub.docker.com/u/humanoidsctu and https://osf.io/x465b/.
|
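The infant pose record above reports errors expressed in neck-to-mid-hip (torso length) units. A minimal sketch of that normalization is shown below; the keypoint indices follow a hypothetical COCO-like layout, since the compared methods use different skeleton conventions.

```python
import numpy as np

NECK, LEFT_HIP, RIGHT_HIP = 1, 11, 12   # illustrative indices only

def torso_normalized_error(pred, gt):
    """pred, gt: (K, 2) arrays of 2D keypoints for one frame."""
    mid_hip = 0.5 * (gt[LEFT_HIP] + gt[RIGHT_HIP])
    torso_len = np.linalg.norm(gt[NECK] - mid_hip)
    per_joint_err = np.linalg.norm(pred - gt, axis=1)
    return per_joint_err / max(torso_len, 1e-6)   # error in torso-length units

# Toy usage with random keypoints.
rng = np.random.default_rng(0)
gt = rng.uniform(0, 100, size=(17, 2))
pred = gt + rng.normal(scale=3.0, size=(17, 2))
print(torso_normalized_error(pred, gt).mean())
```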
2407.01238 | Gabriele Civitarese Dr. | Gabriele Civitarese, Michele Fiori, Priyankar Choudhary, Claudio
Bettini | Large Language Models are Zero-Shot Recognizers for Activities of Daily
Living | Paper accepted for publication in the ACM Transactions on Intelligent
Systems and Technology (TIST) journal | null | null | null | cs.AI cs.CL eess.SP | http://creativecommons.org/licenses/by/4.0/ | The sensor-based recognition of Activities of Daily Living (ADLs) in smart
home environments enables several applications in the areas of energy
management, safety, well-being, and healthcare. ADLs recognition is typically
based on deep learning methods requiring large datasets to be trained.
Recently, several studies proved that Large Language Models (LLMs) effectively
capture common-sense knowledge about human activities. However, the
effectiveness of LLMs for ADLs recognition in smart home environments still
deserves to be investigated. In this work, we propose ADL-LLM, a novel
LLM-based ADLs recognition system. ADL-LLM transforms raw sensor data into
textual representations that are processed by an LLM to perform zero-shot ADLs
recognition. Moreover, in the scenario where a small labeled dataset is
available, ADL-LLM can also be empowered with few-shot prompting. We evaluated
ADL-LLM on two public datasets, showing its effectiveness in this domain.
| [
{
"version": "v1",
"created": "Mon, 1 Jul 2024 12:32:38 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Oct 2024 13:31:09 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 20:43:37 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Civitarese",
"Gabriele",
""
],
[
"Fiori",
"Michele",
""
],
[
"Choudhary",
"Priyankar",
""
],
[
"Bettini",
"Claudio",
""
]
] | TITLE: Large Language Models are Zero-Shot Recognizers for Activities of Daily
Living
ABSTRACT: The sensor-based recognition of Activities of Daily Living (ADLs) in smart
home environments enables several applications in the areas of energy
management, safety, well-being, and healthcare. ADLs recognition is typically
based on deep learning methods requiring large datasets to be trained.
Recently, several studies proved that Large Language Models (LLMs) effectively
capture common-sense knowledge about human activities. However, the
effectiveness of LLMs for ADLs recognition in smart home environments still
deserves to be investigated. In this work, we propose ADL-LLM, a novel
LLM-based ADLs recognition system. ADL-LLM transforms raw sensor data into
textual representations that are processed by an LLM to perform zero-shot ADLs
recognition. Moreover, in the scenario where a small labeled dataset is
available, ADL-LLM can also be empowered with few-shot prompting. We evaluated
ADL-LLM on two public datasets, showing its effectiveness in this domain.
|
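The ADL-LLM record above says raw sensor data is turned into a textual representation and handed to an LLM for zero-shot recognition, without giving the exact textual format or prompt wording. The sketch below is therefore only an illustrative prompt-construction step; the label set, event schema, and phrasing are assumptions.

```python
from datetime import datetime

ADL_LABELS = ["cooking", "sleeping", "watching TV", "showering", "eating"]

def events_to_text(events):
    """events: list of (timestamp, sensor_name, value) tuples."""
    lines = []
    for ts, sensor, value in events:
        t = datetime.fromtimestamp(ts).strftime("%H:%M:%S")
        lines.append(f"- at {t}, sensor '{sensor}' reported '{value}'")
    return "\n".join(lines)

def build_zero_shot_prompt(events):
    return (
        "The following smart-home sensor events were observed:\n"
        f"{events_to_text(events)}\n\n"
        "Which activity of daily living is the resident most likely performing? "
        f"Answer with exactly one of: {', '.join(ADL_LABELS)}."
    )

# Toy usage: pass the resulting string to any chat-style LLM API.
window = [(1700000000, "kitchen_motion", "ON"),
          (1700000012, "fridge_door", "OPEN"),
          (1700000040, "stove_power", "ON")]
print(build_zero_shot_prompt(window))
```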
2407.04104 | Yaoming Zhen | Yaoming Zhen and Jin-Hong Du | Network-based Neighborhood regression | null | null | null | null | stat.ME cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given the ubiquity of modularity in biological systems, module-level
regulation analysis is vital for understanding biological systems across
various levels and their dynamics. Current statistical analysis on biological
modules predominantly focuses on either detecting the functional modules in
biological networks or sub-group regression on the biological features without
using the network data. This paper proposes a novel network-based neighborhood
regression framework whose regression functions depend on both the global
community-level information and local connectivity structures among entities.
An efficient community-wise least square optimization approach is developed to
uncover the strength of regulation among the network modules while enabling
asymptotic inference. With random graph theory, we derive non-asymptotic
estimation error bounds for the proposed estimator, achieving exact minimax
optimality. Unlike the root-n consistency typical in canonical linear
regression, our model exhibits linear consistency in the number of nodes n,
highlighting the advantage of incorporating neighborhood information. The
effectiveness of the proposed framework is further supported by extensive
numerical experiments. Application to whole-exome sequencing and RNA-sequencing
Autism datasets demonstrates the usage of the proposed method in identifying
the association between the gene modules of genetic variations and the gene
modules of genomic differential expressions.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2024 18:08:40 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 22:37:17 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zhen",
"Yaoming",
""
],
[
"Du",
"Jin-Hong",
""
]
] | TITLE: Network-based Neighborhood regression
ABSTRACT: Given the ubiquity of modularity in biological systems, module-level
regulation analysis is vital for understanding biological systems across
various levels and their dynamics. Current statistical analysis on biological
modules predominantly focuses on either detecting the functional modules in
biological networks or sub-group regression on the biological features without
using the network data. This paper proposes a novel network-based neighborhood
regression framework whose regression functions depend on both the global
community-level information and local connectivity structures among entities.
An efficient community-wise least square optimization approach is developed to
uncover the strength of regulation among the network modules while enabling
asymptotic inference. With random graph theory, we derive non-asymptotic
estimation error bounds for the proposed estimator, achieving exact minimax
optimality. Unlike the root-n consistency typical in canonical linear
regression, our model exhibits linear consistency in the number of nodes n,
highlighting the advantage of incorporating neighborhood information. The
effectiveness of the proposed framework is further supported by extensive
numerical experiments. Application to whole-exome sequencing and RNA-sequencing
Autism datasets demonstrates the usage of the proposed method in identifying
the association between the gene modules of genetic variations and the gene
modules of genomic differential expressions.
|
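The neighborhood regression record above describes regression functions that depend on both community-level information and local connectivity, fitted by community-wise least squares. The sketch below is a heavily simplified illustration of that idea (own plus neighborhood-averaged covariates, one least-squares fit per community); it is not the paper's estimator or its inference procedure.

```python
import numpy as np

def neighborhood_regression(A, X, y, communities):
    """A: (n, n) adjacency; X: (n, p) covariates; y: (n,) responses;
    communities: (n,) integer community labels."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    neigh_X = (A @ X) / deg                     # average covariates of neighbors
    Z = np.hstack([X, neigh_X])                 # own + neighborhood features
    coefs = {}
    for c in np.unique(communities):
        idx = communities == c
        coefs[c], *_ = np.linalg.lstsq(Z[idx], y[idx], rcond=None)
    return coefs

# Toy usage on a random sparse graph with two communities.
rng = np.random.default_rng(0)
n, p = 60, 3
A = (rng.random((n, n)) < 0.1).astype(float)
np.fill_diagonal(A, 0)
X, y = rng.normal(size=(n, p)), rng.normal(size=n)
communities = rng.integers(0, 2, size=n)
betas = neighborhood_regression(A, X, y, communities)
```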
2407.09230 | Chinedu Nwoye | Chinedu Innocent Nwoye, Rupak Bose, Kareem Elgohary, Lorenzo Arboit,
Giorgio Carlino, Jo\"el L. Lavanchy, Pietro Mascagni, Nicolas Padoy | Surgical Text-to-Image Generation | 13 pages, 13 figures, 3 tables, published in Pattern Recognition
Letters 2025, project page at https://camma-public.github.io/endogen/ | Pattern Recognition Letters, Volume 190, April 2025, Pages 73-80 | 10.1016/j.patrec.2025.02.002 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Acquiring surgical data for research and development is significantly
hindered by high annotation costs and practical and ethical constraints.
Utilizing synthetically generated images could offer a valuable alternative. In
this work, we explore adapting text-to-image generative models for the surgical
domain using the CholecT50 dataset, which provides surgical images annotated
with action triplets (instrument, verb, target). We investigate several
language models and find T5 to offer more distinct features for differentiating
surgical actions on triplet-based textual inputs, showcasing stronger
alignment between long and triplet-based captions. To address challenges in
training text-to-image models solely on triplet-based captions without
additional inputs and supervisory signals, we discover that triplet text
embeddings are instrument-centric in the latent space. Leveraging this insight,
we design an instrument-based class balancing technique to counteract data
imbalance and skewness, improving training convergence. Extending Imagen, a
diffusion-based generative model, we develop Surgical Imagen to generate
photorealistic and activity-aligned surgical images from triplet-based textual
prompts. We assess the model on quality, alignment, reasoning, and knowledge,
achieving FID and CLIP scores of 3.7 and 26.8% respectively. Human expert
survey shows that participants were highly challenged by the realistic
characteristics of the generated samples, demonstrating Surgical Imagen's
effectiveness as a practical alternative to real data collection.
| [
{
"version": "v1",
"created": "Fri, 12 Jul 2024 12:49:11 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Jul 2024 16:40:23 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Mar 2025 09:57:02 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Nwoye",
"Chinedu Innocent",
""
],
[
"Bose",
"Rupak",
""
],
[
"Elgohary",
"Kareem",
""
],
[
"Arboit",
"Lorenzo",
""
],
[
"Carlino",
"Giorgio",
""
],
[
"Lavanchy",
"Joël L.",
""
],
[
"Mascagni",
"Pietro",
""
],
[
"Padoy",
"Nicolas",
""
]
] | TITLE: Surgical Text-to-Image Generation
ABSTRACT: Acquiring surgical data for research and development is significantly
hindered by high annotation costs and practical and ethical constraints.
Utilizing synthetically generated images could offer a valuable alternative. In
this work, we explore adapting text-to-image generative models for the surgical
domain using the CholecT50 dataset, which provides surgical images annotated
with action triplets (instrument, verb, target). We investigate several
language models and find T5 to offer more distinct features for differentiating
surgical actions on triplet-based textual inputs, showcasing stronger
alignment between long and triplet-based captions. To address challenges in
training text-to-image models solely on triplet-based captions without
additional inputs and supervisory signals, we discover that triplet text
embeddings are instrument-centric in the latent space. Leveraging this insight,
we design an instrument-based class balancing technique to counteract data
imbalance and skewness, improving training convergence. Extending Imagen, a
diffusion-based generative model, we develop Surgical Imagen to generate
photorealistic and activity-aligned surgical images from triplet-based textual
prompts. We assess the model on quality, alignment, reasoning, and knowledge,
achieving FID and CLIP scores of 3.7 and 26.8% respectively. Human expert
survey shows that participants were highly challenged by the realistic
characteristics of the generated samples, demonstrating Surgical Imagen's
effectiveness as a practical alternative to real data collection.
|
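The Surgical Imagen record above mentions an instrument-based class balancing technique to counteract data imbalance. The exact weighting scheme is not given, so the sketch below shows only the common inverse-frequency sampling pattern that idea suggests; instrument names and counts are invented.

```python
import torch
from collections import Counter
from torch.utils.data import WeightedRandomSampler

# Illustrative per-sample instrument labels for a small training set.
instrument_per_sample = ["grasper", "hook", "grasper", "scissors", "hook",
                         "grasper", "irrigator", "grasper", "hook", "clipper"]

counts = Counter(instrument_per_sample)
weights = torch.tensor([1.0 / counts[inst] for inst in instrument_per_sample])

# Draw training samples so each instrument class is seen roughly equally often.
sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
balanced_indices = list(sampler)   # feed these indices to a DataLoader
```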
2407.14757 | Arrun Sivasubramanian | Jayanth Mohan, Arrun Sivasubramanian, V Sowmya and Ravi Vinayakumar | Enhancing Skin Disease Classification Leveraging Transformer-based Deep
Learning Architectures and Explainable AI | Submitted to Computers in Biology and Medicine | null | 10.1016/j.compbiomed.2025.110007 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Skin diseases affect over a third of the global population, yet their impact
is often underestimated. Automating skin disease classification to assist
doctors with their prognosis might be difficult. Nevertheless, due to efficient
feature extraction pipelines, deep learning techniques have shown much promise
for various tasks, including dermatological disease identification. This study
uses a skin disease dataset with 31 classes and compares it with all versions
of Vision Transformers, Swin Transformers and DivoV2. The analysis is also
extended to compare with benchmark convolution-based architecture presented in
the literature. Transfer learning with ImageNet1k weights on the skin disease
dataset contributes to a high test accuracy of 96.48\% and an F1-Score of
0.9727 using DinoV2, which is almost a 10\% improvement over this data's
current benchmark results. The performance of DinoV2 was also compared for the
HAM10000 and Dermnet datasets to test the model's robustness, and the trained
model overcomes the benchmark results by a slight margin in test accuracy and
in F1-Score on the 23 and 7 class datasets. The results are substantiated using
explainable AI frameworks like GradCAM and SHAP, which provide precise image
locations to map the disease, assisting dermatologists in early detection,
prompt prognosis, and treatment.
| [
{
"version": "v1",
"created": "Sat, 20 Jul 2024 05:38:00 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Mohan",
"Jayanth",
""
],
[
"Sivasubramanian",
"Arrun",
""
],
[
"Sowmya",
"V",
""
],
[
"Vinayakumar",
"Ravi",
""
]
] | TITLE: Enhancing Skin Disease Classification Leveraging Transformer-based Deep
Learning Architectures and Explainable AI
ABSTRACT: Skin diseases affect over a third of the global population, yet their impact
is often underestimated. Automating skin disease classification to assist
doctors with their prognosis might be difficult. Nevertheless, due to efficient
feature extraction pipelines, deep learning techniques have shown much promise
for various tasks, including dermatological disease identification. This study
uses a skin disease dataset with 31 classes and compares it with all versions
of Vision Transformers, Swin Transformers and DinoV2. The analysis is also
extended to compare with benchmark convolution-based architecture presented in
the literature. Transfer learning with ImageNet1k weights on the skin disease
dataset contributes to a high test accuracy of 96.48\% and an F1-Score of
0.9727 using DinoV2, which is almost a 10\% improvement over this data's
current benchmark results. The performance of DinoV2 was also compared for the
HAM10000 and Dermnet datasets to test the model's robustness, and the trained
model overcomes the benchmark results by a slight margin in test accuracy and
in F1-Score on the 23 and 7 class datasets. The results are substantiated using
explainable AI frameworks like GradCAM and SHAP, which provide precise image
locations to map the disease, assisting dermatologists in early detection,
prompt prognosis, and treatment.
|
2407.14823 | Yukai Shi | Yukai Shi, Zhipeng Weng, Yupei Lin, Cidan Shi, Xiaojun Yang, and Liang
Lin | Scaling Up Single Image Dehazing Algorithm by Cross-Data Vision
Alignment for Richer Representation Learning and Beyond | A cross-dataset vision alignment and augmentation technology is
proposed to boost generalizable feature learning in the de-hazing task | null | null | null | cs.CV cs.AI cs.LG cs.MM eess.IV | http://creativecommons.org/licenses/by/4.0/ | In recent years, deep neural network tasks have increasingly relied on
high-quality image inputs. With the development of high-resolution
representation learning, the task of image dehazing has received significant
attention. Previously, many methods collect diverse image data for large-scale
training to boost the performance on a target scene. Ignoring the domain gap
between different data, former de-hazing methods simply adopt multiple datasets
for explicit large-scale training, which often violates the assumptions of the
methods themselves. To address this problem, we propose a novel method of
cross-data vision alignment for richer representation learning to improve the
existing dehazing methodology. Specifically, we argue that internal and external
knowledge should be further adapted in a self-supervised manner to fill the
domain gap. By using cross-data external alignment, the datasets inherit samples
from different domains that are firmly aligned, making the model learn more
robust and generalizable features. By using the internal augmentation method,
the model can fully exploit local information within the images and thus obtain
more image details. To demonstrate the effectiveness of our
proposed method, we conduct training on the Natural Image Dataset (NID).
Experimental results show that our method clearly resolves the domain gap in
different dehazing datasets and presents a new pipeline for large-scale
training in the dehazing task. Our approach significantly outperforms other
advanced methods in dehazing and produces dehazed images that are closest to
real haze-free images.
| [
{
"version": "v1",
"created": "Sat, 20 Jul 2024 10:00:20 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 18:22:58 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Shi",
"Yukai",
""
],
[
"Weng",
"Zhipeng",
""
],
[
"Lin",
"Yupei",
""
],
[
"Shi",
"Cidan",
""
],
[
"Yang",
"Xiaojun",
""
],
[
"Lin",
"Liang",
""
]
] | TITLE: Scaling Up Single Image Dehazing Algorithm by Cross-Data Vision
Alignment for Richer Representation Learning and Beyond
ABSTRACT: In recent years, deep neural network tasks have increasingly relied on
high-quality image inputs. With the development of high-resolution
representation learning, the task of image dehazing has received significant
attention. Previously, many methods collect diverse image data for large-scale
training to boost the performance on a target scene. Ignoring the domain gap
between different data, former de-hazing methods simply adopt multiple datasets
for explicit large-scale training, which often violates the assumptions of the
methods themselves. To address this problem, we propose a novel method of
cross-data vision alignment for richer representation learning to improve the
existing dehazing methodology. Specifically, we argue that internal and external
knowledge should be further adapted in a self-supervised manner to fill the
domain gap. By using cross-data external alignment, the datasets inherit samples
from different domains that are firmly aligned, making the model learn more
robust and generalizable features. By using the internal augmentation method,
the model can fully exploit local information within the images and thus obtain
more image details. To demonstrate the effectiveness of our
proposed method, we conduct training on the Natural Image Dataset (NID).
Experimental results show that our method clearly resolves the domain gap in
different dehazing datasets and presents a new pipeline for large-scale
training in the dehazing task. Our approach significantly outperforms other
advanced methods in dehazing and produces dehazed images that are closest to
real haze-free images.
|
2407.17777 | Shiqi Jiang | Shenghong Dai, Shiqi Jiang, Yifan Yang, Ting Cao, Mo Li, Suman
Banerjee, Lili Qiu | Babel: A Scalable Pre-trained Model for Multi-Modal Sensing via
Expandable Modality Alignment | Accepted by SenSys'25 | null | null | null | cs.AI cs.CV cs.LG eess.SP | http://creativecommons.org/licenses/by/4.0/ | This paper presents Babel, the expandable modality alignment model, specially
designed for multi-modal sensing. While there has been considerable work on
multi-modality alignment, they all struggle to effectively incorporate multiple
sensing modalities due to the data scarcity constraints. How to utilize
multi-modal data with partial pairings in sensing remains an unresolved
challenge. Babel tackles this challenge by introducing the concept of
expandable modality alignment. The key idea involves transforming the
N-modality alignment into a series of binary-modality alignments. Novel
techniques are also proposed to further mitigate data scarcity issue and
balance the contribution of the newly incorporated modality with the previously
established modality alignment during the expandable alignment process. We
provide the comprehensive implementation. In the pre-training phase, Babel
currently aligns 6 sensing modalities, namely Wi-Fi, mmWave, IMU, LiDAR, video,
and depth. For the deployment phase, as a foundation model, any single or
combination of aligned modalities could be selected from Babel and applied to
downstream tasks. Evaluation demonstrates Babel's outstanding performance on
eight human activity recognition datasets, compared to a broad range of
baselines e.g., the SOTA single-modal sensing networks, multi-modal sensing
framework, and multi-modal large language models. Babel not only improves the
performance of individual modality sensing (12% averaged accuracy improvement),
but also effectively fuses multiple available modalities (up to 22% accuracy
increase). Case studies also highlight emerging application scenarios empowered
by Babel, including cross-modality retrieval (i.e., sensing imaging), and
bridging LLM for sensing comprehension.
| [
{
"version": "v1",
"created": "Thu, 25 Jul 2024 05:10:48 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 10:51:22 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Dai",
"Shenghong",
""
],
[
"Jiang",
"Shiqi",
""
],
[
"Yang",
"Yifan",
""
],
[
"Cao",
"Ting",
""
],
[
"Li",
"Mo",
""
],
[
"Banerjee",
"Suman",
""
],
[
"Qiu",
"Lili",
""
]
] | TITLE: Babel: A Scalable Pre-trained Model for Multi-Modal Sensing via
Expandable Modality Alignment
ABSTRACT: This paper presents Babel, the expandable modality alignment model, specially
designed for multi-modal sensing. While there has been considerable work on
multi-modality alignment, they all struggle to effectively incorporate multiple
sensing modalities due to the data scarcity constraints. How to utilize
multi-modal data with partial pairings in sensing remains an unresolved
challenge. Babel tackles this challenge by introducing the concept of
expandable modality alignment. The key idea involves transforming the
N-modality alignment into a series of binary-modality alignments. Novel
techniques are also proposed to further mitigate the data scarcity issue and
balance the contribution of the newly incorporated modality with the previously
established modality alignment during the expandable alignment process. We
provide a comprehensive implementation. In the pre-training phase, Babel
currently aligns 6 sensing modalities, namely Wi-Fi, mmWave, IMU, LiDAR, video,
and depth. For the deployment phase, as a foundation model, any single or
combination of aligned modalities could be selected from Babel and applied to
downstream tasks. Evaluation demonstrates Babel's outstanding performance on
eight human activity recognition datasets, compared to a broad range of
baselines e.g., the SOTA single-modal sensing networks, multi-modal sensing
framework, and multi-modal large language models. Babel not only improves the
performance of individual modality sensing (12% averaged accuracy improvement),
but also effectively fuses multiple available modalities (up to 22% accuracy
increase). Case studies also highlight emerging application scenarios empowered
by Babel, including cross-modality retrieval (i.e., sensing imaging), and
bridging LLM for sensing comprehension.
|
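The Babel record above describes decomposing N-modality alignment into a series of binary-modality alignments. Its training objective is not specified in the abstract, so the sketch below shows one common way such a pairwise step could look (a symmetric InfoNCE-style contrastive loss between a frozen anchor modality and a newly added one); encoders, dimensions, and modality names are assumptions.

```python
import torch
import torch.nn.functional as F

def binary_alignment_loss(emb_a, emb_b, temperature=0.07):
    """emb_a, emb_b: (B, D) embeddings of paired samples from two modalities."""
    a = F.normalize(emb_a, dim=-1)
    b = F.normalize(emb_b, dim=-1)
    logits = a @ b.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric contrastive loss: match a->b and b->a along the diagonal.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: align a new modality against an already-trained anchor modality.
imu_emb = torch.randn(16, 256)                        # frozen anchor embeddings
wifi_emb = torch.randn(16, 256, requires_grad=True)   # new modality being added
loss = binary_alignment_loss(wifi_emb, imu_emb)
loss.backward()
```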
2407.19711 | Shuaiyu Xie | Shuaiyu Xie, Jian Wang, Hanbin He, Zhihao Wang, Yuqi Zhao, Neng Zhang,
Bing Li | TVDiag: A Task-oriented and View-invariant Failure Diagnosis Framework
with Multimodal Data | 32 pages | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Microservice-based systems often suffer from reliability issues due to their
intricate interactions and expanding scale. With the rapid growth of
observability techniques, various methods have been proposed to achieve failure
diagnosis, including root cause localization and failure type identification,
by leveraging diverse monitoring data such as logs, metrics, or traces.
However, traditional failure diagnosis methods that use single-modal data can
hardly cover all failure scenarios due to the restricted information. Several
failure diagnosis methods have been recently proposed to integrate multimodal
data based on deep learning. These methods, however, tend to combine modalities
indiscriminately and treat them equally in failure diagnosis, ignoring the
relationship between specific modalities and different diagnostic tasks. This
oversight hinders the effective utilization of the unique advantages offered by
each modality. To address the limitation, we propose \textit{TVDiag}, a
multimodal failure diagnosis framework for locating culprit microservice
instances and identifying their failure types (e.g., Net-packets Corruption) in
microservice-based systems. \textit{TVDiag} employs task-oriented learning to
enhance the potential advantages of each modality and establishes cross-modal
associations based on contrastive learning to extract view-invariant failure
information. Furthermore, we develop a graph-level data augmentation strategy
that randomly inactivates the observability of some normal microservice
instances during training to mitigate the shortage of training data.
Experimental results show that \textit{TVDiag} outperforms state-of-the-art
methods in multimodal failure diagnosis, achieving at least a 55.94\% higher
$HR@1$ accuracy and over a 4.08\% increase in F1-score across two datasets.
| [
{
"version": "v1",
"created": "Mon, 29 Jul 2024 05:26:57 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Aug 2024 02:50:15 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Mar 2025 01:01:55 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Xie",
"Shuaiyu",
""
],
[
"Wang",
"Jian",
""
],
[
"He",
"Hanbin",
""
],
[
"Wang",
"Zhihao",
""
],
[
"Zhao",
"Yuqi",
""
],
[
"Zhang",
"Neng",
""
],
[
"Li",
"Bing",
""
]
] | TITLE: TVDiag: A Task-oriented and View-invariant Failure Diagnosis Framework
with Multimodal Data
ABSTRACT: Microservice-based systems often suffer from reliability issues due to their
intricate interactions and expanding scale. With the rapid growth of
observability techniques, various methods have been proposed to achieve failure
diagnosis, including root cause localization and failure type identification,
by leveraging diverse monitoring data such as logs, metrics, or traces.
However, traditional failure diagnosis methods that use single-modal data can
hardly cover all failure scenarios due to the restricted information. Several
failure diagnosis methods have been recently proposed to integrate multimodal
data based on deep learning. These methods, however, tend to combine modalities
indiscriminately and treat them equally in failure diagnosis, ignoring the
relationship between specific modalities and different diagnostic tasks. This
oversight hinders the effective utilization of the unique advantages offered by
each modality. To address the limitation, we propose \textit{TVDiag}, a
multimodal failure diagnosis framework for locating culprit microservice
instances and identifying their failure types (e.g., Net-packets Corruption) in
microservice-based systems. \textit{TVDiag} employs task-oriented learning to
enhance the potential advantages of each modality and establishes cross-modal
associations based on contrastive learning to extract view-invariant failure
information. Furthermore, we develop a graph-level data augmentation strategy
that randomly inactivates the observability of some normal microservice
instances during training to mitigate the shortage of training data.
Experimental results show that \textit{TVDiag} outperforms state-of-the-art
methods in multimodal failure diagnosis, achieving at least a 55.94\% higher
$HR@1$ accuracy and over a 4.08\% increase in F1-score across two datasets.
|
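The TVDiag record above describes a graph-level augmentation that randomly inactivates the observability of some normal microservice instances during training. A minimal sketch of that masking idea is shown below; the graph representation, masking value, and instance names are illustrative rather than TVDiag's actual data structures.

```python
import random

def mask_normal_instances(instance_features, culprit_ids, drop_prob=0.3, seed=None):
    """instance_features: dict {instance_id: feature_vector (list of floats)}.
    Randomly zeroes out non-culprit instances to simulate missing observability."""
    rng = random.Random(seed)
    augmented = {}
    for inst_id, feats in instance_features.items():
        if inst_id not in culprit_ids and rng.random() < drop_prob:
            augmented[inst_id] = [0.0] * len(feats)   # pretend it was unobserved
        else:
            augmented[inst_id] = list(feats)
    return augmented

# Toy usage on a three-instance graph with one culprit.
graph = {"svc-a": [0.2, 1.3], "svc-b": [0.9, 0.1], "svc-c": [0.4, 0.7]}
aug = mask_normal_instances(graph, culprit_ids={"svc-b"}, drop_prob=0.5, seed=42)
```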
2408.01372 | Muhammad Ahmad | Muhammad Ahmad, Muhammad Hassaan Farooq Butt, Adil Mehmood Khan,
Manuel Mazzara, Salvatore Distefano, Muhammad Usama, Swalpa Kumar Roy,
Jocelyn Chanussot, Danfeng Hong | Spatial and Spatial-Spectral Morphological Mamba for Hyperspectral Image
Classification | null | null | 10.1016/j.neucom.2025.129995 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in transformers, specifically self-attention mechanisms,
have significantly improved hyperspectral image (HSI) classification. However,
these models often suffer from inefficiencies, as their computational
complexity scales quadratically with sequence length. To address these
challenges, we propose the morphological spatial mamba (SMM) and morphological
spatial-spectral Mamba (SSMM) model (MorpMamba), which combines the strengths
of morphological operations and the state space model framework, offering a
more computationally efficient alternative to transformers. In MorpMamba, a
novel token generation module first converts HSI patches into spatial-spectral
tokens. These tokens are then processed through morphological operations such
as erosion and dilation, utilizing depthwise separable convolutions to capture
structural and shape information. A token enhancement module refines these
features by dynamically adjusting the spatial and spectral tokens based on
central HSI regions, ensuring effective feature fusion within each block.
Subsequently, multi-head self-attention is applied to further enrich the
feature representations, allowing the model to capture complex relationships
and dependencies within the data. Finally, the enhanced tokens are fed into a
state space module, which efficiently models the temporal evolution of the
features for classification. Experimental results on widely used HSI datasets
demonstrate that MorpMamba achieves superior parametric efficiency compared to
traditional CNN and transformer models while maintaining high accuracy. The
code will be made publicly available at
\url{https://github.com/mahmad000/MorpMamba}.
| [
{
"version": "v1",
"created": "Fri, 2 Aug 2024 16:28:51 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Aug 2024 10:57:07 GMT"
},
{
"version": "v3",
"created": "Sat, 30 Nov 2024 13:24:19 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Ahmad",
"Muhammad",
""
],
[
"Butt",
"Muhammad Hassaan Farooq",
""
],
[
"Khan",
"Adil Mehmood",
""
],
[
"Mazzara",
"Manuel",
""
],
[
"Distefano",
"Salvatore",
""
],
[
"Usama",
"Muhammad",
""
],
[
"Roy",
"Swalpa Kumar",
""
],
[
"Chanussot",
"Jocelyn",
""
],
[
"Hong",
"Danfeng",
""
]
] | TITLE: Spatial and Spatial-Spectral Morphological Mamba for Hyperspectral Image
Classification
ABSTRACT: Recent advancements in transformers, specifically self-attention mechanisms,
have significantly improved hyperspectral image (HSI) classification. However,
these models often suffer from inefficiencies, as their computational
complexity scales quadratically with sequence length. To address these
challenges, we propose the morphological spatial mamba (SMM) and morphological
spatial-spectral Mamba (SSMM) model (MorpMamba), which combines the strengths
of morphological operations and the state space model framework, offering a
more computationally efficient alternative to transformers. In MorpMamba, a
novel token generation module first converts HSI patches into spatial-spectral
tokens. These tokens are then processed through morphological operations such
as erosion and dilation, utilizing depthwise separable convolutions to capture
structural and shape information. A token enhancement module refines these
features by dynamically adjusting the spatial and spectral tokens based on
central HSI regions, ensuring effective feature fusion within each block.
Subsequently, multi-head self-attention is applied to further enrich the
feature representations, allowing the model to capture complex relationships
and dependencies within the data. Finally, the enhanced tokens are fed into a
state space module, which efficiently models the temporal evolution of the
features for classification. Experimental results on widely used HSI datasets
demonstrate that MorpMamba achieves superior parametric efficiency compared to
traditional CNN and transformer models while maintaining high accuracy. The
code will be made publicly available at
\url{https://github.com/mahmad000/MorpMamba}.
|
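The MorpMamba record above processes tokens with the morphological operations erosion and dilation. The sketch below shows those two operations on spatial feature maps using the standard flat-structuring-element approximation via max pooling; MorpMamba's learnable, depthwise-separable formulation may differ.

```python
import torch
import torch.nn.functional as F

def dilate(x, k=3):
    """x: (B, C, H, W). Channel-wise grayscale dilation with a flat k x k kernel."""
    return F.max_pool2d(x, kernel_size=k, stride=1, padding=k // 2)

def erode(x, k=3):
    """Erosion is dilation of the negated map, negated back."""
    return -F.max_pool2d(-x, kernel_size=k, stride=1, padding=k // 2)

# Toy usage on a small spatial-spectral feature patch.
tokens = torch.randn(2, 16, 8, 8)
shape_features = dilate(tokens) - erode(tokens)   # a simple morphological gradient
```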
2408.09278 | Junchao Zhu | Junchao Zhu, Mengmeng Yin, Ruining Deng, Yitian Long, Yu Wang, Yaohong
Wang, Shilin Zhao, Haichun Yang, Yuankai Huo | Cross-Species Data Integration for Enhanced Layer Segmentation in Kidney
Pathology | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate delineation of the boundaries between the renal cortex and medulla
is crucial for subsequent functional structural analysis and disease diagnosis.
Training high-quality deep-learning models for layer segmentation relies on the
availability of large amounts of annotated data. However, due to the privacy of
patients' medical data and the scarcity of clinical cases, constructing pathological
datasets from clinical sources is relatively difficult and expensive. Moreover,
using external natural image datasets introduces noise during the domain
generalization process. Cross-species homologous data, such as mouse kidney
data, which exhibits high structural and feature similarity to human kidneys,
has the potential to enhance model performance on human datasets. In this
study, we incorporated the collected private Periodic Acid-Schiff (PAS) stained
mouse kidney dataset into the human kidney dataset for joint training. The
results showed that after introducing cross-species homologous data, the
semantic segmentation models based on CNN and Transformer architectures
achieved an average increase of 1.77% and 1.24% in mIoU, and 1.76% and 0.89% in
Dice score for the human renal cortex and medulla datasets, respectively. This
approach is also capable of enhancing the model's generalization ability. This
indicates that cross-species homologous data, as a low-noise trainable data
source, can help improve model performance under conditions of limited clinical
samples. Code is available at https://github.com/hrlblab/layer_segmentation.
| [
{
"version": "v1",
"created": "Sat, 17 Aug 2024 19:30:40 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 04:57:26 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zhu",
"Junchao",
""
],
[
"Yin",
"Mengmeng",
""
],
[
"Deng",
"Ruining",
""
],
[
"Long",
"Yitian",
""
],
[
"Wang",
"Yu",
""
],
[
"Wang",
"Yaohong",
""
],
[
"Zhao",
"Shilin",
""
],
[
"Yang",
"Haichun",
""
],
[
"Huo",
"Yuankai",
""
]
] | TITLE: Cross-Species Data Integration for Enhanced Layer Segmentation in Kidney
Pathology
ABSTRACT: Accurate delineation of the boundaries between the renal cortex and medulla
is crucial for subsequent functional structural analysis and disease diagnosis.
Training high-quality deep-learning models for layer segmentation relies on the
availability of large amounts of annotated data. However, due to the privacy of
patients' medical data and the scarcity of clinical cases, constructing pathological
datasets from clinical sources is relatively difficult and expensive. Moreover,
using external natural image datasets introduces noise during the domain
generalization process. Cross-species homologous data, such as mouse kidney
data, which exhibits high structural and feature similarity to human kidneys,
has the potential to enhance model performance on human datasets. In this
study, we incorporated the collected private Periodic Acid-Schiff (PAS) stained
mouse kidney dataset into the human kidney dataset for joint training. The
results showed that after introducing cross-species homologous data, the
semantic segmentation models based on CNN and Transformer architectures
achieved an average increase of 1.77% and 1.24% in mIoU, and 1.76% and 0.89% in
Dice score for the human renal cortex and medulla datasets, respectively. This
approach is also capable of enhancing the model's generalization ability. This
indicates that cross-species homologous data, as a low-noise trainable data
source, can help improve model performance under conditions of limited clinical
samples. Code is available at https://github.com/hrlblab/layer_segmentation.
|
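The cross-species record above trains segmentation models jointly on a human kidney dataset and a homologous mouse kidney dataset. A minimal sketch of such joint training is simply concatenating the two datasets into one loader, as below; the tensor shapes and class counts are placeholders, and the paper's pipeline may mix the sources differently.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Placeholder image/mask tensors standing in for human and mouse PAS-stained patches.
human_ds = TensorDataset(torch.randn(100, 3, 128, 128),
                         torch.randint(0, 3, (100, 128, 128)))
mouse_ds = TensorDataset(torch.randn(60, 3, 128, 128),
                         torch.randint(0, 3, (60, 128, 128)))

joint_loader = DataLoader(ConcatDataset([human_ds, mouse_ds]),
                          batch_size=8, shuffle=True)

for images, masks in joint_loader:    # feed any CNN/Transformer segmenter here
    pass                              # forward / loss / backward step
```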
2409.11905 | Pengan Chen | Zhaxizhuoma Zhaxizhuoma, Pengan Chen, Ziniu Wu, Jiawei Sun, Dong Wang,
Peng Zhou, Nieqing Cao, Yan Ding, Bin Zhao, Xuelong Li | AlignBot: Aligning VLM-powered Customized Task Planning with User
Reminders Through Fine-Tuning for Household Robots | null | null | null | null | cs.RO cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents AlignBot, a novel framework designed to optimize
VLM-powered customized task planning for household robots by effectively
aligning with user reminders. In domestic settings, aligning task planning with
user reminders poses significant challenges due to the limited quantity,
diversity, and multimodal nature of the reminders. To address these challenges,
AlignBot employs a fine-tuned LLaVA-7B model, functioning as an adapter for
GPT-4o. This adapter model internalizes diverse forms of user reminders, such as
personalized preferences, corrective guidance, and contextual assistance, into
structured instruction-formatted cues that prompt GPT-4o in generating
customized task plans. Additionally, AlignBot integrates a dynamic retrieval
mechanism that selects task-relevant historical successes as prompts for
GPT-4o, further enhancing task planning accuracy. To validate the effectiveness
of AlignBot, experiments are conducted in real-world household environments,
which are constructed within the laboratory to replicate typical household
settings. A multimodal dataset with over 1,500 entries derived from volunteer
reminders is used for training and evaluation. The results demonstrate that
AlignBot significantly improves customized task planning, outperforming
existing LLM- and VLM-powered planners by interpreting and aligning with user
reminders, achieving 86.8% success rate compared to the vanilla GPT-4o baseline
at 21.6%, reflecting a 65% improvement and over four times greater
effectiveness. Supplementary materials are available at:
https://yding25.com/AlignBot/
| [
{
"version": "v1",
"created": "Wed, 18 Sep 2024 12:05:30 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 04:40:24 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zhaxizhuoma",
"Zhaxizhuoma",
""
],
[
"Chen",
"Pengan",
""
],
[
"Wu",
"Ziniu",
""
],
[
"Sun",
"Jiawei",
""
],
[
"Wang",
"Dong",
""
],
[
"Zhou",
"Peng",
""
],
[
"Cao",
"Nieqing",
""
],
[
"Ding",
"Yan",
""
],
[
"Zhao",
"Bin",
""
],
[
"Li",
"Xuelong",
""
]
] | TITLE: AlignBot: Aligning VLM-powered Customized Task Planning with User
Reminders Through Fine-Tuning for Household Robots
ABSTRACT: This paper presents AlignBot, a novel framework designed to optimize
VLM-powered customized task planning for household robots by effectively
aligning with user reminders. In domestic settings, aligning task planning with
user reminders poses significant challenges due to the limited quantity,
diversity, and multimodal nature of the reminders. To address these challenges,
AlignBot employs a fine-tuned LLaVA-7B model, functioning as an adapter for
GPT-4o. This adapter model internalizes diverse forms of user reminders, such as
personalized preferences, corrective guidance, and contextual assistance, into
structured instruction-formatted cues that prompt GPT-4o in generating
customized task plans. Additionally, AlignBot integrates a dynamic retrieval
mechanism that selects task-relevant historical successes as prompts for
GPT-4o, further enhancing task planning accuracy. To validate the effectiveness
of AlignBot, experiments are conducted in real-world household environments,
which are constructed within the laboratory to replicate typical household
settings. A multimodal dataset with over 1,500 entries derived from volunteer
reminders is used for training and evaluation. The results demonstrate that
AlignBot significantly improves customized task planning, outperforming
existing LLM- and VLM-powered planners by interpreting and aligning with user
reminders, achieving 86.8% success rate compared to the vanilla GPT-4o baseline
at 21.6%, reflecting a 65% improvement and over four times greater
effectiveness. Supplementary materials are available at:
https://yding25.com/AlignBot/
|
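The record above describes turning user reminders and retrieved past successes into structured cues for a planner model. Below is a minimal sketch of that prompt-assembly idea; the helper names, the keyword-overlap retrieval, and the data layout are hypothetical placeholders rather than AlignBot's actual interface.

```python
def build_planning_prompt(task, reminders, history, top_k=3):
    """Assemble a task-planning prompt from user reminders and retrieved past successes."""
    # Hypothetical retrieval: rank past successes by naive keyword overlap with the task.
    scored = sorted(history,
                    key=lambda h: len(set(h["task"].split()) & set(task.split())),
                    reverse=True)
    examples = scored[:top_k]

    lines = [f"Task: {task}", "User reminders:"]
    lines += [f"- {r}" for r in reminders]
    lines.append("Relevant past successes:")
    for ex in examples:
        lines.append(f"- Task: {ex['task']} -> Plan: {ex['plan']}")
    lines.append("Generate a step-by-step plan that respects the reminders.")
    return "\n".join(lines)

prompt = build_planning_prompt(
    task="prepare a cup of tea",
    reminders=["use the green mug", "the kettle is on the left counter"],
    history=[{"task": "prepare coffee", "plan": "1. fetch mug 2. boil water 3. pour"}],
)
print(prompt)
```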
2409.14693 | Omkar Oak | Omkar Oak, Rukmini Nazre, Rujuta Budke, Yogita Mahatekar | A Novel Multivariate Bi-LSTM model for Short-Term Equity Price
Forecasting | Paper Accepted for presentation at 5th IEEE Global Conference for
Advancement in Technology (GCAT) 2024 | null | 10.1109/GCAT62922.2024.10923989 | null | cs.CE | http://creativecommons.org/licenses/by/4.0/ | Prediction models are crucial in the stock market as they aid in forecasting
future prices and trends, enabling investors to make informed decisions and
manage risks more effectively. In the Indian stock market, where volatility is
often high, accurate predictions can provide a significant edge in capitalizing
on market movements. While various models like regression and Artificial Neural
Networks (ANNs) have been explored for this purpose, studies have shown that
Long Short-Term Memory networks (LSTMs) are the most effective. This is because
they can capture complex temporal dependencies present in financial data. This
paper presents a Bidirectional Multivariate LSTM model designed to predict
short-term stock prices of Indian companies in the NIFTY 100 across four major
sectors. Both Univariate LSTM and Univariate Bidirectional LSTM models were
evaluated based on R2 score, RMSE, MSE, MAE, and MAPE. To improve predictive
accuracy, the analysis was extended to multivariate data. Additionally, 12
technical indicators, having high correlation values with the close
price(greater than 0.99) including EMA5, SMA5, TRIMA5, KAMA10 and the Bollinger
Bands were selected as variables to further optimize the prediction models. The
proposed Bidirectional Multivariate LSTM model, when applied to a dataset
containing these indicators, achieved an exceptionally high average R2 score of
99.4779% across the four stocks, which is 3.9833% higher than that of the
Unidirectional Multivariate LSTM without technical indicators. The proposed
model has an average RMSE of 0.0103955, an average MAE of 0.007485 and an
average MAPE of 1.1635%. This highlights the model's exceptional forecasting
accuracy and emphasizes its potential to improve short-term trading strategies.
| [
{
"version": "v1",
"created": "Mon, 23 Sep 2024 03:48:23 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Oak",
"Omkar",
""
],
[
"Nazre",
"Rukmini",
""
],
[
"Budke",
"Rujuta",
""
],
[
"Mahatekar",
"Yogita",
""
]
] | TITLE: A Novel Multivariate Bi-LSTM model for Short-Term Equity Price
Forecasting
ABSTRACT: Prediction models are crucial in the stock market as they aid in forecasting
future prices and trends, enabling investors to make informed decisions and
manage risks more effectively. In the Indian stock market, where volatility is
often high, accurate predictions can provide a significant edge in capitalizing
on market movements. While various models like regression and Artificial Neural
Networks (ANNs) have been explored for this purpose, studies have shown that
Long Short-Term Memory networks (LSTMs) are the most effective. This is because
they can capture complex temporal dependencies present in financial data. This
paper presents a Bidirectional Multivariate LSTM model designed to predict
short-term stock prices of Indian companies in the NIFTY 100 across four major
sectors. Both Univariate LSTM and Univariate Bidirectional LSTM models were
evaluated based on R2 score, RMSE, MSE, MAE, and MAPE. To improve predictive
accuracy, the analysis was extended to multivariate data. Additionally, 12
technical indicators, having high correlation values with the close
price (greater than 0.99), including EMA5, SMA5, TRIMA5, KAMA10 and the Bollinger
Bands were selected as variables to further optimize the prediction models. The
proposed Bidirectional Multivariate LSTM model, when applied to a dataset
containing these indicators, achieved an exceptionally high average R2 score of
99.4779% across the four stocks, which is 3.9833% higher than that of the
Unidirectional Multivariate LSTM without technical indicators. The proposed
model has an average RMSE of 0.0103955, an average MAE of 0.007485 and an
average MAPE of 1.1635%. This highlights the model's exceptional forecasting
accuracy and emphasizes its potential to improve short-term trading strategies.
|
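The record above evaluates a bidirectional multivariate LSTM on windows of price features and technical indicators. A minimal PyTorch sketch of such a model follows; the layer sizes, window length, and feature count are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class BiLSTMRegressor(nn.Module):
    """Bidirectional LSTM mapping a window of multivariate features to the next close price."""
    def __init__(self, num_features=12, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden_size, num_layers,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, 1)  # 2x for the two directions

    def forward(self, x):              # x: (batch, window, num_features)
        out, _ = self.lstm(x)          # (batch, window, 2 * hidden_size)
        return self.head(out[:, -1])   # predict from the last time step

model = BiLSTMRegressor()
window = torch.randn(8, 30, 12)        # batch of 8 windows, 30 days, 12 indicators
print(model(window).shape)             # torch.Size([8, 1])
```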
2409.17397 | Konstantinos Skianis | Konstantinos Skianis, John Pavlopoulos, A. Seza Do\u{g}ru\"oz | Building Multilingual Datasets for Predicting Mental Health Severity
through LLMs: Prospects and Challenges | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) are increasingly being integrated into various
medical fields, including mental health support systems. However, there is a
gap in research regarding the effectiveness of LLMs in non-English mental
health support applications. To address this problem, we present a novel
multilingual adaptation of widely-used mental health datasets, translated from
English into six languages (e.g., Greek, Turkish, French, Portuguese, German,
and Finnish). This dataset enables a comprehensive evaluation of LLM
performance in detecting mental health conditions and assessing their severity
across multiple languages. By experimenting with GPT and Llama, we observe
considerable variability in performance across languages, despite being
evaluated on the same translated dataset. This inconsistency underscores the
complexities inherent in multilingual mental health support, where
language-specific nuances and mental health data coverage can affect the
accuracy of the models. Through comprehensive error analysis, we emphasize the
risks of relying exclusively on LLMs in medical settings (e.g., their potential
to contribute to misdiagnoses). Moreover, our proposed approach offers
significant cost savings for multilingual tasks, presenting a major advantage
for broad-scale implementation.
| [
{
"version": "v1",
"created": "Wed, 25 Sep 2024 22:14:34 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 09:56:15 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Skianis",
"Konstantinos",
""
],
[
"Pavlopoulos",
"John",
""
],
[
"Doğruöz",
"A. Seza",
""
]
] | TITLE: Building Multilingual Datasets for Predicting Mental Health Severity
through LLMs: Prospects and Challenges
ABSTRACT: Large Language Models (LLMs) are increasingly being integrated into various
medical fields, including mental health support systems. However, there is a
gap in research regarding the effectiveness of LLMs in non-English mental
health support applications. To address this problem, we present a novel
multilingual adaptation of widely-used mental health datasets, translated from
English into six languages (e.g., Greek, Turkish, French, Portuguese, German,
and Finnish). This dataset enables a comprehensive evaluation of LLM
performance in detecting mental health conditions and assessing their severity
across multiple languages. By experimenting with GPT and Llama, we observe
considerable variability in performance across languages, despite being
evaluated on the same translated dataset. This inconsistency underscores the
complexities inherent in multilingual mental health support, where
language-specific nuances and mental health data coverage can affect the
accuracy of the models. Through comprehensive error analysis, we emphasize the
risks of relying exclusively on LLMs in medical settings (e.g., their potential
to contribute to misdiagnoses). Moreover, our proposed approach offers
significant cost savings for multilingual tasks, presenting a major advantage
for broad-scale implementation.
|
2409.18261 | Mengchen Zhang | Mengchen Zhang, Tong Wu, Tai Wang, Tengfei Wang, Ziwei Liu, Dahua Lin | Omni6D: Large-Vocabulary 3D Object Dataset for Category-Level 6D Object
Pose Estimation | ECCV 2024 (poster). Github page: https://github.com/3DTopia/Omni6D | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | 6D object pose estimation aims at determining an object's translation,
rotation, and scale, typically from a single RGBD image. Recent advancements
have expanded this estimation from instance-level to category-level, allowing
models to generalize across unseen instances within the same category. However,
this generalization is limited by the narrow range of categories covered by
existing datasets, such as NOCS, which also tend to overlook common real-world
challenges like occlusion. To tackle these challenges, we introduce Omni6D, a
comprehensive RGBD dataset featuring a wide range of categories and varied
backgrounds, elevating the task to a more realistic context. 1) The dataset
comprises an extensive spectrum of 166 categories, 4688 instances adjusted to
the canonical pose, and over 0.8 million captures, significantly broadening the
scope for evaluation. 2) We introduce a symmetry-aware metric and conduct
systematic benchmarks of existing algorithms on Omni6D, offering a thorough
exploration of new challenges and insights. 3) Additionally, we propose an
effective fine-tuning approach that adapts models from previous datasets to our
extensive vocabulary setting. We believe this initiative will pave the way for
new insights and substantial progress in both the industrial and academic
fields, pushing forward the boundaries of general 6D pose estimation.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2024 20:13:33 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Sep 2024 02:06:02 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Mar 2025 04:47:17 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zhang",
"Mengchen",
""
],
[
"Wu",
"Tong",
""
],
[
"Wang",
"Tai",
""
],
[
"Wang",
"Tengfei",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Lin",
"Dahua",
""
]
] | TITLE: Omni6D: Large-Vocabulary 3D Object Dataset for Category-Level 6D Object
Pose Estimation
ABSTRACT: 6D object pose estimation aims at determining an object's translation,
rotation, and scale, typically from a single RGBD image. Recent advancements
have expanded this estimation from instance-level to category-level, allowing
models to generalize across unseen instances within the same category. However,
this generalization is limited by the narrow range of categories covered by
existing datasets, such as NOCS, which also tend to overlook common real-world
challenges like occlusion. To tackle these challenges, we introduce Omni6D, a
comprehensive RGBD dataset featuring a wide range of categories and varied
backgrounds, elevating the task to a more realistic context. 1) The dataset
comprises an extensive spectrum of 166 categories, 4688 instances adjusted to
the canonical pose, and over 0.8 million captures, significantly broadening the
scope for evaluation. 2) We introduce a symmetry-aware metric and conduct
systematic benchmarks of existing algorithms on Omni6D, offering a thorough
exploration of new challenges and insights. 3) Additionally, we propose an
effective fine-tuning approach that adapts models from previous datasets to our
extensive vocabulary setting. We believe this initiative will pave the way for
new insights and substantial progress in both the industrial and academic
fields, pushing forward the boundaries of general 6D pose estimation.
|
2409.19821 | Baoru Huang | Bohan Zhan, Wang Zhao, Yi Fang, Bo Du, Francisco Vasconcelos, Danail
Stoyanov, Daniel S. Elson, Baoru Huang | Tracking Everything in Robotic-Assisted Surgery | 7 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurate tracking of tissues and instruments in videos is crucial for
Robotic-Assisted Minimally Invasive Surgery (RAMIS), as it enables the robot to
comprehend the surgical scene with precise locations and interactions of
tissues and tools. Traditional keypoint-based sparse tracking is limited by
featured points, while flow-based dense two-view matching suffers from
long-term drifts. Recently, the Tracking Any Point (TAP) algorithm was proposed
to overcome these limitations and achieve dense accurate long-term tracking.
However, its efficacy in surgical scenarios remains untested, largely due to
the lack of a comprehensive surgical tracking dataset for evaluation. To
address this gap, we introduce a new annotated surgical tracking dataset for
benchmarking tracking methods for surgical scenarios, comprising real-world
surgical videos with complex tissue and instrument motions. We extensively
evaluate state-of-the-art (SOTA) TAP-based algorithms on this dataset and
reveal their limitations in challenging surgical scenarios, including fast
instrument motion, severe occlusions, and motion blur, etc. Furthermore, we
propose a new tracking method, namely SurgMotion, to solve the challenges and
further improve the tracking performance. Our proposed method outperforms most
TAP-based algorithms in surgical instrument tracking, and especially
demonstrates significant improvements over baselines in challenging medical
videos. Our code and dataset are available at
https://github.com/zhanbh1019/SurgicalMotion.
| [
{
"version": "v1",
"created": "Sun, 29 Sep 2024 23:06:57 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 19:50:04 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zhan",
"Bohan",
""
],
[
"Zhao",
"Wang",
""
],
[
"Fang",
"Yi",
""
],
[
"Du",
"Bo",
""
],
[
"Vasconcelos",
"Francisco",
""
],
[
"Stoyanov",
"Danail",
""
],
[
"Elson",
"Daniel S.",
""
],
[
"Huang",
"Baoru",
""
]
] | TITLE: Tracking Everything in Robotic-Assisted Surgery
ABSTRACT: Accurate tracking of tissues and instruments in videos is crucial for
Robotic-Assisted Minimally Invasive Surgery (RAMIS), as it enables the robot to
comprehend the surgical scene with precise locations and interactions of
tissues and tools. Traditional keypoint-based sparse tracking is limited by
featured points, while flow-based dense two-view matching suffers from
long-term drifts. Recently, the Tracking Any Point (TAP) algorithm was proposed
to overcome these limitations and achieve dense accurate long-term tracking.
However, its efficacy in surgical scenarios remains untested, largely due to
the lack of a comprehensive surgical tracking dataset for evaluation. To
address this gap, we introduce a new annotated surgical tracking dataset for
benchmarking tracking methods for surgical scenarios, comprising real-world
surgical videos with complex tissue and instrument motions. We extensively
evaluate state-of-the-art (SOTA) TAP-based algorithms on this dataset and
reveal their limitations in challenging surgical scenarios, including fast
instrument motion, severe occlusions, and motion blur, etc. Furthermore, we
propose a new tracking method, namely SurgMotion, to solve the challenges and
further improve the tracking performance. Our proposed method outperforms most
TAP-based algorithms in surgical instrument tracking, and especially
demonstrates significant improvements over baselines in challenging medical
videos. Our code and dataset are available at
https://github.com/zhanbh1019/SurgicalMotion.
|
2410.00990 | Jian Yang | Jian Yang, Xukun Wang, Wentao Wang, Guoming Li, Qihang Fang, Ruihong
Yuan, Tianyang Wang, Xiaomei Zhang, Yeying Jin, Zhaoxin Fan | LaDTalk: Latent Denoising for Synthesizing Talking Head Videos with High
Frequency Details | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Audio-driven talking head generation is a pivotal area within film-making and
Virtual Reality. Although existing methods have made significant strides
following the end-to-end paradigm, they still encounter challenges in producing
videos with high-frequency details due to their limited expressivity in this
domain. This limitation has prompted us to explore an effective post-processing
approach to synthesize photo-realistic talking head videos. Specifically, we
employ a pretrained Wav2Lip model as our foundation model, leveraging its
robust audio-lip alignment capabilities. Drawing on the theory of Lipschitz
Continuity, we have theoretically established the noise robustness of Vector
Quantised Auto Encoders (VQAEs). Our experiments further demonstrate that the
high-frequency texture deficiency of the foundation model can be temporally
consistently recovered by the Space-Optimised Vector Quantised Auto Encoder
(SOVQAE) we introduced, thereby facilitating the creation of realistic talking
head videos. We conduct experiments on both the conventional dataset and the
High-Frequency TalKing head (HFTK) dataset that we curated. The results
indicate that our method, LaDTalk, achieves new state-of-the-art video quality
and out-of-domain lip synchronization performance.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2024 18:32:02 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 06:17:16 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Yang",
"Jian",
""
],
[
"Wang",
"Xukun",
""
],
[
"Wang",
"Wentao",
""
],
[
"Li",
"Guoming",
""
],
[
"Fang",
"Qihang",
""
],
[
"Yuan",
"Ruihong",
""
],
[
"Wang",
"Tianyang",
""
],
[
"Zhang",
"Xiaomei",
""
],
[
"Jin",
"Yeying",
""
],
[
"Fan",
"Zhaoxin",
""
]
] | TITLE: LaDTalk: Latent Denoising for Synthesizing Talking Head Videos with High
Frequency Details
ABSTRACT: Audio-driven talking head generation is a pivotal area within film-making and
Virtual Reality. Although existing methods have made significant strides
following the end-to-end paradigm, they still encounter challenges in producing
videos with high-frequency details due to their limited expressivity in this
domain. This limitation has prompted us to explore an effective post-processing
approach to synthesize photo-realistic talking head videos. Specifically, we
employ a pretrained Wav2Lip model as our foundation model, leveraging its
robust audio-lip alignment capabilities. Drawing on the theory of Lipschitz
Continuity, we have theoretically established the noise robustness of Vector
Quantised Auto Encoders (VQAEs). Our experiments further demonstrate that the
high-frequency texture deficiency of the foundation model can be temporally
consistently recovered by the Space-Optimised Vector Quantised Auto Encoder
(SOVQAE) we introduced, thereby facilitating the creation of realistic talking
head videos. We conduct experiments on both the conventional dataset and the
High-Frequency TalKing head (HFTK) dataset that we curated. The results
indicate that our method, LaDTalk, achieves new state-of-the-art video quality
and out-of-domain lip synchronization performance.
|
2410.01180 | Hasnat Md Abdullah | Hasnat Md Abdullah, Tian Liu, Kangda Wei, Shu Kong, Ruihong Huang | UAL-Bench: The First Comprehensive Unusual Activity Localization
Benchmark | null | wacv(2025) 5801-5811 | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | Localizing unusual activities, such as human errors or surveillance
incidents, in videos holds practical significance. However, current video
understanding models struggle with localizing these unusual events likely
because of their insufficient representation in models' pretraining datasets.
To explore foundation models' capability in localizing unusual activity, we
introduce UAL-Bench, a comprehensive benchmark for unusual activity
localization, featuring three video datasets: UAG-OOPS, UAG-SSBD, UAG-FunQA,
and an instruction-tuning dataset: OOPS-UAG-Instruct, to improve model
capabilities. UAL-Bench evaluates three approaches: Video-Language Models
(Vid-LLMs), instruction-tuned Vid-LLMs, and a novel integration of
Vision-Language Models and Large Language Models (VLM-LLM). Our results show
the VLM-LLM approach excels in localizing short-span unusual events and
predicting their onset (start time) more accurately than Vid-LLMs. We also
propose a new metric, R@1, TD <= p, to address limitations in existing
evaluation methods. Our findings highlight the challenges posed by
long-duration videos, particularly in autism diagnosis scenarios, and the need
for further advancements in localization techniques. Our work not only provides
a benchmark for unusual activity localization but also outlines the key
challenges for existing foundation models, suggesting future research
directions on this important task.
| [
{
"version": "v1",
"created": "Wed, 2 Oct 2024 02:33:09 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Abdullah",
"Hasnat Md",
""
],
[
"Liu",
"Tian",
""
],
[
"Wei",
"Kangda",
""
],
[
"Kong",
"Shu",
""
],
[
"Huang",
"Ruihong",
""
]
] | TITLE: UAL-Bench: The First Comprehensive Unusual Activity Localization
Benchmark
ABSTRACT: Localizing unusual activities, such as human errors or surveillance
incidents, in videos holds practical significance. However, current video
understanding models struggle with localizing these unusual events likely
because of their insufficient representation in models' pretraining datasets.
To explore foundation models' capability in localizing unusual activity, we
introduce UAL-Bench, a comprehensive benchmark for unusual activity
localization, featuring three video datasets: UAG-OOPS, UAG-SSBD, UAG-FunQA,
and an instruction-tuning dataset: OOPS-UAG-Instruct, to improve model
capabilities. UAL-Bench evaluates three approaches: Video-Language Models
(Vid-LLMs), instruction-tuned Vid-LLMs, and a novel integration of
Vision-Language Models and Large Language Models (VLM-LLM). Our results show
the VLM-LLM approach excels in localizing short-span unusual events and
predicting their onset (start time) more accurately than Vid-LLMs. We also
propose a new metric, R@1, TD <= p, to address limitations in existing
evaluation methods. Our findings highlight the challenges posed by
long-duration videos, particularly in autism diagnosis scenarios, and the need
for further advancements in localization techniques. Our work not only provides
a benchmark for unusual activity localization but also outlines the key
challenges for existing foundation models, suggesting future research
directions on this important task.
|
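The record above proposes an R@1, TD <= p metric for onset localization. The paper's exact definition is not reproduced here; a plausible minimal reading, sketched below, counts a top-1 prediction as correct when its temporal distance to the ground-truth onset is within a threshold of p seconds.

```python
def recall_at_1_td(pred_starts, gt_starts, p=1.0):
    """Fraction of videos whose top-1 predicted onset lies within p seconds of ground truth."""
    hits = sum(abs(pred - gt) <= p for pred, gt in zip(pred_starts, gt_starts))
    return hits / len(gt_starts)

# Two of the three predicted onsets fall within 1 second of the annotation.
print(recall_at_1_td([3.2, 10.0, 47.5], [3.0, 12.5, 47.0], p=1.0))  # 0.666...
```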
2410.07081 | Ahmed H. Salamah | Ahmed H. Salamah, Kaixiang Zheng, Yiwen Liu and En-Hui Yang | JPEG Inspired Deep Learning | null | The Thirteenth International Conference on Learning
Representations 2025 (ICLR 2025) | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although it is traditionally believed that lossy image compression, such as
JPEG compression, has a negative impact on the performance of deep neural
networks (DNNs), it is shown by recent works that well-crafted JPEG compression
can actually improve the performance of deep learning (DL). Inspired by this,
we propose JPEG-DL, a novel DL framework that prepends any underlying DNN
architecture with a trainable JPEG compression layer. To make the quantization
operation in JPEG compression trainable, a new differentiable soft quantizer is
employed at the JPEG layer, and then the quantization operation and underlying
DNN are jointly trained. Extensive experiments show that in comparison with the
standard DL, JPEG-DL delivers significant accuracy improvements across various
datasets and model architectures while enhancing robustness against adversarial
attacks. Particularly, on some fine-grained image classification datasets,
JPEG-DL can increase prediction accuracy by as much as 20.9%. Our code is
available on https://github.com/AhmedHussKhalifa/JPEG-Inspired-DL.git.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2024 17:23:54 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Feb 2025 06:42:15 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 22:43:27 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Salamah",
"Ahmed H.",
""
],
[
"Zheng",
"Kaixiang",
""
],
[
"Liu",
"Yiwen",
""
],
[
"Yang",
"En-Hui",
""
]
] | TITLE: JPEG Inspired Deep Learning
ABSTRACT: Although it is traditionally believed that lossy image compression, such as
JPEG compression, has a negative impact on the performance of deep neural
networks (DNNs), it is shown by recent works that well-crafted JPEG compression
can actually improve the performance of deep learning (DL). Inspired by this,
we propose JPEG-DL, a novel DL framework that prepends any underlying DNN
architecture with a trainable JPEG compression layer. To make the quantization
operation in JPEG compression trainable, a new differentiable soft quantizer is
employed at the JPEG layer, and then the quantization operation and underlying
DNN are jointly trained. Extensive experiments show that in comparison with the
standard DL, JPEG-DL delivers significant accuracy improvements across various
datasets and model architectures while enhancing robustness against adversarial
attacks. Particularly, on some fine-grained image classification datasets,
JPEG-DL can increase prediction accuracy by as much as 20.9%. Our code is
available on https://github.com/AhmedHussKhalifa/JPEG-Inspired-DL.git.
|
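The record above depends on a trainable JPEG layer with a differentiable soft quantizer. One common way to make quantization differentiable is a softmax-weighted assignment to a fixed set of levels; the sketch below shows that generic construction and is not claimed to be the paper's exact quantizer.

```python
import torch

def soft_quantize(x, levels, temperature=0.1):
    """Differentiable approximation of rounding x to the nearest entry in `levels`."""
    d = (x.unsqueeze(-1) - levels) ** 2             # squared distance to every level
    w = torch.softmax(-d / temperature, dim=-1)     # soft nearest-level assignment
    return (w * levels).sum(-1)                     # weighted average of the levels

levels = torch.linspace(-1.0, 1.0, steps=9)
x = torch.randn(4, requires_grad=True)
y = soft_quantize(x, levels)
y.sum().backward()                                  # gradients flow through the quantizer
print(x.grad is not None)                           # True
```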
2410.16646 | Saumya Gupta | Saumya Gupta, Dimitris Samaras, Chao Chen | TopoDiffusionNet: A Topology-aware Diffusion Model | Accepted to ICLR 2025 (Poster) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Diffusion models excel at creating visually impressive images but often
struggle to generate images with a specified topology. The Betti number, which
represents the number of structures in an image, is a fundamental measure in
topology. Yet, diffusion models fail to satisfy even this basic constraint.
This limitation restricts their utility in applications requiring exact
control, like robotics and environmental modeling. To address this, we propose
TopoDiffusionNet (TDN), a novel approach that enforces diffusion models to
maintain the desired topology. We leverage tools from topological data
analysis, particularly persistent homology, to extract the topological
structures within an image. We then design a topology-based objective function
to guide the denoising process, preserving intended structures while
suppressing noisy ones. Our experiments across four datasets demonstrate
significant improvements in topological accuracy. TDN is the first to integrate
topology with diffusion models, opening new avenues of research in this area.
Code available at https://github.com/Saumya-Gupta-26/TopoDiffusionNet
| [
{
"version": "v1",
"created": "Tue, 22 Oct 2024 02:45:46 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 17:53:45 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Gupta",
"Saumya",
""
],
[
"Samaras",
"Dimitris",
""
],
[
"Chen",
"Chao",
""
]
] | TITLE: TopoDiffusionNet: A Topology-aware Diffusion Model
ABSTRACT: Diffusion models excel at creating visually impressive images but often
struggle to generate images with a specified topology. The Betti number, which
represents the number of structures in an image, is a fundamental measure in
topology. Yet, diffusion models fail to satisfy even this basic constraint.
This limitation restricts their utility in applications requiring exact
control, like robotics and environmental modeling. To address this, we propose
TopoDiffusionNet (TDN), a novel approach that enforces diffusion models to
maintain the desired topology. We leverage tools from topological data
analysis, particularly persistent homology, to extract the topological
structures within an image. We then design a topology-based objective function
to guide the denoising process, preserving intended structures while
suppressing noisy ones. Our experiments across four datasets demonstrate
significant improvements in topological accuracy. TDN is the first to integrate
topology with diffusion models, opening new avenues of research in this area.
Code available at https://github.com/Saumya-Gupta-26/TopoDiffusionNet
|
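The record above constrains the Betti number of generated images. As a toy illustration of what such a constraint checks, the sketch below counts connected foreground components (Betti-0) of a binary image with SciPy; the paper itself uses persistent homology inside the diffusion guidance, which is not shown here.

```python
import numpy as np
from scipy import ndimage

def betti_0(binary_image):
    """Number of connected foreground components in a binary image."""
    _, num_components = ndimage.label(binary_image)
    return num_components

img = np.zeros((8, 8), dtype=int)
img[1:3, 1:3] = 1   # first blob
img[5:7, 5:7] = 1   # second blob
print(betti_0(img))  # 2
```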
2410.17935 | Shiyue Zhang | Shiyue Zhang, Ziheng Cheng, Cheng Zhang | Semi-Implicit Functional Gradient Flow for Efficient Sampling | 46 pages, 13 figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Particle-based variational inference methods (ParVIs) use nonparametric
variational families represented by particles to approximate the target
distribution according to the kernelized Wasserstein gradient flow for the
Kullback-Leibler (KL) divergence. Although functional gradient flows have been
introduced to expand the kernel space for better flexibility, the deterministic
updating mechanism may limit exploration and require expensive repetitive runs
for new samples. In this paper, we propose Semi-Implicit Functional Gradient
flow (SIFG), a functional gradient ParVI method that uses perturbed particles
with Gaussian noise as the approximation family. We show that the corresponding
functional gradient flow, which can be estimated via denoising score matching
with neural networks, exhibits strong theoretical convergence guarantees due to
a higher-order smoothness brought to the approximation family via Gaussian
perturbation. In addition, we present an adaptive version of our method that
automatically selects the appropriate noise magnitude during sampling, striking
a good balance between exploration efficiency and approximation accuracy.
Extensive experiments on both simulated and real-world datasets demonstrate the
effectiveness and efficiency of the proposed framework.
| [
{
"version": "v1",
"created": "Wed, 23 Oct 2024 15:00:30 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 12:56:31 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zhang",
"Shiyue",
""
],
[
"Cheng",
"Ziheng",
""
],
[
"Zhang",
"Cheng",
""
]
] | TITLE: Semi-Implicit Functional Gradient Flow for Efficient Sampling
ABSTRACT: Particle-based variational inference methods (ParVIs) use nonparametric
variational families represented by particles to approximate the target
distribution according to the kernelized Wasserstein gradient flow for the
Kullback-Leibler (KL) divergence. Although functional gradient flows have been
introduced to expand the kernel space for better flexibility, the deterministic
updating mechanism may limit exploration and require expensive repetitive runs
for new samples. In this paper, we propose Semi-Implicit Functional Gradient
flow (SIFG), a functional gradient ParVI method that uses perturbed particles
with Gaussian noise as the approximation family. We show that the corresponding
functional gradient flow, which can be estimated via denoising score matching
with neural networks, exhibits strong theoretical convergence guarantees due to
a higher-order smoothness brought to the approximation family via Gaussian
perturbation. In addition, we present an adaptive version of our method that
automatically selects the appropriate noise magnitude during sampling, striking
a good balance between exploration efficiency and approximation accuracy.
Extensive experiments on both simulated and real-world datasets demonstrate the
effectiveness and efficiency of the proposed framework.
|
2410.18639 | Jinxu Lin | Jinxu Lin, Linwei Tao, Minjing Dong, Chang Xu | Diffusion Attribution Score: Evaluating Training Data Influence in
Diffusion Models | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | As diffusion models become increasingly popular, the misuse of copyrighted
and private images has emerged as a major concern. One promising solution to
mitigate this issue is identifying the contribution of specific training
samples in generative models, a process known as data attribution. Existing
data attribution methods for diffusion models typically quantify the
contribution of a training sample by evaluating the change in diffusion loss
when the sample is included or excluded from the training process. However, we
argue that the direct usage of diffusion loss cannot represent such a
contribution accurately due to the way the diffusion loss is calculated. Specifically,
these approaches measure the divergence between predicted and ground truth
distributions, which leads to an indirect comparison between the predicted
distributions and cannot represent the variances between model behaviors. To
address these issues, we aim to measure the direct comparison between predicted
distributions with an attribution score to analyse the training sample
importance, which is achieved by Diffusion Attribution Score (\textit{DAS}).
Underpinned by rigorous theoretical analysis, we elucidate the effectiveness of
DAS. Additionally, we explore strategies to accelerate DAS calculations,
facilitating its application to large-scale diffusion models. Our extensive
experiments across various datasets and diffusion models demonstrate that DAS
significantly surpasses previous benchmarks in terms of the linear
data-modelling score, establishing new state-of-the-art performance. Code is
available at \hyperlink{here}{https://github.com/Jinxu-Lin/DAS}.
| [
{
"version": "v1",
"created": "Thu, 24 Oct 2024 10:58:17 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Oct 2024 13:12:47 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 06:55:44 GMT"
},
{
"version": "v4",
"created": "Fri, 21 Mar 2025 05:57:29 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Lin",
"Jinxu",
""
],
[
"Tao",
"Linwei",
""
],
[
"Dong",
"Minjing",
""
],
[
"Xu",
"Chang",
""
]
] | TITLE: Diffusion Attribution Score: Evaluating Training Data Influence in
Diffusion Models
ABSTRACT: As diffusion models become increasingly popular, the misuse of copyrighted
and private images has emerged as a major concern. One promising solution to
mitigate this issue is identifying the contribution of specific training
samples in generative models, a process known as data attribution. Existing
data attribution methods for diffusion models typically quantify the
contribution of a training sample by evaluating the change in diffusion loss
when the sample is included or excluded from the training process. However, we
argue that the direct usage of diffusion loss cannot represent such a
contribution accurately due to the way the diffusion loss is calculated. Specifically,
these approaches measure the divergence between predicted and ground truth
distributions, which leads to an indirect comparison between the predicted
distributions and cannot represent the variances between model behaviors. To
address these issues, we aim to measure the direct comparison between predicted
distributions with an attribution score to analyse the training sample
importance, which is achieved by Diffusion Attribution Score (\textit{DAS}).
Underpinned by rigorous theoretical analysis, we elucidate the effectiveness of
DAS. Additionally, we explore strategies to accelerate DAS calculations,
facilitating its application to large-scale diffusion models. Our extensive
experiments across various datasets and diffusion models demonstrate that DAS
significantly surpasses previous benchmarks in terms of the linear
data-modelling score, establishing new state-of-the-art performance. Code is
available at \hyperlink{here}{https://github.com/Jinxu-Lin/DAS}.
|
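The record above concerns attributing a model's outputs to individual training samples. For intuition about what data attribution measures, the sketch below computes exact leave-one-out influence on a tiny least-squares model; this brute-force baseline is not the DAS estimator, which compares predicted distributions without retraining.

```python
import numpy as np

def fit(X, y):
    """Ordinary least squares via the pseudo-inverse."""
    return np.linalg.pinv(X) @ y

def leave_one_out_influence(X, y, x_test):
    """Change in the test prediction when each training sample is removed and the model refit."""
    base_pred = x_test @ fit(X, y)
    scores = []
    for i in range(len(X)):
        keep = np.arange(len(X)) != i
        scores.append(base_pred - x_test @ fit(X[keep], y[keep]))
    return np.array(scores)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=20)
print(leave_one_out_influence(X, y, x_test=np.array([1.0, 1.0, 1.0])))
```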
2410.20109 | Junjie Li | Junjie Li, Jianghong Ma, Xiaofeng Zhang, Yuhang Li, Jianyang Shi | GiVE: Guiding Visual Encoder to Perceive Overlooked Information | This paper was accepted by ICME 2025 | null | null | null | cs.CV cs.AI cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal Large Language Models have advanced AI in applications like
text-to-video generation and visual question answering. These models rely on
visual encoders to convert non-text data into vectors, but current encoders
either lack semantic alignment or overlook non-salient objects. We propose the
Guiding Visual Encoder to Perceive Overlooked Information (GiVE) approach. GiVE
enhances visual representation with an Attention-Guided Adapter (AG-Adapter)
module and an Object-focused Visual Semantic Learning module. These incorporate
three novel loss terms: Object-focused Image-Text Contrast (OITC) loss,
Object-focused Image-Image Contrast (OIIC) loss, and Object-focused Image
Discrimination (OID) loss, improving object consideration, retrieval accuracy,
and comprehensiveness. Our contributions include dynamic visual focus
adjustment, novel loss functions to enhance object retrieval, and the
Multi-Object Instruction (MOInst) dataset. Experiments show our approach
achieves state-of-the-art performance.
| [
{
"version": "v1",
"created": "Sat, 26 Oct 2024 07:37:43 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 14:36:09 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Li",
"Junjie",
""
],
[
"Ma",
"Jianghong",
""
],
[
"Zhang",
"Xiaofeng",
""
],
[
"Li",
"Yuhang",
""
],
[
"Shi",
"Jianyang",
""
]
] | TITLE: GiVE: Guiding Visual Encoder to Perceive Overlooked Information
ABSTRACT: Multimodal Large Language Models have advanced AI in applications like
text-to-video generation and visual question answering. These models rely on
visual encoders to convert non-text data into vectors, but current encoders
either lack semantic alignment or overlook non-salient objects. We propose the
Guiding Visual Encoder to Perceive Overlooked Information (GiVE) approach. GiVE
enhances visual representation with an Attention-Guided Adapter (AG-Adapter)
module and an Object-focused Visual Semantic Learning module. These incorporate
three novel loss terms: Object-focused Image-Text Contrast (OITC) loss,
Object-focused Image-Image Contrast (OIIC) loss, and Object-focused Image
Discrimination (OID) loss, improving object consideration, retrieval accuracy,
and comprehensiveness. Our contributions include dynamic visual focus
adjustment, novel loss functions to enhance object retrieval, and the
Multi-Object Instruction (MOInst) dataset. Experiments show our approach
achieves state-of-the-art performance.
|
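The record above introduces object-focused contrastive losses. A generic symmetric image-text contrastive (InfoNCE-style) loss, sketched below in PyTorch, conveys the basic form such objectives build on; the paper's OITC, OIIC, and OID terms add object-level focusing that is not reproduced here.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature     # (B, B) similarity matrix
    targets = torch.arange(img.size(0))      # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = contrastive_loss(torch.randn(16, 256), torch.randn(16, 256))
print(loss.item())
```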
2410.21982 | Yuxuan Lin | Yuxuan Lin, Yang Chang, Xuan Tong, Jiawen Yu, Antonio Liotta, Guofan
Huang, Wei Song, Deyu Zeng, Zongze Wu, Yan Wang, Wenqiang Zhang | A Survey on RGB, 3D, and Multimodal Approaches for Unsupervised
Industrial Image Anomaly Detection | Accepted by Information Fusion | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the advancement of industrial informatization, unsupervised anomaly
detection technology effectively overcomes the scarcity of abnormal samples and
significantly enhances the automation and reliability of smart manufacturing.
As an important branch, industrial image anomaly detection focuses on
automatically identifying visual anomalies in industrial scenarios (such as
product surface defects, assembly errors, and equipment appearance anomalies)
through computer vision techniques. With the rapid development of Unsupervised
industrial Image Anomaly Detection (UIAD), excellent detection performance has
been achieved not only in RGB setting but also in 3D and multimodal (RGB and
3D) settings. However, existing surveys primarily focus on UIAD tasks in RGB
setting, with little discussion in 3D and multimodal settings. To address this
gap, this artical provides a comprehensive review of UIAD tasks in the three
modal settings. Specifically, we first introduce the task concept and process
of UIAD. We then overview the research on UIAD in three modal settings (RGB,
3D, and multimodal), including datasets and methods, and review multimodal
feature fusion strategies in multimodal setting. Finally, we summarize the main
challenges faced by UIAD tasks in the three modal settings, and offer insights
into future development directions, aiming to provide researchers with a
comprehensive reference and offer new perspectives for the advancement of
industrial informatization. Corresponding resources are available at
https://github.com/Sunny5250/Awesome-Multi-Setting-UIAD.
| [
{
"version": "v1",
"created": "Tue, 29 Oct 2024 12:12:45 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 04:51:16 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Lin",
"Yuxuan",
""
],
[
"Chang",
"Yang",
""
],
[
"Tong",
"Xuan",
""
],
[
"Yu",
"Jiawen",
""
],
[
"Liotta",
"Antonio",
""
],
[
"Huang",
"Guofan",
""
],
[
"Song",
"Wei",
""
],
[
"Zeng",
"Deyu",
""
],
[
"Wu",
"Zongze",
""
],
[
"Wang",
"Yan",
""
],
[
"Zhang",
"Wenqiang",
""
]
] | TITLE: A Survey on RGB, 3D, and Multimodal Approaches for Unsupervised
Industrial Image Anomaly Detection
ABSTRACT: In the advancement of industrial informatization, unsupervised anomaly
detection technology effectively overcomes the scarcity of abnormal samples and
significantly enhances the automation and reliability of smart manufacturing.
As an important branch, industrial image anomaly detection focuses on
automatically identifying visual anomalies in industrial scenarios (such as
product surface defects, assembly errors, and equipment appearance anomalies)
through computer vision techniques. With the rapid development of Unsupervised
industrial Image Anomaly Detection (UIAD), excellent detection performance has
been achieved not only in RGB setting but also in 3D and multimodal (RGB and
3D) settings. However, existing surveys primarily focus on UIAD tasks in RGB
setting, with little discussion in 3D and multimodal settings. To address this
gap, this article provides a comprehensive review of UIAD tasks in the three
modal settings. Specifically, we first introduce the task concept and process
of UIAD. We then overview the research on UIAD in three modal settings (RGB,
3D, and multimodal), including datasets and methods, and review multimodal
feature fusion strategies in multimodal setting. Finally, we summarize the main
challenges faced by UIAD tasks in the three modal settings, and offer insights
into future development directions, aiming to provide researchers with a
comprehensive reference and offer new perspectives for the advancement of
industrial informatization. Corresponding resources are available at
https://github.com/Sunny5250/Awesome-Multi-Setting-UIAD.
|
2411.00239 | Shaohua Liu | Shaohua Liu, Junzhe Lu, Zuoya Gu, Jiajun Li, Yue Deng | Aquatic-GS: A Hybrid 3D Representation for Underwater Scenes | 13 pages, 7 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Representing underwater 3D scenes is a valuable yet complex task, as
attenuation and scattering effects during underwater imaging significantly
couple the information of the objects and the water. This coupling presents a
significant challenge for existing methods in effectively representing both the
objects and the water medium simultaneously. To address this challenge, we
propose Aquatic-GS, a hybrid 3D representation approach for underwater scenes
that effectively represents both the objects and the water medium.
Specifically, we construct a Neural Water Field (NWF) to implicitly model the
water parameters, while extending the latest 3D Gaussian Splatting (3DGS) to
model the objects explicitly. Both components are integrated through a
physics-based underwater image formation model to represent complex underwater
scenes. Moreover, to construct more precise scene geometry and details, we
design a Depth-Guided Optimization (DGO) mechanism that uses a pseudo-depth map
as auxiliary guidance. After optimization, Aquatic-GS enables the rendering of
novel underwater viewpoints and supports restoring the true appearance of
underwater scenes, as if the water medium were absent. Extensive experiments on
both simulated and real-world datasets demonstrate that Aquatic-GS surpasses
state-of-the-art underwater 3D representation methods, achieving better
rendering quality and real-time rendering performance with a 410x increase in
speed. Furthermore, regarding underwater image restoration, Aquatic-GS
outperforms representative dewatering methods in color correction, detail
recovery, and stability. Our models, code, and datasets can be accessed at
https://aquaticgs.github.io.
| [
{
"version": "v1",
"created": "Thu, 31 Oct 2024 22:24:56 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 07:26:27 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Liu",
"Shaohua",
""
],
[
"Lu",
"Junzhe",
""
],
[
"Gu",
"Zuoya",
""
],
[
"Li",
"Jiajun",
""
],
[
"Deng",
"Yue",
""
]
] | TITLE: Aquatic-GS: A Hybrid 3D Representation for Underwater Scenes
ABSTRACT: Representing underwater 3D scenes is a valuable yet complex task, as
attenuation and scattering effects during underwater imaging significantly
couple the information of the objects and the water. This coupling presents a
significant challenge for existing methods in effectively representing both the
objects and the water medium simultaneously. To address this challenge, we
propose Aquatic-GS, a hybrid 3D representation approach for underwater scenes
that effectively represents both the objects and the water medium.
Specifically, we construct a Neural Water Field (NWF) to implicitly model the
water parameters, while extending the latest 3D Gaussian Splatting (3DGS) to
model the objects explicitly. Both components are integrated through a
physics-based underwater image formation model to represent complex underwater
scenes. Moreover, to construct more precise scene geometry and details, we
design a Depth-Guided Optimization (DGO) mechanism that uses a pseudo-depth map
as auxiliary guidance. After optimization, Aquatic-GS enables the rendering of
novel underwater viewpoints and supports restoring the true appearance of
underwater scenes, as if the water medium were absent. Extensive experiments on
both simulated and real-world datasets demonstrate that Aquatic-GS surpasses
state-of-the-art underwater 3D representation methods, achieving better
rendering quality and real-time rendering performance with a 410x increase in
speed. Furthermore, regarding underwater image restoration, Aquatic-GS
outperforms representative dewatering methods in color correction, detail
recovery, and stability. Our models, code, and datasets can be accessed at
https://aquaticgs.github.io.
|
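The record above couples object appearance with a physics-based underwater image formation model. A commonly used simplified per-channel form is I = J*exp(-beta_d*z) + B_inf*(1 - exp(-beta_b*z)); the NumPy sketch below applies it to a clear image, with all coefficients as illustrative placeholders (in the paper these quantities are modeled by the Neural Water Field rather than fixed by hand).

```python
import numpy as np

def underwater_formation(J, depth, beta_d, beta_b, B_inf):
    """Attenuate a clear image J and add depth-dependent backscatter, per color channel."""
    z = depth[..., None]                               # (H, W, 1), broadcast over channels
    direct = J * np.exp(-beta_d * z)                   # attenuated direct signal
    backscatter = B_inf * (1.0 - np.exp(-beta_b * z))  # veiling light
    return direct + backscatter

J = np.random.rand(4, 4, 3)                 # clear scene radiance in [0, 1]
depth = np.full((4, 4), 2.0)                # metres from the camera
beta_d = np.array([0.40, 0.15, 0.10])       # red attenuates fastest (illustrative values)
beta_b = np.array([0.30, 0.12, 0.08])
B_inf = np.array([0.05, 0.25, 0.35])        # blue-green veiling light
print(underwater_formation(J, depth, beta_d, beta_b, B_inf).shape)  # (4, 4, 3)
```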
2411.02937 | Yangning Li | Yangning Li, Yinghui Li, Xinyu Wang, Yong Jiang, Zhen Zhang, Xinran
Zheng, Hui Wang, Hai-Tao Zheng, Fei Huang, Jingren Zhou, Philip S. Yu | Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA
Dataset and Self-adaptive Planning Agent | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal Retrieval Augmented Generation (mRAG) plays an important role in
mitigating the "hallucination" issue inherent in multimodal large language
models (MLLMs). Although promising, existing heuristic mRAGs typically
predefine fixed retrieval processes, which causes two issues: (1) Non-adaptive
Retrieval Queries. (2) Overloaded Retrieval Queries. However, these flaws
cannot be adequately reflected by current knowledge-seeking visual question
answering (VQA) datasets, since the most required knowledge can be readily
obtained with a standard two-step retrieval. To bridge the dataset gap, we
first construct Dyn-VQA dataset, consisting of three types of "dynamic"
questions, which require complex knowledge retrieval strategies variable in
query, tool, and time: (1) Questions with rapidly changing answers. (2)
Questions requiring multi-modal knowledge. (3) Multi-hop questions. Experiments
on Dyn-VQA reveal that existing heuristic mRAGs struggle to provide sufficient
and precisely relevant knowledge for dynamic questions due to their rigid
retrieval processes. Hence, we further propose the first self-adaptive planning
agent for multimodal retrieval, OmniSearch. The underlying idea is to emulate
human behavior in question solving, which dynamically decomposes complex
multimodal questions into sub-question chains with retrieval actions. Extensive
experiments prove the effectiveness of our OmniSearch and also provide direction
for advancing mRAG. The code and dataset will be open-sourced at
https://github.com/Alibaba-NLP/OmniSearch.
| [
{
"version": "v1",
"created": "Tue, 5 Nov 2024 09:27:21 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Nov 2024 13:40:25 GMT"
},
{
"version": "v3",
"created": "Sun, 8 Dec 2024 18:48:49 GMT"
},
{
"version": "v4",
"created": "Fri, 21 Mar 2025 01:18:17 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Li",
"Yangning",
""
],
[
"Li",
"Yinghui",
""
],
[
"Wang",
"Xinyu",
""
],
[
"Jiang",
"Yong",
""
],
[
"Zhang",
"Zhen",
""
],
[
"Zheng",
"Xinran",
""
],
[
"Wang",
"Hui",
""
],
[
"Zheng",
"Hai-Tao",
""
],
[
"Huang",
"Fei",
""
],
[
"Zhou",
"Jingren",
""
],
[
"Yu",
"Philip S.",
""
]
] | TITLE: Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA
Dataset and Self-adaptive Planning Agent
ABSTRACT: Multimodal Retrieval Augmented Generation (mRAG) plays an important role in
mitigating the "hallucination" issue inherent in multimodal large language
models (MLLMs). Although promising, existing heuristic mRAGs typically
predefine fixed retrieval processes, which causes two issues: (1) Non-adaptive
Retrieval Queries. (2) Overloaded Retrieval Queries. However, these flaws
cannot be adequately reflected by current knowledge-seeking visual question
answering (VQA) datasets, since the most required knowledge can be readily
obtained with a standard two-step retrieval. To bridge the dataset gap, we
first construct Dyn-VQA dataset, consisting of three types of "dynamic"
questions, which require complex knowledge retrieval strategies variable in
query, tool, and time: (1) Questions with rapidly changing answers. (2)
Questions requiring multi-modal knowledge. (3) Multi-hop questions. Experiments
on Dyn-VQA reveal that existing heuristic mRAGs struggle to provide sufficient
and precisely relevant knowledge for dynamic questions due to their rigid
retrieval processes. Hence, we further propose the first self-adaptive planning
agent for multimodal retrieval, OmniSearch. The underlying idea is to emulate
human behavior in question solving, which dynamically decomposes complex
multimodal questions into sub-question chains with retrieval actions. Extensive
experiments prove the effectiveness of our OmniSearch and also provide direction
for advancing mRAG. The code and dataset will be open-sourced at
https://github.com/Alibaba-NLP/OmniSearch.
|
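The record above describes an agent that decomposes a multimodal question into sub-question chains interleaved with retrieval. A minimal control-loop sketch of that idea follows, with stubbed planner and retriever functions; all names are hypothetical, and the real OmniSearch drives an MLLM and external tools instead of these stubs.

```python
def plan_next_subquestion(question, context):
    """Stub planner: ask for a missing fact, or stop once some evidence has been gathered."""
    return None if context else f"What background fact is needed to answer: {question}?"

def retrieve(subquestion):
    """Stub retriever standing in for image search, web search, or knowledge tools."""
    return f"[retrieved evidence for: {subquestion}]"

def answer_with_agent(question, max_steps=5):
    context = []
    for _ in range(max_steps):
        sub_q = plan_next_subquestion(question, context)
        if sub_q is None:                 # planner decides it has enough evidence
            break
        context.append(retrieve(sub_q))   # act, then re-plan with the new evidence
    return f"Answer to '{question}' grounded in: {context}"

print(answer_with_agent("Who currently coaches the team shown in the photo?"))
```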
2411.03714 | Felix Tempel | Felix Tempel, Espen Alexander F. Ihlen, Lars Adde, Inga Str\"umke | Explaining Human Activity Recognition with SHAP: Validating Insights
with Perturbation and Quantitative Measures | Published in Computers in Biology and Medicine | null | 10.1016/j.compbiomed.2025.109838 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In Human Activity Recognition (HAR), understanding the intricacy of body
movements within high-risk applications is essential. This study uses SHapley
Additive exPlanations (SHAP) to explain the decision-making process of Graph
Convolution Networks (GCNs) when classifying activities with skeleton data. We
employ SHAP to explain two real-world datasets: one for cerebral palsy (CP)
classification and the widely used NTU RGB+D 60 action recognition dataset. To
test the explanation, we introduce a novel perturbation approach that modifies
the model's edge importance matrix, allowing us to evaluate the impact of
specific body key points on prediction outcomes. To assess the fidelity of our
explanations, we employ informed perturbation, targeting body key points
identified as important by SHAP and comparing them against random perturbation
as a control condition. This perturbation enables a judgment on whether the
body key points are truly influential or non-influential based on the SHAP
values. Results on both datasets show that body key points identified as
important through SHAP have the largest influence on the accuracy, specificity,
and sensitivity metrics. Our findings highlight that SHAP can provide granular
insights into the input feature contribution to the prediction outcome of GCNs
in HAR tasks. This demonstrates the potential for more interpretable and
trustworthy models in high-stakes applications like healthcare or
rehabilitation.
| [
{
"version": "v1",
"created": "Wed, 6 Nov 2024 07:28:57 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 11:47:18 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Tempel",
"Felix",
""
],
[
"Ihlen",
"Espen Alexander F.",
""
],
[
"Adde",
"Lars",
""
],
[
"Strümke",
"Inga",
""
]
] | TITLE: Explaining Human Activity Recognition with SHAP: Validating Insights
with Perturbation and Quantitative Measures
ABSTRACT: In Human Activity Recognition (HAR), understanding the intricacy of body
movements within high-risk applications is essential. This study uses SHapley
Additive exPlanations (SHAP) to explain the decision-making process of Graph
Convolution Networks (GCNs) when classifying activities with skeleton data. We
employ SHAP to explain two real-world datasets: one for cerebral palsy (CP)
classification and the widely used NTU RGB+D 60 action recognition dataset. To
test the explanation, we introduce a novel perturbation approach that modifies
the model's edge importance matrix, allowing us to evaluate the impact of
specific body key points on prediction outcomes. To assess the fidelity of our
explanations, we employ informed perturbation, targeting body key points
identified as important by SHAP and comparing them against random perturbation
as a control condition. This perturbation enables a judgment on whether the
body key points are truly influential or non-influential based on the SHAP
values. Results on both datasets show that body key points identified as
important through SHAP have the largest influence on the accuracy, specificity,
and sensitivity metrics. Our findings highlight that SHAP can provide granular
insights into the input feature contribution to the prediction outcome of GCNs
in HAR tasks. This demonstrates the potential for more interpretable and
trustworthy models in high-stakes applications like healthcare or
rehabilitation.
|
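The record above validates SHAP-identified key points by comparing informed perturbation against a random control. A minimal NumPy sketch of that validation logic on a generic classifier follows; the predict function and data are placeholders, whereas the paper perturbs a GCN's edge importance matrix.

```python
import numpy as np

def accuracy_after_masking(predict, X, y, keypoint_idx):
    """Zero out the chosen keypoint features and measure the resulting accuracy."""
    X_masked = X.copy()
    X_masked[:, keypoint_idx] = 0.0
    return (predict(X_masked) == y).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 25))                        # 200 samples, 25 "keypoints"
y = (X[:, 3] + X[:, 7] > 0).astype(int)               # labels depend on keypoints 3 and 7
predict = lambda Z: (Z[:, 3] + Z[:, 7] > 0).astype(int)

important = [3, 7]     # e.g. keypoints flagged as important by SHAP
control = [0, 12]      # a control pick of unimportant keypoints
print("informed:", accuracy_after_masking(predict, X, y, important))  # drops to about 0.5
print("control :", accuracy_after_masking(predict, X, y, control))    # stays at 1.0
```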
2411.07885 | Constantin Ulrich | Constantin Ulrich and Tassilo Wald and Emily Tempus and Maximilian
Rokuss and Paul F. Jaeger and Klaus Maier-Hein | RadioActive: 3D Radiological Interactive Segmentation Benchmark | Undergoing Peer-Review | null | null | null | cs.CV cs.AI cs.HC cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Effortless and precise segmentation with minimal clinician effort could
greatly streamline clinical workflows. Recent interactive segmentation models,
inspired by Meta's Segment Anything, have made significant progress but face
critical limitations in 3D radiology. These include impractical human
interaction requirements such as slice-by-slice operations for 2D models on 3D
data and a lack of iterative refinement. Prior studies have been hindered by
inadequate evaluation protocols, resulting in unreliable performance
assessments and inconsistent findings across studies. The RadioActive benchmark
addresses these challenges by providing a rigorous and reproducible evaluation
framework for interactive segmentation methods in clinically relevant
scenarios. It features diverse datasets, a wide range of target structures, and
the most impactful 2D and 3D interactive segmentation methods, all within a
flexible and extensible codebase. We also introduce advanced prompting
techniques that reduce interaction steps, enabling fair comparisons between 2D
and 3D models. Surprisingly, SAM2 outperforms all specialized medical 2D and 3D
models in a setting requiring only a few interactions to generate prompts for a
3D volume. This challenges prevailing assumptions and demonstrates that
general-purpose models surpass specialized medical approaches. By open-sourcing
RadioActive, we invite researchers to integrate their models and prompting
techniques, ensuring continuous and transparent evaluation of 3D medical
interactive models.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 15:47:17 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Nov 2024 09:02:25 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Mar 2025 15:47:12 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Ulrich",
"Constantin",
""
],
[
"Wald",
"Tassilo",
""
],
[
"Tempus",
"Emily",
""
],
[
"Rokuss",
"Maximilian",
""
],
[
"Jaeger",
"Paul F.",
""
],
[
"Maier-Hein",
"Klaus",
""
]
] | TITLE: RadioActive: 3D Radiological Interactive Segmentation Benchmark
ABSTRACT: Effortless and precise segmentation with minimal clinician effort could
greatly streamline clinical workflows. Recent interactive segmentation models,
inspired by Meta's Segment Anything, have made significant progress but face
critical limitations in 3D radiology. These include impractical human
interaction requirements such as slice-by-slice operations for 2D models on 3D
data and a lack of iterative refinement. Prior studies have been hindered by
inadequate evaluation protocols, resulting in unreliable performance
assessments and inconsistent findings across studies. The RadioActive benchmark
addresses these challenges by providing a rigorous and reproducible evaluation
framework for interactive segmentation methods in clinically relevant
scenarios. It features diverse datasets, a wide range of target structures, and
the most impactful 2D and 3D interactive segmentation methods, all within a
flexible and extensible codebase. We also introduce advanced prompting
techniques that reduce interaction steps, enabling fair comparisons between 2D
and 3D models. Surprisingly, SAM2 outperforms all specialized medical 2D and 3D
models in a setting requiring only a few interactions to generate prompts for a
3D volume. This challenges prevailing assumptions and demonstrates that
general-purpose models surpass specialized medical approaches. By open-sourcing
RadioActive, we invite researchers to integrate their models and prompting
techniques, ensuring continuous and transparent evaluation of 3D medical
interactive models.
|
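As a rough illustration of a low-interaction prompting scheme like the one the RadioActive abstract above describes, the sketch below simulates a handful of clicks for a 3D volume. It is an assumption-laden stand-in, not the benchmark's actual prompting code: `simulate_click_prompts`, the toy volume, and the centroid-click strategy are hypothetical, and a real run would pass the prompts to an interactive model such as a SAM-style network and score its output.

```python
# Hypothetical sketch: one foreground click on each of a few slices of a
# 3D volume, derived from a reference mask as a stand-in for clinician clicks.
import numpy as np

def simulate_click_prompts(mask_3d, n_slices=3):
    """Pick the `n_slices` slices with the most foreground and return one
    (z, y, x) click at the foreground centroid of each of them."""
    per_slice_fg = mask_3d.reshape(mask_3d.shape[0], -1).sum(axis=1)
    chosen = np.argsort(per_slice_fg)[-n_slices:]
    prompts = []
    for z in chosen:
        ys, xs = np.nonzero(mask_3d[z])
        prompts.append((int(z), int(ys.mean()), int(xs.mean())))
    return prompts

def dice(pred, ref):
    """Dice overlap between two boolean masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + 1e-8)

# Toy volume: a cuboid blob standing in for a segmented structure.
vol = np.zeros((32, 64, 64), dtype=bool)
vol[10:22, 20:44, 20:44] = True

prompts = simulate_click_prompts(vol, n_slices=3)
print("simulated click prompts (z, y, x):", prompts)

# A real benchmark run would hand these prompts to a 2D or 3D interactive
# model and score its prediction; here the reference mask is scored against
# itself purely as a placeholder.
print("Dice of placeholder prediction:", dice(vol, vol))
```

The point of such a scheme is that a fixed, small interaction budget (three clicks for the whole volume) lets 2D slice-wise models and native 3D models be compared on equal footing.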
2411.07976 | Mahmut Gokmen | Mahmut S. Gokmen, Caner Ozcan, Moneera N. Haque, Steve W. Leung, C.
Seth Parker, W. Brent Seales, Cody Bumgardner | DINO-LG: A Task-Specific DINO Model for Coronary Calcium Scoring | Developed by Center for Applied Artificial Intelligence (CAAI),
University of Kentucky | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coronary artery disease (CAD), one of the leading causes of mortality
worldwide, necessitates effective risk assessment strategies, with coronary
artery calcium (CAC) scoring via computed tomography (CT) being a key method
for prevention. Traditional methods, primarily based on UNET architectures
implemented on pre-built models, face challenges like the scarcity of annotated
CT scans containing CAC and imbalanced datasets, leading to reduced performance
in segmentation and scoring tasks. In this study, we address these limitations
by incorporating the self-supervised learning (SSL) technique of DINO
(self-distillation with no labels), which trains without requiring CAC-specific
annotations, enhancing its robustness in generating distinct features. The
DINO-LG model, which leverages label guidance to focus on calcified areas,
achieves significant improvements, with a sensitivity of 89% and specificity of
90% for detecting CAC-containing CT slices, compared to the standard DINO
model's sensitivity of 79% and specificity of 77%. Additionally, false-negative
and false-positive rates are reduced by 49% and 59%, respectively, instilling
greater confidence in clinicians when ruling out calcification in low-risk
patients and minimizing unnecessary imaging reviews by radiologists. Further,
CAC scoring and segmentation tasks are conducted using a basic UNET
architecture, applied specifically to CT slices identified by the DINO-LG model
as containing calcified areas. This targeted approach enhances CAC scoring
accuracy by feeding the UNET model only the relevant slices, significantly
improving diagnostic precision and reducing both false positives and false
negatives. By minimizing unnecessary tests and treatments, it also lowers
overall healthcare costs, presenting a valuable advancement in CAD risk
assessment.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 17:55:39 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Nov 2024 03:56:10 GMT"
},
{
"version": "v3",
"created": "Sun, 17 Nov 2024 02:51:16 GMT"
},
{
"version": "v4",
"created": "Wed, 20 Nov 2024 02:57:56 GMT"
},
{
"version": "v5",
"created": "Wed, 27 Nov 2024 18:58:41 GMT"
},
{
"version": "v6",
"created": "Fri, 3 Jan 2025 17:40:42 GMT"
},
{
"version": "v7",
"created": "Fri, 21 Mar 2025 17:06:08 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Gokmen",
"Mahmut S.",
""
],
[
"Ozcan",
"Caner",
""
],
[
"Haque",
"Moneera N.",
""
],
[
"Leung",
"Steve W.",
""
],
[
"Parker",
"C. Seth",
""
],
[
"Seales",
"W. Brent",
""
],
[
"Bumgardner",
"Cody",
""
]
] | TITLE: DINO-LG: A Task-Specific DINO Model for Coronary Calcium Scoring
ABSTRACT: Coronary artery disease (CAD), one of the leading causes of mortality
worldwide, necessitates effective risk assessment strategies, with coronary
artery calcium (CAC) scoring via computed tomography (CT) being a key method
for prevention. Traditional methods, primarily based on UNET architectures
implemented on pre-built models, face challenges like the scarcity of annotated
CT scans containing CAC and imbalanced datasets, leading to reduced performance
in segmentation and scoring tasks. In this study, we address these limitations
by incorporating the self-supervised learning (SSL) technique of DINO
(self-distillation with no labels), which trains without requiring CAC-specific
annotations, enhancing its robustness in generating distinct features. The
DINO-LG model, which leverages label guidance to focus on calcified areas,
achieves significant improvements, with a sensitivity of 89% and specificity of
90% for detecting CAC-containing CT slices, compared to the standard DINO
model's sensitivity of 79% and specificity of 77%. Additionally, false-negative
and false-positive rates are reduced by 49% and 59%, respectively, instilling
greater confidence in clinicians when ruling out calcification in low-risk
patients and minimizing unnecessary imaging reviews by radiologists. Further,
CAC scoring and segmentation tasks are conducted using a basic UNET
architecture, applied specifically to CT slices identified by the DINO-LG model
as containing calcified areas. This targeted approach enhances CAC scoring
accuracy by feeding the UNET model only the relevant slices, significantly
improving diagnostic precision and reducing both false positives and false
negatives. By minimizing unnecessary tests and treatments, it also lowers
overall healthcare costs, presenting a valuable advancement in CAD risk
assessment.
|
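The two-stage pipeline sketched in the DINO-LG abstract above (slice-level filtering followed by segmentation and calcium scoring) can be caricatured as follows. This is a hedged illustration only: `slice_has_cac` and `segment_calcium` are simple threshold placeholders for the DINO-LG classifier and the UNET, and the scoring is a simplified Agatston-style calculation, not necessarily the paper's exact procedure.

```python
# Hypothetical sketch: a slice classifier filters CT slices before
# segmentation, and calcium is scored only on the kept slices.
import numpy as np

def slice_has_cac(ct_slice):
    """Stand-in for the DINO-LG slice classifier: flag slices containing
    any voxels above the conventional 130 HU calcium threshold."""
    return bool(np.any(ct_slice >= 130))

def segment_calcium(ct_slice):
    """Stand-in for the UNET segmenter: a plain HU threshold mask."""
    return ct_slice >= 130

def agatston_weight(max_hu):
    """Density weight used by the standard Agatston score."""
    if max_hu >= 400:
        return 4
    if max_hu >= 300:
        return 3
    if max_hu >= 200:
        return 2
    return 1

def score_volume(ct_volume, pixel_area_mm2=0.25):
    """Simplified Agatston-style score: for each kept slice, lesion area in
    mm^2 times the density weight of its brightest calcified voxel."""
    total = 0.0
    for ct_slice in ct_volume:
        if not slice_has_cac(ct_slice):
            continue                      # classifier says: no CAC, skip slice
        mask = segment_calcium(ct_slice)
        area_mm2 = mask.sum() * pixel_area_mm2
        total += area_mm2 * agatston_weight(ct_slice[mask].max())
    return total

# Toy CT volume in Hounsfield units with a single bright calcified region.
vol = np.full((8, 64, 64), -50.0)
vol[3, 30:34, 30:34] = 450.0
print("simplified CAC score:", score_volume(vol))
```

Filtering slices before segmentation is what lets the downstream segmenter concentrate on calcified regions, which is the mechanism behind the reduced false-positive and false-negative rates the abstract reports.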