id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
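Records in this dump follow the schema above. As a minimal, hedged sketch of how such records could be iterated with the Hugging Face `datasets` library (the repository id below is a hypothetical placeholder, not this dataset's actual path):

```python
# Illustrative only: iterate records matching the schema above.
# "user/arxiv-metadata" is a hypothetical placeholder repository id.
from datasets import load_dataset

ds = load_dataset("user/arxiv-metadata", split="train")

for row in ds.select(range(3)):
    # Each record carries arXiv metadata plus a pre-built TITLE/ABSTRACT prompt.
    print(row["id"], "-", row["title"].replace("\n", " "))
    print("categories:", row["categories"])
    print("latest version:", row["versions"][-1]["version"],
          "created", row["versions"][-1]["created"])
    last, first, suffix = row["authors_parsed"][0]  # stored as [last, first, suffix]
    print("first author:", f"{first} {last}".strip())
```

As seen in the rows below, `versions` is a list of `{"version", "created"}` dicts and `authors_parsed` stores `[last, first, suffix]` triples.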
2409.13055 | Yan Song Hu | Yan Song Hu, Nicolas Abboud, Muhammad Qasim Ali, Adam Srebrnjak Yang,
Imad Elhajj, Daniel Asmar, Yuhao Chen, John S. Zelek | MGSO: Monocular Real-time Photometric SLAM with Efficient 3D Gaussian
Splatting | The final version of this work has been approved by the IEEE for
publication. This version may no longer be accessible without notice.
Copyright 2025 IEEE. Personal use of this material is permitted. Permission
from IEEE must be obtained for all other uses | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Real-time SLAM with dense 3D mapping is computationally challenging,
especially on resource-limited devices. The recent development of 3D Gaussian
Splatting (3DGS) offers a promising approach for real-time dense 3D
reconstruction. However, existing 3DGS-based SLAM systems struggle to balance
hardware simplicity, speed, and map quality. Most systems excel in one or two
of the aforementioned aspects but rarely achieve all. A key issue is the
difficulty of initializing 3D Gaussians while concurrently conducting SLAM. To
address these challenges, we present Monocular GSO (MGSO), a novel real-time
SLAM system that integrates photometric SLAM with 3DGS. Photometric SLAM
provides dense structured point clouds for 3DGS initialization, accelerating
optimization and producing more efficient maps with fewer Gaussians. As a
result, experiments show that our system generates reconstructions with a
balance of quality, memory efficiency, and speed that outperforms the
state-of-the-art. Furthermore, our system achieves all results using RGB
inputs. We evaluate our system on the Replica, TUM-RGBD, and EuRoC datasets
against current live dense reconstruction systems. Not only do we surpass
contemporary systems, but experiments also show that we maintain our performance
on laptop hardware, making MGSO a practical solution for robotics, AR, and other
real-time applications.
| [
{
"version": "v1",
"created": "Thu, 19 Sep 2024 19:07:05 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 21:17:35 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Hu",
"Yan Song",
""
],
[
"Abboud",
"Nicolas",
""
],
[
"Ali",
"Muhammad Qasim",
""
],
[
"Yang",
"Adam Srebrnjak",
""
],
[
"Elhajj",
"Imad",
""
],
[
"Asmar",
"Daniel",
""
],
[
"Chen",
"Yuhao",
""
],
[
"Zelek",
"John S.",
""
]
] | TITLE: MGSO: Monocular Real-time Photometric SLAM with Efficient 3D Gaussian
Splatting
ABSTRACT: Real-time SLAM with dense 3D mapping is computationally challenging,
especially on resource-limited devices. The recent development of 3D Gaussian
Splatting (3DGS) offers a promising approach for real-time dense 3D
reconstruction. However, existing 3DGS-based SLAM systems struggle to balance
hardware simplicity, speed, and map quality. Most systems excel in one or two
of the aforementioned aspects but rarely achieve all. A key issue is the
difficulty of initializing 3D Gaussians while concurrently conducting SLAM. To
address these challenges, we present Monocular GSO (MGSO), a novel real-time
SLAM system that integrates photometric SLAM with 3DGS. Photometric SLAM
provides dense structured point clouds for 3DGS initialization, accelerating
optimization and producing more efficient maps with fewer Gaussians. As a
result, experiments show that our system generates reconstructions with a
balance of quality, memory efficiency, and speed that outperforms the
state-of-the-art. Furthermore, our system achieves all results using RGB
inputs. We evaluate our system on the Replica, TUM-RGBD, and EuRoC datasets
against current live dense reconstruction systems. Not only do we surpass
contemporary systems, but experiments also show that we maintain our performance
on laptop hardware, making MGSO a practical solution for robotics, AR, and other
real-time applications.
|
2409.15180 | Phat Lam | Lam Pham, Phat Lam, Dat Tran, Hieu Tang, Tin Nguyen, Alexander
Schindler, Florian Skopik, Alexander Polonsky, Canh Vu | A Comprehensive Survey with Critical Analysis for Deepfake Speech
Detection | Journal preprint to be published at Computer Science Review | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Thanks to advancements in deep learning, speech generation systems now power
a variety of real-world applications, such as text-to-speech for individuals
with speech disorders, voice chatbots in call centers, cross-linguistic speech
translation, etc. While these systems can autonomously generate human-like
speech and replicate specific voices, they also pose risks when misused for
malicious purposes. This motivates the research community to develop models for
detecting synthesized speech (e.g., fake speech) generated by
deep-learning-based models, referred to as the Deepfake Speech Detection task.
As the Deepfake Speech Detection task has emerged in recent years, few survey
papers have been proposed for this task. Additionally, existing surveys
for the Deepfake Speech Detection task tend to summarize techniques used to
construct a Deepfake Speech Detection system rather than providing a thorough
analysis. This gap motivated us to conduct a comprehensive survey, providing a
critical analysis of the challenges and developments in Deepfake Speech
Detection. Our survey is innovatively structured, offering an in-depth analysis
of current challenge competitions, public datasets, and the deep-learning
techniques that provide enhanced solutions to address existing challenges in
the field. From our analysis, we propose hypotheses on leveraging and combining
specific deep learning techniques to improve the effectiveness of Deepfake
Speech Detection systems. Beyond conducting a survey, we perform extensive
experiments to validate these hypotheses and propose a highly competitive model
for the task of Deepfake Speech Detection. Given the analysis and the
experimental results, we finally indicate potential and promising research
directions for the Deepfake Speech Detection task.
| [
{
"version": "v1",
"created": "Mon, 23 Sep 2024 16:34:53 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Oct 2024 12:30:06 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Nov 2024 07:48:20 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Mar 2025 13:59:13 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Pham",
"Lam",
""
],
[
"Lam",
"Phat",
""
],
[
"Tran",
"Dat",
""
],
[
"Tang",
"Hieu",
""
],
[
"Nguyen",
"Tin",
""
],
[
"Schindler",
"Alexander",
""
],
[
"Skopik",
"Florian",
""
],
[
"Polonsky",
"Alexander",
""
],
[
"Vu",
"Canh",
""
]
] | TITLE: A Comprehensive Survey with Critical Analysis for Deepfake Speech
Detection
ABSTRACT: Thanks to advancements in deep learning, speech generation systems now power
a variety of real-world applications, such as text-to-speech for individuals
with speech disorders, voice chatbots in call centers, cross-linguistic speech
translation, etc. While these systems can autonomously generate human-like
speech and replicate specific voices, they also pose risks when misused for
malicious purposes. This motivates the research community to develop models for
detecting synthesized speech (e.g., fake speech) generated by
deep-learning-based models, referred to as the Deepfake Speech Detection task.
As the Deepfake Speech Detection task has emerged in recent years, few survey
papers have been proposed for this task. Additionally, existing surveys
for the Deepfake Speech Detection task tend to summarize techniques used to
construct a Deepfake Speech Detection system rather than providing a thorough
analysis. This gap motivated us to conduct a comprehensive survey, providing a
critical analysis of the challenges and developments in Deepfake Speech
Detection. Our survey is innovatively structured, offering an in-depth analysis
of current challenge competitions, public datasets, and the deep-learning
techniques that provide enhanced solutions to address existing challenges in
the field. From our analysis, we propose hypotheses on leveraging and combining
specific deep learning techniques to improve the effectiveness of Deepfake
Speech Detection systems. Beyond conducting a survey, we perform extensive
experiments to validate these hypotheses and propose a highly competitive model
for the task of Deepfake Speech Detection. Given the analysis and the
experimental results, we finally indicate potential and promising research
directions for the Deepfake Speech Detection task.
|
2410.01110 | Yazhou Zhu | Yazhou Zhu, Minxian Li, Qiaolin Ye, Shidong Wang, Tong Xin, Haofeng
Zhang | RobustEMD: Domain Robust Matching for Cross-domain Few-shot Medical
Image Segmentation | More details should be included, and more experiments | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Few-shot medical image segmentation (FSMIS) aims to learn from limited
annotated data in the medical image analysis domain. Despite the progress that
has been achieved, current FSMIS models are all trained and deployed on the
same data domain, which is inconsistent with the clinical reality that medical
imaging data always spans different data domains (e.g., imaging modalities,
institutions, and equipment sequences). How can FSMIS models be enhanced to
generalize well across different medical imaging domains? In this paper, we
focus on the matching mechanism of few-shot semantic segmentation models and
introduce an Earth Mover's Distance (EMD) based domain-robust matching
mechanism for the cross-domain scenario. Specifically, we formulate the EMD
transportation process between the foreground support and query features, and a
texture-structure-aware weight generation method, which performs Sobel-based
image gradient calculation over the nodes, is introduced into the EMD matching
flow to restrain the domain-relevant nodes. Besides, a point-set-level distance
measurement metric is introduced to calculate the cost of transportation from
support-set nodes to query-set nodes. To evaluate the performance of our model,
we conduct experiments on three scenarios (i.e., cross-modal, cross-sequence,
and cross-institution) covering eight medical datasets and three body regions,
and the results demonstrate that our model achieves SoTA performance against
the compared models.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2024 22:39:26 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Oct 2024 01:57:34 GMT"
},
{
"version": "v3",
"created": "Sun, 12 Jan 2025 03:40:23 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Mar 2025 13:25:39 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Zhu",
"Yazhou",
""
],
[
"Li",
"Minxian",
""
],
[
"Ye",
"Qiaolin",
""
],
[
"Wang",
"Shidong",
""
],
[
"Xin",
"Tong",
""
],
[
"Zhang",
"Haofeng",
""
]
] | TITLE: RobustEMD: Domain Robust Matching for Cross-domain Few-shot Medical
Image Segmentation
ABSTRACT: Few-shot medical image segmentation (FSMIS) aims to learn from limited
annotated data in the medical image analysis domain. Despite the progress that
has been achieved, current FSMIS models are all trained and deployed on the
same data domain, which is inconsistent with the clinical reality that medical
imaging data always spans different data domains (e.g., imaging modalities,
institutions, and equipment sequences). How can FSMIS models be enhanced to
generalize well across different medical imaging domains? In this paper, we
focus on the matching mechanism of few-shot semantic segmentation models and
introduce an Earth Mover's Distance (EMD) based domain-robust matching
mechanism for the cross-domain scenario. Specifically, we formulate the EMD
transportation process between the foreground support and query features, and a
texture-structure-aware weight generation method, which performs Sobel-based
image gradient calculation over the nodes, is introduced into the EMD matching
flow to restrain the domain-relevant nodes. Besides, a point-set-level distance
measurement metric is introduced to calculate the cost of transportation from
support-set nodes to query-set nodes. To evaluate the performance of our model,
we conduct experiments on three scenarios (i.e., cross-modal, cross-sequence,
and cross-institution) covering eight medical datasets and three body regions,
and the results demonstrate that our model achieves SoTA performance against
the compared models.
|
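The RobustEMD record above centers on Earth Mover's Distance (EMD) matching between support and query features. As a generic illustration only, not the paper's matching mechanism or its point-set cost, the 1D EMD between two empirical feature samples can be computed with SciPy:

```python
# Generic 1D Earth Mover's Distance illustration (not RobustEMD's matching flow).
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
support_feats = rng.normal(loc=0.0, scale=1.0, size=256)  # stand-in "support" features
query_feats = rng.normal(loc=0.5, scale=1.2, size=256)    # stand-in "query" features

emd = wasserstein_distance(support_feats, query_feats)
print(f"1D EMD between the two feature samples: {emd:.3f}")
```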
2410.07752 | Daniel Cores | Daniel Cores, Michael Dorkenwald, Manuel Mucientes, Cees G. M. Snoek,
Yuki M. Asano | Lost in Time: A New Temporal Benchmark for VideoLLMs | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Large language models have demonstrated impressive performance when
integrated with vision models even enabling video understanding. However,
evaluating video models presents its own unique challenges, for which several
benchmarks have been proposed. In this paper, we show that the currently most
used video-language benchmarks can be solved without requiring much temporal
reasoning. We identified three main issues in existing datasets: (i) static
information from single frames is often sufficient to solve the tasks; (ii) the
text of the questions and candidate answers is overly informative, allowing
models to answer correctly without relying on any visual input; and (iii) world
knowledge alone can answer many of the questions, making the benchmarks a test
of knowledge replication rather than video reasoning. In addition, we found
that open-ended question-answering benchmarks for video understanding suffer
from similar issues while the automatic evaluation process with LLMs is
unreliable, making it an unsuitable alternative. As a solution, we propose
TVBench, a novel open-source video multiple-choice question-answering
benchmark, and demonstrate through extensive evaluations that it requires a
high level of temporal understanding. Surprisingly, we find that most recent
state-of-the-art video-language models perform similarly to random performance
on TVBench, with only a few models such as Qwen2-VL and Tarsier clearly
surpassing this baseline.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2024 09:28:36 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jan 2025 11:21:25 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 09:46:02 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Cores",
"Daniel",
""
],
[
"Dorkenwald",
"Michael",
""
],
[
"Mucientes",
"Manuel",
""
],
[
"Snoek",
"Cees G. M.",
""
],
[
"Asano",
"Yuki M.",
""
]
] | TITLE: Lost in Time: A New Temporal Benchmark for VideoLLMs
ABSTRACT: Large language models have demonstrated impressive performance when
integrated with vision models even enabling video understanding. However,
evaluating video models presents its own unique challenges, for which several
benchmarks have been proposed. In this paper, we show that the currently most
used video-language benchmarks can be solved without requiring much temporal
reasoning. We identified three main issues in existing datasets: (i) static
information from single frames is often sufficient to solve the tasks; (ii) the
text of the questions and candidate answers is overly informative, allowing
models to answer correctly without relying on any visual input; and (iii) world
knowledge alone can answer many of the questions, making the benchmarks a test
of knowledge replication rather than video reasoning. In addition, we found
that open-ended question-answering benchmarks for video understanding suffer
from similar issues while the automatic evaluation process with LLMs is
unreliable, making it an unsuitable alternative. As a solution, we propose
TVBench, a novel open-source video multiple-choice question-answering
benchmark, and demonstrate through extensive evaluations that it requires a
high level of temporal understanding. Surprisingly, we find that most recent
state-of-the-art video-language models perform similarly to random performance
on TVBench, with only a few models such as Qwen2-VL and Tarsier clearly
surpassing this baseline.
|
2410.11391 | Vivin Vinod | Vivin Vinod and Peter Zaspel | Benchmarking Data Efficiency in $\Delta$-ML and Multifidelity Models for
Quantum Chemistry | Supplementary sections S1-S4, FIG. S1-S4, and Table S1. Work modified
to include benchmarks for 3 more QC properties: first and second excitation
energies, magnitude of electronic dipole moment | null | null | null | physics.chem-ph cs.LG physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | The development of machine learning (ML) methods has made quantum chemistry
(QC) calculations more accessible by reducing the compute cost incurred in
conventional QC methods. This has since been translated into the overhead cost
of generating training data. Increased work in reducing the cost of generating
training data resulted in the development of $\Delta$-ML and multifidelity
machine learning methods which use data at more than one QC level of accuracy,
or fidelity.
This work compares the data costs associated with $\Delta$-ML, multifidelity
machine learning (MFML), and optimized MFML (o-MFML) in contrast with a newly
introduced Multifidelity$\Delta$-Machine Learning (MF$\Delta$ML) method for the
prediction of ground state energies, vertical excitation energies, and the
magnitude of electronic contribution of molecular dipole moments from the
multifidelity benchmark dataset QeMFi. This assessment is made on the basis of
training data generation cost associated with each model and is compared with
the single fidelity kernel ridge regression (KRR) case. The results indicate
that the use of multifidelity methods surpasses the standard $\Delta$-ML
approaches in cases of a large number of predictions. For applications which
require only a few evaluations to be made using ML models, while the
$\Delta$-ML method might be favored, the MF$\Delta$ML method is shown to be
more efficient.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 08:34:32 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Oct 2024 08:12:53 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 10:55:46 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Vinod",
"Vivin",
""
],
[
"Zaspel",
"Peter",
""
]
] | TITLE: Benchmarking Data Efficiency in $\Delta$-ML and Multifidelity Models for
Quantum Chemistry
ABSTRACT: The development of machine learning (ML) methods has made quantum chemistry
(QC) calculations more accessible by reducing the compute cost incurred in
conventional QC methods. This has since been translated into the overhead cost
of generating training data. Increased work in reducing the cost of generating
training data resulted in the development of $\Delta$-ML and multifidelity
machine learning methods which use data at more than one QC level of accuracy,
or fidelity.
This work compares the data costs associated with $\Delta$-ML, multifidelity
machine learning (MFML), and optimized MFML (o-MFML) in contrast with a newly
introduced Multifidelity$\Delta$-Machine Learning (MF$\Delta$ML) method for the
prediction of ground state energies, vertical excitation energies, and the
magnitude of electronic contribution of molecular dipole moments from the
multifidelity benchmark dataset QeMFi. This assessment is made on the basis of
training data generation cost associated with each model and is compared with
the single fidelity kernel ridge regression (KRR) case. The results indicate
that the use of multifidelity methods surpasses the standard $\Delta$-ML
approaches in cases of a large number of predictions. For applications which
require only a few evaluations to be made using ML models, while the
$\Delta$-ML method might be favored, the MF$\Delta$ML method is shown to be
more efficient.
|
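For context on the $\Delta$-ML baseline named in the record above: a $\Delta$-ML model learns only the correction from a cheap low-fidelity property to an expensive high-fidelity one. The sketch below uses synthetic data and scikit-learn's kernel ridge regression; it is illustrative and is not the authors' QeMFi/MFML code.

```python
# Minimal Delta-ML sketch with kernel ridge regression on synthetic data
# (illustrative only; descriptors and "energies" are made up).
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(200, 5))                   # stand-in molecular descriptors
e_low = X.sum(axis=1)                                   # cheap low-fidelity "energy"
e_high = e_low + 0.3 * np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)  # costly target

# Learn only the correction e_high - e_low.
delta_model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0)
delta_model.fit(X[:150], (e_high - e_low)[:150])

# Prediction = low-fidelity value + learned correction.
pred = e_low[150:] + delta_model.predict(X[150:])
mae = np.mean(np.abs(pred - e_high[150:]))
print(f"Delta-ML test MAE: {mae:.4f}")
```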
2410.11392 | Vivin Vinod | Vivin Vinod and Peter Zaspel | Investigating Data Hierarchies in Multifidelity Machine Learning for
Excitation Energies | Modified errors to be relative MAE. Transferability tests of training
on QeMFi and testing on QUESTDB have now been added | null | 10.1021/acs.jctc.4c01491 | null | physics.chem-ph cs.LG physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | Recent progress in machine learning (ML) has made high-accuracy quantum
chemistry (QC) calculations more accessible. Of particular interest are
multifidelity machine learning (MFML) methods where training data from
differing accuracies or fidelities are used. These methods usually employ a
fixed scaling factor, $\gamma$, to relate the number of training samples across
different fidelities, which reflects the cost and assumed sparsity of the data.
This study investigates the impact of modifying $\gamma$ on model efficiency
and accuracy for the prediction of vertical excitation energies using the QeMFi
benchmark dataset. Further, this work introduces QC compute time informed
scaling factors, denoted as $\theta$, that vary based on QC compute times at
different fidelities. A novel error metric, error contours of MFML, is proposed
to provide a comprehensive view of model error contributions from each
fidelity. The results indicate that high model accuracy can be achieved with
just 2 training samples at the target fidelity when a larger number of samples
from lower fidelities are used. This is further illustrated through a novel
concept, the $\Gamma$-curve, which compares model error against the time-cost
of generating training samples, demonstrating that multifidelity models can
achieve high accuracy while minimizing training data costs.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 08:35:00 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 11:20:46 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Vinod",
"Vivin",
""
],
[
"Zaspel",
"Peter",
""
]
] | TITLE: Investigating Data Hierarchies in Multifidelity Machine Learning for
Excitation Energies
ABSTRACT: Recent progress in machine learning (ML) has made high-accuracy quantum
chemistry (QC) calculations more accessible. Of particular interest are
multifidelity machine learning (MFML) methods where training data from
differing accuracies or fidelities are used. These methods usually employ a
fixed scaling factor, $\gamma$, to relate the number of training samples across
different fidelities, which reflects the cost and assumed sparsity of the data.
This study investigates the impact of modifying $\gamma$ on model efficiency
and accuracy for the prediction of vertical excitation energies using the QeMFi
benchmark dataset. Further, this work introduces QC compute time informed
scaling factors, denoted as $\theta$, that vary based on QC compute times at
different fidelities. A novel error metric, error contours of MFML, is proposed
to provide a comprehensive view of model error contributions from each
fidelity. The results indicate that high model accuracy can be achieved with
just 2 training samples at the target fidelity when a larger number of samples
from lower fidelities are used. This is further illustrated through a novel
concept, the $\Gamma$-curve, which compares model error against the time-cost
of generating training samples, demonstrating that multifidelity models can
achieve high accuracy while minimizing training data costs.
|
2410.12255 | Ziqi Ji | Ziqi Ji, Penghao Duan, Gang Du | Enhancing machine learning turbulence model generalizability via tensor
basis normalization | null | null | null | null | physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | With the rapid advancement of machine learning techniques, the development
and study of machine learning turbulence models have become increasingly
prevalent. As a critical component of turbulence modeling, the constitutive
relationship between the Reynolds stress tensor and the mean flow quantities,
modeled using machine learning methods, faces a pressing challenge: the lack of
generalizability. To address this issue, we propose a novel tensor basis
normalization technique to improve machine learning turbulence models, grounded
in the general effective-viscosity hypothesis. In this study, we utilize direct
numerical simulation (DNS) results of periodic hill flows as training data to
develop a symbolic regression-based turbulence model based on the general
effective-viscosity hypothesis. Furthermore, we construct a systematic
validation dataset to evaluate the generalizability of our symbolic
regression-based turbulence model. This validation set includes periodic hills
with different aspect ratios from the training dataset, zero pressure gradient
flat plate flows, three-dimensional incompressible flows over a NACA0012
airfoil, and transonic axial compressor rotor flows. These validation cases
exhibit significant variations in flow characteristics and geometry,
progressively increasing their differences from the training dataset. Such a
diverse validation set is a robust benchmark to assess the generalizability of
the proposed turbulence model. Finally, we demonstrate that our symbolic
regression-based turbulence model performs effectively across validation cases,
encompassing various separation features, geometries, and Reynolds numbers.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2024 05:44:07 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Oct 2024 02:41:33 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 12:53:57 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Mar 2025 13:17:51 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Ji",
"Ziqi",
""
],
[
"Duan",
"Penghao",
""
],
[
"Du",
"Gang",
""
]
] | TITLE: Enhancing machine learning turbulence model generalizability via tensor
basis normalization
ABSTRACT: With the rapid advancement of machine learning techniques, the development
and study of machine learning turbulence models have become increasingly
prevalent. As a critical component of turbulence modeling, the constitutive
relationship between the Reynolds stress tensor and the mean flow quantities,
modeled using machine learning methods, faces a pressing challenge: the lack of
generalizability. To address this issue, we propose a novel tensor basis
normalization technique to improve machine learning turbulence models, grounded
in the general effective-viscosity hypothesis. In this study, we utilize direct
numerical simulation (DNS) results of periodic hill flows as training data to
develop a symbolic regression-based turbulence model based on the general
effective-viscosity hypothesis. Furthermore, we construct a systematic
validation dataset to evaluate the generalizability of our symbolic
regression-based turbulence model. This validation set includes periodic hills
with different aspect ratios from the training dataset, zero pressure gradient
flat plate flows, three-dimensional incompressible flows over a NACA0012
airfoil, and transonic axial compressor rotor flows. These validation cases
exhibit significant variations in flow characteristics and geometry,
progressively increasing their differences from the training dataset. Such a
diverse validation set is a robust benchmark to assess the generalizability of
the proposed turbulence model. Finally, we demonstrate that our symbolic
regression-based turbulence model performs effectively across validation cases,
encompassing various separation features, geometries, and Reynolds numbers.
|
2410.13862 | Haofei Xu | Haofei Xu, Songyou Peng, Fangjinhua Wang, Hermann Blum, Daniel Barath,
Andreas Geiger, Marc Pollefeys | DepthSplat: Connecting Gaussian Splatting and Depth | CVPR 2025, Project page: https://haofeixu.github.io/depthsplat/,
Code: https://github.com/cvg/depthsplat | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Gaussian splatting and single-view depth estimation are typically studied in
isolation. In this paper, we present DepthSplat to connect Gaussian splatting
and depth estimation and study their interactions. More specifically, we first
contribute a robust multi-view depth model by leveraging pre-trained monocular
depth features, leading to high-quality feed-forward 3D Gaussian splatting
reconstructions. We also show that Gaussian splatting can serve as an
unsupervised pre-training objective for learning powerful depth models from
large-scale multi-view posed datasets. We validate the synergy between Gaussian
splatting and depth estimation through extensive ablation and cross-task
transfer experiments. Our DepthSplat achieves state-of-the-art performance on
ScanNet, RealEstate10K and DL3DV datasets in terms of both depth estimation and
novel view synthesis, demonstrating the mutual benefits of connecting both
tasks. In addition, DepthSplat enables feed-forward reconstruction from 12
input views (512x960 resolutions) in 0.6 seconds.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2024 17:59:58 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Nov 2024 22:34:19 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 15:20:52 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Xu",
"Haofei",
""
],
[
"Peng",
"Songyou",
""
],
[
"Wang",
"Fangjinhua",
""
],
[
"Blum",
"Hermann",
""
],
[
"Barath",
"Daniel",
""
],
[
"Geiger",
"Andreas",
""
],
[
"Pollefeys",
"Marc",
""
]
] | TITLE: DepthSplat: Connecting Gaussian Splatting and Depth
ABSTRACT: Gaussian splatting and single-view depth estimation are typically studied in
isolation. In this paper, we present DepthSplat to connect Gaussian splatting
and depth estimation and study their interactions. More specifically, we first
contribute a robust multi-view depth model by leveraging pre-trained monocular
depth features, leading to high-quality feed-forward 3D Gaussian splatting
reconstructions. We also show that Gaussian splatting can serve as an
unsupervised pre-training objective for learning powerful depth models from
large-scale multi-view posed datasets. We validate the synergy between Gaussian
splatting and depth estimation through extensive ablation and cross-task
transfer experiments. Our DepthSplat achieves state-of-the-art performance on
ScanNet, RealEstate10K and DL3DV datasets in terms of both depth estimation and
novel view synthesis, demonstrating the mutual benefits of connecting both
tasks. In addition, DepthSplat enables feed-forward reconstruction from 12
input views (512x960 resolutions) in 0.6 seconds.
|
2410.14103 | Chaorong Li | Li Chaorong, Ling Xudong, Yang Qiang, Qin Fengqing and Huang Yuanyuan | Extreme Precipitation Nowcasting using Multi-Task Latent Diffusion
Models | 15 pages, 14 figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Deep learning models have achieved remarkable progress in precipitation
prediction. However, they still face significant challenges in accurately
capturing spatial details of radar images, particularly in regions of high
precipitation intensity. This limitation results in reduced spatial
localization accuracy when predicting radar echo images across varying
precipitation intensities. To address this challenge, we propose an innovative
precipitation prediction approach termed the Multi-Task Latent Diffusion Model
(MTLDM). The core idea of MTLDM lies in the recognition that precipitation
radar images represent a combination of multiple components, each corresponding
to different precipitation intensities. Thus, we adopt a divide-and-conquer
strategy, decomposing radar images into several sub-images based on their
precipitation intensities and individually modeling these components. During
the prediction stage, MTLDM integrates these sub-image representations by
utilizing a trained latent-space rainfall diffusion model, followed by decoding
through a multi-task decoder to produce the final precipitation prediction.
Experimental evaluations conducted on the MRMS dataset demonstrate that the
proposed MTLDM method surpasses state-of-the-art techniques, achieving a
Critical Success Index (CSI) improvement of 13-26%.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2024 00:50:56 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Oct 2024 06:46:26 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 08:14:47 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Chaorong",
"Li",
""
],
[
"Xudong",
"Ling",
""
],
[
"Qiang",
"Yang",
""
],
[
"Fengqing",
"Qin",
""
],
[
"Yuanyuan",
"Huang",
""
]
] | TITLE: Extreme Precipitation Nowcasting using Multi-Task Latent Diffusion
Models
ABSTRACT: Deep learning models have achieved remarkable progress in precipitation
prediction. However, they still face significant challenges in accurately
capturing spatial details of radar images, particularly in regions of high
precipitation intensity. This limitation results in reduced spatial
localization accuracy when predicting radar echo images across varying
precipitation intensities. To address this challenge, we propose an innovative
precipitation prediction approach termed the Multi-Task Latent Diffusion Model
(MTLDM). The core idea of MTLDM lies in the recognition that precipitation
radar images represent a combination of multiple components, each corresponding
to different precipitation intensities. Thus, we adopt a divide-and-conquer
strategy, decomposing radar images into several sub-images based on their
precipitation intensities and individually modeling these components. During
the prediction stage, MTLDM integrates these sub-image representations by
utilizing a trained latent-space rainfall diffusion model, followed by decoding
through a multi-task decoder to produce the final precipitation prediction.
Experimental evaluations conducted on the MRMS dataset demonstrate that the
proposed MTLDM method surpasses state-of-the-art techniques, achieving a
Critical Success Index (CSI) improvement of 13-26%.
|
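The MTLDM record above reports gains in the Critical Success Index (CSI). For reference, CSI is commonly computed as hits / (hits + misses + false alarms) over a thresholded precipitation field; the sketch below uses an assumed 1.0 mm/h threshold and synthetic arrays, not the paper's MRMS evaluation.

```python
# Generic Critical Success Index (CSI) for thresholded precipitation fields
# (reference definition only; threshold and arrays here are illustrative).
import numpy as np

def csi(pred: np.ndarray, obs: np.ndarray, threshold: float = 1.0) -> float:
    p, o = pred >= threshold, obs >= threshold
    hits = np.sum(p & o)
    misses = np.sum(~p & o)
    false_alarms = np.sum(p & ~o)
    denom = hits + misses + false_alarms
    return float(hits / denom) if denom else float("nan")

rng = np.random.default_rng(1)
obs = rng.gamma(shape=0.8, scale=2.0, size=(64, 64))    # stand-in radar field (mm/h)
pred = obs + rng.normal(scale=0.5, size=obs.shape)      # stand-in forecast
print(f"CSI at 1.0 mm/h: {csi(pred, obs):.3f}")
```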
2410.14340 | Josiah Aklilu | Josiah Aklilu, Xiaohan Wang, Serena Yeung-Levy | Zero-shot Action Localization via the Confidence of Large
Vision-Language Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Precise action localization in untrimmed video is vital for fields such as
professional sports and minimally invasive surgery, where the delineation of
particular motions in recordings can dramatically enhance analysis. But in many
cases, large scale datasets with video-label pairs for localization are
unavailable, limiting the opportunity to fine-tune video-understanding models.
Recent developments in large vision-language models (LVLM) address this need
with impressive zero-shot capabilities in a variety of video understanding
tasks. However, the adaptation of LVLMs, with their powerful visual question
answering capabilities, to zero-shot localization in long-form video is still
relatively unexplored. To this end, we introduce a true Zero-shot Action
Localization method (ZEAL). Specifically, we leverage the built-in action
knowledge of a large language model (LLM) to inflate actions into detailed
descriptions of the archetypal start and end of the action. These descriptions
serve as queries to LVLM for generating frame-level confidence scores which can
be aggregated to produce localization outputs. The simplicity and flexibility
of our method makes it amenable to more capable LVLMs as they are developed,
and we demonstrate remarkable results in zero-shot action localization on a
challenging benchmark, without any training. Our code is publicly available at
$\href{https://github.com/josaklil-ai/zeal}{github.com/josaklil-ai/zeal}$.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2024 09:51:14 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 23:00:49 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Aklilu",
"Josiah",
""
],
[
"Wang",
"Xiaohan",
""
],
[
"Yeung-Levy",
"Serena",
""
]
] | TITLE: Zero-shot Action Localization via the Confidence of Large
Vision-Language Models
ABSTRACT: Precise action localization in untrimmed video is vital for fields such as
professional sports and minimally invasive surgery, where the delineation of
particular motions in recordings can dramatically enhance analysis. But in many
cases, large scale datasets with video-label pairs for localization are
unavailable, limiting the opportunity to fine-tune video-understanding models.
Recent developments in large vision-language models (LVLM) address this need
with impressive zero-shot capabilities in a variety of video understanding
tasks. However, the adaptation of LVLMs, with their powerful visual question
answering capabilities, to zero-shot localization in long-form video is still
relatively unexplored. To this end, we introduce a true Zero-shot Action
Localization method (ZEAL). Specifically, we leverage the built-in action
knowledge of a large language model (LLM) to inflate actions into detailed
descriptions of the archetypal start and end of the action. These descriptions
serve as queries to LVLM for generating frame-level confidence scores which can
be aggregated to produce localization outputs. The simplicity and flexibility
of our method makes it amenable to more capable LVLMs as they are developed,
and we demonstrate remarkable results in zero-shot action localization on a
challenging benchmark, without any training. Our code is publicly available at
$\href{https://github.com/josaklil-ai/zeal}{github.com/josaklil-ai/zeal}$.
|
2410.14489 | Rabea Khatun | Maksuda Akter, Rabea Khatun, Md. Alamin Talukder, Md. Manowarul Islam,
Md. Ashraf Uddin | An Integrated Deep Learning Model for Skin Cancer Detection Using Hybrid
Feature Fusion Technique | null | Biomedical Materials & Devices,2025 | null | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Skin cancer is a serious and potentially fatal disease caused by DNA damage.
Early detection significantly increases survival rates, making accurate
diagnosis crucial. In this groundbreaking study, we present a hybrid framework
based on Deep Learning (DL) that achieves precise classification of benign and
malignant skin lesions. Our approach begins with dataset preprocessing to
enhance classification accuracy, followed by training two separate pre-trained
DL models, InceptionV3 and DenseNet121. By fusing the results of each model
using the weighted sum rule, our system achieves exceptional accuracy rates.
Specifically, we achieve a 92.27% detection accuracy rate, 92.33% sensitivity,
92.22% specificity, 90.81% precision, and 91.57% F1-score, outperforming
existing models and demonstrating the robustness and trustworthiness of our
hybrid approach. Our study represents a significant advance in skin cancer
diagnosis and provides a promising foundation for further research in the
field. With the potential to save countless lives through earlier detection,
our hybrid deep-learning approach is a game-changer in the fight against skin
cancer.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2024 14:19:13 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Oct 2024 12:32:53 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Akter",
"Maksuda",
""
],
[
"Khatun",
"Rabea",
""
],
[
"Talukder",
"Md. Alamin",
""
],
[
"Islam",
"Md. Manowarul",
""
],
[
"Uddin",
"Md. Ashraf",
""
]
] | TITLE: An Integrated Deep Learning Model for Skin Cancer Detection Using Hybrid
Feature Fusion Technique
ABSTRACT: Skin cancer is a serious and potentially fatal disease caused by DNA damage.
Early detection significantly increases survival rates, making accurate
diagnosis crucial. In this groundbreaking study, we present a hybrid framework
based on Deep Learning (DL) that achieves precise classification of benign and
malignant skin lesions. Our approach begins with dataset preprocessing to
enhance classification accuracy, followed by training two separate pre-trained
DL models, InceptionV3 and DenseNet121. By fusing the results of each model
using the weighted sum rule, our system achieves exceptional accuracy rates.
Specifically, we achieve a 92.27% detection accuracy rate, 92.33% sensitivity,
92.22% specificity, 90.81% precision, and 91.57% F1-score, outperforming
existing models and demonstrating the robustness and trustworthiness of our
hybrid approach. Our study represents a significant advance in skin cancer
diagnosis and provides a promising foundation for further research in the
field. With the potential to save countless lives through earlier detection,
our hybrid deep-learning approach is a game-changer in the fight against skin
cancer.
|
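The fusion step named in the record above, the weighted sum rule, combines per-class probabilities from two backbones. A minimal generic sketch follows; the weights and scores are illustrative placeholders, not the paper's tuned InceptionV3/DenseNet121 outputs.

```python
# Generic weighted-sum fusion of two classifiers' class probabilities
# (illustrative equal weights; scores are made up).
import numpy as np

p_model_a = np.array([[0.65, 0.35],     # per-sample [benign, malignant] scores
                      [0.20, 0.80]])
p_model_b = np.array([[0.55, 0.45],
                      [0.30, 0.70]])

w_a, w_b = 0.5, 0.5                     # assumed equal fusion weights
p_fused = w_a * p_model_a + w_b * p_model_b
labels = p_fused.argmax(axis=1)         # 0 = benign, 1 = malignant
print(p_fused, labels)
```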
2410.20016 | Zhecheng Li | Zhecheng Li, Yiwei Wang, Bryan Hooi, Yujun Cai, Zhen Xiong, Nanyun
Peng, Kai-wei Chang | Vulnerability of LLMs to Vertically Aligned Text Manipulations | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Text classification involves categorizing a given text, such as determining
its sentiment or identifying harmful content. With the advancement of large
language models (LLMs), these models have become highly effective at performing
text classification tasks. However, they still show vulnerabilities to
variations in text formatting. Recent research demonstrates that modifying
input formats, such as vertically aligning words for encoder-based models, can
substantially lower accuracy in text classification tasks. While easily
understood by humans, these inputs can significantly mislead models, posing a
potential risk of bypassing detection in real-world scenarios involving harmful
or sensitive information. With the expanding application of LLMs, a crucial
question arises: Do decoder-based LLMs exhibit similar vulnerabilities to
vertically formatted text input? In this paper, we investigate the impact of
vertical text input on the performance of various LLMs across multiple text
classification datasets and analyze the underlying causes. Our findings are as
follows: (i) Vertical text input significantly degrades the accuracy of LLMs in
text classification tasks. (ii) Chain of Thought (CoT) reasoning does not help
LLMs recognize vertical input or mitigate its vulnerability, but few-shot
learning with careful analysis does. (iii) We explore the underlying cause of
the vulnerability by analyzing the inherent issues in tokenization and
attention matrices.
| [
{
"version": "v1",
"created": "Sat, 26 Oct 2024 00:16:08 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 05:09:53 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Li",
"Zhecheng",
""
],
[
"Wang",
"Yiwei",
""
],
[
"Hooi",
"Bryan",
""
],
[
"Cai",
"Yujun",
""
],
[
"Xiong",
"Zhen",
""
],
[
"Peng",
"Nanyun",
""
],
[
"Chang",
"Kai-wei",
""
]
] | TITLE: Vulnerability of LLMs to Vertically Aligned Text Manipulations
ABSTRACT: Text classification involves categorizing a given text, such as determining
its sentiment or identifying harmful content. With the advancement of large
language models (LLMs), these models have become highly effective at performing
text classification tasks. However, they still show vulnerabilities to
variations in text formatting. Recent research demonstrates that modifying
input formats, such as vertically aligning words for encoder-based models, can
substantially lower accuracy in text classification tasks. While easily
understood by humans, these inputs can significantly mislead models, posing a
potential risk of bypassing detection in real-world scenarios involving harmful
or sensitive information. With the expanding application of LLMs, a crucial
question arises: Do decoder-based LLMs exhibit similar vulnerabilities to
vertically formatted text input? In this paper, we investigate the impact of
vertical text input on the performance of various LLMs across multiple text
classification datasets and analyze the underlying causes. Our findings are as
follows: (i) Vertical text input significantly degrades the accuracy of LLMs in
text classification tasks. (ii) Chain of Thought (CoT) reasoning does not help
LLMs recognize vertical input or mitigate its vulnerability, but few-shot
learning with careful analysis does. (iii) We explore the underlying cause of
the vulnerability by analyzing the inherent issues in tokenization and
attention matrices.
|
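The manipulation studied in the record above, vertically aligned text, can be reproduced with simple string handling. The sketch below shows one plausible rendering (each word written top-to-bottom, words placed side by side); the paper's exact layout may differ.

```python
# One plausible "vertical text" rendering of an input sentence
# (illustrative; not necessarily the formatting used in the paper).
def verticalize(sentence: str) -> str:
    words = sentence.split()
    height = max(len(w) for w in words)
    rows = []
    for i in range(height):
        # Take the i-th character of each word, padding shorter words with spaces.
        rows.append(" ".join(w[i] if i < len(w) else " " for w in words))
    return "\n".join(rows)

print(verticalize("this movie was great"))
```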
2410.20021 | Zhecheng Li | Zhecheng Li, Yiwei Wang, Bryan Hooi, Yujun Cai, Naifan Cheung, Nanyun
Peng, Kai-wei Chang | Think Carefully and Check Again! Meta-Generation Unlocking LLMs for
Low-Resource Cross-Lingual Summarization | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Cross-lingual summarization (CLS) aims to generate a summary for the source
text in a different target language. Currently, instruction-tuned large
language models (LLMs) excel at various English tasks. However, unlike
languages such as English, Chinese or Spanish, for those relatively
low-resource languages with limited usage or data, recent studies have shown
that LLMs' performance on CLS tasks remains unsatisfactory even with few-shot
settings. This raises the question: Are LLMs capable of handling cross-lingual
summarization tasks for low-resource languages? To resolve this question, we
fully explore the potential of large language models on the cross-lingual
summarization task for low-resource languages through our four-step zero-shot
method: Summarization, Improvement, Translation and Refinement (SITR) with
correspondingly designed prompts. We test our proposed method with multiple
LLMs on two well-known cross-lingual summarization datasets with various
low-resource target languages. The results show that: i) GPT-3.5 and GPT-4
significantly and consistently outperform other baselines when using our
zero-shot SITR methods. ii) By employing our proposed method, we unlock the
potential of LLMs, enabling them to effectively handle cross-lingual
summarization tasks for relatively low-resource languages.
| [
{
"version": "v1",
"created": "Sat, 26 Oct 2024 00:39:44 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 05:11:24 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Li",
"Zhecheng",
""
],
[
"Wang",
"Yiwei",
""
],
[
"Hooi",
"Bryan",
""
],
[
"Cai",
"Yujun",
""
],
[
"Cheung",
"Naifan",
""
],
[
"Peng",
"Nanyun",
""
],
[
"Chang",
"Kai-wei",
""
]
] | TITLE: Think Carefully and Check Again! Meta-Generation Unlocking LLMs for
Low-Resource Cross-Lingual Summarization
ABSTRACT: Cross-lingual summarization (CLS) aims to generate a summary for the source
text in a different target language. Currently, instruction-tuned large
language models (LLMs) excel at various English tasks. However, unlike
languages such as English, Chinese or Spanish, for those relatively
low-resource languages with limited usage or data, recent studies have shown
that LLMs' performance on CLS tasks remains unsatisfactory even with few-shot
settings. This raises the question: Are LLMs capable of handling cross-lingual
summarization tasks for low-resource languages? To resolve this question, we
fully explore the potential of large language models on the cross-lingual
summarization task for low-resource languages through our four-step zero-shot
method: Summarization, Improvement, Translation and Refinement (SITR) with
correspondingly designed prompts. We test our proposed method with multiple
LLMs on two well-known cross-lingual summarization datasets with various
low-resource target languages. The results show that: i) GPT-3.5 and GPT-4
significantly and consistently outperform other baselines when using our
zero-shot SITR methods. ii) By employing our proposed method, we unlock the
potential of LLMs, enabling them to effectively handle cross-lingual
summarization tasks for relatively low-resource languages.
|
2410.21306 | Farid Ariai | Farid Ariai and Gianluca Demartini | Natural Language Processing for the Legal Domain: A Survey of Tasks,
Datasets, Models, and Challenges | 35 pages | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Natural Language Processing (NLP) is revolutionising the way legal
professionals and laypersons operate in the legal field. The considerable
potential for NLP in the legal sector, especially in developing computational
tools for various legal processes, has captured the interest of researchers for
years. This survey follows the Preferred Reporting Items for Systematic Reviews
and Meta-Analyses framework, reviewing 154 studies, with a final selection of
133 after manual filtering. It explores foundational concepts related to NLP in
the legal domain, illustrating the unique aspects and challenges of processing
legal texts, such as extensive document length, complex language, and limited
open legal datasets. We provide an overview of NLP tasks specific to legal
text, such as Legal Document Summarisation, legal Named Entity Recognition,
Legal Question Answering, Legal Argument Mining, Legal Text Classification, and
Legal Judgement Prediction. In the section on legal Language Models (LMs), we
analyse both developed LMs and approaches for adapting general LMs to the legal
domain. Additionally, we identify 16 Open Research Challenges, including bias
in Artificial Intelligence applications, the need for more robust and
interpretable models, and improving explainability to handle the complexities
of legal language and reasoning.
| [
{
"version": "v1",
"created": "Fri, 25 Oct 2024 01:17:02 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 03:45:48 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Ariai",
"Farid",
""
],
[
"Demartini",
"Gianluca",
""
]
] | TITLE: Natural Language Processing for the Legal Domain: A Survey of Tasks,
Datasets, Models, and Challenges
ABSTRACT: Natural Language Processing (NLP) is revolutionising the way legal
professionals and laypersons operate in the legal field. The considerable
potential for NLP in the legal sector, especially in developing computational
tools for various legal processes, has captured the interest of researchers for
years. This survey follows the Preferred Reporting Items for Systematic Reviews
and Meta-Analyses framework, reviewing 154 studies, with a final selection of
133 after manual filtering. It explores foundational concepts related to NLP in
the legal domain, illustrating the unique aspects and challenges of processing
legal texts, such as extensive document length, complex language, and limited
open legal datasets. We provide an overview of NLP tasks specific to legal
text, such as Legal Document Summarisation, legal Named Entity Recognition,
Legal Question Answering, Legal Argument Mining, Legal Text Classification, and
Legal Judgement Prediction. In the section on legal Language Models (LMs), we
analyse both developed LMs and approaches for adapting general LMs to the legal
domain. Additionally, we identify 16 Open Research Challenges, including bias
in Artificial Intelligence applications, the need for more robust and
interpretable models, and improving explainability to handle the complexities
of legal language and reasoning.
|
2411.04923 | Shehan Munasinghe | Shehan Munasinghe, Hanan Gani, Wenqi Zhu, Jiale Cao, Eric Xing, Fahad
Shahbaz Khan, Salman Khan | VideoGLaMM: A Large Multimodal Model for Pixel-Level Visual Grounding in
Videos | Technical Report of VideoGLaMM | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Fine-grained alignment between videos and text is challenging due to complex
spatial and temporal dynamics in videos. Existing video-based Large Multimodal
Models (LMMs) handle basic conversations but struggle with precise pixel-level
grounding in videos. To address this, we introduce VideoGLaMM, an LMM designed
for fine-grained pixel-level grounding in videos based on user-provided textual
inputs. Our design seamlessly connects three key components: a Large Language
Model, a dual vision encoder that emphasizes both spatial and temporal details,
and a spatio-temporal decoder for accurate mask generation. This connection is
facilitated via tunable V-L and L-V adapters that enable close Vision-Language
(VL) alignment. The architecture is trained to synchronize both spatial and
temporal elements of video content with textual instructions. To enable
fine-grained grounding, we curate a multimodal dataset featuring detailed
visually-grounded conversations using a semiautomatic annotation pipeline,
resulting in a diverse set of 38k video-QA triplets along with 83k objects and
671k masks. We evaluate VideoGLaMM on three challenging tasks: Grounded
Conversation Generation, Visual Grounding, and Referring Video Segmentation.
Experimental results show that our model consistently outperforms existing
approaches across all three tasks.
| [
{
"version": "v1",
"created": "Thu, 7 Nov 2024 17:59:27 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Feb 2025 13:51:14 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 10:08:13 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Munasinghe",
"Shehan",
""
],
[
"Gani",
"Hanan",
""
],
[
"Zhu",
"Wenqi",
""
],
[
"Cao",
"Jiale",
""
],
[
"Xing",
"Eric",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Khan",
"Salman",
""
]
] | TITLE: VideoGLaMM: A Large Multimodal Model for Pixel-Level Visual Grounding in
Videos
ABSTRACT: Fine-grained alignment between videos and text is challenging due to complex
spatial and temporal dynamics in videos. Existing video-based Large Multimodal
Models (LMMs) handle basic conversations but struggle with precise pixel-level
grounding in videos. To address this, we introduce VideoGLaMM, an LMM designed
for fine-grained pixel-level grounding in videos based on user-provided textual
inputs. Our design seamlessly connects three key components: a Large Language
Model, a dual vision encoder that emphasizes both spatial and temporal details,
and a spatio-temporal decoder for accurate mask generation. This connection is
facilitated via tunable V-L and L-V adapters that enable close Vision-Language
(VL) alignment. The architecture is trained to synchronize both spatial and
temporal elements of video content with textual instructions. To enable
fine-grained grounding, we curate a multimodal dataset featuring detailed
visually-grounded conversations using a semiautomatic annotation pipeline,
resulting in a diverse set of 38k video-QA triplets along with 83k objects and
671k masks. We evaluate VideoGLaMM on three challenging tasks: Grounded
Conversation Generation, Visual Grounding, and Referring Video Segmentation.
Experimental results show that our model consistently outperforms existing
approaches across all three tasks.
|
2411.10364 | Han Chen | Tianhao Ma, Han Chen, Juncheng Hu, Yungang Zhu, Ximing Li | Forming Auxiliary High-confident Instance-level Loss to Promote Learning
from Label Proportions | Accepted as a conference paper at CVPR 2025 | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Learning from label proportions (LLP), i.e., a challenging weakly-supervised
learning task, aims to train a classifier by using bags of instances and the
proportions of classes within bags, rather than annotated labels for each
instance. Beyond the traditional bag-level loss, the mainstream methodology of
LLP is to incorporate an auxiliary instance-level loss with pseudo-labels
formed by predictions. Unfortunately, we empirically observed that the
pseudo-labels are often inaccurate due to over-smoothing, especially for
the scenarios with large bag sizes, hurting the classifier induction. To
alleviate this problem, we suggest a novel LLP method, namely Learning from
Label Proportions with Auxiliary High-confident Instance-level Loss
(L^2P-AHIL). Specifically, we propose a dual entropy-based weight (DEW) method
to adaptively measure the confidences of pseudo-labels. It simultaneously
emphasizes accurate predictions at the bag level and avoids overly smoothed
predictions. We then form high-confident instance-level loss with DEW, and
jointly optimize it with the bag-level loss in a self-training manner. The
experimental results on benchmark datasets show that L^2P-AHIL can surpass the
existing baseline methods, and the performance gain can be more significant as
the bag size increases. The implementation of our method is available at
https://github.com/TianhaoMa5/LLP-AHIL.
| [
{
"version": "v1",
"created": "Fri, 15 Nov 2024 17:14:18 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 03:41:58 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Ma",
"Tianhao",
""
],
[
"Chen",
"Han",
""
],
[
"Hu",
"Juncheng",
""
],
[
"Zhu",
"Yungang",
""
],
[
"Li",
"Ximing",
""
]
] | TITLE: Forming Auxiliary High-confident Instance-level Loss to Promote Learning
from Label Proportions
ABSTRACT: Learning from label proportions (LLP), i.e., a challenging weakly-supervised
learning task, aims to train a classifier by using bags of instances and the
proportions of classes within bags, rather than annotated labels for each
instance. Beyond the traditional bag-level loss, the mainstream methodology of
LLP is to incorporate an auxiliary instance-level loss with pseudo-labels
formed by predictions. Unfortunately, we empirically observed that the
pseudo-labels are often inaccurate due to over-smoothing, especially in
scenarios with large bag sizes, which hurts classifier induction. To
alleviate this problem, we suggest a novel LLP method, namely Learning from
Label Proportions with Auxiliary High-confident Instance-level Loss
(L^2P-AHIL). Specifically, we propose a dual entropy-based weight (DEW) method
to adaptively measure the confidences of pseudo-labels. It simultaneously
emphasizes accurate predictions at the bag level and avoids overly smoothed
predictions. We then form high-confident instance-level loss with DEW, and
jointly optimize it with the bag-level loss in a self-training manner. The
experimental results on benchmark datasets show that L^2P-AHIL can surpass the
existing baseline methods, and the performance gain can be more significant as
the bag size increases. The implementation of our method is available at
https://github.com/TianhaoMa5/LLP-AHIL.
|
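To make the dual entropy-based weighting idea in L^2P-AHIL above more concrete, here is a minimal NumPy sketch that combines an instance-level confidence (one minus normalized prediction entropy) with a bag-level confidence (how closely the averaged prediction matches the given class proportions). The abstract does not spell out the DEW formula, so the function name dew_style_weights, the exp(-cross-entropy) squashing, and the multiplicative combination are illustrative assumptions rather than the paper's exact method.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of (batches of) probability vectors."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=axis)

def dew_style_weights(instance_probs, bag_proportions):
    """Toy dual entropy-based weighting for a single bag.

    instance_probs : (n, C) softmax outputs for the n instances in the bag.
    bag_proportions: (C,)  given class proportions of the bag.
    Returns per-instance weights in (0, 1].
    """
    _, C = instance_probs.shape
    # Instance term: confident (low-entropy) predictions get weights near 1.
    inst_conf = 1.0 - entropy(instance_probs) / np.log(C)              # (n,)
    # Bag term: cross-entropy between the given proportions and the bag's
    # averaged prediction, squashed into (0, 1].
    agg = instance_probs.mean(axis=0)
    bag_ce = -np.sum(bag_proportions * np.log(np.clip(agg, 1e-12, 1.0)))
    bag_conf = np.exp(-bag_ce)
    return inst_conf * bag_conf

# Tiny usage example with random predictions for one bag of 8 instances.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(4), size=8)
w = dew_style_weights(probs, np.array([0.5, 0.25, 0.125, 0.125]))
print(w.shape)  # (8,)
```

In a full pipeline these weights would scale the per-instance pseudo-label loss before it is optimized jointly with the bag-level proportion loss.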
2411.11874 | Dan Li | Dan Li, Hye-Bin Shin, Kang Yin and Seong-Whan Lee | Personalized Continual EEG Decoding: Retaining and Transferring
Knowledge | null | null | null | null | eess.SP cs.HC | http://creativecommons.org/licenses/by/4.0/ | The significant inter-subject variability in electroencephalogram (EEG)
signals often results in substantial changes to neural network weights as data
distributions shift. This variability frequently causes catastrophic forgetting
in continual EEG decoding tasks, where previously acquired knowledge is
overwritten as new subjects are introduced. While retraining the entire dataset
for each new subject can mitigate forgetting, this approach imposes significant
computational costs, rendering it impractical for real-world applications.
Therefore, an ideal brain-computer interface (BCI) model should incrementally
learn new information without requiring complete retraining, thereby reducing
computational overhead. Existing EEG decoding methods typically rely on large,
centralized source-domain datasets for pre-training to improve model
generalization. However, in practical scenarios, data availability is often
constrained by privacy concerns. Furthermore, these methods are susceptible to
catastrophic forgetting in continual EEG decoding tasks, significantly limiting
their utility in long-term learning scenarios. To address these issues, we
propose the Personalized Continual EEG Decoding (PCED) framework for continual
EEG decoding. The framework uses Euclidean Alignment for fast domain
adaptation, reducing inter-subject variability. To retain knowledge and
prevent forgetting, it includes an exemplar replay mechanism that preserves key
information from past tasks. A reservoir sampling-based memory management
strategy optimizes exemplar storage to handle memory constraints in long-term
learning. Experiments on the OpenBMI dataset with 54 subjects show that PCED
balances knowledge retention and classification performance, providing an
efficient solution for real-world BCI applications.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2024 05:28:29 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 05:18:00 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Li",
"Dan",
""
],
[
"Shin",
"Hye-Bin",
""
],
[
"Yin",
"Kang",
""
],
[
"Lee",
"Seong-Whan",
""
]
] | TITLE: Personalized Continual EEG Decoding: Retaining and Transferring
Knowledge
ABSTRACT: The significant inter-subject variability in electroencephalogram (EEG)
signals often results in substantial changes to neural network weights as data
distributions shift. This variability frequently causes catastrophic forgetting
in continual EEG decoding tasks, where previously acquired knowledge is
overwritten as new subjects are introduced. While retraining the entire dataset
for each new subject can mitigate forgetting, this approach imposes significant
computational costs, rendering it impractical for real-world applications.
Therefore, an ideal brain-computer interface (BCI) model should incrementally
learn new information without requiring complete retraining, thereby reducing
computational overhead. Existing EEG decoding methods typically rely on large,
centralized source-domain datasets for pre-training to improve model
generalization. However, in practical scenarios, data availability is often
constrained by privacy concerns. Furthermore, these methods are susceptible to
catastrophic forgetting in continual EEG decoding tasks, significantly limiting
their utility in long-term learning scenarios. To address these issues, we
propose the Personalized Continual EEG Decoding (PCED) framework for continual
EEG decoding. The framework uses Euclidean Alignment for fast domain
adaptation, reducing inter-subject variability. To retain knowledge and
prevent forgetting, it includes an exemplar replay mechanism that preserves key
information from past tasks. A reservoir sampling-based memory management
strategy optimizes exemplar storage to handle memory constraints in long-term
learning. Experiments on the OpenBMI dataset with 54 subjects show that PCED
balances knowledge retention and classification performance, providing an
efficient solution for real-world BCI applications.
|
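Two ingredients named in the PCED abstract above have standard, compact implementations: Euclidean Alignment (whitening each EEG trial by the inverse square root of the session's mean spatial covariance) and a reservoir-sampled exemplar buffer. The sketch below shows both in plain Python/NumPy; the class name ReservoirMemory and the per-trial exemplar granularity are assumptions, and PCED's actual memory-management policy may differ in detail.

```python
import random
import numpy as np

def euclidean_alignment(trials):
    """Whiten each trial by the inverse square root of the mean spatial
    covariance, so aligned trials have an (approximately) identity mean
    covariance. trials: (n_trials, n_channels, n_samples)."""
    R = np.mean([x @ x.T for x in trials], axis=0)
    vals, vecs = np.linalg.eigh(R)
    R_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(np.clip(vals, 1e-12, None))) @ vecs.T
    return np.stack([R_inv_sqrt @ x for x in trials])

class ReservoirMemory:
    """Fixed-size exemplar buffer maintained with reservoir sampling, so every
    trial seen so far is kept with equal probability."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, exemplar):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(exemplar)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = exemplar

# Example: align a dummy session and store its trials in the buffer.
trials = euclidean_alignment(np.random.randn(20, 8, 256))
memory = ReservoirMemory(capacity=10)
for t in trials:
    memory.add(t)
print(len(memory.buffer))  # 10
```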
2411.12355 | Yudong Han | Yudong Han, Qingpei Guo, Liyuan Pan, Liu Liu, Yu Guan, Ming Yang | DynFocus: Dynamic Cooperative Network Empowers LLMs with Video
Understanding | Accepted by CVPR 25 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The challenge in LLM-based video understanding lies in preserving visual and
semantic information in long videos while maintaining a memory-affordable token
count. However, redundancy and correspondence in videos have hindered the
performance potential of existing methods. Through statistical learning on
current datasets, we observe that redundancy occurs in both repeated and
answer-irrelevant frames, and the corresponding frames vary with different
questions. This suggests the possibility of adopting dynamic encoding to
balance detailed video information preservation with token budget reduction. To
this end, we propose a dynamic cooperative network, DynFocus, for
memory-efficient video encoding in this paper. Specifically, it comprises (i) a
Dynamic Event Prototype Estimation (DPE) module that dynamically selects
meaningful frames for question answering, and (ii) a Compact Cooperative
Encoding (CCE) module that
encodes meaningful frames with detailed visual appearance and the remaining
frames with sketchy perception separately. We evaluate our method on five
publicly available benchmarks, and experimental results consistently
demonstrate that our method achieves competitive performance.
| [
{
"version": "v1",
"created": "Tue, 19 Nov 2024 09:16:54 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 10:31:35 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Han",
"Yudong",
""
],
[
"Guo",
"Qingpei",
""
],
[
"Pan",
"Liyuan",
""
],
[
"Liu",
"Liu",
""
],
[
"Guan",
"Yu",
""
],
[
"Yang",
"Ming",
""
]
] | TITLE: DynFocus: Dynamic Cooperative Network Empowers LLMs with Video
Understanding
ABSTRACT: The challenge in LLM-based video understanding lies in preserving visual and
semantic information in long videos while maintaining a memory-affordable token
count. However, redundancy and correspondence in videos have hindered the
performance potential of existing methods. Through statistical learning on
current datasets, we observe that redundancy occurs in both repeated and
answer-irrelevant frames, and the corresponding frames vary with different
questions. This suggests the possibility of adopting dynamic encoding to
balance detailed video information preservation with token budget reduction. To
this end, we propose a dynamic cooperative network, DynFocus, for
memory-efficient video encoding in this paper. Specifically, it comprises (i) a
Dynamic Event Prototype Estimation (DPE) module that dynamically selects
meaningful frames for question answering, and (ii) a Compact Cooperative
Encoding (CCE) module that
encodes meaningful frames with detailed visual appearance and the remaining
frames with sketchy perception separately. We evaluate our method on five
publicly available benchmarks, and experimental results consistently
demonstrate that our method achieves competitive performance.
|
2411.13059 | Rohith Peddi | Rohith Peddi, Saurabh, Ayush Abhay Shrivastava, Parag Singla, Vibhav
Gogate | Towards Unbiased and Robust Spatio-Temporal Scene Graph Generation and
Anticipation | CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Spatio-Temporal Scene Graphs (STSGs) provide a concise and expressive
representation of dynamic scenes by modeling objects and their evolving
relationships over time. However, real-world visual relationships often exhibit
a long-tailed distribution, causing existing methods for tasks like Video Scene
Graph Generation (VidSGG) and Scene Graph Anticipation (SGA) to produce biased
scene graphs. To this end, we propose ImparTail, a novel training framework
that leverages loss masking and curriculum learning to mitigate bias in the
generation and anticipation of spatio-temporal scene graphs. Unlike prior
methods that add extra architectural components to learn unbiased estimators,
we propose an impartial training objective that reduces the dominance of head
classes during learning and focuses on underrepresented tail relationships. Our
curriculum-driven mask generation strategy further empowers the model to
adaptively adjust its bias mitigation strategy over time, enabling more
balanced and robust estimations. To thoroughly assess performance under various
distribution shifts, we also introduce two new tasks, Robust Spatio-Temporal
Scene Graph Generation and Robust Scene Graph Anticipation, offering a
challenging benchmark for evaluating the resilience of STSG models. Extensive
experiments on the Action Genome dataset demonstrate the superior unbiased
performance and robustness of our method compared to existing baselines.
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2024 06:15:28 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 02:19:43 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Peddi",
"Rohith",
""
],
[
"Saurabh",
"",
""
],
[
"Shrivastava",
"Ayush Abhay",
""
],
[
"Singla",
"Parag",
""
],
[
"Gogate",
"Vibhav",
""
]
] | TITLE: Towards Unbiased and Robust Spatio-Temporal Scene Graph Generation and
Anticipation
ABSTRACT: Spatio-Temporal Scene Graphs (STSGs) provide a concise and expressive
representation of dynamic scenes by modeling objects and their evolving
relationships over time. However, real-world visual relationships often exhibit
a long-tailed distribution, causing existing methods for tasks like Video Scene
Graph Generation (VidSGG) and Scene Graph Anticipation (SGA) to produce biased
scene graphs. To this end, we propose ImparTail, a novel training framework
that leverages loss masking and curriculum learning to mitigate bias in the
generation and anticipation of spatio-temporal scene graphs. Unlike prior
methods that add extra architectural components to learn unbiased estimators,
we propose an impartial training objective that reduces the dominance of head
classes during learning and focuses on underrepresented tail relationships. Our
curriculum-driven mask generation strategy further empowers the model to
adaptively adjust its bias mitigation strategy over time, enabling more
balanced and robust estimations. To thoroughly assess performance under various
distribution shifts, we also introduce two new tasks, Robust Spatio-Temporal
Scene Graph Generation and Robust Scene Graph Anticipation, offering a
challenging benchmark for evaluating the resilience of STSG models. Extensive
experiments on the Action Genome dataset demonstrate the superior unbiased
performance and robustness of our method compared to existing baselines.
|
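The ImparTail abstract above describes curriculum-driven loss masking without giving the masking rule, so the sketch below is only one plausible instantiation: as training progresses, loss terms of frequent (head) relationship classes are dropped with increasing probability, while rare (tail) classes are always kept. The function name, the linear head score, and the keep_min floor are assumptions.

```python
import numpy as np

def curriculum_loss_mask(class_freq, epoch, total_epochs, keep_min=0.2, seed=0):
    """Toy curriculum mask over relationship classes.

    class_freq: (C,) training-set frequency of each relationship class.
    Returns a (C,) binary mask applied multiplicatively to per-class losses.
    """
    rng = np.random.default_rng(seed + epoch)
    t = min(1.0, epoch / max(1, total_epochs))               # curriculum progress
    rank = np.argsort(np.argsort(-class_freq))               # 0 = most frequent
    head_score = 1.0 - rank / max(1, len(class_freq) - 1)    # 1 head ... 0 tail
    keep_prob = 1.0 - t * (1.0 - keep_min) * head_score      # head terms fade out
    return (rng.random(len(class_freq)) < keep_prob).astype(np.float32)

# Example: by the last epoch, head-class loss terms are kept only ~20% of the time.
freq = np.array([5000, 1200, 300, 40, 5], dtype=float)
print(curriculum_loss_mask(freq, epoch=50, total_epochs=50))
```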
2411.15553 | Kaisheng Liang | Kaisheng Liang, Xuelong Dai, Yanjie Li, Dong Wang, Bin Xiao | Improving Transferable Targeted Attacks with Feature Tuning Mixup | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks (DNNs) exhibit vulnerability to adversarial examples
that can transfer across different DNN models. A particularly challenging
problem is developing transferable targeted attacks that can mislead DNN models
into predicting specific target classes. While various methods have been
proposed to enhance attack transferability, they often incur substantial
computational costs while yielding limited improvements. Recent clean feature
mixup methods use random clean features to perturb the feature space but lack
optimization for disrupting adversarial examples, overlooking the advantages of
attack-specific perturbations. In this paper, we propose Feature Tuning Mixup
(FTM), a novel method that enhances targeted attack transferability by
combining both random and optimized noises in the feature space. FTM introduces
learnable feature perturbations and employs an efficient stochastic update
strategy for optimization. These learnable perturbations facilitate the
generation of more robust adversarial examples with improved transferability.
We further demonstrate that attack performance can be enhanced through an
ensemble of multiple FTM-perturbed surrogate models. Extensive experiments on
the ImageNet-compatible dataset across various DNN models demonstrate that our
method achieves significant improvements over state-of-the-art methods while
maintaining low computational cost.
| [
{
"version": "v1",
"created": "Sat, 23 Nov 2024 13:18:25 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 07:01:56 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Liang",
"Kaisheng",
""
],
[
"Dai",
"Xuelong",
""
],
[
"Li",
"Yanjie",
""
],
[
"Wang",
"Dong",
""
],
[
"Xiao",
"Bin",
""
]
] | TITLE: Improving Transferable Targeted Attacks with Feature Tuning Mixup
ABSTRACT: Deep neural networks (DNNs) exhibit vulnerability to adversarial examples
that can transfer across different DNN models. A particularly challenging
problem is developing transferable targeted attacks that can mislead DNN models
into predicting specific target classes. While various methods have been
proposed to enhance attack transferability, they often incur substantial
computational costs while yielding limited improvements. Recent clean feature
mixup methods use random clean features to perturb the feature space but lack
optimization for disrupting adversarial examples, overlooking the advantages of
attack-specific perturbations. In this paper, we propose Feature Tuning Mixup
(FTM), a novel method that enhances targeted attack transferability by
combining both random and optimized noises in the feature space. FTM introduces
learnable feature perturbations and employs an efficient stochastic update
strategy for optimization. These learnable perturbations facilitate the
generation of more robust adversarial examples with improved transferability.
We further demonstrate that attack performance can be enhanced through an
ensemble of multiple FTM-perturbed surrogate models. Extensive experiments on
the ImageNet-compatible dataset across various DNN models demonstrate that our
method achieves significant improvements over state-of-the-art methods while
maintaining low computational cost.
|
2411.15927 | Haebin Shin | Haebin Shin, Lei Ji, Yeyun Gong, Sungdong Kim, Eunbi Choi, Minjoon Seo | Generative Prompt Internalization | NAACL 2025 (Main Conference) | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Prompts used in recent large language model based applications are often
fixed and lengthy, leading to significant computational overhead. To address
this challenge, we propose Generative Prompt Internalization (GenPI), a
lightweight method that employs a joint training approach. GenPI not only
replicates the behavior of models with prompt inputs but also generates the
content of the prompt along with the reasons why the model's behavior should
change accordingly. We demonstrate that our approach effectively internalizes
complex prompts across various agent-based application scenarios. For effective
training without interactions with the dedicated environments, we introduce a
data synthesis technique that autonomously collects conversational datasets by
swapping the roles of the agent and environment. This method is especially
useful in scenarios where only a predefined prompt is available without a
corresponding training dataset. By internalizing complex prompts, Generative
Prompt Internalization enables high performance and efficient inference without
the need for explicit prompts.
| [
{
"version": "v1",
"created": "Sun, 24 Nov 2024 17:32:20 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Feb 2025 14:55:26 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 00:38:02 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Shin",
"Haebin",
""
],
[
"Ji",
"Lei",
""
],
[
"Gong",
"Yeyun",
""
],
[
"Kim",
"Sungdong",
""
],
[
"Choi",
"Eunbi",
""
],
[
"Seo",
"Minjoon",
""
]
] | TITLE: Generative Prompt Internalization
ABSTRACT: Prompts used in recent large language model based applications are often
fixed and lengthy, leading to significant computational overhead. To address
this challenge, we propose Generative Prompt Internalization (GenPI), a
lightweight method that employs a joint training approach. GenPI not only
replicates the behavior of models with prompt inputs but also generates the
content of the prompt along with the reasons why the model's behavior should
change accordingly. We demonstrate that our approach effectively internalizes
complex prompts across various agent-based application scenarios. For effective
training without interactions with the dedicated environments, we introduce a
data synthesis technique that autonomously collects conversational datasets by
swapping the roles of the agent and environment. This method is especially
useful in scenarios where only a predefined prompt is available without a
corresponding training dataset. By internalizing complex prompts, Generative
Prompt Internalization enables high performance and efficient inference without
the need for explicit prompts.
|
2411.18335 | Charles Corbi\`ere | Mehdi Zayene, Jannik Endres, Albias Havolli, Charles Corbi\`ere, Salim
Cherkaoui, Alexandre Kontouli, Alexandre Alahi | Helvipad: A Real-World Dataset for Omnidirectional Stereo Depth
Estimation | Accepted to CVPR 2025. Project page:
https://vita-epfl.github.io/Helvipad | null | null | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Despite progress in stereo depth estimation, omnidirectional imaging remains
underexplored, mainly due to the lack of appropriate data. We introduce
Helvipad, a real-world dataset for omnidirectional stereo depth estimation,
featuring 40K video frames from video sequences across diverse environments,
including crowded indoor and outdoor scenes with various lighting conditions.
Collected using two 360{\deg} cameras in a top-bottom setup and a LiDAR sensor,
the dataset includes accurate depth and disparity labels by projecting 3D point
clouds onto equirectangular images. Additionally, we provide an augmented
training set with an increased label density by using depth completion. We
benchmark leading stereo depth estimation models for both standard and
omnidirectional images. The results show that while recent stereo methods
perform decently, a challenge persists in accurately estimating depth in
omnidirectional imaging. To address this, we introduce necessary adaptations to
stereo models, leading to improved performance.
| [
{
"version": "v1",
"created": "Wed, 27 Nov 2024 13:34:41 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 13:57:14 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Zayene",
"Mehdi",
""
],
[
"Endres",
"Jannik",
""
],
[
"Havolli",
"Albias",
""
],
[
"Corbière",
"Charles",
""
],
[
"Cherkaoui",
"Salim",
""
],
[
"Kontouli",
"Alexandre",
""
],
[
"Alahi",
"Alexandre",
""
]
] | TITLE: Helvipad: A Real-World Dataset for Omnidirectional Stereo Depth
Estimation
ABSTRACT: Despite progress in stereo depth estimation, omnidirectional imaging remains
underexplored, mainly due to the lack of appropriate data. We introduce
Helvipad, a real-world dataset for omnidirectional stereo depth estimation,
featuring 40K video frames from video sequences across diverse environments,
including crowded indoor and outdoor scenes with various lighting conditions.
Collected using two 360{\deg} cameras in a top-bottom setup and a LiDAR sensor,
the dataset includes accurate depth and disparity labels by projecting 3D point
clouds onto equirectangular images. Additionally, we provide an augmented
training set with an increased label density by using depth completion. We
benchmark leading stereo depth estimation models for both standard and
omnidirectional images. The results show that while recent stereo methods
perform decently, a challenge persists in accurately estimating depth in
omnidirectional imaging. To address this, we introduce necessary adaptations to
stereo models, leading to improved performance.
|
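The depth labels described in the Helvipad abstract above come from projecting LiDAR points onto equirectangular images. A minimal version of that projection, assuming points already expressed in the camera frame with x right, y down, z forward, is sketched below; the real pipeline additionally handles LiDAR-to-camera extrinsics, the top-bottom stereo rig, and label densification via depth completion, none of which is shown here.

```python
import numpy as np

def project_equirectangular(points, width, height):
    """Map 3D points (camera frame, x right, y down, z forward) to pixel
    coordinates on an equirectangular image and return per-point range."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    theta = np.arctan2(x, z)                                   # azimuth, [-pi, pi]
    phi = np.arcsin(np.clip(y / np.maximum(depth, 1e-9), -1.0, 1.0))  # elevation
    u = ((theta / (2 * np.pi) + 0.5) * width).astype(int) % width
    v = np.clip(((phi / np.pi + 0.5) * height).astype(int), 0, height - 1)
    return u, v, depth

# Example: scatter 1000 random points into a 1920x960 depth map.
pts = np.random.randn(1000, 3) * 5.0
u, v, d = project_equirectangular(pts, width=1920, height=960)
depth_map = np.zeros((960, 1920))
depth_map[v, u] = d
```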
2411.18936 | Meng Tang | Weimin Qiu, Jieke Wang, Meng Tang | Self-Cross Diffusion Guidance for Text-to-Image Synthesis of Similar
Subjects | Conference on Computer Vision and Pattern Recognition (CVPR), 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Diffusion models achieved unprecedented fidelity and diversity for
synthesizing images, videos, 3D assets, etc. However, subject mixing is an
unresolved issue for diffusion-based image synthesis, particularly for
synthesizing multiple similar-looking subjects. We propose Self-Cross Diffusion
Guidance to penalize the overlap between cross-attention maps and the
aggregated self-attention map. Compared to previous methods based on
self-attention or cross-attention alone, our guidance is more effective in
eliminating subject mixing. What's more, our guidance addresses subject mixing
for all relevant patches beyond the most discriminant one, e.g., the beak of a
bird. For each subject, we aggregate self-attention maps of patches with higher
cross-attention values. Thus, the aggregated self-attention map forms a region
that the whole subject attends to. Our training-free method boosts the
performance of both Unet-based and Transformer-based diffusion models such as
the Stable Diffusion series. We also release a similar subjects dataset (SSD),
a challenging benchmark, and utilize GPT-4o for automatic and reliable
evaluation. Extensive qualitative and quantitative results demonstrate the
effectiveness of our self-cross diffusion guidance.
| [
{
"version": "v1",
"created": "Thu, 28 Nov 2024 05:58:03 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 19:58:03 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Qiu",
"Weimin",
""
],
[
"Wang",
"Jieke",
""
],
[
"Tang",
"Meng",
""
]
] | TITLE: Self-Cross Diffusion Guidance for Text-to-Image Synthesis of Similar
Subjects
ABSTRACT: Diffusion models achieved unprecedented fidelity and diversity for
synthesizing images, videos, 3D assets, etc. However, subject mixing is an
unresolved issue for diffusion-based image synthesis, particularly for
synthesizing multiple similar-looking subjects. We propose Self-Cross Diffusion
Guidance to penalize the overlap between cross-attention maps and the
aggregated self-attention map. Compared to previous methods based on
self-attention or cross-attention alone, our guidance is more effective in
eliminating subject mixing. What's more, our guidance addresses subject mixing
for all relevant patches beyond the most discriminant one, e.g., the beak of a
bird. For each subject, we aggregate self-attention maps of patches with higher
cross-attention values. Thus, the aggregated self-attention map forms a region
that the whole subject attends to. Our training-free method boosts the
performance of both Unet-based and Transformer-based diffusion models such as
the Stable Diffusion series. We also release a similar subjects dataset (SSD),
a challenging benchmark, and utilize GPT-4o for automatic and reliable
evaluation. Extensive qualitative and quantitative results demonstrate the
effectiveness of our self-cross diffusion guidance.
|
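The core quantity in the Self-Cross guidance abstract above is an overlap between one subject's cross-attention map and another subject's aggregated self-attention map. The PyTorch sketch below illustrates that computation for two subject tokens; the top-k patch selection, the weighting by cross-attention values, and the inner-product overlap are assumptions, and the paper's actual aggregation and penalty may differ.

```python
import torch

def aggregated_self_attention(cross_attn, self_attn, top_k=16):
    """Aggregate self-attention rows of the patches that attend most strongly
    to a subject token.
    cross_attn: (P,) cross-attention of one subject token over P patches.
    self_attn : (P, P) patch-to-patch self-attention."""
    idx = torch.topk(cross_attn, k=min(top_k, cross_attn.numel())).indices
    weights = cross_attn[idx] / cross_attn[idx].sum()
    return (weights[:, None] * self_attn[idx]).sum(dim=0)          # (P,)

def self_cross_overlap(cross_a, cross_b, self_attn):
    """Penalty encouraging subject A's cross-attention to stay off the region
    that subject B (as a whole) attends to, and vice versa."""
    agg_a = aggregated_self_attention(cross_a, self_attn)
    agg_b = aggregated_self_attention(cross_b, self_attn)
    return (cross_a * agg_b).sum() + (cross_b * agg_a).sum()

# Example with random attention maps over a 32x32 = 1024 patch grid.
P = 1024
self_attn = torch.softmax(torch.randn(P, P), dim=-1)
cross_a = torch.softmax(torch.randn(P), dim=0)
cross_b = torch.softmax(torch.randn(P), dim=0)
print(self_cross_overlap(cross_a, cross_b, self_attn))
```

In a training-free guidance setting, a scalar like this would typically be differentiated with respect to the latent and added to the usual sampling update.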
2411.19122 | Davide Carbone | Alessandro Licciardi (1 and 2), Davide Carbone (4), Lamberto Rondoni
(1 and 2) and Alessandro Nagar (2 and 3) ((1) DISMA, Politecnico di Torino,
(2) INFN, Sezione di Torino, (3) Institut des Hautes Etudes Scientifiques,
(4) Laboratoire de Physique de l'Ecole Normale Superi\`eure, ENS Universit\`e
PSL) | Wavelet Scattering Transform for Gravitational Waves Analysis. An
Application to Glitch Characterization | null | null | null | null | gr-qc astro-ph.IM physics.data-an | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Gravitational waves, first predicted by Albert Einstein within the framework
of general relativity, were confirmed in 2015 by the LIGO/Virgo collaboration,
marking a pivotal breakthrough in astrophysics. Despite this achievement, a key
challenge remains in distinguishing true gravitational wave signals from noise
artifacts, or "glitches," which can distort data and affect the quality of
observations. Current state-of-the-art methods, such as the Q-transform, are
widely used for signal processing, but face limitations when addressing certain
types of signals. In this study, we investigate the Wavelet Scattering
Transform (WST), a recent signal analysis method, as a complementary approach.
Theoretical motivation for WST arises from its stability under signal
deformations and its equivariance properties, which make it particularly suited
for the complex nature of gravitational wave data. Our experiments on the LIGO
O1a dataset show that WST simplifies classification tasks and enables the use
of more efficient architectures compared to traditional methods. Furthermore,
we explore the potential benefits of integrating WST with the Q-transform,
demonstrating that ensemble methods exploiting both techniques can capture
complementary features of the signal and improve overall performance. This work
contributes to advancing machine learning applications in gravitational wave
analysis, introducing refined preprocessing techniques that improve signal
detection and classification.
| [
{
"version": "v1",
"created": "Thu, 28 Nov 2024 13:12:32 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 10:52:36 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Licciardi",
"Alessandro",
"",
"1 and 2"
],
[
"Carbone",
"Davide",
"",
"1 and 2"
],
[
"Rondoni",
"Lamberto",
"",
"1 and 2"
],
[
"Nagar",
"Alessandro",
"",
"2 and 3"
]
] | TITLE: Wavelet Scattering Transform for Gravitational Waves Analysis. An
Application to Glitch Characterization
ABSTRACT: Gravitational waves, first predicted by Albert Einstein within the framework
of general relativity, were confirmed in 2015 by the LIGO/Virgo collaboration,
marking a pivotal breakthrough in astrophysics. Despite this achievement, a key
challenge remains in distinguishing true gravitational wave signals from noise
artifacts, or "glitches," which can distort data and affect the quality of
observations. Current state-of-the-art methods, such as the Q-transform, are
widely used for signal processing, but face limitations when addressing certain
types of signals. In this study, we investigate the Wavelet Scattering
Transform (WST), a recent signal analysis method, as a complementary approach.
Theoretical motivation for WST arises from its stability under signal
deformations and its equivariance properties, which make it particularly suited
for the complex nature of gravitational wave data. Our experiments on the LIGO
O1a dataset show that WST simplifies classification tasks and enables the use
of more efficient architectures compared to traditional methods. Furthermore,
we explore the potential benefits of integrating WST with the Q-transform,
demonstrating that ensemble methods exploiting both techniques can capture
complementary features of the signal and improve overall performance. This work
contributes to advancing machine learning applications in gravitational wave
analysis, introducing refined preprocessing techniques that improve signal
detection and classification.
|
2412.00171 | Weixin Mao | Weixin Mao, Weiheng Zhong, Zhou Jiang, Dong Fang, Zhongyue Zhang,
Zihan Lan, Haosheng Li, Fan Jia, Tiancai Wang, Haoqiang Fan, Osamu Yoshie | RoboMatrix: A Skill-centric Hierarchical Framework for Scalable Robot
Task Planning and Execution in Open-World | 17 pages, 16 figures | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing robot policies predominantly adopt the task-centric approach,
requiring end-to-end task data collection. This results in limited
generalization to new tasks and difficulties in pinpointing errors within
long-horizon, multi-stage tasks. To address this, we propose RoboMatrix, a
skill-centric hierarchical framework designed for scalable robot task planning
and execution in open-world environments. RoboMatrix extracts general
meta-skills from diverse complex tasks, enabling the completion of unseen tasks
through skill composition. Its architecture consists of a high-level scheduling
layer that utilizes large language models (LLMs) for task decomposition, an
intermediate skill layer housing meta-skill models, and a low-level hardware
layer for robot control. A key innovation of our work is the introduction of
the first unified vision-language-action (VLA) model capable of seamlessly
integrating both movement and manipulation within one model. This is achieved
by combining vision and language prompts to generate discrete actions.
Experimental results demonstrate that RoboMatrix achieves a 50% higher success
rate than task-centric baselines when applied to unseen objects, scenes, and
tasks. To advance open-world robotics research, we will open-source code,
hardware designs, model weights, and datasets at
https://github.com/WayneMao/RoboMatrix.
| [
{
"version": "v1",
"created": "Fri, 29 Nov 2024 17:36:03 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Dec 2024 10:02:45 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 09:43:25 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Mao",
"Weixin",
""
],
[
"Zhong",
"Weiheng",
""
],
[
"Jiang",
"Zhou",
""
],
[
"Fang",
"Dong",
""
],
[
"Zhang",
"Zhongyue",
""
],
[
"Lan",
"Zihan",
""
],
[
"Li",
"Haosheng",
""
],
[
"Jia",
"Fan",
""
],
[
"Wang",
"Tiancai",
""
],
[
"Fan",
"Haoqiang",
""
],
[
"Yoshie",
"Osamu",
""
]
] | TITLE: RoboMatrix: A Skill-centric Hierarchical Framework for Scalable Robot
Task Planning and Execution in Open-World
ABSTRACT: Existing robot policies predominantly adopt the task-centric approach,
requiring end-to-end task data collection. This results in limited
generalization to new tasks and difficulties in pinpointing errors within
long-horizon, multi-stage tasks. To address this, we propose RoboMatrix, a
skill-centric hierarchical framework designed for scalable robot task planning
and execution in open-world environments. RoboMatrix extracts general
meta-skills from diverse complex tasks, enabling the completion of unseen tasks
through skill composition. Its architecture consists of a high-level scheduling
layer that utilizes large language models (LLMs) for task decomposition, an
intermediate skill layer housing meta-skill models, and a low-level hardware
layer for robot control. A key innovation of our work is the introduction of
the first unified vision-language-action (VLA) model capable of seamlessly
integrating both movement and manipulation within one model. This is achieved
by combining vision and language prompts to generate discrete actions.
Experimental results demonstrate that RoboMatrix achieves a 50% higher success
rate than task-centric baselines when applied to unseen objects, scenes, and
tasks. To advance open-world robotics research, we will open-source code,
hardware designs, model weights, and datasets at
https://github.com/WayneMao/RoboMatrix.
|
2412.00578 | Alex Hanson | Alex Hanson, Allen Tu, Geng Lin, Vasu Singla, Matthias Zwicker, Tom
Goldstein | Speedy-Splat: Fast 3D Gaussian Splatting with Sparse Pixels and Sparse
Primitives | CVPR 2025, Project Page: https://speedysplat.github.io/ | null | null | null | cs.CV cs.GR | http://creativecommons.org/licenses/by/4.0/ | 3D Gaussian Splatting (3D-GS) is a recent 3D scene reconstruction technique
that enables real-time rendering of novel views by modeling scenes as
parametric point clouds of differentiable 3D Gaussians. However, its rendering
speed and model size still present bottlenecks, especially in
resource-constrained settings. In this paper, we identify and address two key
inefficiencies in 3D-GS to substantially improve rendering speed. These
improvements also yield the ancillary benefits of reduced model size and
training time. First, we optimize the rendering pipeline to precisely localize
Gaussians in the scene, boosting rendering speed without altering visual
fidelity. Second, we introduce a novel pruning technique and integrate it into
the training pipeline, significantly reducing model size and training time
while further raising rendering speed. Our Speedy-Splat approach combines these
techniques to accelerate average rendering speed by a drastic
$\mathit{6.71\times}$ across scenes from the Mip-NeRF 360, Tanks & Temples, and
Deep Blending datasets.
| [
{
"version": "v1",
"created": "Sat, 30 Nov 2024 20:25:56 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 20:30:29 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Hanson",
"Alex",
""
],
[
"Tu",
"Allen",
""
],
[
"Lin",
"Geng",
""
],
[
"Singla",
"Vasu",
""
],
[
"Zwicker",
"Matthias",
""
],
[
"Goldstein",
"Tom",
""
]
] | TITLE: Speedy-Splat: Fast 3D Gaussian Splatting with Sparse Pixels and Sparse
Primitives
ABSTRACT: 3D Gaussian Splatting (3D-GS) is a recent 3D scene reconstruction technique
that enables real-time rendering of novel views by modeling scenes as
parametric point clouds of differentiable 3D Gaussians. However, its rendering
speed and model size still present bottlenecks, especially in
resource-constrained settings. In this paper, we identify and address two key
inefficiencies in 3D-GS to substantially improve rendering speed. These
improvements also yield the ancillary benefits of reduced model size and
training time. First, we optimize the rendering pipeline to precisely localize
Gaussians in the scene, boosting rendering speed without altering visual
fidelity. Second, we introduce a novel pruning technique and integrate it into
the training pipeline, significantly reducing model size and training time
while further raising rendering speed. Our Speedy-Splat approach combines these
techniques to accelerate average rendering speed by a drastic
$\mathit{6.71\times}$ across scenes from the Mip-NeRF 360, Tanks & Temples, and
Deep Blending datasets.
|
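The Speedy-Splat abstract above does not reveal the pruning criterion, so the snippet below only illustrates the generic score-and-threshold pattern such a pruning step follows: rank Gaussians by a cheap importance proxy and keep a fixed fraction. The opacity-times-size proxy and the keep_fraction parameter are assumptions, not the paper's method.

```python
import numpy as np

def prune_gaussians(opacity, scales, keep_fraction=0.5):
    """Keep the top fraction of Gaussians under a crude importance proxy.

    opacity: (N,) activated opacities in [0, 1].
    scales : (N, 3) per-axis scales of each Gaussian.
    Returns a boolean keep-mask of shape (N,).
    """
    importance = opacity * scales.prod(axis=1) ** (1.0 / 3.0)  # opacity x geometric-mean scale
    k = max(1, int(keep_fraction * len(opacity)))
    threshold = np.partition(importance, -k)[-k]               # k-th largest score
    return importance >= threshold

# Example: prune a random set of 10k Gaussians down to roughly half.
keep = prune_gaussians(np.random.rand(10_000), np.abs(np.random.randn(10_000, 3)))
print(keep.sum())
```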
2412.00759 | Xin Xie | Xin Xie and Dong Gong | DyMO: Training-Free Diffusion Model Alignment with Dynamic
Multi-Objective Scheduling | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-to-image diffusion model alignment is critical for improving the
alignment between the generated images and human preferences. While
training-based methods are constrained by high computational costs and dataset
requirements, training-free alignment methods remain underexplored and are
often limited by inaccurate guidance. We propose a plug-and-play training-free
alignment method, DyMO, for aligning the generated images and human preferences
during inference. Apart from text-aware human preference scores, we introduce a
semantic alignment objective for enhancing the semantic alignment in the early
stages of diffusion, relying on the fact that the attention maps are effective
reflections of the semantics in noisy images. We propose dynamic scheduling of
multiple objectives and intermediate recurrent steps to reflect the
requirements at different steps. Experiments with diverse pre-trained diffusion
models and metrics demonstrate the effectiveness and robustness of the proposed
method.
| [
{
"version": "v1",
"created": "Sun, 1 Dec 2024 10:32:47 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Dec 2024 04:00:09 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 08:53:39 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Xie",
"Xin",
""
],
[
"Gong",
"Dong",
""
]
] | TITLE: DyMO: Training-Free Diffusion Model Alignment with Dynamic
Multi-Objective Scheduling
ABSTRACT: Text-to-image diffusion model alignment is critical for improving the
alignment between the generated images and human preferences. While
training-based methods are constrained by high computational costs and dataset
requirements, training-free alignment methods remain underexplored and are
often limited by inaccurate guidance. We propose a plug-and-play training-free
alignment method, DyMO, for aligning the generated images and human preferences
during inference. Apart from text-aware human preference scores, we introduce a
semantic alignment objective for enhancing the semantic alignment in the early
stages of diffusion, relying on the fact that the attention maps are effective
reflections of the semantics in noisy images. We propose dynamic scheduling of
multiple objectives and intermediate recurrent steps to reflect the
requirements at different steps. Experiments with diverse pre-trained diffusion
models and metrics demonstrate the effectiveness and robustness of the proposed
method.
|
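DyMO's dynamic scheduling of multiple objectives can be pictured as step-dependent weights on the guidance terms. The toy schedule below emphasizes the semantic-alignment objective early in denoising and the text-aware preference score later; the sigmoid ramp and the sharpness parameter are assumptions, since the abstract does not give the actual schedule.

```python
import numpy as np

def objective_weights(step, total_steps, sharpness=6.0):
    """Return (w_semantic, w_preference) for a given denoising step: semantic
    alignment dominates early (very noisy samples), the preference score later."""
    progress = step / max(1, total_steps - 1)            # 0 at start, 1 at end
    w_pref = 1.0 / (1.0 + np.exp(-sharpness * (progress - 0.5)))
    return 1.0 - w_pref, w_pref

# Weights at the beginning, middle, and end of 50 denoising steps.
for s in (0, 25, 49):
    print(s, objective_weights(s, 50))
```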
2412.01987 | Tom\'a\v{s} Sou\v{c}ek | Tom\'a\v{s} Sou\v{c}ek, Prajwal Gatti, Michael Wray, Ivan Laptev, Dima
Damen, Josef Sivic | ShowHowTo: Generating Scene-Conditioned Step-by-Step Visual Instructions | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this work is to generate step-by-step visual instructions in the
form of a sequence of images, given an input image that provides the scene
context and the sequence of textual instructions. This is a challenging problem
as it requires generating multi-step image sequences to achieve a complex goal
while being grounded in a specific environment. Part of the challenge stems
from the lack of large-scale training data for this problem. The contribution
of this work is thus three-fold. First, we introduce an automatic approach for
collecting large step-by-step visual instruction training data from
instructional videos. We apply this approach to one million videos and create a
large-scale, high-quality dataset of 0.6M sequences of image-text pairs.
Second, we develop and train ShowHowTo, a video diffusion model capable of
generating step-by-step visual instructions consistent with the provided input
image. Third, we evaluate the generated image sequences across three dimensions
of accuracy (step, scene, and task) and show our model achieves
state-of-the-art results on all of them. Our code, dataset, and trained models
are publicly available.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 21:40:17 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 19:50:08 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Souček",
"Tomáš",
""
],
[
"Gatti",
"Prajwal",
""
],
[
"Wray",
"Michael",
""
],
[
"Laptev",
"Ivan",
""
],
[
"Damen",
"Dima",
""
],
[
"Sivic",
"Josef",
""
]
] | TITLE: ShowHowTo: Generating Scene-Conditioned Step-by-Step Visual Instructions
ABSTRACT: The goal of this work is to generate step-by-step visual instructions in the
form of a sequence of images, given an input image that provides the scene
context and the sequence of textual instructions. This is a challenging problem
as it requires generating multi-step image sequences to achieve a complex goal
while being grounded in a specific environment. Part of the challenge stems
from the lack of large-scale training data for this problem. The contribution
of this work is thus three-fold. First, we introduce an automatic approach for
collecting large step-by-step visual instruction training data from
instructional videos. We apply this approach to one million videos and create a
large-scale, high-quality dataset of 0.6M sequences of image-text pairs.
Second, we develop and train ShowHowTo, a video diffusion model capable of
generating step-by-step visual instructions consistent with the provided input
image. Third, we evaluate the generated image sequences across three dimensions
of accuracy (step, scene, and task) and show our model achieves
state-of-the-art results on all of them. Our code, dataset, and trained models
are publicly available.
|
2412.02734 | Zhaofeng Hu | Zhaofeng Hu, Sifan Zhou, Shibo Zhao, Zhihang Yuan, Ci-Jyun Liang | MVCTrack: Boosting 3D Point Cloud Tracking via Multimodal-Guided Virtual
Cues | Accepted by ICRA 2025 | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | 3D single object tracking is essential in autonomous driving and robotics.
Existing methods often struggle with sparse and incomplete point cloud
scenarios. To address these limitations, we propose a Multimodal-guided Virtual
Cues Projection (MVCP) scheme that generates virtual cues to enrich sparse
point clouds. Additionally, we introduce an enhanced tracker MVCTrack based on
the generated virtual cues. Specifically, the MVCP scheme seamlessly integrates
RGB sensors into LiDAR-based systems, leveraging a set of 2D detections to
create dense 3D virtual cues that significantly improve the sparsity of point
clouds. These virtual cues can naturally integrate with existing LiDAR-based 3D
trackers, yielding substantial performance gains. Extensive experiments
demonstrate that our method achieves competitive performance on the NuScenes
dataset.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2024 18:18:33 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Dec 2024 06:17:48 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 14:21:17 GMT"
},
{
"version": "v4",
"created": "Mon, 24 Mar 2025 23:48:06 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Hu",
"Zhaofeng",
""
],
[
"Zhou",
"Sifan",
""
],
[
"Zhao",
"Shibo",
""
],
[
"Yuan",
"Zhihang",
""
],
[
"Liang",
"Ci-Jyun",
""
]
] | TITLE: MVCTrack: Boosting 3D Point Cloud Tracking via Multimodal-Guided Virtual
Cues
ABSTRACT: 3D single object tracking is essential in autonomous driving and robotics.
Existing methods often struggle with sparse and incomplete point cloud
scenarios. To address these limitations, we propose a Multimodal-guided Virtual
Cues Projection (MVCP) scheme that generates virtual cues to enrich sparse
point clouds. Additionally, we introduce an enhanced tracker MVCTrack based on
the generated virtual cues. Specifically, the MVCP scheme seamlessly integrates
RGB sensors into LiDAR-based systems, leveraging a set of 2D detections to
create dense 3D virtual cues that significantly improve the sparsity of point
clouds. These virtual cues can naturally integrate with existing LiDAR-based 3D
trackers, yielding substantial performance gains. Extensive experiments
demonstrate that our method achieves competitive performance on the NuScenes
dataset.
|
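The MVCP scheme above lifts 2D detections into dense 3D virtual cues. The sketch below shows the geometric core of such a lift: back-projecting a pixel grid inside a 2D box through the pinhole intrinsics at several candidate depths. In the paper the depth presumably comes from the LiDAR/detection pipeline rather than a fixed list, so the depth samples, grid density, and function name here are placeholders.

```python
import numpy as np

def virtual_cues_from_box(box, K, depths, pixels_per_side=8):
    """Back-project a pixel grid inside a 2D box at several candidate depths.

    box   : (u_min, v_min, u_max, v_max) in pixels.
    K     : (3, 3) camera intrinsics.
    depths: iterable of candidate depths along the optical axis.
    Returns (M, 3) virtual points in the camera frame.
    """
    u0, v0, u1, v1 = box
    uu, vv = np.meshgrid(np.linspace(u0, u1, pixels_per_side),
                         np.linspace(v0, v1, pixels_per_side))
    pix = np.stack([uu.ravel(), vv.ravel(), np.ones(uu.size)], axis=0)  # (3, P)
    rays = np.linalg.inv(K) @ pix                                       # (3, P)
    return np.concatenate([d * rays.T for d in depths], axis=0)

# Example: one detection back-projected at three hypothesized depths.
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
cues = virtual_cues_from_box((600, 300, 700, 420), K, depths=(8.0, 10.0, 12.0))
print(cues.shape)  # (192, 3)
```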
2412.03146 | Huai Yu | Huai Yu, Junhao Wang, Yao He, Wen Yang, Gui-Song Xia | MCVO: A Generic Visual Odometry for Arbitrarily Arranged Multi-Cameras | 8 pages, 8 figures | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Making multi-camera visual SLAM systems easier to set up and more robust to
the environment is attractive for vision robots. Existing monocular and
binocular vision SLAM systems have narrow sensing Field-of-View (FoV),
resulting in degenerated accuracy and limited robustness in textureless
environments. Thus multi-camera SLAM systems are gaining attention because they
can provide redundancy with much wider FoV. However, the usual arbitrary
placement and orientation of multiple cameras make the pose scale estimation
and system updating challenging. To address these problems, we propose a robust
visual odometry system for rigidly-bundled arbitrarily-arranged multi-cameras,
namely MCVO, which can achieve metric-scale state estimation with high
flexibility in the cameras' arrangement. Specifically, we first design a
learning-based feature tracking framework to shift the pressure of CPU
processing of multiple video streams to GPU. Then we initialize the odometry
system with the metric-scale poses under the rigid constraints between moving
cameras. Finally, we fuse the features of the multi-cameras in the back-end to
achieve robust pose estimation and online scale optimization. Additionally,
multi-camera features help improve the loop detection for pose graph
optimization. Experiments on KITTI-360 and MultiCamData datasets validate its
robustness over arbitrarily arranged cameras. Compared with other stereo and
multi-camera visual SLAM systems, our method obtains higher pose accuracy with
better generalization ability. Our codes and online demos are available at
https://github.com/JunhaoWang615/MCVO
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2024 09:13:03 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 08:52:12 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Yu",
"Huai",
""
],
[
"Wang",
"Junhao",
""
],
[
"He",
"Yao",
""
],
[
"Yang",
"Wen",
""
],
[
"Xia",
"Gui-Song",
""
]
] | TITLE: MCVO: A Generic Visual Odometry for Arbitrarily Arranged Multi-Cameras
ABSTRACT: Making multi-camera visual SLAM systems easier to set up and more robust to
the environment is attractive for vision robots. Existing monocular and
binocular vision SLAM systems have narrow sensing Field-of-View (FoV),
resulting in degenerated accuracy and limited robustness in textureless
environments. Thus multi-camera SLAM systems are gaining attention because they
can provide redundancy with much wider FoV. However, the usual arbitrary
placement and orientation of multiple cameras make the pose scale estimation
and system updating challenging. To address these problems, we propose a robust
visual odometry system for rigidly-bundled arbitrarily-arranged multi-cameras,
namely MCVO, which can achieve metric-scale state estimation with high
flexibility in the cameras' arrangement. Specifically, we first design a
learning-based feature tracking framework to shift the pressure of CPU
processing of multiple video streams to GPU. Then we initialize the odometry
system with the metric-scale poses under the rigid constraints between moving
cameras. Finally, we fuse the features of the multi-cameras in the back-end to
achieve robust pose estimation and online scale optimization. Additionally,
multi-camera features help improve the loop detection for pose graph
optimization. Experiments on KITTI-360 and MultiCamData datasets validate its
robustness over arbitrarily arranged cameras. Compared with other stereo and
multi-camera visual SLAM systems, our method obtains higher pose accuracy with
better generalization ability. Our codes and online demos are available at
https://github.com/JunhaoWang615/MCVO
|
2412.07626 | Bin Wang | Linke Ouyang, Yuan Qu, Hongbin Zhou, Jiawei Zhu, Rui Zhang, Qunshu
Lin, Bin Wang, Zhiyuan Zhao, Man Jiang, Xiaomeng Zhao, Jin Shi, Fan Wu, Pei
Chu, Minghao Liu, Zhenxiang Li, Chao Xu, Bo Zhang, Botian Shi, Zhongying Tu,
Conghui He | OmniDocBench: Benchmarking Diverse PDF Document Parsing with
Comprehensive Annotations | Accepted by CVPR2025 | null | null | null | cs.CV cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Document content extraction is a critical task in computer vision,
underpinning the data needs of large language models (LLMs) and
retrieval-augmented generation (RAG) systems. Despite recent progress, current
document parsing methods have not been fairly and comprehensively evaluated due
to the narrow coverage of document types and the simplified, unrealistic
evaluation procedures in existing benchmarks. To address these gaps, we
introduce OmniDocBench, a novel benchmark featuring high-quality annotations
across nine document sources, including academic papers, textbooks, and more
challenging cases such as handwritten notes and densely typeset newspapers.
OmniDocBench supports flexible, multi-level evaluations, ranging from
end-to-end assessment to task-specific and attribute-based analysis using
19 layout categories and 15 attribute labels. We conduct a thorough evaluation
of both pipeline-based methods and end-to-end vision-language models, revealing
their strengths and weaknesses across different document types. OmniDocBench
sets a new standard for the fair, diverse, and fine-grained evaluation in
document parsing. Dataset and code are available at
https://github.com/opendatalab/OmniDocBench.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 16:05:56 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 06:19:32 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Ouyang",
"Linke",
""
],
[
"Qu",
"Yuan",
""
],
[
"Zhou",
"Hongbin",
""
],
[
"Zhu",
"Jiawei",
""
],
[
"Zhang",
"Rui",
""
],
[
"Lin",
"Qunshu",
""
],
[
"Wang",
"Bin",
""
],
[
"Zhao",
"Zhiyuan",
""
],
[
"Jiang",
"Man",
""
],
[
"Zhao",
"Xiaomeng",
""
],
[
"Shi",
"Jin",
""
],
[
"Wu",
"Fan",
""
],
[
"Chu",
"Pei",
""
],
[
"Liu",
"Minghao",
""
],
[
"Li",
"Zhenxiang",
""
],
[
"Xu",
"Chao",
""
],
[
"Zhang",
"Bo",
""
],
[
"Shi",
"Botian",
""
],
[
"Tu",
"Zhongying",
""
],
[
"He",
"Conghui",
""
]
] | TITLE: OmniDocBench: Benchmarking Diverse PDF Document Parsing with
Comprehensive Annotations
ABSTRACT: Document content extraction is a critical task in computer vision,
underpinning the data needs of large language models (LLMs) and
retrieval-augmented generation (RAG) systems. Despite recent progress, current
document parsing methods have not been fairly and comprehensively evaluated due
to the narrow coverage of document types and the simplified, unrealistic
evaluation procedures in existing benchmarks. To address these gaps, we
introduce OmniDocBench, a novel benchmark featuring high-quality annotations
across nine document sources, including academic papers, textbooks, and more
challenging cases such as handwritten notes and densely typeset newspapers.
OmniDocBench supports flexible, multi-level evaluations, ranging from
end-to-end assessment to task-specific and attribute-based analysis using
19 layout categories and 15 attribute labels. We conduct a thorough evaluation
of both pipeline-based methods and end-to-end vision-language models, revealing
their strengths and weaknesses across different document types. OmniDocBench
sets a new standard for the fair, diverse, and fine-grained evaluation in
document parsing. Dataset and code are available at
https://github.com/opendatalab/OmniDocBench.
|
2412.07761 | Jingxi Chen | Jingxi Chen, Brandon Y. Feng, Haoming Cai, Tianfu Wang, Levi Burner,
Dehao Yuan, Cornelia Fermuller, Christopher A. Metzler, Yiannis Aloimonos | Repurposing Pre-trained Video Diffusion Models for Event-based Video
Interpolation | Accepted to CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video Frame Interpolation aims to recover realistic missing frames between
observed frames, generating a high-frame-rate video from a low-frame-rate
video. However, without additional guidance, the large motion between frames
makes this problem ill-posed. Event-based Video Frame Interpolation (EVFI)
addresses this challenge by using sparse, high-temporal-resolution event
measurements as motion guidance. This guidance allows EVFI methods to
significantly outperform frame-only methods. However, to date, EVFI methods
have relied on a limited set of paired event-frame training data, severely
limiting their performance and generalization capabilities. In this work, we
overcome the limited data challenge by adapting pre-trained video diffusion
models trained on internet-scale datasets to EVFI. We experimentally validate
our approach on real-world EVFI datasets, including a new one that we
introduce. Our method outperforms existing methods and generalizes across
cameras far better than existing approaches.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 18:55:30 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 17:58:16 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Chen",
"Jingxi",
""
],
[
"Feng",
"Brandon Y.",
""
],
[
"Cai",
"Haoming",
""
],
[
"Wang",
"Tianfu",
""
],
[
"Burner",
"Levi",
""
],
[
"Yuan",
"Dehao",
""
],
[
"Fermuller",
"Cornelia",
""
],
[
"Metzler",
"Christopher A.",
""
],
[
"Aloimonos",
"Yiannis",
""
]
] | TITLE: Repurposing Pre-trained Video Diffusion Models for Event-based Video
Interpolation
ABSTRACT: Video Frame Interpolation aims to recover realistic missing frames between
observed frames, generating a high-frame-rate video from a low-frame-rate
video. However, without additional guidance, the large motion between frames
makes this problem ill-posed. Event-based Video Frame Interpolation (EVFI)
addresses this challenge by using sparse, high-temporal-resolution event
measurements as motion guidance. This guidance allows EVFI methods to
significantly outperform frame-only methods. However, to date, EVFI methods
have relied on a limited set of paired event-frame training data, severely
limiting their performance and generalization capabilities. In this work, we
overcome the limited data challenge by adapting pre-trained video diffusion
models trained on internet-scale datasets to EVFI. We experimentally validate
our approach on real-world EVFI datasets, including a new one that we
introduce. Our method outperforms existing methods and generalizes across
cameras far better than existing approaches.
|
2412.10084 | Briac Toussaint | Briac Toussaint, Diego Thomas, Jean-S\'ebastien Franco | ProbeSDF: Light Field Probes for Neural Surface Reconstruction | 10 pages, 10 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | SDF-based differential rendering frameworks have achieved state-of-the-art
multiview 3D shape reconstruction. In this work, we re-examine this family of
approaches by minimally reformulating its core appearance model in a way that
simultaneously yields faster computation and increased performance. To this
end, we exhibit a physically-inspired minimal radiance parametrization
decoupling angular and spatial contributions, by encoding them with a small
number of features stored in two respective volumetric grids of different
resolutions. Requiring as little as four parameters per voxel, and a tiny MLP
call inside a single fully fused kernel, our approach allows us to enhance
performance with both surface and image (PSNR) metrics, while providing a
significant training speedup and real-time rendering. We show this performance
to be consistently achieved on real data over two widely different and popular
application fields, generic object and human subject shape reconstruction,
using four representative and challenging datasets.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2024 12:18:26 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 12:37:14 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Toussaint",
"Briac",
""
],
[
"Thomas",
"Diego",
""
],
[
"Franco",
"Jean-Sébastien",
""
]
] | TITLE: ProbeSDF: Light Field Probes for Neural Surface Reconstruction
ABSTRACT: SDF-based differential rendering frameworks have achieved state-of-the-art
multiview 3D shape reconstruction. In this work, we re-examine this family of
approaches by minimally reformulating its core appearance model in a way that
simultaneously yields faster computation and increased performance. To this
end, we exhibit a physically-inspired minimal radiance parametrization
decoupling angular and spatial contributions, by encoding them with a small
number of features stored in two respective volumetric grids of different
resolutions. Requiring as little as four parameters per voxel, and a tiny MLP
call inside a single fully fused kernel, our approach allows us to enhance
performance with both surface and image (PSNR) metrics, while providing a
significant training speedup and real-time rendering. We show this performance
to be consistently achieved on real data over two widely different and popular
application fields, generic object and human subject shape reconstruction,
using four representative and challenging datasets.
|
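The ProbeSDF appearance model above decouples angular and spatial contributions, each encoded by a handful of features, and fuses them with a tiny MLP. The PyTorch sketch below mirrors only that fusion step; the two volumetric grids, their interpolation, and the fully fused kernel mentioned in the abstract are abstracted away, and the feature dimensions and hidden width are arbitrary choices rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class DecoupledRadiance(nn.Module):
    """Tiny radiance head: spatial features (nominally sampled from a fine
    grid) and angular features (nominally from a coarser directional grid)
    are concatenated and mapped to RGB by a small MLP."""
    def __init__(self, spatial_dim=4, angular_dim=4, hidden=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(spatial_dim + angular_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 3),
            nn.Sigmoid(),                      # RGB in [0, 1]
        )

    def forward(self, spatial_feat, angular_feat):
        return self.mlp(torch.cat([spatial_feat, angular_feat], dim=-1))

# Example: predicted colors for a batch of 1024 surface samples.
model = DecoupledRadiance()
rgb = model(torch.randn(1024, 4), torch.randn(1024, 4))
print(rgb.shape)  # torch.Size([1024, 3])
```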
2412.10308 | Yan Xia | Yan Xia, Yunxiang Lu, Rui Song, Oussema Dhaouadi, Jo\~ao F. Henriques,
Daniel Cremers | TrafficLoc: Localizing Traffic Surveillance Cameras in 3D Scenes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We tackle the problem of localizing traffic cameras within a 3D reference map
and propose a novel image-to-point cloud registration (I2P) method, TrafficLoc,
in a coarse-to-fine matching fashion. To overcome the lack of large-scale
real-world intersection datasets, we first introduce Carla Intersection, a new
simulated dataset with 75 urban and rural intersections in Carla. We find that
current I2P methods struggle with cross-modal matching under large viewpoint
differences, especially at traffic intersections. TrafficLoc thus employs a
novel Geometry-guided Attention Loss (GAL) to focus only on the corresponding
geometric regions under different viewpoints during 2D-3D feature fusion. To
address feature inconsistency in paired image patch-point groups, we further
propose Inter-intra Contrastive Learning (ICL) to better separate 2D patch/3D
group features within each modality and introduce Dense Training
Alignment (DTA) with soft-argmax for improving position regression. Extensive
experiments show our TrafficLoc greatly improves the performance over the SOTA
I2P methods (up to 86%) on Carla Intersection and generalizes well to
real-world data. TrafficLoc also achieves new SOTA performance on KITTI and
NuScenes datasets, demonstrating the superiority across both in-vehicle and
traffic cameras. Our project page is publicly available at
https://tum-luk.github.io/projects/trafficloc/.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2024 17:42:53 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 09:18:04 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Xia",
"Yan",
""
],
[
"Lu",
"Yunxiang",
""
],
[
"Song",
"Rui",
""
],
[
"Dhaouadi",
"Oussema",
""
],
[
"Henriques",
"João F.",
""
],
[
"Cremers",
"Daniel",
""
]
] | TITLE: TrafficLoc: Localizing Traffic Surveillance Cameras in 3D Scenes
ABSTRACT: We tackle the problem of localizing traffic cameras within a 3D reference map
and propose a novel image-to-point cloud registration (I2P) method, TrafficLoc,
in a coarse-to-fine matching fashion. To overcome the lack of large-scale
real-world intersection datasets, we first introduce Carla Intersection, a new
simulated dataset with 75 urban and rural intersections in Carla. We find that
current I2P methods struggle with cross-modal matching under large viewpoint
differences, especially at traffic intersections. TrafficLoc thus employs a
novel Geometry-guided Attention Loss (GAL) to focus only on the corresponding
geometric regions under different viewpoints during 2D-3D feature fusion. To
address feature inconsistency in paired image patch-point groups, we further
propose Inter-intra Contrastive Learning (ICL) to better separate 2D patch/3D
group features within each modality and introduce Dense Training
Alignment (DTA) with soft-argmax for improving position regression. Extensive
experiments show our TrafficLoc greatly improves the performance over the SOTA
I2P methods (up to 86%) on Carla Intersection and generalizes well to
real-world data. TrafficLoc also achieves new SOTA performance on KITTI and
NuScenes datasets, demonstrating the superiority across both in-vehicle and
traffic cameras. Our project page is publicly available at
https://tum-luk.github.io/projects/trafficloc/.
|
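The TrafficLoc entry above uses Dense Training Alignment with a soft-argmax for position regression. The snippet below is a generic illustration of how a soft-argmax turns a dense similarity map into a differentiable 2D position estimate; it is not the paper's implementation, and the temperature value is an arbitrary choice.

import numpy as np

def soft_argmax_2d(score_map, temperature=0.1):
    """Differentiable 2D position from a dense score map (generic sketch)."""
    h, w = score_map.shape
    logits = score_map.reshape(-1) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax over all pixels
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Expected coordinates under the softmax distribution.
    return float((probs * xs.reshape(-1)).sum()), float((probs * ys.reshape(-1)).sum())

scores = np.random.rand(32, 32)
scores[10, 20] = 5.0                          # a strong match near (x=20, y=10)
print(soft_argmax_2d(scores))                 # approximately (20.0, 10.0)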
2412.11102 | XiMing Xing | Ximing Xing, Juncheng Hu, Guotao Liang, Jing Zhang, Dong Xu, Qian Yu | Empowering LLMs to Understand and Generate Complex Vector Graphics | Accepted by CVPR 2025. Project Page:
https://ximinng.github.io/LLM4SVGProject/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The unprecedented advancements in Large Language Models (LLMs) have
profoundly impacted natural language processing but have yet to fully embrace
the realm of scalable vector graphics (SVG) generation. While LLMs encode
partial knowledge of SVG data from web pages during training, recent findings
suggest that semantically ambiguous and tokenized representations within LLMs
may result in hallucinations in vector primitive predictions. Additionally, LLM
training typically lacks modeling and understanding of the rendering sequence
of vector paths, which can lead to occlusion between output vector primitives.
In this paper, we present LLM4SVG, an initial yet substantial step toward
bridging this gap by enabling LLMs to better understand and generate vector
graphics. LLM4SVG facilitates a deeper understanding of SVG components through
learnable semantic tokens, which precisely encode these tokens and their
corresponding properties to generate semantically aligned SVG outputs. Using a
series of learnable semantic tokens, a structured dataset for instruction
following is developed to support comprehension and generation across two
primary tasks. Our method introduces a modular architecture to existing large
language models, integrating semantic tags, vector instruction encoders,
fine-tuned commands, and powerful LLMs to tightly combine geometric,
appearance, and language information. To overcome the scarcity of SVG-text
instruction data, we developed an automated data generation pipeline that
collected our SVGX-SFT Dataset, consisting of high-quality human-designed SVGs
and 580k SVG instruction following data specifically crafted for LLM training,
which facilitated the adoption of the supervised fine-tuning strategy popular
in LLM development.
| [
{
"version": "v1",
"created": "Sun, 15 Dec 2024 07:49:31 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jan 2025 07:22:51 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 15:35:29 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Xing",
"Ximing",
""
],
[
"Hu",
"Juncheng",
""
],
[
"Liang",
"Guotao",
""
],
[
"Zhang",
"Jing",
""
],
[
"Xu",
"Dong",
""
],
[
"Yu",
"Qian",
""
]
] | TITLE: Empowering LLMs to Understand and Generate Complex Vector Graphics
ABSTRACT: The unprecedented advancements in Large Language Models (LLMs) have
profoundly impacted natural language processing but have yet to fully embrace
the realm of scalable vector graphics (SVG) generation. While LLMs encode
partial knowledge of SVG data from web pages during training, recent findings
suggest that semantically ambiguous and tokenized representations within LLMs
may result in hallucinations in vector primitive predictions. Additionally, LLM
training typically lacks modeling and understanding of the rendering sequence
of vector paths, which can lead to occlusion between output vector primitives.
In this paper, we present LLM4SVG, an initial yet substantial step toward
bridging this gap by enabling LLMs to better understand and generate vector
graphics. LLM4SVG facilitates a deeper understanding of SVG components through
learnable semantic tokens, which precisely encode these tokens and their
corresponding properties to generate semantically aligned SVG outputs. Using a
series of learnable semantic tokens, a structured dataset for instruction
following is developed to support comprehension and generation across two
primary tasks. Our method introduces a modular architecture to existing large
language models, integrating semantic tags, vector instruction encoders,
fine-tuned commands, and powerful LLMs to tightly combine geometric,
appearance, and language information. To overcome the scarcity of SVG-text
instruction data, we developed an automated data generation pipeline that
collected our SVGX-SFT Dataset, consisting of high-quality human-designed SVGs
and 580k SVG instruction following data specifically crafted for LLM training,
which facilitated the adoption of the supervised fine-tuning strategy popular
in LLM development.
|
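The LLM4SVG entry above relies on learnable semantic tokens added to an LLM's vocabulary so that SVG primitives and attributes receive dedicated embeddings instead of noisy sub-word splits. The sketch below shows that generic mechanism with the Hugging Face transformers API and GPT-2 as a stand-in model; the token names are invented for illustration and are not the paper's token set.

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical SVG semantic tokens; the real token inventory differs.
svg_tokens = ["<svg_path>", "<svg_circle>", "<svg_rect>", "<attr_fill>", "<attr_d>"]
tokenizer.add_special_tokens({"additional_special_tokens": svg_tokens})
model.resize_token_embeddings(len(tokenizer))   # the new rows are learnable embeddings

# Each semantic token now maps to a single vocabulary entry.
ids = tokenizer("<svg_circle> <attr_fill> red", return_tensors="pt").input_ids
print(ids.shape, tokenizer.convert_ids_to_tokens(ids[0]))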
2412.12877 | Samuel Teodoro | Samuel Teodoro, Agus Gunawan, Soo Ye Kim, Jihyong Oh, Munchurl Kim | PRIMEdit: Probability Redistribution for Instance-aware Multi-object
Video Editing with Benchmark Dataset | The first two authors contributed equally to this work. The last two
authors are co-corresponding authors. Please visit our project page at
https://kaist-viclab.github.io/primedit-site/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent AI-based video editing has enabled users to edit videos through simple
text prompts, significantly simplifying the editing process. However, recent
zero-shot video editing techniques primarily focus on global or single-object
edits, which can lead to unintended changes in other parts of the video. When
multiple objects require localized edits, existing methods face challenges,
such as unfaithful editing, editing leakage, and lack of suitable evaluation
datasets and metrics. To overcome these limitations, we propose
$\textbf{P}$robability $\textbf{R}$edistribution for $\textbf{I}$nstance-aware
$\textbf{M}$ulti-object Video $\textbf{Edit}$ing ($\textbf{PRIMEdit}$).
PRIMEdit is a zero-shot framework that introduces two key modules: (i)
Instance-centric Probability Redistribution (IPR) to ensure precise
localization and faithful editing and (ii) Disentangled Multi-instance Sampling
(DMS) to prevent editing leakage. Additionally, we present our new MIVE Dataset
for video editing featuring diverse video scenarios, and introduce the
Cross-Instance Accuracy (CIA) Score to evaluate editing leakage in
multi-instance video editing tasks. Our extensive qualitative, quantitative,
and user study evaluations demonstrate that PRIMEdit significantly outperforms
recent state-of-the-art methods in terms of editing faithfulness, accuracy, and
leakage prevention, setting a new benchmark for multi-instance video editing.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 13:00:04 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 02:49:28 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Teodoro",
"Samuel",
""
],
[
"Gunawan",
"Agus",
""
],
[
"Kim",
"Soo Ye",
""
],
[
"Oh",
"Jihyong",
""
],
[
"Kim",
"Munchurl",
""
]
] | TITLE: PRIMEdit: Probability Redistribution for Instance-aware Multi-object
Video Editing with Benchmark Dataset
ABSTRACT: Recent AI-based video editing has enabled users to edit videos through simple
text prompts, significantly simplifying the editing process. However, recent
zero-shot video editing techniques primarily focus on global or single-object
edits, which can lead to unintended changes in other parts of the video. When
multiple objects require localized edits, existing methods face challenges,
such as unfaithful editing, editing leakage, and lack of suitable evaluation
datasets and metrics. To overcome these limitations, we propose
$\textbf{P}$robability $\textbf{R}$edistribution for $\textbf{I}$nstance-aware
$\textbf{M}$ulti-object Video $\textbf{Edit}$ing ($\textbf{PRIMEdit}$).
PRIMEdit is a zero-shot framework that introduces two key modules: (i)
Instance-centric Probability Redistribution (IPR) to ensure precise
localization and faithful editing and (ii) Disentangled Multi-instance Sampling
(DMS) to prevent editing leakage. Additionally, we present our new MIVE Dataset
for video editing featuring diverse video scenarios, and introduce the
Cross-Instance Accuracy (CIA) Score to evaluate editing leakage in
multi-instance video editing tasks. Our extensive qualitative, quantitative,
and user study evaluations demonstrate that PRIMEdit significantly outperforms
recent state-of-the-art methods in terms of editing faithfulness, accuracy, and
leakage prevention, setting a new benchmark for multi-instance video editing.
|
2412.14963 | Yiyu Zhuang Zhuang | Yiyu Zhuang, Jiaxi Lv, Hao Wen, Qing Shuai, Ailing Zeng, Hao Zhu,
Shifeng Chen, Yujiu Yang, Xun Cao, Wei Liu | IDOL: Instant Photorealistic 3D Human Creation from a Single Image | 22 pages, 16 figures, includes main content, supplementary materials,
and references | CVPR 2025 | null | null | cs.CV cs.GR cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Creating a high-fidelity, animatable 3D full-body avatar from a single image
is a challenging task due to the diverse appearance and poses of humans and the
limited availability of high-quality training data. To achieve fast and
high-quality human reconstruction, this work rethinks the task from the
perspectives of dataset, model, and representation. First, we introduce a
large-scale HUman-centric GEnerated dataset, HuGe100K, consisting of 100K
diverse, photorealistic sets of human images. Each set contains 24-view frames
in specific human poses, generated using a pose-controllable
image-to-multi-view model. Next, leveraging the diversity in views, poses, and
appearances within HuGe100K, we develop a scalable feed-forward transformer
model to predict a 3D human Gaussian representation in a uniform space from a
given human image. This model is trained to disentangle human pose, body shape,
clothing geometry, and texture. The estimated Gaussians can be animated without
post-processing. We conduct comprehensive experiments to validate the
effectiveness of the proposed dataset and method. Our model demonstrates the
ability to efficiently reconstruct photorealistic humans at 1K resolution from
a single input image using a single GPU instantly. Additionally, it seamlessly
supports various applications, as well as shape and texture editing tasks.
Project page: https://yiyuzhuang.github.io/IDOL/.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2024 15:43:05 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 03:48:17 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Zhuang",
"Yiyu",
""
],
[
"Lv",
"Jiaxi",
""
],
[
"Wen",
"Hao",
""
],
[
"Shuai",
"Qing",
""
],
[
"Zeng",
"Ailing",
""
],
[
"Zhu",
"Hao",
""
],
[
"Chen",
"Shifeng",
""
],
[
"Yang",
"Yujiu",
""
],
[
"Cao",
"Xun",
""
],
[
"Liu",
"Wei",
""
]
] | TITLE: IDOL: Instant Photorealistic 3D Human Creation from a Single Image
ABSTRACT: Creating a high-fidelity, animatable 3D full-body avatar from a single image
is a challenging task due to the diverse appearance and poses of humans and the
limited availability of high-quality training data. To achieve fast and
high-quality human reconstruction, this work rethinks the task from the
perspectives of dataset, model, and representation. First, we introduce a
large-scale HUman-centric GEnerated dataset, HuGe100K, consisting of 100K
diverse, photorealistic sets of human images. Each set contains 24-view frames
in specific human poses, generated using a pose-controllable
image-to-multi-view model. Next, leveraging the diversity in views, poses, and
appearances within HuGe100K, we develop a scalable feed-forward transformer
model to predict a 3D human Gaussian representation in a uniform space from a
given human image. This model is trained to disentangle human pose, body shape,
clothing geometry, and texture. The estimated Gaussians can be animated without
post-processing. We conduct comprehensive experiments to validate the
effectiveness of the proposed dataset and method. Our model demonstrates the
ability to efficiently reconstruct photorealistic humans at 1K resolution from
a single input image using a single GPU instantly. Additionally, it seamlessly
supports various applications, as well as shape and texture editing tasks.
Project page: https://yiyuzhuang.github.io/IDOL/.
|
2412.15690 | Hongbo Li | Hongbo Li and Lingjie Duan | Theory of Mixture-of-Experts for Mobile Edge Computing | This is the technical report for our paper accepted by INFOCOM 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | In mobile edge computing (MEC) networks, mobile users generate diverse
machine learning tasks dynamically over time. These tasks are typically
offloaded to the nearest available edge server, by considering communication
and computational efficiency. However, this operation does not ensure that each
server specializes in a specific type of task, which leads to severe overfitting
or catastrophic forgetting of previous tasks. To improve the continual learning
(CL) performance of online tasks, we are the first to introduce
mixture-of-experts (MoE) theory in MEC networks and save MEC operation from the
increasing generalization error over time. Our MoE theory treats each MEC
server as an expert and dynamically adapts to changes in server availability by
considering data transfer and computation time. Unlike existing MoE models
designed for offline tasks, ours is tailored for handling continuous streams of
tasks in the MEC environment. We introduce an adaptive gating network in MEC to
adaptively identify and route newly arrived tasks of unknown data distributions
to available experts, enabling each expert to specialize in a specific type of
tasks upon convergence. We derive the minimum number of experts required to
match each task with a specialized, available expert. Our MoE approach
consistently reduces the overall generalization error over time, unlike the
traditional MEC approach. Interestingly, when the number of experts is
sufficient to ensure convergence, adding more experts delays the convergence
time and worsens the generalization error. Finally, we perform extensive
experiments on real datasets in deep neural networks (DNNs) to verify our
theoretical results.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2024 09:09:10 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 19:55:56 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Li",
"Hongbo",
""
],
[
"Duan",
"Lingjie",
""
]
] | TITLE: Theory of Mixture-of-Experts for Mobile Edge Computing
ABSTRACT: In mobile edge computing (MEC) networks, mobile users generate diverse
machine learning tasks dynamically over time. These tasks are typically
offloaded to the nearest available edge server, by considering communication
and computational efficiency. However, this operation does not ensure that each
server specializes in a specific type of task, which leads to severe overfitting
or catastrophic forgetting of previous tasks. To improve the continual learning
(CL) performance of online tasks, we are the first to introduce
mixture-of-experts (MoE) theory in MEC networks and save MEC operation from the
increasing generalization error over time. Our MoE theory treats each MEC
server as an expert and dynamically adapts to changes in server availability by
considering data transfer and computation time. Unlike existing MoE models
designed for offline tasks, ours is tailored for handling continuous streams of
tasks in the MEC environment. We introduce an adaptive gating network in MEC to
adaptively identify and route newly arrived tasks of unknown data distributions
to available experts, enabling each expert to specialize in a specific type of
tasks upon convergence. We derive the minimum number of experts required to
match each task with a specialized, available expert. Our MoE approach
consistently reduces the overall generalization error over time, unlike the
traditional MEC approach. Interestingly, when the number of experts is
sufficient to ensure convergence, adding more experts delays the convergence
time and worsens the generalization error. Finally, we perform extensive
experiments on real datasets in deep neural networks (DNNs) to verify our
theoretical results.
|
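The mixture-of-experts entry above treats each edge server as an expert and routes arriving tasks through an adaptive gating network that respects server availability. The toy code below illustrates that routing idea; the affinity scores, availability masking, and softmax gate are generic choices, not the paper's exact formulation.

import numpy as np

rng = np.random.default_rng(0)

def route_task(task_feat, expert_profiles, available):
    """Pick an available expert (edge server) for a task via a softmax gate."""
    scores = expert_profiles @ task_feat            # affinity of each expert to the task
    scores = np.where(available, scores, -np.inf)   # unavailable servers cannot be chosen
    gate = np.exp(scores - scores[available].max())
    gate = np.where(available, gate, 0.0)           # zero out unavailable experts explicitly
    gate /= gate.sum()                              # gating probabilities
    return int(gate.argmax()), gate

num_experts, dim = 4, 8
profiles = rng.normal(size=(num_experts, dim))      # each expert's learned specialization
task = rng.normal(size=dim)
available = np.array([True, False, True, True])     # one server currently busy
expert, gate = route_task(task, profiles, available)
print("routed to expert", expert, "gate:", np.round(gate, 3))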
2412.16840 | Yi Liu | Yi Liu, Chengxin Li, Xiaohui Dong, Lei Li, Dingwen Zhang, Shoukun Xu,
Jungong Han | Seamless Detection: Unifying Salient Object Detection and Camouflaged
Object Detection | null | Expert Systems with Applications, 2025 | 10.1016/j.eswa.2025.126912 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Achieving joint learning of Salient Object Detection (SOD) and Camouflaged
Object Detection (COD) is extremely challenging due to their distinct object
characteristics, i.e., saliency and camouflage. The only preliminary research
treats them as two contradictory tasks, training models on large-scale labeled
data alternately for each task and assessing them independently. However, such
task-specific mechanisms fail to meet real-world demands for addressing unknown
tasks effectively. To address this issue, in this paper, we pioneer a
task-agnostic framework to unify SOD and COD. To this end, inspired by the
agreeable nature of binary segmentation for SOD and COD, we propose a
Contrastive Distillation Paradigm (CDP) to distil the foreground from the
background, facilitating the identification of salient and camouflaged objects
amidst their surroundings. To probe into the contribution of our CDP, we design
a simple yet effective contextual decoder involving the interval-layer and
global context, which achieves an inference speed of 67 fps. Besides the
supervised setting, our CDP can be seamlessly integrated into unsupervised
settings, eliminating the reliance on extensive human annotations. Experiments
on public SOD and COD datasets demonstrate the superiority of our proposed
framework in both supervised and unsupervised settings, compared with existing
state-of-the-art approaches. Code is available at
https://github.com/liuyi1989/Seamless-Detection.
| [
{
"version": "v1",
"created": "Sun, 22 Dec 2024 03:25:43 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Liu",
"Yi",
""
],
[
"Li",
"Chengxin",
""
],
[
"Dong",
"Xiaohui",
""
],
[
"Li",
"Lei",
""
],
[
"Zhang",
"Dingwen",
""
],
[
"Xu",
"Shoukun",
""
],
[
"Han",
"Jungong",
""
]
] | TITLE: Seamless Detection: Unifying Salient Object Detection and Camouflaged
Object Detection
ABSTRACT: Achieving joint learning of Salient Object Detection (SOD) and Camouflaged
Object Detection (COD) is extremely challenging due to their distinct object
characteristics, i.e., saliency and camouflage. The only preliminary research
treats them as two contradictory tasks, training models on large-scale labeled
data alternately for each task and assessing them independently. However, such
task-specific mechanisms fail to meet real-world demands for addressing unknown
tasks effectively. To address this issue, in this paper, we pioneer a
task-agnostic framework to unify SOD and COD. To this end, inspired by the
agreeable nature of binary segmentation for SOD and COD, we propose a
Contrastive Distillation Paradigm (CDP) to distil the foreground from the
background, facilitating the identification of salient and camouflaged objects
amidst their surroundings. To probe into the contribution of our CDP, we design
a simple yet effective contextual decoder involving the interval-layer and
global context, which achieves an inference speed of 67 fps. Besides the
supervised setting, our CDP can be seamlessly integrated into unsupervised
settings, eliminating the reliance on extensive human annotations. Experiments
on public SOD and COD datasets demonstrate the superiority of our proposed
framework in both supervised and unsupervised settings, compared with existing
state-of-the-art approaches. Code is available at
https://github.com/liuyi1989/Seamless-Detection.
|
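The Seamless Detection entry above distils foreground from background with a Contrastive Distillation Paradigm so a single model can handle both salient and camouflaged objects. The snippet below sketches one plausible foreground/background contrastive objective (pixels pulled toward their own prototype and pushed away from the other); it is an assumed illustration, not the paper's actual CDP loss.

import torch
import torch.nn.functional as F

def fg_bg_contrastive_loss(features, mask, temperature=0.07):
    """Pull pixels toward their own (fg or bg) prototype, push from the other one.

    features: (C, H, W) pixel embeddings, mask: (H, W) binary foreground mask.
    """
    c, h, w = features.shape
    feats = F.normalize(features.view(c, -1), dim=0)          # (C, H*W)
    m = mask.view(-1).float()
    fg_proto = F.normalize((feats * m).sum(dim=1) / m.sum().clamp(min=1), dim=0)
    bg_proto = F.normalize((feats * (1 - m)).sum(dim=1) / (1 - m).sum().clamp(min=1), dim=0)
    logits = torch.stack([fg_proto @ feats, bg_proto @ feats], dim=1) / temperature  # (H*W, 2)
    target = (1 - m).long()                                    # 0 -> fg prototype, 1 -> bg prototype
    return F.cross_entropy(logits, target)

feats = torch.randn(16, 32, 32)
mask = torch.rand(32, 32) > 0.7
print(fg_bg_contrastive_loss(feats, mask).item())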
2412.16906 | Quan Dao | Quan Dao, Hao Phung, Trung Dao, Dimitris Metaxas, Anh Tran | Self-Corrected Flow Distillation for Consistent One-Step and Few-Step
Text-to-Image Generation | Accepted at AAAI 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Flow matching has emerged as a promising framework for training generative
models, demonstrating impressive empirical performance while offering relative
ease of training compared to diffusion-based models. However, this method still
requires numerous function evaluations in the sampling process. To address
these limitations, we introduce a self-corrected flow distillation method that
effectively integrates consistency models and adversarial training within the
flow-matching framework. This work is a pioneer in achieving consistent
generation quality in both few-step and one-step sampling. Our extensive
experiments validate the effectiveness of our method, yielding superior results
both quantitatively and qualitatively on CelebA-HQ and zero-shot benchmarks on
the COCO dataset. Our implementation is released at
https://github.com/VinAIResearch/SCFlow
| [
{
"version": "v1",
"created": "Sun, 22 Dec 2024 07:48:49 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 03:47:02 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Dao",
"Quan",
""
],
[
"Phung",
"Hao",
""
],
[
"Dao",
"Trung",
""
],
[
"Metaxas",
"Dimitris",
""
],
[
"Tran",
"Anh",
""
]
] | TITLE: Self-Corrected Flow Distillation for Consistent One-Step and Few-Step
Text-to-Image Generation
ABSTRACT: Flow matching has emerged as a promising framework for training generative
models, demonstrating impressive empirical performance while offering relative
ease of training compared to diffusion-based models. However, this method still
requires numerous function evaluations in the sampling process. To address
these limitations, we introduce a self-corrected flow distillation method that
effectively integrates consistency models and adversarial training within the
flow-matching framework. This work is a pioneer in achieving consistent
generation quality in both few-step and one-step sampling. Our extensive
experiments validate the effectiveness of our method, yielding superior results
both quantitatively and qualitatively on CelebA-HQ and zero-shot benchmarks on
the COCO dataset. Our implementation is released at
https://github.com/VinAIResearch/SCFlow
|
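The self-corrected flow distillation entry above builds on flow matching. As background, the snippet below shows the standard conditional flow-matching objective with a linear interpolation path, which is the starting point such distillation methods compress into one or a few sampling steps; it is generic background, not the paper's distillation procedure.

import torch

def flow_matching_loss(model, x1):
    """Standard conditional flow matching with a linear path x_t = (1-t)*x0 + t*x1."""
    x0 = torch.randn_like(x1)                       # noise sample
    t = torch.rand(x1.shape[0], 1)                  # one time per sample
    xt = (1 - t) * x0 + t * x1
    target_velocity = x1 - x0                       # velocity of the linear path
    pred = model(xt, t)
    return ((pred - target_velocity) ** 2).mean()

# Tiny stand-in velocity network on 2-D toy data.
net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
wrapped = lambda xt, t: net(torch.cat([xt, t], dim=1))
data = torch.randn(256, 2) + torch.tensor([4.0, 0.0])
print(flow_matching_loss(wrapped, data).item())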
2412.17056 | Malte Schilling | Fabian Ridder and Malte Schilling | The HalluRAG Dataset: Detecting Closed-Domain Hallucinations in RAG
Applications Using an LLM's Internal States | 19 pages, 3 figures | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Detecting hallucinations in large language models (LLMs) is critical for
enhancing their reliability and trustworthiness. Most research focuses on
hallucinations as deviations from information seen during training. However,
the opaque nature of an LLM's parametric knowledge complicates the
understanding of why generated texts appear ungrounded: The LLM might not have
picked up the necessary knowledge from large and often inaccessible datasets,
or the information might have been changed or contradicted during further
training. Our focus is on hallucinations involving information not used in
training, which we determine by using recency to ensure the information emerged
after a cut-off date. This study investigates these hallucinations by detecting
them at sentence level using different internal states of various LLMs. We
present HalluRAG, a dataset designed to train classifiers on these
hallucinations. Depending on the model and quantization, MLPs trained on
HalluRAG detect hallucinations with test accuracies ranging up to 75 %, with
Mistral-7B-Instruct-v0.1 achieving the highest test accuracies. Our results
show that IAVs detect hallucinations as effectively as CEVs and reveal that
answerable and unanswerable prompts are encoded differently, since separate
classifiers for these categories improved accuracy. However, HalluRAG showed
some limited generalizability, advocating for more diversity in datasets on
hallucinations.
| [
{
"version": "v1",
"created": "Sun, 22 Dec 2024 15:08:24 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 10:50:21 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Ridder",
"Fabian",
""
],
[
"Schilling",
"Malte",
""
]
] | TITLE: The HalluRAG Dataset: Detecting Closed-Domain Hallucinations in RAG
Applications Using an LLM's Internal States
ABSTRACT: Detecting hallucinations in large language models (LLMs) is critical for
enhancing their reliability and trustworthiness. Most research focuses on
hallucinations as deviations from information seen during training. However,
the opaque nature of an LLM's parametric knowledge complicates the
understanding of why generated texts appear ungrounded: The LLM might not have
picked up the necessary knowledge from large and often inaccessible datasets,
or the information might have been changed or contradicted during further
training. Our focus is on hallucinations involving information not used in
training, which we determine by using recency to ensure the information emerged
after a cut-off date. This study investigates these hallucinations by detecting
them at sentence level using different internal states of various LLMs. We
present HalluRAG, a dataset designed to train classifiers on these
hallucinations. Depending on the model and quantization, MLPs trained on
HalluRAG detect hallucinations with test accuracies ranging up to 75 %, with
Mistral-7B-Instruct-v0.1 achieving the highest test accuracies. Our results
show that IAVs detect hallucinations as effectively as CEVs and reveal that
answerable and unanswerable prompts are encoded differently, since separate
classifiers for these categories improved accuracy. However, HalluRAG showed
some limited generalizability, advocating for more diversity in datasets on
hallucinations.
|
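The HalluRAG entry above trains MLP classifiers on an LLM's internal states to flag hallucinated sentences. The code below is a schematic stand-in that uses random vectors in place of real hidden activations, just to show the classifier setup; the feature dimension and hyperparameters are assumptions.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in for hidden-state vectors extracted per generated sentence.
hidden_dim, n = 512, 2000
X = rng.normal(size=(n, hidden_dim))
y = rng.integers(0, 2, size=n)                    # 1 = hallucinated, 0 = grounded (synthetic labels)
X[y == 1] += 0.3                                  # inject a weak, learnable signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))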
2501.00599 | Yuqian Yuan | Yuqian Yuan, Hang Zhang, Wentong Li, Zesen Cheng, Boqiang Zhang, Long
Li, Xin Li, Deli Zhao, Wenqiao Zhang, Yueting Zhuang, Jianke Zhu, Lidong Bing | VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with
Video LLM | 17 pages, 14 figures, technical report | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video Large Language Models (Video LLMs) have recently exhibited remarkable
capabilities in general video understanding. However, they mainly focus on
holistic comprehension and struggle with capturing fine-grained spatial and
temporal details. Besides, the lack of high-quality object-level video
instruction data and a comprehensive benchmark further hinders their
advancements. To tackle these challenges, we introduce the VideoRefer Suite to
empower Video LLM for finer-level spatial-temporal video understanding, i.e.,
enabling perception and reasoning on any objects throughout the video.
Specifically, we thoroughly develop VideoRefer Suite across three essential
aspects: dataset, model, and benchmark. Firstly, we introduce a multi-agent
data engine to meticulously curate a large-scale, high-quality object-level
video instruction dataset, termed VideoRefer-700K. Next, we present the
VideoRefer model, which equips a versatile spatial-temporal object encoder to
capture precise regional and sequential representations. Finally, we
meticulously create a VideoRefer-Bench to comprehensively assess the
spatial-temporal understanding capability of a Video LLM, evaluating it across
various aspects. Extensive experiments and analyses demonstrate that our
VideoRefer model not only achieves promising performance on video referring
benchmarks but also facilitates general video understanding capabilities.
| [
{
"version": "v1",
"created": "Tue, 31 Dec 2024 18:56:46 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jan 2025 14:38:30 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 08:10:15 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Yuan",
"Yuqian",
""
],
[
"Zhang",
"Hang",
""
],
[
"Li",
"Wentong",
""
],
[
"Cheng",
"Zesen",
""
],
[
"Zhang",
"Boqiang",
""
],
[
"Li",
"Long",
""
],
[
"Li",
"Xin",
""
],
[
"Zhao",
"Deli",
""
],
[
"Zhang",
"Wenqiao",
""
],
[
"Zhuang",
"Yueting",
""
],
[
"Zhu",
"Jianke",
""
],
[
"Bing",
"Lidong",
""
]
] | TITLE: VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with
Video LLM
ABSTRACT: Video Large Language Models (Video LLMs) have recently exhibited remarkable
capabilities in general video understanding. However, they mainly focus on
holistic comprehension and struggle with capturing fine-grained spatial and
temporal details. Besides, the lack of high-quality object-level video
instruction data and a comprehensive benchmark further hinders their
advancements. To tackle these challenges, we introduce the VideoRefer Suite to
empower Video LLM for finer-level spatial-temporal video understanding, i.e.,
enabling perception and reasoning on any objects throughout the video.
Specifically, we thoroughly develop VideoRefer Suite across three essential
aspects: dataset, model, and benchmark. Firstly, we introduce a multi-agent
data engine to meticulously curate a large-scale, high-quality object-level
video instruction dataset, termed VideoRefer-700K. Next, we present the
VideoRefer model, which equips a versatile spatial-temporal object encoder to
capture precise regional and sequential representations. Finally, we
meticulously create a VideoRefer-Bench to comprehensively assess the
spatial-temporal understanding capability of a Video LLM, evaluating it across
various aspects. Extensive experiments and analyses demonstrate that our
VideoRefer model not only achieves promising performance on video referring
benchmarks but also facilitates general video understanding capabilities.
|
2501.01453 | Ali Rabeh | Ali Rabeh, Ethan Herron, Aditya Balu, Soumik Sarkar, Chinmay Hegde,
Adarsh Krishnamurthy, Baskar Ganapathysubramanian | Geometry Matters: Benchmarking Scientific ML Approaches for Flow
Prediction around Complex Geometries | null | null | null | null | cs.LG physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | Rapid and accurate simulations of fluid dynamics around complicated geometric
bodies are critical in a variety of engineering and scientific applications,
including aerodynamics and biomedical flows. However, while scientific machine
learning (SciML) has shown considerable promise, most studies in this field are
limited to simple geometries, and complex, real-world scenarios are
underexplored. This paper addresses this gap by benchmarking diverse SciML
models, including neural operators and vision transformer-based foundation
models, for fluid flow prediction over intricate geometries. Using a
high-fidelity dataset of steady-state flows across various geometries, we
evaluate the impact of geometric representations -- Signed Distance Fields
(SDF) and binary masks -- on model accuracy, scalability, and generalization.
Central to this effort is the introduction of a novel, unified scoring
framework that integrates metrics for global accuracy, boundary layer fidelity,
and physical consistency to enable a robust, comparative evaluation of model
performance. Our findings demonstrate that newer foundation models
significantly outperform neural operators, particularly in data-limited
scenarios, and that SDF representations yield superior results with sufficient
training data. Despite this promise, all models struggle with
out-of-distribution generalization, highlighting a critical challenge for
future SciML applications. By advancing both evaluation models and modeling
capabilities, our work paves the way for robust and scalable ML solutions for
fluid dynamics across complex geometries.
| [
{
"version": "v1",
"created": "Tue, 31 Dec 2024 00:23:15 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 23:26:27 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Rabeh",
"Ali",
""
],
[
"Herron",
"Ethan",
""
],
[
"Balu",
"Aditya",
""
],
[
"Sarkar",
"Soumik",
""
],
[
"Hegde",
"Chinmay",
""
],
[
"Krishnamurthy",
"Adarsh",
""
],
[
"Ganapathysubramanian",
"Baskar",
""
]
] | TITLE: Geometry Matters: Benchmarking Scientific ML Approaches for Flow
Prediction around Complex Geometries
ABSTRACT: Rapid and accurate simulations of fluid dynamics around complicated geometric
bodies are critical in a variety of engineering and scientific applications,
including aerodynamics and biomedical flows. However, while scientific machine
learning (SciML) has shown considerable promise, most studies in this field are
limited to simple geometries, and complex, real-world scenarios are
underexplored. This paper addresses this gap by benchmarking diverse SciML
models, including neural operators and vision transformer-based foundation
models, for fluid flow prediction over intricate geometries. Using a
high-fidelity dataset of steady-state flows across various geometries, we
evaluate the impact of geometric representations -- Signed Distance Fields
(SDF) and binary masks -- on model accuracy, scalability, and generalization.
Central to this effort is the introduction of a novel, unified scoring
framework that integrates metrics for global accuracy, boundary layer fidelity,
and physical consistency to enable a robust, comparative evaluation of model
performance. Our findings demonstrate that newer foundation models
significantly outperform neural operators, particularly in data-limited
scenarios, and that SDF representations yield superior results with sufficient
training data. Despite this promise, all models struggle with
out-of-distribution generalization, highlighting a critical challenge for
future SciML applications. By advancing both evaluation models and modeling
capabilities, our work paves the way for robust and scalable ML solutions for
fluid dynamics across complex geometries.
|
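The benchmark entry above compares Signed Distance Field (SDF) and binary-mask encodings of geometry. As a reference point, the snippet below shows one common way to turn a 2D binary mask into a signed distance field with SciPy's Euclidean distance transform; the circle is a toy geometry and the sign convention is one of several in use.

import numpy as np
from scipy.ndimage import distance_transform_edt

def mask_to_sdf(mask):
    """Signed distance: positive outside the object, negative inside (one common convention)."""
    outside = distance_transform_edt(~mask)   # distance to the object for exterior pixels
    inside = distance_transform_edt(mask)     # distance to the boundary for interior pixels
    return outside - inside

# Toy geometry: a filled circle on a 128x128 grid.
yy, xx = np.mgrid[0:128, 0:128]
mask = (xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2
sdf = mask_to_sdf(mask)
print(sdf.min(), sdf.max())                   # roughly -30 inside up to ~+60 at the corners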
2501.08325 | Jiwen Yu | Jiwen Yu, Yiran Qin, Xintao Wang, Pengfei Wan, Di Zhang, Xihui Liu | GameFactory: Creating New Games with Generative Interactive Videos | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Generative videos have the potential to revolutionize game development by
autonomously creating new content. In this paper, we present GameFactory, a
framework for action-controlled scene-generalizable game video generation. We
first address the fundamental challenge of action controllability by
introducing GF-Minecraft, a action-annotated game video dataset without human
bias, and developing a action control module that enables precise control over
both keyboard and mouse inputs. We further extend to support autoregressive
generation for unlimited-length interactive videos. More importantly,
GameFactory tackles the critical challenge of scene-generalizable action
control, which most existing methods fail to address. To enable the creation of
entirely new and diverse games beyond fixed styles and scenes, we leverage the
open-domain generative priors from pre-trained video diffusion models. To
bridge the domain gap between open-domain priors and small-scale game datasets,
we propose a multi-phase training strategy with a domain adapter that decouples
game style learning from action control. This decoupling ensures that action
control learning is no longer bound to specific game styles, thereby achieving
scene-generalizable action control. Experimental results demonstrate that
GameFactory effectively generates open-domain action-controllable game videos,
representing a significant step forward in AI-driven game generation. Our
dataset and project page are publicly available at
https://yujiwen.github.io/gamefactory/.
| [
{
"version": "v1",
"created": "Tue, 14 Jan 2025 18:57:21 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 03:34:45 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Yu",
"Jiwen",
""
],
[
"Qin",
"Yiran",
""
],
[
"Wang",
"Xintao",
""
],
[
"Wan",
"Pengfei",
""
],
[
"Zhang",
"Di",
""
],
[
"Liu",
"Xihui",
""
]
] | TITLE: GameFactory: Creating New Games with Generative Interactive Videos
ABSTRACT: Generative videos have the potential to revolutionize game development by
autonomously creating new content. In this paper, we present GameFactory, a
framework for action-controlled scene-generalizable game video generation. We
first address the fundamental challenge of action controllability by
introducing GF-Minecraft, an action-annotated game video dataset without human
bias, and developing an action control module that enables precise control over
both keyboard and mouse inputs. We further extend the framework to support autoregressive
generation for unlimited-length interactive videos. More importantly,
GameFactory tackles the critical challenge of scene-generalizable action
control, which most existing methods fail to address. To enable the creation of
entirely new and diverse games beyond fixed styles and scenes, we leverage the
open-domain generative priors from pre-trained video diffusion models. To
bridge the domain gap between open-domain priors and small-scale game datasets,
we propose a multi-phase training strategy with a domain adapter that decouples
game style learning from action control. This decoupling ensures that action
control learning is no longer bound to specific game styles, thereby achieving
scene-generalizable action control. Experimental results demonstrate that
GameFactory effectively generates open-domain action-controllable game videos,
representing a significant step forward in AI-driven game generation. Our
dataset and project page are publicly available at
https://yujiwen.github.io/gamefactory/.
|
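The GameFactory entry above conditions video generation on keyboard and mouse inputs through an action control module. The sketch below shows one generic way such a per-frame conditioning vector could be formed (an embedding for discrete key presses plus a projection of continuous mouse deltas); it is an assumed illustration, not the paper's module.

import torch
import torch.nn as nn

class ActionEncoder(nn.Module):
    """Toy encoder: discrete key IDs + continuous mouse deltas -> one conditioning vector."""
    def __init__(self, num_keys=16, dim=64):
        super().__init__()
        self.key_embed = nn.Embedding(num_keys, dim)
        self.mouse_proj = nn.Linear(2, dim)     # (dx, dy) of the mouse per frame
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, key_ids, mouse_delta):
        # key_ids: (B, T) int64, mouse_delta: (B, T, 2) float
        k = self.key_embed(key_ids)
        m = self.mouse_proj(mouse_delta)
        return self.fuse(torch.cat([k, m], dim=-1))   # (B, T, dim) per-frame conditioning

enc = ActionEncoder()
keys = torch.randint(0, 16, (2, 8))
mouse = torch.randn(2, 8, 2)
print(enc(keys, mouse).shape)                  # torch.Size([2, 8, 64])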
2501.13420 | Jinghan You | Jinghan You, Shanglin Li, Yuanrui Sun, Jiangchuan Wei, Mingyu Guo,
Chao Feng, Jiao Ran | LVFace: Progressive Cluster Optimization for Large Vision Models in Face
Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision Transformers (ViTs) have revolutionized large-scale visual modeling,
yet remain underexplored in face recognition (FR) where CNNs still dominate. We
identify a critical bottleneck: CNN-inspired training paradigms fail to unlock
ViT's potential, leading to suboptimal performance and convergence
instability. To address this challenge, we propose LVFace, a ViT-based FR model
that integrates Progressive Cluster Optimization (PCO) to achieve superior
results. Specifically, PCO sequentially applies negative class sub-sampling
(NCS) for robust and fast feature alignment from random initialization, feature
expectation penalties for centroid stabilization, and finally cluster boundary
refinement through full-batch training without NCS constraints. LVFace
establishes a new state-of-the-art face recognition baseline, surpassing
leading approaches such as UniFace and TopoFR across multiple benchmarks.
Extensive experiments demonstrate that LVFace delivers consistent performance
gains, while exhibiting scalability to large-scale datasets and compatibility
with mainstream VLMs and LLMs. Notably, LVFace secured 1st place in the ICCV
2021 Masked Face Recognition (MFR)-Ongoing Challenge (March 2025), proving its
efficacy in real-world scenarios.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2025 06:48:48 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 03:43:57 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"You",
"Jinghan",
""
],
[
"Li",
"Shanglin",
""
],
[
"Sun",
"Yuanrui",
""
],
[
"Wei",
"Jiangchuan",
""
],
[
"Guo",
"Mingyu",
""
],
[
"Feng",
"Chao",
""
],
[
"Ran",
"Jiao",
""
]
] | TITLE: LVFace: Progressive Cluster Optimization for Large Vision Models in Face
Recognition
ABSTRACT: Vision Transformers (ViTs) have revolutionized large-scale visual modeling,
yet remain underexplored in face recognition (FR) where CNNs still dominate. We
identify a critical bottleneck: CNN-inspired training paradigms fail to unlock
ViT's potential, leading to suboptimal performance and convergence
instability. To address this challenge, we propose LVFace, a ViT-based FR model
that integrates Progressive Cluster Optimization (PCO) to achieve superior
results. Specifically, PCO sequentially applies negative class sub-sampling
(NCS) for robust and fast feature alignment from random initialization, feature
expectation penalties for centroid stabilization, and finally cluster boundary
refinement through full-batch training without NCS constraints. LVFace
establishes a new state-of-the-art face recognition baseline, surpassing
leading approaches such as UniFace and TopoFR across multiple benchmarks.
Extensive experiments demonstrate that LVFace delivers consistent performance
gains, while exhibiting scalability to large-scale datasets and compatibility
with mainstream VLMs and LLMs. Notably, LVFace secured 1st place in the ICCV
2021 Masked Face Recognition (MFR)-Ongoing Challenge (March 2025), proving its
efficacy in real-world scenarios.
|
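The LVFace entry above begins Progressive Cluster Optimization with negative class sub-sampling (NCS): the softmax is computed over the true class centre plus only a random subset of the many negative centres. The snippet below sketches that general idea for a single embedding; the subset size and the absence of margin terms are simplifications, not the paper's exact recipe.

import numpy as np

rng = np.random.default_rng(0)

def ncs_softmax_loss(embedding, centers, label, num_neg=64):
    """Cross-entropy over the true center plus a random subset of negative centers."""
    negatives = rng.choice(np.delete(np.arange(len(centers)), label), size=num_neg, replace=False)
    subset = np.concatenate(([label], negatives))          # position 0 is the positive class
    logits = centers[subset] @ embedding
    logits -= logits.max()
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]

num_classes, dim = 10000, 128                               # many identities, as in face recognition
centers = rng.normal(size=(num_classes, dim))
centers /= np.linalg.norm(centers, axis=1, keepdims=True)
emb = centers[42] + 0.1 * rng.normal(size=dim)              # an embedding near class 42's center
print(ncs_softmax_loss(emb, centers, label=42))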
2501.14677 | Peiqing Yang | Peiqing Yang, Shangchen Zhou, Jixin Zhao, Qingyi Tao, Chen Change Loy | MatAnyone: Stable Video Matting with Consistent Memory Propagation | Project page: https://pq-yang.github.io/projects/MatAnyone | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Auxiliary-free human video matting methods, which rely solely on input
frames, often struggle with complex or ambiguous backgrounds. To address this,
we propose MatAnyone, a robust framework tailored for target-assigned video
matting. Specifically, building on a memory-based paradigm, we introduce a
consistent memory propagation module via region-adaptive memory fusion, which
adaptively integrates memory from the previous frame. This ensures semantic
stability in core regions while preserving fine-grained details along object
boundaries. For robust training, we present a larger, high-quality, and diverse
dataset for video matting. Additionally, we incorporate a novel training
strategy that efficiently leverages large-scale segmentation data, boosting
matting stability. With this new network design, dataset, and training
strategy, MatAnyone delivers robust and accurate video matting results in
diverse real-world scenarios, outperforming existing methods.
| [
{
"version": "v1",
"created": "Fri, 24 Jan 2025 17:56:24 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 06:56:38 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Yang",
"Peiqing",
""
],
[
"Zhou",
"Shangchen",
""
],
[
"Zhao",
"Jixin",
""
],
[
"Tao",
"Qingyi",
""
],
[
"Loy",
"Chen Change",
""
]
] | TITLE: MatAnyone: Stable Video Matting with Consistent Memory Propagation
ABSTRACT: Auxiliary-free human video matting methods, which rely solely on input
frames, often struggle with complex or ambiguous backgrounds. To address this,
we propose MatAnyone, a robust framework tailored for target-assigned video
matting. Specifically, building on a memory-based paradigm, we introduce a
consistent memory propagation module via region-adaptive memory fusion, which
adaptively integrates memory from the previous frame. This ensures semantic
stability in core regions while preserving fine-grained details along object
boundaries. For robust training, we present a larger, high-quality, and diverse
dataset for video matting. Additionally, we incorporate a novel training
strategy that efficiently leverages large-scale segmentation data, boosting
matting stability. With this new network design, dataset, and training
strategy, MatAnyone delivers robust and accurate video matting results in
diverse real-world scenarios, outperforming existing methods.
|
2501.15831 | Christian Tinauer | Christian Tinauer and Maximilian Sackl and Rudolf Stollberger and
Stefan Ropele and Christian Langkammer | Pfungst and Clever Hans: Identifying the unintended cues in a widely
used Alzheimer's disease MRI dataset using explainable deep learning | null | null | null | null | eess.IV cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Backgrounds.
Deep neural networks have demonstrated high accuracy in classifying
Alzheimer's disease (AD). This study aims to enlighten the underlying black-box
nature and reveal individual contributions of T1-weighted (T1w) gray-white
matter texture, volumetric information and preprocessing on classification
performance.
Methods.
We utilized T1w MRI data from the Alzheimer's Disease Neuroimaging Initiative
to distinguish matched AD patients (990 MRIs) from healthy controls (990 MRIs).
Preprocessing included skull stripping and binarization at varying thresholds
to systematically eliminate texture information. A deep neural network was
trained on these configurations, and the model performance was compared using
McNemar tests with discrete Bonferroni-Holm correction. Layer-wise Relevance
Propagation (LRP) and structural similarity metrics between heatmaps were
applied to analyze learned features.
Results.
Classification performance metrics (accuracy, sensitivity, and specificity)
were comparable across all configurations, indicating a negligible influence of
T1w gray- and white-matter signal texture. Models trained on binarized images
demonstrated similar feature performance and relevance distributions, with
volumetric features such as atrophy and skull-stripping features emerging as
primary contributors.
Conclusions.
We revealed a previously undiscovered Clever Hans effect in a widely used AD
MRI dataset. Deep neural network classification predominantly relies on
volumetric features, while eliminating gray-white matter T1w texture did not
decrease performance. This study clearly demonstrates an overestimation of
the importance of gray-white matter contrasts, at least for widely used
structural T1w images, and highlights potential misinterpretation of
performance metrics.
| [
{
"version": "v1",
"created": "Mon, 27 Jan 2025 07:37:37 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 14:41:10 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Tinauer",
"Christian",
""
],
[
"Sackl",
"Maximilian",
""
],
[
"Stollberger",
"Rudolf",
""
],
[
"Ropele",
"Stefan",
""
],
[
"Langkammer",
"Christian",
""
]
] | TITLE: Pfungst and Clever Hans: Identifying the unintended cues in a widely
used Alzheimer's disease MRI dataset using explainable deep learning
ABSTRACT: Backgrounds.
Deep neural networks have demonstrated high accuracy in classifying
Alzheimer's disease (AD). This study aims to enlighten the underlying black-box
nature and reveal individual contributions of T1-weighted (T1w) gray-white
matter texture, volumetric information and preprocessing on classification
performance.
Methods.
We utilized T1w MRI data from the Alzheimer's Disease Neuroimaging Initiative
to distinguish matched AD patients (990 MRIs) from healthy controls (990 MRIs).
Preprocessing included skull stripping and binarization at varying thresholds
to systematically eliminate texture information. A deep neural network was
trained on these configurations, and the model performance was compared using
McNemar tests with discrete Bonferroni-Holm correction. Layer-wise Relevance
Propagation (LRP) and structural similarity metrics between heatmaps were
applied to analyze learned features.
Results.
Classification performance metrics (accuracy, sensitivity, and specificity)
were comparable across all configurations, indicating a negligible influence of
T1w gray- and white-matter signal texture. Models trained on binarized images
demonstrated similar feature performance and relevance distributions, with
volumetric features such as atrophy and skull-stripping features emerging as
primary contributors.
Conclusions.
We revealed a previously undiscovered Clever Hans effect in a widely used AD
MRI dataset. Deep neural network classification predominantly relies on
volumetric features, while eliminating gray-white matter T1w texture did not
decrease performance. This study clearly demonstrates an overestimation of
the importance of gray-white matter contrasts, at least for widely used
structural T1w images, and highlights potential misinterpretation of
performance metrics.
|
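The Clever Hans entry above compares classifiers trained on differently preprocessed images using McNemar tests with Holm correction. The snippet below shows how such a comparison can be run with statsmodels on made-up prediction-agreement tables; it uses the standard Holm procedure as a stand-in for the discrete variant mentioned in the abstract, and all counts are invented.

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.multitest import multipletests

# Hypothetical agreement tables for three pairwise model comparisons:
# rows/cols = (model A correct?, model B correct?) counts on the same test set.
tables = [
    np.array([[820, 35], [28, 107]]),
    np.array([[815, 40], [33, 102]]),
    np.array([[830, 25], [30, 105]]),
]
p_values = [mcnemar(t, exact=False, correction=True).pvalue for t in tables]
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for p, pa, r in zip(p_values, p_adj, reject):
    print(f"raw p={p:.3f}  holm-adjusted p={pa:.3f}  significant={bool(r)}")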
2502.01441 | Quan Dao | Quan Dao, Khanh Doan, Di Liu, Trung Le, Dimitris Metaxas | Improved Training Technique for Latent Consistency Models | Accepted at ICLR 2025 | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Consistency models are a new family of generative models capable of producing
high-quality samples in either a single step or multiple steps. Recently,
consistency models have demonstrated impressive performance, achieving results
on par with diffusion models in the pixel space. However, the success of
scaling consistency training to large-scale datasets, particularly for
text-to-image and video generation tasks, is determined by performance in the
latent space. In this work, we analyze the statistical differences between
pixel and latent spaces, discovering that latent data often contains highly
impulsive outliers, which significantly degrade the performance of iCT in the
latent space. To address this, we replace Pseudo-Huber losses with Cauchy
losses, effectively mitigating the impact of outliers. Additionally, we
introduce a diffusion loss at early timesteps and employ optimal transport (OT)
coupling to further enhance performance. Lastly, we introduce the adaptive
scaling-$c$ scheduler to manage the robust training process and adopt
Non-scaling LayerNorm in the architecture to better capture the statistics of
the features and reduce outlier impact. With these strategies, we successfully
train latent consistency models capable of high-quality sampling with one or
two steps, significantly narrowing the performance gap between latent
consistency and diffusion models. The implementation is released here:
https://github.com/quandao10/sLCT/
| [
{
"version": "v1",
"created": "Mon, 3 Feb 2025 15:25:58 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 03:30:17 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Dao",
"Quan",
""
],
[
"Doan",
"Khanh",
""
],
[
"Liu",
"Di",
""
],
[
"Le",
"Trung",
""
],
[
"Metaxas",
"Dimitris",
""
]
] | TITLE: Improved Training Technique for Latent Consistency Models
ABSTRACT: Consistency models are a new family of generative models capable of producing
high-quality samples in either a single step or multiple steps. Recently,
consistency models have demonstrated impressive performance, achieving results
on par with diffusion models in the pixel space. However, the success of
scaling consistency training to large-scale datasets, particularly for
text-to-image and video generation tasks, is determined by performance in the
latent space. In this work, we analyze the statistical differences between
pixel and latent spaces, discovering that latent data often contains highly
impulsive outliers, which significantly degrade the performance of iCT in the
latent space. To address this, we replace Pseudo-Huber losses with Cauchy
losses, effectively mitigating the impact of outliers. Additionally, we
introduce a diffusion loss at early timesteps and employ optimal transport (OT)
coupling to further enhance performance. Lastly, we introduce the adaptive
scaling-$c$ scheduler to manage the robust training process and adopt
Non-scaling LayerNorm in the architecture to better capture the statistics of
the features and reduce outlier impact. With these strategies, we successfully
train latent consistency models capable of high-quality sampling with one or
two steps, significantly narrowing the performance gap between latent
consistency and diffusion models. The implementation is released here:
https://github.com/quandao10/sLCT/
|
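The latent-consistency entry above swaps Pseudo-Huber losses for Cauchy losses to blunt the effect of impulsive latent outliers. The snippet below compares how the two penalties grow with the residual, using common textbook forms of both losses; the constant c is an arbitrary choice here, not the paper's schedule.

import numpy as np

def pseudo_huber(r, c=1.0):
    # Quadratic for small residuals, roughly linear for large ones.
    return c ** 2 * (np.sqrt(1.0 + (r / c) ** 2) - 1.0)

def cauchy(r, c=1.0):
    # Logarithmic growth: large (outlier) residuals are penalized far more gently.
    return 0.5 * c ** 2 * np.log1p((r / c) ** 2)

for r in [0.1, 1.0, 10.0, 100.0]:
    print(f"residual={r:>6}: pseudo-huber={pseudo_huber(r):10.3f}  cauchy={cauchy(r):8.3f}")
# The Cauchy penalty grows only logarithmically, so a few impulsive latent outliers
# dominate the training signal far less than under the Pseudo-Huber loss.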
2502.04074 | Yihua Cheng | Yihua Cheng, Hengfei Wang, Zhongqun Zhang, Yang Yue, Bo Eun Kim, Feng
Lu, Hyung Jin Chang | 3D Prior is All You Need: Cross-Task Few-shot 2D Gaze Estimation | CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | 3D and 2D gaze estimation share the fundamental objective of capturing eye
movements but are traditionally treated as two distinct research domains. In
this paper, we introduce a novel cross-task few-shot 2D gaze estimation
approach, aiming to adapt a pre-trained 3D gaze estimation network for 2D gaze
prediction on unseen devices using only a few training images. This task is
highly challenging due to the domain gap between 3D and 2D gaze, unknown screen
poses, and limited training data. To address these challenges, we propose a
novel framework that bridges the gap between 3D and 2D gaze. Our framework
contains a physics-based differentiable projection module with learnable
parameters to model screen poses and project 3D gaze into 2D gaze. The
framework is fully differentiable and can integrate into existing 3D gaze
networks without modifying their original architecture. Additionally, we
introduce a dynamic pseudo-labelling strategy for flipped images, which is
particularly challenging for 2D labels due to unknown screen poses. To overcome
this, we reverse the projection process by converting 2D labels to 3D space,
where flipping is performed. Notably, this 3D space is not aligned with the
camera coordinate system, so we learn a dynamic transformation matrix to
compensate for this misalignment. We evaluate our method on MPIIGaze, EVE, and
GazeCapture datasets, collected respectively on laptops, desktop computers, and
mobile devices. The superior performance highlights the effectiveness of our
approach, and demonstrates its strong potential for real-world applications.
| [
{
"version": "v1",
"created": "Thu, 6 Feb 2025 13:37:09 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 02:35:00 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 21:53:43 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Cheng",
"Yihua",
""
],
[
"Wang",
"Hengfei",
""
],
[
"Zhang",
"Zhongqun",
""
],
[
"Yue",
"Yang",
""
],
[
"Kim",
"Bo Eun",
""
],
[
"Lu",
"Feng",
""
],
[
"Chang",
"Hyung Jin",
""
]
] | TITLE: 3D Prior is All You Need: Cross-Task Few-shot 2D Gaze Estimation
ABSTRACT: 3D and 2D gaze estimation share the fundamental objective of capturing eye
movements but are traditionally treated as two distinct research domains. In
this paper, we introduce a novel cross-task few-shot 2D gaze estimation
approach, aiming to adapt a pre-trained 3D gaze estimation network for 2D gaze
prediction on unseen devices using only a few training images. This task is
highly challenging due to the domain gap between 3D and 2D gaze, unknown screen
poses, and limited training data. To address these challenges, we propose a
novel framework that bridges the gap between 3D and 2D gaze. Our framework
contains a physics-based differentiable projection module with learnable
parameters to model screen poses and project 3D gaze into 2D gaze. The
framework is fully differentiable and can integrate into existing 3D gaze
networks without modifying their original architecture. Additionally, we
introduce a dynamic pseudo-labelling strategy for flipped images, which is
particularly challenging for 2D labels due to unknown screen poses. To overcome
this, we reverse the projection process by converting 2D labels to 3D space,
where flipping is performed. Notably, this 3D space is not aligned with the
camera coordinate system, so we learn a dynamic transformation matrix to
compensate for this misalignment. We evaluate our method on MPIIGaze, EVE, and
GazeCapture datasets, collected respectively on laptops, desktop computers, and
mobile devices. The superior performance highlights the effectiveness of our
approach, and demonstrates its strong potential for real-world applications.
|
2502.04144 | Dima Damen | Toby Perrett, Ahmad Darkhalil, Saptarshi Sinha, Omar Emara, Sam
Pollard, Kranti Parida, Kaiting Liu, Prajwal Gatti, Siddhant Bansal, Kevin
Flanagan, Jacob Chalk, Zhifan Zhu, Rhodri Guerrier, Fahd Abdelazim, Bin Zhu,
Davide Moltisanti, Michael Wray, Hazel Doughty, Dima Damen | HD-EPIC: A Highly-Detailed Egocentric Video Dataset | Accepted at CVPR 2025. Project Webpage and Dataset:
http://hd-epic.github.io | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present a validation dataset of newly-collected kitchen-based egocentric
videos, manually annotated with highly detailed and interconnected ground-truth
labels covering: recipe steps, fine-grained actions, ingredients with
nutritional values, moving objects, and audio annotations. Importantly, all
annotations are grounded in 3D through digital twinning of the scene, fixtures,
object locations, and primed with gaze. Footage is collected from unscripted
recordings in diverse home environments, making HD-EPIC the first dataset
collected in-the-wild but with detailed annotations matching those in
controlled lab environments.
We show the potential of our highly-detailed annotations through a
challenging VQA benchmark of 26K questions assessing the capability to
recognise recipes, ingredients, nutrition, fine-grained actions, 3D perception,
object motion, and gaze direction. The powerful long-context Gemini Pro only
achieves 38.5% on this benchmark, showcasing its difficulty and highlighting
shortcomings in current VLMs. We additionally assess action recognition, sound
recognition, and long-term video-object segmentation on HD-EPIC.
HD-EPIC is 41 hours of video in 9 kitchens with digital twins of 413 kitchen
fixtures, capturing 69 recipes, 59K fine-grained actions, 51K audio events, 20K
object movements and 37K object masks lifted to 3D. On average, we have 263
annotations per minute of our unscripted videos.
| [
{
"version": "v1",
"created": "Thu, 6 Feb 2025 15:25:05 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 04:54:54 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Perrett",
"Toby",
""
],
[
"Darkhalil",
"Ahmad",
""
],
[
"Sinha",
"Saptarshi",
""
],
[
"Emara",
"Omar",
""
],
[
"Pollard",
"Sam",
""
],
[
"Parida",
"Kranti",
""
],
[
"Liu",
"Kaiting",
""
],
[
"Gatti",
"Prajwal",
""
],
[
"Bansal",
"Siddhant",
""
],
[
"Flanagan",
"Kevin",
""
],
[
"Chalk",
"Jacob",
""
],
[
"Zhu",
"Zhifan",
""
],
[
"Guerrier",
"Rhodri",
""
],
[
"Abdelazim",
"Fahd",
""
],
[
"Zhu",
"Bin",
""
],
[
"Moltisanti",
"Davide",
""
],
[
"Wray",
"Michael",
""
],
[
"Doughty",
"Hazel",
""
],
[
"Damen",
"Dima",
""
]
] | TITLE: HD-EPIC: A Highly-Detailed Egocentric Video Dataset
ABSTRACT: We present a validation dataset of newly-collected kitchen-based egocentric
videos, manually annotated with highly detailed and interconnected ground-truth
labels covering: recipe steps, fine-grained actions, ingredients with
nutritional values, moving objects, and audio annotations. Importantly, all
annotations are grounded in 3D through digital twinning of the scene, fixtures,
object locations, and primed with gaze. Footage is collected from unscripted
recordings in diverse home environments, making HD-EPIC the first dataset
collected in-the-wild but with detailed annotations matching those in
controlled lab environments.
We show the potential of our highly-detailed annotations through a
challenging VQA benchmark of 26K questions assessing the capability to
recognise recipes, ingredients, nutrition, fine-grained actions, 3D perception,
object motion, and gaze direction. The powerful long-context Gemini Pro only
achieves 38.5% on this benchmark, showcasing its difficulty and highlighting
shortcomings in current VLMs. We additionally assess action recognition, sound
recognition, and long-term video-object segmentation on HD-EPIC.
HD-EPIC is 41 hours of video in 9 kitchens with digital twins of 413 kitchen
fixtures, capturing 69 recipes, 59K fine-grained actions, 51K audio events, 20K
object movements and 37K object masks lifted to 3D. On average, we have 263
annotations per minute of our unscripted videos.
|
2502.05176 | Yu-Lun Liu | Chung-Ho Wu, Yang-Jung Chen, Ying-Huan Chen, Jie-Ying Lee, Bo-Hsu Ke,
Chun-Wei Tuan Mu, Yi-Chuan Huang, Chin-Yang Lin, Min-Hung Chen, Yen-Yu Lin,
Yu-Lun Liu | AuraFusion360: Augmented Unseen Region Alignment for Reference-based
360{\deg} Unbounded Scene Inpainting | Paper accepted to CVPR 2025. Project page:
https://kkennethwu.github.io/aurafusion360/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Three-dimensional scene inpainting is crucial for applications from virtual
reality to architectural visualization, yet existing methods struggle with view
consistency and geometric accuracy in 360{\deg} unbounded scenes. We present
AuraFusion360, a novel reference-based method that enables high-quality object
removal and hole filling in 3D scenes represented by Gaussian Splatting. Our
approach introduces (1) depth-aware unseen mask generation for accurate
occlusion identification, (2) Adaptive Guided Depth Diffusion, a zero-shot
method for accurate initial point placement without requiring additional
training, and (3) SDEdit-based detail enhancement for multi-view coherence. We
also introduce 360-USID, the first comprehensive dataset for 360{\deg}
unbounded scene inpainting with ground truth. Extensive experiments demonstrate
that AuraFusion360 significantly outperforms existing methods, achieving
superior perceptual quality while maintaining geometric accuracy across
dramatic viewpoint changes.
| [
{
"version": "v1",
"created": "Fri, 7 Feb 2025 18:59:55 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 16:21:19 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Wu",
"Chung-Ho",
""
],
[
"Chen",
"Yang-Jung",
""
],
[
"Chen",
"Ying-Huan",
""
],
[
"Lee",
"Jie-Ying",
""
],
[
"Ke",
"Bo-Hsu",
""
],
[
"Mu",
"Chun-Wei Tuan",
""
],
[
"Huang",
"Yi-Chuan",
""
],
[
"Lin",
"Chin-Yang",
""
],
[
"Chen",
"Min-Hung",
""
],
[
"Lin",
"Yen-Yu",
""
],
[
"Liu",
"Yu-Lun",
""
]
] | TITLE: AuraFusion360: Augmented Unseen Region Alignment for Reference-based
360{\deg} Unbounded Scene Inpainting
ABSTRACT: Three-dimensional scene inpainting is crucial for applications from virtual
reality to architectural visualization, yet existing methods struggle with view
consistency and geometric accuracy in 360{\deg} unbounded scenes. We present
AuraFusion360, a novel reference-based method that enables high-quality object
removal and hole filling in 3D scenes represented by Gaussian Splatting. Our
approach introduces (1) depth-aware unseen mask generation for accurate
occlusion identification, (2) Adaptive Guided Depth Diffusion, a zero-shot
method for accurate initial point placement without requiring additional
training, and (3) SDEdit-based detail enhancement for multi-view coherence. We
also introduce 360-USID, the first comprehensive dataset for 360{\deg}
unbounded scene inpainting with ground truth. Extensive experiments demonstrate
that AuraFusion360 significantly outperforms existing methods, achieving
superior perceptual quality while maintaining geometric accuracy across
dramatic viewpoint changes.
|
2502.05374 | Chongyu Fan | Chongyu Fan, Jinghan Jia, Yihua Zhang, Anil Ramakrishna, Mingyi Hong,
Sijia Liu | Towards LLM Unlearning Resilient to Relearning Attacks: A
Sharpness-Aware Minimization Perspective and Beyond | null | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | The LLM unlearning technique has recently been introduced to comply with data
regulations and address the safety and ethical concerns of LLMs by removing the
undesired data-model influence. However, state-of-the-art unlearning methods
face a critical vulnerability: they are susceptible to ``relearning'' the
removed information from a small number of forget data points, known as
relearning attacks. In this paper, we systematically investigate how to make
unlearned models robust against such attacks. For the first time, we establish
a connection between robust unlearning and sharpness-aware minimization (SAM)
through a unified robust optimization framework, in an analogy to adversarial
training designed to defend against adversarial attacks. Our analysis for SAM
reveals that smoothness optimization plays a pivotal role in mitigating
relearning attacks. Thus, we further explore diverse smoothing strategies to
enhance unlearning robustness. Extensive experiments on benchmark datasets,
including WMDP and MUSE, demonstrate that SAM and other smoothness optimization
approaches consistently improve the resistance of LLM unlearning to relearning
attacks. Notably, smoothness-enhanced unlearning also helps defend against
(input-level) jailbreaking attacks, broadening our proposal's impact in
robustifying LLM unlearning. Codes are available at
https://github.com/OPTML-Group/Unlearn-Smooth.
| [
{
"version": "v1",
"created": "Fri, 7 Feb 2025 23:03:55 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Feb 2025 20:04:22 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 12:18:42 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Fan",
"Chongyu",
""
],
[
"Jia",
"Jinghan",
""
],
[
"Zhang",
"Yihua",
""
],
[
"Ramakrishna",
"Anil",
""
],
[
"Hong",
"Mingyi",
""
],
[
"Liu",
"Sijia",
""
]
] | TITLE: Towards LLM Unlearning Resilient to Relearning Attacks: A
Sharpness-Aware Minimization Perspective and Beyond
ABSTRACT: The LLM unlearning technique has recently been introduced to comply with data
regulations and address the safety and ethical concerns of LLMs by removing the
undesired data-model influence. However, state-of-the-art unlearning methods
face a critical vulnerability: they are susceptible to ``relearning'' the
removed information from a small number of forget data points, known as
relearning attacks. In this paper, we systematically investigate how to make
unlearned models robust against such attacks. For the first time, we establish
a connection between robust unlearning and sharpness-aware minimization (SAM)
through a unified robust optimization framework, in an analogy to adversarial
training designed to defend against adversarial attacks. Our analysis for SAM
reveals that smoothness optimization plays a pivotal role in mitigating
relearning attacks. Thus, we further explore diverse smoothing strategies to
enhance unlearning robustness. Extensive experiments on benchmark datasets,
including WMDP and MUSE, demonstrate that SAM and other smoothness optimization
approaches consistently improve the resistance of LLM unlearning to relearning
attacks. Notably, smoothness-enhanced unlearning also helps defend against
(input-level) jailbreaking attacks, broadening our proposal's impact in
robustifying LLM unlearning. Codes are available at
https://github.com/OPTML-Group/Unlearn-Smooth.
|
2502.08013 | arXiv Admin | Frederick Pembroke, Eleanor Featherstonehaugh, Sebastian Wetherington,
Harriet Fitzgerald, Maximilian Featherington, Peter Idliman | Hierarchical Manifold Projection for Ransomware Detection: A Novel
Geometric Approach to Identifying Malicious Encryption Patterns | arXiv admin note: This paper has been withdrawn by arXiv due to
disputed and unverifiable authorship | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Encryption-based cyber threats continue to evolve, employing increasingly
sophisticated techniques to bypass traditional detection mechanisms. Many
existing classification strategies depend on static rule sets, signature-based
matching, or machine learning models that require extensive labeled datasets,
making them ineffective against emerging ransomware families that exhibit
polymorphic and adversarial behaviors. A novel classification framework
structured through hierarchical manifold projection introduces a mathematical
approach to detecting malicious encryption workflows, preserving geometric
consistencies that differentiate ransomware-induced modifications from benign
cryptographic operations. The proposed methodology transforms encryption
sequences into structured manifold embeddings, ensuring classification
robustness through non-Euclidean feature separability rather than reliance on
static indicators. Generalization capabilities remain stable across diverse
ransomware variants, as hierarchical decomposition techniques capture
multi-scale encryption characteristics while maintaining resilience against
code obfuscation and execution flow modifications. Empirical analysis
demonstrates that detection accuracy remains high even when encryption key
variability, delayed execution tactics, or API call obfuscation strategies are
introduced, reinforcing the reliability of manifold-based classification.
Real-time scalability assessments confirm that the proposed approach maintains
computational efficiency across increasing dataset volumes, validating its
applicability to large-scale threat detection scenarios.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 23:20:58 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 12:57:24 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Pembroke",
"Frederick",
""
],
[
"Featherstonehaugh",
"Eleanor",
""
],
[
"Wetherington",
"Sebastian",
""
],
[
"Fitzgerald",
"Harriet",
""
],
[
"Featherington",
"Maximilian",
""
],
[
"Idliman",
"Peter",
""
]
] | TITLE: Hierarchical Manifold Projection for Ransomware Detection: A Novel
Geometric Approach to Identifying Malicious Encryption Patterns
ABSTRACT: Encryption-based cyber threats continue to evolve, employing increasingly
sophisticated techniques to bypass traditional detection mechanisms. Many
existing classification strategies depend on static rule sets, signature-based
matching, or machine learning models that require extensive labeled datasets,
making them ineffective against emerging ransomware families that exhibit
polymorphic and adversarial behaviors. A novel classification framework
structured through hierarchical manifold projection introduces a mathematical
approach to detecting malicious encryption workflows, preserving geometric
consistencies that differentiate ransomware-induced modifications from benign
cryptographic operations. The proposed methodology transforms encryption
sequences into structured manifold embeddings, ensuring classification
robustness through non-Euclidean feature separability rather than reliance on
static indicators. Generalization capabilities remain stable across diverse
ransomware variants, as hierarchical decomposition techniques capture
multi-scale encryption characteristics while maintaining resilience against
code obfuscation and execution flow modifications. Empirical analysis
demonstrates that detection accuracy remains high even when encryption key
variability, delayed execution tactics, or API call obfuscation strategies are
introduced, reinforcing the reliability of manifold-based classification.
Real-time scalability assessments confirm that the proposed approach maintains
computational efficiency across increasing dataset volumes, validating its
applicability to large-scale threat detection scenarios.
|
2502.19694 | Burhaneddin Yaman | Xin Ye, Burhaneddin Yaman, Sheng Cheng, Feng Tao, Abhirup Mallik, Liu
Ren | BEVDiffuser: Plug-and-Play Diffusion Model for BEV Denoising with
Ground-Truth Guidance | CVPR 2025 | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bird's-eye-view (BEV) representations play a crucial role in autonomous
driving tasks. Despite recent advancements in BEV generation, inherent noise,
stemming from sensor limitations and the learning process, remains largely
unaddressed, resulting in suboptimal BEV representations that adversely impact
the performance of downstream tasks. To address this, we propose BEVDiffuser, a
novel diffusion model that effectively denoises BEV feature maps using the
ground-truth object layout as guidance. BEVDiffuser can be operated in a
plug-and-play manner during training time to enhance existing BEV models
without requiring any architectural modifications. Extensive experiments on the
challenging nuScenes dataset demonstrate BEVDiffuser's exceptional denoising
and generation capabilities, which enable significant enhancement to existing
BEV models, as evidenced by notable improvements of 12.3\% in mAP and 10.1\% in
NDS achieved for 3D object detection without introducing additional
computational complexity. Moreover, substantial improvements in long-tail
object detection and under challenging weather and lighting conditions further
validate BEVDiffuser's effectiveness in denoising and enhancing BEV
representations.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 02:11:29 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 22:27:08 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Ye",
"Xin",
""
],
[
"Yaman",
"Burhaneddin",
""
],
[
"Cheng",
"Sheng",
""
],
[
"Tao",
"Feng",
""
],
[
"Mallik",
"Abhirup",
""
],
[
"Ren",
"Liu",
""
]
] | TITLE: BEVDiffuser: Plug-and-Play Diffusion Model for BEV Denoising with
Ground-Truth Guidance
ABSTRACT: Bird's-eye-view (BEV) representations play a crucial role in autonomous
driving tasks. Despite recent advancements in BEV generation, inherent noise,
stemming from sensor limitations and the learning process, remains largely
unaddressed, resulting in suboptimal BEV representations that adversely impact
the performance of downstream tasks. To address this, we propose BEVDiffuser, a
novel diffusion model that effectively denoises BEV feature maps using the
ground-truth object layout as guidance. BEVDiffuser can be operated in a
plug-and-play manner during training time to enhance existing BEV models
without requiring any architectural modifications. Extensive experiments on the
challenging nuScenes dataset demonstrate BEVDiffuser's exceptional denoising
and generation capabilities, which enable significant enhancement to existing
BEV models, as evidenced by notable improvements of 12.3\% in mAP and 10.1\% in
NDS achieved for 3D object detection without introducing additional
computational complexity. Moreover, substantial improvements in long-tail
object detection and under challenging weather and lighting conditions further
validate BEVDiffuser's effectiveness in denoising and enhancing BEV
representations.
|
2502.21257 | Yuheng Ji | Yuheng Ji, Huajie Tan, Jiayu Shi, Xiaoshuai Hao, Yuan Zhang, Hengyuan
Zhang, Pengwei Wang, Mengdi Zhao, Yao Mu, Pengju An, Xinda Xue, Qinghang Su,
Huaihai Lyu, Xiaolong Zheng, Jiaming Liu, Zhongyuan Wang, Shanghang Zhang | RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract
to Concrete | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in Multimodal Large Language Models (MLLMs) have shown
remarkable capabilities across various multimodal contexts. However, their
application in robotic scenarios, particularly for long-horizon manipulation
tasks, reveals significant limitations. These limitations arise from the
current MLLMs lacking three essential robotic brain capabilities: Planning
Capability, which involves decomposing complex manipulation instructions into
manageable sub-tasks; Affordance Perception, the ability to recognize and
interpret the affordances of interactive objects; and Trajectory Prediction,
the foresight to anticipate the complete manipulation trajectory necessary for
successful execution. To enhance the robotic brain's core capabilities from
abstract to concrete, we introduce ShareRobot, a high-quality heterogeneous
dataset that labels multi-dimensional information such as task planning, object
affordance, and end-effector trajectory. ShareRobot's diversity and accuracy
have been meticulously refined by three human annotators. Building on this
dataset, we developed RoboBrain, an MLLM-based model that combines robotic and
general multi-modal data, utilizes a multi-stage training strategy, and
incorporates long videos and high-resolution images to improve its robotic
manipulation capabilities. Extensive experiments demonstrate that RoboBrain
achieves state-of-the-art performance across various robotic tasks,
highlighting its potential to advance robotic brain capabilities.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 17:30:39 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 05:46:03 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Ji",
"Yuheng",
""
],
[
"Tan",
"Huajie",
""
],
[
"Shi",
"Jiayu",
""
],
[
"Hao",
"Xiaoshuai",
""
],
[
"Zhang",
"Yuan",
""
],
[
"Zhang",
"Hengyuan",
""
],
[
"Wang",
"Pengwei",
""
],
[
"Zhao",
"Mengdi",
""
],
[
"Mu",
"Yao",
""
],
[
"An",
"Pengju",
""
],
[
"Xue",
"Xinda",
""
],
[
"Su",
"Qinghang",
""
],
[
"Lyu",
"Huaihai",
""
],
[
"Zheng",
"Xiaolong",
""
],
[
"Liu",
"Jiaming",
""
],
[
"Wang",
"Zhongyuan",
""
],
[
"Zhang",
"Shanghang",
""
]
] | TITLE: RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract
to Concrete
ABSTRACT: Recent advancements in Multimodal Large Language Models (MLLMs) have shown
remarkable capabilities across various multimodal contexts. However, their
application in robotic scenarios, particularly for long-horizon manipulation
tasks, reveals significant limitations. These limitations arise from the
current MLLMs lacking three essential robotic brain capabilities: Planning
Capability, which involves decomposing complex manipulation instructions into
manageable sub-tasks; Affordance Perception, the ability to recognize and
interpret the affordances of interactive objects; and Trajectory Prediction,
the foresight to anticipate the complete manipulation trajectory necessary for
successful execution. To enhance the robotic brain's core capabilities from
abstract to concrete, we introduce ShareRobot, a high-quality heterogeneous
dataset that labels multi-dimensional information such as task planning, object
affordance, and end-effector trajectory. ShareRobot's diversity and accuracy
have been meticulously refined by three human annotators. Building on this
dataset, we developed RoboBrain, an MLLM-based model that combines robotic and
general multi-modal data, utilizes a multi-stage training strategy, and
incorporates long videos and high-resolution images to improve its robotic
manipulation capabilities. Extensive experiments demonstrate that RoboBrain
achieves state-of-the-art performance across various robotic tasks,
highlighting its potential to advance robotic brain capabilities.
|
2503.02115 | Jimmy Yu | Jimmy K. Yu, Marcos Mart\'inez-Romero, Matthew Horridge, Mete U.
Akdogan, Mark A. Musen | A General-Purpose Data Harmonization Framework: Supporting Reproducible
and Scalable Data Integration in the RADx Data Hub | submitted to the AMIA 2025 Annual Symposium | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the age of big data, it is important for primary research data to follow
the FAIR principles of findability, accessibility, interoperability, and
reusability. Data harmonization enhances interoperability and reusability by
aligning heterogeneous data under standardized representations, benefiting both
repository curators responsible for upholding data quality standards and
consumers who require unified datasets. However, data harmonization is
difficult in practice, requiring significant domain and technical expertise. We
present a software framework to facilitate principled and reproducible
harmonization protocols. Our framework implements a novel strategy of building
harmonization transformations from parameterizable primitive operations, such
as the assignment of numerical values to user-specified categories, with
automated bookkeeping for executed transformations. We establish our data
representation model and harmonization strategy and then report a
proof-of-concept application in the context of the RADx Data Hub. Our framework
enables data practitioners to execute transparent and reproducible
harmonization protocols that align closely with their research goals.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 22:56:35 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 18:46:45 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Yu",
"Jimmy K.",
""
],
[
"Martínez-Romero",
"Marcos",
""
],
[
"Horridge",
"Matthew",
""
],
[
"Akdogan",
"Mete U.",
""
],
[
"Musen",
"Mark A.",
""
]
] | TITLE: A General-Purpose Data Harmonization Framework: Supporting Reproducible
and Scalable Data Integration in the RADx Data Hub
ABSTRACT: In the age of big data, it is important for primary research data to follow
the FAIR principles of findability, accessibility, interoperability, and
reusability. Data harmonization enhances interoperability and reusability by
aligning heterogeneous data under standardized representations, benefiting both
repository curators responsible for upholding data quality standards and
consumers who require unified datasets. However, data harmonization is
difficult in practice, requiring significant domain and technical expertise. We
present a software framework to facilitate principled and reproducible
harmonization protocols. Our framework implements a novel strategy of building
harmonization transformations from parameterizable primitive operations, such
as the assignment of numerical values to user-specified categories, with
automated bookkeeping for executed transformations. We establish our data
representation model and harmonization strategy and then report a
proof-of-concept application in the context of the RADx Data Hub. Our framework
enables data practitioners to execute transparent and reproducible
harmonization protocols that align closely with their research goals.
|
2503.02857 | Nuria Chandra | Nuria Alina Chandra, Ryan Murtfeldt, Lin Qiu, Arnab Karmakar, Hannah
Lee, Emmanuel Tanumihardja, Kevin Farhat, Ben Caffee, Sejin Paik, Changyeon
Lee, Jongwook Choi, Aerin Kim, Oren Etzioni | Deepfake-Eval-2024: A Multi-Modal In-the-Wild Benchmark of Deepfakes
Circulated in 2024 | null | null | null | null | cs.CV cs.AI cs.CY | http://creativecommons.org/licenses/by-sa/4.0/ | In the age of increasingly realistic generative AI, robust deepfake detection
is essential for mitigating fraud and disinformation. While many deepfake
detectors report high accuracy on academic datasets, we show that these
academic benchmarks are out of date and not representative of real-world
deepfakes. We introduce Deepfake-Eval-2024, a new deepfake detection benchmark
consisting of in-the-wild deepfakes collected from social media and deepfake
detection platform users in 2024. Deepfake-Eval-2024 consists of 45 hours of
videos, 56.5 hours of audio, and 1,975 images, encompassing the latest
manipulation technologies. The benchmark contains diverse media content from 88
different websites in 52 different languages. We find that the performance of
open-source state-of-the-art deepfake detection models drops precipitously when
evaluated on Deepfake-Eval-2024, with AUC decreasing by 50% for video, 48% for
audio, and 45% for image models compared to previous benchmarks. We also
evaluate commercial deepfake detection models and models finetuned on
Deepfake-Eval-2024, and find that they have superior performance to
off-the-shelf open-source models, but do not yet reach the accuracy of deepfake
forensic analysts. The dataset is available at
https://github.com/nuriachandra/Deepfake-Eval-2024.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 18:33:22 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 20:24:16 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 20:46:15 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Chandra",
"Nuria Alina",
""
],
[
"Murtfeldt",
"Ryan",
""
],
[
"Qiu",
"Lin",
""
],
[
"Karmakar",
"Arnab",
""
],
[
"Lee",
"Hannah",
""
],
[
"Tanumihardja",
"Emmanuel",
""
],
[
"Farhat",
"Kevin",
""
],
[
"Caffee",
"Ben",
""
],
[
"Paik",
"Sejin",
""
],
[
"Lee",
"Changyeon",
""
],
[
"Choi",
"Jongwook",
""
],
[
"Kim",
"Aerin",
""
],
[
"Etzioni",
"Oren",
""
]
] | TITLE: Deepfake-Eval-2024: A Multi-Modal In-the-Wild Benchmark of Deepfakes
Circulated in 2024
ABSTRACT: In the age of increasingly realistic generative AI, robust deepfake detection
is essential for mitigating fraud and disinformation. While many deepfake
detectors report high accuracy on academic datasets, we show that these
academic benchmarks are out of date and not representative of real-world
deepfakes. We introduce Deepfake-Eval-2024, a new deepfake detection benchmark
consisting of in-the-wild deepfakes collected from social media and deepfake
detection platform users in 2024. Deepfake-Eval-2024 consists of 45 hours of
videos, 56.5 hours of audio, and 1,975 images, encompassing the latest
manipulation technologies. The benchmark contains diverse media content from 88
different websites in 52 different languages. We find that the performance of
open-source state-of-the-art deepfake detection models drops precipitously when
evaluated on Deepfake-Eval-2024, with AUC decreasing by 50% for video, 48% for
audio, and 45% for image models compared to previous benchmarks. We also
evaluate commercial deepfake detection models and models finetuned on
Deepfake-Eval-2024, and find that they have superior performance to
off-the-shelf open-source models, but do not yet reach the accuracy of deepfake
forensic analysts. The dataset is available at
https://github.com/nuriachandra/Deepfake-Eval-2024.
|
2503.06056 | Weixi Zheng | Weixi Zheng, Aoling Huang, Jingping Yuan, Haoyu Zhao, Zhou Zhao,
Yongchao Xu, Thierry G\'eraud | Pathological Prior-Guided Multiple Instance Learning For Mitigating
Catastrophic Forgetting in Breast Cancer Whole Slide Image Classification | ICASSP2025(Oral) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In histopathology, intelligent diagnosis of Whole Slide Images (WSIs) is
essential for automating and objectifying diagnoses, reducing the workload of
pathologists. However, diagnostic models often face the challenge of forgetting
previously learned data during incremental training on datasets from different
sources. To address this issue, we propose a new framework PaGMIL to mitigate
catastrophic forgetting in breast cancer WSI classification. Our framework
introduces two key components into the common MIL model architecture. First, it
leverages microscopic pathological priors to select more accurate and diverse
representative patches for MIL. Second, it trains separate classification
heads for each task and uses macroscopic pathological prior knowledge, treating
the thumbnail as a prompt guide (PG) to select the appropriate classification
head. We evaluate the continual learning performance of PaGMIL across several
public breast cancer datasets. PaGMIL achieves a better balance between the
performance of the current task and the retention of previous tasks,
outperforming other continual learning methods. Our code will be open-sourced
upon acceptance.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 04:51:58 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 06:58:28 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Zheng",
"Weixi",
""
],
[
"Huang",
"Aoling",
""
],
[
"Yuan",
"Jingping",
""
],
[
"Zhao",
"Haoyu",
""
],
[
"Zhao",
"Zhou",
""
],
[
"Xu",
"Yongchao",
""
],
[
"Géraud",
"Thierry",
""
]
] | TITLE: Pathological Prior-Guided Multiple Instance Learning For Mitigating
Catastrophic Forgetting in Breast Cancer Whole Slide Image Classification
ABSTRACT: In histopathology, intelligent diagnosis of Whole Slide Images (WSIs) is
essential for automating and objectifying diagnoses, reducing the workload of
pathologists. However, diagnostic models often face the challenge of forgetting
previously learned data during incremental training on datasets from different
sources. To address this issue, we propose a new framework PaGMIL to mitigate
catastrophic forgetting in breast cancer WSI classification. Our framework
introduces two key components into the common MIL model architecture. First, it
leverages microscopic pathological priors to select more accurate and diverse
representative patches for MIL. Second, it trains separate classification
heads for each task and uses macroscopic pathological prior knowledge, treating
the thumbnail as a prompt guide (PG) to select the appropriate classification
head. We evaluate the continual learning performance of PaGMIL across several
public breast cancer datasets. PaGMIL achieves a better balance between the
performance of the current task and the retention of previous tasks,
outperforming other continual learning methods. Our code will be open-sourced
upon acceptance.
|
2503.07588 | Junwei Luo | Junwei Luo, Yingying Zhang, Xue Yang, Kang Wu, Qi Zhu, Lei Liang,
Jingdong Chen, Yansheng Li | When Large Vision-Language Model Meets Large Remote Sensing Imagery:
Coarse-to-Fine Text-Guided Token Pruning | 12 pages, 6 figures, 7 tables | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient vision-language understanding of large Remote Sensing Images (RSIs)
is meaningful but challenging. Current Large Vision-Language Models (LVLMs)
typically employ limited pre-defined grids to process images, leading to
information loss when handling gigapixel RSIs. Conversely, using unlimited
grids significantly increases computational costs. To preserve image details
while reducing computational complexity, we propose a text-guided token pruning
method with Dynamic Image Pyramid (DIP) integration. Our method introduces: (i)
a Region Focus Module (RFM) that leverages text-aware region localization
capability to identify critical vision tokens, and (ii) a coarse-to-fine image
tile selection and vision token pruning strategy based on DIP, which is guided
by RFM outputs and avoids directly processing the entire large imagery.
Additionally, existing benchmarks for evaluating LVLMs' perception ability on
large RSI suffer from limited question diversity and constrained image sizes.
We construct a new benchmark named LRS-VQA, which contains 7,333 QA pairs
across 8 categories, with image length up to 27,328 pixels. Our method
outperforms existing high-resolution strategies on four datasets using the same
data. Moreover, compared to existing token reduction methods, our approach
demonstrates higher efficiency under high-resolution settings. Dataset and code
are available at https://github.com/VisionXLab/LRS-VQA.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 17:51:16 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 15:05:34 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Luo",
"Junwei",
""
],
[
"Zhang",
"Yingying",
""
],
[
"Yang",
"Xue",
""
],
[
"Wu",
"Kang",
""
],
[
"Zhu",
"Qi",
""
],
[
"Liang",
"Lei",
""
],
[
"Chen",
"Jingdong",
""
],
[
"Li",
"Yansheng",
""
]
] | TITLE: When Large Vision-Language Model Meets Large Remote Sensing Imagery:
Coarse-to-Fine Text-Guided Token Pruning
ABSTRACT: Efficient vision-language understanding of large Remote Sensing Images (RSIs)
is meaningful but challenging. Current Large Vision-Language Models (LVLMs)
typically employ limited pre-defined grids to process images, leading to
information loss when handling gigapixel RSIs. Conversely, using unlimited
grids significantly increases computational costs. To preserve image details
while reducing computational complexity, we propose a text-guided token pruning
method with Dynamic Image Pyramid (DIP) integration. Our method introduces: (i)
a Region Focus Module (RFM) that leverages text-aware region localization
capability to identify critical vision tokens, and (ii) a coarse-to-fine image
tile selection and vision token pruning strategy based on DIP, which is guided
by RFM outputs and avoids directly processing the entire large imagery.
Additionally, existing benchmarks for evaluating LVLMs' perception ability on
large RSI suffer from limited question diversity and constrained image sizes.
We construct a new benchmark named LRS-VQA, which contains 7,333 QA pairs
across 8 categories, with image length up to 27,328 pixels. Our method
outperforms existing high-resolution strategies on four datasets using the same
data. Moreover, compared to existing token reduction methods, our approach
demonstrates higher efficiency under high-resolution settings. Dataset and code
are available at https://github.com/VisionXLab/LRS-VQA.
|
2503.07633 | Ismael Abdulrahman | Ismael Abdulrahman | A Quantum Neural Network Transfer-Learning Model for Forecasting
Problems with Continuous and Discrete Variables | null | null | null | null | cs.LG cs.SY eess.SY quant-ph | http://creativecommons.org/licenses/by/4.0/ | This study introduces simple yet effective continuous- and discrete-variable
quantum neural network (QNN) models as a transfer-learning approach for
forecasting tasks. The CV-QNN features a single quantum layer with two qubits
to establish entanglement and utilizes a minimal set of quantum gates,
including displacement, rotation, beam splitter, squeezing, and a non-Gaussian
cubic-phase gate, with a maximum of eight trainable parameters. A key advantage
of this model is its ability to be trained on a single dataset, after which the
learned parameters can be transferred to other forecasting problems with little
to no fine-tuning. Initially trained on the Kurdistan load demand dataset, the
model's frozen parameters are successfully applied to various forecasting
tasks, including energy consumption, traffic flow, weather conditions, and
cryptocurrency price prediction, demonstrating strong performance. Furthermore,
the study introduces a discrete-variable quantum model with an equivalent 2-
and 4-wire configuration and presents a performance assessment, showing good
but relatively lower effectiveness compared to the continuous-variable model.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 22:38:51 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 13:35:29 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Abdulrahman",
"Ismael",
""
]
] | TITLE: A Quantum Neural Network Transfer-Learning Model for Forecasting
Problems with Continuous and Discrete Variables
ABSTRACT: This study introduces simple yet effective continuous- and discrete-variable
quantum neural network (QNN) models as a transfer-learning approach for
forecasting tasks. The CV-QNN features a single quantum layer with two qubits
to establish entanglement and utilizes a minimal set of quantum gates,
including displacement, rotation, beam splitter, squeezing, and a non-Gaussian
cubic-phase gate, with a maximum of eight trainable parameters. A key advantage
of this model is its ability to be trained on a single dataset, after which the
learned parameters can be transferred to other forecasting problems with little
to no fine-tuning. Initially trained on the Kurdistan load demand dataset, the
model's frozen parameters are successfully applied to various forecasting
tasks, including energy consumption, traffic flow, weather conditions, and
cryptocurrency price prediction, demonstrating strong performance. Furthermore,
the study introduces a discrete-variable quantum model with an equivalent 2-
and 4-wire configuration and presents a performance assessment, showing good
but relatively lower effectiveness compared to the continuous-variable model.
|
2503.08098 | Yuheng Ma | Yuheng Ma, Feiyu Jiang, Zifeng Zhao, Hanfang Yang, Yi Yu | Locally Private Nonparametric Contextual Multi-armed Bandits | null | null | null | null | stat.ML cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by privacy concerns in sequential decision-making on sensitive
data, we address the challenge of nonparametric contextual multi-armed bandits
(MAB) under local differential privacy (LDP). We develop a
uniform-confidence-bound-type estimator, showing its minimax optimality
supported by a matching minimax lower bound. We further consider the case where
auxiliary datasets are available, subject also to (possibly heterogeneous) LDP
constraints. Under the widely-used covariate shift framework, we propose a
jump-start scheme to effectively utilize the auxiliary data, the minimax
optimality of which is further established by a matching lower bound.
Comprehensive experiments on both synthetic and real-world datasets validate
our theoretical results and underscore the effectiveness of the proposed
methods.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 07:00:57 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 16:13:14 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Ma",
"Yuheng",
""
],
[
"Jiang",
"Feiyu",
""
],
[
"Zhao",
"Zifeng",
""
],
[
"Yang",
"Hanfang",
""
],
[
"Yu",
"Yi",
""
]
] | TITLE: Locally Private Nonparametric Contextual Multi-armed Bandits
ABSTRACT: Motivated by privacy concerns in sequential decision-making on sensitive
data, we address the challenge of nonparametric contextual multi-armed bandits
(MAB) under local differential privacy (LDP). We develop a
uniform-confidence-bound-type estimator, showing its minimax optimality
supported by a matching minimax lower bound. We further consider the case where
auxiliary datasets are available, subject also to (possibly heterogeneous) LDP
constraints. Under the widely-used covariate shift framework, we propose a
jump-start scheme to effectively utilize the auxiliary data, the minimax
optimality of which is further established by a matching lower bound.
Comprehensive experiments on both synthetic and real-world datasets validate
our theoretical results and underscore the effectiveness of the proposed
methods.
|
2503.10603 | Yanjun Chi | Jun Yu and Lingsi Zhu and Yanjun Chi and Yunxiang Zhang and Yang Zheng
and Yongqi Wang and Xilong Lu | Technical Approach for the EMI Challenge in the 8th Affective Behavior
Analysis in-the-Wild Competition | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emotional Mimicry Intensity (EMI) estimation plays a pivotal role in
understanding human social behavior and advancing human-computer interaction.
The core challenges lie in dynamic correlation modeling and robust fusion of
multimodal temporal signals. To address the limitations of existing
methods--insufficient exploitation of cross-modal synergies, sensitivity to
noise, and constrained fine-grained alignment capabilities--this paper proposes
a dual-stage cross-modal alignment framework. Stage 1 develops vision-text and
audio-text contrastive learning networks based on a CLIP architecture,
achieving preliminary feature-space alignment through modality-decoupled
pre-training. Stage 2 introduces a temporal-aware dynamic fusion module
integrating Temporal Convolutional Networks (TCN) and gated bidirectional LSTM
to capture macro-evolution patterns of facial expressions and local dynamics of
acoustic features, respectively. A novel quality-guided fusion strategy further
enables differentiable weight allocation for modality compensation under
occlusion and noise. Experiments on the Hume-Vidmimic2 dataset demonstrate
superior performance with an average Pearson correlation coefficient of 0.51
across six emotion dimensions on the validation set. Remarkably, our method
achieved 0.68 on the test set, securing runner-up in the EMI Challenge Track of
the 8th ABAW (Affective Behavior Analysis in the Wild) Competition, offering a
novel pathway for fine-grained emotion analysis in open environments.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 17:46:16 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 09:55:43 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 08:46:00 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Yu",
"Jun",
""
],
[
"Zhu",
"Lingsi",
""
],
[
"Chi",
"Yanjun",
""
],
[
"Zhang",
"Yunxiang",
""
],
[
"Zheng",
"Yang",
""
],
[
"Wang",
"Yongqi",
""
],
[
"Lu",
"Xilong",
""
]
] | TITLE: Technical Approach for the EMI Challenge in the 8th Affective Behavior
Analysis in-the-Wild Competition
ABSTRACT: Emotional Mimicry Intensity (EMI) estimation plays a pivotal role in
understanding human social behavior and advancing human-computer interaction.
The core challenges lie in dynamic correlation modeling and robust fusion of
multimodal temporal signals. To address the limitations of existing
methods--insufficient exploitation of cross-modal synergies, sensitivity to
noise, and constrained fine-grained alignment capabilities--this paper proposes
a dual-stage cross-modal alignment framework. Stage 1 develops vision-text and
audio-text contrastive learning networks based on a CLIP architecture,
achieving preliminary feature-space alignment through modality-decoupled
pre-training. Stage 2 introduces a temporal-aware dynamic fusion module
integrating Temporal Convolutional Networks (TCN) and gated bidirectional LSTM
to capture macro-evolution patterns of facial expressions and local dynamics of
acoustic features, respectively. A novel quality-guided fusion strategy further
enables differentiable weight allocation for modality compensation under
occlusion and noise. Experiments on the Hume-Vidmimic2 dataset demonstrate
superior performance with an average Pearson correlation coefficient of 0.51
across six emotion dimensions on the validation set. Remarkably, our method
achieved 0.68 on the test set, securing runner-up in the EMI Challenge Track of
the 8th ABAW (Affective Behavior Analysis in the Wild) Competition, offering a
novel pathway for fine-grained emotion analysis in open environments.
|
2503.13060 | Sparsh Mittal | Harshal Kausadikar and Tanvi Kale and Onkar Susladkar and Sparsh
Mittal | Historic Scripts to Modern Vision: A Novel Dataset and A VLM Framework
for Transliteration of Modi Script to Devanagari | Under submission at a conference | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In medieval India, the Marathi language was written using the Modi script.
The texts written in Modi script include extensive knowledge about medieval
sciences, medicines, land records and authentic evidence about Indian history.
Around 40 million documents are in poor condition and have not yet been
transliterated. Furthermore, only a few experts in this domain can
transliterate this script into English or Devanagari. Most of the past research
predominantly focuses on individual character recognition. A system that can
transliterate Modi script documents to Devanagari script is needed. We propose
the MoDeTrans dataset, comprising 2,043 images of Modi script documents
accompanied by their corresponding textual transliterations in Devanagari. We
further introduce MoScNet (\textbf{Mo}di \textbf{Sc}ript \textbf{Net}work), a
novel Vision-Language Model (VLM) framework for transliterating Modi script
images into Devanagari text. MoScNet leverages Knowledge Distillation, where a
student model learns from a teacher model to enhance transliteration
performance. The final student model of MoScNet has better performance than the
teacher model while having 163$\times$ fewer parameters. Our work is the first
to perform direct transliteration from the handwritten Modi script to the
Devanagari script. MoScNet also shows competitive results on the optical
character recognition (OCR) task.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 11:07:29 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 05:11:40 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Kausadikar",
"Harshal",
""
],
[
"Kale",
"Tanvi",
""
],
[
"Susladkar",
"Onkar",
""
],
[
"Mittal",
"Sparsh",
""
]
] | TITLE: Historic Scripts to Modern Vision: A Novel Dataset and A VLM Framework
for Transliteration of Modi Script to Devanagari
ABSTRACT: In medieval India, the Marathi language was written using the Modi script.
The texts written in Modi script include extensive knowledge about medieval
sciences, medicines, land records and authentic evidence about Indian history.
Around 40 million documents are in poor condition and have not yet been
transliterated. Furthermore, only a few experts in this domain can
transliterate this script into English or Devanagari. Most of the past research
predominantly focuses on individual character recognition. A system that can
transliterate Modi script documents to Devanagari script is needed. We propose
the MoDeTrans dataset, comprising 2,043 images of Modi script documents
accompanied by their corresponding textual transliterations in Devanagari. We
further introduce MoScNet (\textbf{Mo}di \textbf{Sc}ript \textbf{Net}work), a
novel Vision-Language Model (VLM) framework for transliterating Modi script
images into Devanagari text. MoScNet leverages Knowledge Distillation, where a
student model learns from a teacher model to enhance transliteration
performance. The final student model of MoScNet has better performance than the
teacher model while having 163$\times$ fewer parameters. Our work is the first
to perform direct transliteration from the handwritten Modi script to the
Devanagari script. MoScNet also shows competitive results on the optical
character recognition (OCR) task.
|
2503.13281 | Xiaodi Li | Xiaodi Li, Shaika Chowdhury, Chung Il Wi, Maria Vassilaki, Xiaoke Liu,
Terence T Sio, Owen Garrick, Young J Juhn, James R Cerhan, Cui Tao, and Nansu
Zong | LLM-Match: An Open-Sourced Patient Matching Model Based on Large
Language Models and Retrieval-Augmented Generation | 10 pages, 1 figure | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Patient matching is the process of linking patients to appropriate clinical
trials by accurately identifying and matching their medical records with trial
eligibility criteria. We propose LLM-Match, a novel framework for patient
matching leveraging fine-tuned open-source large language models. Our approach
consists of four key components. First, a retrieval-augmented generation (RAG)
module extracts relevant patient context from a vast pool of electronic health
records (EHRs). Second, a prompt generation module constructs input prompts by
integrating trial eligibility criteria (both inclusion and exclusion criteria),
patient context, and system instructions. Third, a fine-tuning module with a
classification head optimizes the model parameters using structured prompts and
ground-truth labels. Fourth, an evaluation module assesses the fine-tuned
model's performance on the testing datasets. We evaluated LLM-Match on four
open datasets - n2c2, SIGIR, TREC 2021, and TREC 2022 - using open-source
models, comparing it against TrialGPT, Zero-Shot, and GPT-4-based closed
models. LLM-Match outperformed all baselines.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 15:31:55 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 14:56:41 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 19:32:25 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Li",
"Xiaodi",
""
],
[
"Chowdhury",
"Shaika",
""
],
[
"Wi",
"Chung Il",
""
],
[
"Vassilaki",
"Maria",
""
],
[
"Liu",
"Xiaoke",
""
],
[
"Sio",
"Terence T",
""
],
[
"Garrick",
"Owen",
""
],
[
"Juhn",
"Young J",
""
],
[
"Cerhan",
"James R",
""
],
[
"Tao",
"Cui",
""
],
[
"Zong",
"Nansu",
""
]
] | TITLE: LLM-Match: An Open-Sourced Patient Matching Model Based on Large
Language Models and Retrieval-Augmented Generation
ABSTRACT: Patient matching is the process of linking patients to appropriate clinical
trials by accurately identifying and matching their medical records with trial
eligibility criteria. We propose LLM-Match, a novel framework for patient
matching leveraging fine-tuned open-source large language models. Our approach
consists of four key components. First, a retrieval-augmented generation (RAG)
module extracts relevant patient context from a vast pool of electronic health
records (EHRs). Second, a prompt generation module constructs input prompts by
integrating trial eligibility criteria (both inclusion and exclusion criteria),
patient context, and system instructions. Third, a fine-tuning module with a
classification head optimizes the model parameters using structured prompts and
ground-truth labels. Fourth, an evaluation module assesses the fine-tuned
model's performance on the testing datasets. We evaluated LLM-Match on four
open datasets - n2c2, SIGIR, TREC 2021, and TREC 2022 - using open-source
models, comparing it against TrialGPT, Zero-Shot, and GPT-4-based closed
models. LLM-Match outperformed all baselines.
|
2503.13925 | Da Kuang | Da Kuang, Guanwen Qiu, Junhyong Kim | Reconstructing Cell Lineage Trees from Phenotypic Features with Metric
Learning | null | null | null | null | cs.LG q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How a single fertilized cell gives rise to a complex array of specialized
cell types in development is a central question in biology. The cells grow,
divide, and acquire differentiated characteristics through poorly understood
molecular processes. A key approach to studying developmental processes is to
infer the tree graph of cell lineage division and differentiation histories,
providing an analytical framework for dissecting individual cells' molecular
decisions during replication and differentiation. Although genetically
engineered lineage-tracing methods have advanced the field, they are either
infeasible or ethically constrained in many organisms. In contrast, modern
single-cell technologies can measure high-content molecular profiles (e.g.,
transcriptomes) in a wide range of biological systems.
Here, we introduce CellTreeQM, a novel deep learning method based on
transformer architectures that learns an embedding space with geometric
properties optimized for tree-graph inference. By formulating lineage
reconstruction as a tree-metric learning problem, we have systematically
explored supervised, weakly supervised, and unsupervised training settings and
present a Lineage Reconstruction Benchmark to facilitate comprehensive
evaluation of our learning method. We benchmarked the method on (1) synthetic
data modeled via Brownian motion with independent noise and spurious signals
and (2) lineage-resolved single-cell RNA sequencing datasets. Experimental
results show that CellTreeQM recovers lineage structures with minimal
supervision and limited data, offering a scalable framework for uncovering cell
lineage relationships in challenging animal models. To our knowledge, this is
the first method to cast cell lineage inference explicitly as a metric learning
task, paving the way for future computational models aimed at uncovering the
molecular dynamics of cell lineage.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 05:41:03 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Kuang",
"Da",
""
],
[
"Qiu",
"Guanwen",
""
],
[
"Kim",
"Junhyong",
""
]
] | TITLE: Reconstructing Cell Lineage Trees from Phenotypic Features with Metric
Learning
ABSTRACT: How a single fertilized cell gives rise to a complex array of specialized
cell types in development is a central question in biology. The cells grow,
divide, and acquire differentiated characteristics through poorly understood
molecular processes. A key approach to studying developmental processes is to
infer the tree graph of cell lineage division and differentiation histories,
providing an analytical framework for dissecting individual cells' molecular
decisions during replication and differentiation. Although genetically
engineered lineage-tracing methods have advanced the field, they are either
infeasible or ethically constrained in many organisms. In contrast, modern
single-cell technologies can measure high-content molecular profiles (e.g.,
transcriptomes) in a wide range of biological systems.
Here, we introduce CellTreeQM, a novel deep learning method based on
transformer architectures that learns an embedding space with geometric
properties optimized for tree-graph inference. By formulating lineage
reconstruction as a tree-metric learning problem, we have systematically
explored supervised, weakly supervised, and unsupervised training settings and
present a Lineage Reconstruction Benchmark to facilitate comprehensive
evaluation of our learning method. We benchmarked the method on (1) synthetic
data modeled via Brownian motion with independent noise and spurious signals
and (2) lineage-resolved single-cell RNA sequencing datasets. Experimental
results show that CellTreeQM recovers lineage structures with minimal
supervision and limited data, offering a scalable framework for uncovering cell
lineage relationships in challenging animal models. To our knowledge, this is
the first method to cast cell lineage inference explicitly as a metric learning
task, paving the way for future computational models aimed at uncovering the
molecular dynamics of cell lineage.
|
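Illustrative sketch for the CellTreeQM record above. The paper's actual training objective is not reproduced here; the code only shows one generic way to phrase "tree-metric learning": penalizing violations of the four-point condition on sampled quadruplets of embedded cells. The function name `four_point_penalty`, the shapes, and the quadruplet sampling are assumptions for illustration only.

```python
# Hedged sketch: a generic four-point-condition penalty on embedded cells.
# This is NOT the authors' loss; for a true tree metric, the two largest of
# the three pairwise sums over any quadruplet are equal, so we penalize the
# gap between them.
import torch

def four_point_penalty(z: torch.Tensor, quads: torch.Tensor) -> torch.Tensor:
    """z: (N, D) cell embeddings; quads: (Q, 4) indices of sampled quadruplets."""
    d = torch.cdist(z, z)                      # (N, N) pairwise distances
    a, b, c, e = quads.T                       # unpack quadruplet indices
    sums = torch.stack([d[a, b] + d[c, e],
                        d[a, c] + d[b, e],
                        d[a, e] + d[b, c]], dim=1)   # (Q, 3)
    top2, _ = sums.topk(2, dim=1)              # two largest sums per quadruplet
    return (top2[:, 0] - top2[:, 1]).mean()    # zero iff the condition holds

# toy usage with random embeddings and random quadruplets
z = torch.randn(32, 16, requires_grad=True)
quads = torch.randint(0, 32, (64, 4))
loss = four_point_penalty(z, quads)
loss.backward()
```

A tree could then be recovered from the learned pairwise distances with a standard distance-based method such as neighbor joining.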
2503.15851 | Zhenglin Zhou | Zhenglin Zhou, Fan Ma, Hehe Fan, Tat-Seng Chua | Zero-1-to-A: Zero-Shot One Image to Animatable Head Avatars Using Video
Diffusion | Accepted by CVPR 2025, project page:
https://zhenglinzhou.github.io/Zero-1-to-A/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Animatable head avatar generation typically requires extensive data for
training. To reduce the data requirements, a natural solution is to leverage
existing data-free static avatar generation methods, such as pre-trained
diffusion models with score distillation sampling (SDS), which align avatars
with pseudo ground-truth outputs from the diffusion model. However, directly
distilling 4D avatars from video diffusion often leads to over-smooth results
due to spatial and temporal inconsistencies in the generated video. To address
this issue, we propose Zero-1-to-A, a robust method that synthesizes a spatial
and temporal consistency dataset for 4D avatar reconstruction using the video
diffusion model. Specifically, Zero-1-to-A iteratively constructs video
datasets and optimizes animatable avatars in a progressive manner, ensuring
that avatar quality increases smoothly and consistently throughout the learning
process. This progressive learning involves two stages: (1) Spatial Consistency
Learning fixes expressions and learns from front-to-side views, and (2)
Temporal Consistency Learning fixes views and learns from relaxed to
exaggerated expressions, generating 4D avatars in a simple-to-complex manner.
Extensive experiments demonstrate that Zero-1-to-A improves fidelity, animation
quality, and rendering speed compared to existing diffusion-based methods,
providing a solution for lifelike avatar creation. Code is publicly available
at: https://github.com/ZhenglinZhou/Zero-1-to-A.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 05:07:46 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 04:56:40 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Zhou",
"Zhenglin",
""
],
[
"Ma",
"Fan",
""
],
[
"Fan",
"Hehe",
""
],
[
"Chua",
"Tat-Seng",
""
]
] | TITLE: Zero-1-to-A: Zero-Shot One Image to Animatable Head Avatars Using Video
Diffusion
ABSTRACT: Animatable head avatar generation typically requires extensive data for
training. To reduce the data requirements, a natural solution is to leverage
existing data-free static avatar generation methods, such as pre-trained
diffusion models with score distillation sampling (SDS), which align avatars
with pseudo ground-truth outputs from the diffusion model. However, directly
distilling 4D avatars from video diffusion often leads to over-smooth results
due to spatial and temporal inconsistencies in the generated video. To address
this issue, we propose Zero-1-to-A, a robust method that synthesizes a spatial
and temporal consistency dataset for 4D avatar reconstruction using the video
diffusion model. Specifically, Zero-1-to-A iteratively constructs video
datasets and optimizes animatable avatars in a progressive manner, ensuring
that avatar quality increases smoothly and consistently throughout the learning
process. This progressive learning involves two stages: (1) Spatial Consistency
Learning fixes expressions and learns from front-to-side views, and (2)
Temporal Consistency Learning fixes views and learns from relaxed to
exaggerated expressions, generating 4D avatars in a simple-to-complex manner.
Extensive experiments demonstrate that Zero-1-to-A improves fidelity, animation
quality, and rendering speed compared to existing diffusion-based methods,
providing a solution for lifelike avatar creation. Code is publicly available
at: https://github.com/ZhenglinZhou/Zero-1-to-A.
|
2503.16067 | Tim Seizinger | Tim Seizinger, Florin-Alexandru Vasluianu, Marcos V. Conde, Zongwei
Wu, Radu Timofte | Bokehlicious: Photorealistic Bokeh Rendering with Controllable Apertures | Technical Report | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Bokeh rendering methods play a key role in creating the visually appealing,
softly blurred backgrounds seen in professional photography. While recent
learning-based approaches show promising results, generating realistic Bokeh
with variable strength remains challenging. Existing methods require additional
inputs and suffer from unrealistic Bokeh reproduction due to reliance on
synthetic data. In this work, we propose Bokehlicious, a highly efficient
network that provides intuitive control over Bokeh strength through an
Aperture-Aware Attention mechanism, mimicking the physical lens aperture. To
further address the lack of high-quality real-world data, we present RealBokeh,
a novel dataset featuring 23,000 high-resolution (24-MP) images captured by
professional photographers, covering diverse scenes with varied aperture and
focal length settings. Evaluations on both our new RealBokeh and established
Bokeh rendering benchmarks show that Bokehlicious consistently outperforms SOTA
methods while significantly reducing computational cost and exhibiting strong
zero-shot generalization. Our method and dataset further extend to defocus
deblurring, achieving competitive results on the RealDOF benchmark. Our code
and data can be found at https://github.com/TimSeizinger/Bokehlicious
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 12:00:45 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 13:43:25 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Seizinger",
"Tim",
""
],
[
"Vasluianu",
"Florin-Alexandru",
""
],
[
"Conde",
"Marcos V.",
""
],
[
"Wu",
"Zongwei",
""
],
[
"Timofte",
"Radu",
""
]
] | TITLE: Bokehlicious: Photorealistic Bokeh Rendering with Controllable Apertures
ABSTRACT: Bokeh rendering methods play a key role in creating the visually appealing,
softly blurred backgrounds seen in professional photography. While recent
learning-based approaches show promising results, generating realistic Bokeh
with variable strength remains challenging. Existing methods require additional
inputs and suffer from unrealistic Bokeh reproduction due to reliance on
synthetic data. In this work, we propose Bokehlicious, a highly efficient
network that provides intuitive control over Bokeh strength through an
Aperture-Aware Attention mechanism, mimicking the physical lens aperture. To
further address the lack of high-quality real-world data, we present RealBokeh,
a novel dataset featuring 23,000 high-resolution (24-MP) images captured by
professional photographers, covering diverse scenes with varied aperture and
focal length settings. Evaluations on both our new RealBokeh and established
Bokeh rendering benchmarks show that Bokehlicious consistently outperforms SOTA
methods while significantly reducing computational cost and exhibiting strong
zero-shot generalization. Our method and dataset further extend to defocus
deblurring, achieving competitive results on the RealDOF benchmark. Our code
and data can be found at https://github.com/TimSeizinger/Bokehlicious
|
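Illustrative sketch for the Bokehlicious record above. The abstract does not specify the Aperture-Aware Attention design, so this only shows one plausible way to condition features on a scalar f-number via a learned channel gate; the module name `ApertureGate` and its layout are assumptions, not the paper's architecture.

```python
# Hedged sketch of an "aperture-conditioned" attention gate: a scalar camera
# parameter (the f-number) is embedded by a small MLP and used to re-weight
# feature channels, giving continuous control over Bokeh strength.
import torch
import torch.nn as nn

class ApertureGate(nn.Module):
    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, channels), nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor, f_number: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) image features, f_number: (B, 1) aperture per image
        gate = self.mlp(f_number)               # (B, C) gate values in [0, 1]
        return feat * gate[:, :, None, None]    # channel-wise re-weighting

feat = torch.randn(2, 32, 64, 64)
f_number = torch.tensor([[1.8], [8.0]])          # wide vs. narrow aperture
out = ApertureGate(32)(feat, f_number)
print(out.shape)  # torch.Size([2, 32, 64, 64])
```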
2503.17175 | Duanrui Yu | Duanrui Yu, Jing You, Xin Pei, Anqi Qu, Dingyu Wang, Shaocheng Jia | Which2comm: An Efficient Collaborative Perception Framework for 3D
Object Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative perception allows real-time inter-agent information exchange
and thus offers invaluable opportunities to enhance the perception capabilities
of individual agents. However, limited communication bandwidth in practical
scenarios restricts the inter-agent data transmission volume, consequently
resulting in performance declines in collaborative perception systems. This
implies a trade-off between perception performance and communication cost. To
address this issue, we propose Which2comm, a novel multi-agent 3D object
detection framework leveraging object-level sparse features. By integrating
semantic information of objects into 3D object detection boxes, we introduce
semantic detection boxes (SemDBs). Innovatively transmitting these
information-rich object-level sparse features among agents not only
significantly reduces the required communication volume, but also improves 3D
object detection performance. Specifically, a fully sparse network is
constructed to extract SemDBs from individual agents; a temporal fusion
approach with a relative temporal encoding mechanism is utilized to obtain the
comprehensive spatiotemporal features. Extensive experiments on the V2XSet and
OPV2V datasets demonstrate that Which2comm consistently outperforms other
state-of-the-art methods on both perception performance and communication cost,
exhibiting better robustness to real-world latency. These results demonstrate that
for multi-agent collaborative 3D object detection, transmitting only
object-level sparse features is sufficient to achieve high-precision and robust
performance.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 14:24:07 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 12:10:22 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Yu",
"Duanrui",
""
],
[
"You",
"Jing",
""
],
[
"Pei",
"Xin",
""
],
[
"Qu",
"Anqi",
""
],
[
"Wang",
"Dingyu",
""
],
[
"Jia",
"Shaocheng",
""
]
] | TITLE: Which2comm: An Efficient Collaborative Perception Framework for 3D
Object Detection
ABSTRACT: Collaborative perception allows real-time inter-agent information exchange
and thus offers invaluable opportunities to enhance the perception capabilities
of individual agents. However, limited communication bandwidth in practical
scenarios restricts the inter-agent data transmission volume, consequently
resulting in performance declines in collaborative perception systems. This
implies a trade-off between perception performance and communication cost. To
address this issue, we propose Which2comm, a novel multi-agent 3D object
detection framework leveraging object-level sparse features. By integrating
semantic information of objects into 3D object detection boxes, we introduce
semantic detection boxes (SemDBs). Innovatively transmitting these
information-rich object-level sparse features among agents not only
significantly reduces the required communication volume, but also improves 3D
object detection performance. Specifically, a fully sparse network is
constructed to extract SemDBs from individual agents; a temporal fusion
approach with a relative temporal encoding mechanism is utilized to obtain the
comprehensive spatiotemporal features. Extensive experiments on the V2XSet and
OPV2V datasets demonstrate that Which2comm consistently outperforms other
state-of-the-art methods on both perception performance and communication cost,
exhibiting better robustness to real-world latency. These results demonstrate that
for multi-agent collaborative 3D object detection, transmitting only
object-level sparse features is sufficient to achieve high-precision and robust
performance.
|
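Illustrative sketch for the Which2comm record above. The paper's relative temporal encoding is not detailed in the abstract; this shows a standard sinusoidal encoding of per-object latency that could be added to object-level features before fusion. All names, dimensions, and the additive combination are assumptions.

```python
# Hedged sketch of a relative temporal encoding for fusing detections that
# arrive with different latencies: encode each object's time offset with
# sinusoids and add it to that object's feature vector.
import torch

def relative_time_encoding(dt: torch.Tensor, dim: int = 64) -> torch.Tensor:
    """dt: (N,) time offsets between capture and fusion; returns (N, dim)."""
    half = dim // 2
    freqs = torch.exp(-torch.arange(half) * (torch.log(torch.tensor(10000.0)) / half))
    angles = dt[:, None] * freqs[None, :]           # (N, half)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

feats = torch.randn(5, 64)                      # object-level features from peers
dt = torch.tensor([0.0, 0.05, 0.1, 0.1, 0.2])   # per-object latency in seconds
fused_in = feats + relative_time_encoding(dt, 64)
```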
2503.17514 | Christopher A. Choquette-Choo | Ken Ziyu Liu, Christopher A. Choquette-Choo, Matthew Jagielski, Peter
Kairouz, Sanmi Koyejo, Percy Liang, Nicolas Papernot | Language Models May Verbatim Complete Text They Were Not Explicitly
Trained On | Main text: 9 pages, 7 figures, 1 table. Appendix: 29 pages, 20
tables, 15 figures | null | null | null | cs.CL cs.AI cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | An important question today is whether a given text was used to train a large
language model (LLM). A \emph{completion} test is often employed: check if the
LLM completes a sufficiently complex text. This, however, requires a
ground-truth definition of membership; most commonly, it is defined as a member
based on the $n$-gram overlap between the target text and any text in the
dataset. In this work, we demonstrate that this $n$-gram based membership
definition can be effectively gamed. We study scenarios where sequences are
\emph{non-members} for a given $n$ and we find that completion tests still
succeed. We find many natural cases of this phenomenon by retraining LLMs from
scratch after removing all training samples that were completed; these cases
include exact duplicates, near-duplicates, and even short overlaps. They
showcase that it is difficult to find a single viable choice of $n$ for
membership definitions. Using these insights, we design adversarial datasets
that can cause a given target sequence to be completed without containing it,
for any reasonable choice of $n$. Our findings highlight the inadequacy of
$n$-gram membership, suggesting membership definitions fail to account for
auxiliary information available to the training algorithm.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 19:57:04 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 04:43:33 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Liu",
"Ken Ziyu",
""
],
[
"Choquette-Choo",
"Christopher A.",
""
],
[
"Jagielski",
"Matthew",
""
],
[
"Kairouz",
"Peter",
""
],
[
"Koyejo",
"Sanmi",
""
],
[
"Liang",
"Percy",
""
],
[
"Papernot",
"Nicolas",
""
]
] | TITLE: Language Models May Verbatim Complete Text They Were Not Explicitly
Trained On
ABSTRACT: An important question today is whether a given text was used to train a large
language model (LLM). A \emph{completion} test is often employed: check if the
LLM completes a sufficiently complex text. This, however, requires a
ground-truth definition of membership; most commonly, it is defined as a member
based on the $n$-gram overlap between the target text and any text in the
dataset. In this work, we demonstrate that this $n$-gram based membership
definition can be effectively gamed. We study scenarios where sequences are
\emph{non-members} for a given $n$ and we find that completion tests still
succeed. We find many natural cases of this phenomenon by retraining LLMs from
scratch after removing all training samples that were completed; these cases
include exact duplicates, near-duplicates, and even short overlaps. They
showcase that it is difficult to find a single viable choice of $n$ for
membership definitions. Using these insights, we design adversarial datasets
that can cause a given target sequence to be completed without containing it,
for any reasonable choice of $n$. Our findings highlight the inadequacy of
$n$-gram membership, suggesting membership definitions fail to account for
auxiliary information available to the training algorithm.
|
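Illustrative sketch for the record above: the n-gram overlap membership rule that the paper argues can be gamed, written out as a minimal check. Whitespace tokenization and the helper names are simplifications for illustration, not the paper's exact setup.

```python
# Hedged sketch: a target is declared a training-set "member" if any length-n
# token window of it appears verbatim in some training document.
from typing import Iterable, Set, Tuple

def ngrams(text: str, n: int) -> Set[Tuple[str, ...]]:
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_ngram_member(target: str, corpus: Iterable[str], n: int) -> bool:
    target_grams = ngrams(target, n)
    return any(target_grams & ngrams(doc, n) for doc in corpus)

corpus = ["the quick brown fox jumps over the lazy dog"]
print(is_ngram_member("a quick brown fox appears", corpus, n=3))  # True
print(is_ngram_member("a quick brown fox appears", corpus, n=4))  # False
```

The two prints show how the verdict flips with the choice of n, which is exactly the sensitivity the paper exploits.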
2503.17896 | Hong Zheng | Hong Zheng, Yucheng Chen, Nan Mu, Xiaoning Li | Multi-Disease-Aware Training Strategy for Cardiac MR Image Segmentation | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate segmentation of the ventricles from cardiac magnetic resonance
images (CMRIs) is crucial for enhancing the diagnosis and analysis of heart
conditions. Deep learning-based segmentation methods have recently garnered
significant attention due to their impressive performance. However, these
segmentation methods are typically good at partitioning regularly shaped
organs, such as the left ventricle (LV) and the myocardium (MYO), whereas they
perform poorly on irregularly shaped organs, such as the right ventricle (RV).
In this study, we argue that this limitation of segmentation models stems from
their insufficient generalization ability to address the distribution shift of
segmentation targets across slices, cardiac phases, and disease conditions. To
overcome this issue, we present a Multi-Disease-Aware Training Strategy (MTS)
and restructure the introduced CMRI datasets into multi-disease datasets.
Additionally, we propose a specialized data processing technique for
preprocessing input images to support the MTS. To validate the effectiveness of
our method, we performed control group experiments and cross-validation tests.
The experimental results show that (1) network models trained using our
proposed strategy achieved superior segmentation performance, particularly in
RV segmentation, and (2) these networks exhibited robust performance even when
applied to data from unknown diseases.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 01:29:27 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 01:56:08 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Zheng",
"Hong",
""
],
[
"Chen",
"Yucheng",
""
],
[
"Mu",
"Nan",
""
],
[
"Li",
"Xiaoning",
""
]
] | TITLE: Multi-Disease-Aware Training Strategy for Cardiac MR Image Segmentation
ABSTRACT: Accurate segmentation of the ventricles from cardiac magnetic resonance
images (CMRIs) is crucial for enhancing the diagnosis and analysis of heart
conditions. Deep learning-based segmentation methods have recently garnered
significant attention due to their impressive performance. However, these
segmentation methods are typically good at partitioning regularly shaped
organs, such as the left ventricle (LV) and the myocardium (MYO), whereas they
perform poorly on irregularly shaped organs, such as the right ventricle (RV).
In this study, we argue that this limitation of segmentation models stems from
their insufficient generalization ability to address the distribution shift of
segmentation targets across slices, cardiac phases, and disease conditions. To
overcome this issue, we present a Multi-Disease-Aware Training Strategy (MTS)
and restructure the introduced CMRI datasets into multi-disease datasets.
Additionally, we propose a specialized data processing technique for
preprocessing input images to support the MTS. To validate the effectiveness of
our method, we performed control group experiments and cross-validation tests.
The experimental results show that (1) network models trained using our
proposed strategy achieved superior segmentation performance, particularly in
RV segmentation, and (2) these networks exhibited robust performance even when
applied to data from unknown diseases.
|
2503.17908 | Yongqi Huang | Yongqi Huang, Jitao Zhao, Dongxiao He, Di Jin, Yuxiao Huang, Zhen Wang | Does GCL Need a Large Number of Negative Samples? Enhancing Graph
Contrastive Learning with Effective and Efficient Negative Sampling | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Contrastive Learning (GCL) aims to learn low-dimensional graph
representations in a self-supervised manner, primarily through instance
discrimination, which involves manually mining positive and negative pairs from
graphs, increasing the similarity of positive pairs while decreasing negative
pairs. Drawing from the success of Contrastive Learning (CL) in other domains,
a consensus has been reached that the effectiveness of GCLs depends on a large
number of negative pairs. As a result, despite the significant computational
overhead, GCLs typically leverage as many negative node pairs as possible to
improve model performance. However, given that nodes within a graph are
interconnected, we argue that nodes cannot be treated as independent instances.
Therefore, we challenge this consensus: Does employing more negative nodes lead
to a more effective GCL model? To answer this, we explore the role of negative
nodes in the commonly used InfoNCE loss for GCL and observe that: (1)
Counterintuitively, a large number of negative nodes can actually hinder the
model's ability to distinguish nodes with different semantics. (2) A smaller
number of high-quality and non-topologically coupled negative nodes are
sufficient to enhance the discriminability of representations. Based on these
findings, we propose a new method called GCL with Effective and Efficient
Negative samples, E2Neg, which learns discriminative representations using only
a very small set of representative negative samples. E2Neg significantly
reduces computational overhead and speeds up model training. We demonstrate the
effectiveness and efficiency of E2Neg across multiple datasets compared to
other GCL methods.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 03:09:31 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Huang",
"Yongqi",
""
],
[
"Zhao",
"Jitao",
""
],
[
"He",
"Dongxiao",
""
],
[
"Jin",
"Di",
""
],
[
"Huang",
"Yuxiao",
""
],
[
"Wang",
"Zhen",
""
]
] | TITLE: Does GCL Need a Large Number of Negative Samples? Enhancing Graph
Contrastive Learning with Effective and Efficient Negative Sampling
ABSTRACT: Graph Contrastive Learning (GCL) aims to learn low-dimensional graph
representations in a self-supervised manner, primarily through instance
discrimination, which involves manually mining positive and negative pairs from
graphs, increasing the similarity of positive pairs while decreasing negative
pairs. Drawing from the success of Contrastive Learning (CL) in other domains,
a consensus has been reached that the effectiveness of GCLs depends on a large
number of negative pairs. As a result, despite the significant computational
overhead, GCLs typically leverage as many negative node pairs as possible to
improve model performance. However, given that nodes within a graph are
interconnected, we argue that nodes cannot be treated as independent instances.
Therefore, we challenge this consensus: Does employing more negative nodes lead
to a more effective GCL model? To answer this, we explore the role of negative
nodes in the commonly used InfoNCE loss for GCL and observe that: (1)
Counterintuitively, a large number of negative nodes can actually hinder the
model's ability to distinguish nodes with different semantics. (2) A smaller
number of high-quality and non-topologically coupled negative nodes are
sufficient to enhance the discriminability of representations. Based on these
findings, we propose a new method called GCL with Effective and Efficient
Negative samples, E2Neg, which learns discriminative representations using only
a very small set of representative negative samples. E2Neg significantly
reduces computational overhead and speeds up model training. We demonstrate the
effectiveness and efficiency of E2Neg across multiple datasets compared to
other GCL methods.
|
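Illustrative sketch for the E2Neg record above: an InfoNCE loss restricted to a small, explicitly chosen negative set per anchor, which is the general idea the abstract describes. The selection of representative negatives, the core of E2Neg, is not reproduced here; negatives are simply passed in by index, and all names and shapes are assumptions.

```python
# Hedged sketch: InfoNCE over two graph views where each node is contrasted
# against only K chosen negatives instead of all other nodes.
import torch
import torch.nn.functional as F

def small_negative_infonce(z1, z2, neg_idx, tau: float = 0.5):
    """z1, z2: (N, D) node embeddings of two views; neg_idx: (N, K) negative indices."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    pos = (z1 * z2).sum(-1, keepdim=True) / tau               # (N, 1) positives
    neg = torch.einsum("nd,nkd->nk", z1, z2[neg_idx]) / tau   # (N, K) negatives
    logits = torch.cat([pos, neg], dim=1)                     # positive is column 0
    labels = torch.zeros(z1.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)

z1, z2 = torch.randn(100, 32), torch.randn(100, 32)
neg_idx = torch.randint(0, 100, (100, 8))   # only 8 negatives per node
loss = small_negative_infonce(z1, z2, neg_idx)
```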
2503.17935 | Koustubh Phalak | Koustubh Phalak, Junde Li and Swaroop Ghosh | Dataset Distillation for Quantum Neural Networks | 5 pages, 4 figures, 2 tables | null | null | null | cs.LG quant-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Training Quantum Neural Networks (QNNs) on a large amount of classical data
can be both time-consuming and expensive. A larger amount of training data
requires a larger number of gradient descent steps to reach convergence. This,
in turn, implies that the QNN will require a larger number of quantum
executions, thereby driving up its overall execution cost. In this work, we
propose performing the dataset distillation process for QNNs, where we use a
novel quantum variant of the classical LeNet model containing a residual
connection and a trainable Hermitian observable in the Parametric Quantum
Circuit (PQC) of the QNN. This approach yields a small yet highly informative
set of training data with performance similar to the original data. We perform
distillation for the MNIST and Cifar-10 datasets and, in comparison with
classical models, observe that both datasets yield reasonably similar
post-inference accuracy on quantum LeNet (91.9% MNIST, 50.3% Cifar-10) compared
to classical LeNet (94% MNIST, 54% Cifar-10). We also introduce a non-trainable
Hermitian for ensuring stability in the distillation process and note a
marginal reduction of up to 1.8% (1.3%) for the MNIST (Cifar-10) dataset.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 04:33:39 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 02:31:38 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Phalak",
"Koustubh",
""
],
[
"Li",
"Junde",
""
],
[
"Ghosh",
"Swaroop",
""
]
] | TITLE: Dataset Distillation for Quantum Neural Networks
ABSTRACT: Training Quantum Neural Networks (QNNs) on a large amount of
classical data can be both time-consuming and expensive. A larger amount of
training data requires a larger number of gradient descent steps to reach
convergence. This, in turn, implies that the QNN will require a larger number
of quantum executions, thereby driving up its overall execution cost. In this
work, we propose performing the dataset distillation process for QNNs, where we
use a novel quantum variant of the classical LeNet model containing a residual
connection and a trainable Hermitian observable in the Parametric Quantum
Circuit (PQC) of the QNN. This approach yields a small yet highly informative
set of training data with performance similar to the original data. We perform
distillation for the MNIST and Cifar-10 datasets and, in comparison with
classical models, observe that both datasets yield reasonably similar
post-inference accuracy on quantum LeNet (91.9% MNIST, 50.3% Cifar-10) compared
to classical LeNet (94% MNIST, 54% Cifar-10). We also introduce a non-trainable
Hermitian for ensuring stability in the distillation process and note a
marginal reduction of up to 1.8% (1.3%) for the MNIST (Cifar-10) dataset.
|
2503.17975 | Yuzhi Li | Yuzhi Li, Haojun Xu, Feng Tian | Shot Sequence Ordering for Video Editing: Benchmarks, Metrics, and
Cinematology-Inspired Computing Methods | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rising popularity of short video platforms, the demand for video
production has increased substantially. However, high-quality video creation
continues to rely heavily on professional editing skills and a nuanced
understanding of visual language. To address this challenge, the Shot Sequence
Ordering (SSO) task in AI-assisted video editing has emerged as a pivotal
approach for enhancing video storytelling and the overall viewing experience.
Nevertheless, the progress in this field has been impeded by a lack of publicly
available benchmark datasets. In response, this paper introduces two novel
benchmark datasets, AVE-Order and ActivityNet-Order. Additionally, we employ
the Kendall Tau distance as an evaluation metric for the SSO task and propose
the Kendall Tau Distance-Cross Entropy Loss. We further introduce the concept
of Cinematology Embedding, which incorporates movie metadata and shot labels as
prior knowledge into the SSO model, and constructs the AVE-Meta dataset to
validate the method's effectiveness. Experimental results indicate that the
proposed loss function and method substantially enhance SSO task accuracy. All
datasets are publicly accessible at https://github.com/litchiar/ShotSeqBench.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 08:04:45 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 11:37:52 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Li",
"Yuzhi",
""
],
[
"Xu",
"Haojun",
""
],
[
"Tian",
"Feng",
""
]
] | TITLE: Shot Sequence Ordering for Video Editing: Benchmarks, Metrics, and
Cinematology-Inspired Computing Methods
ABSTRACT: With the rising popularity of short video platforms, the demand for video
production has increased substantially. However, high-quality video creation
continues to rely heavily on professional editing skills and a nuanced
understanding of visual language. To address this challenge, the Shot Sequence
Ordering (SSO) task in AI-assisted video editing has emerged as a pivotal
approach for enhancing video storytelling and the overall viewing experience.
Nevertheless, the progress in this field has been impeded by a lack of publicly
available benchmark datasets. In response, this paper introduces two novel
benchmark datasets, AVE-Order and ActivityNet-Order. Additionally, we employ
the Kendall Tau distance as an evaluation metric for the SSO task and propose
the Kendall Tau Distance-Cross Entropy Loss. We further introduce the concept
of Cinematology Embedding, which incorporates movie metadata and shot labels as
prior knowledge into the SSO model, and constructs the AVE-Meta dataset to
validate the method's effectiveness. Experimental results indicate that the
proposed loss function and method substantially enhance SSO task accuracy. All
datasets are publicly accessible at https://github.com/litchiar/ShotSeqBench.
|
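Illustrative sketch for the shot-sequence-ordering record above: Kendall Tau distance as the fraction of shot pairs whose relative order differs between a predicted sequence and the ground truth. The paper's exact normalization and its Kendall Tau Distance-Cross Entropy Loss are not reproduced; this is only the evaluation-metric idea.

```python
# Hedged sketch: pairwise-disagreement rate between two orderings of the
# same set of shot ids (0.0 = identical order, 1.0 = fully reversed).
from itertools import combinations
from typing import Sequence

def kendall_tau_distance(pred: Sequence[int], gold: Sequence[int]) -> float:
    pos_pred = {s: i for i, s in enumerate(pred)}
    pos_gold = {s: i for i, s in enumerate(gold)}
    pairs = list(combinations(gold, 2))
    discordant = sum(
        (pos_pred[a] - pos_pred[b]) * (pos_gold[a] - pos_gold[b]) < 0
        for a, b in pairs
    )
    return discordant / len(pairs)

print(kendall_tau_distance([0, 1, 2, 3], [0, 1, 2, 3]))  # 0.0
print(kendall_tau_distance([3, 2, 1, 0], [0, 1, 2, 3]))  # 1.0
```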
2503.18155 | Kelly Marshall | Kelly O. Marshall, Omid Poursaeed, Sergiu Oprea, Amit Kumar, Anushrut
Jignasu, Chinmay Hegde, Yilei Li, Rakesh Ranjan | Decorum: A Language-Based Approach For Style-Conditioned Synthesis of
Indoor 3D Scenes | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | 3D indoor scene generation is an important problem for the design of digital
and real-world environments. To automate this process, a scene generation model
should be able to not only generate plausible scene layouts, but also take into
consideration visual features and style preferences. Existing methods for this
task exhibit very limited control over these attributes, only allowing text
inputs in the form of simple object-level descriptions or pairwise spatial
relationships. Our proposed method Decorum enables users to control the scene
generation process with natural language by adopting language-based
representations at each stage. This enables us to harness recent advancements
in Large Language Models (LLMs) to model language-to-language mappings. In
addition, we show that using a text-based representation allows us to select
furniture for our scenes using a novel object retrieval method based on
multimodal LLMs. Evaluations on the benchmark 3D-FRONT dataset show that our
methods achieve improvements over existing work in text-conditioned scene
synthesis and object retrieval.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 17:48:44 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 15:58:36 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Marshall",
"Kelly O.",
""
],
[
"Poursaeed",
"Omid",
""
],
[
"Oprea",
"Sergiu",
""
],
[
"Kumar",
"Amit",
""
],
[
"Jignasu",
"Anushrut",
""
],
[
"Hegde",
"Chinmay",
""
],
[
"Li",
"Yilei",
""
],
[
"Ranjan",
"Rakesh",
""
]
] | TITLE: Decorum: A Language-Based Approach For Style-Conditioned Synthesis of
Indoor 3D Scenes
ABSTRACT: 3D indoor scene generation is an important problem for the design of digital
and real-world environments. To automate this process, a scene generation model
should be able to not only generate plausible scene layouts, but also take into
consideration visual features and style preferences. Existing methods for this
task exhibit very limited control over these attributes, only allowing text
inputs in the form of simple object-level descriptions or pairwise spatial
relationships. Our proposed method Decorum enables users to control the scene
generation process with natural language by adopting language-based
representations at each stage. This enables us to harness recent advancements
in Large Language Models (LLMs) to model language-to-language mappings. In
addition, we show that using a text-based representation allows us to select
furniture for our scenes using a novel object retrieval method based on
multimodal LLMs. Evaluations on the benchmark 3D-FRONT dataset show that our
methods achieve improvements over existing work in text-conditioned scene
synthesis and object retrieval.
|
2503.18167 | Suman Adhya | Suman Adhya, Avishek Lahiri, Debarshi Kumar Sanyal, Partha Pratim Das | Evaluating Negative Sampling Approaches for Neural Topic Models | Code is available at: https://github.com/AdhyaSuman/Eval_NegTM | in IEEE Transactions on Artificial Intelligence, vol. 5, no. 11,
pp. 5630-5642, Nov. 2024 | 10.1109/TAI.2024.3432857 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Negative sampling has emerged as an effective technique that enables deep
learning models to learn better representations by introducing the paradigm of
learn-to-compare. The goal of this approach is to add robustness to deep
learning models, helping them learn better representations by comparing
positive samples against negative ones. Despite its numerous demonstrations in
various areas of computer vision and natural language processing, a
comprehensive study of the effect of negative sampling in an unsupervised
domain like topic modeling has not been well explored. In this paper, we
present a comprehensive analysis of the impact of different negative sampling
strategies on neural topic models. We compare the performance of several
popular neural topic models by incorporating a negative sampling technique in
the decoder of variational autoencoder-based neural topic models. Experiments
on four publicly available datasets demonstrate that integrating negative
sampling into topic models results in significant enhancements across multiple
aspects, including improved topic coherence, richer topic diversity, and more
accurate document classification. Manual evaluations also indicate that the
inclusion of negative sampling into neural topic models enhances the quality of
the generated topics. These findings highlight the potential of negative
sampling as a valuable tool for advancing the effectiveness of neural topic
models.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 18:39:01 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 05:53:08 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Adhya",
"Suman",
""
],
[
"Lahiri",
"Avishek",
""
],
[
"Sanyal",
"Debarshi Kumar",
""
],
[
"Das",
"Partha Pratim",
""
]
] | TITLE: Evaluating Negative Sampling Approaches for Neural Topic Models
ABSTRACT: Negative sampling has emerged as an effective technique that enables deep
learning models to learn better representations by introducing the paradigm of
learn-to-compare. The goal of this approach is to add robustness to deep
learning models, helping them learn better representations by comparing
positive samples against negative ones. Despite its numerous demonstrations in
various areas of computer vision and natural language processing, a
comprehensive study of the effect of negative sampling in an unsupervised
domain like topic modeling has not been well explored. In this paper, we
present a comprehensive analysis of the impact of different negative sampling
strategies on neural topic models. We compare the performance of several
popular neural topic models by incorporating a negative sampling technique in
the decoder of variational autoencoder-based neural topic models. Experiments
on four publicly available datasets demonstrate that integrating negative
sampling into topic models results in significant enhancements across multiple
aspects, including improved topic coherence, richer topic diversity, and more
accurate document classification. Manual evaluations also indicate that the
inclusion of negative sampling into neural topic models enhances the quality of
the generated topics. These findings highlight the potential of negative
sampling as a valuable tool for advancing the effectiveness of neural topic
models.
|
2503.18314 | Christoforos Spartalis | Christoforos N. Spartalis, Theodoros Semertzidis, Efstratios Gavves,
Petros Daras | LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty | Accepted as a main conference paper at CVPR 2025
(https://cvpr.thecvf.com/virtual/2025/poster/33292) | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present LoTUS, a novel Machine Unlearning (MU) method that eliminates the
influence of training samples from pre-trained models, avoiding retraining from
scratch. LoTUS smooths the prediction probabilities of the model up to an
information-theoretic bound, mitigating its over-confidence stemming from data
memorization. We evaluate LoTUS on Transformer and ResNet18 models against
eight baselines across five public datasets. Beyond established MU benchmarks,
we evaluate unlearning on ImageNet1k, a large-scale dataset, where retraining
is impractical, simulating real-world conditions. Moreover, we introduce the
novel Retrain-Free Jensen-Shannon Divergence (RF-JSD) metric to enable
evaluation under real-world conditions. The experimental results show that
LoTUS outperforms state-of-the-art methods in terms of both efficiency and
effectiveness. Code: https://github.com/cspartalis/LoTUS.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 03:34:23 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 06:23:57 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Spartalis",
"Christoforos N.",
""
],
[
"Semertzidis",
"Theodoros",
""
],
[
"Gavves",
"Efstratios",
""
],
[
"Daras",
"Petros",
""
]
] | TITLE: LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty
ABSTRACT: We present LoTUS, a novel Machine Unlearning (MU) method that eliminates the
influence of training samples from pre-trained models, avoiding retraining from
scratch. LoTUS smooths the prediction probabilities of the model up to an
information-theoretic bound, mitigating its over-confidence stemming from data
memorization. We evaluate LoTUS on Transformer and ResNet18 models against
eight baselines across five public datasets. Beyond established MU benchmarks,
we evaluate unlearning on ImageNet1k, a large-scale dataset, where retraining
is impractical, simulating real-world conditions. Moreover, we introduce the
novel Retrain-Free Jensen-Shannon Divergence (RF-JSD) metric to enable
evaluation under real-world conditions. The experimental results show that
LoTUS outperforms state-of-the-art methods in terms of both efficiency and
effectiveness. Code: https://github.com/cspartalis/LoTUS.
|
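Illustrative sketch for the LoTUS record above: the Jensen-Shannon divergence between two models' predictive distributions, which is the basic quantity behind the proposed RF-JSD metric. Which distributions RF-JSD compares, and on which samples, is not specified in the abstract, so only the core computation is shown and all names are assumptions.

```python
# Hedged sketch: symmetric Jensen-Shannon divergence between the softmax
# outputs of two models on the same batch of inputs.
import torch
import torch.nn.functional as F

def jsd(logits_p: torch.Tensor, logits_q: torch.Tensor) -> torch.Tensor:
    p = F.softmax(logits_p, dim=-1)
    q = F.softmax(logits_q, dim=-1)
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * (a.clamp_min(1e-12).log() - b.clamp_min(1e-12).log())).sum(-1)
    return (0.5 * kl(p, m) + 0.5 * kl(q, m)).mean()

logits_unlearned = torch.randn(16, 10)   # outputs of the unlearned model
logits_reference = torch.randn(16, 10)   # outputs of a reference model
print(jsd(logits_unlearned, logits_reference))
```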
2503.18406 | Sherry X. Chen | Sherry X. Chen, Misha Sra, and Pradeep Sen | Instruct-CLIP: Improving Instruction-Guided Image Editing with Automated
Data Refinement Using Contrastive Learning | Computer Vision and Pattern Recognition 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Although natural language instructions offer an intuitive way to guide
automated image editing, deep-learning models often struggle to achieve
high-quality results, largely due to the difficulty of creating large,
high-quality training datasets. To do this, previous approaches have typically
relied on text-to-image (T2I) generative models to produce pairs of original
and edited images that simulate the input/output of an instruction-guided
image-editing model. However, these image pairs often fail to align with the
specified edit instructions due to the limitations of T2I models, which
negatively impacts models trained on such datasets. To address this, we present
Instruct-CLIP (I-CLIP), a self-supervised method that learns the semantic
changes between original and edited images to refine and better align the
instructions in existing datasets. Furthermore, we adapt Instruct-CLIP to
handle noisy latent images and diffusion timesteps so that it can be used to
train latent diffusion models (LDMs) and efficiently enforce alignment between
the edit instruction and the image changes in latent space at any step of the
diffusion pipeline. We use Instruct-CLIP to correct the InstructPix2Pix dataset
and get over 120K refined samples we then use to fine-tune their model, guided
by our novel I-CLIP-based loss function. The resulting model can produce edits
that are more aligned with the given instructions. Our code and dataset are
available at https://github.com/SherryXTChen/Instruct-CLIP.git.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 07:25:44 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 05:30:02 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Chen",
"Sherry X.",
""
],
[
"Sra",
"Misha",
""
],
[
"Sen",
"Pradeep",
""
]
] | TITLE: Instruct-CLIP: Improving Instruction-Guided Image Editing with Automated
Data Refinement Using Contrastive Learning
ABSTRACT: Although natural language instructions offer an intuitive way to guide
automated image editing, deep-learning models often struggle to achieve
high-quality results, largely due to the difficulty of creating large,
high-quality training datasets. To do this, previous approaches have typically
relied on text-to-image (T2I) generative models to produce pairs of original
and edited images that simulate the input/output of an instruction-guided
image-editing model. However, these image pairs often fail to align with the
specified edit instructions due to the limitations of T2I models, which
negatively impacts models trained on such datasets. To address this, we present
Instruct-CLIP (I-CLIP), a self-supervised method that learns the semantic
changes between original and edited images to refine and better align the
instructions in existing datasets. Furthermore, we adapt Instruct-CLIP to
handle noisy latent images and diffusion timesteps so that it can be used to
train latent diffusion models (LDMs) and efficiently enforce alignment between
the edit instruction and the image changes in latent space at any step of the
diffusion pipeline. We use Instruct-CLIP to correct the InstructPix2Pix dataset
and get over 120K refined samples we then use to fine-tune their model, guided
by our novel I-CLIP-based loss function. The resulting model can produce edits
that are more aligned with the given instructions. Our code and dataset are
available at https://github.com/SherryXTChen/Instruct-CLIP.git.
|
2503.18430 | Zhichao Sun | Zhichao Sun, Huazhang Hu, Yidong Ma, Gang Liu, Nemo Chen, Xu Tang, Yao
Hu, Yongchao Xu | CQ-DINO: Mitigating Gradient Dilution via Category Queries for Vast
Vocabulary Object Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | With the exponential growth of data, traditional object detection methods are
increasingly struggling to handle vast vocabulary object detection tasks
effectively. We analyze two key limitations of classification-based detectors:
positive gradient dilution, where rare positive categories receive insufficient
learning signals, and hard negative gradient dilution, where discriminative
gradients are overwhelmed by numerous easy negatives. To address these
challenges, we propose CQ-DINO, a category query-based object detection
framework that reformulates classification as a contrastive task between object
queries and learnable category queries. Our method introduces image-guided
query selection, which reduces the negative space by adaptively retrieving
top-K relevant categories per image via cross-attention, thereby rebalancing
gradient distributions and facilitating implicit hard example mining.
Furthermore, CQ-DINO flexibly integrates explicit hierarchical category
relationships in structured datasets (e.g., V3Det) or learns implicit category
correlations via self-attention in generic datasets (e.g., COCO). Experiments
demonstrate that CQ-DINO achieves superior performance on the challenging V3Det
benchmark (surpassing previous methods by 2.1% AP) while maintaining
competitiveness in COCO. Our work provides a scalable solution for real-world
detection systems requiring wide category coverage. The dataset and code will
be publicly available at https://github.com/RedAIGC/CQ-DINO.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 08:22:55 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 07:39:46 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Sun",
"Zhichao",
""
],
[
"Hu",
"Huazhang",
""
],
[
"Ma",
"Yidong",
""
],
[
"Liu",
"Gang",
""
],
[
"Chen",
"Nemo",
""
],
[
"Tang",
"Xu",
""
],
[
"Hu",
"Yao",
""
],
[
"Xu",
"Yongchao",
""
]
] | TITLE: CQ-DINO: Mitigating Gradient Dilution via Category Queries for Vast
Vocabulary Object Detection
ABSTRACT: With the exponential growth of data, traditional object detection methods are
increasingly struggling to handle vast vocabulary object detection tasks
effectively. We analyze two key limitations of classification-based detectors:
positive gradient dilution, where rare positive categories receive insufficient
learning signals, and hard negative gradient dilution, where discriminative
gradients are overwhelmed by numerous easy negatives. To address these
challenges, we propose CQ-DINO, a category query-based object detection
framework that reformulates classification as a contrastive task between object
queries and learnable category queries. Our method introduces image-guided
query selection, which reduces the negative space by adaptively retrieving
top-K relevant categories per image via cross-attention, thereby rebalancing
gradient distributions and facilitating implicit hard example mining.
Furthermore, CQ-DINO flexibly integrates explicit hierarchical category
relationships in structured datasets (e.g., V3Det) or learns implicit category
correlations via self-attention in generic datasets (e.g., COCO). Experiments
demonstrate that CQ-DINO achieves superior performance on the challenging V3Det
benchmark (surpassing previous methods by 2.1% AP) while maintaining
competitiveness in COCO. Our work provides a scalable solution for real-world
detection systems requiring wide category coverage. The dataset and code will
be publicly available at https://github.com/RedAIGC/CQ-DINO.
|
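Illustrative sketch for the CQ-DINO record above: image-guided selection of the top-K most relevant category queries per image, which shrinks the negative space before classification. A single dot-product scorer stands in for the paper's cross-attention, and all names and sizes are assumptions.

```python
# Hedged sketch: score every learnable category query against pooled image
# features and keep only the top-K categories per image.
import torch

def select_topk_categories(img_feat, cat_queries, k: int = 100):
    """img_feat: (B, D) pooled image features; cat_queries: (C, D) learnable queries."""
    scores = img_feat @ cat_queries.T            # (B, C) relevance scores
    topk = scores.topk(k, dim=1)                 # per-image top-K categories
    selected = cat_queries[topk.indices]         # (B, K, D) active category queries
    return selected, topk.indices

img_feat = torch.randn(4, 256)
cat_queries = torch.randn(13000, 256)            # a vast category vocabulary
selected, idx = select_topk_categories(img_feat, cat_queries, k=100)
print(selected.shape)  # torch.Size([4, 100, 256])
```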
2503.18458 | Yaohua Tang | Luchao Wang, Qian Ren, Kaimin Liao, Hua Wang, Zhi Chen, Yaohua Tang | StableGS: A Floater-Free Framework for 3D Gaussian Splatting | null | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recent years have witnessed remarkable success of 3D Gaussian Splatting
(3DGS) in novel view synthesis, surpassing prior differentiable rendering
methods in both quality and efficiency. However, its training process suffers
from coupled opacity-color optimization that frequently converges to local
minima, producing floater artifacts that degrade visual fidelity. We present
StableGS, a framework that eliminates floaters through cross-view depth
consistency constraints while introducing a dual-opacity GS model to decouple
geometry and material properties of translucent objects. To further enhance
reconstruction quality in weakly-textured regions, we integrate DUSt3R depth
estimation, significantly improving geometric stability. Our method
fundamentally addresses 3DGS training instabilities, outperforming existing
state-of-the-art methods across open-source datasets.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 09:02:51 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 02:48:12 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Wang",
"Luchao",
""
],
[
"Ren",
"Qian",
""
],
[
"Liao",
"Kaimin",
""
],
[
"Wang",
"Hua",
""
],
[
"Chen",
"Zhi",
""
],
[
"Tang",
"Yaohua",
""
]
] | TITLE: StableGS: A Floater-Free Framework for 3D Gaussian Splatting
ABSTRACT: Recent years have witnessed remarkable success of 3D Gaussian Splatting
(3DGS) in novel view synthesis, surpassing prior differentiable rendering
methods in both quality and efficiency. However, its training process suffers
from coupled opacity-color optimization that frequently converges to local
minima, producing floater artifacts that degrade visual fidelity. We present
StableGS, a framework that eliminates floaters through cross-view depth
consistency constraints while introducing a dual-opacity GS model to decouple
geometry and material properties of translucent objects. To further enhance
reconstruction quality in weakly-textured regions, we integrate DUSt3R depth
estimation, significantly improving geometric stability. Our method
fundamentally addresses 3DGS training instabilities, outperforming existing
state-of-the-art methods across open-source datasets.
|
2503.18527 | Daniel Panangian | Soulaimene Turki, Daniel Panangian, Houda Chaabouni-Chouayakh, Ksenia
Bittner | AIM2PC: Aerial Image to 3D Building Point Cloud Reconstruction | Accepted to ISPRS Geospatial Week 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Three-dimensional urban reconstruction of buildings from single-view images
has attracted significant attention over the past two decades. However, recent
methods primarily focus on rooftops from aerial images, often overlooking
essential geometrical details. Additionally, there is a notable lack of
datasets containing complete 3D point clouds for entire buildings, along with
challenges in obtaining reliable camera pose information for aerial images.
This paper addresses these challenges by presenting a novel methodology, AIM2PC,
which utilizes our generated dataset that includes complete 3D point clouds
and determined camera poses. Our approach takes features from a single aerial
image as input and concatenates them with essential additional conditions, such
as binary masks and Sobel edge maps, to enable more edge-aware reconstruction.
By incorporating a point cloud diffusion model based on Centered denoising
Diffusion Probabilistic Models (CDPM), we project these concatenated features
onto the partially denoised point cloud using our camera poses at each
diffusion step. The proposed method is able to reconstruct the complete 3D
building point cloud, including wall information, and demonstrates superior
performance compared to existing baseline techniques. To allow further
comparisons with our methodology the dataset has been made available at
https://github.com/Soulaimene/AIM2PCDataset
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 10:34:07 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 09:44:41 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Turki",
"Soulaimene",
""
],
[
"Panangian",
"Daniel",
""
],
[
"Chaabouni-Chouayakh",
"Houda",
""
],
[
"Bittner",
"Ksenia",
""
]
] | TITLE: AIM2PC: Aerial Image to 3D Building Point Cloud Reconstruction
ABSTRACT: Three-dimensional urban reconstruction of buildings from single-view images
has attracted significant attention over the past two decades. However, recent
methods primarily focus on rooftops from aerial images, often overlooking
essential geometrical details. Additionally, there is a notable lack of
datasets containing complete 3D point clouds for entire buildings, along with
challenges in obtaining reliable camera pose information for aerial images.
This paper addresses these challenges by presenting a novel methodology, AIM2PC,
which utilizes our generated dataset that includes complete 3D point clouds
and determined camera poses. Our approach takes features from a single aerial
image as input and concatenates them with essential additional conditions, such
as binary masks and Sobel edge maps, to enable more edge-aware reconstruction.
By incorporating a point cloud diffusion model based on Centered denoising
Diffusion Probabilistic Models (CDPM), we project these concatenated features
onto the partially denoised point cloud using our camera poses at each
diffusion step. The proposed method is able to reconstruct the complete 3D
building point cloud, including wall information, and demonstrates superior
performance compared to existing baseline techniques. To allow further
comparisons with our methodology the dataset has been made available at
https://github.com/Soulaimene/AIM2PCDataset
|
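Illustrative sketch for the AIM2PC record above: building the Sobel edge map that the abstract lists as an additional condition alongside the binary mask. The point-cloud diffusion (CDPM) and camera-pose projection steps are not reproduced, and the concatenated conditioning layout is an assumption.

```python
# Hedged sketch: compute a Sobel edge-magnitude map from a grayscale aerial
# image and stack it with the image and a binary mask as conditioning input.
import torch
import torch.nn.functional as F

def sobel_edges(gray: torch.Tensor) -> torch.Tensor:
    """gray: (B, 1, H, W) grayscale image; returns edge magnitude of same shape."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

img = torch.rand(1, 1, 256, 256)
mask = (img > 0.5).float()                      # stand-in binary building mask
cond = torch.cat([img, mask, sobel_edges(img)], dim=1)  # (1, 3, 256, 256)
```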
2503.18584 | Zhiwei Shi | Zhiwei Shi, Chengxi Zhu, Fan Yang, Jun Yan, Zheyun Qin, Songquan Shi
and Zhumin Chen | A Universal Model Combining Differential Equations and Neural Networks
for Ball Trajectory Prediction | This submission was made without my advisor's consent, and I
mistakenly uploaded an incorrect version of the paper. Additionally, some
content in the paper should not be made publicly available at this time, as
per my advisor's wishes. I apologize for any inconvenience this may have
caused | null | null | null | cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | This paper presents a data-driven universal ball trajectory prediction method
integrated with physics equations. Existing methods are designed for specific
ball types and struggle to generalize. This challenge arises from three key
factors. First, learning-based models require large datasets but suffer from
accuracy drops in unseen scenarios. Second, physics-based models rely on
complex formulas and detailed inputs, yet accurately obtaining ball states,
such as spin, is often impractical. Third, integrating physical principles with
neural networks to achieve high accuracy, fast inference, and strong
generalization remains difficult. To address these issues, we propose an
innovative approach that incorporates physics-based equations and neural
networks. We first derive three generalized physical formulas. Then, using a
neural network and observed trajectory points, we infer certain parameters
while fitting the remaining ones. These formulas enable precise trajectory
prediction with minimal training data: only a few dozen samples. Extensive
experiments demonstrate our method's superiority in generalization, real-time
performance, and accuracy.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 11:41:47 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 10:50:57 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Shi",
"Zhiwei",
""
],
[
"Zhu",
"Chengxi",
""
],
[
"Yang",
"Fan",
""
],
[
"Yan",
"Jun",
""
],
[
"Qin",
"Zheyun",
""
],
[
"Shi",
"Songquan",
""
],
[
"Chen",
"Zhumin",
""
]
] | TITLE: A Universal Model Combining Differential Equations and Neural Networks
for Ball Trajectory Prediction
ABSTRACT: This paper presents a data-driven universal ball trajectory prediction method
integrated with physics equations. Existing methods are designed for specific
ball types and struggle to generalize. This challenge arises from three key
factors. First, learning-based models require large datasets but suffer from
accuracy drops in unseen scenarios. Second, physics-based models rely on
complex formulas and detailed inputs, yet accurately obtaining ball states,
such as spin, is often impractical. Third, integrating physical principles with
neural networks to achieve high accuracy, fast inference, and strong
generalization remains difficult. To address these issues, we propose an
innovative approach that incorporates physics-based equations and neural
networks. We first derive three generalized physical formulas. Then, using a
neural network and observed trajectory points, we infer certain parameters
while fitting the remaining ones. These formulas enable precise trajectory
prediction with minimal training data: only a few dozen samples. Extensive
experiments demonstrate our method's superiority in generalization, real-time
performance, and accuracy.
|
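Illustrative sketch for the record above: fitting the free parameters of a simple physical trajectory model to a handful of observed points and then extrapolating, which is the general recipe the abstract describes. A linear-drag projectile and scipy least squares stand in for the paper's three generalized formulas and neural parameter inference; all names and values are assumptions.

```python
# Hedged sketch: closed-form 2D projectile with linear drag, whose parameters
# (initial velocity and drag rate) are fitted to a few observed points.
import numpy as np
from scipy.optimize import least_squares

G = 9.81

def trajectory(params, t):
    vx0, vy0, k = params                       # initial velocity and drag rate
    x = (vx0 / k) * (1 - np.exp(-k * t))
    y = ((vy0 + G / k) / k) * (1 - np.exp(-k * t)) - (G / k) * t
    return np.stack([x, y], axis=1)

def fit(observed_t, observed_xy):
    resid = lambda p: (trajectory(p, observed_t) - observed_xy).ravel()
    return least_squares(resid, x0=[5.0, 5.0, 0.5]).x

# a few noisy observations over the first 0.3 s, then predict 1 s ahead
t_obs = np.linspace(0.0, 0.3, 8)
xy_obs = trajectory([6.0, 4.0, 0.3], t_obs) + 0.005 * np.random.randn(8, 2)
params = fit(t_obs, xy_obs)
pred = trajectory(params, np.linspace(0.0, 1.0, 50))
```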
2503.18673 | Taeyeop Lee | Taeyeop Lee, Bowen Wen, Minjun Kang, Gyuree Kang, In So Kweon, Kuk-Jin
Yoon | Any6D: Model-free 6D Pose Estimation of Novel Objects | CVPR 2025, Project Page: https://taeyeop.com/any6d | null | null | null | cs.CV cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Any6D, a model-free framework for 6D object pose estimation that
requires only a single RGB-D anchor image to estimate both the 6D pose and size
of unknown objects in novel scenes. Unlike existing methods that rely on
textured 3D models or multiple viewpoints, Any6D leverages a joint object
alignment process to enhance 2D-3D alignment and metric scale estimation for
improved pose accuracy. Our approach integrates a render-and-compare strategy
to generate and refine pose hypotheses, enabling robust performance in
scenarios with occlusions, non-overlapping views, diverse lighting conditions,
and large cross-environment variations. We evaluate our method on five
challenging datasets: REAL275, Toyota-Light, HO3D, YCBINEOAT, and LM-O,
demonstrating its effectiveness in significantly outperforming state-of-the-art
methods for novel object pose estimation. Project page:
https://taeyeop.com/any6d
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 13:46:21 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 06:18:47 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Lee",
"Taeyeop",
""
],
[
"Wen",
"Bowen",
""
],
[
"Kang",
"Minjun",
""
],
[
"Kang",
"Gyuree",
""
],
[
"Kweon",
"In So",
""
],
[
"Yoon",
"Kuk-Jin",
""
]
] | TITLE: Any6D: Model-free 6D Pose Estimation of Novel Objects
ABSTRACT: We introduce Any6D, a model-free framework for 6D object pose estimation that
requires only a single RGB-D anchor image to estimate both the 6D pose and size
of unknown objects in novel scenes. Unlike existing methods that rely on
textured 3D models or multiple viewpoints, Any6D leverages a joint object
alignment process to enhance 2D-3D alignment and metric scale estimation for
improved pose accuracy. Our approach integrates a render-and-compare strategy
to generate and refine pose hypotheses, enabling robust performance in
scenarios with occlusions, non-overlapping views, diverse lighting conditions,
and large cross-environment variations. We evaluate our method on five
challenging datasets: REAL275, Toyota-Light, HO3D, YCBINEOAT, and LM-O,
demonstrating its effectiveness in significantly outperforming state-of-the-art
methods for novel object pose estimation. Project page:
https://taeyeop.com/any6d
|
2503.18840 | Meva Himmetoglu | Meva Himmetoglu, Ilja Ciernik, Ender Konukoglu (for the Alzheimer's
Disease Neuroimaging Initiative) | Learning to segment anatomy and lesions from disparately labeled sources
in brain MRI | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Segmenting healthy tissue structures alongside lesions in brain Magnetic
Resonance Images (MRI) remains a challenge for today's algorithms due to
lesion-caused disruption of the anatomy and lack of jointly labeled training
datasets, where both healthy tissues and lesions are labeled on the same
images. In this paper, we propose a method that is robust to lesion-caused
disruptions and can be trained from disparately labeled training sets, i.e.,
without requiring jointly labeled samples, to automatically segment both. In
contrast to prior work, we decouple healthy tissue and lesion segmentation in
two paths to leverage multi-sequence acquisitions and merge information with an
attention mechanism. During inference, an image-specific adaptation reduces
adverse influences of lesion regions on healthy tissue predictions. During
training, the adaptation is taken into account through meta-learning and
co-training is used to learn from disparately labeled training images. Our
model shows an improved performance on several anatomical structures and
lesions on a publicly available brain glioblastoma dataset compared to the
state-of-the-art segmentation methods.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 16:13:04 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 10:52:26 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Himmetoglu",
"Meva",
"",
"for the Alzheimer's\n Disease Neuroimaging Initiative"
],
[
"Ciernik",
"Ilja",
"",
"for the Alzheimer's\n Disease Neuroimaging Initiative"
],
[
"Konukoglu",
"Ender",
"",
"for the Alzheimer's\n Disease Neuroimaging Initiative"
]
] | TITLE: Learning to segment anatomy and lesions from disparately labeled sources
in brain MRI
ABSTRACT: Segmenting healthy tissue structures alongside lesions in brain Magnetic
Resonance Images (MRI) remains a challenge for today's algorithms due to
lesion-caused disruption of the anatomy and lack of jointly labeled training
datasets, where both healthy tissues and lesions are labeled on the same
images. In this paper, we propose a method that is robust to lesion-caused
disruptions and can be trained from disparately labeled training sets, i.e.,
without requiring jointly labeled samples, to automatically segment both. In
contrast to prior work, we decouple healthy tissue and lesion segmentation in
two paths to leverage multi-sequence acquisitions and merge information with an
attention mechanism. During inference, an image-specific adaptation reduces
adverse influences of lesion regions on healthy tissue predictions. During
training, the adaptation is taken into account through meta-learning and
co-training is used to learn from disparately labeled training images. Our
model shows an improved performance on several anatomical structures and
lesions on a publicly available brain glioblastoma dataset compared to the
state-of-the-art segmentation methods.
|
2503.18854 | Kai Zeng | Ruichuan An, Sihan Yang, Ming Lu, Renrui Zhang, Kai Zeng, Yulin Luo,
Jiajun Cao, Hao Liang, Ying Chen, Qi She, Shanghang Zhang, Wentao Zhang | MC-LLaVA: Multi-Concept Personalized Vision-Language Model | I sincerely apologize for any inconvenience caused. We actually
uploaded this paper to arXiv in November 2024, as arXiv:2411.11706. During
this update, we did not consider the replacement operation of arXiv, which
led to duplicate submissions. We have made modifications at the original
address arXiv:2411.11706 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current vision-language models (VLMs) show exceptional abilities across
diverse tasks, such as visual question answering. To enhance user experience,
recent studies investigate VLM personalization to understand user-provided
concepts. However, they mainly focus on single-concept personalization,
neglecting the existence and interplay of multiple concepts, which limits
real-world applicability. This paper proposes the first multi-concept
personalization paradigm, MC-LLaVA. Specifically, MC-LLaVA employs a
multi-concept instruction tuning strategy, effectively integrating multiple
concepts in a single training step. To reduce the costs related to joint
training, we propose a personalized textual prompt that uses visual token
information to initialize concept tokens. Additionally, we introduce a
personalized visual prompt during inference, aggregating location confidence
maps for enhanced recognition and grounding capabilities. To advance
multi-concept personalization research, we further contribute a high-quality
instruction tuning dataset. We carefully collect images with multiple
characters and objects from movies and manually generate question-answer
samples for multi-concept scenarios, featuring superior diversity.
Comprehensive qualitative and quantitative experiments demonstrate that
MC-LLaVA can achieve impressive multi-concept personalized responses, paving
the way for VLMs to become better user-specific assistants. The code and
dataset will be publicly available at https://github.com/arctanxarc/MC-LLaVA.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 16:32:17 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 13:50:20 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"An",
"Ruichuan",
""
],
[
"Yang",
"Sihan",
""
],
[
"Lu",
"Ming",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Zeng",
"Kai",
""
],
[
"Luo",
"Yulin",
""
],
[
"Cao",
"Jiajun",
""
],
[
"Liang",
"Hao",
""
],
[
"Chen",
"Ying",
""
],
[
"She",
"Qi",
""
],
[
"Zhang",
"Shanghang",
""
],
[
"Zhang",
"Wentao",
""
]
] | TITLE: MC-LLaVA: Multi-Concept Personalized Vision-Language Model
ABSTRACT: Current vision-language models (VLMs) show exceptional abilities across
diverse tasks, such as visual question answering. To enhance user experience,
recent studies investigate VLM personalization to understand user-provided
concepts. However, they mainly focus on single-concept personalization,
neglecting the existence and interplay of multiple concepts, which limits
real-world applicability. This paper proposes the first multi-concept
personalization paradigm, MC-LLaVA. Specifically, MC-LLaVA employs a
multi-concept instruction tuning strategy, effectively integrating multiple
concepts in a single training step. To reduce the costs related to joint
training, we propose a personalized textual prompt that uses visual token
information to initialize concept tokens. Additionally, we introduce a
personalized visual prompt during inference, aggregating location confidence
maps for enhanced recognition and grounding capabilities. To advance
multi-concept personalization research, we further contribute a high-quality
instruction tuning dataset. We carefully collect images with multiple
characters and objects from movies and manually generate question-answer
samples for multi-concept scenarios, featuring superior diversity.
Comprehensive qualitative and quantitative experiments demonstrate that
MC-LLaVA can achieve impressive multi-concept personalized responses, paving
the way for VLMs to become better user-specific assistants. The code and
dataset will be publicly available at https://github.com/arctanxarc/MC-LLaVA.
|
2503.18957 | Yixuan Wang | Yixuan Wang, Paul Stynes, Pramod Pathak, Cristina Muntean | A Real-Time Human Action Recognition Model for Assisted Living | 12 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensuring the safety and well-being of elderly and vulnerable populations in
assisted living environments is a critical concern. Computer vision presents an
innovative and powerful approach to predicting health risks through video
monitoring, employing human action recognition (HAR) technology. However,
real-time prediction of human actions with high performance and efficiency is a
challenge. This research proposes a real-time human action recognition model
that combines a deep learning model and a live video prediction and alert
system, in order to predict falls, staggering and chest pain for residents in
assisted living. Six thousand RGB video samples from the NTU RGB+D 60 dataset
were selected to create a dataset with four classes: Falling, Staggering, Chest
Pain, and Normal, with the Normal class comprising 40 daily activities.
A transfer learning technique was applied to train four state-of-the-art HAR
models on a GPU server, namely, UniFormerV2, TimeSformer, I3D, and SlowFast.
Results of the four models are presented in this paper based on class-wise and
macro performance metrics, inference efficiency, model complexity and
computational costs. TimeSformer is proposed for developing the real-time human
action recognition model, leveraging its leading macro F1 score (95.33%),
recall (95.49%), and precision (95.19%) along with significantly higher
inference throughput compared to the others. This research provides insights to
enhance safety and health of the elderly and people with chronic illnesses in
assisted living environments, fostering sustainable care, smarter communities
and industry innovation.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 20:22:17 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Wang",
"Yixuan",
""
],
[
"Stynes",
"Paul",
""
],
[
"Pathak",
"Pramod",
""
],
[
"Muntean",
"Cristina",
""
]
] | TITLE: A Real-Time Human Action Recognition Model for Assisted Living
ABSTRACT: Ensuring the safety and well-being of elderly and vulnerable populations in
assisted living environments is a critical concern. Computer vision presents an
innovative and powerful approach to predicting health risks through video
monitoring, employing human action recognition (HAR) technology. However,
real-time prediction of human actions with high performance and efficiency is a
challenge. This research proposes a real-time human action recognition model
that combines a deep learning model and a live video prediction and alert
system, in order to predict falls, staggering and chest pain for residents in
assisted living. Six thousand RGB video samples from the NTU RGB+D 60 dataset
were selected to create a dataset with four classes: Falling, Staggering, Chest
Pain, and Normal, with the Normal class comprising 40 daily activities.
A transfer learning technique was applied to train four state-of-the-art HAR
models on a GPU server, namely, UniFormerV2, TimeSformer, I3D, and SlowFast.
Results of the four models are presented in this paper based on class-wise and
macro performance metrics, inference efficiency, model complexity and
computational costs. TimeSformer is proposed for developing the real-time human
action recognition model, leveraging its leading macro F1 score (95.33%),
recall (95.49%), and precision (95.19%) along with significantly higher
inference throughput compared to the others. This research provides insights to
enhance safety and health of the elderly and people with chronic illnesses in
assisted living environments, fostering sustainable care, smarter communities
and industry innovation.
|
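Illustrative note on the record above (2503.18957): the described pipeline is transfer learning of pretrained video models such as TimeSformer onto four action classes. The sketch below is not the authors' code; the Hugging Face class and checkpoint names, the 8-frame clip shape, and the dummy batch are assumptions standing in for the NTU RGB+D data pipeline.

```python
# Illustrative sketch: fine-tuning a pretrained TimeSformer for 4 action classes
# (Falling, Staggering, Chest Pain, Normal). Checkpoint name and clip shape are assumptions.
import torch
from transformers import TimesformerForVideoClassification

model = TimesformerForVideoClassification.from_pretrained(
    "facebook/timesformer-base-finetuned-k400",   # assumed public checkpoint
    num_labels=4,
    ignore_mismatched_sizes=True,                 # swap the Kinetics head for a 4-class head
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Dummy batch standing in for a real video dataloader:
# (batch, frames, channels, height, width)
clips = torch.randn(2, 8, 3, 224, 224)
labels = torch.tensor([0, 3])

model.train()
outputs = model(pixel_values=clips, labels=labels)  # cross-entropy loss computed internally
outputs.loss.backward()
optimizer.step()
```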
2503.18973 | Muhammad Ahmad | Muhammad Ahmad, Sardar Usman, Ildar Batyrshin, Muhammad Muzammil, K.
Sajid, M. Hasnain, Muhammad Jalal, and Grigori Sidorov | Automated diagnosis of lung diseases using vision transformer: a
comparative study on chest x-ray classification | null | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Lung disease is a significant health issue, particularly in
children and elderly individuals. It often results from lung infections and is
one of the leading causes of mortality in children. Globally, lung-related
diseases claim many lives each year, making early and accurate diagnoses
crucial. Radiographs are valuable tools for the diagnosis of such conditions.
The most prevalent lung diseases, including pneumonia, asthma, allergies,
chronic obstructive pulmonary disease (COPD), bronchitis, emphysema, and lung
cancer, represent significant public health challenges. Early prediction of
these conditions is critical, as it allows for the identification of risk
factors and implementation of preventive measures to reduce the likelihood of
disease onset.
Methods: In this study, we utilized a dataset comprising 3,475 chest X-ray
images sourced from Mendeley Data provided by Talukder, M. A. (2023) [14],
categorized into three classes: normal, lung opacity, and pneumonia. We applied
five pre-trained deep learning models, including CNN, ResNet50, DenseNet,
CheXNet, and U-Net, as well as two transfer learning algorithms such as Vision
Transformer (ViT) and Shifted Window (Swin) to classify these images. This
approach aims to address diagnostic issues in lung abnormalities by reducing
reliance on human intervention through automated classification systems. Our
analysis was conducted in both binary and multiclass settings. Results: In the
binary classification, we focused on distinguishing between normal and viral
pneumonia cases, whereas in the multi-class classification, all three classes
(normal, lung opacity, and viral pneumonia) were included. Our proposed
methodology (ViT) achieved remarkable performance, with accuracy rates of 99%
for binary classification and 95.25% for multiclass classification.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 04:35:17 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Ahmad",
"Muhammad",
""
],
[
"Usman",
"Sardar",
""
],
[
"Batyrshin",
"Ildar",
""
],
[
"Muzammil",
"Muhammad",
""
],
[
"Sajid",
"K.",
""
],
[
"Hasnain",
"M.",
""
],
[
"Jalal",
"Muhammad",
""
],
[
"Sidorov",
"Grigori",
""
]
] | TITLE: Automated diagnosis of lung diseases using vision transformer: a
comparative study on chest x-ray classification
ABSTRACT: Background: Lung disease is a significant health issue, particularly in
children and elderly individuals. It often results from lung infections and is
one of the leading causes of mortality in children. Globally, lung-related
diseases claim many lives each year, making early and accurate diagnoses
crucial. Radiographs are valuable tools for the diagnosis of such conditions.
The most prevalent lung diseases, including pneumonia, asthma, allergies,
chronic obstructive pulmonary disease (COPD), bronchitis, emphysema, and lung
cancer, represent significant public health challenges. Early prediction of
these conditions is critical, as it allows for the identification of risk
factors and implementation of preventive measures to reduce the likelihood of
disease onset.
Methods: In this study, we utilized a dataset comprising 3,475 chest X-ray
images sourced from Mendeley Data provided by Talukder, M. A. (2023) [14],
categorized into three classes: normal, lung opacity, and pneumonia. We applied
five pre-trained deep learning models, including CNN, ResNet50, DenseNet,
CheXNet, and U-Net, as well as two transfer learning algorithms such as Vision
Transformer (ViT) and Shifted Window (Swin) to classify these images. This
approach aims to address diagnostic issues in lung abnormalities by reducing
reliance on human intervention through automated classification systems. Our
analysis was conducted in both binary and multiclass settings. Results: In the
binary classification, we focused on distinguishing between normal and viral
pneumonia cases, whereas in the multi-class classification, all three classes
(normal, lung opacity, and viral pneumonia) were included. Our proposed
methodology (ViT) achieved remarkable performance, with accuracy rates of 99%
for binary classification and 95.25% for multiclass classification.
|
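Illustrative note on the record above (2503.18973): the core recipe is fine-tuning a pretrained Vision Transformer with a new 3-class head. A minimal sketch using the torchvision ViT-B/16 weights is shown below; the optimizer settings and the dummy batch are placeholders, not the paper's configuration.

```python
# Illustrative sketch: adapting an ImageNet-pretrained ViT-B/16 to a 3-class
# chest X-ray problem (normal, lung opacity, pneumonia). Not the paper's code;
# hyperparameters and the dummy batch are placeholders.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 3)  # new 3-class head

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)   # stand-in for a DataLoader batch
labels = torch.tensor([0, 1, 2, 1])

model.train()
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```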
2503.18982 | Liang Zhang | Liang Zhang, Jionghao Lin, John Sabatini, Diego Zapata-Rivera, Carol
Forsyth, Yang Jiang, John Hollander, Xiangen Hu, Arthur C. Graesser | Generative Data Imputation for Sparse Learner Performance Data Using
Generative Adversarial Imputation Networks | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Learner performance data collected by Intelligent Tutoring Systems (ITSs),
such as responses to questions, is essential for modeling and predicting
learners' knowledge states. However, missing responses due to skips or
incomplete attempts create data sparsity, challenging accurate assessment and
personalized instruction. To address this, we propose a generative imputation
approach using Generative Adversarial Imputation Networks (GAIN). Our method
features a three-dimensional (3D) framework (learners, questions, and
attempts), flexibly accommodating various sparsity levels. Enhanced by
convolutional neural networks and optimized with a least squares loss function,
the GAIN-based method aligns input and output dimensions to question-attempt
matrices along the learners' dimension. Extensive experiments using datasets
from AutoTutor Adult Reading Comprehension (ARC), ASSISTments, and MATHia
demonstrate that our approach significantly outperforms tensor factorization
and alternative GAN methods in imputation accuracy across different attempt
scenarios. Bayesian Knowledge Tracing (BKT) further validates the effectiveness
of the imputed data by estimating learning parameters: initial knowledge
(P(L0)), learning rate (P(T)), guess rate (P(G)), and slip rate (P(S)). Results
indicate the imputed data enhances model fit and closely mirrors original
distributions, capturing underlying learning behaviors reliably.
Kullback-Leibler (KL) divergence assessments confirm minimal divergence,
showing the imputed data preserves essential learning characteristics
effectively. These findings underscore GAIN's capability as a robust imputation
tool in ITSs, alleviating data sparsity and supporting adaptive, individualized
instruction, ultimately leading to more precise and responsive learner
assessments and improved educational outcomes.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 06:11:53 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Zhang",
"Liang",
""
],
[
"Lin",
"Jionghao",
""
],
[
"Sabatini",
"John",
""
],
[
"Zapata-Rivera",
"Diego",
""
],
[
"Forsyth",
"Carol",
""
],
[
"Jiang",
"Yang",
""
],
[
"Hollander",
"John",
""
],
[
"Hu",
"Xiangen",
""
],
[
"Graesser",
"Arthur C.",
""
]
] | TITLE: Generative Data Imputation for Sparse Learner Performance Data Using
Generative Adversarial Imputation Networks
ABSTRACT: Learner performance data collected by Intelligent Tutoring Systems (ITSs),
such as responses to questions, is essential for modeling and predicting
learners' knowledge states. However, missing responses due to skips or
incomplete attempts create data sparsity, challenging accurate assessment and
personalized instruction. To address this, we propose a generative imputation
approach using Generative Adversarial Imputation Networks (GAIN). Our method
features a three-dimensional (3D) framework (learners, questions, and
attempts), flexibly accommodating various sparsity levels. Enhanced by
convolutional neural networks and optimized with a least squares loss function,
the GAIN-based method aligns input and output dimensions to question-attempt
matrices along the learners' dimension. Extensive experiments using datasets
from AutoTutor Adult Reading Comprehension (ARC), ASSISTments, and MATHia
demonstrate that our approach significantly outperforms tensor factorization
and alternative GAN methods in imputation accuracy across different attempt
scenarios. Bayesian Knowledge Tracing (BKT) further validates the effectiveness
of the imputed data by estimating learning parameters: initial knowledge
(P(L0)), learning rate (P(T)), guess rate (P(G)), and slip rate (P(S)). Results
indicate the imputed data enhances model fit and closely mirrors original
distributions, capturing underlying learning behaviors reliably.
Kullback-Leibler (KL) divergence assessments confirm minimal divergence,
showing the imputed data preserves essential learning characteristics
effectively. These findings underscore GAIN's capability as a robust imputation
tool in ITSs, alleviating data sparsity and supporting adaptive, individualized
instruction, ultimately leading to more precise and responsive learner
assessments and improved educational outcomes.
|
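Illustrative note on the record above (2503.18982): GAIN imputes missing entries by letting a generator fill masked values while a discriminator guesses, per entry, which values were observed, aided by a hint matrix. The sketch below is a minimal 2-D (learner-by-question) GAIN-style loop with MLPs and a simplified hint; the paper's 3-D convolutional, least-squares variant is not reproduced.

```python
# Minimal GAIN-style imputation sketch for a 2-D learner-by-question response
# matrix (one attempt slice). MLP networks, a simplified hint mechanism, and the
# standard GAIN losses; not the paper's 3-D convolutional, least-squares variant.
import torch
import torch.nn as nn

n_learners, n_questions, hidden = 64, 20, 32
data = torch.randint(0, 2, (n_learners, n_questions)).float()   # 1 = correct, 0 = incorrect
mask = (torch.rand_like(data) > 0.4).float()                    # 1 = observed, 0 = missing

G = nn.Sequential(nn.Linear(2 * n_questions, hidden), nn.ReLU(),
                  nn.Linear(hidden, n_questions), nn.Sigmoid())
D = nn.Sequential(nn.Linear(2 * n_questions, hidden), nn.ReLU(),
                  nn.Linear(hidden, n_questions), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(200):
    noise = torch.rand_like(data)
    x_tilde = mask * data + (1 - mask) * noise          # keep observed entries, noise elsewhere
    imputed = G(torch.cat([x_tilde, mask], dim=1))
    x_hat = mask * data + (1 - mask) * imputed          # completed matrix

    # Simplified hint: reveal most of the observed-entry pattern to the discriminator.
    hint = mask * (torch.rand_like(mask) < 0.9).float()

    # Discriminator tries to tell observed from imputed entries.
    d_prob = D(torch.cat([x_hat.detach(), hint], dim=1))
    loss_d = bce(d_prob, mask)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D on missing entries + reconstruct observed entries.
    d_prob = D(torch.cat([x_hat, hint], dim=1))
    adv = -((1 - mask) * torch.log(d_prob + 1e-8)).mean()
    rec = ((mask * (imputed - data)) ** 2).mean()
    loss_g = adv + 10.0 * rec
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```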
2503.18986 | Jian Ma | Jian Ma, Xinchen Lyu, Jun Jiang, Qimei Cui, Haipeng Yao, Xiaofeng Tao | SplitFrozen: Split Learning with Device-side Model Frozen for
Fine-Tuning LLM on Heterogeneous Resource-Constrained Devices | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-tuning large language models (LLMs) on private, on-device data can
empower tailored personalized AI agents. However, fine-tuning LLMs on
resource-constrained edge devices faces significant challenges, including
excessive computation overhead, device heterogeneity, and data imbalance. This
paper proposes SplitFrozen, a split learning framework that enables efficient
LLM fine-tuning by strategically freezing device-side model layers while
centralizing parameter-efficient fine-tuning on the server. Our framework
partitions LLMs into device-side frozen layers and server-side fine-tuning
layers, where heterogeneous resource-constrained devices execute only forward
propagation. To minimize server-side training costs, we integrate Low-Rank
Adaptation (LoRA) into the server-side layers. A pipeline parallelism strategy
further optimizes training efficiency by decoupling device-server computations
and leveraging decomposed backward propagation. Experiments on GPT-2 with the
MRPC, MNLI-matched, and SST-2 datasets demonstrate that SplitFrozen outperforms
FedLoRA and SplitLoRA by 69.4\% model accuracy under extremely imbalanced data,
while reducing up to 86.8\% device-side computations and 50.2\% total training
time. Experiments also validate the scalability of SplitFrozen on a content
generation task using the Llama-3.2 model on the GSM8K dataset.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 08:03:44 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Ma",
"Jian",
""
],
[
"Lyu",
"Xinchen",
""
],
[
"Jiang",
"Jun",
""
],
[
"Cui",
"Qimei",
""
],
[
"Yao",
"Haipeng",
""
],
[
"Tao",
"Xiaofeng",
""
]
] | TITLE: SplitFrozen: Split Learning with Device-side Model Frozen for
Fine-Tuning LLM on Heterogeneous Resource-Constrained Devices
ABSTRACT: Fine-tuning large language models (LLMs) on private, on-device data can
empower tailored personalized AI agents. However, fine-tuning LLMs on
resource-constrained edge devices faces significant challenges, including
excessive computation overhead, device heterogeneity, and data imbalance. This
paper proposes SplitFrozen, a split learning framework that enables efficient
LLM fine-tuning by strategically freezing device-side model layers while
centralizing parameter-efficient fine-tuning on the server. Our framework
partitions LLMs into device-side frozen layers and server-side fine-tuning
layers, where heterogeneous resource-constrained devices execute only forward
propagation. To minimize server-side training costs, we integrate Low-Rank
Adaptation (LoRA) into the server-side layers. A pipeline parallelism strategy
further optimizes training efficiency by decoupling device-server computations
and leveraging decomposed backward propagation. Experiments on GPT-2 with the
MRPC, MNLI-matched, and SST-2 datasets demonstrate that SplitFrozen outperforms
FedLoRA and SplitLoRA by 69.4\% model accuracy under extremely imbalanced data,
while reducing up to 86.8\% device-side computations and 50.2\% total training
time. Experiments also validate the scalability of SplitFrozen on a content
generation task using the Llama-3.2 model on the GSM8K dataset.
|
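Illustrative note on the record above (2503.18986): the split keeps early layers frozen on the device (forward pass only) while the server trains LoRA adapters on the remaining layers. The sketch below uses tiny MLP blocks and an assumed split point; it is a conceptual outline, not the paper's GPT-2/Llama-3.2 setup.

```python
# Conceptual sketch of split learning with a frozen device-side stack and
# LoRA-adapted server-side layers. The tiny MLP stand-in, split point, and
# LoRA rank are placeholders.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# Device side: frozen layers, forward pass only (no gradients, low compute).
device_side = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
for p in device_side.parameters():
    p.requires_grad_(False)

# Server side: remaining layers wrapped with trainable LoRA adapters plus a task head.
server_side = nn.Sequential(LoRALinear(nn.Linear(32, 32)), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.AdamW(
    [p for p in server_side.parameters() if p.requires_grad], lr=1e-3)

x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
with torch.no_grad():                  # device computes and transmits activations only
    activations = device_side(x)
logits = server_side(activations)      # server fine-tunes its own (LoRA) parameters
loss = nn.functional.cross_entropy(logits, y)
loss.backward()
optimizer.step()
```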
2503.18991 | Rosy Cheng | Ruoxi Cheng, Shuirong Cao | SRMIR: Shadow Reward Models Based on Introspective Reasoning for LLM
Alignment | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aligning large language models (LLMs) with human preferences and values is
vital for application. However, current alignment methods face three main
limitations: (1) reliance on costly human annotation; (2) alignment tax; (3)
shallow alignment vulnerable to jailbreak attacks. Additionally, current
alignment datasets often suffer from uneven distributions, leading to
overrepresentation of some topics and neglect of others. To address these
issues, we propose SRMIR (Shadow Reward Models Based on Introspective
Reasoning), inspired by shadow models in membership inference attacks. We first
construct a balanced safety Chain of Draft (CoD) dataset across $7$ harmful
types with structured prompt leveraging the introspective reasoning
capabilities of LLMs, then train a set of specialized reward models to guide
policy optimization through Group Relative Policy Optimization (GRPO). We apply
two strategies, linear combination and categorized approach, to integrate
shadow reward models for policy optimization. By comparison, we find that the
latter achieves superior alignment despite higher computational costs.
Experiments across several LLMs demonstrate SRMIR significantly outperforms
existing methods.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 16:40:29 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Cheng",
"Ruoxi",
""
],
[
"Cao",
"Shuirong",
""
]
] | TITLE: SRMIR: Shadow Reward Models Based on Introspective Reasoning for LLM
Alignment
ABSTRACT: Aligning large language models (LLMs) with human preferences and values is
vital for application. However, current alignment methods face three main
limitations: (1) reliance on costly human annotation; (2) alignment tax; (3)
shallow alignment vulnerable to jailbreak attacks. Additionally, current
alignment datasets often suffer from uneven distributions, leading to
overrepresentation of some topics and neglect of others. To address these
issues, we propose SRMIR (Shadow Reward Models Based on Introspective
Reasoning), inspired by shadow models in membership inference attacks. We first
construct a balanced safety Chain of Draft (CoD) dataset across $7$ harmful
types with structured prompt leveraging the introspective reasoning
capabilities of LLMs, then train a set of specialized reward models to guide
policy optimization through Group Relative Policy Optimization (GRPO). We apply
two strategies, linear combination and categorized approach, to integrate
shadow reward models for policy optimization. By comparison, we find that the
latter achieves superior alignment despite higher computational costs.
Experiments across several LLMs demonstrate SRMIR significantly outperforms
existing methods.
|
2503.18996 | Jos\'e Alberto Ben\'itez-Andrades Ph.D. | Jos\'e Alberto Ben\'itez-Andrades, Camino Prada-Garc\'ia, Nicol\'as
Ord\'as-Reyes, Marta Esteban Blanco, Alicia Merayo, Antonio Serrano-Garc\'ia | Enhanced prediction of spine surgery outcomes using advanced machine
learning techniques and oversampling methods | null | Health Inf Sci Syst 13, 24 (2025) | 10.1007/s13755-025-00343-9 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The study proposes an advanced machine learning approach to predict spine
surgery outcomes by incorporating oversampling techniques and grid search
optimization. A variety of models including GaussianNB, ComplementNB, KNN,
Decision Tree, and optimized versions with RandomOverSampler and SMOTE were
tested on a dataset of 244 patients, which included pre-surgical, psychometric,
socioeconomic, and analytical variables. The enhanced KNN models achieved up to
76% accuracy and a 67% F1-score, while grid-search optimization further
improved performance. The findings underscore the potential of these advanced
techniques to aid healthcare professionals in decision-making, with future
research needed to refine these models on larger and more diverse datasets.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 22:39:19 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Benítez-Andrades",
"José Alberto",
""
],
[
"Prada-García",
"Camino",
""
],
[
"Ordás-Reyes",
"Nicolás",
""
],
[
"Blanco",
"Marta Esteban",
""
],
[
"Merayo",
"Alicia",
""
],
[
"Serrano-García",
"Antonio",
""
]
] | TITLE: Enhanced prediction of spine surgery outcomes using advanced machine
learning techniques and oversampling methods
ABSTRACT: The study proposes an advanced machine learning approach to predict spine
surgery outcomes by incorporating oversampling techniques and grid search
optimization. A variety of models including GaussianNB, ComplementNB, KNN,
Decision Tree, and optimized versions with RandomOverSampler and SMOTE were
tested on a dataset of 244 patients, which included pre-surgical, psychometric,
socioeconomic, and analytical variables. The enhanced KNN models achieved up to
76% accuracy and a 67% F1-score, while grid-search optimization further
improved performance. The findings underscore the potential of these advanced
techniques to aid healthcare professionals in decision-making, with future
research needed to refine these models on larger and more diverse datasets.
|
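Illustrative note on the record above (2503.18996): the modelling recipe, oversampling plus KNN with grid-search optimization, maps directly onto standard scikit-learn/imbalanced-learn tooling. A hedged sketch on synthetic data follows; the clinical features and the chosen hyperparameter grid are placeholders.

```python
# Hedged sketch of the recipe described above: SMOTE oversampling combined with
# a KNN classifier and grid-search optimization. Synthetic data stands in for
# the 244-patient clinical dataset, which is not reproduced here.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=244, n_features=20, weights=[0.75, 0.25], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

pipeline = Pipeline([
    ("smote", SMOTE(random_state=0)),      # oversample the minority outcome in training folds only
    ("knn", KNeighborsClassifier()),
])
param_grid = {
    "knn__n_neighbors": [3, 5, 7, 9],
    "knn__weights": ["uniform", "distance"],
}
search = GridSearchCV(pipeline, param_grid, scoring="f1_macro", cv=5)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print(classification_report(y_test, search.predict(X_test)))
```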
2503.18997 | Tonmoy Ghosh | Tonmoy Ghosh and Edward Sazonov | Improving Food Image Recognition with Noisy Vision Transformer | null | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Food image recognition is a challenging task in computer vision due to the
high variability and complexity of food images. In this study, we investigate
the potential of Noisy Vision Transformers (NoisyViT) for improving food
classification performance. By introducing noise into the learning process,
NoisyViT reduces task complexity and adjusts the entropy of the system, leading
to enhanced model accuracy. We fine-tune NoisyViT on three benchmark datasets:
Food2K (2,000 categories, ~1M images), Food-101 (101 categories, ~100K images),
and CNFOOD-241 (241 categories, ~190K images). The performance of NoisyViT is
evaluated against state-of-the-art food recognition models. Our results
demonstrate that NoisyViT achieves Top-1 accuracies of 95%, 99.5%, and 96.6% on
Food2K, Food-101, and CNFOOD-241, respectively, significantly outperforming
existing approaches. This study underscores the potential of NoisyViT for
dietary assessment, nutritional monitoring, and healthcare applications, paving
the way for future advancements in vision-based food computing. Code for
reproducing NoisyViT for food recognition is available at NoisyViT_Food.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 03:03:00 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Ghosh",
"Tonmoy",
""
],
[
"Sazonov",
"Edward",
""
]
] | TITLE: Improving Food Image Recognition with Noisy Vision Transformer
ABSTRACT: Food image recognition is a challenging task in computer vision due to the
high variability and complexity of food images. In this study, we investigate
the potential of Noisy Vision Transformers (NoisyViT) for improving food
classification performance. By introducing noise into the learning process,
NoisyViT reduces task complexity and adjusts the entropy of the system, leading
to enhanced model accuracy. We fine-tune NoisyViT on three benchmark datasets:
Food2K (2,000 categories, ~1M images), Food-101 (101 categories, ~100K images),
and CNFOOD-241 (241 categories, ~190K images). The performance of NoisyViT is
evaluated against state-of-the-art food recognition models. Our results
demonstrate that NoisyViT achieves Top-1 accuracies of 95%, 99.5%, and 96.6% on
Food2K, Food-101, and CNFOOD-241, respectively, significantly outperforming
existing approaches. This study underscores the potential of NoisyViT for
dietary assessment, nutritional monitoring, and healthcare applications, paving
the way for future advancements in vision-based food computing. Code for
reproducing NoisyViT for food recognition is available at NoisyViT_Food.
|
2503.19001 | Kangwei Liu | Kangwei Liu, Junwu Liu, Yun Cao, Jinlin Guo, Xiaowei Yi | DisentTalk: Cross-lingual Talking Face Generation via Semantic
Disentangled Diffusion Model | null | Accepted by ICME 2025 | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in talking face generation have significantly improved facial
animation synthesis. However, existing approaches face fundamental limitations:
3DMM-based methods maintain temporal consistency but lack fine-grained regional
control, while Stable Diffusion-based methods enable spatial manipulation but
suffer from temporal inconsistencies. The integration of these approaches is
hindered by incompatible control mechanisms and semantic entanglement of facial
representations. This paper presents DisentTalk, introducing a data-driven
semantic disentanglement framework that decomposes 3DMM expression parameters
into meaningful subspaces for fine-grained facial control. Building upon this
disentangled representation, we develop a hierarchical latent diffusion
architecture that operates in 3DMM parameter space, integrating region-aware
attention mechanisms to ensure both spatial precision and temporal coherence.
To address the scarcity of high-quality Chinese training data, we introduce
CHDTF, a Chinese high-definition talking face dataset. Extensive experiments
show superior performance over existing methods across multiple metrics,
including lip synchronization, expression quality, and temporal consistency.
Project Page: https://kangweiiliu.github.io/DisentTalk.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 11:46:34 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Liu",
"Kangwei",
""
],
[
"Liu",
"Junwu",
""
],
[
"Cao",
"Yun",
""
],
[
"Guo",
"Jinlin",
""
],
[
"Yi",
"Xiaowei",
""
]
] | TITLE: DisentTalk: Cross-lingual Talking Face Generation via Semantic
Disentangled Diffusion Model
ABSTRACT: Recent advances in talking face generation have significantly improved facial
animation synthesis. However, existing approaches face fundamental limitations:
3DMM-based methods maintain temporal consistency but lack fine-grained regional
control, while Stable Diffusion-based methods enable spatial manipulation but
suffer from temporal inconsistencies. The integration of these approaches is
hindered by incompatible control mechanisms and semantic entanglement of facial
representations. This paper presents DisentTalk, introducing a data-driven
semantic disentanglement framework that decomposes 3DMM expression parameters
into meaningful subspaces for fine-grained facial control. Building upon this
disentangled representation, we develop a hierarchical latent diffusion
architecture that operates in 3DMM parameter space, integrating region-aware
attention mechanisms to ensure both spatial precision and temporal coherence.
To address the scarcity of high-quality Chinese training data, we introduce
CHDTF, a Chinese high-definition talking face dataset. Extensive experiments
show superior performance over existing methods across multiple metrics,
including lip synchronization, expression quality, and temporal consistency.
Project Page: https://kangweiiliu.github.io/DisentTalk.
|
2503.19005 | Abdul Qayyum | Abdul Qayyum, Moona Mazher, Devran Ugurlu, Jose Alonso Solis Lemus,
Cristobal Rodero, Steven A Niederer | Foundation Model for Whole-Heart Segmentation: Leveraging
Student-Teacher Learning in Multi-Modal Medical Imaging | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Whole-heart segmentation from CT and MRI scans is crucial for cardiovascular
disease analysis, yet existing methods struggle with modality-specific biases
and the need for extensive labeled datasets. To address these challenges, we
propose a foundation model for whole-heart segmentation using a self-supervised
learning (SSL) framework based on a student-teacher architecture. Our model is
pretrained on a large, unlabeled dataset of CT and MRI scans, leveraging the
xLSTM backbone to capture long-range spatial dependencies and complex
anatomical structures in 3D medical images. By incorporating multi-modal
pretraining, our approach ensures strong generalization across both CT and MRI
modalities, mitigating modality-specific variations and improving segmentation
accuracy in diverse clinical settings. The use of large-scale unlabeled data
significantly reduces the dependency on manual annotations, enabling robust
performance even with limited labeled data. We further introduce an
xLSTM-UNet-based architecture for downstream whole-heart segmentation tasks,
demonstrating its effectiveness on few-label CT and MRI datasets. Our results
validate the robustness and adaptability of the proposed model, highlighting
its potential for advancing automated whole-heart segmentation in medical
imaging.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 14:47:54 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Qayyum",
"Abdul",
""
],
[
"Mazher",
"Moona",
""
],
[
"Ugurlu",
"Devran",
""
],
[
"Lemus",
"Jose Alonso Solis",
""
],
[
"Rodero",
"Cristobal",
""
],
[
"Niederer",
"Steven A",
""
]
] | TITLE: Foundation Model for Whole-Heart Segmentation: Leveraging
Student-Teacher Learning in Multi-Modal Medical Imaging
ABSTRACT: Whole-heart segmentation from CT and MRI scans is crucial for cardiovascular
disease analysis, yet existing methods struggle with modality-specific biases
and the need for extensive labeled datasets. To address these challenges, we
propose a foundation model for whole-heart segmentation using a self-supervised
learning (SSL) framework based on a student-teacher architecture. Our model is
pretrained on a large, unlabeled dataset of CT and MRI scans, leveraging the
xLSTM backbone to capture long-range spatial dependencies and complex
anatomical structures in 3D medical images. By incorporating multi-modal
pretraining, our approach ensures strong generalization across both CT and MRI
modalities, mitigating modality-specific variations and improving segmentation
accuracy in diverse clinical settings. The use of large-scale unlabeled data
significantly reduces the dependency on manual annotations, enabling robust
performance even with limited labeled data. We further introduce an
xLSTM-UNet-based architecture for downstream whole-heart segmentation tasks,
demonstrating its effectiveness on few-label CT and MRI datasets. Our results
validate the robustness and adaptability of the proposed model, highlighting
its potential for advancing automated whole-heart segmentation in medical
imaging.
|
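Illustrative note on the record above (2503.19005): the pretraining relies on a student-teacher scheme over unlabeled scans. The sketch below shows the generic pattern, an EMA teacher providing consistency targets for two views, with tiny 2-D CNNs and trivial augmentations as placeholders for the paper's 3-D xLSTM backbone.

```python
# Generic student-teacher self-supervised step: the teacher is an exponential
# moving average (EMA) of the student and provides targets for a consistency
# loss on two augmented views. Tiny CNNs and crude "augmentations" are
# placeholders for the paper's xLSTM-based 3D backbone and CT/MRI pipeline.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 16))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-3)

@torch.no_grad()
def ema_update(momentum: float = 0.996):
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

scan = torch.randn(4, 1, 64, 64)                 # unlabeled 2D slices as a stand-in
view1 = scan + 0.05 * torch.randn_like(scan)     # two cheap "augmentations"
view2 = torch.flip(scan, dims=[-1])

z_student = F.normalize(student(view1), dim=-1)
with torch.no_grad():
    z_teacher = F.normalize(teacher(view2), dim=-1)

loss = (2 - 2 * (z_student * z_teacher).sum(dim=-1)).mean()   # cosine-style consistency
loss.backward()
optimizer.step()
ema_update()
```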
2503.19012 | Lidong Wang | Lingyan Ran, Lidong Wang, Guangcong Wang, Peng Wang, Yanning Zhang | DiffV2IR: Visible-to-Infrared Diffusion Model via Vision-Language
Understanding | Project page: https://diffv2ir.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of translating visible-to-infrared images (V2IR) is inherently
challenging due to three main obstacles: 1) achieving semantic-aware
translation, 2) managing the diverse wavelength spectrum in infrared imagery,
and 3) the scarcity of comprehensive infrared datasets. Current leading methods
tend to treat V2IR as a conventional image-to-image synthesis challenge, often
overlooking these specific issues. To address this, we introduce DiffV2IR, a
novel framework for image translation comprising two key elements: a
Progressive Learning Module (PLM) and a Vision-Language Understanding Module
(VLUM). PLM features an adaptive diffusion model architecture that leverages
multi-stage knowledge learning for the infrared transition from full-range to target
wavelength. To improve V2IR translation, VLUM incorporates unified
Vision-Language Understanding. We also collected a large infrared dataset,
IR-500K, which includes 500,000 infrared images compiled by various scenes and
objects under various environmental conditions. Through the combination of PLM,
VLUM, and the extensive IR-500K dataset, DiffV2IR markedly improves the
performance of V2IR. Experiments validate DiffV2IR's excellence in producing
high-quality translations, establishing its efficacy and broad applicability.
The code, dataset, and DiffV2IR model will be available at
https://github.com/LidongWang-26/DiffV2IR.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 17:58:09 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Ran",
"Lingyan",
""
],
[
"Wang",
"Lidong",
""
],
[
"Wang",
"Guangcong",
""
],
[
"Wang",
"Peng",
""
],
[
"Zhang",
"Yanning",
""
]
] | TITLE: DiffV2IR: Visible-to-Infrared Diffusion Model via Vision-Language
Understanding
ABSTRACT: The task of translating visible-to-infrared images (V2IR) is inherently
challenging due to three main obstacles: 1) achieving semantic-aware
translation, 2) managing the diverse wavelength spectrum in infrared imagery,
and 3) the scarcity of comprehensive infrared datasets. Current leading methods
tend to treat V2IR as a conventional image-to-image synthesis challenge, often
overlooking these specific issues. To address this, we introduce DiffV2IR, a
novel framework for image translation comprising two key elements: a
Progressive Learning Module (PLM) and a Vision-Language Understanding Module
(VLUM). PLM features an adaptive diffusion model architecture that leverages
multi-stage knowledge learning for the infrared transition from full-range to target
wavelength. To improve V2IR translation, VLUM incorporates unified
Vision-Language Understanding. We also collected a large infrared dataset,
IR-500K, which includes 500,000 infrared images compiled by various scenes and
objects under various environmental conditions. Through the combination of PLM,
VLUM, and the extensive IR-500K dataset, DiffV2IR markedly improves the
performance of V2IR. Experiments validate DiffV2IR's excellence in producing
high-quality translations, establishing its efficacy and broad applicability.
The code, dataset, and DiffV2IR model will be available at
https://github.com/LidongWang-26/DiffV2IR.
|
2503.19043 | Jean-Philippe Bruneton | J.-P. Bruneton | Enhancing Symbolic Regression with Quality-Diversity and
Physics-Inspired Constraints | 23 pages, 1 figure, submitted to Journal of Machine Learning research | null | null | null | cs.NE cs.SC physics.data-an | http://creativecommons.org/licenses/by/4.0/ | This paper presents QDSR, an advanced symbolic Regression (SR) system that
integrates genetic programming (GP), a quality-diversity (QD) algorithm, and a
dimensional analysis (DA) engine. Our method focuses on exact symbolic recovery
of known expressions from datasets, with a particular emphasis on the
Feynman-AI benchmark. On this widely used collection of 117 physics equations,
QDSR achieves an exact recovery rate of 91.6~$\%$, surpassing all previous SR
methods by over 20 percentage points. Our method also exhibits strong
robustness to noise. Beyond QD and DA, this high success rate results from a
profitable trade-off between vocabulary expressiveness and search space size:
we show that significantly expanding the vocabulary with precomputed meaningful
variables (e.g., dimensionless combinations and well-chosen scalar products)
often reduces equation complexity, ultimately leading to better performance.
Ablation studies will also show that QD alone already outperforms the
state-of-the-art. This suggests that a simple integration of QD, by projecting
individuals onto a QD grid, can significantly boost performance in existing
algorithms, without requiring major system overhauls.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 18:13:49 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Bruneton",
"J. -P.",
""
]
] | TITLE: Enhancing Symbolic Regression with Quality-Diversity and
Physics-Inspired Constraints
ABSTRACT: This paper presents QDSR, an advanced symbolic Regression (SR) system that
integrates genetic programming (GP), a quality-diversity (QD) algorithm, and a
dimensional analysis (DA) engine. Our method focuses on exact symbolic recovery
of known expressions from datasets, with a particular emphasis on the
Feynman-AI benchmark. On this widely used collection of 117 physics equations,
QDSR achieves an exact recovery rate of 91.6~$\%$, surpassing all previous SR
methods by over 20 percentage points. Our method also exhibits strong
robustness to noise. Beyond QD and DA, this high success rate results from a
profitable trade-off between vocabulary expressiveness and search space size:
we show that significantly expanding the vocabulary with precomputed meaningful
variables (e.g., dimensionless combinations and well-chosen scalar products)
often reduces equation complexity, ultimately leading to better performance.
Ablation studies will also show that QD alone already outperforms the
state-of-the-art. This suggests that a simple integration of QD, by projecting
individuals onto a QD grid, can significantly boost performance in existing
algorithms, without requiring major system overhauls.
|
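Illustrative note on the record above (2503.19043): much of the reported gain comes from projecting candidates onto a quality-diversity grid. The sketch below is a generic MAP-Elites-style archive over toy candidates; the descriptors and fitness are invented stand-ins, not the authors' symbolic-regression features.

```python
# Generic MAP-Elites-style quality-diversity archive: each candidate is mapped
# to a grid cell by a behaviour descriptor and kept only if it beats the cell's
# incumbent. Candidates here are random vectors; in QDSR the candidates would be
# symbolic expressions with descriptors such as size or variables used.
import random

GRID = {}                      # (cell coords) -> (fitness, candidate)
BINS = 10

def descriptor(candidate):
    # Toy 2-D descriptor: mean and spread of the candidate's values.
    mean = sum(candidate) / len(candidate)
    spread = max(candidate) - min(candidate)
    return (min(int(mean * BINS), BINS - 1), min(int(spread * BINS), BINS - 1))

def fitness(candidate):
    # Toy objective: prefer values close to 0.5.
    return -sum((v - 0.5) ** 2 for v in candidate)

def insert(candidate):
    cell = descriptor(candidate)
    fit = fitness(candidate)
    if cell not in GRID or fit > GRID[cell][0]:
        GRID[cell] = (fit, candidate)   # candidate becomes the cell's elite

random.seed(0)
for _ in range(1000):
    insert([random.random() for _ in range(5)])

print(f"{len(GRID)} occupied cells; best fitness:",
      max(f for f, _ in GRID.values()))
```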
2503.19062 | Alexander Lobashev | Maria Larchenko, Alexander Lobashev, Dmitry Guskov, Vladimir
Vladimirovich Palyulin | Color Transfer with Modulated Flows | AAAI 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this work, we introduce Modulated Flows (ModFlows), a novel approach for
color transfer between images based on rectified flows. The primary goal of the
color transfer is to adjust the colors of a target image to match the color
distribution of a reference image. Our technique is based on optimal transport
and executes color transfer as an invertible transformation within the RGB
color space. The ModFlows utilizes the bijective property of flows, enabling us
to introduce a common intermediate color distribution and build a dataset of
rectified flows. We train an encoder on this dataset to predict the weights of
a rectified model for new images. After training on a set of optimal transport
plans, our approach can generate plans for new pairs of distributions without
additional fine-tuning. We additionally show that the trained encoder provides
an image embedding, associated only with its color style. The presented method
is capable of processing 4K images and achieves the state-of-the-art
performance in terms of content and style similarity. Our source code is
available at https://github.com/maria-larchenko/modflows
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 18:39:54 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Larchenko",
"Maria",
""
],
[
"Lobashev",
"Alexander",
""
],
[
"Guskov",
"Dmitry",
""
],
[
"Palyulin",
"Vladimir Vladimirovich",
""
]
] | TITLE: Color Transfer with Modulated Flows
ABSTRACT: In this work, we introduce Modulated Flows (ModFlows), a novel approach for
color transfer between images based on rectified flows. The primary goal of the
color transfer is to adjust the colors of a target image to match the color
distribution of a reference image. Our technique is based on optimal transport
and executes color transfer as an invertible transformation within the RGB
color space. The ModFlows utilizes the bijective property of flows, enabling us
to introduce a common intermediate color distribution and build a dataset of
rectified flows. We train an encoder on this dataset to predict the weights of
a rectified model for new images. After training on a set of optimal transport
plans, our approach can generate plans for new pairs of distributions without
additional fine-tuning. We additionally show that the trained encoder provides
an image embedding, associated only with its color style. The presented method
is capable of processing 4K images and achieves the state-of-the-art
performance in terms of content and style similarity. Our source code is
available at https://github.com/maria-larchenko/modflows
|
2503.19068 | Sacha Braun Mr | Sacha Braun, Liviu Aolaritei, Michael I. Jordan, Francis Bach | Minimum Volume Conformal Sets for Multivariate Regression | null | null | null | null | stat.ML cs.AI cs.LG stat.ME stat.OT | http://creativecommons.org/licenses/by/4.0/ | Conformal prediction provides a principled framework for constructing
predictive sets with finite-sample validity. While much of the focus has been
on univariate response variables, existing multivariate methods either impose
rigid geometric assumptions or rely on flexible but computationally expensive
approaches that do not explicitly optimize prediction set volume. We propose an
optimization-driven framework based on a novel loss function that directly
learns minimum-volume covering sets while ensuring valid coverage. This
formulation naturally induces a new nonconformity score for conformal
prediction, which adapts to the residual distribution and covariates. Our
approach optimizes over prediction sets defined by arbitrary norm balls,
including single and multi-norm formulations. Additionally, by jointly
optimizing both the predictive model and predictive uncertainty, we obtain
prediction sets that are tight, informative, and computationally efficient, as
demonstrated in our experiments on real-world datasets.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 18:54:22 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Braun",
"Sacha",
""
],
[
"Aolaritei",
"Liviu",
""
],
[
"Jordan",
"Michael I.",
""
],
[
"Bach",
"Francis",
""
]
] | TITLE: Minimum Volume Conformal Sets for Multivariate Regression
ABSTRACT: Conformal prediction provides a principled framework for constructing
predictive sets with finite-sample validity. While much of the focus has been
on univariate response variables, existing multivariate methods either impose
rigid geometric assumptions or rely on flexible but computationally expensive
approaches that do not explicitly optimize prediction set volume. We propose an
optimization-driven framework based on a novel loss function that directly
learns minimum-volume covering sets while ensuring valid coverage. This
formulation naturally induces a new nonconformity score for conformal
prediction, which adapts to the residual distribution and covariates. Our
approach optimizes over prediction sets defined by arbitrary norm balls,
including single and multi-norm formulations. Additionally, by jointly
optimizing both the predictive model and predictive uncertainty, we obtain
prediction sets that are tight, informative, and computationally efficient, as
demonstrated in our experiments on real-world datasets.
|
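Illustrative note on the record above (2503.19068): a useful baseline for norm-ball multivariate prediction sets is plain split conformal prediction with the residual Euclidean norm as the score. The sketch below implements that baseline on synthetic data; it yields valid coverage but does not perform the paper's volume-minimizing joint optimization.

```python
# Split-conformal baseline for multivariate regression with Euclidean norm-ball
# prediction sets: calibrate the residual-norm quantile, then predict a ball of
# that radius around the point prediction. Valid coverage, but not the
# minimum-volume sets learned in the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))
W = rng.normal(size=(5, 2))
Y = X @ W + 0.3 * rng.normal(size=(600, 2))       # 2-dimensional response

# Split: train / calibration / test.
X_tr, Y_tr = X[:300], Y[:300]
X_cal, Y_cal = X[300:500], Y[300:500]
X_te, Y_te = X[500:], Y[500:]

model = LinearRegression().fit(X_tr, Y_tr)

alpha = 0.1
scores = np.linalg.norm(Y_cal - model.predict(X_cal), axis=1)    # nonconformity = residual norm
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)      # finite-sample-adjusted quantile

# Prediction set for each test point: {y : ||y - f(x)|| <= q}.
test_scores = np.linalg.norm(Y_te - model.predict(X_te), axis=1)
coverage = (test_scores <= q).mean()
print(f"radius q = {q:.3f}, empirical coverage = {coverage:.3f} (target {1 - alpha})")
```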
2503.19074 | Swakkhar Shatabda | Osman Goni, Himadri Saha Arka, Mithun Halder, Mir Moynuddin Ahmed
Shibly, and Swakkhar Shatabda | HingeRLC-GAN: Combating Mode Collapse with Hinge Loss and RLC
Regularization | null | 27th International Conference on Pattern Recognition, ICPR 2024 | 10.1007/978-3-031-78389-0_25 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advances in Generative Adversarial Networks (GANs) have demonstrated
their capability for producing high-quality images. However, a significant
challenge remains: mode collapse, which occurs when the generator produces a
limited number of data patterns that do not reflect the diversity of the
training dataset. This study addresses this issue by proposing a number of
architectural changes aimed at increasing the diversity and stability of GAN
models. We start by improving the loss function with Wasserstein loss and
Gradient Penalty to better capture the full range of data variations. We also
investigate various network architectures and conclude that ResNet
significantly contributes to increased diversity. Building on these findings,
we introduce HingeRLC-GAN, a novel approach that combines RLC Regularization
and the Hinge loss function. With a FID Score of 18 and a KID Score of 0.001,
our approach outperforms existing methods by effectively balancing training
stability and increased diversity.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 19:00:28 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Goni",
"Osman",
""
],
[
"Arka",
"Himadri Saha",
""
],
[
"Halder",
"Mithun",
""
],
[
"Shibly",
"Mir Moynuddin Ahmed",
""
],
[
"Shatabda",
"Swakkhar",
""
]
] | TITLE: HingeRLC-GAN: Combating Mode Collapse with Hinge Loss and RLC
Regularization
ABSTRACT: Recent advances in Generative Adversarial Networks (GANs) have demonstrated
their capability for producing high-quality images. However, a significant
challenge remains: mode collapse, which occurs when the generator produces a
limited number of data patterns that do not reflect the diversity of the
training dataset. This study addresses this issue by proposing a number of
architectural changes aimed at increasing the diversity and stability of GAN
models. We start by improving the loss function with Wasserstein loss and
Gradient Penalty to better capture the full range of data variations. We also
investigate various network architectures and conclude that ResNet
significantly contributes to increased diversity. Building on these findings,
we introduce HingeRLC-GAN, a novel approach that combines RLC Regularization
and the Hinge loss function. With a FID Score of 18 and a KID Score of 0.001,
our approach outperforms existing methods by effectively balancing training
stability and increased diversity.
|
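Illustrative note on the record above (2503.19074): two of the named ingredients, the hinge adversarial loss and a gradient penalty, can be written compactly. The sketch below uses tiny MLPs on 2-D toy data; the RLC regularization term and the paper's ResNet image networks are not reproduced.

```python
# Sketch of the loss ingredients named above: hinge adversarial losses plus a
# WGAN-GP-style gradient penalty. Tiny MLPs on 2-D toy data replace the paper's
# image networks, and the RLC regularization term is not reproduced.
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.9))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.5, 0.9))

def gradient_penalty(real, fake, lam=10.0):
    eps = torch.rand(real.size(0), 1)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(D(interp).sum(), interp, create_graph=True)[0]
    return lam * ((grad.norm(2, dim=1) - 1) ** 2).mean()

for step in range(200):
    real = torch.randn(64, 2) * 0.5 + 2.0          # toy "real" distribution
    fake = G(torch.randn(64, 8))

    # Discriminator: hinge loss + gradient penalty.
    loss_d = (F.relu(1.0 - D(real)).mean()
              + F.relu(1.0 + D(fake.detach())).mean()
              + gradient_penalty(real, fake.detach()))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: hinge generator loss.
    loss_g = -D(G(torch.randn(64, 8))).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```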
2503.19085 | Debdipta Goswami | Ananda Chakrabarti, Indranil Nayak, Debdipta Goswami | Temporally-Consistent Bilinearly Recurrent Autoencoders for Control
Systems | 6 pages, 6 figures, 1 table, to appear in American Control Conference
2025 | null | null | null | eess.SY cs.SY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper introduces the temporally-consistent bilinearly recurrent
autoencoder (tcBLRAN), a Koopman operator based neural network architecture for
modeling a control-affine nonlinear control system. The proposed method extends
traditional Koopman autoencoders (KAE) by incorporating bilinear recurrent
dynamics that are consistent across predictions, enabling accurate long-term
forecasting for control-affine systems. This overcomes the roadblock that KAEs
face when confronted with limited and noisy training datasets, resulting in a
lack of generalizability due to inconsistency in training data. Through a blend
of deep learning and dynamical systems theory, tcBLRAN demonstrates superior
performance in capturing complex behaviors and control systems dynamics,
providing a superior data-driven modeling technique for control systems and
outperforming the state-of-the-art Koopman bilinear form (KBF) learned by
autoencoder networks.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 19:15:56 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Chakrabarti",
"Ananda",
""
],
[
"Nayak",
"Indranil",
""
],
[
"Goswami",
"Debdipta",
""
]
] | TITLE: Temporally-Consistent Bilinearly Recurrent Autoencoders for Control
Systems
ABSTRACT: This paper introduces the temporally-consistent bilinearly recurrent
autoencoder (tcBLRAN), a Koopman operator based neural network architecture for
modeling a control-affine nonlinear control system. The proposed method extends
traditional Koopman autoencoders (KAE) by incorporating bilinear recurrent
dynamics that are consistent across predictions, enabling accurate long-term
forecasting for control-affine systems. This overcomes the roadblock that KAEs
face when confronted with limited and noisy training datasets, resulting in a
lack of generalizability due to inconsistency in training data. Through a blend
of deep learning and dynamical systems theory, tcBLRAN demonstrates superior
performance in capturing complex behaviors and control systems dynamics,
providing a superior data-driven modeling technique for control systems and
outperforming the state-of-the-art Koopman bilinear form (KBF) learned by
autoencoder networks.
|
2503.19100 | Shartaz Khan Akash | Md. Barkat Ullah Tusher, Shartaz Khan Akash, Amirul Islam Showmik | Anomaly Detection Using Computer Vision: A Comparative Analysis of Class
Distinction and Performance Metrics | 6 pages, 4 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper showcases an experimental study on anomaly detection using
computer vision. The study focuses on class distinction and performance
evaluation, combining OpenCV with deep learning techniques while employing a
TensorFlow-based convolutional neural network for real-time face recognition
and classification. The system effectively distinguishes among three classes:
authorized personnel (admin), intruders, and non-human entities. A
MobileNetV2-based deep learning model is utilized to optimize real-time
performance, ensuring high computational efficiency without compromising
accuracy. Extensive dataset preprocessing, including image augmentation and
normalization, enhances the model's generalization capabilities. Our analysis
demonstrates classification accuracies of 90.20% for admin, 98.60% for
intruders, and 75.80% for non-human detection, while maintaining an average
processing rate of 30 frames per second. The study leverages transfer learning,
batch normalization, and Adam optimization to achieve stable and robust
learning, and a comparative analysis of class differentiation strategies
highlights the impact of feature extraction techniques and training
methodologies. The results indicate that advanced feature selection and data
augmentation significantly enhance detection performance, particularly in
distinguishing human from non-human scenes. As an experimental study, this
research provides critical insights into optimizing deep learning-based
surveillance systems for high-security environments and improving the accuracy
and efficiency of real-time anomaly detection.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 19:36:47 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Tusher",
"Md. Barkat Ullah",
""
],
[
"Akash",
"Shartaz Khan",
""
],
[
"Showmik",
"Amirul Islam",
""
]
] | TITLE: Anomaly Detection Using Computer Vision: A Comparative Analysis of Class
Distinction and Performance Metrics
ABSTRACT: This paper showcases an experimental study on anomaly detection using
computer vision. The study focuses on class distinction and performance
evaluation, combining OpenCV with deep learning techniques while employing a
TensorFlow-based convolutional neural network for real-time face recognition
and classification. The system effectively distinguishes among three classes:
authorized personnel (admin), intruders, and non-human entities. A
MobileNetV2-based deep learning model is utilized to optimize real-time
performance, ensuring high computational efficiency without compromising
accuracy. Extensive dataset preprocessing, including image augmentation and
normalization, enhances the model's generalization capabilities. Our analysis
demonstrates classification accuracies of 90.20% for admin, 98.60% for
intruders, and 75.80% for non-human detection, while maintaining an average
processing rate of 30 frames per second. The study leverages transfer learning,
batch normalization, and Adam optimization to achieve stable and robust
learning, and a comparative analysis of class differentiation strategies
highlights the impact of feature extraction techniques and training
methodologies. The results indicate that advanced feature selection and data
augmentation significantly enhance detection performance, particularly in
distinguishing human from non-human scenes. As an experimental study, this
research provides critical insights into optimizing deep learning-based
surveillance systems for high-security environments and improving the accuracy
and efficiency of real-time anomaly detection.
|
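Illustrative note on the record above (2503.19100): the classifier is MobileNetV2-based transfer learning over three classes (admin, intruder, non-human) in TensorFlow. A minimal Keras sketch follows; the input size, learning rate, and random dummy batch are placeholders for the real surveillance pipeline.

```python
# Minimal Keras transfer-learning sketch for the three-class setup described
# above (admin / intruder / non-human). Input size, learning rate, and the
# random dummy batch are placeholders for the real data pipeline.
import numpy as np
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(3, activation="softmax"),   # admin, intruder, non-human
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy batch standing in for augmented, normalized frames.
frames = np.random.rand(8, 224, 224, 3).astype("float32")
labels = np.random.randint(0, 3, size=(8,))
model.fit(frames, labels, epochs=1, verbose=0)
```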