id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
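Each record below lists these fields in that order, pipe-separated, with the versions and authors_parsed columns rendered as JSON and the prompt column repeating the title and abstract. As a minimal parsing sketch, assuming the same records are available as one JSON object per line in a hypothetical file named arxiv_metadata.jsonl (the file name is illustrative, not part of this dataset's documentation):

```python
import json

# Minimal sketch: iterate over records that follow the schema above, assuming
# one JSON object per line. "arxiv_metadata.jsonl" is a hypothetical file name.
with open("arxiv_metadata.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        arxiv_id = record["id"]                        # e.g. "2411.16778"
        title = " ".join(record["title"].split())      # collapse hard line wraps
        categories = record["categories"].split()      # e.g. ["cs.CV", "cs.AI"]
        latest_version = record["versions"][-1]["created"]
        # authors_parsed entries look like ["Liu", "Bo", ""] (last, first, suffix)
        authors = [
            " ".join(part for part in (first, last, suffix) if part)
            for last, first, suffix in record["authors_parsed"]
        ]
        print(arxiv_id, categories, latest_version, title, authors[:3])
```

The remaining columns (journal-ref, doi, report-no, license, abstract, update_date, prompt) are plain strings or timestamps and can be read the same way.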
2411.16778 | Bo Liu | Bo Liu, Ke Zou, Liming Zhan, Zexin Lu, Xiaoyu Dong, Yidi Chen,
Chengqiang Xie, Jiannong Cao, Xiao-Ming Wu, Huazhu Fu | GEMeX: A Large-Scale, Groundable, and Explainable Medical VQA Benchmark
for Chest X-ray Diagnosis | This project is available at https://www.med-vqa.com/GEMeX | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical Visual Question Answering (Med-VQA) combines computer vision and
natural language processing to automatically answer clinical inquiries about
medical images. However, current Med-VQA datasets exhibit two significant
limitations: (1) they often lack visual and textual explanations for answers,
hindering comprehension for patients and junior doctors; (2) they typically
offer a narrow range of question formats, inadequately reflecting the diverse
requirements in practical scenarios. These limitations pose significant
challenges to the development of a reliable and user-friendly Med-VQA system.
To address these challenges, we introduce a large-scale, Groundable, and
Explainable Medical VQA benchmark for chest X-ray diagnosis (GEMeX), featuring
several innovative components: (1) a multi-modal explainability mechanism that
offers detailed visual and textual explanations for each question-answer pair,
thereby enhancing answer comprehensibility; (2) four question types,
open-ended, closed-ended, single-choice, and multiple-choice, to better reflect
practical needs. With 151,025 images and 1,605,575 questions, GEMeX is the
currently largest chest X-ray VQA dataset. Evaluation of 12 representative
large vision language models (LVLMs) on GEMeX reveals suboptimal performance,
underscoring the dataset's complexity. Meanwhile, we propose a strong model by
fine-tuning an existing LVLM on the GEMeX training set. The substantial
performance improvement showcases the dataset's effectiveness. The benchmark is
available at https://www.med-vqa.com/GEMeX.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 07:36:46 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 03:25:56 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Bo",
""
],
[
"Zou",
"Ke",
""
],
[
"Zhan",
"Liming",
""
],
[
"Lu",
"Zexin",
""
],
[
"Dong",
"Xiaoyu",
""
],
[
"Chen",
"Yidi",
""
],
[
"Xie",
"Chengqiang",
""
],
[
"Cao",
"Jiannong",
""
],
[
"Wu",
"Xiao-Ming",
""
],
[
"Fu",
"Huazhu",
""
]
] | TITLE: GEMeX: A Large-Scale, Groundable, and Explainable Medical VQA Benchmark
for Chest X-ray Diagnosis
ABSTRACT: Medical Visual Question Answering (Med-VQA) combines computer vision and
natural language processing to automatically answer clinical inquiries about
medical images. However, current Med-VQA datasets exhibit two significant
limitations: (1) they often lack visual and textual explanations for answers,
hindering comprehension for patients and junior doctors; (2) they typically
offer a narrow range of question formats, inadequately reflecting the diverse
requirements in practical scenarios. These limitations pose significant
challenges to the development of a reliable and user-friendly Med-VQA system.
To address these challenges, we introduce a large-scale, Groundable, and
Explainable Medical VQA benchmark for chest X-ray diagnosis (GEMeX), featuring
several innovative components: (1) a multi-modal explainability mechanism that
offers detailed visual and textual explanations for each question-answer pair,
thereby enhancing answer comprehensibility; (2) four question types,
open-ended, closed-ended, single-choice, and multiple-choice, to better reflect
practical needs. With 151,025 images and 1,605,575 questions, GEMeX is the
currently largest chest X-ray VQA dataset. Evaluation of 12 representative
large vision language models (LVLMs) on GEMeX reveals suboptimal performance,
underscoring the dataset's complexity. Meanwhile, we propose a strong model by
fine-tuning an existing LVLM on the GEMeX training set. The substantial
performance improvement showcases the dataset's effectiveness. The benchmark is
available at https://www.med-vqa.com/GEMeX.
|
2411.16799 | Yang Li | Yuchen Xia, Quan Yuan, Guiyang Luo, Xiaoyuan Fu, Yang Li, Xuanhan Zhu,
Tianyou Luo, Siheng Chen, Jinglin Li | One is Plenty: A Polymorphic Feature Interpreter for Immutable
Heterogeneous Collaborative Perception | CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative perception in autonomous driving significantly enhances the
perception capabilities of individual agents. Immutable heterogeneity, where
agents have different and fixed perception networks, presents a major challenge
due to the semantic gap in exchanged intermediate features without modifying
the perception networks. Most existing methods bridge the semantic gap through
interpreters. However, they either require training a new interpreter for each
new agent type, limiting extensibility, or rely on a two-stage interpretation
via an intermediate standardized semantic space, causing cumulative semantic
loss. To achieve both extensibility in immutable heterogeneous scenarios and
low-loss feature interpretation, we propose PolyInter, a polymorphic feature
interpreter. It provides an extension point where new agents integrate by
overriding only their specific prompts, which are learnable parameters that
guide interpretation, while reusing PolyInter's remaining parameters. By
leveraging polymorphism, our design enables a single interpreter to accommodate
diverse agents and interpret their features into the ego agent's semantic
space. Experiments on the OPV2V dataset demonstrate that PolyInter improves
collaborative perception precision by up to 11.1% compared to SOTA
interpreters, while comparable results can be achieved by training only 1.4% of
PolyInter's parameters when adapting to new agents. Code is available at
https://github.com/yuchen-xia/PolyInter.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 11:47:26 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 06:21:21 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xia",
"Yuchen",
""
],
[
"Yuan",
"Quan",
""
],
[
"Luo",
"Guiyang",
""
],
[
"Fu",
"Xiaoyuan",
""
],
[
"Li",
"Yang",
""
],
[
"Zhu",
"Xuanhan",
""
],
[
"Luo",
"Tianyou",
""
],
[
"Chen",
"Siheng",
""
],
[
"Li",
"Jinglin",
""
]
] | TITLE: One is Plenty: A Polymorphic Feature Interpreter for Immutable
Heterogeneous Collaborative Perception
ABSTRACT: Collaborative perception in autonomous driving significantly enhances the
perception capabilities of individual agents. Immutable heterogeneity, where
agents have different and fixed perception networks, presents a major challenge
due to the semantic gap in exchanged intermediate features without modifying
the perception networks. Most existing methods bridge the semantic gap through
interpreters. However, they either require training a new interpreter for each
new agent type, limiting extensibility, or rely on a two-stage interpretation
via an intermediate standardized semantic space, causing cumulative semantic
loss. To achieve both extensibility in immutable heterogeneous scenarios and
low-loss feature interpretation, we propose PolyInter, a polymorphic feature
interpreter. It provides an extension point where new agents integrate by
overriding only their specific prompts, which are learnable parameters that
guide interpretation, while reusing PolyInter's remaining parameters. By
leveraging polymorphism, our design enables a single interpreter to accommodate
diverse agents and interpret their features into the ego agent's semantic
space. Experiments on the OPV2V dataset demonstrate that PolyInter improves
collaborative perception precision by up to 11.1% compared to SOTA
interpreters, while comparable results can be achieved by training only 1.4% of
PolyInter's parameters when adapting to new agents. Code is available at
https://github.com/yuchen-xia/PolyInter.
|
2411.17188 | Dongping Chen | Dongping Chen, Ruoxi Chen, Shu Pu, Zhaoyi Liu, Yanru Wu, Caixi Chen,
Benlin Liu, Yue Huang, Yao Wan, Pan Zhou, Ranjay Krishna | Interleaved Scene Graphs for Interleaved Text-and-Image Generation
Assessment | Accepted by ICLR 2025 as Spotlight. Project homepage:
https://interleave-eval.github.io/ | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | Many real-world user queries (e.g. "How to make egg fried rice?") could
benefit from systems capable of generating responses with both textual steps
with accompanying images, similar to a cookbook. Models designed to generate
interleaved text and images face challenges in ensuring consistency within and
across these modalities. To address these challenges, we present ISG, a
comprehensive evaluation framework for interleaved text-and-image generation.
ISG leverages a scene graph structure to capture relationships between text and
image blocks, evaluating responses on four levels of granularity: holistic,
structural, block-level, and image-specific. This multi-tiered evaluation
allows for a nuanced assessment of consistency, coherence, and accuracy, and
provides interpretable question-answer feedback. In conjunction with ISG, we
introduce a benchmark, ISG-Bench, encompassing 1,150 samples across 8
categories and 21 subcategories. This benchmark dataset includes complex
language-vision dependencies and golden answers to evaluate models effectively
on vision-centric tasks such as style transfer, a challenging area for current
models. Using ISG-Bench, we demonstrate that recent unified vision-language
models perform poorly on generating interleaved content. While compositional
approaches that combine separate language and image models show a 111%
improvement over unified models at the holistic level, their performance
remains suboptimal at both block and image levels. To facilitate future work,
we develop ISG-Agent, a baseline agent employing a "plan-execute-refine"
pipeline to invoke tools, achieving a 122% performance improvement.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 07:55:57 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 16:16:20 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Dongping",
""
],
[
"Chen",
"Ruoxi",
""
],
[
"Pu",
"Shu",
""
],
[
"Liu",
"Zhaoyi",
""
],
[
"Wu",
"Yanru",
""
],
[
"Chen",
"Caixi",
""
],
[
"Liu",
"Benlin",
""
],
[
"Huang",
"Yue",
""
],
[
"Wan",
"Yao",
""
],
[
"Zhou",
"Pan",
""
],
[
"Krishna",
"Ranjay",
""
]
] | TITLE: Interleaved Scene Graphs for Interleaved Text-and-Image Generation
Assessment
ABSTRACT: Many real-world user queries (e.g. "How to make egg fried rice?") could
benefit from systems capable of generating responses with both textual steps
with accompanying images, similar to a cookbook. Models designed to generate
interleaved text and images face challenges in ensuring consistency within and
across these modalities. To address these challenges, we present ISG, a
comprehensive evaluation framework for interleaved text-and-image generation.
ISG leverages a scene graph structure to capture relationships between text and
image blocks, evaluating responses on four levels of granularity: holistic,
structural, block-level, and image-specific. This multi-tiered evaluation
allows for a nuanced assessment of consistency, coherence, and accuracy, and
provides interpretable question-answer feedback. In conjunction with ISG, we
introduce a benchmark, ISG-Bench, encompassing 1,150 samples across 8
categories and 21 subcategories. This benchmark dataset includes complex
language-vision dependencies and golden answers to evaluate models effectively
on vision-centric tasks such as style transfer, a challenging area for current
models. Using ISG-Bench, we demonstrate that recent unified vision-language
models perform poorly on generating interleaved content. While compositional
approaches that combine separate language and image models show a 111%
improvement over unified models at the holistic level, their performance
remains suboptimal at both block and image levels. To facilitate future work,
we develop ISG-Agent, a baseline agent employing a "plan-execute-refine"
pipeline to invoke tools, achieving a 122% performance improvement.
|
2411.17687 | Sudarshan Ambasamudram Rajagopalan | Sudarshan Rajagopalan, Nithin Gopalakrishnan Nair, Jay N. Paranjape,
Vishal M. Patel | GenDeg: Diffusion-based Degradation Synthesis for Generalizable
All-In-One Image Restoration | Accepted to CVPR 2025. Project Page:
https://sudraj2002.github.io/gendegpage/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning-based models for All-In-One Image Restoration (AIOR) have
achieved significant advancements in recent years. However, their practical
applicability is limited by poor generalization to samples outside the training
distribution. This limitation arises primarily from insufficient diversity in
degradation variations and scenes within existing datasets, resulting in
inadequate representations of real-world scenarios. Additionally, capturing
large-scale real-world paired data for degradations such as haze, low-light,
and raindrops is often cumbersome and sometimes infeasible. In this paper, we
leverage the generative capabilities of latent diffusion models to synthesize
high-quality degraded images from their clean counterparts. Specifically, we
introduce GenDeg, a degradation and intensity-aware conditional diffusion model
capable of producing diverse degradation patterns on clean images. Using
GenDeg, we synthesize over 550k samples across six degradation types: haze,
rain, snow, motion blur, low-light, and raindrops. These generated samples are
integrated with existing datasets to form the GenDS dataset, comprising over
750k samples. Our experiments reveal that image restoration models trained on
the GenDS dataset exhibit significant improvements in out-of-distribution
performance compared to those trained solely on existing datasets. Furthermore,
we provide comprehensive analyses on implications of diffusion model-based
synthetic degradations for AIOR.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 18:55:49 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 18:40:40 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Rajagopalan",
"Sudarshan",
""
],
[
"Nair",
"Nithin Gopalakrishnan",
""
],
[
"Paranjape",
"Jay N.",
""
],
[
"Patel",
"Vishal M.",
""
]
] | TITLE: GenDeg: Diffusion-based Degradation Synthesis for Generalizable
All-In-One Image Restoration
ABSTRACT: Deep learning-based models for All-In-One Image Restoration (AIOR) have
achieved significant advancements in recent years. However, their practical
applicability is limited by poor generalization to samples outside the training
distribution. This limitation arises primarily from insufficient diversity in
degradation variations and scenes within existing datasets, resulting in
inadequate representations of real-world scenarios. Additionally, capturing
large-scale real-world paired data for degradations such as haze, low-light,
and raindrops is often cumbersome and sometimes infeasible. In this paper, we
leverage the generative capabilities of latent diffusion models to synthesize
high-quality degraded images from their clean counterparts. Specifically, we
introduce GenDeg, a degradation and intensity-aware conditional diffusion model
capable of producing diverse degradation patterns on clean images. Using
GenDeg, we synthesize over 550k samples across six degradation types: haze,
rain, snow, motion blur, low-light, and raindrops. These generated samples are
integrated with existing datasets to form the GenDS dataset, comprising over
750k samples. Our experiments reveal that image restoration models trained on
the GenDS dataset exhibit significant improvements in out-of-distribution
performance compared to those trained solely on existing datasets. Furthermore,
we provide comprehensive analyses on implications of diffusion model-based
synthetic degradations for AIOR.
|
2411.17845 | Soorena Salari | Soorena Salari, Arash Harirpoush, Hassan Rivaz, Yiming Xiao | CABLD: Contrast-Agnostic Brain Landmark Detection with Consistency-Based
Regularization | 16 pages, 7 figures, 3 tables | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Anatomical landmark detection in medical images is essential for various
clinical and research applications, including disease diagnosis and surgical
planning. However, manual landmark annotation is time-consuming and requires
significant expertise. Existing deep learning (DL) methods often require large
amounts of well-annotated data, which are costly to acquire. In this paper, we
introduce CABLD, a novel self-supervised DL framework for 3D brain landmark
detection in unlabeled scans with varying contrasts by using only a single
reference example. To achieve this, we employed an inter-subject landmark
consistency loss with an image registration loss while introducing a 3D
convolution-based contrast augmentation strategy to promote model
generalization to new contrasts. Additionally, we utilize an adaptive mixed
loss function to schedule the contributions of different sub-tasks for optimal
outcomes. We demonstrate the proposed method with the intricate task of
MRI-based 3D brain landmark detection. With comprehensive experiments on four
diverse clinical and public datasets, including both T1w and T2w MRI scans at
different MRI field strengths, we demonstrate that CABLD outperforms the
state-of-the-art methods in terms of mean radial errors (MREs) and success
detection rates (SDRs). Our framework provides a robust and accurate solution
for anatomical landmark detection, reducing the need for extensively annotated
datasets and generalizing well across different imaging contrasts. Our code
will be publicly available at: https://github.com/HealthX-Lab/CABLD.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 19:56:29 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 21:21:44 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Salari",
"Soorena",
""
],
[
"Harirpoush",
"Arash",
""
],
[
"Rivaz",
"Hassan",
""
],
[
"Xiao",
"Yiming",
""
]
] | TITLE: CABLD: Contrast-Agnostic Brain Landmark Detection with Consistency-Based
Regularization
ABSTRACT: Anatomical landmark detection in medical images is essential for various
clinical and research applications, including disease diagnosis and surgical
planning. However, manual landmark annotation is time-consuming and requires
significant expertise. Existing deep learning (DL) methods often require large
amounts of well-annotated data, which are costly to acquire. In this paper, we
introduce CABLD, a novel self-supervised DL framework for 3D brain landmark
detection in unlabeled scans with varying contrasts by using only a single
reference example. To achieve this, we employed an inter-subject landmark
consistency loss with an image registration loss while introducing a 3D
convolution-based contrast augmentation strategy to promote model
generalization to new contrasts. Additionally, we utilize an adaptive mixed
loss function to schedule the contributions of different sub-tasks for optimal
outcomes. We demonstrate the proposed method with the intricate task of
MRI-based 3D brain landmark detection. With comprehensive experiments on four
diverse clinical and public datasets, including both T1w and T2w MRI scans at
different MRI field strengths, we demonstrate that CABLD outperforms the
state-of-the-art methods in terms of mean radial errors (MREs) and success
detection rates (SDRs). Our framework provides a robust and accurate solution
for anatomical landmark detection, reducing the need for extensively annotated
datasets and generalizing well across different imaging contrasts. Our code
will be publicly available at: https://github.com/HealthX-Lab/CABLD.
|
2411.18673 | Sherwin Bahmani | Sherwin Bahmani, Ivan Skorokhodov, Guocheng Qian, Aliaksandr Siarohin,
Willi Menapace, Andrea Tagliasacchi, David B. Lindell, Sergey Tulyakov | AC3D: Analyzing and Improving 3D Camera Control in Video Diffusion
Transformers | CVPR 2025; Project Page: https://snap-research.github.io/ac3d/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Numerous works have recently integrated 3D camera control into foundational
text-to-video models, but the resulting camera control is often imprecise, and
video generation quality suffers. In this work, we analyze camera motion from a
first principles perspective, uncovering insights that enable precise 3D camera
manipulation without compromising synthesis quality. First, we determine that
motion induced by camera movements in videos is low-frequency in nature. This
motivates us to adjust train and test pose conditioning schedules, accelerating
training convergence while improving visual and motion quality. Then, by
probing the representations of an unconditional video diffusion transformer, we
observe that they implicitly perform camera pose estimation under the hood, and
only a sub-portion of their layers contain the camera information. This
suggested us to limit the injection of camera conditioning to a subset of the
architecture to prevent interference with other video features, leading to a 4x
reduction of training parameters, improved training speed, and 10% higher
visual quality. Finally, we complement the typical dataset for camera control
learning with a curated dataset of 20K diverse, dynamic videos with stationary
cameras. This helps the model distinguish between camera and scene motion and
improves the dynamics of generated pose-conditioned videos. We compound these
findings to design the Advanced 3D Camera Control (AC3D) architecture, the new
state-of-the-art model for generative video modeling with camera control.
| [
{
"version": "v1",
"created": "Wed, 27 Nov 2024 18:49:13 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Dec 2024 04:43:30 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Mar 2025 15:32:50 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Bahmani",
"Sherwin",
""
],
[
"Skorokhodov",
"Ivan",
""
],
[
"Qian",
"Guocheng",
""
],
[
"Siarohin",
"Aliaksandr",
""
],
[
"Menapace",
"Willi",
""
],
[
"Tagliasacchi",
"Andrea",
""
],
[
"Lindell",
"David B.",
""
],
[
"Tulyakov",
"Sergey",
""
]
] | TITLE: AC3D: Analyzing and Improving 3D Camera Control in Video Diffusion
Transformers
ABSTRACT: Numerous works have recently integrated 3D camera control into foundational
text-to-video models, but the resulting camera control is often imprecise, and
video generation quality suffers. In this work, we analyze camera motion from a
first principles perspective, uncovering insights that enable precise 3D camera
manipulation without compromising synthesis quality. First, we determine that
motion induced by camera movements in videos is low-frequency in nature. This
motivates us to adjust train and test pose conditioning schedules, accelerating
training convergence while improving visual and motion quality. Then, by
probing the representations of an unconditional video diffusion transformer, we
observe that they implicitly perform camera pose estimation under the hood, and
only a sub-portion of their layers contain the camera information. This
suggested us to limit the injection of camera conditioning to a subset of the
architecture to prevent interference with other video features, leading to a 4x
reduction of training parameters, improved training speed, and 10% higher
visual quality. Finally, we complement the typical dataset for camera control
learning with a curated dataset of 20K diverse, dynamic videos with stationary
cameras. This helps the model distinguish between camera and scene motion and
improves the dynamics of generated pose-conditioned videos. We compound these
findings to design the Advanced 3D Camera Control (AC3D) architecture, the new
state-of-the-art model for generative video modeling with camera control.
|
2411.19715 | Yuezun Li | Xinjie Cui, Yuezun Li, Ao Luo, Jiaran Zhou, Junyu Dong | Forensics Adapter: Adapting CLIP for Generalizable Face Forgery
Detection | CVPR 2025 | null | null | null | cs.CV cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe the Forensics Adapter, an adapter network designed to transform
CLIP into an effective and generalizable face forgery detector. Although CLIP
is highly versatile, adapting it for face forgery detection is non-trivial as
forgery-related knowledge is entangled with a wide range of unrelated
knowledge. Existing methods treat CLIP merely as a feature extractor, lacking
task-specific adaptation, which limits their effectiveness. To address this, we
introduce an adapter to learn face forgery traces -- the blending boundaries
unique to forged faces, guided by task-specific objectives. Then we enhance the
CLIP visual tokens with a dedicated interaction strategy that communicates
knowledge across CLIP and the adapter. Since the adapter is alongside CLIP, its
versatility is highly retained, naturally ensuring strong generalizability in
face forgery detection. With only 5.7M trainable parameters, our method
achieves a significant performance boost, improving by approximately 7% on
average across five standard datasets. We believe the proposed method can serve
as a baseline for future CLIP-based face forgery detection methods. The code is
available at https://github.com/OUC-VAS/ForensicsAdapter.
| [
{
"version": "v1",
"created": "Fri, 29 Nov 2024 14:02:11 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 09:41:55 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Cui",
"Xinjie",
""
],
[
"Li",
"Yuezun",
""
],
[
"Luo",
"Ao",
""
],
[
"Zhou",
"Jiaran",
""
],
[
"Dong",
"Junyu",
""
]
] | TITLE: Forensics Adapter: Adapting CLIP for Generalizable Face Forgery
Detection
ABSTRACT: We describe the Forensics Adapter, an adapter network designed to transform
CLIP into an effective and generalizable face forgery detector. Although CLIP
is highly versatile, adapting it for face forgery detection is non-trivial as
forgery-related knowledge is entangled with a wide range of unrelated
knowledge. Existing methods treat CLIP merely as a feature extractor, lacking
task-specific adaptation, which limits their effectiveness. To address this, we
introduce an adapter to learn face forgery traces -- the blending boundaries
unique to forged faces, guided by task-specific objectives. Then we enhance the
CLIP visual tokens with a dedicated interaction strategy that communicates
knowledge across CLIP and the adapter. Since the adapter is alongside CLIP, its
versatility is highly retained, naturally ensuring strong generalizability in
face forgery detection. With only 5.7M trainable parameters, our method
achieves a significant performance boost, improving by approximately 7% on
average across five standard datasets. We believe the proposed method can serve
as a baseline for future CLIP-based face forgery detection methods. The code is
available at https://github.com/OUC-VAS/ForensicsAdapter.
|
2412.00119 | Luca Colombo | Luca Colombo, Fabrizio Pittorino, Manuel Roveri | Training Multi-Layer Binary Neural Networks With Local Binary Error
Signals | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Binary Neural Networks (BNNs) significantly reduce computational complexity
and memory usage in machine and deep learning by representing weights and
activations with just one bit. However, most existing training algorithms for
BNNs rely on quantization-aware floating-point Stochastic Gradient Descent
(SGD), limiting the full exploitation of binary operations to the inference
phase only. In this work, we propose, for the first time, a fully binary and
gradient-free training algorithm for multi-layer BNNs, eliminating the need for
back-propagated floating-point gradients. Specifically, the proposed algorithm
relies on local binary error signals and binary weight updates, employing
integer-valued hidden weights that serve as a synaptic metaplasticity
mechanism, thereby enhancing its neurobiological plausibility. The fully binary
and gradient-free algorithm introduced in this paper enables the training of
binary multi-layer perceptrons with binary inputs, weights, and activations, by
using exclusively XNOR, Popcount, and increment/decrement operations.
Experimental results on multi-class classification benchmarks show test
accuracy improvements of up to +35.47% over the only existing fully binary
single-layer state-of-the-art solution. Compared to full-precision SGD, our
solution improves test accuracy by up to +41.31% under the same total memory
demand -- including the model, activations, and input
dataset -- while also reducing computational cost by two orders of
magnitude in terms of the total number of equivalent Boolean gates. The
proposed algorithm is made available to the scientific community as a public
repository.
| [
{
"version": "v1",
"created": "Thu, 28 Nov 2024 09:12:04 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 12:59:38 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Colombo",
"Luca",
""
],
[
"Pittorino",
"Fabrizio",
""
],
[
"Roveri",
"Manuel",
""
]
] | TITLE: Training Multi-Layer Binary Neural Networks With Local Binary Error
Signals
ABSTRACT: Binary Neural Networks (BNNs) significantly reduce computational complexity
and memory usage in machine and deep learning by representing weights and
activations with just one bit. However, most existing training algorithms for
BNNs rely on quantization-aware floating-point Stochastic Gradient Descent
(SGD), limiting the full exploitation of binary operations to the inference
phase only. In this work, we propose, for the first time, a fully binary and
gradient-free training algorithm for multi-layer BNNs, eliminating the need for
back-propagated floating-point gradients. Specifically, the proposed algorithm
relies on local binary error signals and binary weight updates, employing
integer-valued hidden weights that serve as a synaptic metaplasticity
mechanism, thereby enhancing its neurobiological plausibility. The fully binary
and gradient-free algorithm introduced in this paper enables the training of
binary multi-layer perceptrons with binary inputs, weights, and activations, by
using exclusively XNOR, Popcount, and increment/decrement operations.
Experimental results on multi-class classification benchmarks show test
accuracy improvements of up to +35.47% over the only existing fully binary
single-layer state-of-the-art solution. Compared to full-precision SGD, our
solution improves test accuracy by up to +41.31% under the same total memory
demand -- including the model, activations, and input
dataset -- while also reducing computational cost by two orders of
magnitude in terms of the total number of equivalent Boolean gates. The
proposed algorithm is made available to the scientific community as a public
repository.
|
2412.00133 | Friedhelm Hamann | Friedhelm Hamann, Daniel Gehrig, Filbert Febryanto, Kostas Daniilidis,
Guillermo Gallego | ETAP: Event-based Tracking of Any Point | 17 pages, 15 figures, 8 tables. Project page:
https://github.com/tub-rip/ETAP | IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
Nashville, 2025 | null | null | cs.CV cs.LG cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Tracking any point (TAP) recently shifted the motion estimation paradigm from
focusing on individual salient points with local templates to tracking
arbitrary points with global image contexts. However, while research has mostly
focused on driving the accuracy of models in nominal settings, addressing
scenarios with difficult lighting conditions and high-speed motions remains out
of reach due to the limitations of the sensor. This work addresses this
challenge with the first event camera-based TAP method. It leverages the high
temporal resolution and high dynamic range of event cameras for robust
high-speed tracking, and the global contexts in TAP methods to handle
asynchronous and sparse event measurements. We further extend the TAP framework
to handle event feature variations induced by motion -- thereby addressing an
open challenge in purely event-based tracking -- with a novel feature-alignment
loss which ensures the learning of motion-robust features. Our method is
trained with data from a new data generation pipeline and systematically
ablated across all design decisions. Our method shows strong cross-dataset
generalization and performs 136% better on the average Jaccard metric than the
baselines. Moreover, on an established feature tracking benchmark, it achieves
a 20% improvement over the previous best event-only method and even surpasses
the previous best events-and-frames method by 4.1%. Our code is available at
https://github.com/tub-rip/ETAP
| [
{
"version": "v1",
"created": "Thu, 28 Nov 2024 15:13:24 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 14:08:39 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hamann",
"Friedhelm",
""
],
[
"Gehrig",
"Daniel",
""
],
[
"Febryanto",
"Filbert",
""
],
[
"Daniilidis",
"Kostas",
""
],
[
"Gallego",
"Guillermo",
""
]
] | TITLE: ETAP: Event-based Tracking of Any Point
ABSTRACT: Tracking any point (TAP) recently shifted the motion estimation paradigm from
focusing on individual salient points with local templates to tracking
arbitrary points with global image contexts. However, while research has mostly
focused on driving the accuracy of models in nominal settings, addressing
scenarios with difficult lighting conditions and high-speed motions remains out
of reach due to the limitations of the sensor. This work addresses this
challenge with the first event camera-based TAP method. It leverages the high
temporal resolution and high dynamic range of event cameras for robust
high-speed tracking, and the global contexts in TAP methods to handle
asynchronous and sparse event measurements. We further extend the TAP framework
to handle event feature variations induced by motion -- thereby addressing an
open challenge in purely event-based tracking -- with a novel feature-alignment
loss which ensures the learning of motion-robust features. Our method is
trained with data from a new data generation pipeline and systematically
ablated across all design decisions. Our method shows strong cross-dataset
generalization and performs 136% better on the average Jaccard metric than the
baselines. Moreover, on an established feature tracking benchmark, it achieves
a 20% improvement over the previous best event-only method and even surpasses
the previous best events-and-frames method by 4.1%. Our code is available at
https://github.com/tub-rip/ETAP
|
2412.01255 | Oriana Presacan | Oriana Presacan, Alexandru Dorobantiu, Vajira Thambawita, Michael A.
Riegler, Mette H. Stensen, Mario Iliceto, Alexandru C. Aldea, Akriti Sharma | Merging synthetic and real embryo data for advanced AI predictions | null | Scientific Reports, 15(1): 9805, 2025 | 10.1038/s41598-025-94680-0 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Accurate embryo morphology assessment is essential in assisted reproductive
technology for selecting the most viable embryo. Artificial intelligence has
the potential to enhance this process. However, the limited availability of
embryo data presents challenges for training deep learning models. To address
this, we trained two generative models using two datasets-one we created and
made publicly available, and one existing public dataset-to generate synthetic
embryo images at various cell stages, including 2-cell, 4-cell, 8-cell, morula,
and blastocyst. These were combined with real images to train classification
models for embryo cell stage prediction. Our results demonstrate that
incorporating synthetic images alongside real data improved classification
performance, with the model achieving 97% accuracy compared to 94.5% when
trained solely on real data. This trend remained consistent when tested on an
external Blastocyst dataset from a different clinic. Notably, even when trained
exclusively on synthetic data and tested on real data, the model achieved a
high accuracy of 92%. Furthermore, combining synthetic data from both
generative models yielded better classification results than using data from a
single generative model. Four embryologists evaluated the fidelity of the
synthetic images through a Turing test, during which they annotated
inaccuracies and offered feedback. The analysis showed the diffusion model
outperformed the generative adversarial network, deceiving embryologists 66.6%
versus 25.3% and achieving lower Frechet inception distance scores.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 08:24:49 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 16:57:58 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Presacan",
"Oriana",
""
],
[
"Dorobantiu",
"Alexandru",
""
],
[
"Thambawita",
"Vajira",
""
],
[
"Riegler",
"Michael A.",
""
],
[
"Stensen",
"Mette H.",
""
],
[
"Iliceto",
"Mario",
""
],
[
"Aldea",
"Alexandru C.",
""
],
[
"Sharma",
"Akriti",
""
]
] | TITLE: Merging synthetic and real embryo data for advanced AI predictions
ABSTRACT: Accurate embryo morphology assessment is essential in assisted reproductive
technology for selecting the most viable embryo. Artificial intelligence has
the potential to enhance this process. However, the limited availability of
embryo data presents challenges for training deep learning models. To address
this, we trained two generative models using two datasets-one we created and
made publicly available, and one existing public dataset-to generate synthetic
embryo images at various cell stages, including 2-cell, 4-cell, 8-cell, morula,
and blastocyst. These were combined with real images to train classification
models for embryo cell stage prediction. Our results demonstrate that
incorporating synthetic images alongside real data improved classification
performance, with the model achieving 97% accuracy compared to 94.5% when
trained solely on real data. This trend remained consistent when tested on an
external Blastocyst dataset from a different clinic. Notably, even when trained
exclusively on synthetic data and tested on real data, the model achieved a
high accuracy of 92%. Furthermore, combining synthetic data from both
generative models yielded better classification results than using data from a
single generative model. Four embryologists evaluated the fidelity of the
synthetic images through a Turing test, during which they annotated
inaccuracies and offered feedback. The analysis showed the diffusion model
outperformed the generative adversarial network, deceiving embryologists 66.6%
versus 25.3% and achieving lower Frechet inception distance scores.
|
2412.01820 | Jiayuan Rao | Jiayuan Rao, Haoning Wu, Hao Jiang, Ya Zhang, Yanfeng Wang, Weidi Xie | Towards Universal Soccer Video Understanding | CVPR 2025; Project Page: https://jyrao.github.io/UniSoccer/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a globally celebrated sport, soccer has attracted widespread interest from
fans all over the world. This paper aims to develop a comprehensive multi-modal
framework for soccer video understanding. Specifically, we make the following
contributions in this paper: (i) we introduce SoccerReplay-1988, the largest
multi-modal soccer dataset to date, featuring videos and detailed annotations
from 1,988 complete matches, with an automated annotation pipeline; (ii) we
present an advanced soccer-specific visual encoder, MatchVision, which
leverages spatiotemporal information across soccer videos and excels in various
downstream tasks; (iii) we conduct extensive experiments and ablation studies
on event classification, commentary generation, and multi-view foul
recognition. MatchVision demonstrates state-of-the-art performance on all of
them, substantially outperforming existing models, which highlights the
superiority of our proposed data and model. We believe that this work will
offer a standard paradigm for sports understanding research.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 18:58:04 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Dec 2024 06:38:22 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 14:22:47 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Rao",
"Jiayuan",
""
],
[
"Wu",
"Haoning",
""
],
[
"Jiang",
"Hao",
""
],
[
"Zhang",
"Ya",
""
],
[
"Wang",
"Yanfeng",
""
],
[
"Xie",
"Weidi",
""
]
] | TITLE: Towards Universal Soccer Video Understanding
ABSTRACT: As a globally celebrated sport, soccer has attracted widespread interest from
fans all over the world. This paper aims to develop a comprehensive multi-modal
framework for soccer video understanding. Specifically, we make the following
contributions in this paper: (i) we introduce SoccerReplay-1988, the largest
multi-modal soccer dataset to date, featuring videos and detailed annotations
from 1,988 complete matches, with an automated annotation pipeline; (ii) we
present an advanced soccer-specific visual encoder, MatchVision, which
leverages spatiotemporal information across soccer videos and excels in various
downstream tasks; (iii) we conduct extensive experiments and ablation studies
on event classification, commentary generation, and multi-view foul
recognition. MatchVision demonstrates state-of-the-art performance on all of
them, substantially outperforming existing models, which highlights the
superiority of our proposed data and model. We believe that this work will
offer a standard paradigm for sports understanding research.
|
2412.02083 | Ashutosh Hathidara | Ashutosh Hathidara, Lalit Pandey | Implementing An Artificial Quantum Perceptron | null | Ann Comp Phy Material Sci, 2(1), 01-05 (2025) | 10.33140/ACPMS.02.01.01 | null | quant-ph cs.AI | http://creativecommons.org/licenses/by/4.0/ | A Perceptron is a fundamental building block of a neural network. The
flexibility and scalability of perceptron make it ubiquitous in building
intelligent systems. Studies have shown the efficacy of a single neuron in
making intelligent decisions. Here, we examined and compared two perceptrons
with distinct mechanisms, and developed a quantum version of one of those
perceptrons. As a part of this modeling, we implemented the quantum circuit for
an artificial perception, generated a dataset, and simulated the training.
Through these experiments, we show that there is an exponential growth
advantage and test different qubit versions. Our findings show that this
quantum model of an individual perceptron can be used as a pattern classifier.
For the second type of model, we provide an understanding to design and
simulate a spike-dependent quantum perceptron. Our code is available at
https://github.com/ashutosh1919/quantum-perceptron
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2024 01:57:09 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 14:54:27 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hathidara",
"Ashutosh",
""
],
[
"Pandey",
"Lalit",
""
]
] | TITLE: Implementing An Artificial Quantum Perceptron
ABSTRACT: A Perceptron is a fundamental building block of a neural network. The
flexibility and scalability of perceptron make it ubiquitous in building
intelligent systems. Studies have shown the efficacy of a single neuron in
making intelligent decisions. Here, we examined and compared two perceptrons
with distinct mechanisms, and developed a quantum version of one of those
perceptrons. As a part of this modeling, we implemented the quantum circuit for
an artificial perception, generated a dataset, and simulated the training.
Through these experiments, we show that there is an exponential growth
advantage and test different qubit versions. Our findings show that this
quantum model of an individual perceptron can be used as a pattern classifier.
For the second type of model, we provide an understanding to design and
simulate a spike-dependent quantum perceptron. Our code is available at
https://github.com/ashutosh1919/quantum-perceptron
|
2412.03240 | Haowen Bai | Haowen Bai, Jiangshe Zhang, Zixiang Zhao, Yichen Wu, Lilun Deng, Yukun
Cui, Tao Feng, Shuang Xu | Task-driven Image Fusion with Learnable Fusion Loss | Accepted to CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-modal image fusion aggregates information from multiple sensor sources,
achieving superior visual quality and perceptual features compared to
single-source images, often improving downstream tasks. However, current fusion
methods for downstream tasks still use predefined fusion objectives that
potentially mismatch the downstream tasks, limiting adaptive guidance and
reducing model flexibility. To address this, we propose Task-driven Image
Fusion (TDFusion), a fusion framework incorporating a learnable fusion loss
guided by task loss. Specifically, our fusion loss includes learnable
parameters modeled by a neural network called the loss generation module. This
module is supervised by the downstream task loss in a meta-learning manner. The
learning objective is to minimize the task loss of fused images after
optimizing the fusion module with the fusion loss. Iterative updates between
the fusion module and the loss module ensure that the fusion network evolves
toward minimizing task loss, guiding the fusion process toward the task
objectives. TDFusion's training relies entirely on the downstream task loss,
making it adaptable to any specific task. It can be applied to any architecture
of fusion and task networks. Experiments demonstrate TDFusion's performance
through fusion experiments conducted on four different datasets, in addition to
evaluations on semantic segmentation and object detection tasks.
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2024 11:42:17 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 11:21:17 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Bai",
"Haowen",
""
],
[
"Zhang",
"Jiangshe",
""
],
[
"Zhao",
"Zixiang",
""
],
[
"Wu",
"Yichen",
""
],
[
"Deng",
"Lilun",
""
],
[
"Cui",
"Yukun",
""
],
[
"Feng",
"Tao",
""
],
[
"Xu",
"Shuang",
""
]
] | TITLE: Task-driven Image Fusion with Learnable Fusion Loss
ABSTRACT: Multi-modal image fusion aggregates information from multiple sensor sources,
achieving superior visual quality and perceptual features compared to
single-source images, often improving downstream tasks. However, current fusion
methods for downstream tasks still use predefined fusion objectives that
potentially mismatch the downstream tasks, limiting adaptive guidance and
reducing model flexibility. To address this, we propose Task-driven Image
Fusion (TDFusion), a fusion framework incorporating a learnable fusion loss
guided by task loss. Specifically, our fusion loss includes learnable
parameters modeled by a neural network called the loss generation module. This
module is supervised by the downstream task loss in a meta-learning manner. The
learning objective is to minimize the task loss of fused images after
optimizing the fusion module with the fusion loss. Iterative updates between
the fusion module and the loss module ensure that the fusion network evolves
toward minimizing task loss, guiding the fusion process toward the task
objectives. TDFusion's training relies entirely on the downstream task loss,
making it adaptable to any specific task. It can be applied to any architecture
of fusion and task networks. Experiments demonstrate TDFusion's performance
through fusion experiments conducted on four different datasets, in addition to
evaluations on semantic segmentation and object detection tasks.
|
2412.04282 | Bingbing Hu | Bingbing Hu, Yanyan Li, Rui Xie, Bo Xu, Haoye Dong, Junfeng Yao, Gim
Hee Lee | Learnable Infinite Taylor Gaussian for Dynamic View Rendering | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Capturing the temporal evolution of Gaussian properties such as position,
rotation, and scale is a challenging task due to the vast number of
time-varying parameters and the limited photometric data available, which
generally results in convergence issues, making it difficult to find an optimal
solution. While feeding all inputs into an end-to-end neural network can
effectively model complex temporal dynamics, this approach lacks explicit
supervision and struggles to generate high-quality transformation fields. On
the other hand, using time-conditioned polynomial functions to model Gaussian
trajectories and orientations provides a more explicit and interpretable
solution, but requires significant handcrafted effort and lacks
generalizability across diverse scenes. To overcome these limitations, this
paper introduces a novel approach based on a learnable infinite Taylor Formula
to model the temporal evolution of Gaussians. This method offers both the
flexibility of an implicit network-based approach and the interpretability of
explicit polynomial functions, allowing for more robust and generalizable
modeling of Gaussian dynamics across various dynamic scenes. Extensive
experiments on dynamic novel view rendering tasks are conducted on public
datasets, demonstrating that the proposed method achieves state-of-the-art
performance in this domain. More information is available on our project
page(https://ellisonking.github.io/TaylorGaussian).
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 16:03:37 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 12:53:56 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hu",
"Bingbing",
""
],
[
"Li",
"Yanyan",
""
],
[
"Xie",
"Rui",
""
],
[
"Xu",
"Bo",
""
],
[
"Dong",
"Haoye",
""
],
[
"Yao",
"Junfeng",
""
],
[
"Lee",
"Gim Hee",
""
]
] | TITLE: Learnable Infinite Taylor Gaussian for Dynamic View Rendering
ABSTRACT: Capturing the temporal evolution of Gaussian properties such as position,
rotation, and scale is a challenging task due to the vast number of
time-varying parameters and the limited photometric data available, which
generally results in convergence issues, making it difficult to find an optimal
solution. While feeding all inputs into an end-to-end neural network can
effectively model complex temporal dynamics, this approach lacks explicit
supervision and struggles to generate high-quality transformation fields. On
the other hand, using time-conditioned polynomial functions to model Gaussian
trajectories and orientations provides a more explicit and interpretable
solution, but requires significant handcrafted effort and lacks
generalizability across diverse scenes. To overcome these limitations, this
paper introduces a novel approach based on a learnable infinite Taylor Formula
to model the temporal evolution of Gaussians. This method offers both the
flexibility of an implicit network-based approach and the interpretability of
explicit polynomial functions, allowing for more robust and generalizable
modeling of Gaussian dynamics across various dynamic scenes. Extensive
experiments on dynamic novel view rendering tasks are conducted on public
datasets, demonstrating that the proposed method achieves state-of-the-art
performance in this domain. More information is available on our project
page(https://ellisonking.github.io/TaylorGaussian).
|
2412.04526 | Daiheng Zhang | Daiheng Zhang and Yan Zeng and Xinyu Hong and Jinbo Xu | Leveraging Multi-modal Representations to Predict Protein Melting
Temperatures | Accepted to AAAI 2025 FM4BIO workshop | null | null | null | cs.LG cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurately predicting protein melting temperature changes (Delta Tm) is
fundamental for assessing protein stability and guiding protein engineering.
Leveraging multi-modal protein representations has shown great promise in
capturing the complex relationships among protein sequences, structures, and
functions. In this study, we develop models based on powerful protein language
models, including ESM-2, ESM-3 and AlphaFold, using various feature extraction
methods to enhance prediction accuracy. By utilizing the ESM-3 model, we
achieve a new state-of-the-art performance on the s571 test dataset, obtaining
a Pearson correlation coefficient (PCC) of 0.50. Furthermore, we conduct a fair
evaluation to compare the performance of different protein language models in
the Delta Tm prediction task. Our results demonstrate that integrating
multi-modal protein representations could advance the prediction of protein
melting temperatures.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 16:03:09 GMT"
},
{
"version": "v2",
"created": "Sun, 15 Dec 2024 17:55:33 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Mar 2025 23:01:55 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Daiheng",
""
],
[
"Zeng",
"Yan",
""
],
[
"Hong",
"Xinyu",
""
],
[
"Xu",
"Jinbo",
""
]
] | TITLE: Leveraging Multi-modal Representations to Predict Protein Melting
Temperatures
ABSTRACT: Accurately predicting protein melting temperature changes (Delta Tm) is
fundamental for assessing protein stability and guiding protein engineering.
Leveraging multi-modal protein representations has shown great promise in
capturing the complex relationships among protein sequences, structures, and
functions. In this study, we develop models based on powerful protein language
models, including ESM-2, ESM-3 and AlphaFold, using various feature extraction
methods to enhance prediction accuracy. By utilizing the ESM-3 model, we
achieve a new state-of-the-art performance on the s571 test dataset, obtaining
a Pearson correlation coefficient (PCC) of 0.50. Furthermore, we conduct a fair
evaluation to compare the performance of different protein language models in
the Delta Tm prediction task. Our results demonstrate that integrating
multi-modal protein representations could advance the prediction of protein
melting temperatures.
|
2412.07217 | Aniket Bhanderi | Aniket Bhanderi, Raj Bhatnagar | Incremental Gaussian Mixture Clustering for Data Streams | null | null | 10.1109/ICDMW65004.2024.00032 | null | cs.LG cs.DB | http://creativecommons.org/licenses/by/4.0/ | The problem of analyzing data streams of very large volumes is important and
is very desirable for many application domains. In this paper we present and
demonstrate effective working of an algorithm to find clusters and anomalous
data points in a streaming datasets. Entropy minimization is used as a
criterion for defining and updating clusters formed from a streaming dataset.
As the clusters are formed we also identify anomalous datapoints that show up
far away from all known clusters. With a number of 2-D datasets we demonstrate
the effectiveness of discovering the clusters and also identifying anomalous
data points.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 06:15:14 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Bhanderi",
"Aniket",
""
],
[
"Bhatnagar",
"Raj",
""
]
] | TITLE: Incremental Gaussian Mixture Clustering for Data Streams
ABSTRACT: The problem of analyzing data streams of very large volumes is important and
is very desirable for many application domains. In this paper we present and
demonstrate effective working of an algorithm to find clusters and anomalous
data points in a streaming datasets. Entropy minimization is used as a
criterion for defining and updating clusters formed from a streaming dataset.
As the clusters are formed we also identify anomalous datapoints that show up
far away from all known clusters. With a number of 2-D datasets we demonstrate
the effectiveness of discovering the clusters and also identifying anomalous
data points.
|
2412.09401 | Siyan Dong | Yuzheng Liu, Siyan Dong, Shuzhe Wang, Yingda Yin, Yanchao Yang,
Qingnan Fan, Baoquan Chen | SLAM3R: Real-Time Dense Scene Reconstruction from Monocular RGB Videos | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce SLAM3R, a novel and effective system for
real-time, high-quality, dense 3D reconstruction using RGB videos. SLAM3R
provides an end-to-end solution by seamlessly integrating local 3D
reconstruction and global coordinate registration through feed-forward neural
networks. Given an input video, the system first converts it into overlapping
clips using a sliding window mechanism. Unlike traditional pose
optimization-based methods, SLAM3R directly regresses 3D pointmaps from RGB
images in each window and progressively aligns and deforms these local
pointmaps to create a globally consistent scene reconstruction - all without
explicitly solving any camera parameters. Experiments across datasets
consistently show that SLAM3R achieves state-of-the-art reconstruction accuracy
and completeness while maintaining real-time performance at 20+ FPS. Code
available at: https://github.com/PKU-VCL-3DV/SLAM3R.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2024 16:08:03 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Dec 2024 12:23:39 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Mar 2025 17:01:39 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Yuzheng",
""
],
[
"Dong",
"Siyan",
""
],
[
"Wang",
"Shuzhe",
""
],
[
"Yin",
"Yingda",
""
],
[
"Yang",
"Yanchao",
""
],
[
"Fan",
"Qingnan",
""
],
[
"Chen",
"Baoquan",
""
]
] | TITLE: SLAM3R: Real-Time Dense Scene Reconstruction from Monocular RGB Videos
ABSTRACT: In this paper, we introduce SLAM3R, a novel and effective system for
real-time, high-quality, dense 3D reconstruction using RGB videos. SLAM3R
provides an end-to-end solution by seamlessly integrating local 3D
reconstruction and global coordinate registration through feed-forward neural
networks. Given an input video, the system first converts it into overlapping
clips using a sliding window mechanism. Unlike traditional pose
optimization-based methods, SLAM3R directly regresses 3D pointmaps from RGB
images in each window and progressively aligns and deforms these local
pointmaps to create a globally consistent scene reconstruction - all without
explicitly solving any camera parameters. Experiments across datasets
consistently show that SLAM3R achieves state-of-the-art reconstruction accuracy
and completeness while maintaining real-time performance at 20+ FPS. Code
available at: https://github.com/PKU-VCL-3DV/SLAM3R.
|
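As a small illustration of the sliding-window step mentioned in the SLAM3R abstract (converting an input video into overlapping clips), here is a hedged Python sketch; the window length, stride, and function name are assumptions rather than SLAM3R's actual interface.

```python
from typing import List, Sequence

def make_overlapping_clips(frames: Sequence, window: int = 11, stride: int = 5) -> List[Sequence]:
    """Split a frame sequence into overlapping clips (window/stride values are assumed)."""
    clips = []
    for start in range(0, max(len(frames) - window + 1, 1), stride):
        clips.append(frames[start:start + window])
    return clips

# usage: 100 dummy frames -> overlapping 11-frame clips, hopping by 5 frames
clips = make_overlapping_clips(list(range(100)))
```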
2412.10235 | Songpengcheng Xia | Songpengcheng Xia, Yu Zhang, Zhuo Su, Xiaozheng Zheng, Zheng Lv,
Guidong Wang, Yongjie Zhang, Qi Wu, Lei Chu, Ling Pei | EnvPoser: Environment-aware Realistic Human Motion Estimation from
Sparse Observations with Uncertainty Modeling | Accepted by CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Estimating full-body motion using the tracking signals of head and hands from
VR devices holds great potential for various applications. However, the
sparsity and unique distribution of observations present a significant
challenge, resulting in an ill-posed problem with multiple feasible solutions
(i.e., hypotheses). This amplifies uncertainty and ambiguity in full-body
motion estimation, especially for the lower-body joints. Therefore, we propose
a new method, EnvPoser, that employs a two-stage framework to perform full-body
motion estimation using sparse tracking signals and pre-scanned environment
from VR devices. EnvPoser models the multi-hypothesis nature of human motion
through an uncertainty-aware estimation module in the first stage. In the
second stage, we refine these multi-hypothesis estimates by integrating
semantic and geometric environmental constraints, ensuring that the final
motion estimation aligns realistically with both the environmental context and
physical interactions. Qualitative and quantitative experiments on two public
datasets demonstrate that our method achieves state-of-the-art performance,
highlighting significant improvements in human motion estimation within
motion-environment interaction scenarios.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2024 16:06:46 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 05:16:55 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xia",
"Songpengcheng",
""
],
[
"Zhang",
"Yu",
""
],
[
"Su",
"Zhuo",
""
],
[
"Zheng",
"Xiaozheng",
""
],
[
"Lv",
"Zheng",
""
],
[
"Wang",
"Guidong",
""
],
[
"Zhang",
"Yongjie",
""
],
[
"Wu",
"Qi",
""
],
[
"Chu",
"Lei",
""
],
[
"Pei",
"Ling",
""
]
] | TITLE: EnvPoser: Environment-aware Realistic Human Motion Estimation from
Sparse Observations with Uncertainty Modeling
ABSTRACT: Estimating full-body motion using the tracking signals of head and hands from
VR devices holds great potential for various applications. However, the
sparsity and unique distribution of observations present a significant
challenge, resulting in an ill-posed problem with multiple feasible solutions
(i.e., hypotheses). This amplifies uncertainty and ambiguity in full-body
motion estimation, especially for the lower-body joints. Therefore, we propose
a new method, EnvPoser, that employs a two-stage framework to perform full-body
motion estimation using sparse tracking signals and pre-scanned environment
from VR devices. EnvPoser models the multi-hypothesis nature of human motion
through an uncertainty-aware estimation module in the first stage. In the
second stage, we refine these multi-hypothesis estimates by integrating
semantic and geometric environmental constraints, ensuring that the final
motion estimation aligns realistically with both the environmental context and
physical interactions. Qualitative and quantitative experiments on two public
datasets demonstrate that our method achieves state-of-the-art performance,
highlighting significant improvements in human motion estimation within
motion-environment interaction scenarios.
|
2412.10437 | XiMing Xing | Ximing Xing, Juncheng Hu, Jing Zhang, Dong Xu, Qian Yu | SVGFusion: Scalable Text-to-SVG Generation via Vector Space Diffusion | project page: https://ximinng.github.io/SVGFusionProject/ | null | null | null | cs.CV cs.GR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we introduce SVGFusion, a Text-to-SVG model capable of scaling
to real-world SVG data without relying on text-based discrete language models
or prolonged Score Distillation Sampling (SDS) optimization. The core idea of
SVGFusion is to utilize a popular Text-to-Image framework to learn a continuous
latent space for vector graphics. Specifically, SVGFusion comprises two key
modules: a Vector-Pixel Fusion Variational Autoencoder (VP-VAE) and a Vector
Space Diffusion Transformer (VS-DiT). The VP-VAE processes both SVG codes and
their corresponding rasterizations to learn a continuous latent space, while
the VS-DiT generates latent codes within this space based on the input text
prompt. Building on the VP-VAE, we propose a novel rendering sequence modeling
strategy which enables the learned latent space to capture the inherent
creation logic of SVGs. This allows the model to generate SVGs with higher
visual quality and more logical construction, while systematically avoiding
occlusion in complex graphic compositions. Additionally, the scalability of
SVGFusion can be continuously enhanced by adding more VS-DiT blocks. To
effectively train and evaluate SVGFusion, we construct SVGX-Dataset, a
large-scale, high-quality SVG dataset that addresses the scarcity of
high-quality vector data. Extensive experiments demonstrate the superiority of
SVGFusion over existing SVG generation methods, establishing a new framework
for SVG content creation. Code, model, and data will be released at:
https://ximinng.github.io/SVGFusionProject/
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 09:02:25 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 16:20:45 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xing",
"Ximing",
""
],
[
"Hu",
"Juncheng",
""
],
[
"Zhang",
"Jing",
""
],
[
"Xu",
"Dong",
""
],
[
"Yu",
"Qian",
""
]
] | TITLE: SVGFusion: Scalable Text-to-SVG Generation via Vector Space Diffusion
ABSTRACT: In this work, we introduce SVGFusion, a Text-to-SVG model capable of scaling
to real-world SVG data without relying on text-based discrete language models
or prolonged Score Distillation Sampling (SDS) optimization. The core idea of
SVGFusion is to utilize a popular Text-to-Image framework to learn a continuous
latent space for vector graphics. Specifically, SVGFusion comprises two key
modules: a Vector-Pixel Fusion Variational Autoencoder (VP-VAE) and a Vector
Space Diffusion Transformer (VS-DiT). The VP-VAE processes both SVG codes and
their corresponding rasterizations to learn a continuous latent space, while
the VS-DiT generates latent codes within this space based on the input text
prompt. Building on the VP-VAE, we propose a novel rendering sequence modeling
strategy which enables the learned latent space to capture the inherent
creation logic of SVGs. This allows the model to generate SVGs with higher
visual quality and more logical construction, while systematically avoiding
occlusion in complex graphic compositions. Additionally, the scalability of
SVGFusion can be continuously enhanced by adding more VS-DiT blocks. To
effectively train and evaluate SVGFusion, we construct SVGX-Dataset, a
large-scale, high-quality SVG dataset that addresses the scarcity of
high-quality vector data. Extensive experiments demonstrate the superiority of
SVGFusion over existing SVG generation methods, establishing a new framework
for SVG content creation. Code, model, and data will be released at:
https://ximinng.github.io/SVGFusionProject/
|
2412.10783 | Zhengcong Fei | Zhengcong Fei, Di Qiu, Debang Li, Changqian Yu, Mingyuan Fan | Video Diffusion Transformers are In-Context Learners | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper investigates a solution for enabling in-context capabilities of
video diffusion transformers, with minimal tuning required for activation.
Specifically, we propose a simple pipeline to leverage in-context generation:
($\textbf{i}$) concatenate videos along the spatial or temporal dimension,
($\textbf{ii}$) jointly caption multi-scene video clips from one source, and
($\textbf{iii}$) apply task-specific fine-tuning using carefully curated small
datasets. Through a series of diverse controllable tasks, we demonstrate
qualitatively that existing advanced text-to-video models can effectively
perform in-context generation. Notably, it allows for the creation of
consistent multi-scene videos exceeding 30 seconds in duration, without
additional computational overhead. Importantly, this method requires no
modifications to the original models and yields high-fidelity video outputs
that better align with prompt specifications and maintain role consistency. Our
framework presents a valuable tool for the research community and offers
critical insights for advancing product-level controllable video generation
systems. The data, code, and model weights are publicly available at:
https://github.com/feizc/Video-In-Context.
| [
{
"version": "v1",
"created": "Sat, 14 Dec 2024 10:39:55 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Dec 2024 11:39:59 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Mar 2025 08:53:33 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Fei",
"Zhengcong",
""
],
[
"Qiu",
"Di",
""
],
[
"Li",
"Debang",
""
],
[
"Yu",
"Changqian",
""
],
[
"Fan",
"Mingyuan",
""
]
] | TITLE: Video Diffusion Transformers are In-Context Learners
ABSTRACT: This paper investigates a solution for enabling in-context capabilities of
video diffusion transformers, with minimal tuning required for activation.
Specifically, we propose a simple pipeline to leverage in-context generation:
($\textbf{i}$) concatenate videos along the spatial or temporal dimension,
($\textbf{ii}$) jointly caption multi-scene video clips from one source, and
($\textbf{iii}$) apply task-specific fine-tuning using carefully curated small
datasets. Through a series of diverse controllable tasks, we demonstrate
qualitatively that existing advanced text-to-video models can effectively
perform in-context generation. Notably, it allows for the creation of
consistent multi-scene videos exceeding 30 seconds in duration, without
additional computational overhead. Importantly, this method requires no
modifications to the original models and yields high-fidelity video outputs
that better align with prompt specifications and maintain role consistency. Our
framework presents a valuable tool for the research community and offers
critical insights for advancing product-level controllable video generation
systems. The data, code, and model weights are publicly available at:
https://github.com/feizc/Video-In-Context.
|
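To make step (i) of the in-context pipeline concrete, here is a hedged PyTorch-style sketch of concatenating two video tensors along the spatial or temporal dimension; the (T, C, H, W) layout and function name are assumptions, not the released code.

```python
import torch

def concat_videos(a: torch.Tensor, b: torch.Tensor, mode: str = "time") -> torch.Tensor:
    """Concatenate two videos shaped (T, C, H, W) along time or width (layout assumed)."""
    if mode == "time":
        return torch.cat([a, b], dim=0)   # longer clip: (T1 + T2, C, H, W)
    if mode == "space":
        return torch.cat([a, b], dim=3)   # side-by-side frames: (T, C, H, W1 + W2)
    raise ValueError(f"unknown mode: {mode}")

# usage with dummy clips of 16 RGB frames at 64x64
clip_a = torch.rand(16, 3, 64, 64)
clip_b = torch.rand(16, 3, 64, 64)
joint = concat_videos(clip_a, clip_b, mode="space")
```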
2412.10966 | Alex Morehead | Alex Morehead and Jianlin Cheng | FlowDock: Geometric Flow Matching for Generative Protein-Ligand Docking
and Affinity Prediction | 15 pages, 2 tables, 2 algorithms, 11 figures. Code, data, pre-trained
models, and baseline method predictions are available at
https://github.com/BioinfoMachineLearning/FlowDock | null | null | null | cs.LG cs.AI q-bio.BM q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Powerful generative AI models of protein-ligand structure have recently been
proposed, but few of these methods support both flexible protein-ligand docking
and affinity estimation. Of those that do, none can directly model multiple
binding ligands concurrently or have been rigorously benchmarked on
pharmacologically relevant drug targets, hindering their widespread adoption in
drug discovery efforts. In this work, we propose FlowDock, the first deep
geometric generative model based on conditional flow matching that learns to
directly map unbound (apo) structures to their bound (holo) counterparts for an
arbitrary number of binding ligands. Furthermore, FlowDock provides predicted
structural confidence scores and binding affinity values with each of its
generated protein-ligand complex structures, enabling fast virtual screening of
new (multi-ligand) drug targets. For the well-known PoseBusters Benchmark
dataset, FlowDock outperforms single-sequence AlphaFold 3 with a 51% blind
docking success rate using unbound (apo) protein input structures and without
any information derived from multiple sequence alignments, and for the
challenging new DockGen-E dataset, FlowDock outperforms single-sequence
AlphaFold 3 and matches single-sequence Chai-1 for binding pocket
generalization. Additionally, in the ligand category of the 16th community-wide
Critical Assessment of Techniques for Structure Prediction (CASP16), FlowDock
ranked among the top-5 methods for pharmacological binding affinity estimation
across 140 protein-ligand complexes, demonstrating the efficacy of its learned
representations in virtual screening. Source code, data, and pre-trained models
are available at https://github.com/BioinfoMachineLearning/FlowDock.
| [
{
"version": "v1",
"created": "Sat, 14 Dec 2024 20:54:37 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jan 2025 21:20:03 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 16:50:30 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Morehead",
"Alex",
""
],
[
"Cheng",
"Jianlin",
""
]
] | TITLE: FlowDock: Geometric Flow Matching for Generative Protein-Ligand Docking
and Affinity Prediction
ABSTRACT: Powerful generative AI models of protein-ligand structure have recently been
proposed, but few of these methods support both flexible protein-ligand docking
and affinity estimation. Of those that do, none can directly model multiple
binding ligands concurrently or have been rigorously benchmarked on
pharmacologically relevant drug targets, hindering their widespread adoption in
drug discovery efforts. In this work, we propose FlowDock, the first deep
geometric generative model based on conditional flow matching that learns to
directly map unbound (apo) structures to their bound (holo) counterparts for an
arbitrary number of binding ligands. Furthermore, FlowDock provides predicted
structural confidence scores and binding affinity values with each of its
generated protein-ligand complex structures, enabling fast virtual screening of
new (multi-ligand) drug targets. For the well-known PoseBusters Benchmark
dataset, FlowDock outperforms single-sequence AlphaFold 3 with a 51% blind
docking success rate using unbound (apo) protein input structures and without
any information derived from multiple sequence alignments, and for the
challenging new DockGen-E dataset, FlowDock outperforms single-sequence
AlphaFold 3 and matches single-sequence Chai-1 for binding pocket
generalization. Additionally, in the ligand category of the 16th community-wide
Critical Assessment of Techniques for Structure Prediction (CASP16), FlowDock
ranked among the top-5 methods for pharmacological binding affinity estimation
across 140 protein-ligand complexes, demonstrating the efficacy of its learned
representations in virtual screening. Source code, data, and pre-trained models
are available at https://github.com/BioinfoMachineLearning/FlowDock.
|
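FlowDock builds on conditional flow matching. As background, here is a generic (not FlowDock-specific) Python sketch of the standard linear-interpolation flow-matching objective between an unbound (apo) structure and its bound (holo) counterpart; the tensor shapes and the toy velocity network are assumptions.

```python
import torch

def flow_matching_loss(model, x_apo: torch.Tensor, x_holo: torch.Tensor) -> torch.Tensor:
    """Generic conditional flow matching: regress the velocity along a straight path."""
    t = torch.rand(x_apo.shape[0], 1, 1)      # one random time per sample in the batch
    x_t = (1.0 - t) * x_apo + t * x_holo      # linear interpolant between the endpoints
    target_velocity = x_holo - x_apo          # constant velocity of the straight path
    predicted_velocity = model(x_t, t)
    return torch.mean((predicted_velocity - target_velocity) ** 2)

# usage with a toy velocity network over (batch, atoms, 3) coordinates
net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3))
model = lambda x, t: net(x)                   # toy model that ignores the time input
loss = flow_matching_loss(model, torch.randn(8, 50, 3), torch.randn(8, 50, 3))
loss.backward()
```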
2412.11457 | Ruijie Lu | Ruijie Lu, Yixin Chen, Junfeng Ni, Baoxiong Jia, Yu Liu, Diwen Wan,
Gang Zeng, Siyuan Huang | MOVIS: Enhancing Multi-Object Novel View Synthesis for Indoor Scenes | Accepted by CVPR 2025. Project page:
https://jason-aplp.github.io/MOVIS/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Repurposing pre-trained diffusion models has been proven to be effective for
novel view synthesis (NVS). However, these methods are mostly limited to a single object; directly
applying such methods to compositional multi-object scenarios yields inferior
results, especially incorrect object placement and inconsistent shape and
appearance under novel views. How to enhance and systematically evaluate the
cross-view consistency of such models remains under-explored. To address this
issue, we propose MOVIS to enhance the structural awareness of the
view-conditioned diffusion model for multi-object NVS in terms of model inputs,
auxiliary tasks, and training strategy. First, we inject structure-aware
features, including depth and object mask, into the denoising U-Net to enhance
the model's comprehension of object instances and their spatial relationships.
Second, we introduce an auxiliary task requiring the model to simultaneously
predict novel view object masks, further improving the model's capability in
differentiating and placing objects. Finally, we conduct an in-depth analysis
of the diffusion sampling process and carefully devise a structure-guided
timestep sampling scheduler during training, which balances the learning of
global object placement and fine-grained detail recovery. To systematically
evaluate the plausibility of synthesized images, we propose to assess
cross-view consistency and novel view object placement alongside existing
image-level NVS metrics. Extensive experiments on challenging synthetic and
realistic datasets demonstrate that our method exhibits strong generalization
capabilities and produces consistent novel view synthesis, highlighting its
potential to guide future 3D-aware multi-object NVS tasks. Our project page is
available at https://jason-aplp.github.io/MOVIS/.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 05:23:45 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 12:34:37 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lu",
"Ruijie",
""
],
[
"Chen",
"Yixin",
""
],
[
"Ni",
"Junfeng",
""
],
[
"Jia",
"Baoxiong",
""
],
[
"Liu",
"Yu",
""
],
[
"Wan",
"Diwen",
""
],
[
"Zeng",
"Gang",
""
],
[
"Huang",
"Siyuan",
""
]
] | TITLE: MOVIS: Enhancing Multi-Object Novel View Synthesis for Indoor Scenes
ABSTRACT: Repurposing pre-trained diffusion models has been proven to be effective for
novel view synthesis (NVS). However, these methods are mostly limited to a single object; directly
applying such methods to compositional multi-object scenarios yields inferior
results, especially incorrect object placement and inconsistent shape and
appearance under novel views. How to enhance and systematically evaluate the
cross-view consistency of such models remains under-explored. To address this
issue, we propose MOVIS to enhance the structural awareness of the
view-conditioned diffusion model for multi-object NVS in terms of model inputs,
auxiliary tasks, and training strategy. First, we inject structure-aware
features, including depth and object mask, into the denoising U-Net to enhance
the model's comprehension of object instances and their spatial relationships.
Second, we introduce an auxiliary task requiring the model to simultaneously
predict novel view object masks, further improving the model's capability in
differentiating and placing objects. Finally, we conduct an in-depth analysis
of the diffusion sampling process and carefully devise a structure-guided
timestep sampling scheduler during training, which balances the learning of
global object placement and fine-grained detail recovery. To systematically
evaluate the plausibility of synthesized images, we propose to assess
cross-view consistency and novel view object placement alongside existing
image-level NVS metrics. Extensive experiments on challenging synthetic and
realistic datasets demonstrate that our method exhibits strong generalization
capabilities and produces consistent novel view synthesis, highlighting its
potential to guide future 3D-aware multi-object NVS tasks. Our project page is
available at https://jason-aplp.github.io/MOVIS/.
|
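The structure-guided timestep sampling mentioned in the MOVIS abstract can be pictured with a small hedged sketch: early in training the sampler favors large (noisy) timesteps that govern global object placement, and later it drifts toward small timesteps that govern fine-grained detail. The Beta-distribution schedule below is an invented stand-in, not the paper's scheduler.

```python
import numpy as np

def biased_timestep(step: int, total_steps: int, num_timesteps: int = 1000,
                    rng=np.random.default_rng()) -> int:
    """Sample a diffusion timestep whose distribution drifts from high noise to low noise."""
    progress = step / max(total_steps, 1)
    a = 1.0 + 4.0 * (1.0 - progress)   # early training: mass near t ~ num_timesteps
    b = 1.0 + 4.0 * progress           # late training: mass near t ~ 0
    return int(rng.beta(a, b) * (num_timesteps - 1))

# usage: draw a timestep at 10% and at 90% of training
early_t = biased_timestep(step=1_000, total_steps=10_000)
late_t = biased_timestep(step=9_000, total_steps=10_000)
```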
2412.12096 | Qianyi Wu | Cheng Zhang, Haofei Xu, Qianyi Wu, Camilo Cruz Gambardella, Dinh
Phung, Jianfei Cai | PanSplat: 4K Panorama Synthesis with Feed-Forward Gaussian Splatting | Camera Ready of CVPR2025. Project Page:
https://chengzhag.github.io/publication/pansplat/ Code:
https://github.com/chengzhag/PanSplat | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the advent of portable 360{\deg} cameras, panorama has gained
significant attention in applications like virtual reality (VR), virtual tours,
robotics, and autonomous driving. As a result, wide-baseline panorama view
synthesis has emerged as a vital task, where high resolution, fast inference,
and memory efficiency are essential. Nevertheless, existing methods are
typically constrained to lower resolutions (512 $\times$ 1024) due to demanding
memory and computational requirements. In this paper, we present PanSplat, a
generalizable, feed-forward approach that efficiently supports resolution up to
4K (2048 $\times$ 4096). Our approach features a tailored spherical 3D Gaussian
pyramid with a Fibonacci lattice arrangement, enhancing image quality while
reducing information redundancy. To accommodate the demands of high resolution,
we propose a pipeline that integrates a hierarchical spherical cost volume and
Gaussian heads with local operations, enabling two-step deferred
backpropagation for memory-efficient training on a single A100 GPU. Experiments
demonstrate that PanSplat achieves state-of-the-art results with superior
efficiency and image quality across both synthetic and real-world datasets.
Code is available at https://github.com/chengzhag/PanSplat.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 18:59:45 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 19:46:58 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Cheng",
""
],
[
"Xu",
"Haofei",
""
],
[
"Wu",
"Qianyi",
""
],
[
"Gambardella",
"Camilo Cruz",
""
],
[
"Phung",
"Dinh",
""
],
[
"Cai",
"Jianfei",
""
]
] | TITLE: PanSplat: 4K Panorama Synthesis with Feed-Forward Gaussian Splatting
ABSTRACT: With the advent of portable 360{\deg} cameras, panorama has gained
significant attention in applications like virtual reality (VR), virtual tours,
robotics, and autonomous driving. As a result, wide-baseline panorama view
synthesis has emerged as a vital task, where high resolution, fast inference,
and memory efficiency are essential. Nevertheless, existing methods are
typically constrained to lower resolutions (512 $\times$ 1024) due to demanding
memory and computational requirements. In this paper, we present PanSplat, a
generalizable, feed-forward approach that efficiently supports resolution up to
4K (2048 $\times$ 4096). Our approach features a tailored spherical 3D Gaussian
pyramid with a Fibonacci lattice arrangement, enhancing image quality while
reducing information redundancy. To accommodate the demands of high resolution,
we propose a pipeline that integrates a hierarchical spherical cost volume and
Gaussian heads with local operations, enabling two-step deferred
backpropagation for memory-efficient training on a single A100 GPU. Experiments
demonstrate that PanSplat achieves state-of-the-art results with superior
efficiency and image quality across both synthetic and real-world datasets.
Code is available at https://github.com/chengzhag/PanSplat.
|
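The PanSplat abstract mentions a Fibonacci lattice arrangement on the sphere. As a standalone illustration of that lattice (not PanSplat's implementation), the following sketch generates near-uniform unit directions with the golden-angle spiral; the point count in the usage line is illustrative.

```python
import numpy as np

def fibonacci_sphere(n: int) -> np.ndarray:
    """Return n unit vectors arranged on a spherical Fibonacci lattice."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))   # ~2.39996 radians
    i = np.arange(n)
    z = 1.0 - (2.0 * i + 1.0) / n                 # evenly spaced heights in (-1, 1)
    radius = np.sqrt(1.0 - z * z)                 # ring radius at each height
    theta = golden_angle * i                      # azimuth advances by the golden angle
    return np.stack([radius * np.cos(theta), radius * np.sin(theta), z], axis=1)

# usage: anchor directions for one level of a spherical Gaussian pyramid (count is illustrative)
points = fibonacci_sphere(4096)
```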
2412.12725 | Xiaomeng Chu | Xiaomeng Chu, Jiajun Deng, Guoliang You, Yifan Duan, Houqiang Li,
Yanyong Zhang | RaCFormer: Towards High-Quality 3D Object Detection via Query-based
Radar-Camera Fusion | Accepted to CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Radar-Camera fusion transformer (RaCFormer) to boost the accuracy
of 3D object detection based on the following insight. The Radar-Camera fusion in
outdoor 3D scene perception is capped by the image-to-BEV transformation--if
the depth of pixels is not accurately estimated, the naive combination of BEV
features actually integrates unaligned visual content. To avoid this problem,
we propose a query-based framework that enables adaptive sampling of
instance-relevant features from both the bird's-eye view (BEV) and the original
image view. Furthermore, we enhance system performance by two key designs:
optimizing query initialization and strengthening the representational capacity
of BEV. For the former, we introduce an adaptive circular distribution in polar
coordinates to refine the initialization of object queries, allowing for a
distance-based adjustment of query density. For the latter, we initially
incorporate a radar-guided depth head to refine the transformation from image
view to BEV. Subsequently, we focus on leveraging the Doppler effect of radar
and introduce an implicit dynamic catcher to capture the temporal elements
within the BEV. Extensive experiments on nuScenes and View-of-Delft (VoD)
datasets validate the merits of our design. Remarkably, our method achieves
superior results of 64.9% mAP and 70.2% NDS on nuScenes. RaCFormer also secures
the state-of-the-art performance on the VoD dataset. Code is available at
https://github.com/cxmomo/RaCFormer.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 09:47:48 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 16:47:54 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chu",
"Xiaomeng",
""
],
[
"Deng",
"Jiajun",
""
],
[
"You",
"Guoliang",
""
],
[
"Duan",
"Yifan",
""
],
[
"Li",
"Houqiang",
""
],
[
"Zhang",
"Yanyong",
""
]
] | TITLE: RaCFormer: Towards High-Quality 3D Object Detection via Query-based
Radar-Camera Fusion
ABSTRACT: We propose Radar-Camera fusion transformer (RaCFormer) to boost the accuracy
of 3D object detection based on the following insight. The Radar-Camera fusion in
outdoor 3D scene perception is capped by the image-to-BEV transformation--if
the depth of pixels is not accurately estimated, the naive combination of BEV
features actually integrates unaligned visual content. To avoid this problem,
we propose a query-based framework that enables adaptive sampling of
instance-relevant features from both the bird's-eye view (BEV) and the original
image view. Furthermore, we enhance system performance by two key designs:
optimizing query initialization and strengthening the representational capacity
of BEV. For the former, we introduce an adaptive circular distribution in polar
coordinates to refine the initialization of object queries, allowing for a
distance-based adjustment of query density. For the latter, we initially
incorporate a radar-guided depth head to refine the transformation from image
view to BEV. Subsequently, we focus on leveraging the Doppler effect of radar
and introduce an implicit dynamic catcher to capture the temporal elements
within the BEV. Extensive experiments on nuScenes and View-of-Delft (VoD)
datasets validate the merits of our design. Remarkably, our method achieves
superior results of 64.9% mAP and 70.2% NDS on nuScenes. RaCFormer also secures
the state-of-the-art performance on the VoD dataset. Code is available at
https://github.com/cxmomo/RaCFormer.
|
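The query initialization described in the RaCFormer abstract (an adaptive circular distribution in polar coordinates with distance-based density) can be illustrated with a hedged sketch; the ring radii, per-ring counts, and decay rule below are invented for illustration and are not RaCFormer's configuration.

```python
import numpy as np

def polar_query_init(max_range: float = 50.0, num_rings: int = 8, base_per_ring: int = 64):
    """Place BEV query reference points on concentric rings, denser at closer range."""
    points = []
    for i in range(num_rings):
        radius = max_range * (i + 1) / num_rings
        count = max(int(base_per_ring * (1.0 - 0.5 * i / num_rings)), 8)  # fewer queries far away
        angles = np.linspace(0.0, 2.0 * np.pi, count, endpoint=False)
        points.append(np.stack([radius * np.cos(angles), radius * np.sin(angles)], axis=1))
    return np.concatenate(points, axis=0)

# usage: (N, 2) query reference points in the BEV plane
queries = polar_query_init()
```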
2412.13071 | Ehsaneddin Asgari | Mohammad Mahdi Abootorabi and Ehsaneddin Asgari | CLASP: Contrastive Language-Speech Pretraining for Multilingual
Multimodal Information Retrieval | accepted at ECIR 2025, 13 pages, 4 figures | null | null | null | cs.CL cs.IR cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study introduces CLASP (Contrastive Language-Speech Pretraining), a
multilingual, multimodal representation tailored for audio-text information
retrieval. CLASP leverages the synergy between spoken content and textual data.
During training, we utilize our newly introduced speech-text dataset, which
encompasses 15 diverse categories ranging from fiction to religion. CLASP's
audio component integrates audio spectrograms with a pre-trained
self-supervised speech model, while its language encoding counterpart employs a
sentence encoder pre-trained on over 100 languages. This unified lightweight
model bridges the gap between various modalities and languages, enhancing its
effectiveness in handling and retrieving multilingual and multimodal data. Our
evaluations across multiple languages demonstrate that CLASP establishes new
benchmarks in HITS@1, MRR, and meanR metrics, outperforming traditional
ASR-based retrieval methods that rely on transcribing speech into text for
subsequent text retrieval, especially in specific scenarios.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 16:38:10 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 09:52:05 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Abootorabi",
"Mohammad Mahdi",
""
],
[
"Asgari",
"Ehsaneddin",
""
]
] | TITLE: CLASP: Contrastive Language-Speech Pretraining for Multilingual
Multimodal Information Retrieval
ABSTRACT: This study introduces CLASP (Contrastive Language-Speech Pretraining), a
multilingual, multimodal representation tailored for audio-text information
retrieval. CLASP leverages the synergy between spoken content and textual data.
During training, we utilize our newly introduced speech-text dataset, which
encompasses 15 diverse categories ranging from fiction to religion. CLASP's
audio component integrates audio spectrograms with a pre-trained
self-supervised speech model, while its language encoding counterpart employs a
sentence encoder pre-trained on over 100 languages. This unified lightweight
model bridges the gap between various modalities and languages, enhancing its
effectiveness in handling and retrieving multilingual and multimodal data. Our
evaluations across multiple languages demonstrate that CLASP establishes new
benchmarks in HITS@1, MRR, and meanR metrics, outperforming traditional
ASR-based retrieval methods that rely on transcribing speech into text for
subsequent text retrieval, especially in specific scenarios.
|
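As background for the contrastive language-speech objective named in the CLASP abstract, here is a hedged sketch of a symmetric InfoNCE loss between batch-aligned speech and text embeddings; the temperature value and embedding shapes are assumptions, not CLASP's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_speech_text_loss(speech_emb: torch.Tensor,
                                 text_emb: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE: matching speech/text pairs share the same batch index."""
    speech_emb = F.normalize(speech_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = speech_emb @ text_emb.t() / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(speech_emb.shape[0])        # diagonal entries are the positives
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# usage with dummy 512-dimensional embeddings for a batch of 32 pairs
loss = contrastive_speech_text_loss(torch.randn(32, 512), torch.randn(32, 512))
```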
2412.13193 | Haoyi Jiang | Haoyi Jiang, Liu Liu, Tianheng Cheng, Xinjie Wang, Tianwei Lin,
Zhizhong Su, Wenyu Liu, Xinggang Wang | GaussTR: Foundation Model-Aligned Gaussian Transformer for
Self-Supervised 3D Spatial Understanding | CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | 3D Semantic Occupancy Prediction is fundamental for spatial understanding,
yet existing approaches face challenges in scalability and generalization due
to their reliance on extensive labeled data and computationally intensive
voxel-wise representations. In this paper, we introduce GaussTR, a novel
Gaussian-based Transformer framework that unifies sparse 3D modeling with
foundation model alignment through Gaussian representations to advance 3D
spatial understanding. GaussTR predicts sparse sets of Gaussians in a
feed-forward manner to represent 3D scenes. By splatting the Gaussians into 2D
views and aligning the rendered features with foundation models, GaussTR
facilitates self-supervised 3D representation learning and enables
open-vocabulary semantic occupancy prediction without requiring explicit
annotations. Empirical experiments on the Occ3D-nuScenes dataset demonstrate
GaussTR's state-of-the-art zero-shot performance of 12.27 mIoU, along with a
40% reduction in training time. These results highlight the efficacy of GaussTR
for scalable and holistic 3D spatial understanding, with promising implications
in autonomous driving and embodied agents. The code is available at
https://github.com/hustvl/GaussTR.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 18:59:46 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 12:45:56 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Jiang",
"Haoyi",
""
],
[
"Liu",
"Liu",
""
],
[
"Cheng",
"Tianheng",
""
],
[
"Wang",
"Xinjie",
""
],
[
"Lin",
"Tianwei",
""
],
[
"Su",
"Zhizhong",
""
],
[
"Liu",
"Wenyu",
""
],
[
"Wang",
"Xinggang",
""
]
] | TITLE: GaussTR: Foundation Model-Aligned Gaussian Transformer for
Self-Supervised 3D Spatial Understanding
ABSTRACT: 3D Semantic Occupancy Prediction is fundamental for spatial understanding,
yet existing approaches face challenges in scalability and generalization due
to their reliance on extensive labeled data and computationally intensive
voxel-wise representations. In this paper, we introduce GaussTR, a novel
Gaussian-based Transformer framework that unifies sparse 3D modeling with
foundation model alignment through Gaussian representations to advance 3D
spatial understanding. GaussTR predicts sparse sets of Gaussians in a
feed-forward manner to represent 3D scenes. By splatting the Gaussians into 2D
views and aligning the rendered features with foundation models, GaussTR
facilitates self-supervised 3D representation learning and enables
open-vocabulary semantic occupancy prediction without requiring explicit
annotations. Empirical experiments on the Occ3D-nuScenes dataset demonstrate
GaussTR's state-of-the-art zero-shot performance of 12.27 mIoU, along with a
40% reduction in training time. These results highlight the efficacy of GaussTR
for scalable and holistic 3D spatial understanding, with promising implications
in autonomous driving and embodied agents. The code is available at
https://github.com/hustvl/GaussTR.
|
2412.13401 | Joshua Cho | Joshua Cho and Sara Aghajanzadeh and Zhen Zhu and D. A. Forsyth | Zero-Shot Low Light Image Enhancement with Diffusion Prior | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we present a simple yet highly effective "free lunch" solution
for low-light image enhancement (LLIE), which aims to restore low-light images
as if acquired in well-illuminated environments. Our method necessitates no
optimization, training, fine-tuning, text conditioning, or hyperparameter
adjustments, yet it consistently reconstructs low-light images with superior
fidelity. Specifically, we leverage a pre-trained text-to-image diffusion
prior, learned from training on a large collection of natural images, and the
features present in the model itself to guide the inference, in contrast to
existing methods that depend on customized constraints. Comprehensive
quantitative evaluations demonstrate that our approach outperforms SOTA methods
on established datasets, while qualitative analyses indicate enhanced color
accuracy and the rectification of subtle chromatic deviations. Furthermore,
additional experiments reveal that our method, without any modifications,
achieves SOTA-comparable performance in the auto white balance (AWB) task.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2024 00:31:18 GMT"
},
{
"version": "v2",
"created": "Sun, 22 Dec 2024 21:29:58 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 14:41:13 GMT"
},
{
"version": "v4",
"created": "Mon, 24 Mar 2025 00:01:58 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Cho",
"Joshua",
""
],
[
"Aghajanzadeh",
"Sara",
""
],
[
"Zhu",
"Zhen",
""
],
[
"Forsyth",
"D. A.",
""
]
] | TITLE: Zero-Shot Low Light Image Enhancement with Diffusion Prior
ABSTRACT: In this paper, we present a simple yet highly effective "free lunch" solution
for low-light image enhancement (LLIE), which aims to restore low-light images
as if acquired in well-illuminated environments. Our method necessitates no
optimization, training, fine-tuning, text conditioning, or hyperparameter
adjustments, yet it consistently reconstructs low-light images with superior
fidelity. Specifically, we leverage a pre-trained text-to-image diffusion
prior, learned from training on a large collection of natural images, and the
features present in the model itself to guide the inference, in contrast to
existing methods that depend on customized constraints. Comprehensive
quantitative evaluations demonstrate that our approach outperforms SOTA methods
on established datasets, while qualitative analyses indicate enhanced color
accuracy and the rectification of subtle chromatic deviations. Furthermore,
additional experiments reveal that our method, without any modifications,
achieves SOTA-comparable performance in the auto white balance (AWB) task.
|
2412.13684 | Chuang Yang | Chuang Yang, Bingxuan Zhao, Qing Zhou, and Qi Wang | MMO-IG: Multi-Class and Multi-Scale Object Image Generation for Remote
Sensing | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid advancement of deep generative models (DGMs) has significantly
advanced research in computer vision, providing a cost-effective alternative to
acquiring vast quantities of expensive imagery. However, existing methods
predominantly focus on synthesizing remote sensing (RS) images aligned with
real images in a global layout view, which limits their applicability in RS
image object detection (RSIOD) research. To address these challenges, we
propose a multi-class and multi-scale object image generator based on DGMs,
termed MMO-IG, designed to generate RS images with supervised object labels
from global and local aspects simultaneously. Specifically, from the local
view, MMO-IG encodes various RS instances using an iso-spacing instance map
(ISIM). During the generation process, it decodes each instance region with
iso-spacing value in ISIM (corresponding to both background and foreground
instances) to produce RS images through the denoising process of diffusion
models. Considering the complex interdependencies among MMOs, we construct a
spatial-cross dependency knowledge graph (SCDKG). This ensures a realistic and
reliable multidirectional distribution among MMOs for region embedding, thereby
reducing the discrepancy between source and target domains. Besides, we propose
a structured object distribution instruction (SODI) to guide the generation of
synthesized RS image content from a global aspect with SCDKG-based ISIM
together. Extensive experimental results demonstrate that our MMO-IG exhibits
superior generation capabilities for RS images with dense MMO-supervised
labels, and RS detectors pre-trained with MMO-IG show excellent performance on
real-world datasets.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2024 10:19:12 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 13:22:39 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 06:11:53 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yang",
"Chuang",
""
],
[
"Zhao",
"Bingxuan",
""
],
[
"Zhou",
"Qing",
""
],
[
"Wang",
"Qi",
""
]
] | TITLE: MMO-IG: Multi-Class and Multi-Scale Object Image Generation for Remote
Sensing
ABSTRACT: The rapid advancement of deep generative models (DGMs) has significantly
advanced research in computer vision, providing a cost-effective alternative to
acquiring vast quantities of expensive imagery. However, existing methods
predominantly focus on synthesizing remote sensing (RS) images aligned with
real images in a global layout view, which limits their applicability in RS
image object detection (RSIOD) research. To address these challenges, we
propose a multi-class and multi-scale object image generator based on DGMs,
termed MMO-IG, designed to generate RS images with supervised object labels
from global and local aspects simultaneously. Specifically, from the local
view, MMO-IG encodes various RS instances using an iso-spacing instance map
(ISIM). During the generation process, it decodes each instance region with
iso-spacing value in ISIM (corresponding to both background and foreground
instances) to produce RS images through the denoising process of diffusion
models. Considering the complex interdependencies among MMOs, we construct a
spatial-cross dependency knowledge graph (SCDKG). This ensures a realistic and
reliable multidirectional distribution among MMOs for region embedding, thereby
reducing the discrepancy between source and target domains. Besides, we propose
a structured object distribution instruction (SODI) to guide the generation of
synthesized RS image content from a global aspect with SCDKG-based ISIM
together. Extensive experimental results demonstrate that our MMO-IG exhibits
superior generation capabilities for RS images with dense MMO-supervised
labels, and RS detectors pre-trained with MMO-IG show excellent performance on
real-world datasets.
|
2412.16153 | Shijie Wang | Shijie Wang, Samaneh Azadi, Rohit Girdhar, Saketh Rambhatla, Chen Sun,
Xi Yin | MotiF: Making Text Count in Image Animation with Motion Focal Loss | Accepted by CVPR 2025. Project page:
https://wang-sj16.github.io/motif/ | The IEEE/CVF Conference on Computer Vision and Pattern Recognition
2025 | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-Image-to-Video (TI2V) generation aims to generate a video from an image
following a text description, which is also referred to as text-guided image
animation. Most existing methods struggle to generate videos that align well
with the text prompts, particularly when motion is specified. To overcome this
limitation, we introduce MotiF, a simple yet effective approach that directs
the model's learning to the regions with more motion, thereby improving the
text alignment and motion generation. We use optical flow to generate a motion
heatmap and weight the loss according to the intensity of the motion. This
modified objective leads to noticeable improvements and complements existing
methods that utilize motion priors as model inputs. Additionally, due to the
lack of a diverse benchmark for evaluating TI2V generation, we propose TI2V
Bench, a dataset consisting of 320 image-text pairs for robust evaluation. We
present a human evaluation protocol that asks the annotators to select an
overall preference between two videos followed by their justifications. Through
a comprehensive evaluation on TI2V Bench, MotiF outperforms nine open-sourced
models, achieving an average preference of 72%. The TI2V Bench and additional
results are released at https://wang-sj16.github.io/motif/.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2024 18:57:06 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 00:30:55 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wang",
"Shijie",
""
],
[
"Azadi",
"Samaneh",
""
],
[
"Girdhar",
"Rohit",
""
],
[
"Rambhatla",
"Saketh",
""
],
[
"Sun",
"Chen",
""
],
[
"Yin",
"Xi",
""
]
] | TITLE: MotiF: Making Text Count in Image Animation with Motion Focal Loss
ABSTRACT: Text-Image-to-Video (TI2V) generation aims to generate a video from an image
following a text description, which is also referred to as text-guided image
animation. Most existing methods struggle to generate videos that align well
with the text prompts, particularly when motion is specified. To overcome this
limitation, we introduce MotiF, a simple yet effective approach that directs
the model's learning to the regions with more motion, thereby improving the
text alignment and motion generation. We use optical flow to generate a motion
heatmap and weight the loss according to the intensity of the motion. This
modified objective leads to noticeable improvements and complements existing
methods that utilize motion priors as model inputs. Additionally, due to the
lack of a diverse benchmark for evaluating TI2V generation, we propose TI2V
Bench, a dataset consisting of 320 image-text pairs for robust evaluation. We
present a human evaluation protocol that asks the annotators to select an
overall preference between two videos followed by their justifications. Through
a comprehensive evaluation on TI2V Bench, MotiF outperforms nine open-sourced
models, achieving an average preference of 72%. The TI2V Bench and additional
results are released at https://wang-sj16.github.io/motif/.
|
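To illustrate the motion focal loss idea described in the MotiF abstract, here is a hedged sketch that turns optical-flow magnitude into a heatmap and uses it to weight a per-pixel reconstruction loss; the normalization and the uniform base weight are assumptions rather than the paper's exact objective.

```python
import torch

def motion_weighted_loss(pred: torch.Tensor,
                         target: torch.Tensor,
                         flow: torch.Tensor,
                         base_weight: float = 1.0) -> torch.Tensor:
    """Weight per-pixel error by flow magnitude; video (T, C, H, W), flow (T, 2, H, W)."""
    motion = flow.norm(dim=1, keepdim=True)                          # (T, 1, H, W) magnitude
    heatmap = motion / (motion.amax(dim=(2, 3), keepdim=True) + 1e-6)
    weight = base_weight + heatmap                                   # uniform term + motion focus
    return (weight * (pred - target) ** 2).mean()

# usage with dummy tensors
loss = motion_weighted_loss(torch.rand(16, 3, 64, 64),
                            torch.rand(16, 3, 64, 64),
                            torch.rand(16, 2, 64, 64))
```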
2412.17622 | Parham Rezaei | Parham Rezaei, Farzan Farnia, Cheuk Ting Li | Be More Diverse than the Most Diverse: Optimal Mixtures of Generative
Models via Mixture-UCB Bandit Algorithms | null | Proceedings of the 13th International Conference on Learning
Representations (ICLR), 2025 | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | The availability of multiple training algorithms and architectures for
generative models requires a selection mechanism to form a single model over a
group of well-trained generation models. The selection task is commonly
addressed by identifying the model that maximizes an evaluation score based on
the diversity and quality of the generated data. However, such a best-model
identification approach overlooks the possibility that a mixture of available
models can outperform each individual model. In this work, we numerically show
that a mixture of generative models on benchmark image datasets can indeed
achieve a better evaluation score (based on FID and KID scores), compared to
the individual models. This observation motivates the development of efficient
algorithms for selecting the optimal mixture of the models. To address this, we
formulate a quadratic optimization problem to find an optimal mixture model
achieving the maximum of kernel-based evaluation scores including kernel
inception distance (KID) and R\'enyi kernel entropy (RKE). To identify the
optimal mixture of the models using the fewest possible sample queries, we view
the selection task as a multi-armed bandit (MAB) problem and propose the
Mixture Upper Confidence Bound (Mixture-UCB) algorithm that provably converges
to the optimal mixture of the involved models. More broadly, the proposed
Mixture-UCB can be extended to optimize every convex quadratic function of the
mixture weights in a general MAB setting. We prove a regret bound for the
Mixture-UCB algorithm and perform several numerical experiments to show the
success of Mixture-UCB in finding the optimal mixture of text and image
generative models. The project code is available at
https://github.com/Rezaei-Parham/Mixture-UCB.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2024 14:48:17 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 10:45:56 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Rezaei",
"Parham",
""
],
[
"Farnia",
"Farzan",
""
],
[
"Li",
"Cheuk Ting",
""
]
] | TITLE: Be More Diverse than the Most Diverse: Optimal Mixtures of Generative
Models via Mixture-UCB Bandit Algorithms
ABSTRACT: The availability of multiple training algorithms and architectures for
generative models requires a selection mechanism to form a single model over a
group of well-trained generation models. The selection task is commonly
addressed by identifying the model that maximizes an evaluation score based on
the diversity and quality of the generated data. However, such a best-model
identification approach overlooks the possibility that a mixture of available
models can outperform each individual model. In this work, we numerically show
that a mixture of generative models on benchmark image datasets can indeed
achieve a better evaluation score (based on FID and KID scores), compared to
the individual models. This observation motivates the development of efficient
algorithms for selecting the optimal mixture of the models. To address this, we
formulate a quadratic optimization problem to find an optimal mixture model
achieving the maximum of kernel-based evaluation scores including kernel
inception distance (KID) and R\'enyi kernel entropy (RKE). To identify the
optimal mixture of the models using the fewest possible sample queries, we view
the selection task as a multi-armed bandit (MAB) problem and propose the
Mixture Upper Confidence Bound (Mixture-UCB) algorithm that provably converges
to the optimal mixture of the involved models. More broadly, the proposed
Mixture-UCB can be extended to optimize every convex quadratic function of the
mixture weights in a general MAB setting. We prove a regret bound for the
Mixture-UCB algorithm and perform several numerical experiments to show the
success of Mixture-UCB in finding the optimal mixture of text and image
generative models. The project code is available at
https://github.com/Rezaei-Parham/Mixture-UCB.
|
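The core optimization in the Mixture-UCB abstract is quadratic in the mixture weights. As a generic illustration (not the authors' bandit algorithm), the sketch below solves the simplex-constrained quadratic program that such kernel-based scores give rise to, written here as a minimization; the matrix Q is a stand-in for a kernel score matrix estimated from generated samples, and the sign convention depends on the particular score.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_mixture_weights(Q: np.ndarray) -> np.ndarray:
    """Minimize w^T Q w over the probability simplex (Q: estimated kernel score matrix)."""
    k = Q.shape[0]
    w0 = np.full(k, 1.0 / k)
    result = minimize(
        fun=lambda w: w @ Q @ w,
        x0=w0,
        jac=lambda w: 2.0 * Q @ w,
        bounds=[(0.0, 1.0)] * k,
        constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
        method="SLSQP",
    )
    return result.x

# usage with a toy symmetric positive-definite matrix for 3 generators
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
weights = optimal_mixture_weights(A @ A.T + np.eye(3))
```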
2412.17856 | Xianlin Zeng | Xianlin Zeng, Yufeng Wang, Yuqi Sun, Guodong Guo, Wenrui Ding,
Baochang Zhang | Graph Structure Refinement with Energy-based Contrastive Learning | Accepted to AAAI 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Graph Neural Networks (GNNs) have recently gained widespread attention as a
successful tool for analyzing graph-structured data. However, imperfect graph
structure with noisy links lacks enough robustness and may damage graph
representations, therefore limiting the GNNs' performance in practical tasks.
Moreover, existing generative architectures fail to fit discriminative
graph-related tasks. To tackle these issues, we introduce an unsupervised
method based on a joint of generative training and discriminative training to
learn graph structure and representation, aiming to improve the discriminative
performance of generative models. We propose an Energy-based Contrastive
Learning (ECL) guided Graph Structure Refinement (GSR) framework, denoted as
ECL-GSR. To our knowledge, this is the first work to combine energy-based
models with contrastive learning for GSR. Specifically, we leverage ECL to
approximate the joint distribution of sample pairs, which increases the
similarity between representations of positive pairs while reducing the
similarity between negative ones. Refined structure is produced by augmenting
and removing edges according to the similarity metrics among node
representations. Extensive experiments demonstrate that ECL-GSR outperforms the
state-of-the-art on eight benchmark datasets in node classification. ECL-GSR
achieves faster training with fewer samples and less memory than the leading
baseline, highlighting its simplicity and efficiency in downstream tasks.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2024 04:05:09 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Dec 2024 02:28:52 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 13:48:21 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zeng",
"Xianlin",
""
],
[
"Wang",
"Yufeng",
""
],
[
"Sun",
"Yuqi",
""
],
[
"Guo",
"Guodong",
""
],
[
"Ding",
"Wenrui",
""
],
[
"Zhang",
"Baochang",
""
]
] | TITLE: Graph Structure Refinement with Energy-based Contrastive Learning
ABSTRACT: Graph Neural Networks (GNNs) have recently gained widespread attention as a
successful tool for analyzing graph-structured data. However, imperfect graph
structure with noisy links lacks enough robustness and may damage graph
representations, therefore limiting the GNNs' performance in practical tasks.
Moreover, existing generative architectures fail to fit discriminative
graph-related tasks. To tackle these issues, we introduce an unsupervised
method based on a joint of generative training and discriminative training to
learn graph structure and representation, aiming to improve the discriminative
performance of generative models. We propose an Energy-based Contrastive
Learning (ECL) guided Graph Structure Refinement (GSR) framework, denoted as
ECL-GSR. To our knowledge, this is the first work to combine energy-based
models with contrastive learning for GSR. Specifically, we leverage ECL to
approximate the joint distribution of sample pairs, which increases the
similarity between representations of positive pairs while reducing the
similarity between negative ones. Refined structure is produced by augmenting
and removing edges according to the similarity metrics among node
representations. Extensive experiments demonstrate that ECL-GSR outperforms the
state-of-the-art on eight benchmark datasets in node classification. ECL-GSR
achieves faster training with fewer samples and less memory than the leading
baseline, highlighting its simplicity and efficiency in downstream tasks.
|
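The refinement step in the ECL-GSR abstract (adding and removing edges according to similarity among node representations) can be illustrated with a small hedged sketch; the cosine-similarity metric and the two thresholds are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def refine_edges(node_emb: np.ndarray,
                 adjacency: np.ndarray,
                 add_threshold: float = 0.9,
                 remove_threshold: float = 0.3) -> np.ndarray:
    """Add edges between highly similar nodes and drop edges between dissimilar ones."""
    norms = np.linalg.norm(node_emb, axis=1, keepdims=True) + 1e-12
    sim = (node_emb / norms) @ (node_emb / norms).T   # cosine similarity matrix
    refined = adjacency.copy().astype(float)
    refined[sim >= add_threshold] = 1.0               # augment strong pairs
    refined[sim <= remove_threshold] = 0.0            # prune weak pairs
    np.fill_diagonal(refined, 0.0)                    # no self-loops
    return refined

# usage with random embeddings and a random graph over 10 nodes
emb = np.random.randn(10, 16)
adj = (np.random.rand(10, 10) > 0.7).astype(float)
new_adj = refine_edges(emb, adj)
```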
2412.18219 | Kazuhiko Kawamoto | Takuma Fukuda, Hiroshi Kera, Kazuhiko Kawamoto | Adapter Merging with Centroid Prototype Mapping for Scalable
Class-Incremental Learning | Accepted to CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Adapter Merging with Centroid Prototype Mapping (ACMap), an
exemplar-free framework for class-incremental learning (CIL) that addresses
both catastrophic forgetting and scalability. While existing methods involve a
trade-off between inference time and accuracy, ACMap consolidates task-specific
adapters into a single adapter, thus achieving constant inference time across
tasks without sacrificing accuracy. The framework employs adapter merging to
build a shared subspace that aligns task representations and mitigates
forgetting, while centroid prototype mapping maintains high accuracy by
consistently adapting representations within the shared subspace. To further
improve scalability, an early stopping strategy limits adapter merging as tasks
increase. Extensive experiments on five benchmark datasets demonstrate that
ACMap matches state-of-the-art accuracy while maintaining inference time
comparable to the fastest existing methods. The code is available at
https://github.com/tf63/ACMap.
| [
{
"version": "v1",
"created": "Tue, 24 Dec 2024 06:57:16 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 08:20:08 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Fukuda",
"Takuma",
""
],
[
"Kera",
"Hiroshi",
""
],
[
"Kawamoto",
"Kazuhiko",
""
]
] | TITLE: Adapter Merging with Centroid Prototype Mapping for Scalable
Class-Incremental Learning
ABSTRACT: We propose Adapter Merging with Centroid Prototype Mapping (ACMap), an
exemplar-free framework for class-incremental learning (CIL) that addresses
both catastrophic forgetting and scalability. While existing methods involve a
trade-off between inference time and accuracy, ACMap consolidates task-specific
adapters into a single adapter, thus achieving constant inference time across
tasks without sacrificing accuracy. The framework employs adapter merging to
build a shared subspace that aligns task representations and mitigates
forgetting, while centroid prototype mapping maintains high accuracy by
consistently adapting representations within the shared subspace. To further
improve scalability, an early stopping strategy limits adapter merging as tasks
increase. Extensive experiments on five benchmark datasets demonstrate that
ACMap matches state-of-the-art accuracy while maintaining inference time
comparable to the fastest existing methods. The code is available at
https://github.com/tf63/ACMap.
|
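As a rough sketch of the two ingredients named in the ACMap abstract (merging task-specific adapters into a single adapter and classifying with centroid prototypes), the code below averages adapter weight dictionaries and assigns labels by nearest centroid; uniform averaging and the cosine metric are assumptions, not ACMap's exact procedure.

```python
import torch

def merge_adapters(adapter_state_dicts):
    """Average several adapters' parameter tensors into one consolidated adapter."""
    merged = {}
    for key in adapter_state_dicts[0]:
        merged[key] = torch.stack([sd[key] for sd in adapter_state_dicts]).mean(dim=0)
    return merged

def nearest_centroid_predict(features: torch.Tensor, centroids: torch.Tensor) -> torch.Tensor:
    """Assign each feature to the class whose centroid prototype is closest (cosine)."""
    f = torch.nn.functional.normalize(features, dim=-1)
    c = torch.nn.functional.normalize(centroids, dim=-1)
    return (f @ c.t()).argmax(dim=-1)

# usage with two toy adapters and three class prototypes
a1 = {"lora.weight": torch.randn(8, 8)}
a2 = {"lora.weight": torch.randn(8, 8)}
merged = merge_adapters([a1, a2])
labels = nearest_centroid_predict(torch.randn(5, 64), torch.randn(3, 64))
```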
2412.18883 | Megh Shukla | Reyhaneh Hosseininejad, Megh Shukla, Saeed Saadatnejad, Mathieu
Salzmann, Alexandre Alahi | MotionMap: Representing Multimodality in Human Pose Forecasting | CVPR 2025. We propose a new representation for learning multimodality
in human pose forecasting which does not depend on generative models | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Human pose forecasting is inherently multimodal since multiple futures exist
for an observed pose sequence. However, evaluating multimodality is challenging
since the task is ill-posed. Therefore, we first propose an alternative
paradigm to make the task well-posed. Next, while state-of-the-art methods
predict multimodality, this requires oversampling a large volume of
predictions. This raises key questions: (1) Can we capture multimodality by
efficiently sampling a smaller number of predictions? (2) Subsequently, which
of the predicted futures is more likely for an observed pose sequence? We
address these questions with MotionMap, a simple yet effective heatmap based
representation for multimodality. We extend heatmaps to represent a spatial
distribution over the space of all possible motions, where different local
maxima correspond to different forecasts for a given observation. MotionMap can
capture a variable number of modes per observation and provide confidence
measures for different modes. Further, MotionMap allows us to introduce the
notion of uncertainty and controllability over the forecasted pose sequence.
Finally, MotionMap captures rare modes that are non-trivial to evaluate yet
critical for safety. We support our claims through multiple qualitative and
quantitative experiments using popular 3D human pose datasets: Human3.6M and
AMASS, highlighting the strengths and limitations of our proposed method.
Project Page: https://vita-epfl.github.io/MotionMap
| [
{
"version": "v1",
"created": "Wed, 25 Dec 2024 11:47:26 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 16:42:33 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hosseininejad",
"Reyhaneh",
""
],
[
"Shukla",
"Megh",
""
],
[
"Saadatnejad",
"Saeed",
""
],
[
"Salzmann",
"Mathieu",
""
],
[
"Alahi",
"Alexandre",
""
]
] | TITLE: MotionMap: Representing Multimodality in Human Pose Forecasting
ABSTRACT: Human pose forecasting is inherently multimodal since multiple futures exist
for an observed pose sequence. However, evaluating multimodality is challenging
since the task is ill-posed. Therefore, we first propose an alternative
paradigm to make the task well-posed. Next, while state-of-the-art methods
predict multimodality, this requires oversampling a large volume of
predictions. This raises key questions: (1) Can we capture multimodality by
efficiently sampling a smaller number of predictions? (2) Subsequently, which
of the predicted futures is more likely for an observed pose sequence? We
address these questions with MotionMap, a simple yet effective heatmap-based
representation for multimodality. We extend heatmaps to represent a spatial
distribution over the space of all possible motions, where different local
maxima correspond to different forecasts for a given observation. MotionMap can
capture a variable number of modes per observation and provide confidence
measures for different modes. Further, MotionMap allows us to introduce the
notion of uncertainty and controllability over the forecasted pose sequence.
Finally, MotionMap captures rare modes that are non-trivial to evaluate yet
critical for safety. We support our claims through multiple qualitative and
quantitative experiments using popular 3D human pose datasets: Human3.6M and
AMASS, highlighting the strengths and limitations of our proposed method.
Project Page: https://vita-epfl.github.io/MotionMap
|
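As a rough illustration of the heatmap-based multimodality idea in the MotionMap abstract above, the following hedged sketch extracts local maxima of a 2D heatmap as forecast modes with per-mode confidences. The 2D grid, threshold, and window size are invented for the example; the actual method operates over a learned motion space and is not reproduced here.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def extract_modes(heatmap, threshold=0.2, window=5):
    """Return (row, col, confidence) for each local maximum above threshold.

    Each maximum is read as one forecast mode; its heatmap value serves as a
    confidence measure, so the number of modes can vary per observation.
    """
    local_max = maximum_filter(heatmap, size=window) == heatmap
    peaks = np.argwhere(local_max & (heatmap > threshold))
    return sorted(((r, c, float(heatmap[r, c])) for r, c in peaks),
                  key=lambda m: -m[2])

# Toy heatmap with two bumps, i.e. two plausible futures for one observation.
yy, xx = np.mgrid[0:64, 0:64]
hm = np.exp(-((yy - 20) ** 2 + (xx - 20) ** 2) / 50.0) \
   + 0.6 * np.exp(-((yy - 45) ** 2 + (xx - 50) ** 2) / 50.0)

for mode in extract_modes(hm):
    print(mode)
```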
2412.19165 | Qiude Zhang | Qiude Zhang, Chunyu Lin, Zhijie Shen, Nie Lang and Yao Zhao | Revisiting Monocular 3D Object Detection with Depth Thickness Field | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monocular 3D object detection is challenging due to the lack of accurate
depth. However, existing depth-assisted solutions still exhibit inferior
performance, a shortcoming widely attributed to the unsatisfactory accuracy of
monocular depth estimation models. In this paper, we revisit monocular 3D
object detection from the depth perspective and identify an additional issue:
the limited 3D structure-aware capability of existing depth representations
(e.g., depth one-hot encoding or depth distribution). To
address this issue, we introduce a novel Depth Thickness Field approach to
embed clear 3D structures of the scenes. Specifically, we present MonoDTF, a
scene-to-instance depth-adapted network for monocular 3D object detection. The
framework mainly comprises a Scene-Level Depth Retargeting (SDR) module and an
Instance-Level Spatial Refinement (ISR) module. The former retargets
traditional depth representations to the proposed depth thickness field,
incorporating the scene-level perception of 3D structures. The latter refines
the voxel space with the guidance of instances, enhancing the 3D instance-aware
capability of the depth thickness field and thus improving detection accuracy.
Extensive experiments on the KITTI and Waymo datasets demonstrate our
superiority to existing state-of-the-art (SoTA) methods and the universality
when equipped with different depth estimation models. The code will be
available.
| [
{
"version": "v1",
"created": "Thu, 26 Dec 2024 10:51:50 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 14:01:28 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Qiude",
""
],
[
"Lin",
"Chunyu",
""
],
[
"Shen",
"Zhijie",
""
],
[
"Lang",
"Nie",
""
],
[
"Zhao",
"Yao",
""
]
] | TITLE: Revisiting Monocular 3D Object Detection with Depth Thickness Field
ABSTRACT: Monocular 3D object detection is challenging due to the lack of accurate
depth. However, existing depth-assisted solutions still exhibit inferior
performance, a shortcoming widely attributed to the unsatisfactory accuracy of
monocular depth estimation models. In this paper, we revisit monocular 3D
object detection from the depth perspective and identify an additional issue:
the limited 3D structure-aware capability of existing depth representations
(e.g., depth one-hot encoding or depth distribution). To
address this issue, we introduce a novel Depth Thickness Field approach to
embed clear 3D structures of the scenes. Specifically, we present MonoDTF, a
scene-to-instance depth-adapted network for monocular 3D object detection. The
framework mainly comprises a Scene-Level Depth Retargeting (SDR) module and an
Instance-Level Spatial Refinement (ISR) module. The former retargets
traditional depth representations to the proposed depth thickness field,
incorporating the scene-level perception of 3D structures. The latter refines
the voxel space with the guidance of instances, enhancing the 3D instance-aware
capability of the depth thickness field and thus improving detection accuracy.
Extensive experiments on the KITTI and Waymo datasets demonstrate our
superiority to existing state-of-the-art (SoTA) methods and the universality
when equipped with different depth estimation models. The code will be
available.
|
2412.20066 | Boyun Li | Boyun Li, Haiyu Zhao, Wenxin Wang, Peng Hu, Yuanbiao Gou and Xi Peng | MaIR: A Locality- and Continuity-Preserving Mamba for Image Restoration | Accepted by CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in Mamba have shown promising results in image
restoration. These methods typically flatten 2D images into multiple distinct
1D sequences along rows and columns, process each sequence independently using
selective scan operation, and recombine them to form the outputs. However, such
a paradigm overlooks two vital aspects: i) the local relationships and spatial
continuity inherent in natural images, and ii) the discrepancies among
sequences unfolded in entirely different ways. To overcome these drawbacks,
we explore two problems in Mamba-based restoration methods: i) how to design a
scanning strategy preserving both locality and continuity while facilitating
restoration, and ii) how to aggregate the distinct sequences unfolded in
totally different ways. To address these problems, we propose a novel
Mamba-based Image Restoration model (MaIR), which consists of a Nested S-shaped
Scanning strategy (NSS) and a Sequence Shuffle Attention block (SSA).
Specifically, NSS preserves locality and continuity of the input images through
the stripe-based scanning region and the S-shaped scanning path, respectively.
SSA aggregates sequences by calculating attention weights within the
corresponding channels of different sequences. Thanks to NSS and SSA, MaIR
surpasses 40 baselines across 14 challenging datasets, achieving
state-of-the-art performance on the tasks of image super-resolution, denoising,
deblurring and dehazing. The code is available at
https://github.com/XLearning-SCU/2025-CVPR-MaIR.
| [
{
"version": "v1",
"created": "Sat, 28 Dec 2024 07:40:39 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 09:30:06 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Boyun",
""
],
[
"Zhao",
"Haiyu",
""
],
[
"Wang",
"Wenxin",
""
],
[
"Hu",
"Peng",
""
],
[
"Gou",
"Yuanbiao",
""
],
[
"Peng",
"Xi",
""
]
] | TITLE: MaIR: A Locality- and Continuity-Preserving Mamba for Image Restoration
ABSTRACT: Recent advancements in Mamba have shown promising results in image
restoration. These methods typically flatten 2D images into multiple distinct
1D sequences along rows and columns, process each sequence independently using
selective scan operation, and recombine them to form the outputs. However, such
a paradigm overlooks two vital aspects: i) the local relationships and spatial
continuity inherent in natural images, and ii) the discrepancies among
sequences unfolded in entirely different ways. To overcome these drawbacks,
we explore two problems in Mamba-based restoration methods: i) how to design a
scanning strategy preserving both locality and continuity while facilitating
restoration, and ii) how to aggregate the distinct sequences unfolded in
totally different ways. To address these problems, we propose a novel
Mamba-based Image Restoration model (MaIR), which consists of a Nested S-shaped
Scanning strategy (NSS) and a Sequence Shuffle Attention block (SSA).
Specifically, NSS preserves locality and continuity of the input images through
the stripe-based scanning region and the S-shaped scanning path, respectively.
SSA aggregates sequences by calculating attention weights within the
corresponding channels of different sequences. Thanks to NSS and SSA, MaIR
surpasses 40 baselines across 14 challenging datasets, achieving
state-of-the-art performance on the tasks of image super-resolution, denoising,
deblurring and dehazing. The code is available at
https://github.com/XLearning-SCU/2025-CVPR-MaIR.
|
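The continuity-preserving scanning idea behind NSS can be illustrated with a tiny, assumption-laden sketch: an S-shaped (serpentine) flattening keeps consecutive elements of the 1D sequence spatially adjacent, unlike plain row-major flattening. This shows only the scanning-path intuition, not the stripe-based NSS module or the SSA block.

```python
import numpy as np

def s_shaped_scan(img):
    """Flatten an HxW array row by row, reversing every other row so that
    consecutive elements of the 1D sequence stay spatially adjacent."""
    rows = [row[::-1] if i % 2 else row for i, row in enumerate(img)]
    return np.concatenate(rows)

def s_shaped_unscan(seq, h, w):
    """Invert the scan: fold the 1D sequence back into an HxW array."""
    out = seq.reshape(h, w).copy()
    out[1::2] = out[1::2, ::-1]
    return out

img = np.arange(16).reshape(4, 4)
seq = s_shaped_scan(img)
print(seq)   # [ 0  1  2  3  7  6  5  4  8  9 10 11 15 14 13 12]
assert (s_shaped_unscan(seq, 4, 4) == img).all()
```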
2412.21059 | Jiazheng Xu | Jiazheng Xu, Yu Huang, Jiale Cheng, Yuanming Yang, Jiajun Xu, Yuan
Wang, Wenbo Duan, Shen Yang, Qunlin Jin, Shurun Li, Jiayan Teng, Zhuoyi Yang,
Wendi Zheng, Xiao Liu, Ming Ding, Xiaohan Zhang, Xiaotao Gu, Shiyu Huang,
Minlie Huang, Jie Tang, Yuxiao Dong | VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning
for Image and Video Generation | 29 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Visual generative models have achieved remarkable progress in synthesizing
photorealistic images and videos, yet aligning their outputs with human
preferences across critical dimensions remains a persistent challenge. Though
reinforcement learning from human feedback offers promise for preference
alignment, existing reward models for visual generation face limitations,
including black-box scoring without interpretability and potentially resultant
unexpected biases. We present VisionReward, a general framework for learning
human visual preferences in both image and video generation. Specifically, we
employ a hierarchical visual assessment framework to capture fine-grained human
preferences, and leverage linear weighting to enable interpretable preference
learning. Furthermore, we propose a multi-dimensional consistent strategy when
using VisionReward as a reward model during preference optimization for visual
generation. Experiments show that VisionReward can significantly outperform
existing image and video reward models on both machine metrics and human
evaluation. Notably, VisionReward surpasses VideoScore by 17.2% in preference
prediction accuracy, and text-to-video models with VisionReward achieve a 31.6%
higher pairwise win rate compared to the same models using VideoScore. All code
and datasets are provided at https://github.com/THUDM/VisionReward.
| [
{
"version": "v1",
"created": "Mon, 30 Dec 2024 16:24:09 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 09:37:33 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xu",
"Jiazheng",
""
],
[
"Huang",
"Yu",
""
],
[
"Cheng",
"Jiale",
""
],
[
"Yang",
"Yuanming",
""
],
[
"Xu",
"Jiajun",
""
],
[
"Wang",
"Yuan",
""
],
[
"Duan",
"Wenbo",
""
],
[
"Yang",
"Shen",
""
],
[
"Jin",
"Qunlin",
""
],
[
"Li",
"Shurun",
""
],
[
"Teng",
"Jiayan",
""
],
[
"Yang",
"Zhuoyi",
""
],
[
"Zheng",
"Wendi",
""
],
[
"Liu",
"Xiao",
""
],
[
"Ding",
"Ming",
""
],
[
"Zhang",
"Xiaohan",
""
],
[
"Gu",
"Xiaotao",
""
],
[
"Huang",
"Shiyu",
""
],
[
"Huang",
"Minlie",
""
],
[
"Tang",
"Jie",
""
],
[
"Dong",
"Yuxiao",
""
]
] | TITLE: VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning
for Image and Video Generation
ABSTRACT: Visual generative models have achieved remarkable progress in synthesizing
photorealistic images and videos, yet aligning their outputs with human
preferences across critical dimensions remains a persistent challenge. Though
reinforcement learning from human feedback offers promise for preference
alignment, existing reward models for visual generation face limitations,
including black-box scoring without interpretability and potentially resultant
unexpected biases. We present VisionReward, a general framework for learning
human visual preferences in both image and video generation. Specifically, we
employ a hierarchical visual assessment framework to capture fine-grained human
preferences, and leverage linear weighting to enable interpretable preference
learning. Furthermore, we propose a multi-dimensional consistent strategy when
using VisionReward as a reward model during preference optimization for visual
generation. Experiments show that VisionReward can significantly outperform
existing image and video reward models on both machine metrics and human
evaluation. Notably, VisionReward surpasses VideoScore by 17.2% in preference
prediction accuracy, and text-to-video models with VisionReward achieve a 31.6%
higher pairwise win rate compared to the same models using VideoScore. All code
and datasets are provided at https://github.com/THUDM/VisionReward.
|
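A hedged toy sketch of the interpretable linear-weighting idea from the VisionReward abstract above: per-dimension binary judgments are combined by a weighted sum, so every reward can be decomposed into named checks. The dimension names and weights below are invented; the real framework uses a much larger hierarchical checklist and fitted weights.

```python
import numpy as np

# Hypothetical fine-grained checks (1 = satisfied, 0 = not); the real
# framework asks many more hierarchical questions per image or video.
CHECKS = ["prompt_objects_present", "no_artifacts", "plausible_lighting",
          "temporal_consistency", "aesthetic_composition"]

# Interpretable weights, e.g. fitted by regression on human preference
# annotations (values here are invented).
WEIGHTS = np.array([0.35, 0.25, 0.15, 0.15, 0.10])

def vision_reward(check_results):
    """Weighted sum of per-dimension judgments -> scalar reward in [0, 1]."""
    x = np.array([check_results[name] for name in CHECKS], dtype=float)
    return float(WEIGHTS @ x)

candidate_a = dict(zip(CHECKS, [1, 1, 1, 0, 1]))
candidate_b = dict(zip(CHECKS, [1, 0, 1, 1, 0]))
print(vision_reward(candidate_a), vision_reward(candidate_b))
# The per-dimension breakdown makes the preference decision inspectable.
```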
2501.08326 | Miran Heo | Miran Heo, Min-Hung Chen, De-An Huang, Sifei Liu, Subhashree
Radhakrishnan, Seon Joo Kim, Yu-Chiang Frank Wang, Ryo Hachiuma | Omni-RGPT: Unifying Image and Video Region-level Understanding via Token
Marks | CVPR 2025, Project page: https://miranheo.github.io/omni-rgpt/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present Omni-RGPT, a multimodal large language model designed to
facilitate region-level comprehension for both images and videos. To achieve
consistent region representation across spatio-temporal dimensions, we
introduce Token Mark, a set of tokens highlighting the target regions within
the visual feature space. These tokens are directly embedded into spatial
regions using region prompts (e.g., boxes or masks) and simultaneously
incorporated into the text prompt to specify the target, establishing a direct
connection between visual and text tokens. To further support robust video
understanding without requiring tracklets, we introduce an auxiliary task that
guides Token Mark by leveraging the consistency of the tokens, enabling stable
region interpretation across the video. Additionally, we introduce a
large-scale region-level video instruction dataset (RegVID-300k). Omni-RGPT
achieves state-of-the-art results on image and video-based commonsense
reasoning benchmarks while showing strong performance in captioning and
referring expression comprehension tasks.
| [
{
"version": "v1",
"created": "Tue, 14 Jan 2025 18:58:04 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 09:03:54 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Heo",
"Miran",
""
],
[
"Chen",
"Min-Hung",
""
],
[
"Huang",
"De-An",
""
],
[
"Liu",
"Sifei",
""
],
[
"Radhakrishnan",
"Subhashree",
""
],
[
"Kim",
"Seon Joo",
""
],
[
"Wang",
"Yu-Chiang Frank",
""
],
[
"Hachiuma",
"Ryo",
""
]
] | TITLE: Omni-RGPT: Unifying Image and Video Region-level Understanding via Token
Marks
ABSTRACT: We present Omni-RGPT, a multimodal large language model designed to
facilitate region-level comprehension for both images and videos. To achieve
consistent region representation across spatio-temporal dimensions, we
introduce Token Mark, a set of tokens highlighting the target regions within
the visual feature space. These tokens are directly embedded into spatial
regions using region prompts (e.g., boxes or masks) and simultaneously
incorporated into the text prompt to specify the target, establishing a direct
connection between visual and text tokens. To further support robust video
understanding without requiring tracklets, we introduce an auxiliary task that
guides Token Mark by leveraging the consistency of the tokens, enabling stable
region interpretation across the video. Additionally, we introduce a
large-scale region-level video instruction dataset (RegVID-300k). Omni-RGPT
achieves state-of-the-art results on image and video-based commonsense
reasoning benchmarks while showing strong performance in captioning and
referring expression comprehension tasks.
|
2501.10811 | Cheng Liu | Cheng Liu, Hui Wang, Jinghua Zhao, Shiwan Zhao, Hui Bu, Xin Xu,
Jiaming Zhou, Haoqin Sun, Yong Qin | MusicEval: A Generative Music Dataset with Expert Ratings for Automatic
Text-to-Music Evaluation | Accepted by ICASSP 2025 | null | null | null | cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The technology for generating music from textual descriptions has seen rapid
advancements. However, evaluating text-to-music (TTM) systems remains a
significant challenge, primarily due to the difficulty of balancing performance
and cost with existing objective and subjective evaluation methods. In this
paper, we propose an automatic assessment task for TTM models to align with
human perception. To address the TTM evaluation challenges posed by the
professional requirements of music evaluation and the complexity of the
relationship between text and music, we collect MusicEval, the first generative
music assessment dataset. This dataset contains 2,748 music clips generated by
31 advanced and widely used models in response to 384 text prompts, along with
13,740 ratings from 14 music experts. Furthermore, we design a CLAP-based
assessment model built on this dataset, and our experimental results validate
the feasibility of the proposed task, providing a valuable reference for future
development in TTM evaluation. The dataset is available at
https://www.aishelltech.com/AISHELL_7A.
| [
{
"version": "v1",
"created": "Sat, 18 Jan 2025 16:21:03 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 02:05:18 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Cheng",
""
],
[
"Wang",
"Hui",
""
],
[
"Zhao",
"Jinghua",
""
],
[
"Zhao",
"Shiwan",
""
],
[
"Bu",
"Hui",
""
],
[
"Xu",
"Xin",
""
],
[
"Zhou",
"Jiaming",
""
],
[
"Sun",
"Haoqin",
""
],
[
"Qin",
"Yong",
""
]
] | TITLE: MusicEval: A Generative Music Dataset with Expert Ratings for Automatic
Text-to-Music Evaluation
ABSTRACT: The technology for generating music from textual descriptions has seen rapid
advancements. However, evaluating text-to-music (TTM) systems remains a
significant challenge, primarily due to the difficulty of balancing performance
and cost with existing objective and subjective evaluation methods. In this
paper, we propose an automatic assessment task for TTM models to align with
human perception. To address the TTM evaluation challenges posed by the
professional requirements of music evaluation and the complexity of the
relationship between text and music, we collect MusicEval, the first generative
music assessment dataset. This dataset contains 2,748 music clips generated by
31 advanced and widely used models in response to 384 text prompts, along with
13,740 ratings from 14 music experts. Furthermore, we design a CLAP-based
assessment model built on this dataset, and our experimental results validate
the feasibility of the proposed task, providing a valuable reference for future
development in TTM evaluation. The dataset is available at
https://www.aishelltech.com/AISHELL_7A.
|
2501.11425 | Siyu Yuan | Siyu Yuan, Zehui Chen, Zhiheng Xi, Junjie Ye, Zhengyin Du, Jiecao Chen | Agent-R: Training Language Model Agents to Reflect via Iterative
Self-Training | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Language Model (LLM) agents are increasingly pivotal for addressing
complex tasks in interactive environments. Existing work mainly focuses on
enhancing performance through behavior cloning from stronger experts, yet such
approaches often falter in real-world applications, mainly due to the inability
to recover from errors. However, step-level critique data is difficult and
expensive to collect. Automating and dynamically constructing self-critique
datasets is thus crucial to empowering models with intelligent agent
capabilities. In this work, we propose an iterative self-training framework,
Agent-R, which enables a language Agent to Reflect on the fly. Unlike traditional
methods that reward or penalize actions based on correctness, Agent-R leverages
MCTS to construct training data that recover correct trajectories from
erroneous ones. A key challenge of agent reflection lies in the necessity for
timely revision rather than waiting until the end of a rollout. To address
this, we introduce a model-guided critique construction mechanism: the actor
model identifies the first error step (within its current capability) in a
failed trajectory. Starting from that step, we splice the trajectory with an adjacent correct
path, which shares the same parent node in the tree. This strategy enables the
model to learn reflection based on its current policy, therefore yielding
better learning efficiency. To further explore the scalability of this
self-improvement paradigm, we investigate iterative refinement of both error
correction capabilities and dataset construction. Our findings demonstrate that
Agent-R continuously improves the model's ability to recover from errors and
enables timely error correction. Experiments on three interactive environments
show that Agent-R effectively equips agents to correct erroneous actions while
avoiding loops, achieving superior performance compared to baseline methods
(+5.59%).
| [
{
"version": "v1",
"created": "Mon, 20 Jan 2025 11:46:04 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 09:28:09 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 10:18:56 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yuan",
"Siyu",
""
],
[
"Chen",
"Zehui",
""
],
[
"Xi",
"Zhiheng",
""
],
[
"Ye",
"Junjie",
""
],
[
"Du",
"Zhengyin",
""
],
[
"Chen",
"Jiecao",
""
]
] | TITLE: Agent-R: Training Language Model Agents to Reflect via Iterative
Self-Training
ABSTRACT: Large Language Model (LLM) agents are increasingly pivotal for addressing
complex tasks in interactive environments. Existing work mainly focuses on
enhancing performance through behavior cloning from stronger experts, yet such
approaches often falter in real-world applications, mainly due to the inability
to recover from errors. However, step-level critique data is difficult and
expensive to collect. Automating and dynamically constructing self-critique
datasets is thus crucial to empowering models with intelligent agent
capabilities. In this work, we propose an iterative self-training framework,
Agent-R, which enables a language Agent to Reflect on the fly. Unlike traditional
methods that reward or penalize actions based on correctness, Agent-R leverages
MCTS to construct training data that recover correct trajectories from
erroneous ones. A key challenge of agent reflection lies in the necessity for
timely revision rather than waiting until the end of a rollout. To address
this, we introduce a model-guided critique construction mechanism: the actor
model identifies the first error step (within its current capability) in a
failed trajectory. Starting from that step, we splice the trajectory with an adjacent correct
path, which shares the same parent node in the tree. This strategy enables the
model to learn reflection based on its current policy, therefore yielding
better learning efficiency. To further explore the scalability of this
self-improvement paradigm, we investigate iterative refinement of both error
correction capabilities and dataset construction. Our findings demonstrate that
Agent-R continuously improves the model's ability to recover from errors and
enables timely error correction. Experiments on three interactive environments
show that Agent-R effectively equips agents to correct erroneous actions while
avoiding loops, achieving superior performance compared to baseline methods
(+5.59%).
|
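A minimal, hypothetical sketch of the trajectory-splicing step described in the Agent-R abstract above: keep the failed rollout up to the first detected error, insert a reflection signal, and continue along a correct sibling path that shares the same parent. The critic, message format, and toy trajectories are assumptions for illustration only.

```python
def splice_revision(failed_traj, correct_traj, is_error_step):
    """Build a reflection training trajectory from a failed rollout.

    failed_traj / correct_traj: lists of (state, action) pairs that share a
    common prefix (the same parent node in the search tree).
    is_error_step: callable flagging the first bad action the actor can detect.
    """
    # 1. Locate the first error step within the actor's current capability.
    first_err = next(i for i, step in enumerate(failed_traj) if is_error_step(step))
    # 2. Find where the correct path branches off the shared prefix.
    branch = next(i for i, (f, c) in enumerate(zip(failed_traj, correct_traj)) if f != c)
    # 3. Keep the failed prefix up to the error, insert a reflection signal,
    #    then continue along the correct sibling path from the branch point.
    reflection = ("reflect", "This action was wrong; revising the plan.")
    return failed_traj[:first_err + 1] + [reflection] + correct_traj[branch:]

failed = [("s0", "look"), ("s1", "go_east"), ("s2", "open_wrong_door")]
correct = [("s0", "look"), ("s1", "go_west"), ("s3", "open_right_door")]
print(splice_revision(failed, correct, lambda step: step[1] == "open_wrong_door"))
```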
2501.11561 | Zhiyuan You | Zhiyuan You, Xin Cai, Jinjin Gu, Tianfan Xue, Chao Dong | Teaching Large Language Models to Regress Accurate Image Quality Scores
using Score Distribution | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid advancement of Multi-modal Large Language Models (MLLMs),
MLLM-based Image Quality Assessment (IQA) methods have shown promising
performance in linguistic quality description. However, current methods still
fall short in accurately scoring image quality. In this work, we aim to
leverage MLLMs to regress accurate quality scores. A key challenge is that the
quality score is inherently continuous, typically modeled as a Gaussian
distribution, whereas MLLMs generate discrete token outputs. This mismatch
necessitates score discretization. Previous approaches discretize the mean
score into a one-hot label, resulting in information loss and failing to
capture inter-image relationships. We propose a distribution-based approach
that discretizes the score distribution into a soft label. This method
preserves the characteristics of the score distribution, achieving high
accuracy and maintaining inter-image relationships. Moreover, to address
dataset variation, where different IQA datasets exhibit various distributions,
we introduce a fidelity loss based on Thurstone's model. This loss captures
intra-dataset relationships, facilitating co-training across multiple IQA
datasets. With these designs, we develop the distribution-based Depicted image
Quality Assessment model for Score regression (DeQA-Score). Experiments across
multiple benchmarks show that DeQA-Score stably outperforms baselines in score
regression. Also, DeQA-Score can predict the score distribution that closely
aligns with human annotations. Code and model weights have been released at
https://depictqa.github.io/deqa-score/.
| [
{
"version": "v1",
"created": "Mon, 20 Jan 2025 16:04:57 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 16:59:31 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"You",
"Zhiyuan",
""
],
[
"Cai",
"Xin",
""
],
[
"Gu",
"Jinjin",
""
],
[
"Xue",
"Tianfan",
""
],
[
"Dong",
"Chao",
""
]
] | TITLE: Teaching Large Language Models to Regress Accurate Image Quality Scores
using Score Distribution
ABSTRACT: With the rapid advancement of Multi-modal Large Language Models (MLLMs),
MLLM-based Image Quality Assessment (IQA) methods have shown promising
performance in linguistic quality description. However, current methods still
fall short in accurately scoring image quality. In this work, we aim to
leverage MLLMs to regress accurate quality scores. A key challenge is that the
quality score is inherently continuous, typically modeled as a Gaussian
distribution, whereas MLLMs generate discrete token outputs. This mismatch
necessitates score discretization. Previous approaches discretize the mean
score into a one-hot label, resulting in information loss and failing to
capture inter-image relationships. We propose a distribution-based approach
that discretizes the score distribution into a soft label. This method
preserves the characteristics of the score distribution, achieving high
accuracy and maintaining inter-image relationships. Moreover, to address
dataset variation, where different IQA datasets exhibit various distributions,
we introduce a fidelity loss based on Thurstone's model. This loss captures
intra-dataset relationships, facilitating co-training across multiple IQA
datasets. With these designs, we develop the distribution-based Depicted image
Quality Assessment model for Score regression (DeQA-Score). Experiments across
multiple benchmarks show that DeQA-Score stably outperforms baselines in score
regression. Also, DeQA-Score can predict the score distribution that closely
aligns with human annotations. Code and model weights have been released at
https://depictqa.github.io/deqa-score/.
|
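The soft-label discretization described in the DeQA-Score abstract above can be sketched directly: take the probability mass of the Gaussian score distribution inside each discrete score bin. The 1-5 scale and half-open bin edges are assumptions; the point is that the label keeps the distribution's spread instead of collapsing to a one-hot mean.

```python
import math

def soft_label(mean, std, levels=(1, 2, 3, 4, 5)):
    """Probability mass of N(mean, std^2) falling in each score bin.

    Bin k covers [k - 0.5, k + 0.5); the outer bins absorb the tails, so the
    soft label always sums to 1 and preserves the shape of the score
    distribution instead of collapsing it to a one-hot mean score.
    """
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

    probs = []
    for i, level in enumerate(levels):
        lo = -math.inf if i == 0 else level - 0.5
        hi = math.inf if i == len(levels) - 1 else level + 0.5
        probs.append(cdf(hi) - cdf(lo))
    return probs

print([round(p, 3) for p in soft_label(mean=3.4, std=0.7)])
# A one-hot label would keep only the argmax bin; the soft label keeps the spread.
```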
2501.12263 | Jian Teng | Bingyi Liu, Jian Teng, Hongfei Xue, Enshu Wang, Chuanhui Zhu, Pu Wang,
Libing Wu | mmCooper: A Multi-agent Multi-stage Communication-efficient and
Collaboration-robust Cooperative Perception Framework | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative perception significantly enhances individual vehicle perception
performance through the exchange of sensory information among agents. However,
real-world deployment faces challenges due to bandwidth constraints and
inevitable calibration errors during information exchange. To address these
issues, we propose mmCooper, a novel multi-agent, multi-stage,
communication-efficient, and collaboration-robust cooperative perception
framework. Our framework leverages a multi-stage collaboration strategy that
dynamically and adaptively balances intermediate- and late-stage information to
share among agents, enhancing perceptual performance while maintaining
communication efficiency. To support robust collaboration despite potential
misalignments and calibration errors, our framework prevents the transmission of
misleading low-confidence sensing information and refines the received
detection results from collaborators to improve accuracy. The extensive
evaluation results on both real-world and simulated datasets demonstrate the
effectiveness of the mmCooper framework and its components.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 16:34:16 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 07:42:31 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Bingyi",
""
],
[
"Teng",
"Jian",
""
],
[
"Xue",
"Hongfei",
""
],
[
"Wang",
"Enshu",
""
],
[
"Zhu",
"Chuanhui",
""
],
[
"Wang",
"Pu",
""
],
[
"Wu",
"Libing",
""
]
] | TITLE: mmCooper: A Multi-agent Multi-stage Communication-efficient and
Collaboration-robust Cooperative Perception Framework
ABSTRACT: Collaborative perception significantly enhances individual vehicle perception
performance through the exchange of sensory information among agents. However,
real-world deployment faces challenges due to bandwidth constraints and
inevitable calibration errors during information exchange. To address these
issues, we propose mmCooper, a novel multi-agent, multi-stage,
communication-efficient, and collaboration-robust cooperative perception
framework. Our framework leverages a multi-stage collaboration strategy that
dynamically and adaptively balances intermediate- and late-stage information to
share among agents, enhancing perceptual performance while maintaining
communication efficiency. To support robust collaboration despite potential
misalignments and calibration errors, our framework prevents the transmission of
misleading low-confidence sensing information and refines the received
detection results from collaborators to improve accuracy. The extensive
evaluation results on both real-world and simulated datasets demonstrate the
effectiveness of the mmCooper framework and its components.
|
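A hedged, simplified sketch of the confidence-gating idea in the mmCooper abstract above: drop low-confidence detections before transmission so they cannot mislead collaborators, then truncate to a bandwidth budget. The threshold, budget, and message format are invented stand-ins, not the framework's actual multi-stage protocol.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple          # (x, y, w, h) in the sender's frame
    score: float        # detector confidence
    label: str

def select_for_transmission(detections, conf_threshold=0.5, budget=10):
    """Keep only confident detections, then respect a bandwidth budget.

    Low-confidence detections are dropped before transmission so they cannot
    mislead collaborators; the remainder is truncated to the `budget` most
    confident ones as a crude stand-in for a communication constraint.
    """
    confident = [d for d in detections if d.score >= conf_threshold]
    confident.sort(key=lambda d: d.score, reverse=True)
    return confident[:budget]

local = [Detection((10, 4, 2, 5), 0.92, "car"),
         Detection((40, 9, 1, 2), 0.31, "pedestrian"),   # dropped: low confidence
         Detection((22, 7, 2, 4), 0.64, "car")]
print(select_for_transmission(local, conf_threshold=0.5, budget=2))
```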
2501.13558 | Francesco Di Sario | Francesco Di Sario, Riccardo Renzulli, Marco Grangetto, Akihiro
Sugimoto, Enzo Tartaglione | GoDe: Gaussians on Demand for Progressive Level of Detail and Scalable
Compression | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | 3D Gaussian Splatting enhances real-time performance in novel view synthesis
by representing scenes with mixtures of Gaussians and utilizing differentiable
rasterization. However, it typically requires large storage capacity and high
VRAM, demanding the design of effective pruning and compression techniques.
Existing methods, while effective in some scenarios, struggle with scalability
and fail to adapt models based on critical factors such as computing
capabilities or bandwidth, requiring the model to be re-trained under different
configurations. In this work, we propose a novel, model-agnostic technique that
organizes Gaussians into several hierarchical layers, enabling a progressive
Level of Detail (LoD) strategy. This method, combined with recent 3DGS
compression approaches, allows a single model to instantly scale across several
compression ratios, with minimal to no impact on quality compared to a single
non-scalable model, and without requiring re-training. We validate our approach
on typical datasets and benchmarks, showcasing low distortion and substantial
gains in terms of scalability and adaptability.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2025 11:05:45 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 22:36:30 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Di Sario",
"Francesco",
""
],
[
"Renzulli",
"Riccardo",
""
],
[
"Grangetto",
"Marco",
""
],
[
"Sugimoto",
"Akihiro",
""
],
[
"Tartaglione",
"Enzo",
""
]
] | TITLE: GoDe: Gaussians on Demand for Progressive Level of Detail and Scalable
Compression
ABSTRACT: 3D Gaussian Splatting enhances real-time performance in novel view synthesis
by representing scenes with mixtures of Gaussians and utilizing differentiable
rasterization. However, it typically requires large storage capacity and high
VRAM, demanding the design of effective pruning and compression techniques.
Existing methods, while effective in some scenarios, struggle with scalability
and fail to adapt models based on critical factors such as computing
capabilities or bandwidth, requiring the model to be re-trained under different
configurations. In this work, we propose a novel, model-agnostic technique that
organizes Gaussians into several hierarchical layers, enabling a progressive
Level of Detail (LoD) strategy. This method, combined with recent 3DGS
compression approaches, allows a single model to instantly scale across several
compression ratios, with minimal to no impact on quality compared to a single
non-scalable model, and without requiring re-training. We validate our approach
on typical datasets and benchmarks, showcasing low distortion and substantial
gains in terms of scalability and adaptability.
|
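A small sketch of the progressive Level of Detail idea from the GoDe abstract above, under the assumption that each Gaussian has a scalar importance score: Gaussians are partitioned into hierarchical layers by importance quantile, and a higher LoD simply keeps more layers. The scoring and layer count are illustrative; the actual layering and compression pipeline is not reproduced.

```python
import numpy as np

def build_lod_layers(importance, n_layers=4):
    """Assign each Gaussian to a layer by importance quantile.

    Layer 0 holds the most important Gaussians; adding layers 1, 2, ...
    progressively refines the model without re-training.
    """
    ranks = np.argsort(-importance)                # most important first
    splits = np.array_split(ranks, n_layers)
    return [np.sort(idx) for idx in splits]

def gaussians_at_lod(layers, lod):
    """Indices of all Gaussians used when the first `lod + 1` layers are kept."""
    return np.sort(np.concatenate(layers[:lod + 1]))

rng = np.random.default_rng(0)
importance = rng.random(20)                        # e.g. opacity * screen coverage
layers = build_lod_layers(importance, n_layers=4)
for lod in range(4):
    print(f"LoD {lod}: {len(gaussians_at_lod(layers, lod))} Gaussians")
```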
2501.13962 | Hamza Kheddar | Afrah Gueriani, Hamza Kheddar, Ahmed Cherif Mazari | Adaptive Cyber-Attack Detection in IIoT Using Attention-Based LSTM-CNN
Models | null | 2024 International Conference on Telecommunications and
Intelligent Systems (ICTIS), IEEE | 10.1109/ICTIS62692.2024.10894509 | null | cs.CR cs.AI cs.LG cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | The rapid expansion of the industrial Internet of things (IIoT) has
introduced new challenges in securing critical infrastructures against
sophisticated cyberthreats. This study presents the development and evaluation
of an advanced intrusion detection system (IDS) based on a hybrid LSTM-convolutional
neural network (CNN)-Attention architecture, specifically designed to detect
and classify cyberattacks in IIoT environments. The research focuses on two key
classification tasks: binary and multi-class classification. The proposed
models were rigorously tested using the Edge-IIoTset dataset. To mitigate the
class imbalance in the dataset, the synthetic minority over-sampling technique
(SMOTE) was employed to generate synthetic samples for the underrepresented
classes. This ensured that the model could learn effectively from all classes,
thereby improving the overall classification performance. Through systematic
experimentation, various deep learning (DL) models were compared, ultimately
demonstrating that the LSTM-CNN-Attention model consistently outperformed
others across key performance metrics. In binary classification, the model
achieved near-perfect accuracy, while in multi-class classification, it
maintained a high accuracy level (99.04%), effectively categorizing different
attack types with a loss value of 0.0220%.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 20:52:23 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Gueriani",
"Afrah",
""
],
[
"Kheddar",
"Hamza",
""
],
[
"Mazari",
"Ahmed Cherif",
""
]
] | TITLE: Adaptive Cyber-Attack Detection in IIoT Using Attention-Based LSTM-CNN
Models
ABSTRACT: The rapid expansion of the industrial Internet of things (IIoT) has
introduced new challenges in securing critical infrastructures against
sophisticated cyberthreats. This study presents the development and evaluation
of an advanced intrusion detection system (IDS) based on a hybrid LSTM-convolutional
neural network (CNN)-Attention architecture, specifically designed to detect
and classify cyberattacks in IIoT environments. The research focuses on two key
classification tasks: binary and multi-class classification. The proposed
models were rigorously tested using the Edge-IIoTset dataset. To mitigate the
class imbalance in the dataset, the synthetic minority over-sampling technique
(SMOTE) was employed to generate synthetic samples for the underrepresented
classes. This ensured that the model could learn effectively from all classes,
thereby improving the overall classification performance. Through systematic
experimentation, various deep learning (DL) models were compared, ultimately
demonstrating that the LSTM-CNN-Attention model consistently outperformed
others across key performance metrics. In binary classification, the model
achieved near-perfect accuracy, while in multi-class classification, it
maintained a high accuracy level (99.04%), effectively categorizing different
attack types with a loss value of 0.0220%.
|
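The class-rebalancing step named in the abstract above (SMOTE) can be shown with the imbalanced-learn library on synthetic data; the resampled set would then feed the LSTM-CNN-Attention model, which is not reproduced here. This assumes `imbalanced-learn` is installed, and the feature dimensions and class sizes are invented stand-ins for Edge-IIoTset.

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)

# Synthetic stand-in for an imbalanced IIoT traffic dataset:
# 900 benign flows vs. 60 attack flows, 20 features each.
X = np.vstack([rng.normal(0.0, 1.0, (900, 20)),
               rng.normal(2.0, 1.0, (60, 20))])
y = np.array([0] * 900 + [1] * 60)

print("before:", Counter(y))
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)   # synthesize minority samples
print("after: ", Counter(y_res))
# The balanced (X_res, y_res) would then be used to train the detector.
```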
2501.14002 | Zui Chen | Zui Chen, Tianqiao Liu, Mi Tian, Qing Tong, Weiqi Luo, Zitao Liu | Advancing Mathematical Reasoning in Language Models: The Impact of
Problem-Solving Data, Data Synthesis Methods, and Training Stages | ICLR 2025 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Mathematical reasoning remains a challenging area for large language models
(LLMs), prompting the development of math-specific LLMs such as LLEMMA,
DeepSeekMath, and Qwen2-Math, among others. These models typically follow a
two-stage training paradigm: pre-training with math-related corpora and
post-training with problem datasets for supervised fine-tuning (SFT). Despite
these efforts, the improvements in mathematical reasoning achieved through
continued pre-training (CPT) are often less significant compared to those
obtained via SFT. This study addresses this discrepancy by exploring
alternative strategies during the pre-training phase, focusing on the use of
problem-solving data over general mathematical corpora. We investigate three
primary research questions: (1) Can problem-solving data enhance the model's
mathematical reasoning capabilities more effectively than general mathematical
corpora during CPT? (2) Are synthetic data from the same source equally
effective, and which synthesis methods are most efficient? (3) How do the
capabilities developed from the same problem-solving data differ between the
CPT and SFT stages, and what factors contribute to these differences? Our
findings indicate that problem-solving data significantly enhances the model's
mathematical capabilities compared to general mathematical corpora. We also
identify effective data synthesis methods, demonstrating that the tutorship
amplification synthesis method achieves the best performance. Furthermore,
while SFT facilitates instruction-following abilities, it underperforms
compared to CPT with the same data, which can be partially attributed to its
poor learning capacity for more challenging problem-solving data. These
insights provide valuable guidance for optimizing the mathematical reasoning
capabilities of LLMs, culminating in our development of a powerful mathematical
base model called MathGPT-8B.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2025 12:14:57 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Feb 2025 07:26:26 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 02:20:01 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Zui",
""
],
[
"Liu",
"Tianqiao",
""
],
[
"Tian",
"Mi",
""
],
[
"Tong",
"Qing",
""
],
[
"Luo",
"Weiqi",
""
],
[
"Liu",
"Zitao",
""
]
] | TITLE: Advancing Mathematical Reasoning in Language Models: The Impact of
Problem-Solving Data, Data Synthesis Methods, and Training Stages
ABSTRACT: Mathematical reasoning remains a challenging area for large language models
(LLMs), prompting the development of math-specific LLMs such as LLEMMA,
DeepSeekMath, and Qwen2-Math, among others. These models typically follow a
two-stage training paradigm: pre-training with math-related corpora and
post-training with problem datasets for supervised fine-tuning (SFT). Despite
these efforts, the improvements in mathematical reasoning achieved through
continued pre-training (CPT) are often less significant compared to those
obtained via SFT. This study addresses this discrepancy by exploring
alternative strategies during the pre-training phase, focusing on the use of
problem-solving data over general mathematical corpora. We investigate three
primary research questions: (1) Can problem-solving data enhance the model's
mathematical reasoning capabilities more effectively than general mathematical
corpora during CPT? (2) Are synthetic data from the same source equally
effective, and which synthesis methods are most efficient? (3) How do the
capabilities developed from the same problem-solving data differ between the
CPT and SFT stages, and what factors contribute to these differences? Our
findings indicate that problem-solving data significantly enhances the model's
mathematical capabilities compared to general mathematical corpora. We also
identify effective data synthesis methods, demonstrating that the tutorship
amplification synthesis method achieves the best performance. Furthermore,
while SFT facilitates instruction-following abilities, it underperforms
compared to CPT with the same data, which can be partially attributed to its
poor learning capacity for more challenging problem-solving data. These
insights provide valuable guidance for optimizing the mathematical reasoning
capabilities of LLMs, culminating in our development of a powerful mathematical
base model called MathGPT-8B.
|
2501.14277 | JongMin Lee | JongMin Lee, Sungjoo Yoo | Dense-SfM: Structure from Motion with Dense Consistent Matching | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Dense-SfM, a novel Structure from Motion (SfM) framework designed
for dense and accurate 3D reconstruction from multi-view images. Sparse
keypoint matching, which traditional SfM methods often rely on, limits both
accuracy and point density, especially in texture-less areas. Dense-SfM
addresses this limitation by integrating dense matching with a Gaussian
Splatting (GS) based track extension which gives more consistent, longer
feature tracks. To further improve reconstruction accuracy, Dense-SfM is
equipped with a multi-view kernelized matching module leveraging transformer
and Gaussian Process architectures, for robust track refinement across
multi-views. Evaluations on the ETH3D and Texture-Poor SfM datasets show that
Dense-SfM offers significant improvements in accuracy and density over
state-of-the-art methods. Project page: https://icetea-cv.github.io/densesfm/.
| [
{
"version": "v1",
"created": "Fri, 24 Jan 2025 06:45:12 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 04:33:34 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lee",
"JongMin",
""
],
[
"Yoo",
"Sungjoo",
""
]
] | TITLE: Dense-SfM: Structure from Motion with Dense Consistent Matching
ABSTRACT: We present Dense-SfM, a novel Structure from Motion (SfM) framework designed
for dense and accurate 3D reconstruction from multi-view images. Sparse
keypoint matching, which traditional SfM methods often rely on, limits both
accuracy and point density, especially in texture-less areas. Dense-SfM
addresses this limitation by integrating dense matching with a Gaussian
Splatting (GS) based track extension which gives more consistent, longer
feature tracks. To further improve reconstruction accuracy, Dense-SfM is
equipped with a multi-view kernelized matching module leveraging transformer
and Gaussian Process architectures, for robust track refinement across
multi-views. Evaluations on the ETH3D and Texture-Poor SfM datasets show that
Dense-SfM offers significant improvements in accuracy and density over
state-of-the-art methods. Project page: https://icetea-cv.github.io/densesfm/.
|
2501.15449 | Yanan Zhang | Zengran Wang, Yanan Zhang, Jiaxin Chen, Di Huang | Breaking the SSL-AL Barrier: A Synergistic Semi-Supervised Active
Learning Framework for 3D Object Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To address the annotation burden in LiDAR-based 3D object detection, active
learning (AL) methods offer a promising solution. However, traditional active
learning approaches solely rely on a small amount of labeled data to train an
initial model for data selection, overlooking the potential of leveraging the
abundance of unlabeled data. Recently, attempts to integrate semi-supervised
learning (SSL) into AL with the goal of leveraging unlabeled data have faced
challenges in effectively resolving the conflict between the two paradigms,
resulting in less satisfactory performance. To tackle this conflict, we propose
a Synergistic Semi-Supervised Active Learning framework, dubbed S-SSAL.
Specifically, from the perspective of SSL, we propose a Collaborative
PseudoScene Pre-training (CPSP) method that effectively learns from unlabeled
data without introducing adverse effects. From the perspective of AL, we design
a Collaborative Active Learning (CAL) method, which complements the uncertainty
and diversity methods by model cascading. This allows us to fully exploit the
potential of the CPSP pre-trained model. Extensive experiments conducted on
KITTI and Waymo demonstrate the effectiveness of our S-SSAL framework. Notably,
on the KITTI dataset, utilizing only 2% labeled data, S-SSAL can achieve
performance comparable to models trained on the full dataset. The code has been
released at https://github.com/LandDreamer/S_SSAL.
| [
{
"version": "v1",
"created": "Sun, 26 Jan 2025 08:43:59 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 13:53:31 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wang",
"Zengran",
""
],
[
"Zhang",
"Yanan",
""
],
[
"Chen",
"Jiaxin",
""
],
[
"Huang",
"Di",
""
]
] | TITLE: Breaking the SSL-AL Barrier: A Synergistic Semi-Supervised Active
Learning Framework for 3D Object Detection
ABSTRACT: To address the annotation burden in LiDAR-based 3D object detection, active
learning (AL) methods offer a promising solution. However, traditional active
learning approaches solely rely on a small amount of labeled data to train an
initial model for data selection, overlooking the potential of leveraging the
abundance of unlabeled data. Recently, attempts to integrate semi-supervised
learning (SSL) into AL with the goal of leveraging unlabeled data have faced
challenges in effectively resolving the conflict between the two paradigms,
resulting in less satisfactory performance. To tackle this conflict, we propose
a Synergistic Semi-Supervised Active Learning framework, dubbed S-SSAL.
Specifically, from the perspective of SSL, we propose a Collaborative
PseudoScene Pre-training (CPSP) method that effectively learns from unlabeled
data without introducing adverse effects. From the perspective of AL, we design
a Collaborative Active Learning (CAL) method, which complements the uncertainty
and diversity methods by model cascading. This allows us to fully exploit the
potential of the CPSP pre-trained model. Extensive experiments conducted on
KITTI and Waymo demonstrate the effectiveness of our S-SSAL framework. Notably,
on the KITTI dataset, utilizing only 2% labeled data, S-SSAL can achieve
performance comparable to models trained on the full dataset. The code has been
released at https://github.com/LandDreamer/S_SSAL.
|
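As a generic illustration of combining uncertainty with diversity in active-learning selection (the spirit of CAL in the abstract above, not its actual model-cascading design), this sketch shortlists the most uncertain samples and then greedily picks the most mutually distant ones in feature space. All scores and features are random placeholders.

```python
import numpy as np

def select_samples(uncertainty, features, k):
    """Pick k unlabeled samples: shortlist by uncertainty, then greedily keep
    the ones farthest (in feature space) from what is already selected."""
    shortlist = np.argsort(-uncertainty)[: 4 * k]          # uncertainty stage
    chosen = [int(shortlist[0])]
    while len(chosen) < k:                                  # diversity stage
        dists = np.min(
            np.linalg.norm(features[shortlist, None] - features[chosen][None], axis=-1),
            axis=1)
        dists[np.isin(shortlist, chosen)] = -1.0            # skip already-chosen samples
        chosen.append(int(shortlist[int(np.argmax(dists))]))
    return chosen

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 32))                          # unlabeled pool features
unc = rng.random(200)                                       # e.g. detection entropy
print(select_samples(unc, feats, k=5))
```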
2501.18216 | Yejing Wang | Yejing Wang, Chi Zhang, Xiangyu Zhao, Qidong Liu, Maolin Wang, Xuetao
Wei, Zitao Liu, Xing Shi, Xudong Yang, Ling Zhong, Wei Lin | Behavior Modeling Space Reconstruction for E-Commerce Search | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Delivering superior search services is crucial for enhancing customer
experience and driving revenue growth. Conventionally, search systems model
user behaviors by combining user preference and query item relevance
statically, often through a fixed logical 'and' relationship. This paper
reexamines existing approaches through a unified lens using both causal graphs
and Venn diagrams, uncovering two prevalent yet significant issues: entangled
preference and relevance effects, and a collapsed modeling space. To surmount
these challenges, our research introduces a novel framework, DRP, which
enhances search accuracy through two components to reconstruct the behavior
modeling space. Specifically, we implement preference editing to proactively
remove the relevance effect from preference predictions, yielding untainted
user preferences. Additionally, we employ adaptive fusion, which dynamically
adjusts fusion criteria to align with the varying patterns of relevance and
preference, facilitating more nuanced and tailored behavior predictions within
the reconstructed modeling space. Empirical validation on two public datasets
and a proprietary search dataset underscores the superiority of our proposed
methodology, demonstrating marked improvements in performance over existing
approaches.
| [
{
"version": "v1",
"created": "Thu, 30 Jan 2025 09:17:04 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Feb 2025 03:34:08 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 17:10:59 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wang",
"Yejing",
""
],
[
"Zhang",
"Chi",
""
],
[
"Zhao",
"Xiangyu",
""
],
[
"Liu",
"Qidong",
""
],
[
"Wang",
"Maolin",
""
],
[
"Wei",
"Xuetao",
""
],
[
"Liu",
"Zitao",
""
],
[
"Shi",
"Xing",
""
],
[
"Yang",
"Xudong",
""
],
[
"Zhong",
"Ling",
""
],
[
"Lin",
"Wei",
""
]
] | TITLE: Behavior Modeling Space Reconstruction for E-Commerce Search
ABSTRACT: Delivering superior search services is crucial for enhancing customer
experience and driving revenue growth. Conventionally, search systems model
user behaviors by combining user preference and query item relevance
statically, often through a fixed logical 'and' relationship. This paper
reexamines existing approaches through a unified lens using both causal graphs
and Venn diagrams, uncovering two prevalent yet significant issues: entangled
preference and relevance effects, and a collapsed modeling space. To surmount
these challenges, our research introduces a novel framework, DRP, which
enhances search accuracy through two components to reconstruct the behavior
modeling space. Specifically, we implement preference editing to proactively
remove the relevance effect from preference predictions, yielding untainted
user preferences. Additionally, we employ adaptive fusion, which dynamically
adjusts fusion criteria to align with the varying patterns of relevance and
preference, facilitating more nuanced and tailored behavior predictions within
the reconstructed modeling space. Empirical validation on two public datasets
and a proprietary search dataset underscores the superiority of our proposed
methodology, demonstrating marked improvements in performance over existing
approaches.
|
2501.18648 | Ranjan Sapkota | Ranjan Sapkota, Shaina Raza, Maged Shoman, Achyut Paudel, Manoj Karkee | Multimodal Large Language Models for Image, Text, and Speech Data
Augmentation: A Survey | 52 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In the past five years, research has shifted from traditional Machine
Learning (ML) and Deep Learning (DL) approaches to leveraging Large Language
Models (LLMs), including multimodal ones, for data augmentation to enhance
generalization, and combat overfitting in training deep convolutional neural
networks. However, while existing surveys predominantly focus on ML and DL
techniques or limited modalities (text or images), a gap remains in addressing
the latest advancements and multi-modal applications of LLM-based methods. This
survey fills that gap by exploring recent literature utilizing multimodal LLMs
to augment image, text, and audio data, offering a comprehensive understanding
of these processes. We outlined the various methods employed in LLM-based
image, text, and speech augmentation, and discussed the limitations identified
in current approaches. Additionally, we identified potential solutions to these
limitations from the literature to enhance the efficacy of data augmentation
practices using multimodal LLMs. This survey serves as a foundation for future
research, aiming to refine and expand the use of multimodal LLMs in enhancing
dataset quality and diversity for deep learning applications. (Surveyed Paper
GitHub Repo: https://github.com/WSUAgRobotics/data-aug-multi-modal-llm.
Keywords: LLM data augmentation, Grok text data augmentation, DeepSeek image
data augmentation, Grok speech data augmentation, GPT audio augmentation, voice
augmentation, DeepSeek for data augmentation, DeepSeek R1 text data
augmentation, DeepSeek R1 image augmentation, Image Augmentation using LLM,
Text Augmentation using LLM, LLM data augmentation for deep learning
applications)
| [
{
"version": "v1",
"created": "Wed, 29 Jan 2025 16:38:57 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 18:17:47 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Sapkota",
"Ranjan",
""
],
[
"Raza",
"Shaina",
""
],
[
"Shoman",
"Maged",
""
],
[
"Paudel",
"Achyut",
""
],
[
"Karkee",
"Manoj",
""
]
] | TITLE: Multimodal Large Language Models for Image, Text, and Speech Data
Augmentation: A Survey
ABSTRACT: In the past five years, research has shifted from traditional Machine
Learning (ML) and Deep Learning (DL) approaches to leveraging Large Language
Models (LLMs), including multimodal ones, for data augmentation to enhance
generalization, and combat overfitting in training deep convolutional neural
networks. However, while existing surveys predominantly focus on ML and DL
techniques or limited modalities (text or images), a gap remains in addressing
the latest advancements and multi-modal applications of LLM-based methods. This
survey fills that gap by exploring recent literature utilizing multimodal LLMs
to augment image, text, and audio data, offering a comprehensive understanding
of these processes. We outlined the various methods employed in LLM-based
image, text, and speech augmentation, and discussed the limitations identified
in current approaches. Additionally, we identified potential solutions to these
limitations from the literature to enhance the efficacy of data augmentation
practices using multimodal LLMs. This survey serves as a foundation for future
research, aiming to refine and expand the use of multimodal LLMs in enhancing
dataset quality and diversity for deep learning applications. (Surveyed Paper
GitHub Repo: https://github.com/WSUAgRobotics/data-aug-multi-modal-llm.
Keywords: LLM data augmentation, Grok text data augmentation, DeepSeek image
data augmentation, Grok speech data augmentation, GPT audio augmentation, voice
augmentation, DeepSeek for data augmentation, DeepSeek R1 text data
augmentation, DeepSeek R1 image augmentation, Image Augmentation using LLM,
Text Augmentation using LLM, LLM data augmentation for deep learning
applications)
|
2501.19348 | Anne Josiane Kouam | Anne Josiane Kouam, Aline Carneiro Viana, Mariano G. Beir\'o, Leo
Ferres, Luca Pappalardo | Characterizing User Behavior: The Interplay Between Mobility Patterns
and Mobile Traffic | null | null | null | null | cs.NI cs.IR | http://creativecommons.org/licenses/by/4.0/ | Mobile devices have become essential for capturing human activity, and
eXtended Data Records (XDRs) offer rich opportunities for detailed user
behavior modeling, which is useful for designing personalized digital services.
Previous studies have primarily focused on aggregated mobile traffic and
mobility analyses, often neglecting individual-level insights. This paper
introduces a novel approach that explores the dependency between traffic and
mobility behaviors at the user level. By analyzing 13 individual features that
encompass traffic patterns and various mobility aspects, we enhance the
understanding of how these behaviors interact. Our advanced user modeling
framework integrates traffic and mobility behaviors over time, allowing for
fine-grained dependencies while maintaining population heterogeneity through
user-specific signatures. Furthermore, we develop a Markov model that infers
traffic behavior from mobility and vice versa, prioritizing significant
dependencies while addressing privacy concerns. Using a week-long XDR dataset
from 1,337,719 users across several provinces in Chile, we validate our
approach, demonstrating its robustness and applicability in accurately
inferring user behavior and matching mobility and traffic profiles across
diverse urban contexts.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2025 17:52:03 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 17:19:27 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Kouam",
"Anne Josiane",
""
],
[
"Viana",
"Aline Carneiro",
""
],
[
"Beiró",
"Mariano G.",
""
],
[
"Ferres",
"Leo",
""
],
[
"Pappalardo",
"Luca",
""
]
] | TITLE: Characterizing User Behavior: The Interplay Between Mobility Patterns
and Mobile Traffic
ABSTRACT: Mobile devices have become essential for capturing human activity, and
eXtended Data Records (XDRs) offer rich opportunities for detailed user
behavior modeling, which is useful for designing personalized digital services.
Previous studies have primarily focused on aggregated mobile traffic and
mobility analyses, often neglecting individual-level insights. This paper
introduces a novel approach that explores the dependency between traffic and
mobility behaviors at the user level. By analyzing 13 individual features that
encompass traffic patterns and various mobility aspects, we enhance the
understanding of how these behaviors interact. Our advanced user modeling
framework integrates traffic and mobility behaviors over time, allowing for
fine-grained dependencies while maintaining population heterogeneity through
user-specific signatures. Furthermore, we develop a Markov model that infers
traffic behavior from mobility and vice versa, prioritizing significant
dependencies while addressing privacy concerns. Using a week-long XDR dataset
from 1,337,719 users across several provinces in Chile, we validate our
approach, demonstrating its robustness and applicability in accurately
inferring user behavior and matching mobility and traffic profiles across
diverse urban contexts.
|
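The record above links user-level mobility and traffic behavior through a Markov model that infers one from the other. The sketch below is an illustrative reduction of that idea: it estimates P(traffic state | mobility state) from paired, discretized per-user sequences with additive smoothing. The state definitions and smoothing constant are assumptions for illustration, not the paper's feature set.

```python
import numpy as np

def conditional_matrix(mobility_seq, traffic_seq, n_mob, n_traf, alpha=1.0):
    """Estimate P(traffic state | mobility state) from paired, discretized
    per-user sequences, with additive (Laplace) smoothing."""
    counts = np.full((n_mob, n_traf), alpha)
    for m, t in zip(mobility_seq, traffic_seq):
        counts[m, t] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Toy example: 3 mobility states (home/work/other), 2 traffic states (low/high).
mob = [0, 0, 1, 2, 1, 0, 2, 2]
traf = [0, 0, 1, 1, 1, 0, 0, 1]
P = conditional_matrix(mob, traf, n_mob=3, n_traf=2)
print(P)  # row i is the inferred traffic distribution for mobility state i
```

The reverse direction, inferring mobility from traffic, follows by swapping the two sequences and transposing the state dimensions.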
2502.00700 | Yunuo Chen | Yunuo Chen, Qian Li, Bing He, Donghui Feng, Ronghua Wu, Qi Wang, Li
Song, Guo Lu, Wenjun Zhang | S2CFormer: Revisiting the RD-Latency Trade-off in Transformer-based
Learned Image Compression | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformer-based Learned Image Compression (LIC) suffers from a suboptimal
trade-off between decoding latency and rate-distortion (R-D) performance.
Moreover, the critical role of the FeedForward Network (FFN)-based channel
aggregation module has been largely overlooked. Our research reveals that
efficient channel aggregation, rather than complex and time-consuming spatial
operations, is the key to achieving competitive LIC models. Based on this
insight, we initiate the ``S2CFormer'' paradigm, a general architecture that
simplifies spatial operations and enhances channel operations to overcome the
previous trade-off. We present two instances of the S2CFormer: S2C-Conv, and
S2C-Attention. Both models demonstrate state-of-the-art (SOTA) R-D performance
and significantly faster decoding speed. Furthermore, we introduce S2C-Hybrid,
an enhanced variant that maximizes the strengths of different S2CFormer
instances to achieve a better performance-latency trade-off. This model
outperforms all the existing methods on the Kodak, Tecnick, and CLIC
Professional Validation datasets, setting a new benchmark for efficient and
high-performance LIC. The code is at
\href{https://github.com/YunuoChen/S2CFormer}{https://github.com/YunuoChen/S2CFormer}.
| [
{
"version": "v1",
"created": "Sun, 2 Feb 2025 07:15:51 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Feb 2025 18:30:07 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 09:19:16 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Yunuo",
""
],
[
"Li",
"Qian",
""
],
[
"He",
"Bing",
""
],
[
"Feng",
"Donghui",
""
],
[
"Wu",
"Ronghua",
""
],
[
"Wang",
"Qi",
""
],
[
"Song",
"Li",
""
],
[
"Lu",
"Guo",
""
],
[
"Zhang",
"Wenjun",
""
]
] | TITLE: S2CFormer: Revisiting the RD-Latency Trade-off in Transformer-based
Learned Image Compression
ABSTRACT: Transformer-based Learned Image Compression (LIC) suffers from a suboptimal
trade-off between decoding latency and rate-distortion (R-D) performance.
Moreover, the critical role of the FeedForward Network (FFN)-based channel
aggregation module has been largely overlooked. Our research reveals that
efficient channel aggregation, rather than complex and time-consuming spatial
operations, is the key to achieving competitive LIC models. Based on this
insight, we initiate the ``S2CFormer'' paradigm, a general architecture that
simplifies spatial operations and enhances channel operations to overcome the
previous trade-off. We present two instances of the S2CFormer: S2C-Conv, and
S2C-Attention. Both models demonstrate state-of-the-art (SOTA) R-D performance
and significantly faster decoding speed. Furthermore, we introduce S2C-Hybrid,
an enhanced variant that maximizes the strengths of different S2CFormer
instances to achieve a better performance-latency trade-off. This model
outperforms all the existing methods on the Kodak, Tecnick, and CLIC
Professional Validation datasets, setting a new benchmark for efficient and
high-performance LIC. The code is at
\href{https://github.com/YunuoChen/S2CFormer}{https://github.com/YunuoChen/S2CFormer}.
|
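The S2CFormer record above argues that cheap spatial operators combined with stronger channel aggregation suffice for competitive learned image compression. The PyTorch block below is a minimal sketch of that "simplify spatial, enhance channel" idea, using a depthwise convolution for spatial mixing and a wide pointwise FFN for channel mixing; the layer choices and sizes are illustrative assumptions, not the released S2C-Conv or S2C-Attention code.

```python
import torch
import torch.nn as nn

class SimpleSpatialChannelBlock(nn.Module):
    """Illustrative block: a depthwise conv handles spatial mixing cheaply,
    while a wide pointwise FFN does the channel aggregation."""
    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        self.norm1 = nn.GroupNorm(1, dim)
        self.norm2 = nn.GroupNorm(1, dim)
        self.spatial = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.channel = nn.Sequential(              # FFN-style channel aggregation
            nn.Conv2d(dim, dim * expansion, 1),
            nn.GELU(),
            nn.Conv2d(dim * expansion, dim, 1),
        )

    def forward(self, x):
        x = x + self.spatial(self.norm1(x))        # lightweight spatial operation
        x = x + self.channel(self.norm2(x))        # heavier channel operation
        return x

x = torch.randn(1, 64, 32, 32)
print(SimpleSpatialChannelBlock(64)(x).shape)      # torch.Size([1, 64, 32, 32])
```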
2502.01891 | Kemal Kurniawan | Kemal Kurniawan, Meladel Mistica, Timothy Baldwin, Jey Han Lau | Training and Evaluating with Human Label Variation: An Empirical Study | 25 pages | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Human label variation (HLV) challenges the standard assumption that a
labelled instance has a single ground truth, instead embracing the natural
variation in human annotation to train and evaluate models. While various
training methods and metrics for HLV have been proposed, it is still unclear
which methods and metrics perform best in what settings. We propose new
evaluation metrics for HLV leveraging fuzzy set theory. Since these new
proposed metrics are differentiable, we then in turn experiment with employing
these metrics as training objectives. We conduct an extensive study over 6 HLV
datasets testing 14 training methods and 6 evaluation metrics. We find that
training on either disaggregated annotations or soft labels performs best
across metrics, outperforming training using the proposed training objectives
with differentiable metrics. We also show that our proposed soft metric is more
interpretable and correlates best with human preference.
| [
{
"version": "v1",
"created": "Mon, 3 Feb 2025 23:49:20 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 00:06:14 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Kurniawan",
"Kemal",
""
],
[
"Mistica",
"Meladel",
""
],
[
"Baldwin",
"Timothy",
""
],
[
"Lau",
"Jey Han",
""
]
] | TITLE: Training and Evaluating with Human Label Variation: An Empirical Study
ABSTRACT: Human label variation (HLV) challenges the standard assumption that a
labelled instance has a single ground truth, instead embracing the natural
variation in human annotation to train and evaluate models. While various
training methods and metrics for HLV have been proposed, it is still unclear
which methods and metrics perform best in what settings. We propose new
evaluation metrics for HLV leveraging fuzzy set theory. Since these new
proposed metrics are differentiable, we then in turn experiment with employing
these metrics as training objectives. We conduct an extensive study over 6 HLV
datasets testing 14 training methods and 6 evaluation metrics. We find that
training on either disaggregated annotations or soft labels performs best
across metrics, outperforming training using the proposed training objectives
with differentiable metrics. We also show that our proposed soft metric is more
interpretable and correlates best with human preference.
|
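The HLV record above proposes fuzzy-set-based metrics for comparing a model's soft output to the distribution of human labels. The function below is a minimal sketch of one such score, a fuzzy-Jaccard-style overlap built from elementwise min and max; the paper's exact metric may be defined differently, so treat this as an illustration of the general idea.

```python
import numpy as np

def fuzzy_overlap(pred, human) -> float:
    """Fuzzy-Jaccard-style agreement between a predicted soft label vector and
    the empirical human label distribution (both sum to 1 over classes)."""
    pred, human = np.asarray(pred, float), np.asarray(human, float)
    return float(np.minimum(pred, human).sum() / np.maximum(pred, human).sum())

# Three annotators voted [2, 1, 0] over three classes; the model outputs soft labels.
human = np.array([2, 1, 0]) / 3.0
print(fuzzy_overlap([0.6, 0.3, 0.1], human))   # close to 1: good agreement
print(fuzzy_overlap([0.1, 0.1, 0.8], human))   # much lower
```

Written with elementwise min/max, the score is differentiable almost everywhere, which is what allows this style of metric to double as a training objective.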
2502.02215 | Senmao Li | Senmao Li and Kai Wang and Joost van de Weijer and Fahad Shahbaz Khan
and Chun-Le Guo and Shiqi Yang and Yaxing Wang and Jian Yang and Ming-Ming
Cheng | InterLCM: Low-Quality Images as Intermediate States of Latent
Consistency Models for Effective Blind Face Restoration | Accepted at ICLR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion priors have been used for blind face restoration (BFR) by
fine-tuning diffusion models (DMs) on restoration datasets to recover
low-quality images. However, the naive application of DMs presents several key
limitations. (i) The diffusion prior has inferior semantic consistency (e.g.,
ID, structure, and color), increasing the difficulty of optimizing the BFR
model; (ii) reliance on hundreds of denoising iterations, preventing the
effective cooperation with perceptual losses, which is crucial for faithful
restoration. Observing that the latent consistency model (LCM) learns
consistency noise-to-data mappings on the ODE-trajectory and therefore shows
more semantic consistency in the subject identity, structural information and
color preservation, we propose InterLCM to leverage the LCM for its superior
semantic consistency and efficiency to counter the above issues. Treating
low-quality images as the intermediate state of LCM, InterLCM achieves a
balance between fidelity and quality by starting from earlier LCM steps. LCM
also allows the integration of perceptual loss during training, leading to
improved restoration quality, particularly in real-world scenarios. To mitigate
structural and semantic uncertainties, InterLCM incorporates a Visual Module to
extract visual features and a Spatial Encoder to capture spatial details,
enhancing the fidelity of restored images. Extensive experiments demonstrate
that InterLCM outperforms existing approaches in both synthetic and real-world
datasets while also achieving faster inference speed.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 10:51:20 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 18:51:58 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Senmao",
""
],
[
"Wang",
"Kai",
""
],
[
"van de Weijer",
"Joost",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Guo",
"Chun-Le",
""
],
[
"Yang",
"Shiqi",
""
],
[
"Wang",
"Yaxing",
""
],
[
"Yang",
"Jian",
""
],
[
"Cheng",
"Ming-Ming",
""
]
] | TITLE: InterLCM: Low-Quality Images as Intermediate States of Latent
Consistency Models for Effective Blind Face Restoration
ABSTRACT: Diffusion priors have been used for blind face restoration (BFR) by
fine-tuning diffusion models (DMs) on restoration datasets to recover
low-quality images. However, the naive application of DMs presents several key
limitations. (i) The diffusion prior has inferior semantic consistency (e.g.,
ID, structure, and color), increasing the difficulty of optimizing the BFR
model; (ii) reliance on hundreds of denoising iterations, preventing the
effective cooperation with perceptual losses, which is crucial for faithful
restoration. Observing that the latent consistency model (LCM) learns
consistency noise-to-data mappings on the ODE-trajectory and therefore shows
more semantic consistency in the subject identity, structural information and
color preservation, we propose InterLCM to leverage the LCM for its superior
semantic consistency and efficiency to counter the above issues. Treating
low-quality images as the intermediate state of LCM, InterLCM achieves a
balance between fidelity and quality by starting from earlier LCM steps. LCM
also allows the integration of perceptual loss during training, leading to
improved restoration quality, particularly in real-world scenarios. To mitigate
structural and semantic uncertainties, InterLCM incorporates a Visual Module to
extract visual features and a Spatial Encoder to capture spatial details,
enhancing the fidelity of restored images. Extensive experiments demonstrate
that InterLCM outperforms existing approaches in both synthetic and real-world
datasets while also achieving faster inference speed.
|
2502.05741 | Donghui Feng | Donghui Feng, Zhengxue Cheng, Shen Wang, Ronghua Wu, Hongwei Hu, Guo
Lu, Li Song | Linear Attention Modeling for Learned Image Compression | Accepted by CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, learned image compression has made tremendous progress,
achieving impressive coding efficiency. Its coding gain mainly comes from
non-linear neural network-based transform and learnable entropy modeling.
However, most studies focus on a strong backbone, and few studies consider a
low-complexity design. In this paper, we propose LALIC, a linear attention
modeling approach for learned image compression. Specifically, we propose to
use Bi-RWKV blocks, utilizing the Spatial Mix and Channel Mix modules to
achieve more compact feature extraction, and apply the Conv-based Omni-Shift
module to adapt to the two-dimensional latent representation. Furthermore, we
propose an RWKV-based Spatial-Channel ConTeXt model (RWKV-SCCTX) that leverages
Bi-RWKV to model the correlation between neighboring features effectively. To
our knowledge, our work is the first to utilize efficient Bi-RWKV models with
linear attention for learned image compression. Experimental results
demonstrate that our method achieves competitive RD performance,
outperforming VTM-9.1 by -15.26%, -15.41%, -17.63% in BD-rate on Kodak, CLIC
and Tecnick datasets. The code is available at
https://github.com/sjtu-medialab/RwkvCompress .
| [
{
"version": "v1",
"created": "Sun, 9 Feb 2025 01:57:17 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 17:16:31 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Feng",
"Donghui",
""
],
[
"Cheng",
"Zhengxue",
""
],
[
"Wang",
"Shen",
""
],
[
"Wu",
"Ronghua",
""
],
[
"Hu",
"Hongwei",
""
],
[
"Lu",
"Guo",
""
],
[
"Song",
"Li",
""
]
] | TITLE: Linear Attention Modeling for Learned Image Compression
ABSTRACT: In recent years, learned image compression has made tremendous progress,
achieving impressive coding efficiency. Its coding gain mainly comes from
non-linear neural network-based transform and learnable entropy modeling.
However, most studies focus on a strong backbone, and few studies consider a
low-complexity design. In this paper, we propose LALIC, a linear attention
modeling approach for learned image compression. Specifically, we propose to
use Bi-RWKV blocks, utilizing the Spatial Mix and Channel Mix modules to
achieve more compact feature extraction, and apply the Conv-based Omni-Shift
module to adapt to the two-dimensional latent representation. Furthermore, we
propose an RWKV-based Spatial-Channel ConTeXt model (RWKV-SCCTX) that leverages
Bi-RWKV to model the correlation between neighboring features effectively. To
our knowledge, our work is the first to utilize efficient Bi-RWKV models with
linear attention for learned image compression. Experimental results
demonstrate that our method achieves competitive RD performance,
outperforming VTM-9.1 by -15.26%, -15.41%, -17.63% in BD-rate on Kodak, CLIC
and Tecnick datasets. The code is available at
https://github.com/sjtu-medialab/RwkvCompress .
|
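The LALIC record above reports coding gains as BD-rate against VTM-9.1. The function below is a minimal sketch of the standard Bjontegaard-delta rate computation between two rate-distortion curves (cubic fit of log-rate versus quality, integrated over the overlapping quality range); the RD points in the example are made up purely to exercise the function, not results from the paper.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard-delta rate (%) of the test codec vs. the anchor; negative
    values mean bitrate savings at equal quality."""
    lr_a, lr_t = np.log(rate_anchor), np.log(rate_test)
    p_a = np.polyfit(psnr_anchor, lr_a, 3)        # log-rate as a cubic in PSNR
    p_t = np.polyfit(psnr_test, lr_t, 3)
    lo = max(min(psnr_anchor), min(psnr_test))    # overlapping quality range
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_diff) - 1) * 100

# Made-up RD points (bitrate in bpp, PSNR in dB).
print(bd_rate([0.2, 0.4, 0.8, 1.6], [30, 33, 36, 39],
              [0.18, 0.35, 0.70, 1.40], [30.2, 33.1, 36.2, 39.1]))
```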
2502.07029 | Kwanghee Choi | Kwanghee Choi, Eunjung Yeo, Kalvin Chang, Shinji Watanabe, David
Mortensen | Leveraging Allophony in Self-Supervised Speech Models for Atypical
Pronunciation Assessment | Accepted to NAACL 2025. Codebase available at
https://github.com/juice500ml/acoustic-units-for-ood | null | null | null | cs.CL cs.AI cs.LG eess.AS | http://creativecommons.org/licenses/by/4.0/ | Allophony refers to the variation in the phonetic realization of a phoneme
based on its phonetic environment. Modeling allophones is crucial for atypical
pronunciation assessment, which involves distinguishing atypical from typical
pronunciations. However, recent phoneme classifier-based approaches often
simplify this by treating various realizations as a single phoneme, bypassing
the complexity of modeling allophonic variation. Motivated by the acoustic
modeling capabilities of frozen self-supervised speech model (S3M) features, we
propose MixGoP, a novel approach that leverages Gaussian mixture models to
model phoneme distributions with multiple subclusters. Our experiments show
that MixGoP achieves state-of-the-art performance across four out of five
datasets, including dysarthric and non-native speech. Our analysis further
suggests that S3M features capture allophonic variation more effectively than
MFCCs and Mel spectrograms, highlighting the benefits of integrating MixGoP
with S3M features.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 20:46:42 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 03:38:32 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Choi",
"Kwanghee",
""
],
[
"Yeo",
"Eunjung",
""
],
[
"Chang",
"Kalvin",
""
],
[
"Watanabe",
"Shinji",
""
],
[
"Mortensen",
"David",
""
]
] | TITLE: Leveraging Allophony in Self-Supervised Speech Models for Atypical
Pronunciation Assessment
ABSTRACT: Allophony refers to the variation in the phonetic realization of a phoneme
based on its phonetic environment. Modeling allophones is crucial for atypical
pronunciation assessment, which involves distinguishing atypical from typical
pronunciations. However, recent phoneme classifier-based approaches often
simplify this by treating various realizations as a single phoneme, bypassing
the complexity of modeling allophonic variation. Motivated by the acoustic
modeling capabilities of frozen self-supervised speech model (S3M) features, we
propose MixGoP, a novel approach that leverages Gaussian mixture models to
model phoneme distributions with multiple subclusters. Our experiments show
that MixGoP achieves state-of-the-art performance across four out of five
datasets, including dysarthric and non-native speech. Our analysis further
suggests that S3M features capture allophonic variation more effectively than
MFCCs and Mel spectrograms, highlighting the benefits of integrating MixGoP
with S3M features.
|
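The MixGoP record above scores pronunciations by modeling each phoneme's frozen S3M features with a Gaussian mixture over several subclusters. The sketch below reproduces the shape of that idea with scikit-learn; the feature dimensionality, mixture size, and random "features" are placeholders, and the goodness-of-pronunciation score here (target log-likelihood minus the best competitor's) is one common formulation rather than the paper's exact definition.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Placeholder "S3M features": phoneme -> (n_frames, dim) training array.
train_feats = {"AA": rng.normal(0, 1, (500, 32)),
               "IY": rng.normal(2, 1, (500, 32))}

# One GMM per phoneme, with several subclusters to absorb allophonic variation.
gmms = {ph: GaussianMixture(n_components=4, covariance_type="diag",
                            random_state=0).fit(X)
        for ph, X in train_feats.items()}

def mixture_gop(frames: np.ndarray, target_phoneme: str) -> float:
    """Target log-likelihood minus the best competing phoneme's, averaged over
    the segment's frames; higher means a more typical realization."""
    scores = {ph: g.score_samples(frames) for ph, g in gmms.items()}
    target = scores[target_phoneme]
    competitors = np.max(
        [s for ph, s in scores.items() if ph != target_phoneme], axis=0)
    return float(np.mean(target - competitors))

test_segment = rng.normal(0, 1, (20, 32))   # frames claimed to be "AA"
print(mixture_gop(test_segment, "AA"))
```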
2502.09303 | Minghong Wu | Minghong Wu, Minghui Liwang, Yuhan Su, Li Li, Seyyedali
Hosseinalipour, Xianbin Wang, Huaiyu Dai, Zhenzhen Jiao | Towards Seamless Hierarchical Federated Learning under Intermittent
Client Participation: A Stagewise Decision-Making Methodology | 20 pages, 8 figures,5 tables | null | null | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning (FL) offers a pioneering distributed learning paradigm
that enables devices/clients to build a shared global model. This global model
is obtained through frequent model transmissions between clients and a central
server, which may cause high latency, energy consumption, and congestion over
backhaul links. To overcome these drawbacks, Hierarchical Federated Learning
(HFL) has emerged, which organizes clients into multiple clusters and utilizes
edge nodes (e.g., edge servers) for intermediate model aggregations between
clients and the central server. Current research on HFL mainly focuses on
enhancing model accuracy, latency, and energy consumption in scenarios with a
stable/fixed set of clients. However, addressing the dynamic availability of
clients -- a critical aspect of real-world scenarios -- remains underexplored.
This study delves into optimizing client selection and client-to-edge
associations in HFL under intermittent client participation so as to minimize
overall system costs (i.e., delay and energy), while achieving fast model
convergence. We unveil that achieving this goal involves solving a complex
NP-hard problem. To tackle this, we propose a stagewise methodology that splits
the solution into two stages, referred to as Plan A and Plan B. Plan A focuses
on identifying long-term clients with a high chance of participation in
subsequent model training rounds. Plan B serves as a backup, selecting
alternative clients when long-term clients are unavailable during model
training rounds. This stagewise methodology offers a fresh perspective on
client selection that can enhance both HFL and conventional FL via enabling
low-overhead decision-making processes. Through evaluations on MNIST and
CIFAR-10 datasets, we show that our methodology outperforms existing benchmarks
in terms of model accuracy and system costs.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2025 13:16:10 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 13:48:11 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wu",
"Minghong",
""
],
[
"Liwang",
"Minghui",
""
],
[
"Su",
"Yuhan",
""
],
[
"Li",
"Li",
""
],
[
"Hosseinalipour",
"Seyyedali",
""
],
[
"Wang",
"Xianbin",
""
],
[
"Dai",
"Huaiyu",
""
],
[
"Jiao",
"Zhenzhen",
""
]
] | TITLE: Towards Seamless Hierarchical Federated Learning under Intermittent
Client Participation: A Stagewise Decision-Making Methodology
ABSTRACT: Federated Learning (FL) offers a pioneering distributed learning paradigm
that enables devices/clients to build a shared global model. This global model
is obtained through frequent model transmissions between clients and a central
server, which may cause high latency, energy consumption, and congestion over
backhaul links. To overcome these drawbacks, Hierarchical Federated Learning
(HFL) has emerged, which organizes clients into multiple clusters and utilizes
edge nodes (e.g., edge servers) for intermediate model aggregations between
clients and the central server. Current research on HFL mainly focuses on
enhancing model accuracy, latency, and energy consumption in scenarios with a
stable/fixed set of clients. However, addressing the dynamic availability of
clients -- a critical aspect of real-world scenarios -- remains underexplored.
This study delves into optimizing client selection and client-to-edge
associations in HFL under intermittent client participation so as to minimize
overall system costs (i.e., delay and energy), while achieving fast model
convergence. We unveil that achieving this goal involves solving a complex
NP-hard problem. To tackle this, we propose a stagewise methodology that splits
the solution into two stages, referred to as Plan A and Plan B. Plan A focuses
on identifying long-term clients with a high chance of participation in
subsequent model training rounds. Plan B serves as a backup, selecting
alternative clients when long-term clients are unavailable during model
training rounds. This stagewise methodology offers a fresh perspective on
client selection that can enhance both HFL and conventional FL via enabling
low-overhead decision-making processes. Through evaluations on MNIST and
CIFAR-10 datasets, we show that our methodology outperforms existing benchmarks
in terms of model accuracy and system costs.
|
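The HFL record above selects clients in two stages: Plan A prefers long-term clients with a high estimated chance of participating again, and Plan B back-fills with currently available alternatives when they drop out. The sketch below is an illustrative greedy version of that split; the participation probabilities, costs, and threshold are invented, and the paper's actual optimization is considerably richer.

```python
import random

def select_clients(clients, available, k, p_threshold=0.7):
    """Plan A: available long-term clients (high participation probability),
    cheapest first. Plan B: back-fill with other available clients.
    `clients` maps id -> (participation_probability, cost)."""
    long_term = [c for c, (p, _) in clients.items()
                 if p >= p_threshold and c in available]
    plan_a = sorted(long_term, key=lambda c: clients[c][1])[:k]
    backups = sorted((c for c in available if c not in plan_a),
                     key=lambda c: clients[c][1])
    return (plan_a + backups)[:k]            # Plan B fills any remaining slots

random.seed(0)
clients = {i: (random.random(), random.uniform(1, 5)) for i in range(20)}
available = set(random.sample(range(20), 12))
print(select_clients(clients, available, k=5))
```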
2502.10436 | Donato Crisostomi | Tommaso Mencattini, Adrian Robert Minut, Donato Crisostomi, Andrea
Santilli, Emanuele Rodol\`a | MERGE$^3$: Efficient Evolutionary Merging on Consumer-grade GPUs | 19 pages, 13 figures | null | null | null | cs.NE cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Evolutionary model merging enables the creation of high-performing multi-task
models but remains computationally prohibitive for consumer hardware. We
introduce MERGE$^3$, an efficient framework that makes evolutionary merging
feasible on a single GPU by reducing fitness computation costs 50$\times$ while
preserving performance. MERGE$^3$ achieves this by Extracting a reduced dataset
for evaluation, Estimating model abilities using Item Response Theory (IRT),
and Evolving optimal merges via IRT-based performance estimators. Our method
enables state-of-the-art multilingual and cross-lingual merging, transferring
knowledge across languages with significantly lower computational overhead. We
provide theoretical guarantees and an open-source library, democratizing
high-quality model merging.
| [
{
"version": "v1",
"created": "Sun, 9 Feb 2025 14:24:16 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 12:04:09 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Mencattini",
"Tommaso",
""
],
[
"Minut",
"Adrian Robert",
""
],
[
"Crisostomi",
"Donato",
""
],
[
"Santilli",
"Andrea",
""
],
[
"Rodolà",
"Emanuele",
""
]
] | TITLE: MERGE$^3$: Efficient Evolutionary Merging on Consumer-grade GPUs
ABSTRACT: Evolutionary model merging enables the creation of high-performing multi-task
models but remains computationally prohibitive for consumer hardware. We
introduce MERGE$^3$, an efficient framework that makes evolutionary merging
feasible on a single GPU by reducing fitness computation costs 50$\times$ while
preserving performance. MERGE$^3$ achieves this by Extracting a reduced dataset
for evaluation, Estimating model abilities using Item Response Theory (IRT),
and Evolving optimal merges via IRT-based performance estimators. Our method
enables state-of-the-art multilingual and cross-lingual merging, transferring
knowledge across languages with significantly lower computational overhead. We
provide theoretical guarantees and an open-source library, democratizing
high-quality model merging.
|
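MERGE$^3$, per the record above, estimates each model's ability from a reduced evaluation set using Item Response Theory before evolving the merge. The sketch below shows maximum-likelihood ability estimation under the simplest IRT variant, a Rasch (1-parameter logistic) model with known item difficulties; the difficulties and the 0/1 responses are made up, and the paper may well use a richer IRT formulation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rasch_ability(responses: np.ndarray, difficulties: np.ndarray) -> float:
    """MLE of a model's ability theta under the Rasch model:
    P(correct) = sigmoid(theta - difficulty), given 0/1 responses."""
    def neg_log_lik(theta):
        p = 1.0 / (1.0 + np.exp(-(theta - difficulties)))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return float(minimize_scalar(neg_log_lik, bounds=(-6, 6), method="bounded").x)

difficulties = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5])   # calibrated items
responses = np.array([1, 1, 1, 1, 0, 0])                     # one model's answers
print(rasch_ability(responses, difficulties))                 # estimated ability
```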
2502.11183 | Ante Wang | Ante Wang, Linfeng Song, Ye Tian, Dian Yu, Haitao Mi, Xiangyu Duan,
Zhaopeng Tu, Jinsong Su, Dong Yu | Don't Get Lost in the Trees: Streamlining LLM Reasoning by Overcoming
Tree Search Exploration Pitfalls | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in tree search algorithms guided by verifiers have
significantly enhanced the reasoning capabilities of large language models
(LLMs), but at the cost of increased computational resources. In this work, we
identify two key challenges contributing to this inefficiency:
$\textit{over-exploration}$ due to redundant states with semantically
equivalent content, and $\textit{under-exploration}$ caused by high variance in
verifier scoring leading to frequent trajectory switching. To address these
issues, we propose FETCH, an e$\textbf{f}$fici$\textbf{e}$nt $\textbf{t}$ree
sear$\textbf{ch}$ framework, which is a flexible, plug-and-play system
compatible with various tree search algorithms. Our framework mitigates
over-exploration by merging semantically similar states using agglomerative
clustering of text embeddings obtained from a fine-tuned SimCSE model. To
tackle under-exploration, we enhance verifiers by incorporating temporal
difference learning with adjusted $\lambda$-returns during training to reduce
variance, and employing a verifier ensemble to aggregate scores during
inference. Experiments on GSM8K, GSM-Plus, and MATH datasets demonstrate that
our methods significantly improve reasoning accuracy and computational
efficiency across four different tree search algorithms, paving the way for
more practical applications of LLM-based reasoning. The code is available at
https://github.com/Soistesimmer/Fetch.
| [
{
"version": "v1",
"created": "Sun, 16 Feb 2025 16:12:01 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 09:25:06 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wang",
"Ante",
""
],
[
"Song",
"Linfeng",
""
],
[
"Tian",
"Ye",
""
],
[
"Yu",
"Dian",
""
],
[
"Mi",
"Haitao",
""
],
[
"Duan",
"Xiangyu",
""
],
[
"Tu",
"Zhaopeng",
""
],
[
"Su",
"Jinsong",
""
],
[
"Yu",
"Dong",
""
]
] | TITLE: Don't Get Lost in the Trees: Streamlining LLM Reasoning by Overcoming
Tree Search Exploration Pitfalls
ABSTRACT: Recent advancements in tree search algorithms guided by verifiers have
significantly enhanced the reasoning capabilities of large language models
(LLMs), but at the cost of increased computational resources. In this work, we
identify two key challenges contributing to this inefficiency:
$\textit{over-exploration}$ due to redundant states with semantically
equivalent content, and $\textit{under-exploration}$ caused by high variance in
verifier scoring leading to frequent trajectory switching. To address these
issues, we propose FETCH, an e$\textbf{f}$fici$\textbf{e}$nt $\textbf{t}$ree
sear$\textbf{ch}$ framework, which is a flexible, plug-and-play system
compatible with various tree search algorithms. Our framework mitigates
over-exploration by merging semantically similar states using agglomerative
clustering of text embeddings obtained from a fine-tuned SimCSE model. To
tackle under-exploration, we enhance verifiers by incorporating temporal
difference learning with adjusted $\lambda$-returns during training to reduce
variance, and employing a verifier ensemble to aggregate scores during
inference. Experiments on GSM8K, GSM-Plus, and MATH datasets demonstrate that
our methods significantly improve reasoning accuracy and computational
efficiency across four different tree search algorithms, paving the way for
more practical applications of LLM-based reasoning. The code is available at
https://github.com/Soistesimmer/Fetch.
|
2502.12013 | Krishn V. Kher | Krishn Vishwas Kher, Lokesh Venkata Siva Maruthi Badisa, Kusampudi
Venkata Datta Sri Harsha, Chitneedi Geetha Sowmya, Saksham Mittal,
SakethaNath Jagarlapudi | Unsupervised Structural-Counterfactual Generation under Domain Shift | Updated author list | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by the burgeoning interest in cross-domain learning, we present a
novel generative modeling challenge: generating counterfactual samples in a
target domain based on factual observations from a source domain. Our approach
operates within an unsupervised paradigm devoid of parallel or joint datasets,
relying exclusively on distinct observational samples and causal graphs for
each domain. This setting presents challenges that surpass those of
conventional counterfactual generation. Central to our methodology is the
disambiguation of exogenous causes into effect-intrinsic and domain-intrinsic
categories. This differentiation facilitates the integration of domain-specific
causal graphs into a unified joint causal graph via shared effect-intrinsic
exogenous variables. We propose leveraging Neural Causal models within this
joint framework to enable accurate counterfactual generation under standard
identifiability assumptions. Furthermore, we introduce a novel loss function
that effectively segregates effect-intrinsic from domain-intrinsic variables
during model training. Given a factual observation, our framework combines the
posterior distribution of effect-intrinsic variables from the source domain
with the prior distribution of domain-intrinsic variables from the target
domain to synthesize the desired counterfactuals, adhering to Pearl's causal
hierarchy. Intriguingly, when domain shifts are restricted to alterations in
causal mechanisms without accompanying covariate shifts, our training regimen
parallels the resolution of a conditional optimal transport problem. Empirical
evaluations on a synthetic dataset show that our framework generates
counterfactuals in the target domain that very closely resemble the ground
truth.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 16:48:16 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 12:42:42 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Kher",
"Krishn Vishwas",
""
],
[
"Badisa",
"Lokesh Venkata Siva Maruthi",
""
],
[
"Harsha",
"Kusampudi Venkata Datta Sri",
""
],
[
"Sowmya",
"Chitneedi Geetha",
""
],
[
"Mittal",
"Saksham",
""
],
[
"Jagarlapudi",
"SakethaNath",
""
]
] | TITLE: Unsupervised Structural-Counterfactual Generation under Domain Shift
ABSTRACT: Motivated by the burgeoning interest in cross-domain learning, we present a
novel generative modeling challenge: generating counterfactual samples in a
target domain based on factual observations from a source domain. Our approach
operates within an unsupervised paradigm devoid of parallel or joint datasets,
relying exclusively on distinct observational samples and causal graphs for
each domain. This setting presents challenges that surpass those of
conventional counterfactual generation. Central to our methodology is the
disambiguation of exogenous causes into effect-intrinsic and domain-intrinsic
categories. This differentiation facilitates the integration of domain-specific
causal graphs into a unified joint causal graph via shared effect-intrinsic
exogenous variables. We propose leveraging Neural Causal models within this
joint framework to enable accurate counterfactual generation under standard
identifiability assumptions. Furthermore, we introduce a novel loss function
that effectively segregates effect-intrinsic from domain-intrinsic variables
during model training. Given a factual observation, our framework combines the
posterior distribution of effect-intrinsic variables from the source domain
with the prior distribution of domain-intrinsic variables from the target
domain to synthesize the desired counterfactuals, adhering to Pearl's causal
hierarchy. Intriguingly, when domain shifts are restricted to alterations in
causal mechanisms without accompanying covariate shifts, our training regimen
parallels the resolution of a conditional optimal transport problem. Empirical
evaluations on a synthetic dataset show that our framework generates
counterfactuals in the target domain that very closely resemble the ground
truth.
|
2502.12138 | Shangzhan Zhang | Shangzhan Zhang, Jianyuan Wang, Yinghao Xu, Nan Xue, Christian
Rupprecht, Xiaowei Zhou, Yujun Shen, Gordon Wetzstein | FLARE: Feed-forward Geometry, Appearance and Camera Estimation from
Uncalibrated Sparse Views | CVPR 2025. Website: https://zhanghe3z.github.io/FLARE/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present FLARE, a feed-forward model designed to infer high-quality camera
poses and 3D geometry from uncalibrated sparse-view images (i.e., as few as 2-8
inputs), which is a challenging yet practical setting in real-world
applications. Our solution features a cascaded learning paradigm with camera
pose serving as the critical bridge, recognizing its essential role in mapping
3D structures onto 2D image planes. Concretely, FLARE starts with camera pose
estimation, whose results condition the subsequent learning of geometric
structure and appearance, optimized through the objectives of geometry
reconstruction and novel-view synthesis. Utilizing large-scale public datasets
for training, our method delivers state-of-the-art performance in the tasks of
pose estimation, geometry reconstruction, and novel view synthesis, while
maintaining the inference efficiency (i.e., less than 0.5 seconds). The project
page and code can be found at: https://zhanghe3z.github.io/FLARE/
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 18:54:05 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Feb 2025 20:27:35 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 12:09:29 GMT"
},
{
"version": "v4",
"created": "Mon, 24 Mar 2025 11:30:32 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Shangzhan",
""
],
[
"Wang",
"Jianyuan",
""
],
[
"Xu",
"Yinghao",
""
],
[
"Xue",
"Nan",
""
],
[
"Rupprecht",
"Christian",
""
],
[
"Zhou",
"Xiaowei",
""
],
[
"Shen",
"Yujun",
""
],
[
"Wetzstein",
"Gordon",
""
]
] | TITLE: FLARE: Feed-forward Geometry, Appearance and Camera Estimation from
Uncalibrated Sparse Views
ABSTRACT: We present FLARE, a feed-forward model designed to infer high-quality camera
poses and 3D geometry from uncalibrated sparse-view images (i.e., as few as 2-8
inputs), which is a challenging yet practical setting in real-world
applications. Our solution features a cascaded learning paradigm with camera
pose serving as the critical bridge, recognizing its essential role in mapping
3D structures onto 2D image planes. Concretely, FLARE starts with camera pose
estimation, whose results condition the subsequent learning of geometric
structure and appearance, optimized through the objectives of geometry
reconstruction and novel-view synthesis. Utilizing large-scale public datasets
for training, our method delivers state-of-the-art performance in the tasks of
pose estimation, geometry reconstruction, and novel view synthesis, while
maintaining the inference efficiency (i.e., less than 0.5 seconds). The project
page and code can be found at: https://zhanghe3z.github.io/FLARE/
|
2502.13898 | Daniel Oliveira | Daniel A. P. Oliveira, Louren\c{c}o Teodoro, David Martins de Matos | GroundCap: A Visually Grounded Image Captioning Dataset | 37 pages | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current image captioning systems lack the ability to link descriptive text to
specific visual elements, making their outputs difficult to verify. While
recent approaches offer some grounding capabilities, they cannot track object
identities across multiple references or ground both actions and objects
simultaneously. We propose a novel ID-based grounding system that enables
consistent object reference tracking and action-object linking, and present
GroundCap, a dataset containing 52,016 images from 77 movies, with 344
human-annotated and 52,016 automatically generated captions. Each caption is
grounded on detected objects (132 classes) and actions (51 classes) using a tag
system that maintains object identity while linking actions to the
corresponding objects. Our approach features persistent object IDs for
reference tracking, explicit action-object linking, and segmentation of
background elements through K-means clustering. We propose gMETEOR, a metric
combining caption quality with grounding accuracy, and establish baseline
performance by fine-tuning Pixtral-12B. Human evaluation demonstrates our
approach's effectiveness in producing verifiable descriptions with coherent
object references.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2025 17:31:59 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 17:51:52 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Oliveira",
"Daniel A. P.",
""
],
[
"Teodoro",
"Lourenço",
""
],
[
"de Matos",
"David Martins",
""
]
] | TITLE: GroundCap: A Visually Grounded Image Captioning Dataset
ABSTRACT: Current image captioning systems lack the ability to link descriptive text to
specific visual elements, making their outputs difficult to verify. While
recent approaches offer some grounding capabilities, they cannot track object
identities across multiple references or ground both actions and objects
simultaneously. We propose a novel ID-based grounding system that enables
consistent object reference tracking and action-object linking, and present
GroundCap, a dataset containing 52,016 images from 77 movies, with 344
human-annotated and 52,016 automatically generated captions. Each caption is
grounded on detected objects (132 classes) and actions (51 classes) using a tag
system that maintains object identity while linking actions to the
corresponding objects. Our approach features persistent object IDs for
reference tracking, explicit action-object linking, and segmentation of
background elements through K-means clustering. We propose gMETEOR, a metric
combining caption quality with grounding accuracy, and establish baseline
performance by fine-tuning Pixtral-12B. Human evaluation demonstrates our
approach's effectiveness in producing verifiable descriptions with coherent
object references.
|
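The GroundCap record above introduces gMETEOR, which couples caption quality with grounding accuracy. The sketch below shows one plausible way to combine the two signals, a harmonic mean of a caption-quality score and an F1 over the grounded object IDs; the `caption_score` input stands in for a METEOR implementation computed elsewhere, and the exact combination used by the paper may differ.

```python
def grounding_f1(pred_ids, gold_ids) -> float:
    """F1 between the object IDs referenced by the generated caption and the
    IDs in the grounded reference caption."""
    pred, gold = set(pred_ids), set(gold_ids)
    if not pred or not gold:
        return 0.0
    precision = len(pred & gold) / len(pred)
    recall = len(pred & gold) / len(gold)
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def combined_score(caption_score: float, pred_ids, gold_ids) -> float:
    """Illustrative gMETEOR-style score: harmonic mean of caption quality
    (e.g., METEOR) and grounding F1."""
    g = grounding_f1(pred_ids, gold_ids)
    return 2 * caption_score * g / (caption_score + g) if caption_score + g else 0.0

print(combined_score(0.45, pred_ids=["person-1", "car-2"],
                     gold_ids=["person-1", "dog-3"]))
```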
2502.14454 | Haeyun Choi | Haeyun Choi, Heemin Yang, Janghyeok Han, Sunghyun Cho | Exploiting Deblurring Networks for Radiance Fields | Accepted to CVPR 2025. Project page:
https://haeyun-choi.github.io/DDRF_page/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose DeepDeblurRF, a novel radiance field deblurring
approach that can synthesize high-quality novel views from blurred training
views with significantly reduced training time. DeepDeblurRF leverages deep
neural network (DNN)-based deblurring modules to enjoy their deblurring
performance and computational efficiency. To effectively combine DNN-based
deblurring and radiance field construction, we propose a novel radiance field
(RF)-guided deblurring and an iterative framework that performs RF-guided
deblurring and radiance field construction in an alternating manner. Moreover,
DeepDeblurRF is compatible with various scene representations, such as voxel
grids and 3D Gaussians, expanding its applicability. We also present
BlurRF-Synth, the first large-scale synthetic dataset for training radiance
field deblurring frameworks. We conduct extensive experiments on both camera
motion blur and defocus blur, demonstrating that DeepDeblurRF achieves
state-of-the-art novel-view synthesis quality with significantly reduced
training time.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 11:11:18 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 10:52:10 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Choi",
"Haeyun",
""
],
[
"Yang",
"Heemin",
""
],
[
"Han",
"Janghyeok",
""
],
[
"Cho",
"Sunghyun",
""
]
] | TITLE: Exploiting Deblurring Networks for Radiance Fields
ABSTRACT: In this paper, we propose DeepDeblurRF, a novel radiance field deblurring
approach that can synthesize high-quality novel views from blurred training
views with significantly reduced training time. DeepDeblurRF leverages deep
neural network (DNN)-based deblurring modules to enjoy their deblurring
performance and computational efficiency. To effectively combine DNN-based
deblurring and radiance field construction, we propose a novel radiance field
(RF)-guided deblurring and an iterative framework that performs RF-guided
deblurring and radiance field construction in an alternating manner. Moreover,
DeepDeblurRF is compatible with various scene representations, such as voxel
grids and 3D Gaussians, expanding its applicability. We also present
BlurRF-Synth, the first large-scale synthetic dataset for training radiance
field deblurring frameworks. We conduct extensive experiments on both camera
motion blur and defocus blur, demonstrating that DeepDeblurRF achieves
state-of-the-art novel-view synthesis quality with significantly reduced
training time.
|
2502.19908 | Dongkun Zhang | Dongkun Zhang, Jiaming Liang, Ke Guo, Sha Lu, Qi Wang, Rong Xiong,
Zhenwei Miao, Yue Wang | CarPlanner: Consistent Auto-regressive Trajectory Planning for
Large-scale Reinforcement Learning in Autonomous Driving | CVPR 2025 | null | null | null | cs.RO cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Trajectory planning is vital for autonomous driving, ensuring safe and
efficient navigation in complex environments. While recent learning-based
methods, particularly reinforcement learning (RL), have shown promise in
specific scenarios, RL planners struggle with training inefficiencies and
managing large-scale, real-world driving scenarios. In this paper, we introduce
\textbf{CarPlanner}, a \textbf{C}onsistent \textbf{a}uto-\textbf{r}egressive
\textbf{Planner} that uses RL to generate multi-modal trajectories. The
auto-regressive structure enables efficient large-scale RL training, while the
incorporation of consistency ensures stable policy learning by maintaining
coherent temporal consistency across time steps. Moreover, CarPlanner employs a
generation-selection framework with an expert-guided reward function and an
invariant-view module, simplifying RL training and enhancing policy
performance. Extensive analysis demonstrates that our proposed RL framework
effectively addresses the challenges of training efficiency and performance
enhancement, positioning CarPlanner as a promising solution for trajectory
planning in autonomous driving. To the best of our knowledge, we are the first
to demonstrate that the RL-based planner can surpass both IL- and rule-based
state-of-the-arts (SOTAs) on the challenging large-scale real-world dataset
nuPlan. Our proposed CarPlanner surpasses RL-, IL-, and rule-based SOTA
approaches within this demanding dataset.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 09:26:22 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 06:36:27 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 14:03:59 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Dongkun",
""
],
[
"Liang",
"Jiaming",
""
],
[
"Guo",
"Ke",
""
],
[
"Lu",
"Sha",
""
],
[
"Wang",
"Qi",
""
],
[
"Xiong",
"Rong",
""
],
[
"Miao",
"Zhenwei",
""
],
[
"Wang",
"Yue",
""
]
] | TITLE: CarPlanner: Consistent Auto-regressive Trajectory Planning for
Large-scale Reinforcement Learning in Autonomous Driving
ABSTRACT: Trajectory planning is vital for autonomous driving, ensuring safe and
efficient navigation in complex environments. While recent learning-based
methods, particularly reinforcement learning (RL), have shown promise in
specific scenarios, RL planners struggle with training inefficiencies and
managing large-scale, real-world driving scenarios. In this paper, we introduce
\textbf{CarPlanner}, a \textbf{C}onsistent \textbf{a}uto-\textbf{r}egressive
\textbf{Planner} that uses RL to generate multi-modal trajectories. The
auto-regressive structure enables efficient large-scale RL training, while the
incorporation of consistency ensures stable policy learning by maintaining
coherent temporal consistency across time steps. Moreover, CarPlanner employs a
generation-selection framework with an expert-guided reward function and an
invariant-view module, simplifying RL training and enhancing policy
performance. Extensive analysis demonstrates that our proposed RL framework
effectively addresses the challenges of training efficiency and performance
enhancement, positioning CarPlanner as a promising solution for trajectory
planning in autonomous driving. To the best of our knowledge, we are the first
to demonstrate that the RL-based planner can surpass both IL- and rule-based
state-of-the-arts (SOTAs) on the challenging large-scale real-world dataset
nuPlan. Our proposed CarPlanner surpasses RL-, IL-, and rule-based SOTA
approaches within this demanding dataset.
|
2502.19958 | Ke Niu | Ke Niu, Haiyang Yu, Mengyang Zhao, Teng Fu, Siyang Yi, Wei Lu, Bin Li,
Xuelin Qian, Xiangyang Xue | ChatReID: Open-ended Interactive Person Retrieval via Hierarchical
Progressive Tuning for Vision Language Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Person re-identification (Re-ID) is a crucial task in computer vision, aiming
to recognize individuals across non-overlapping camera views. While recent
advanced vision-language models (VLMs) excel in logical reasoning and
multi-task generalization, their applications in Re-ID tasks remain limited.
They either struggle to perform accurate matching based on identity-relevant
features or assist image-dominated branches as auxiliary semantics. In this
paper, we propose a novel framework ChatReID, that shifts the focus towards a
text-side-dominated retrieval paradigm, enabling flexible and interactive
re-identification. To integrate the reasoning abilities of language models into
Re-ID pipelines, we first present a large-scale instruction dataset, which
contains more than 8 million prompts to support model fine-tuning. Next, we
introduce a hierarchical progressive tuning strategy, which endows the model with Re-ID
ability through three stages of tuning, i.e., from person attribute
understanding to fine-grained image retrieval and to multi-modal task
reasoning. Extensive experiments across ten popular benchmarks demonstrate that
ChatReID outperforms existing methods, achieving state-of-the-art performance
in all Re-ID tasks. More experiments demonstrate that ChatReID not only has the
ability to recognize fine-grained details but also to integrate them into a
coherent reasoning process.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 10:34:14 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 11:13:15 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Niu",
"Ke",
""
],
[
"Yu",
"Haiyang",
""
],
[
"Zhao",
"Mengyang",
""
],
[
"Fu",
"Teng",
""
],
[
"Yi",
"Siyang",
""
],
[
"Lu",
"Wei",
""
],
[
"Li",
"Bin",
""
],
[
"Qian",
"Xuelin",
""
],
[
"Xue",
"Xiangyang",
""
]
] | TITLE: ChatReID: Open-ended Interactive Person Retrieval via Hierarchical
Progressive Tuning for Vision Language Models
ABSTRACT: Person re-identification (Re-ID) is a crucial task in computer vision, aiming
to recognize individuals across non-overlapping camera views. While recent
advanced vision-language models (VLMs) excel in logical reasoning and
multi-task generalization, their applications in Re-ID tasks remain limited.
They either struggle to perform accurate matching based on identity-relevant
features or assist image-dominated branches as auxiliary semantics. In this
paper, we propose a novel framework ChatReID, that shifts the focus towards a
text-side-dominated retrieval paradigm, enabling flexible and interactive
re-identification. To integrate the reasoning abilities of language models into
Re-ID pipelines, we first present a large-scale instruction dataset, which
contains more than 8 million prompts to support model fine-tuning. Next, we
introduce a hierarchical progressive tuning strategy, which endows the model with Re-ID
ability through three stages of tuning, i.e., from person attribute
understanding to fine-grained image retrieval and to multi-modal task
reasoning. Extensive experiments across ten popular benchmarks demonstrate that
ChatReID outperforms existing methods, achieving state-of-the-art performance
in all Re-ID tasks. More experiments demonstrate that ChatReID not only has the
ability to recognize fine-grained details but also to integrate them into a
coherent reasoning process.
|
2502.20808 | Wang Peijie | Peijie Wang, Zhong-Zhi Li, Fei Yin, Xin Yang, Dekang Ran, Cheng-Lin
Liu | MV-MATH: Evaluating Multimodal Math Reasoning in Multi-Visual Contexts | 47 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal Large Language Models (MLLMs) have shown promising capabilities in
mathematical reasoning within visual contexts across various datasets. However,
most existing multimodal math benchmarks are limited to single-visual contexts,
which diverges from the multi-visual scenarios commonly encountered in
real-world mathematical applications. To address this gap, we introduce
MV-MATH: a meticulously curated dataset of 2,009 high-quality mathematical
problems. Each problem integrates multiple images interleaved with text,
derived from authentic K-12 scenarios, and enriched with detailed annotations.
MV-MATH includes multiple-choice, free-form, and multi-step questions, covering
11 subject areas across 3 difficulty levels, and serves as a comprehensive and
rigorous benchmark for assessing MLLMs' mathematical reasoning in multi-visual
contexts. Through extensive experimentation, we observe that MLLMs encounter
substantial challenges in multi-visual math tasks, with a considerable
performance gap relative to human capabilities on MV-MATH. Furthermore, we
analyze the performance and error patterns of various models, providing
insights into MLLMs' mathematical reasoning capabilities within multi-visual
settings.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 07:50:36 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 03:43:03 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 14:02:51 GMT"
},
{
"version": "v4",
"created": "Sun, 23 Mar 2025 14:55:02 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wang",
"Peijie",
""
],
[
"Li",
"Zhong-Zhi",
""
],
[
"Yin",
"Fei",
""
],
[
"Yang",
"Xin",
""
],
[
"Ran",
"Dekang",
""
],
[
"Liu",
"Cheng-Lin",
""
]
] | TITLE: MV-MATH: Evaluating Multimodal Math Reasoning in Multi-Visual Contexts
ABSTRACT: Multimodal Large Language Models (MLLMs) have shown promising capabilities in
mathematical reasoning within visual contexts across various datasets. However,
most existing multimodal math benchmarks are limited to single-visual contexts,
which diverges from the multi-visual scenarios commonly encountered in
real-world mathematical applications. To address this gap, we introduce
MV-MATH: a meticulously curated dataset of 2,009 high-quality mathematical
problems. Each problem integrates multiple images interleaved with text,
derived from authentic K-12 scenarios, and enriched with detailed annotations.
MV-MATH includes multiple-choice, free-form, and multi-step questions, covering
11 subject areas across 3 difficulty levels, and serves as a comprehensive and
rigorous benchmark for assessing MLLMs' mathematical reasoning in multi-visual
contexts. Through extensive experimentation, we observe that MLLMs encounter
substantial challenges in multi-visual math tasks, with a considerable
performance gap relative to human capabilities on MV-MATH. Furthermore, we
analyze the performance and error patterns of various models, providing
insights into MLLMs' mathematical reasoning capabilities within multi-visual
settings.
|
2503.00068 | Ziyu Wu | Ziyu Wu, Yufan Xiong, Mengting Niu, Fangting Xie, Quan Wan, Qijun
Ying, Boyan Liu, Xiaohui Cai | PI-HMR: Towards Robust In-bed Temporal Human Shape Reconstruction with
Contact Pressure Sensing | Accepeted by CVPR2025 | null | null | null | cs.CV cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-term in-bed monitoring benefits automatic and real-time health
management within healthcare, and the advancement of human shape reconstruction
technologies further enhances the representation and visualization of users'
activity patterns. However, existing technologies are primarily based on visual
cues, facing serious challenges in non-line-of-sight and privacy-sensitive
in-bed scenes. Pressure-sensing bedsheets offer a promising solution for
real-time motion reconstruction. Yet, limited exploration in model designs and
data has hindered its further development. To tackle these issues, we propose
a general framework that bridges gaps in data annotation and model design.
Firstly, we introduce SMPLify-IB, an optimization method that overcomes the
depth ambiguity issue in top-view scenarios through gravity constraints,
enabling the generation of high-quality 3D human shape annotations for in-bed
datasets. Then we present PI-HMR, a temporal human shape estimator to
regress meshes from pressure sequences. By integrating multi-scale feature
fusion with high-pressure distribution and spatial position priors, PI-HMR
outperforms SOTA methods with a 17.01 mm decrease in Mean-Per-Joint-Error. This work
provides a whole
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 12:42:44 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 10:01:54 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wu",
"Ziyu",
""
],
[
"Xiong",
"Yufan",
""
],
[
"Niu",
"Mengting",
""
],
[
"Xie",
"Fangting",
""
],
[
"Wan",
"Quan",
""
],
[
"Ying",
"Qijun",
""
],
[
"Liu",
"Boyan",
""
],
[
"Cai",
"Xiaohui",
""
]
] | TITLE: PI-HMR: Towards Robust In-bed Temporal Human Shape Reconstruction with
Contact Pressure Sensing
ABSTRACT: Long-term in-bed monitoring benefits automatic and real-time health
management within healthcare, and the advancement of human shape reconstruction
technologies further enhances the representation and visualization of users'
activity patterns. However, existing technologies are primarily based on visual
cues, facing serious challenges in non-line-of-sight and privacy-sensitive
in-bed scenes. Pressure-sensing bedsheets offer a promising solution for
real-time motion reconstruction. Yet, limited exploration in model designs and
data has hindered its further development. To tackle these issues, we propose
a general framework that bridges gaps in data annotation and model design.
Firstly, we introduce SMPLify-IB, an optimization method that overcomes the
depth ambiguity issue in top-view scenarios through gravity constraints,
enabling the generation of high-quality 3D human shape annotations for in-bed
datasets. Then we present PI-HMR, a temporal human shape estimator to
regress meshes from pressure sequences. By integrating multi-scale feature
fusion with high-pressure distribution and spatial position priors, PI-HMR
outperforms SOTA methods with a 17.01 mm decrease in Mean-Per-Joint-Error. This work
provides a whole
|
2503.00131 | Farouk Mokhtar | Farouk Mokhtar, Joosep Pata, Dolores Garcia, Eric Wulff, Mengke Zhang,
Michael Kagan, Javier Duarte | Fine-tuning machine-learned particle-flow reconstruction for new
detector geometries in future colliders | 20 pages, 13 figures | null | null | null | hep-ex cs.LG hep-ph physics.data-an physics.ins-det | http://creativecommons.org/licenses/by/4.0/ | We demonstrate transfer learning capabilities in a machine-learned algorithm
trained for particle-flow reconstruction in high energy particle colliders.
This paper presents a cross-detector fine-tuning study, where we initially
pre-train the model on a large full simulation dataset from one detector
design, and subsequently fine-tune the model on a sample with a different
collider and detector design. Specifically, we use the Compact Linear Collider
detector (CLICdet) model for the initial training set, and demonstrate
successful knowledge transfer to the CLIC-like detector (CLD) proposed for the
Future Circular Collider in electron-positron mode (FCC-ee). We show that with
an order of magnitude fewer samples from the second dataset, we can achieve the
same performance as a costly training from scratch, across particle-level and
event-level performance metrics, including jet and missing transverse momentum
resolution. Furthermore, we find that the fine-tuned model achieves comparable
performance to the traditional rule-based particle-flow approach on event-level
metrics after training on 100,000 CLD events, whereas a model trained from
scratch requires at least 1 million CLD events to achieve similar
reconstruction performance. To our knowledge, this represents the first
full-simulation cross-detector transfer learning study for particle-flow
reconstruction. These findings offer valuable insights towards building large
foundation models that can be fine-tuned across different detector designs and
geometries, helping to accelerate the development cycle for new detectors and
opening the door to rapid detector design and optimization using machine
learning.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 19:16:01 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 17:21:04 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Mokhtar",
"Farouk",
""
],
[
"Pata",
"Joosep",
""
],
[
"Garcia",
"Dolores",
""
],
[
"Wulff",
"Eric",
""
],
[
"Zhang",
"Mengke",
""
],
[
"Kagan",
"Michael",
""
],
[
"Duarte",
"Javier",
""
]
] | TITLE: Fine-tuning machine-learned particle-flow reconstruction for new
detector geometries in future colliders
ABSTRACT: We demonstrate transfer learning capabilities in a machine-learned algorithm
trained for particle-flow reconstruction in high energy particle colliders.
This paper presents a cross-detector fine-tuning study, where we initially
pre-train the model on a large full simulation dataset from one detector
design, and subsequently fine-tune the model on a sample with a different
collider and detector design. Specifically, we use the Compact Linear Collider
detector (CLICdet) model for the initial training set, and demonstrate
successful knowledge transfer to the CLIC-like detector (CLD) proposed for the
Future Circular Collider in electron-positron mode (FCC-ee). We show that with
an order of magnitude fewer samples from the second dataset, we can achieve the
same performance as a costly training from scratch, across particle-level and
event-level performance metrics, including jet and missing transverse momentum
resolution. Furthermore, we find that the fine-tuned model achieves comparable
performance to the traditional rule-based particle-flow approach on event-level
metrics after training on 100,000 CLD events, whereas a model trained from
scratch requires at least 1 million CLD events to achieve similar
reconstruction performance. To our knowledge, this represents the first
full-simulation cross-detector transfer learning study for particle-flow
reconstruction. These findings offer valuable insights towards building large
foundation models that can be fine-tuned across different detector designs and
geometries, helping to accelerate the development cycle for new detectors and
opening the door to rapid detector design and optimization using machine
learning.
|
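The fine-tuning recipe described in the record above (pre-train on one detector's simulation, then adapt to a new detector with far fewer samples) follows the standard transfer-learning pattern. The sketch below is not the authors' particle-flow code; it is a minimal, generic PyTorch illustration in which the model class, feature dimensions, checkpoint path, and data loader are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class ParticleFlowNet(nn.Module):
    """Toy stand-in for a particle-flow model: maps per-particle features to class logits."""
    def __init__(self, n_features=16, n_classes=6, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        return self.head(self.encoder(x))

def fine_tune(pretrained_path, loader, epochs=5, lr=1e-4, device="cpu"):
    """Load weights pre-trained on detector A, then fine-tune on detector B's smaller dataset."""
    model = ParticleFlowNet().to(device)
    model.load_state_dict(torch.load(pretrained_path, map_location=device))
    # A smaller learning rate than a from-scratch run is a common choice when fine-tuning.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for features, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(features.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()
    return model
```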
2503.00861 | Taewoong Kang | Taewoong Kang, Sohyun Jeong, Hyojin Jang and Jaegul Choo | Zero-Shot Head Swapping in Real-World Scenarios | CVPR'25 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With growing demand in media and social networks for personalized images, the
need for advanced head-swapping techniques, integrating an entire head from the
head image with the body from the body image, has increased. However,
traditional head swapping methods heavily rely on face-centered cropped data
with primarily frontal-facing views, which limits their effectiveness in
real-world applications. Additionally, their masking methods, designed to indicate
regions requiring editing, are optimized for these types of datasets but
struggle to achieve seamless blending in complex situations, such as when the
original data includes features like long hair extending beyond the masked
area. To overcome these limitations and enhance adaptability in diverse and
complex scenarios, we propose a novel head swapping method, HID, that is robust
to images including the full head and the upper body, handles views ranging from
frontal to side, while automatically generating context-aware masks. For
automatic mask generation, we introduce the IOMask, which enables seamless
blending of the head and body, effectively addressing integration challenges.
We further introduce the hair injection module to capture hair details with
greater precision. Our experiments demonstrate that the proposed approach
achieves state-of-the-art performance in head swapping, providing visually
consistent and realistic results across a wide range of challenging conditions.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 11:44:23 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 04:38:17 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 06:03:55 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Kang",
"Taewoong",
""
],
[
"Jeong",
"Sohyun",
""
],
[
"Jang",
"Hyojin",
""
],
[
"Choo",
"Jaegul",
""
]
] | TITLE: Zero-Shot Head Swapping in Real-World Scenarios
ABSTRACT: With growing demand in media and social networks for personalized images, the
need for advanced head-swapping techniques, integrating an entire head from the
head image with the body from the body image, has increased. However,
traditional head swapping methods heavily rely on face-centered cropped data
with primarily frontal-facing views, which limits their effectiveness in
real-world applications. Additionally, their masking methods, designed to indicate
regions requiring editing, are optimized for these types of datasets but
struggle to achieve seamless blending in complex situations, such as when the
original data includes features like long hair extending beyond the masked
area. To overcome these limitations and enhance adaptability in diverse and
complex scenarios, we propose a novel head swapping method, HID, that is robust
to images including the full head and the upper body, handles views ranging from
frontal to side, while automatically generating context-aware masks. For
automatic mask generation, we introduce the IOMask, which enables seamless
blending of the head and body, effectively addressing integration challenges.
We further introduce the hair injection module to capture hair details with
greater precision. Our experiments demonstrate that the proposed approach
achieves state-of-the-art performance in head swapping, providing visually
consistent and realistic results across a wide range of challenging conditions.
|
2503.01113 | Hui Liu | Hui Liu, Chen Jia, Fan Shi, Xu Cheng, Shengyong Chen | SCSegamba: Lightweight Structure-Aware Vision Mamba for Crack
Segmentation in Structures | This paper has been accepted by CVPR2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Pixel-level segmentation of structural cracks across various scenarios
remains a considerable challenge. Current methods struggle to effectively model
crack morphology and texture while balancing segmentation quality with low
computational resource usage. To
overcome these limitations, we propose a lightweight Structure-Aware Vision
Mamba Network (SCSegamba), capable of generating high-quality pixel-level
segmentation maps by leveraging both the morphological information and texture
cues of crack pixels with minimal computational cost. Specifically, we
developed a Structure-Aware Visual State Space module (SAVSS), which
incorporates a lightweight Gated Bottleneck Convolution (GBC) and a
Structure-Aware Scanning Strategy (SASS). The key insight of GBC lies in its
effectiveness in modeling the morphological information of cracks, while the
SASS enhances the perception of crack topology and texture by strengthening the
continuity of semantic information between crack pixels. Experiments on crack
benchmark datasets demonstrate that our method outperforms other
state-of-the-art (SOTA) methods, achieving the highest performance with only
2.8M parameters. On the multi-scenario dataset, our method reached 0.8390 in F1
score and 0.8479 in mIoU. The code is available at
https://github.com/Karl1109/SCSegamba.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 02:40:57 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 07:32:48 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Mar 2025 13:59:45 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Hui",
""
],
[
"Jia",
"Chen",
""
],
[
"Shi",
"Fan",
""
],
[
"Cheng",
"Xu",
""
],
[
"Chen",
"Shengyong",
""
]
] | TITLE: SCSegamba: Lightweight Structure-Aware Vision Mamba for Crack
Segmentation in Structures
ABSTRACT: Pixel-level segmentation of structural cracks across various scenarios
remains a considerable challenge. Current methods struggle to effectively model
crack morphology and texture while balancing segmentation quality with low
computational resource usage. To
overcome these limitations, we propose a lightweight Structure-Aware Vision
Mamba Network (SCSegamba), capable of generating high-quality pixel-level
segmentation maps by leveraging both the morphological information and texture
cues of crack pixels with minimal computational cost. Specifically, we
developed a Structure-Aware Visual State Space module (SAVSS), which
incorporates a lightweight Gated Bottleneck Convolution (GBC) and a
Structure-Aware Scanning Strategy (SASS). The key insight of GBC lies in its
effectiveness in modeling the morphological information of cracks, while the
SASS enhances the perception of crack topology and texture by strengthening the
continuity of semantic information between crack pixels. Experiments on crack
benchmark datasets demonstrate that our method outperforms other
state-of-the-art (SOTA) methods, achieving the highest performance with only
2.8M parameters. On the multi-scenario dataset, our method reached 0.8390 in F1
score and 0.8479 in mIoU. The code is available at
https://github.com/Karl1109/SCSegamba.
|
2503.01407 | GaoZheng Pei | Gaozheng Pei, Shaojie Lyu, Gong Chen, Ke Ma, Qianqian Xu, Yingfei Sun,
Qingming Huang | Divide and Conquer: Heterogeneous Noise Integration for Diffusion-based
Adversarial Purification | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing diffusion-based purification methods aim to disrupt adversarial
perturbations by introducing a certain amount of noise through a forward
diffusion process, followed by a reverse process to recover clean examples.
However, this approach is fundamentally flawed: the uniform operation of the
forward process across all pixels compromises normal pixels while attempting to
combat adversarial perturbations, resulting in the target model producing
incorrect predictions. Simply relying on low-intensity noise is insufficient
for effective defense. To address this critical issue, we implement a
heterogeneous purification strategy grounded in the interpretability of neural
networks. Our method decisively applies higher-intensity noise to specific
pixels that the target model focuses on while the remaining pixels are
subjected to only low-intensity noise. This requirement motivates us to
redesign the sampling process of the diffusion model, allowing for the
effective removal of varying noise levels. Furthermore, to enable evaluation
against strong adaptive attacks, our proposed method sharply reduces time cost
and memory usage through single-step resampling. The empirical evidence from
extensive experiments across three datasets demonstrates that our method
outperforms most current adversarial training and purification techniques by a
substantial margin.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 11:00:25 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 07:15:05 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Pei",
"Gaozheng",
""
],
[
"Lyu",
"Shaojie",
""
],
[
"Chen",
"Gong",
""
],
[
"Ma",
"Ke",
""
],
[
"Xu",
"Qianqian",
""
],
[
"Sun",
"Yingfei",
""
],
[
"Huang",
"Qingming",
""
]
] | TITLE: Divide and Conquer: Heterogeneous Noise Integration for Diffusion-based
Adversarial Purification
ABSTRACT: Existing diffusion-based purification methods aim to disrupt adversarial
perturbations by introducing a certain amount of noise through a forward
diffusion process, followed by a reverse process to recover clean examples.
However, this approach is fundamentally flawed: the uniform operation of the
forward process across all pixels compromises normal pixels while attempting to
combat adversarial perturbations, resulting in the target model producing
incorrect predictions. Simply relying on low-intensity noise is insufficient
for effective defense. To address this critical issue, we implement a
heterogeneous purification strategy grounded in the interpretability of neural
networks. Our method decisively applies higher-intensity noise to specific
pixels that the target model focuses on while the remaining pixels are
subjected to only low-intensity noise. This requirement motivates us to
redesign the sampling process of the diffusion model, allowing for the
effective removal of varying noise levels. Furthermore, to enable evaluation
against strong adaptive attacks, our proposed method sharply reduces time cost
and memory usage through single-step resampling. The empirical evidence from
extensive experiments across three datasets demonstrates that our method
outperforms most current adversarial training and purification techniques by a
substantial margin.
|
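The core idea in the record above, injecting stronger noise only at the pixels the target model attends to, can be illustrated with the closed-form DDPM forward process. The sketch below is a simplified, assumption-laden illustration rather than the paper's method: the linear noise schedule, the two timesteps, and the saliency map are hypothetical, and the paper's redesigned reverse sampling is not shown.

```python
import torch

def make_alpha_bars(T=1000, beta_start=1e-4, beta_end=2e-2):
    """Cumulative products of (1 - beta_t) for a linear DDPM noise schedule."""
    betas = torch.linspace(beta_start, beta_end, T)
    return torch.cumprod(1.0 - betas, dim=0)

def heterogeneous_forward_noise(x0, saliency, t_high=300, t_low=100, alpha_bars=None):
    """Noise salient pixels with a larger timestep (t_high) and the rest with t_low.

    x0:       (B, C, H, W) images scaled to [-1, 1]
    saliency: (B, 1, H, W) map in [0, 1]; 1 = pixels the classifier focuses on
    """
    if alpha_bars is None:
        alpha_bars = make_alpha_bars()
    eps = torch.randn_like(x0)

    def q_sample(t):  # closed-form forward marginal x_t | x_0
        ab = alpha_bars[t]
        return ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps

    x_strong, x_weak = q_sample(t_high), q_sample(t_low)
    # Per-pixel blend: higher-intensity noise only where the model attends.
    return saliency * x_strong + (1.0 - saliency) * x_weak

# x_noisy = heterogeneous_forward_noise(images, attention_map)
```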
2503.03519 | Shunxin Wang | Shunxin Wang, Raymond Veldhuis, Nicola Strisciuglio | Do ImageNet-trained models learn shortcuts? The impact of frequency
shortcuts on generalization | received at CVPR2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Frequency shortcuts refer to specific frequency patterns that models heavily
rely on for correct classification. Previous studies have shown that models
trained on small image datasets often exploit such shortcuts, potentially
impairing their generalization performance. However, existing methods for
identifying frequency shortcuts require expensive computations and become
impractical for analyzing models trained on large datasets. In this work, we
propose the first approach to more efficiently analyze frequency shortcuts at a
large scale. We show that both CNN and transformer models learn frequency
shortcuts on ImageNet. We also expose that frequency shortcut solutions can
yield good performance on out-of-distribution (OOD) test sets which largely
retain texture information. However, these shortcuts, mostly aligned with
texture patterns, hinder model generalization on rendition-based OOD test sets.
These observations suggest that current OOD evaluations often overlook the
impact of frequency shortcuts on model generalization. Future benchmarks could
thus benefit from explicitly assessing and accounting for these shortcuts to
build models that generalize across a broader range of OOD scenarios.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 14:03:34 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 14:58:05 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wang",
"Shunxin",
""
],
[
"Veldhuis",
"Raymond",
""
],
[
"Strisciuglio",
"Nicola",
""
]
] | TITLE: Do ImageNet-trained models learn shortcuts? The impact of frequency
shortcuts on generalization
ABSTRACT: Frequency shortcuts refer to specific frequency patterns that models heavily
rely on for correct classification. Previous studies have shown that models
trained on small image datasets often exploit such shortcuts, potentially
impairing their generalization performance. However, existing methods for
identifying frequency shortcuts require expensive computations and become
impractical for analyzing models trained on large datasets. In this work, we
propose the first approach to more efficiently analyze frequency shortcuts at a
large scale. We show that both CNN and transformer models learn frequency
shortcuts on ImageNet. We also expose that frequency shortcut solutions can
yield good performance on out-of-distribution (OOD) test sets which largely
retain texture information. However, these shortcuts, mostly aligned with
texture patterns, hinder model generalization on rendition-based OOD test sets.
These observations suggest that current OOD evaluations often overlook the
impact of frequency shortcuts on model generalization. Future benchmarks could
thus benefit from explicitly assessing and accounting for these shortcuts to
build models that generalize across a broader range of OOD scenarios.
|
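Reliance on a frequency shortcut can be probed by filtering images to a single frequency band and checking whether a classifier's accuracy survives. The sketch below is a generic FFT band-masking utility in PyTorch, not the authors' (more efficient) shortcut-identification procedure; the radius fraction and the low/high split are arbitrary illustrative choices.

```python
import torch

def radial_frequency_mask(h, w, radius_frac=0.25, keep="low"):
    """Binary mask over the centered 2D spectrum keeping low or high frequencies."""
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist = ((yy - cy) ** 2 + (xx - cx) ** 2).sqrt()
    r = radius_frac * min(h, w) / 2.0
    low = (dist <= r).float()
    return low if keep == "low" else 1.0 - low

def band_filter(images, radius_frac=0.25, keep="low"):
    """Keep only one frequency band of each image (B, C, H, W) via FFT masking."""
    _, _, h, w = images.shape
    mask = radial_frequency_mask(h, w, radius_frac, keep).to(images.device)
    spectrum = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
    filtered = torch.fft.ifft2(torch.fft.ifftshift(spectrum * mask, dim=(-2, -1)))
    return filtered.real

# If accuracy barely drops on band-filtered images, the model may rely on that band.
```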
2503.04565 | Kailun Yang | Kai Luo, Hao Shi, Sheng Wu, Fei Teng, Mengfei Duan, Chang Huang,
Yuhang Wang, Kaiwei Wang, Kailun Yang | Omnidirectional Multi-Object Tracking | Accepted to CVPR 2025. The established dataset and source code are
available at https://github.com/xifen523/OmniTrack | null | null | null | cs.CV cs.RO eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Panoramic imagery, with its 360{\deg} field of view, offers comprehensive
information to support Multi-Object Tracking (MOT) in capturing spatial and
temporal relationships of surrounding objects. However, most MOT algorithms are
tailored for pinhole images with limited views, impairing their effectiveness
in panoramic settings. Additionally, panoramic image distortions, such as
resolution loss, geometric deformation, and uneven lighting, hinder direct
adaptation of existing MOT methods, leading to significant performance
degradation. To address these challenges, we propose OmniTrack, an
omnidirectional MOT framework that incorporates Tracklet Management to
introduce temporal cues, FlexiTrack Instances for object localization and
association, and the CircularStatE Module to alleviate image and geometric
distortions. This integration enables tracking in panoramic field-of-view
scenarios, even under rapid sensor motion. To mitigate the lack of panoramic
MOT datasets, we introduce the QuadTrack dataset--a comprehensive panoramic
dataset collected by a quadruped robot, featuring diverse challenges such as
panoramic fields of view, intense motion, and complex environments. Extensive
experiments on the public JRDB dataset and the newly introduced QuadTrack
benchmark demonstrate the state-of-the-art performance of the proposed
framework. OmniTrack achieves a HOTA score of 26.92% on JRDB, representing an
improvement of 3.43%, and further achieves 23.45% on QuadTrack, surpassing the
baseline by 6.81%. The established dataset and source code are available at
https://github.com/xifen523/OmniTrack.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 15:53:42 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 11:58:13 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Luo",
"Kai",
""
],
[
"Shi",
"Hao",
""
],
[
"Wu",
"Sheng",
""
],
[
"Teng",
"Fei",
""
],
[
"Duan",
"Mengfei",
""
],
[
"Huang",
"Chang",
""
],
[
"Wang",
"Yuhang",
""
],
[
"Wang",
"Kaiwei",
""
],
[
"Yang",
"Kailun",
""
]
] | TITLE: Omnidirectional Multi-Object Tracking
ABSTRACT: Panoramic imagery, with its 360{\deg} field of view, offers comprehensive
information to support Multi-Object Tracking (MOT) in capturing spatial and
temporal relationships of surrounding objects. However, most MOT algorithms are
tailored for pinhole images with limited views, impairing their effectiveness
in panoramic settings. Additionally, panoramic image distortions, such as
resolution loss, geometric deformation, and uneven lighting, hinder direct
adaptation of existing MOT methods, leading to significant performance
degradation. To address these challenges, we propose OmniTrack, an
omnidirectional MOT framework that incorporates Tracklet Management to
introduce temporal cues, FlexiTrack Instances for object localization and
association, and the CircularStatE Module to alleviate image and geometric
distortions. This integration enables tracking in panoramic field-of-view
scenarios, even under rapid sensor motion. To mitigate the lack of panoramic
MOT datasets, we introduce the QuadTrack dataset--a comprehensive panoramic
dataset collected by a quadruped robot, featuring diverse challenges such as
panoramic fields of view, intense motion, and complex environments. Extensive
experiments on the public JRDB dataset and the newly introduced QuadTrack
benchmark demonstrate the state-of-the-art performance of the proposed
framework. OmniTrack achieves a HOTA score of 26.92% on JRDB, representing an
improvement of 3.43%, and further achieves 23.45% on QuadTrack, surpassing the
baseline by 6.81%. The established dataset and source code are available at
https://github.com/xifen523/OmniTrack.
|
2503.05858 | Jiachen Luo | Jiachen Luo, Huy Phan, Lin Wang, Joshua D. Reiss | Bimodal Connection Attention Fusion for Speech Emotion Recognition | null | null | null | null | cs.SD cs.AI cs.CL cs.MM eess.AS | http://creativecommons.org/licenses/by/4.0/ | Multi-modal emotion recognition is challenging due to the difficulty of
extracting features that capture subtle emotional differences. Understanding
multi-modal interactions and connections is key to building effective bimodal
speech emotion recognition systems. In this work, we propose the Bimodal
Connection Attention Fusion (BCAF) method, which includes three main modules: the
interactive connection network, the bimodal attention network, and the
correlative attention network. The interactive connection network uses an
encoder-decoder architecture to model modality connections between audio and
text while leveraging modality-specific features. The bimodal attention network
enhances semantic complementation and exploits intra- and inter-modal
interactions. The correlative attention network reduces cross-modal noise and
captures correlations between audio and text. Experiments on the MELD and
IEMOCAP datasets demonstrate that the proposed BCAF method outperforms existing
state-of-the-art baselines.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 10:20:57 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 19:50:21 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Mar 2025 11:48:18 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Luo",
"Jiachen",
""
],
[
"Phan",
"Huy",
""
],
[
"Wang",
"Lin",
""
],
[
"Reiss",
"Joshua D.",
""
]
] | TITLE: Bimodal Connection Attention Fusion for Speech Emotion Recognition
ABSTRACT: Multi-modal emotion recognition is challenging due to the difficulty of
extracting features that capture subtle emotional differences. Understanding
multi-modal interactions and connections is key to building effective bimodal
speech emotion recognition systems. In this work, we propose the Bimodal
Connection Attention Fusion (BCAF) method, which includes three main modules: the
interactive connection network, the bimodal attention network, and the
correlative attention network. The interactive connection network uses an
encoder-decoder architecture to model modality connections between audio and
text while leveraging modality-specific features. The bimodal attention network
enhances semantic complementation and exploits intra- and inter-modal
interactions. The correlative attention network reduces cross-modal noise and
captures correlations between audio and text. Experiments on the MELD and
IEMOCAP datasets demonstrate that the proposed BCAF method outperforms existing
state-of-the-art baselines.
|
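Bidirectional cross-modal attention of the kind the record above describes (text attending to audio and vice versa) can be sketched with PyTorch's built-in multi-head attention. The module below is a minimal stand-in, not the BCAF architecture: the feature dimensions, mean pooling, and classifier head are hypothetical, and the interactive-connection and correlative-attention modules are omitted.

```python
import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    """Minimal bimodal fusion: text attends to audio, audio attends to text,
    then the two attended streams are pooled and concatenated for classification."""
    def __init__(self, dim=256, heads=4, n_classes=7):
        super().__init__()
        self.text_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, n_classes)

    def forward(self, audio_feats, text_feats):
        # audio_feats: (B, Ta, dim), text_feats: (B, Tt, dim)
        t2a, _ = self.text_to_audio(text_feats, audio_feats, audio_feats)
        a2t, _ = self.audio_to_text(audio_feats, text_feats, text_feats)
        fused = torch.cat([t2a.mean(dim=1), a2t.mean(dim=1)], dim=-1)
        return self.classifier(fused)

# logits = CrossModalAttentionFusion()(torch.randn(2, 100, 256), torch.randn(2, 30, 256))
```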
2503.06235 | Yang Li | Yang LI, Jinglu Wang, Lei Chu, Xiao Li, Shiu-hong Kao, Ying-Cong Chen,
Yan Lu | StreamGS: Online Generalizable Gaussian Splatting Reconstruction for
Unposed Image Streams | 8 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The advent of 3D Gaussian Splatting (3DGS) has advanced 3D scene
reconstruction and novel view synthesis. With the growing interest in
interactive applications that need immediate feedback, online 3DGS
reconstruction in real time is in high demand. However, none of the existing
methods yet meets this demand due to three main challenges: the absence of
predetermined camera parameters, the need for generalizable 3DGS optimization,
and the necessity of reducing redundancy. We propose StreamGS, an online
generalizable 3DGS reconstruction method for unposed image streams, which
progressively transforms image streams into 3D Gaussian streams by predicting and
aggregating per-frame Gaussians. Our method overcomes the limitation of the
initial point reconstruction \cite{dust3r} in tackling out-of-domain (OOD)
issues by introducing a content adaptive refinement. The refinement enhances
cross-frame consistency by establishing reliable pixel correspondences between
adjacent frames. Such correspondences further aid in merging redundant
Gaussians through cross-frame feature aggregation. The density of Gaussians is
thereby reduced, empowering online reconstruction by significantly lowering
computational and memory costs. Extensive experiments on diverse datasets have
demonstrated that StreamGS achieves quality on par with optimization-based
approaches but does so 150 times faster, and exhibits superior generalizability
in handling OOD scenes.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 14:35:39 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 09:27:02 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"LI",
"Yang",
""
],
[
"Wang",
"Jinglu",
""
],
[
"Chu",
"Lei",
""
],
[
"Li",
"Xiao",
""
],
[
"Kao",
"Shiu-hong",
""
],
[
"Chen",
"Ying-Cong",
""
],
[
"Lu",
"Yan",
""
]
] | TITLE: StreamGS: Online Generalizable Gaussian Splatting Reconstruction for
Unposed Image Streams
ABSTRACT: The advent of 3D Gaussian Splatting (3DGS) has advanced 3D scene
reconstruction and novel view synthesis. With the growing interest in
interactive applications that need immediate feedback, online 3DGS
reconstruction in real time is in high demand. However, none of the existing
methods yet meets this demand due to three main challenges: the absence of
predetermined camera parameters, the need for generalizable 3DGS optimization,
and the necessity of reducing redundancy. We propose StreamGS, an online
generalizable 3DGS reconstruction method for unposed image streams, which
progressively transforms image streams into 3D Gaussian streams by predicting and
aggregating per-frame Gaussians. Our method overcomes the limitation of the
initial point reconstruction \cite{dust3r} in tackling out-of-domain (OOD)
issues by introducing a content adaptive refinement. The refinement enhances
cross-frame consistency by establishing reliable pixel correspondences between
adjacent frames. Such correspondences further aid in merging redundant
Gaussians through cross-frame feature aggregation. The density of Gaussians is
thereby reduced, empowering online reconstruction by significantly lowering
computational and memory costs. Extensive experiments on diverse datasets have
demonstrated that StreamGS achieves quality on par with optimization-based
approaches but does so 150 times faster, and exhibits superior generalizability
in handling OOD scenes.
|
2503.06960 | Xin Wen | Xin Wen, Bingchen Zhao, Yilun Chen, Jiangmiao Pang, Xiaojuan Qi | A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning | Accepted by CVPR 2025 | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pre-trained vision models (PVMs) are fundamental to modern robotics, yet
their optimal configuration remains unclear. Through systematic evaluation, we
find that while DINO and iBOT outperform MAE across visuomotor control and
perception tasks, they struggle when trained on non-(single-)object-centric
(NOC) data--a limitation strongly correlated with their diminished ability to
learn object-centric representations. This investigation indicates that the
ability to form object-centric representations from the non-object-centric
robotics dataset is the key to success for PVMs. Motivated by this discovery,
we designed SlotMIM, a method that induces object-centric representations by
introducing a semantic bottleneck to reduce the number of prototypes to
encourage the emergence of objectness as well as cross-view consistency
regularization for encouraging multiview invariance. Our experiments encompass
pre-training on object-centric, scene-centric, web-crawled, and ego-centric
data. Across all settings, our approach learns transferable representations
and achieves significant improvements over prior work in image recognition,
scene understanding, and robot learning evaluations. When scaled up with
million-scale datasets, our method also demonstrates superior data efficiency
and scalability. Our code and models are publicly available at
https://github.com/CVMI-Lab/SlotMIM.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 06:18:31 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 08:34:06 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wen",
"Xin",
""
],
[
"Zhao",
"Bingchen",
""
],
[
"Chen",
"Yilun",
""
],
[
"Pang",
"Jiangmiao",
""
],
[
"Qi",
"Xiaojuan",
""
]
] | TITLE: A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning
ABSTRACT: Pre-trained vision models (PVMs) are fundamental to modern robotics, yet
their optimal configuration remains unclear. Through systematic evaluation, we
find that while DINO and iBOT outperform MAE across visuomotor control and
perception tasks, they struggle when trained on non-(single-)object-centric
(NOC) data--a limitation strongly correlated with their diminished ability to
learn object-centric representations. This investigation indicates that the
ability to form object-centric representations from the non-object-centric
robotics dataset is the key to success for PVMs. Motivated by this discovery,
we designed SlotMIM, a method that induces object-centric representations by
introducing a semantic bottleneck to reduce the number of prototypes to
encourage the emergence of objectness as well as cross-view consistency
regularization for encouraging multiview invariance. Our experiments encompass
pre-training on object-centric, scene-centric, web-crawled, and ego-centric
data. Across all settings, our approach learns transferable representations
and achieves significant improvements over prior work in image recognition,
scene understanding, and robot learning evaluations. When scaled up with
million-scale datasets, our method also demonstrates superior data efficiency
and scalability. Our code and models are publicly available at
https://github.com/CVMI-Lab/SlotMIM.
|
2503.07157 | Hung Vo | Hung Q. Vo, Pengyu Yuan, Zheng Yin, Kelvin K. Wong, Chika F. Ezeana,
Son T. Ly, Stephen T.C. Wong, Hien V. Nguyen | MIRAM: Masked Image Reconstruction Across Multiple Scales for Breast
Lesion Risk Prediction | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Self-supervised learning (SSL) has garnered substantial interest within the
machine learning and computer vision communities. Two prominent approaches in
SSL include contrastive-based learning and self-distillation utilizing cropping
augmentation. Lately, masked image modeling (MIM) has emerged as a more potent
SSL technique, employing image inpainting as a pretext task. MIM creates a
strong inductive bias toward meaningful spatial and semantic understanding.
This has opened up new opportunities for SSL to contribute not only to
classification tasks but also to more complex applications like object
detection and image segmentation. Building upon this progress, our research
paper introduces a scalable and practical SSL approach centered around more
challenging pretext tasks that facilitate the acquisition of robust features.
Specifically, we leverage multi-scale image reconstruction from randomly masked
input images as the foundation for feature learning. Our hypothesis posits that
reconstructing high-resolution images enables the model to attend to finer
spatial details, particularly beneficial for discerning subtle intricacies
within medical images. The proposed SSL features help improve classification
performance on the Curated Breast Imaging Subset of Digital Database for
Screening Mammography (CBIS-DDSM) dataset. In pathology classification, our
method demonstrates a 3\% increase in average precision (AP) and a 1\% increase
in the area under the receiver operating characteristic curve (AUC) when
compared to state-of-the-art (SOTA) algorithms. Moreover, in mass margins
classification, our approach achieves a 4\% increase in AP and a 2\% increase
in AUC.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 10:32:55 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 08:01:49 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Vo",
"Hung Q.",
""
],
[
"Yuan",
"Pengyu",
""
],
[
"Yin",
"Zheng",
""
],
[
"Wong",
"Kelvin K.",
""
],
[
"Ezeana",
"Chika F.",
""
],
[
"Ly",
"Son T.",
""
],
[
"Wong",
"Stephen T. C.",
""
],
[
"Nguyen",
"Hien V.",
""
]
] | TITLE: MIRAM: Masked Image Reconstruction Across Multiple Scales for Breast
Lesion Risk Prediction
ABSTRACT: Self-supervised learning (SSL) has garnered substantial interest within the
machine learning and computer vision communities. Two prominent approaches in
SSL include contrastive-based learning and self-distillation utilizing cropping
augmentation. Lately, masked image modeling (MIM) has emerged as a more potent
SSL technique, employing image inpainting as a pretext task. MIM creates a
strong inductive bias toward meaningful spatial and semantic understanding.
This has opened up new opportunities for SSL to contribute not only to
classification tasks but also to more complex applications like object
detection and image segmentation. Building upon this progress, our research
paper introduces a scalable and practical SSL approach centered around more
challenging pretext tasks that facilitate the acquisition of robust features.
Specifically, we leverage multi-scale image reconstruction from randomly masked
input images as the foundation for feature learning. Our hypothesis posits that
reconstructing high-resolution images enables the model to attend to finer
spatial details, particularly beneficial for discerning subtle intricacies
within medical images. The proposed SSL features help improve classification
performance on the Curated Breast Imaging Subset of Digital Database for
Screening Mammography (CBIS-DDSM) dataset. In pathology classification, our
method demonstrates a 3\% increase in average precision (AP) and a 1\% increase
in the area under the receiver operating characteristic curve (AUC) when
compared to state-of-the-art (SOTA) algorithms. Moreover, in mass margins
classification, our approach achieves a 4\% increase in AP and a 2\% increase
in AUC.
|
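Masked image modeling with multi-scale reconstruction targets, as described in the record above, amounts to hiding random patches and supervising reconstructions at more than one resolution. The utilities below are a simplified sketch under assumed shapes and loss choices (L1 at two scales); they are not MIRAM's actual masking strategy or decoder.

```python
import torch
import torch.nn.functional as F

def random_patch_mask(images, patch=16, mask_ratio=0.6):
    """Zero out a random subset of non-overlapping patches of (B, C, H, W) images."""
    b, _, h, w = images.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, 1, gh, gw, device=images.device) > mask_ratio).float()
    keep = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return images * keep, keep

def multiscale_reconstruction_loss(pred_lowres, pred_highres, target):
    """Supervise reconstructions at two scales; the high-resolution head sees finer detail."""
    low_target = F.interpolate(target, scale_factor=0.5, mode="bilinear", align_corners=False)
    return F.l1_loss(pred_lowres, low_target) + F.l1_loss(pred_highres, target)
```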
2503.08085 | Kyeongkook Seo | Kyeongkook Seo, Dong-Jun Han, Jaejun Yoo | PRISM: Privacy-Preserving Improved Stochastic Masking for Federated
Generative Models | null | null | null | null | cs.LG cs.CR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite recent advancements in federated learning (FL), the integration of
generative models into FL has been limited due to challenges such as high
communication costs and unstable training in heterogeneous data environments.
To address these issues, we propose PRISM, an FL framework tailored for
generative models that ensures (i) stable performance in heterogeneous data
distributions and (ii) resource efficiency in terms of communication cost and
final model size. The key idea of our method is to search for an optimal stochastic
binary mask for a random network rather than updating the model weights,
identifying a sparse subnetwork with high generative performance; i.e., a
``strong lottery ticket''. By communicating binary masks in a stochastic
manner, PRISM minimizes communication overhead. This approach, combined with
the utilization of maximum mean discrepancy (MMD) loss and a mask-aware dynamic
moving average aggregation method (MADA) on the server side, facilitates stable
and strong generative capabilities by mitigating local divergence in FL
scenarios. Moreover, thanks to its sparsifying characteristic, PRISM yields a
lightweight model without extra pruning or quantization, making it ideal for
environments such as edge devices. Experiments on MNIST, FMNIST, CelebA, and
CIFAR10 demonstrate that PRISM outperforms existing methods, while maintaining
privacy with minimal communication costs. PRISM is the first to successfully
generate images under challenging non-IID and privacy-preserving FL
environments on complex datasets, where previous methods have struggled.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 06:37:54 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 07:22:25 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 16:34:35 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Seo",
"Kyeongkook",
""
],
[
"Han",
"Dong-Jun",
""
],
[
"Yoo",
"Jaejun",
""
]
] | TITLE: PRISM: Privacy-Preserving Improved Stochastic Masking for Federated
Generative Models
ABSTRACT: Despite recent advancements in federated learning (FL), the integration of
generative models into FL has been limited due to challenges such as high
communication costs and unstable training in heterogeneous data environments.
To address these issues, we propose PRISM, an FL framework tailored for
generative models that ensures (i) stable performance in heterogeneous data
distributions and (ii) resource efficiency in terms of communication cost and
final model size. The key idea of our method is to search for an optimal stochastic
binary mask for a random network rather than updating the model weights,
identifying a sparse subnetwork with high generative performance; i.e., a
``strong lottery ticket''. By communicating binary masks in a stochastic
manner, PRISM minimizes communication overhead. This approach, combined with
the utilization of maximum mean discrepancy (MMD) loss and a mask-aware dynamic
moving average aggregation method (MADA) on the server side, facilitates stable
and strong generative capabilities by mitigating local divergence in FL
scenarios. Moreover, thanks to its sparsifying characteristic, PRISM yields a
lightweight model without extra pruning or quantization, making it ideal for
environments such as edge devices. Experiments on MNIST, FMNIST, CelebA, and
CIFAR10 demonstrate that PRISM outperforms existing methods, while maintaining
privacy with minimal communication costs. PRISM is the first to successfully
generate images under challenging non-IID and privacy-preserving FL
environments on complex datasets, where previous methods have struggled.
|
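Two ingredients mentioned in the record above can be sketched compactly: sampling a stochastic binary mask from learnable per-weight scores, and a maximum mean discrepancy loss. The code below uses a straight-through Bernoulli sampler and a Gaussian-kernel MMD with a biased estimator; both are generic illustrations rather than PRISM's exact formulation, and the kernel bandwidth is a hypothetical choice.

```python
import torch

def sample_binary_mask(scores):
    """Sample a stochastic binary mask from per-weight logits (straight-through estimator)."""
    probs = torch.sigmoid(scores)
    hard = torch.bernoulli(probs)
    return hard + probs - probs.detach()  # forward uses hard mask, gradients flow through probs

def rbf_mmd(x, y, sigma=1.0):
    """Biased estimate of MMD^2 between samples x (n, d) and y (m, d) with a Gaussian kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()

# loss = rbf_mmd(generated_features, real_features)
```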
2503.08317 | Zikang Yuan | Zikang Yuan, Yuechuan Pu, Hongcheng Luo, Fengtian Lang, Cheng Chi,
Teng Li, Yingying Shen, Haiyang Sun, Bing Wang and Xin Yang | Uni-Gaussians: Unifying Camera and Lidar Simulation with Gaussians for
Dynamic Driving Scenarios | 10 pages | null | null | null | cs.RO cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensuring the safety of autonomous vehicles necessitates comprehensive
simulation of multi-sensor data, encompassing inputs from both cameras and
LiDAR sensors, across various dynamic driving scenarios. Neural rendering
techniques, which utilize collected raw sensor data to simulate these dynamic
environments, have emerged as a leading methodology. While NeRF-based
approaches can uniformly represent scenes for rendering data from both camera
and LiDAR, they are hindered by slow rendering speeds due to dense sampling.
Conversely, Gaussian Splatting-based methods employ Gaussian primitives for
scene representation and achieve rapid rendering through rasterization.
However, these rasterization-based techniques struggle to accurately model
non-linear optical sensors. This limitation restricts their applicability to
sensors beyond pinhole cameras. To address these challenges and enable unified
representation of dynamic driving scenarios using Gaussian primitives, this
study proposes a novel hybrid approach. Our method utilizes rasterization for
rendering image data while employing Gaussian ray-tracing for LiDAR data
rendering. Experimental results on public datasets demonstrate that our
approach outperforms current state-of-the-art methods. This work presents a
unified and efficient solution for realistic simulation of camera and LiDAR
data in autonomous driving scenarios using Gaussian primitives, offering
significant advancements in both rendering quality and computational
efficiency.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 11:25:57 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 02:41:24 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 07:18:42 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yuan",
"Zikang",
""
],
[
"Pu",
"Yuechuan",
""
],
[
"Luo",
"Hongcheng",
""
],
[
"Lang",
"Fengtian",
""
],
[
"Chi",
"Cheng",
""
],
[
"Li",
"Teng",
""
],
[
"Shen",
"Yingying",
""
],
[
"Sun",
"Haiyang",
""
],
[
"Wang",
"Bing",
""
],
[
"Yang",
"Xin",
""
]
] | TITLE: Uni-Gaussians: Unifying Camera and Lidar Simulation with Gaussians for
Dynamic Driving Scenarios
ABSTRACT: Ensuring the safety of autonomous vehicles necessitates comprehensive
simulation of multi-sensor data, encompassing inputs from both cameras and
LiDAR sensors, across various dynamic driving scenarios. Neural rendering
techniques, which utilize collected raw sensor data to simulate these dynamic
environments, have emerged as a leading methodology. While NeRF-based
approaches can uniformly represent scenes for rendering data from both camera
and LiDAR, they are hindered by slow rendering speeds due to dense sampling.
Conversely, Gaussian Splatting-based methods employ Gaussian primitives for
scene representation and achieve rapid rendering through rasterization.
However, these rasterization-based techniques struggle to accurately model
non-linear optical sensors. This limitation restricts their applicability to
sensors beyond pinhole cameras. To address these challenges and enable unified
representation of dynamic driving scenarios using Gaussian primitives, this
study proposes a novel hybrid approach. Our method utilizes rasterization for
rendering image data while employing Gaussian ray-tracing for LiDAR data
rendering. Experimental results on public datasets demonstrate that our
approach outperforms current state-of-the-art methods. This work presents a
unified and efficient solution for realistic simulation of camera and LiDAR
data in autonomous driving scenarios using Gaussian primitives, offering
significant advancements in both rendering quality and computational
efficiency.
|
2503.09749 | Yongle Yuan | Yongle Yuan and Kevin W. Bowyer | A Siamese Network to Detect If Two Iris Images Are Monozygotic | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In Daugman-style iris recognition, the textures of the left and right irises
of the same person are traditionally considered as being as different as the
irises of two unrelated persons. However, previous research indicates that
humans can detect that two iris images are from different eyes of the same
person, or eyes of monozygotic twins, with an accuracy of about 80%. In this
work, we employ a Siamese network architecture and contrastive learning to
categorize a pair of iris images as coming from monozygotic or non-monozygotic
irises. This could potentially be applied, for example, as a fast, noninvasive
test to determine if twins are monozygotic or non-monozygotic. We construct a
dataset comprising both synthetic monozygotic pairs (images of different irises
of the same individual) and natural monozygotic pairs (images of different
images from persons who are identical twins), in addition to non-monozygotic
pairs from unrelated individuals, ensuring a comprehensive evaluation of the
model's capabilities. To gain deeper insights into the learned representations,
we train and analyze three variants of the model using (1) the original input
images, (2) iris-only images, and (3) non-iris-only images. This comparison
reveals the critical importance of iris-specific textural details and
contextual ocular cues in identifying monozygotic iris patterns. The results
demonstrate that models leveraging full eye-region information outperform those
trained solely on iris-only data, emphasizing the nuanced interplay between
iris and ocular characteristics. Our approach achieves accuracy levels using
the full iris image that exceed those previously reported for human
classification of monozygotic iris pairs. This study presents the first
classifier designed to determine whether a pair of iris images originates from
monozygotic individuals.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 18:48:38 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 19:04:06 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yuan",
"Yongle",
""
],
[
"Bowyer",
"Kevin W.",
""
]
] | TITLE: A Siamese Network to Detect If Two Iris Images Are Monozygotic
ABSTRACT: In Daugman-style iris recognition, the textures of the left and right irises
of the same person are traditionally considered as being as different as the
irises of two unrelated persons. However, previous research indicates that
humans can detect that two iris images are from different eyes of the same
person, or eyes of monozygotic twins, with an accuracy of about 80%. In this
work, we employ a Siamese network architecture and contrastive learning to
categorize a pair of iris images as coming from monozygotic or non-monozygotic
irises. This could potentially be applied, for example, as a fast, noninvasive
test to determine if twins are monozygotic or non-monozygotic. We construct a
dataset comprising both synthetic monozygotic pairs (images of different irises
of the same individual) and natural monozygotic pairs (images of different
images from persons who are identical twins), in addition to non-monozygotic
pairs from unrelated individuals, ensuring a comprehensive evaluation of the
model's capabilities. To gain deeper insights into the learned representations,
we train and analyze three variants of the model using (1) the original input
images, (2) iris-only images, and (3) non-iris-only images. This comparison
reveals the critical importance of iris-specific textural details and
contextual ocular cues in identifying monozygotic iris patterns. The results
demonstrate that models leveraging full eye-region information outperform those
trained solely on iris-only data, emphasizing the nuanced interplay between
iris and ocular characteristics. Our approach achieves accuracy levels using
the full iris image that exceed those previously reported for human
classification of monozygotic iris pairs. This study presents the first
classifier designed to determine whether a pair of iris images originates from
monozygotic individuals.
|
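A Siamese architecture with a contrastive objective, as used in the record above, applies one shared encoder to both images of a pair and pulls embeddings of positive (monozygotic) pairs together while pushing negative pairs apart. The sketch below uses a toy CNN encoder and the classic margin-based contrastive loss; the encoder, embedding size, and margin are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Shared CNN encoder applied to both images of a pair."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x):
        return F.normalize(self.proj(self.features(x)), dim=-1)

def contrastive_loss(z1, z2, same_pair, margin=1.0):
    """same_pair = 1 for monozygotic pairs, 0 otherwise: pull positives together, push negatives apart."""
    dist = (z1 - z2).norm(dim=-1)
    return (same_pair * dist.pow(2) +
            (1 - same_pair) * F.relu(margin - dist).pow(2)).mean()

# enc = SiameseEncoder(); loss = contrastive_loss(enc(img_a), enc(img_b), labels.float())
```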
2503.10080 | Zhen Qu | Zhen Qu, Xian Tao, Xinyi Gong, Shichen Qu, Qiyu Chen, Zhengtao Zhang,
Xingang Wang, Guiguang Ding | Bayesian Prompt Flow Learning for Zero-Shot Anomaly Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, vision-language models (e.g. CLIP) have demonstrated remarkable
performance in zero-shot anomaly detection (ZSAD). By leveraging auxiliary data
during training, these models can directly perform cross-category anomaly
detection on target datasets, such as detecting defects on industrial product
surfaces or identifying tumors in organ tissues. Existing approaches typically
construct text prompts through either manual design or the optimization of
learnable prompt vectors. However, these methods face several challenges: 1)
handcrafted prompts require extensive expert knowledge and trial-and-error; 2)
single-form learnable prompts struggle to capture complex anomaly semantics;
and 3) an unconstrained prompt space limits generalization to unseen
categories. To address these issues, we propose Bayesian Prompt Flow Learning
(Bayes-PFL), which models the prompt space as a learnable probability
distribution from a Bayesian perspective. Specifically, a prompt flow module is
designed to learn both image-specific and image-agnostic distributions, which
are jointly utilized to regularize the text prompt space and improve the
model's generalization on unseen categories. These learned distributions are
then sampled to generate diverse text prompts, effectively covering the prompt
space. Additionally, a residual cross-model attention (RCA) module is
introduced to better align dynamic text embeddings with fine-grained image
features. Extensive experiments on 15 industrial and medical datasets
demonstrate our method's superior performance. The code is available at
https://github.com/xiaozhen228/Bayes-PFL.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 06:05:35 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 00:51:39 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Qu",
"Zhen",
""
],
[
"Tao",
"Xian",
""
],
[
"Gong",
"Xinyi",
""
],
[
"Qu",
"Shichen",
""
],
[
"Chen",
"Qiyu",
""
],
[
"Zhang",
"Zhengtao",
""
],
[
"Wang",
"Xingang",
""
],
[
"Ding",
"Guiguang",
""
]
] | TITLE: Bayesian Prompt Flow Learning for Zero-Shot Anomaly Detection
ABSTRACT: Recently, vision-language models (e.g. CLIP) have demonstrated remarkable
performance in zero-shot anomaly detection (ZSAD). By leveraging auxiliary data
during training, these models can directly perform cross-category anomaly
detection on target datasets, such as detecting defects on industrial product
surfaces or identifying tumors in organ tissues. Existing approaches typically
construct text prompts through either manual design or the optimization of
learnable prompt vectors. However, these methods face several challenges: 1)
handcrafted prompts require extensive expert knowledge and trial-and-error; 2)
single-form learnable prompts struggle to capture complex anomaly semantics;
and 3) an unconstrained prompt space limits generalization to unseen
categories. To address these issues, we propose Bayesian Prompt Flow Learning
(Bayes-PFL), which models the prompt space as a learnable probability
distribution from a Bayesian perspective. Specifically, a prompt flow module is
designed to learn both image-specific and image-agnostic distributions, which
are jointly utilized to regularize the text prompt space and improve the
model's generalization on unseen categories. These learned distributions are
then sampled to generate diverse text prompts, effectively covering the prompt
space. Additionally, a residual cross-model attention (RCA) module is
introduced to better align dynamic text embeddings with fine-grained image
features. Extensive experiments on 15 industrial and medical datasets
demonstrate our method's superior performance. The code is available at
https://github.com/xiaozhen228/Bayes-PFL.
|
2503.10781 | Evangelos Kazakos | Evangelos Kazakos, Cordelia Schmid, Josef Sivic | Large-scale Pre-training for Grounded Video Caption Generation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We propose a novel approach for captioning and object grounding in video,
where the objects in the caption are grounded in the video via temporally dense
bounding boxes. We introduce the following contributions. First, we present a
large-scale automatic annotation method that aggregates captions grounded with
bounding boxes across individual frames into temporally dense and consistent
bounding box annotations. We apply this approach on the HowTo100M dataset to
construct a large-scale pre-training dataset, named HowToGround1M. We also
introduce a Grounded Video Caption Generation model, dubbed GROVE, and
pre-train the model on HowToGround1M. Second, we introduce a new dataset,
called iGround, of 3500 videos with manually annotated captions and dense
spatio-temporally grounded bounding boxes. This allows us to measure progress
on this challenging problem, as well as to fine-tune our model on this
small-scale but high-quality data. Third, we demonstrate that our approach
achieves state-of-the-art results on the proposed iGround dataset compared to a
number of baselines, as well as on the VidSTG and ActivityNet-Entities
datasets. We perform extensive ablations that demonstrate the importance of
pre-training using our automatically annotated HowToGround1M dataset followed
by fine-tuning on the manually annotated iGround dataset and validate the key
technical contributions of our model.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 18:21:07 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 05:11:52 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Kazakos",
"Evangelos",
""
],
[
"Schmid",
"Cordelia",
""
],
[
"Sivic",
"Josef",
""
]
] | TITLE: Large-scale Pre-training for Grounded Video Caption Generation
ABSTRACT: We propose a novel approach for captioning and object grounding in video,
where the objects in the caption are grounded in the video via temporally dense
bounding boxes. We introduce the following contributions. First, we present a
large-scale automatic annotation method that aggregates captions grounded with
bounding boxes across individual frames into temporally dense and consistent
bounding box annotations. We apply this approach on the HowTo100M dataset to
construct a large-scale pre-training dataset, named HowToGround1M. We also
introduce a Grounded Video Caption Generation model, dubbed GROVE, and
pre-train the model on HowToGround1M. Second, we introduce a new dataset,
called iGround, of 3500 videos with manually annotated captions and dense
spatio-temporally grounded bounding boxes. This allows us to measure progress
on this challenging problem, as well as to fine-tune our model on this
small-scale but high-quality data. Third, we demonstrate that our approach
achieves state-of-the-art results on the proposed iGround dataset compared to a
number of baselines, as well as on the VidSTG and ActivityNet-Entities
datasets. We perform extensive ablations that demonstrate the importance of
pre-training using our automatically annotated HowToGround1M dataset followed
by fine-tuning on the manually annotated iGround dataset and validate the key
technical contributions of our model.
|
2503.11335 | Moein Sorkhei | Moein Sorkhei, Emir Konuk, Kevin Smith, Christos Matsoukas | APLA: A Simple Adaptation Method for Vision Transformers | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing adaptation techniques typically require architectural modifications
or added parameters, leading to high computational costs and complexity. We
introduce Attention Projection Layer Adaptation (APLA), a simple approach to
adapt vision transformers (ViTs) without altering the architecture or adding
parameters. Through a systematic analysis, we find that the layer immediately
after the attention mechanism is crucial for adaptation. By updating only this
projection layer, or even just a random subset of this layer's weights, APLA
achieves state-of-the-art performance while reducing GPU memory usage by up to
52.63% and training time by up to 43.0%, with no extra cost at inference.
Across 46 datasets covering a variety of tasks including scene classification,
medical imaging, satellite imaging, and fine-grained classification, APLA
consistently outperforms 17 other leading adaptation methods, including full
fine-tuning, on classification, segmentation, and detection tasks. The code is
available at https://github.com/MoeinSorkhei/APLA.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 12:03:29 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 10:10:38 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Sorkhei",
"Moein",
""
],
[
"Konuk",
"Emir",
""
],
[
"Smith",
"Kevin",
""
],
[
"Matsoukas",
"Christos",
""
]
] | TITLE: APLA: A Simple Adaptation Method for Vision Transformers
ABSTRACT: Existing adaptation techniques typically require architectural modifications
or added parameters, leading to high computational costs and complexity. We
introduce Attention Projection Layer Adaptation (APLA), a simple approach to
adapt vision transformers (ViTs) without altering the architecture or adding
parameters. Through a systematic analysis, we find that the layer immediately
after the attention mechanism is crucial for adaptation. By updating only this
projection layer, or even just a random subset of this layer's weights, APLA
achieves state-of-the-art performance while reducing GPU memory usage by up to
52.63% and training time by up to 43.0%, with no extra cost at inference.
Across 46 datasets covering a variety of tasks including scene classification,
medical imaging, satellite imaging, and fine-grained classification, APLA
consistently outperforms 17 other leading adaptation methods, including full
fine-tuning, on classification, segmentation, and detection tasks. The code is
available at https://github.com/MoeinSorkhei/APLA.
|
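APLA's central observation in the record above, that updating only the projection layer immediately after attention suffices, can be mimicked by freezing every other parameter of a pre-trained ViT. The sketch below assumes timm's parameter naming ('blocks.N.attn.proj.*'); other ViT implementations may name this layer differently, and the paper's option of updating only a random subset of this layer's weights is not shown.

```python
import timm

def prepare_apla_style_vit(model_name="vit_base_patch16_224", pretrained=True):
    """Freeze a ViT except the projection layer right after attention in each block."""
    model = timm.create_model(model_name, pretrained=pretrained)
    trainable = 0
    for name, param in model.named_parameters():
        # Keep gradients only for the post-attention projection weights and biases.
        param.requires_grad = ".attn.proj." in name
        trainable += param.numel() if param.requires_grad else 0
    print(f"trainable parameters: {trainable}")
    return model

# model = prepare_apla_style_vit()
# optimizer = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-3)
```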
2503.12552 | Tianyu Li | Tianyu Li, Yihang Qiu, Zhenhua Wu, Carl Lindstr\"om, Peng Su, Matthias
Nie{\ss}ner, Hongyang Li | MTGS: Multi-Traversal Gaussian Splatting | null | null | null | null | cs.CV cs.GR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Multi-traversal data, commonly collected through daily commutes or by
self-driving fleets, provides multiple viewpoints for scene reconstruction
within a road block. This data offers significant potential for high-quality
novel view synthesis, which is crucial for applications such as autonomous
vehicle simulators. However, inherent challenges in multi-traversal data often
result in suboptimal reconstruction quality, including variations in appearance
and the presence of dynamic objects. To address these issues, we propose
Multi-Traversal Gaussian Splatting (MTGS), a novel approach that reconstructs
high-quality driving scenes from arbitrarily collected multi-traversal data by
modeling a shared static geometry while separately handling dynamic elements
and appearance variations. Our method employs a multi-traversal dynamic scene
graph with a shared static node and traversal-specific dynamic nodes,
complemented by color correction nodes with learnable spherical harmonics
coefficient residuals. This approach enables high-fidelity novel view synthesis
and provides flexibility to navigate any viewpoint. We conduct extensive
experiments on a large-scale driving dataset, nuPlan, with multi-traversal
data. Our results demonstrate that MTGS improves LPIPS by 23.5% and geometry
accuracy by 46.3% compared to single-traversal baselines. The code and data
will be made available to the public.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 15:46:12 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 08:09:23 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Mar 2025 07:22:52 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Tianyu",
""
],
[
"Qiu",
"Yihang",
""
],
[
"Wu",
"Zhenhua",
""
],
[
"Lindström",
"Carl",
""
],
[
"Su",
"Peng",
""
],
[
"Nießner",
"Matthias",
""
],
[
"Li",
"Hongyang",
""
]
] | TITLE: MTGS: Multi-Traversal Gaussian Splatting
ABSTRACT: Multi-traversal data, commonly collected through daily commutes or by
self-driving fleets, provides multiple viewpoints for scene reconstruction
within a road block. This data offers significant potential for high-quality
novel view synthesis, which is crucial for applications such as autonomous
vehicle simulators. However, inherent challenges in multi-traversal data often
result in suboptimal reconstruction quality, including variations in appearance
and the presence of dynamic objects. To address these issues, we propose
Multi-Traversal Gaussian Splatting (MTGS), a novel approach that reconstructs
high-quality driving scenes from arbitrarily collected multi-traversal data by
modeling a shared static geometry while separately handling dynamic elements
and appearance variations. Our method employs a multi-traversal dynamic scene
graph with a shared static node and traversal-specific dynamic nodes,
complemented by color correction nodes with learnable spherical harmonics
coefficient residuals. This approach enables high-fidelity novel view synthesis
and provides flexibility to navigate any viewpoint. We conduct extensive
experiments on a large-scale driving dataset, nuPlan, with multi-traversal
data. Our results demonstrate that MTGS improves LPIPS by 23.5% and geometry
accuracy by 46.3% compared to single-traversal baselines. The code and data
will be made available to the public.
|
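As a reading aid for the MTGS abstract above, the sketch below lays out one plausible shape for the described scene graph: shared static Gaussians, per-traversal dynamic Gaussians, and a per-traversal learnable spherical-harmonics color residual. Field names and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
import torch

@dataclass
class GaussianSet:
    means: torch.Tensor      # (N, 3) positions
    sh_coeffs: torch.Tensor  # (N, K, 3) spherical-harmonics color coefficients
    opacities: torch.Tensor  # (N,)

@dataclass
class TraversalNode:
    dynamic: GaussianSet       # traversal-specific dynamic objects
    sh_residual: torch.Tensor  # learnable color-correction residual, (K, 3)

@dataclass
class MultiTraversalSceneGraph:
    static: GaussianSet  # shared static geometry
    traversals: dict[str, TraversalNode] = field(default_factory=dict)

    def gaussians_for(self, traversal_id: str) -> tuple[GaussianSet, GaussianSet]:
        """Return the color-corrected static set and the dynamic set for one traversal."""
        node = self.traversals[traversal_id]
        corrected = GaussianSet(
            means=self.static.means,
            sh_coeffs=self.static.sh_coeffs + node.sh_residual,  # broadcast over N Gaussians
            opacities=self.static.opacities,
        )
        return corrected, node.dynamic
```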
2503.12642 | Anjali Dharmik | Anjali Dharmik | COVID 19 Diagnosis Analysis using Transfer Learning | null | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | Coronaviruses, including SARS-CoV-2, are responsible for COVID-19, a highly
transmissible disease that emerged in December 2019 in Wuhan, China. During the
past five years, significant advancements have been made in understanding and
mitigating the virus. Although the initial outbreak led to global health
crises, improved vaccination strategies, antiviral treatments, and AI-driven
diagnostic tools have contributed to better disease management. However,
COVID-19 continues to pose risks, particularly for immuno-compromised
individuals and those with pre-existing conditions. This study explores the use
of deep learning for a rapid and accurate diagnosis of COVID-19, addressing
ongoing challenges in healthcare infrastructure and testing accessibility. We
propose an enhanced automated detection system leveraging state-of-the-art
convolutional neural networks (CNNs), including updated versions of VGG16,
VGG19, and ResNet50, to classify COVID-19 infections from chest radiographs and
computerized tomography (CT) scans. Our results, based on an expanded dataset
of over 6000 medical images, demonstrate that the optimized ResNet50 model
achieves the highest classification performance, with 97.77% accuracy, 100%
sensitivity, 93.33% specificity, and a 98.0% F1-score. These findings reinforce
the potential of AI-assisted diagnostic tools in improving early detection and
pandemic preparedness.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 20:33:39 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 17:38:40 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Dharmik",
"Anjali",
""
]
] | TITLE: COVID 19 Diagnosis Analysis using Transfer Learning
ABSTRACT: Coronaviruses, including SARS-CoV-2, are responsible for COVID-19, a highly
transmissible disease that emerged in December 2019 in Wuhan, China. During the
past five years, significant advancements have been made in understanding and
mitigating the virus. Although the initial outbreak led to global health
crises, improved vaccination strategies, antiviral treatments, and AI-driven
diagnostic tools have contributed to better disease management. However,
COVID-19 continues to pose risks, particularly for immuno-compromised
individuals and those with pre-existing conditions. This study explores the use
of deep learning for a rapid and accurate diagnosis of COVID-19, addressing
ongoing challenges in healthcare infrastructure and testing accessibility. We
propose an enhanced automated detection system leveraging state-of-the-art
convolutional neural networks (CNNs), including updated versions of VGG16,
VGG19, and ResNet50, to classify COVID-19 infections from chest radiographs and
computerized tomography (CT) scans. Our results, based on an expanded dataset
of over 6000 medical images, demonstrate that the optimized ResNet50 model
achieves the highest classification performance, with 97.77% accuracy, 100%
sensitivity, 93.33% specificity, and a 98.0% F1-score. These findings reinforce
the potential of AI-assisted diagnostic tools in improving early detection and
pandemic preparedness.
|
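The abstract above describes a standard transfer-learning setup. As a generic illustration (not the authors' exact configuration, whose preprocessing, data splits and fine-tuning schedule are not given here; assumes torchvision >= 0.13 for the weights API), a ResNet50 head replacement in PyTorch looks like this:

```python
import torch
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes: int = 2, freeze_backbone: bool = True) -> nn.Module:
    """ImageNet-pretrained ResNet50 with a fresh classification head."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head is trainable
    return model

model = build_classifier()
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```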
2503.12799 | Qiong Wu | Qiong Wu, Xiangcong Yang, Yiyi Zhou, Chenxin Fang, Baiyang Song,
Xiaoshuai Sun, Rongrong Ji | Grounded Chain-of-Thought for Multimodal Large Language Models | null | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite great progress, existing multimodal large language models (MLLMs) are
prone to visual hallucination, greatly impeding their trustworthy applications.
In this paper, we study this problem from the perspective of visual-spatial
reasoning, and propose a new learning task for MLLMs, termed Grounded
Chain-of-Thought (GCoT). Different from recent visual CoT studies, which focus
more on visual knowledge reasoning, GCoT focuses on helping MLLMs recognize
and ground the relevant visual cues step by step, thereby predicting the
correct answer with grounding coordinates as the intuitive basis. To facilitate
this task, we also carefully design and construct a dataset called multimodal
grounded chain-of-thought (MM-GCoT) consisting of 24,022 GCoT examples for
5,033 images. Besides, a comprehensive consistency evaluation system is also
introduced, including the metrics of answer accuracy, grounding accuracy and
answer-grounding consistency. We further design and conduct a series of
experiments on 12 advanced MLLMs, and reveal some notable findings: i. most
MLLMs perform poorly on the consistency evaluation, indicating obvious visual
hallucination; ii. visual hallucination is not directly related to the
parameter size and general multimodal performance, i.e., a larger and stronger
MLLM is not less affected by this issue. Lastly, we also demonstrate that the
proposed dataset can help existing MLLMs cultivate their GCoT capability
and significantly reduce inconsistent answering. Moreover, their GCoT
capability can also be generalized to existing multimodal tasks, such as open-world QA
and REC.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 04:07:47 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 11:30:58 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wu",
"Qiong",
""
],
[
"Yang",
"Xiangcong",
""
],
[
"Zhou",
"Yiyi",
""
],
[
"Fang",
"Chenxin",
""
],
[
"Song",
"Baiyang",
""
],
[
"Sun",
"Xiaoshuai",
""
],
[
"Ji",
"Rongrong",
""
]
] | TITLE: Grounded Chain-of-Thought for Multimodal Large Language Models
ABSTRACT: Despite great progress, existing multimodal large language models (MLLMs) are
prone to visual hallucination, greatly impeding their trustworthy applications.
In this paper, we study this problem from the perspective of visual-spatial
reasoning, and propose a new learning task for MLLMs, termed Grounded
Chain-of-Thought (GCoT). Different from recent visual CoT studies, which focus
more on visual knowledge reasoning, GCoT focuses on helping MLLMs recognize
and ground the relevant visual cues step by step, thereby predicting the
correct answer with grounding coordinates as the intuitive basis. To facilitate
this task, we also carefully design and construct a dataset called multimodal
grounded chain-of-thought (MM-GCoT) consisting of 24,022 GCoT examples for
5,033 images. Besides, a comprehensive consistency evaluation system is also
introduced, including the metrics of answer accuracy, grounding accuracy and
answer-grounding consistency. We further design and conduct a series of
experiments on 12 advanced MLLMs, and reveal some notable findings: i. most
MLLMs perform poorly on the consistency evaluation, indicating obvious visual
hallucination; ii. visual hallucination is not directly related to the
parameter size and general multimodal performance, i.e., a larger and stronger
MLLM is not less affected by this issue. Lastly, we also demonstrate that the
proposed dataset can help existing MLLMs cultivate their GCoT capability
and significantly reduce inconsistent answering. Moreover, their GCoT
capability can also be generalized to existing multimodal tasks, such as open-world QA
and REC.
|
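The consistency evaluation mentioned in the abstract above pairs answer correctness with grounding correctness. A bare-bones version of such metrics, with an assumed IoU threshold of 0.5 and illustrative field names (the benchmark's exact definitions may differ), could look like this:

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def evaluate(samples, iou_thresh=0.5):
    """samples: iterable of dicts with pred_answer, gt_answer, pred_box, gt_box."""
    n = ans = grd = both = 0
    for s in samples:
        a_ok = s["pred_answer"] == s["gt_answer"]
        g_ok = iou(s["pred_box"], s["gt_box"]) >= iou_thresh
        n += 1
        ans += a_ok
        grd += g_ok
        both += a_ok and g_ok
    return {"answer_acc": ans / n, "grounding_acc": grd / n, "consistency": both / n}
```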
2503.12999 | Kai Zeng | Ruichuan An, Kai Zeng, Ming Lu, Sihan Yang, Renrui Zhang, Huitong Ji,
Qizhe Zhang, Yulin Luo, Hao Liang, Wentao Zhang | Concept-as-Tree: Synthetic Data is All You Need for VLM Personalization | The code is released at
$\href{https://github.com/zengkaiya/CaT}{\text{https://github.com/zengkaiya/CaT}}$ | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-Language Models (VLMs) have demonstrated exceptional performance in
various multi-modal tasks. Recently, there has been an increasing interest in
improving the personalization capabilities of VLMs. To better integrate
user-provided concepts into VLMs, many methods use positive and negative
samples to fine-tune these models. However, the scarcity of user-provided
positive samples and the low quality of retrieved negative samples pose
challenges for fine-tuning. To reveal the relationship between sample and model
performance, we systematically investigate the impact of positive and negative
samples (easy and hard) and their diversity on VLM personalization tasks. Based
on the detailed analysis, we introduce Concept-as-Tree (CaT), which represents
a concept as a tree structure, thereby enabling the data generation of positive
and negative samples with varying difficulty and diversity for VLM
personalization. With a well-designed data filtering strategy, our CaT
framework can ensure the quality of generated data, constituting a powerful
pipeline. We perform thorough experiments with various VLM personalization
baselines to assess the effectiveness of the pipeline, alleviating the lack of
positive samples and the low quality of negative samples. Our results
demonstrate that CaT equipped with the proposed data filter significantly
enhances the personalization capabilities of VLMs across the MyVLM, Yo'LLaVA,
and MC-LLaVA datasets. To our knowledge, this work is the first controllable
synthetic data pipeline for VLM personalization. The code is released at
$\href{https://github.com/zengkaiya/CaT}{\text{https://github.com/zengkaiya/CaT}}$.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 09:55:01 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 06:45:43 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"An",
"Ruichuan",
""
],
[
"Zeng",
"Kai",
""
],
[
"Lu",
"Ming",
""
],
[
"Yang",
"Sihan",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Ji",
"Huitong",
""
],
[
"Zhang",
"Qizhe",
""
],
[
"Luo",
"Yulin",
""
],
[
"Liang",
"Hao",
""
],
[
"Zhang",
"Wentao",
""
]
] | TITLE: Concept-as-Tree: Synthetic Data is All You Need for VLM Personalization
ABSTRACT: Vision-Language Models (VLMs) have demonstrated exceptional performance in
various multi-modal tasks. Recently, there has been an increasing interest in
improving the personalization capabilities of VLMs. To better integrate
user-provided concepts into VLMs, many methods use positive and negative
samples to fine-tune these models. However, the scarcity of user-provided
positive samples and the low quality of retrieved negative samples pose
challenges for fine-tuning. To reveal the relationship between sample and model
performance, we systematically investigate the impact of positive and negative
samples (easy and hard) and their diversity on VLM personalization tasks. Based
on the detailed analysis, we introduce Concept-as-Tree (CaT), which represents
a concept as a tree structure, thereby enabling the data generation of positive
and negative samples with varying difficulty and diversity for VLM
personalization. With a well-designed data filtering strategy, our CaT
framework can ensure the quality of generated data, constituting a powerful
pipeline. We perform thorough experiments with various VLM personalization
baselines to assess the effectiveness of the pipeline, alleviating the lack of
positive samples and the low quality of negative samples. Our results
demonstrate that CaT equipped with the proposed data filter significantly
enhances the personalization capabilities of VLMs across the MyVLM, Yo'LLaVA,
and MC-LLaVA datasets. To our knowledge, this work is the first controllable
synthetic data pipeline for VLM personalization. The code is released at
$\href{https://github.com/zengkaiya/CaT}{\text{https://github.com/zengkaiya/CaT}}$.
|
2503.13441 | Ri-Zhao Qiu | Ri-Zhao Qiu, Shiqi Yang, Xuxin Cheng, Chaitanya Chawla, Jialong Li,
Tairan He, Ge Yan, David J. Yoon, Ryan Hoque, Lars Paulsen, Ge Yang, Jian
Zhang, Sha Yi, Guanya Shi, Xiaolong Wang | Humanoid Policy ~ Human Policy | Code and data: https://human-as-robot.github.io/ | null | null | null | cs.RO cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Training manipulation policies for humanoid robots with diverse data enhances
their robustness and generalization across tasks and platforms. However,
learning solely from robot demonstrations is labor-intensive, requiring
expensive tele-operated data collection which is difficult to scale. This paper
investigates a more scalable data source, egocentric human demonstrations, to
serve as cross-embodiment training data for robot learning. We mitigate the
embodiment gap between humanoids and humans from both the data and modeling
perspectives. We collect an egocentric task-oriented dataset (PH2D) that is
directly aligned with humanoid manipulation demonstrations. We then train a
human-humanoid behavior policy, which we term Human Action Transformer (HAT).
The state-action space of HAT is unified for both humans and humanoid robots
and can be differentiably retargeted to robot actions. Co-trained with
smaller-scale robot data, HAT directly models humanoid robots and humans as
different embodiments without additional supervision. We show that human data
improves both generalization and robustness of HAT with significantly better
data collection efficiency. Code and data: https://human-as-robot.github.io/
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 17:59:09 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 08:31:56 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Qiu",
"Ri-Zhao",
""
],
[
"Yang",
"Shiqi",
""
],
[
"Cheng",
"Xuxin",
""
],
[
"Chawla",
"Chaitanya",
""
],
[
"Li",
"Jialong",
""
],
[
"He",
"Tairan",
""
],
[
"Yan",
"Ge",
""
],
[
"Yoon",
"David J.",
""
],
[
"Hoque",
"Ryan",
""
],
[
"Paulsen",
"Lars",
""
],
[
"Yang",
"Ge",
""
],
[
"Zhang",
"Jian",
""
],
[
"Yi",
"Sha",
""
],
[
"Shi",
"Guanya",
""
],
[
"Wang",
"Xiaolong",
""
]
] | TITLE: Humanoid Policy ~ Human Policy
ABSTRACT: Training manipulation policies for humanoid robots with diverse data enhances
their robustness and generalization across tasks and platforms. However,
learning solely from robot demonstrations is labor-intensive, requiring
expensive tele-operated data collection which is difficult to scale. This paper
investigates a more scalable data source, egocentric human demonstrations, to
serve as cross-embodiment training data for robot learning. We mitigate the
embodiment gap between humanoids and humans from both the data and modeling
perspectives. We collect an egocentric task-oriented dataset (PH2D) that is
directly aligned with humanoid manipulation demonstrations. We then train a
human-humanoid behavior policy, which we term Human Action Transformer (HAT).
The state-action space of HAT is unified for both humans and humanoid robots
and can be differentiably retargeted to robot actions. Co-trained with
smaller-scale robot data, HAT directly models humanoid robots and humans as
different embodiments without additional supervision. We show that human data
improves both generalization and robustness of HAT with significantly better
data collection efficiency. Code and data: https://human-as-robot.github.io/
|
2503.13999 | Mohaddeseh Chegini | Mohaddeseh Chegini and Ali Mahloojifar | BI-RADS prediction of mammographic masses using uncertainty information
extracted from a Bayesian Deep Learning model | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | The BI_RADS score is a probabilistic reporting tool used by radiologists to
express the level of uncertainty in predicting breast cancer based on some
morphological features in mammography images. There is a significant
variability in describing masses which sometimes leads to BI_RADS
misclassification. A BI_RADS prediction system is therefore required to support the
final radiologist decisions. In this study, the uncertainty information
extracted by a Bayesian deep learning model is utilized to predict the BI_RADS
score. The investigation results based on the pathology information demonstrate
that the f1-scores of the predictions of the radiologist are 42.86%, 48.33% and
48.28%, meanwhile, the f1-scores of the model performance are 73.33%, 59.60%
and 59.26% in the BI_RADS 2, 3 and 5 dataset samples, respectively. Also, the
model can distinguish malignant from benign samples in the BI_RADS 0 category
of the used dataset with an accuracy of 75.86% and correctly identify all
malignant samples as BI_RADS 5. The Grad-CAM visualization shows the model pays
attention to the morphological features of the lesions. Therefore, this study
shows the uncertainty-aware Bayesian Deep Learning model can report its
uncertainty about the malignancy of a lesion based on morphological features,
like a radiologist.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 08:06:05 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 12:24:58 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chegini",
"Mohaddeseh",
""
],
[
"Mahloojifar",
"Ali",
""
]
] | TITLE: BI-RADS prediction of mammographic masses using uncertainty information
extracted from a Bayesian Deep Learning model
ABSTRACT: The BI_RADS score is a probabilistic reporting tool used by radiologists to
express the level of uncertainty in predicting breast cancer based on some
morphological features in mammography images. There is a significant
variability in describing masses which sometimes leads to BI_RADS
misclassification. A BI_RADS prediction system is therefore required to support the
final radiologist decisions. In this study, the uncertainty information
extracted by a Bayesian deep learning model is utilized to predict the BI_RADS
score. The investigation results based on the pathology information demonstrate
that the f1-scores of the predictions of the radiologist are 42.86%, 48.33% and
48.28%, meanwhile, the f1-scores of the model performance are 73.33%, 59.60%
and 59.26% in the BI_RADS 2, 3 and 5 dataset samples, respectively. Also, the
model can distinguish malignant from benign samples in the BI_RADS 0 category
of the used dataset with an accuracy of 75.86% and correctly identify all
malignant samples as BI_RADS 5. The Grad-CAM visualization shows the model pays
attention to the morphological features of the lesions. Therefore, this study
shows the uncertainty-aware Bayesian Deep Learning model can report its
uncertainty about the malignancy of a lesion based on morphological features,
like a radiologist.
|
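The abstract above does not specify how its Bayesian deep learning model approximates the posterior, so the snippet below shows only one common way to obtain a mean prediction plus an uncertainty score, Monte-Carlo dropout, purely as an illustration of the kind of uncertainty information such a system can feed into BI_RADS prediction.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    """Mean class probabilities and predictive entropy from repeated stochastic passes."""
    model.eval()
    for m in model.modules():          # keep dropout layers stochastic at test time
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)                                     # predictive probability
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)  # predictive uncertainty
    return mean, entropy

# Example with a toy classifier that contains dropout:
toy = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Dropout(0.5), nn.Linear(32, 2))
mean, entropy = mc_dropout_predict(toy, torch.randn(4, 1, 8, 8))
```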
2503.14504 | Yi-Fan Zhang | Tao Yu, Yi-Fan Zhang, Chaoyou Fu, Junkang Wu, Jinda Lu, Kun Wang,
Xingyu Lu, Yunhang Shen, Guibin Zhang, Dingjie Song, Yibo Yan, Tianlong Xu,
Qingsong Wen, Zhang Zhang, Yan Huang, Liang Wang, and Tieniu Tan | Aligning Multimodal LLM with Human Preference: A Survey | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Alignment | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) can handle a wide variety of general tasks with
simple prompts, without the need for task-specific training. Multimodal Large
Language Models (MLLMs), built upon LLMs, have demonstrated impressive
potential in tackling complex tasks involving visual, auditory, and textual
data. However, critical issues related to truthfulness, safety, o1-like
reasoning, and alignment with human preference remain insufficiently addressed.
This gap has spurred the emergence of various alignment algorithms, each
targeting different application scenarios and optimization goals. Recent
studies have shown that alignment algorithms are a powerful approach to
resolving the aforementioned challenges. In this paper, we aim to provide a
comprehensive and systematic review of alignment algorithms for MLLMs.
Specifically, we explore four key aspects: (1) the application scenarios
covered by alignment algorithms, including general image understanding,
multi-image, video, and audio, and extended multimodal applications; (2) the
core factors in constructing alignment datasets, including data sources, model
responses, and preference annotations; (3) the benchmarks used to evaluate
alignment algorithms; and (4) a discussion of potential future directions for
the development of alignment algorithms. This work seeks to help researchers
organize current advancements in the field and inspire better alignment
methods. The project page of this paper is available at
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Alignment.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 17:59:56 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 15:07:54 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yu",
"Tao",
""
],
[
"Zhang",
"Yi-Fan",
""
],
[
"Fu",
"Chaoyou",
""
],
[
"Wu",
"Junkang",
""
],
[
"Lu",
"Jinda",
""
],
[
"Wang",
"Kun",
""
],
[
"Lu",
"Xingyu",
""
],
[
"Shen",
"Yunhang",
""
],
[
"Zhang",
"Guibin",
""
],
[
"Song",
"Dingjie",
""
],
[
"Yan",
"Yibo",
""
],
[
"Xu",
"Tianlong",
""
],
[
"Wen",
"Qingsong",
""
],
[
"Zhang",
"Zhang",
""
],
[
"Huang",
"Yan",
""
],
[
"Wang",
"Liang",
""
],
[
"Tan",
"Tieniu",
""
]
] | TITLE: Aligning Multimodal LLM with Human Preference: A Survey
ABSTRACT: Large language models (LLMs) can handle a wide variety of general tasks with
simple prompts, without the need for task-specific training. Multimodal Large
Language Models (MLLMs), built upon LLMs, have demonstrated impressive
potential in tackling complex tasks involving visual, auditory, and textual
data. However, critical issues related to truthfulness, safety, o1-like
reasoning, and alignment with human preference remain insufficiently addressed.
This gap has spurred the emergence of various alignment algorithms, each
targeting different application scenarios and optimization goals. Recent
studies have shown that alignment algorithms are a powerful approach to
resolving the aforementioned challenges. In this paper, we aim to provide a
comprehensive and systematic review of alignment algorithms for MLLMs.
Specifically, we explore four key aspects: (1) the application scenarios
covered by alignment algorithms, including general image understanding,
multi-image, video, and audio, and extended multimodal applications; (2) the
core factors in constructing alignment datasets, including data sources, model
responses, and preference annotations; (3) the benchmarks used to evaluate
alignment algorithms; and (4) a discussion of potential future directions for
the development of alignment algorithms. This work seeks to help researchers
organize current advancements in the field and inspire better alignment
methods. The project page of this paper is available at
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Alignment.
|
2503.14912 | Gahye Lee | Gahye Lee, Hyejeong Yoon, Jungeon Kim, Seungyong Lee | Deep Polycuboid Fitting for Compact 3D Representation of Indoor Scenes | Accepted to 3DV 2025. For project page, see this
https://waldstein94.github.io/deep-polycuboid-fitting/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel framework for compactly representing a 3D indoor
scene using a set of polycuboids through a deep learning-based fitting method.
Indoor scenes mainly consist of man-made objects, such as furniture, which
often exhibit rectilinear geometry. This property allows indoor scenes to be
represented using combinations of polycuboids, providing a compact
representation that benefits downstream applications like furniture
rearrangement. Our framework takes a noisy point cloud as input and first
detects six types of cuboid faces using a transformer network. Then, a graph
neural network is used to validate the spatial relationships of the detected
faces to form potential polycuboids. Finally, each polycuboid instance is
reconstructed by forming a set of boxes based on the aggregated face labels. To
train our networks, we introduce a synthetic dataset encompassing a diverse
range of cuboid and polycuboid shapes that reflect the characteristics of
indoor scenes. Our framework generalizes well to real-world indoor scene
datasets, including Replica, ScanNet, and scenes captured with an iPhone. The
versatility of our method is demonstrated through practical applications, such
as virtual room tours and scene editing.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 05:33:28 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 13:18:16 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lee",
"Gahye",
""
],
[
"Yoon",
"Hyejeong",
""
],
[
"Kim",
"Jungeon",
""
],
[
"Lee",
"Seungyong",
""
]
] | TITLE: Deep Polycuboid Fitting for Compact 3D Representation of Indoor Scenes
ABSTRACT: This paper presents a novel framework for compactly representing a 3D indoor
scene using a set of polycuboids through a deep learning-based fitting method.
Indoor scenes mainly consist of man-made objects, such as furniture, which
often exhibit rectilinear geometry. This property allows indoor scenes to be
represented using combinations of polycuboids, providing a compact
representation that benefits downstream applications like furniture
rearrangement. Our framework takes a noisy point cloud as input and first
detects six types of cuboid faces using a transformer network. Then, a graph
neural network is used to validate the spatial relationships of the detected
faces to form potential polycuboids. Finally, each polycuboid instance is
reconstructed by forming a set of boxes based on the aggregated face labels. To
train our networks, we introduce a synthetic dataset encompassing a diverse
range of cuboid and polycuboid shapes that reflect the characteristics of
indoor scenes. Our framework generalizes well to real-world indoor scene
datasets, including Replica, ScanNet, and scenes captured with an iPhone. The
versatility of our method is demonstrated through practical applications, such
as virtual room tours and scene editing.
|
2503.15406 | Jisu Nam | Jisu Nam, Soowon Son, Zhan Xu, Jing Shi, Difan Liu, Feng Liu, Aashish
Misraa, Seungryong Kim, Yang Zhou | Visual Persona: Foundation Model for Full-Body Human Customization | CVPR 2025, Project page is available at
https://cvlab-kaist.github.io/Visual-Persona | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We introduce Visual Persona, a foundation model for text-to-image full-body
human customization that, given a single in-the-wild human image, generates
diverse images of the individual guided by text descriptions. Unlike prior
methods that focus solely on preserving facial identity, our approach captures
detailed full-body appearance, aligning with text descriptions for body
structure and scene variations. Training this model requires large-scale paired
human data, consisting of multiple images per individual with consistent
full-body identities, which is notoriously difficult to obtain. To address
this, we propose a data curation pipeline leveraging vision-language models to
evaluate full-body appearance consistency, resulting in Visual Persona-500K, a
dataset of 580k paired human images across 100k unique identities. For precise
appearance transfer, we introduce a transformer encoder-decoder architecture
adapted to a pre-trained text-to-image diffusion model, which augments the
input image into distinct body regions, encodes these regions as local
appearance features, and projects them into dense identity embeddings
independently to condition the diffusion model for synthesizing customized
images. Visual Persona consistently surpasses existing approaches, generating
high-quality, customized images from in-the-wild inputs. Extensive ablation
studies validate design choices, and we demonstrate the versatility of Visual
Persona across various downstream tasks.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 16:45:47 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 07:28:09 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Nam",
"Jisu",
""
],
[
"Son",
"Soowon",
""
],
[
"Xu",
"Zhan",
""
],
[
"Shi",
"Jing",
""
],
[
"Liu",
"Difan",
""
],
[
"Liu",
"Feng",
""
],
[
"Misraa",
"Aashish",
""
],
[
"Kim",
"Seungryong",
""
],
[
"Zhou",
"Yang",
""
]
] | TITLE: Visual Persona: Foundation Model for Full-Body Human Customization
ABSTRACT: We introduce Visual Persona, a foundation model for text-to-image full-body
human customization that, given a single in-the-wild human image, generates
diverse images of the individual guided by text descriptions. Unlike prior
methods that focus solely on preserving facial identity, our approach captures
detailed full-body appearance, aligning with text descriptions for body
structure and scene variations. Training this model requires large-scale paired
human data, consisting of multiple images per individual with consistent
full-body identities, which is notoriously difficult to obtain. To address
this, we propose a data curation pipeline leveraging vision-language models to
evaluate full-body appearance consistency, resulting in Visual Persona-500K, a
dataset of 580k paired human images across 100k unique identities. For precise
appearance transfer, we introduce a transformer encoder-decoder architecture
adapted to a pre-trained text-to-image diffusion model, which augments the
input image into distinct body regions, encodes these regions as local
appearance features, and projects them into dense identity embeddings
independently to condition the diffusion model for synthesizing customized
images. Visual Persona consistently surpasses existing approaches, generating
high-quality, customized images from in-the-wild inputs. Extensive ablation
studies validate design choices, and we demonstrate the versatility of Visual
Persona across various downstream tasks.
|
2503.15426 | Wei Tang | Wei Tang, Yanpeng Sun, Qinying Gu, Zechao Li | Visual Position Prompt for MLLM based Visual Grounding | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although Multimodal Large Language Models (MLLMs) excel at various
image-related tasks, they encounter challenges in precisely aligning
coordinates with spatial information within images, particularly in
position-aware tasks such as visual grounding. This limitation arises from two
key factors. First, MLLMs lack explicit spatial references, making it difficult
to associate textual descriptions with precise image locations. Second, their
feature extraction processes prioritize global context over fine-grained
spatial details, leading to weak localization capability. To address this
issue, we introduce VPP-LLaVA, an MLLM equipped with Visual Position Prompt
(VPP) to improve its grounding capability. VPP-LLaVA integrates two
complementary mechanisms. The global VPP overlays learnable, axis-like
embeddings onto the input image to provide structured spatial cues. The local
VPP focuses on fine-grained localization by incorporating position-aware
queries, which suggest probable object locations. We also introduce a VPP-SFT
dataset with 0.6M samples, consolidating high-quality visual grounding data
into a compact format for efficient model training. Training on this dataset
with VPP enhances the model's performance, achieving state-of-the-art results
on standard grounding benchmarks despite using fewer training samples compared
to other MLLMs like MiniGPT-v2, which rely on much larger datasets ($\sim$21M
samples). The code and VPP-SFT dataset will be available at
https://github.com/WayneTomas/VPP-LLaVA upon acceptance.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 17:08:13 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 16:34:55 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Tang",
"Wei",
""
],
[
"Sun",
"Yanpeng",
""
],
[
"Gu",
"Qinying",
""
],
[
"Li",
"Zechao",
""
]
] | TITLE: Visual Position Prompt for MLLM based Visual Grounding
ABSTRACT: Although Multimodal Large Language Models (MLLMs) excel at various
image-related tasks, they encounter challenges in precisely aligning
coordinates with spatial information within images, particularly in
position-aware tasks such as visual grounding. This limitation arises from two
key factors. First, MLLMs lack explicit spatial references, making it difficult
to associate textual descriptions with precise image locations. Second, their
feature extraction processes prioritize global context over fine-grained
spatial details, leading to weak localization capability. To address this
issue, we introduce VPP-LLaVA, an MLLM equipped with Visual Position Prompt
(VPP) to improve its grounding capability. VPP-LLaVA integrates two
complementary mechanisms. The global VPP overlays learnable, axis-like
embeddings onto the input image to provide structured spatial cues. The local
VPP focuses on fine-grained localization by incorporating position-aware
queries, which suggest probable object locations. We also introduce a VPP-SFT
dataset with 0.6M samples, consolidating high-quality visual grounding data
into a compact format for efficient model training. Training on this dataset
with VPP enhances the model's performance, achieving state-of-the-art results
on standard grounding benchmarks despite using fewer training samples compared
to other MLLMs like MiniGPT-v2, which rely on much larger datasets ($\sim$21M
samples). The code and VPP-SFT dataset will be available at
https://github.com/WayneTomas/VPP-LLaVA upon acceptance.
|
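To make the "global VPP" idea in the abstract above concrete, here is a minimal sketch of learnable axis-like (row/column) embeddings added onto a grid of visual features. The shapes, the injection point, and the parameterization are assumptions for illustration; they are not claimed to match VPP-LLaVA's actual design.

```python
import torch
import torch.nn as nn

class GlobalPositionPrompt(nn.Module):
    """Adds learnable row + column embeddings (an axis-like spatial cue) to patch features."""
    def __init__(self, grid: int, dim: int):
        super().__init__()
        self.row = nn.Parameter(torch.zeros(grid, 1, dim))  # one embedding per row
        self.col = nn.Parameter(torch.zeros(1, grid, dim))  # one embedding per column

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, grid*grid, dim) patch features from the vision encoder
        b, n, d = feats.shape
        g = int(n ** 0.5)
        prompt = (self.row + self.col).reshape(1, g * g, d)
        return feats + prompt

vpp = GlobalPositionPrompt(grid=24, dim=1024)
out = vpp(torch.randn(2, 576, 1024))  # 24x24 patch grid, e.g. a CLIP-like encoder
```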
2503.15686 | Jiaqi Liu | Jiaqi Liu, Jichao Zhang, Paolo Rota, Nicu Sebe | Multi-focal Conditioned Latent Diffusion for Person Image Synthesis | CVPR 2025 Accepted | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The Latent Diffusion Model (LDM) has demonstrated strong capabilities in
high-resolution image generation and has been widely employed for Pose-Guided
Person Image Synthesis (PGPIS), yielding promising results. However, the
compression process of LDM often results in the deterioration of details,
particularly in sensitive areas such as facial features and clothing textures.
In this paper, we propose a Multi-focal Conditioned Latent Diffusion (MCLD)
method to address these limitations by conditioning the model on disentangled,
pose-invariant features from these sensitive regions. Our approach utilizes a
multi-focal condition aggregation module, which effectively integrates facial
identity and texture-specific information, enhancing the model's ability to
produce appearance-realistic and identity-consistent images. Our method
demonstrates consistent identity and appearance generation on the DeepFashion
dataset and enables flexible person image editing due to its generation
consistency. The code is available at https://github.com/jqliu09/mcld.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 20:50:10 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 23:10:16 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Jiaqi",
""
],
[
"Zhang",
"Jichao",
""
],
[
"Rota",
"Paolo",
""
],
[
"Sebe",
"Nicu",
""
]
] | TITLE: Multi-focal Conditioned Latent Diffusion for Person Image Synthesis
ABSTRACT: The Latent Diffusion Model (LDM) has demonstrated strong capabilities in
high-resolution image generation and has been widely employed for Pose-Guided
Person Image Synthesis (PGPIS), yielding promising results. However, the
compression process of LDM often results in the deterioration of details,
particularly in sensitive areas such as facial features and clothing textures.
In this paper, we propose a Multi-focal Conditioned Latent Diffusion (MCLD)
method to address these limitations by conditioning the model on disentangled,
pose-invariant features from these sensitive regions. Our approach utilizes a
multi-focal condition aggregation module, which effectively integrates facial
identity and texture-specific information, enhancing the model's ability to
produce appearance-realistic and identity-consistent images. Our method
demonstrates consistent identity and appearance generation on the DeepFashion
dataset and enables flexible person image editing due to its generation
consistency. The code is available at https://github.com/jqliu09/mcld.
|
2503.15818 | Siyi Wu | Haotian Ma, Lin Gu, Siyi Wu, Yingying Zhu | Computation-Efficient and Recognition-Friendly 3D Point Cloud Privacy
Protection | Accepted by CVPR2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | 3D point cloud has been widely used in applications such as self-driving
cars, robotics, CAD models, etc. To the best of our knowledge, these
applications raised the issue of privacy leakage in 3D point clouds, which has
not been studied well. Different from the 2D image privacy, which is related to
texture and 2D geometric structure, the 3D point cloud is texture-less and only
relevant to 3D geometric structure. In this work, we defined the 3D point cloud
privacy problem and proposed an efficient privacy-preserving framework named
PointFlowGMM that can support downstream classification and segmentation tasks
without seeing the original data. Using a flow-based generative model, the
point cloud is projected into a latent Gaussian mixture distributed subspace.
We further designed a novel angular similarity loss to obfuscate the original
geometric structure and reduce the model size from 767MB to 120MB without a
decrease in recognition performance. The projected point cloud in the latent
space is orthogonally rotated randomly to further protect the original
geometric structure; the class-to-class relationship is preserved after
rotation, so the protected point cloud can still support the recognition task. We
evaluated our model on multiple datasets and achieved comparable recognition
results on encrypted point clouds compared to the original point clouds.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 03:09:44 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 19:45:16 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ma",
"Haotian",
""
],
[
"Gu",
"Lin",
""
],
[
"Wu",
"Siyi",
""
],
[
"Zhu",
"Yingying",
""
]
] | TITLE: Computation-Efficient and Recognition-Friendly 3D Point Cloud Privacy
Protection
ABSTRACT: 3D point cloud has been widely used in applications such as self-driving
cars, robotics, CAD models, etc. To the best of our knowledge, these
applications raised the issue of privacy leakage in 3D point clouds, which has
not been studied well. Different from the 2D image privacy, which is related to
texture and 2D geometric structure, the 3D point cloud is texture-less and only
relevant to 3D geometric structure. In this work, we defined the 3D point cloud
privacy problem and proposed an efficient privacy-preserving framework named
PointFlowGMM that can support downstream classification and segmentation tasks
without seeing the original data. Using a flow-based generative model, the
point cloud is projected into a latent Gaussian mixture distributed subspace.
We further designed a novel angular similarity loss to obfuscate the original
geometric structure and reduce the model size from 767MB to 120MB without a
decrease in recognition performance. The projected point cloud in the latent
space is orthogonally rotated randomly to further protect the original
geometric structure; the class-to-class relationship is preserved after
rotation, so the protected point cloud can still support the recognition task. We
evaluated our model on multiple datasets and achieved comparable recognition
results on encrypted point clouds compared to the original point clouds.
|
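Of the steps listed in the abstract above, the random orthogonal rotation of latent codes is easy to illustrate in isolation. The sketch below draws a random orthogonal matrix via a QR decomposition and checks that pairwise distances (and hence class-to-class structure) are preserved; the flow model, GMM subspace, and angular similarity loss are out of scope here.

```python
import torch

def random_orthogonal(dim: int) -> torch.Tensor:
    """Sample an orthogonal matrix (approximately Haar-distributed) via QR."""
    a = torch.randn(dim, dim)
    q, r = torch.linalg.qr(a)
    return q * torch.sign(torch.diagonal(r)).unsqueeze(0)  # fix column signs

latents = torch.randn(1024, 32)   # latent codes of one projected point cloud
rot = random_orthogonal(32)
protected = latents @ rot         # rotated codes hide the original geometry

# Orthogonal maps preserve pairwise distances, so class structure survives.
d0 = torch.cdist(latents, latents)
d1 = torch.cdist(protected, protected)
assert torch.allclose(d0, d1, atol=1e-3)
```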
2503.15854 | Dongwoo Gang | Dongwoo Gang | Persistent Stiefel-Whitney Classes of Tangent Bundles | 25 pages, 4 figures | null | null | null | math.AT cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stiefel-Whitney classes are invariants of the tangent bundle of a smooth
manifold, represented as cohomology classes of the base manifold. Given a point
cloud, we construct a \v{C}ech or alpha filtration. By applying the Wu formula
in a persistent setting, we derive a sequence of persistent cohomology classes
from the filtration. We show that if the filtration is homotopy equivalent to a
smooth manifold, then one of these persistent cohomology classes corresponds to
the $k$-th Stiefel-Whitney class of the tangent bundle of that manifold. To
demonstrate the effectiveness of our approach, we present experiments on
real-world datasets.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 05:24:54 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Gang",
"Dongwoo",
""
]
] | TITLE: Persistent Stiefel-Whitney Classes of Tangent Bundles
ABSTRACT: Stiefel-Whitney classes are invariants of the tangent bundle of a smooth
manifold, represented as cohomology classes of the base manifold. Given a point
cloud, we construct a \v{C}ech or alpha filtration. By applying the Wu formula
in a persistent setting, we derive a sequence of persistent cohomology classes
from the filtration. We show that if the filtration is homotopy equivalent to a
smooth manifold, then one of these persistent cohomology classes corresponds to
the $k$-th Stiefel-Whitney class of the tangent bundle of that manifold. To
demonstrate the effectiveness of our approach, we present experiments on
real-world datasets.
|
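For readers skimming the abstract above, the classical Wu formula it applies in a persistent setting is the following standard statement for a closed smooth $n$-manifold $M$ with $\mathbb{Z}/2$ coefficients (the textbook formulation, quoted independently of the paper):

```latex
% Wu classes v_k \in H^k(M; Z/2) are characterized by
%   < v_k \smile x, [M] > = < Sq^k(x), [M] >   for all x \in H^{n-k}(M; Z/2),
% and the Wu formula expresses the Stiefel-Whitney classes of the tangent bundle as
\[
  w(TM) \;=\; \operatorname{Sq}(v),
  \qquad\text{i.e.}\qquad
  w_k(TM) \;=\; \sum_{i+j=k} \operatorname{Sq}^{i}(v_j).
\]
```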
2503.16423 | Ron Campos | Ron Campos, Ashmal Vayani, Parth Parag Kulkarni, Rohit Gupta, Aritra
Dutta, Mubarak Shah | GAEA: A Geolocation Aware Conversational Model | The dataset and code used in this submission is available at:
https://ucf-crcv.github.io/GAEA/ | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Image geolocalization, in which, traditionally, an AI model predicts the
precise GPS coordinates of an image, is a challenging task with many downstream
applications. However, the user cannot utilize the model to further their
knowledge other than the GPS coordinate; the model lacks an understanding of
the location and the conversational ability to communicate with the user. In
recent years, with the tremendous progress of large multimodal models (LMMs) --
proprietary and open-source -- researchers have attempted to geolocalize images
via LMMs. However, the issue remains unaddressed: beyond general tasks, LMMs
struggle with more specialized downstream tasks, one of which is
geolocalization. In this work, we propose to solve this problem by introducing a
conversational model GAEA that can provide information regarding the location
of an image, as required by a user. No large-scale dataset enabling the
training of such a model exists. Thus we propose GAEA-1.6M, a comprehensive
dataset with 800K images and around 1.6M question-answer pairs constructed by
leveraging OpenStreetMap (OSM) attributes and geographical context clues. For
quantitative evaluation, we propose a diverse benchmark, GAEA-Bench, comprising
4K image-text pairs to evaluate conversational capabilities equipped with
diverse question types. We consider 11 state-of-the-art open-source and
proprietary LMMs and demonstrate that GAEA significantly outperforms the best
open-source model, LLaVA-OneVision by 25.69% and the best proprietary model,
GPT-4o by 8.28%. Our dataset, model, and code are available.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 17:59:47 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 14:29:42 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Campos",
"Ron",
""
],
[
"Vayani",
"Ashmal",
""
],
[
"Kulkarni",
"Parth Parag",
""
],
[
"Gupta",
"Rohit",
""
],
[
"Dutta",
"Aritra",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: GAEA: A Geolocation Aware Conversational Model
ABSTRACT: Image geolocalization, in which, traditionally, an AI model predicts the
precise GPS coordinates of an image, is a challenging task with many downstream
applications. However, the user cannot utilize the model to further their
knowledge other than the GPS coordinate; the model lacks an understanding of
the location and the conversational ability to communicate with the user. In
recent years, with the tremendous progress of large multimodal models (LMMs) --
proprietary and open-source -- researchers have attempted to geolocalize images
via LMMs. However, the issue remains unaddressed: beyond general tasks, LMMs
struggle with more specialized downstream tasks, one of which is
geolocalization. In this work, we propose to solve this problem by introducing a
conversational model GAEA that can provide information regarding the location
of an image, as required by a user. No large-scale dataset enabling the
training of such a model exists. Thus we propose GAEA-1.6M, a comprehensive
dataset with 800K images and around 1.6M question-answer pairs constructed by
leveraging OpenStreetMap (OSM) attributes and geographical context clues. For
quantitative evaluation, we propose a diverse benchmark, GAEA-Bench, comprising
4K image-text pairs to evaluate conversational capabilities equipped with
diverse question types. We consider 11 state-of-the-art open-source and
proprietary LMMs and demonstrate that GAEA significantly outperforms the best
open-source model, LLaVA-OneVision by 25.69% and the best proprietary model,
GPT-4o by 8.28%. Our dataset, model, and code are available.
|
2503.17074 | Silvia Cascianelli PhD | Vittorio Pippi, Fabio Quattrini, Silvia Cascianelli, Alessio Tonioni,
Rita Cucchiara | Zero-Shot Styled Text Image Generation, but Make It Autoregressive | Accepted at CVPR2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Styled Handwritten Text Generation (HTG) has recently received attention from
the computer vision and document analysis communities, which have developed
several solutions, either GAN- or diffusion-based, that achieved promising
results. Nonetheless, these strategies fail to generalize to novel styles and
have technical constraints, particularly in terms of maximum output length and
training efficiency. To overcome these limitations, in this work, we propose a
novel framework for text image generation, dubbed Emuru. Our approach leverages
a powerful text image representation model (a variational autoencoder) combined
with an autoregressive Transformer. Our approach enables the generation of
styled text images conditioned on textual content and style examples, such as
specific fonts or handwriting styles. We train our model solely on a diverse,
synthetic dataset of English text rendered in over 100,000 typewritten and
calligraphy fonts, which gives it the capability to reproduce unseen styles
(both fonts and users' handwriting) in zero-shot. To the best of our knowledge,
Emuru is the first autoregressive model for HTG, and the first designed
specifically for generalization to novel styles. Moreover, our model generates
images without background artifacts, which are easier to use for downstream
applications. Extensive evaluation on both typewritten and handwritten,
any-length text image generation scenarios demonstrates the effectiveness of
our approach.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 11:56:20 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 17:23:51 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Pippi",
"Vittorio",
""
],
[
"Quattrini",
"Fabio",
""
],
[
"Cascianelli",
"Silvia",
""
],
[
"Tonioni",
"Alessio",
""
],
[
"Cucchiara",
"Rita",
""
]
] | TITLE: Zero-Shot Styled Text Image Generation, but Make It Autoregressive
ABSTRACT: Styled Handwritten Text Generation (HTG) has recently received attention from
the computer vision and document analysis communities, which have developed
several solutions, either GAN- or diffusion-based, that achieved promising
results. Nonetheless, these strategies fail to generalize to novel styles and
have technical constraints, particularly in terms of maximum output length and
training efficiency. To overcome these limitations, in this work, we propose a
novel framework for text image generation, dubbed Emuru. Our approach leverages
a powerful text image representation model (a variational autoencoder) combined
with an autoregressive Transformer. Our approach enables the generation of
styled text images conditioned on textual content and style examples, such as
specific fonts or handwriting styles. We train our model solely on a diverse,
synthetic dataset of English text rendered in over 100,000 typewritten and
calligraphy fonts, which gives it the capability to reproduce unseen styles
(both fonts and users' handwriting) in zero-shot. To the best of our knowledge,
Emuru is the first autoregressive model for HTG, and the first designed
specifically for generalization to novel styles. Moreover, our model generates
images without background artifacts, which are easier to use for downstream
applications. Extensive evaluation on both typewritten and handwritten,
any-length text image generation scenarios demonstrates the effectiveness of
our approach.
|
2503.17096 | Ruiyang Ha | Ruiyang Ha, Songyi Jiang, Bin Li, Bikang Pan, Yihang Zhu, Junjie
Zhang, Xiatian Zhu, Shaogang Gong, Jingya Wang | Multi-modal Multi-platform Person Re-Identification: Benchmark and
Method | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conventional person re-identification (ReID) research is often limited to
single-modality sensor data from static cameras, which fails to address the
complexities of real-world scenarios where multi-modal signals are increasingly
prevalent. For instance, consider an urban ReID system integrating stationary
RGB cameras, nighttime infrared sensors, and UAVs equipped with dynamic
tracking capabilities. Such systems face significant challenges due to
variations in camera perspectives, lighting conditions, and sensor modalities,
hindering effective person ReID. To address these challenges, we introduce the
MP-ReID benchmark, a novel dataset designed specifically for multi-modality and
multi-platform ReID. This benchmark uniquely compiles data from 1,930
identities across diverse modalities, including RGB, infrared, and thermal
imaging, captured by both UAVs and ground-based cameras in indoor and outdoor
environments. Building on this benchmark, we introduce Uni-Prompt ReID, a
framework with specific-designed prompts, tailored for cross-modality and
cross-platform scenarios. Our method consistently outperforms state-of-the-art
approaches, establishing a robust foundation for future research in complex and
dynamic ReID environments. Our dataset is available
at: https://mp-reid.github.io/.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 12:27:49 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 03:49:35 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ha",
"Ruiyang",
""
],
[
"Jiang",
"Songyi",
""
],
[
"Li",
"Bin",
""
],
[
"Pan",
"Bikang",
""
],
[
"Zhu",
"Yihang",
""
],
[
"Zhang",
"Junjie",
""
],
[
"Zhu",
"Xiatian",
""
],
[
"Gong",
"Shaogang",
""
],
[
"Wang",
"Jingya",
""
]
] | TITLE: Multi-modal Multi-platform Person Re-Identification: Benchmark and
Method
ABSTRACT: Conventional person re-identification (ReID) research is often limited to
single-modality sensor data from static cameras, which fails to address the
complexities of real-world scenarios where multi-modal signals are increasingly
prevalent. For instance, consider an urban ReID system integrating stationary
RGB cameras, nighttime infrared sensors, and UAVs equipped with dynamic
tracking capabilities. Such systems face significant challenges due to
variations in camera perspectives, lighting conditions, and sensor modalities,
hindering effective person ReID. To address these challenges, we introduce the
MP-ReID benchmark, a novel dataset designed specifically for multi-modality and
multi-platform ReID. This benchmark uniquely compiles data from 1,930
identities across diverse modalities, including RGB, infrared, and thermal
imaging, captured by both UAVs and ground-based cameras in indoor and outdoor
environments. Building on this benchmark, we introduce Uni-Prompt ReID, a
framework with specific-designed prompts, tailored for cross-modality and
cross-platform scenarios. Our method consistently outperforms state-of-the-art
approaches, establishing a robust foundation for future research in complex and
dynamic ReID environments. Our dataset is available
at: https://mp-reid.github.io/.
|
2503.17162 | Tonmoy Hossain | Tonmoy Hossain and Miaomiao Zhang | CoRLD: Contrastive Representation Learning Of Deformable Shapes In
Images | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deformable shape representations, parameterized by deformations relative to a
given template, have proven effective for improved image analysis tasks.
However, their broader applicability is hindered by two major challenges.
First, existing methods mainly rely on a known template during testing, which
is impractical and limits flexibility. Second, they often struggle to capture
fine-grained, voxel-level distinctions between similar shapes (e.g., anatomical
variations among healthy individuals, those with mild cognitive impairment, and
diseased states). To address these limitations, we propose a novel framework -
Contrastive Representation Learning of Deformable shapes (CoRLD) in learned
deformation spaces and demonstrate its effectiveness in the context of image
classification. Our CoRLD leverages a class-aware contrastive supervised
learning objective in latent deformation spaces, promoting proximity among
representations of similar classes while ensuring separation of dissimilar
groups. In contrast to previous deep learning networks that require a reference
image as input to predict deformation changes, our approach eliminates this
dependency. Instead, template images are utilized solely as ground truth in the
loss function during the training process, making our model more flexible and
generalizable to a wide range of medical applications. We validate CoRLD on
diverse datasets, including real brain magnetic resonance imaging (MRIs) and
adrenal shapes derived from computed tomography (CT) scans. Experimental
results show that our model effectively extracts deformable shape features,
which can be easily integrated with existing classifiers to substantially boost
the classification accuracy. Our code is available on GitHub.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 14:06:23 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 02:43:07 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hossain",
"Tonmoy",
""
],
[
"Zhang",
"Miaomiao",
""
]
] | TITLE: CoRLD: Contrastive Representation Learning Of Deformable Shapes In
Images
ABSTRACT: Deformable shape representations, parameterized by deformations relative to a
given template, have proven effective for improved image analysis tasks.
However, their broader applicability is hindered by two major challenges.
First, existing methods mainly rely on a known template during testing, which
is impractical and limits flexibility. Second, they often struggle to capture
fine-grained, voxel-level distinctions between similar shapes (e.g., anatomical
variations among healthy individuals, those with mild cognitive impairment, and
diseased states). To address these limitations, we propose a novel framework -
Contrastive Representation Learning of Deformable shapes (CoRLD) in learned
deformation spaces and demonstrate its effectiveness in the context of image
classification. Our CoRLD leverages a class-aware contrastive supervised
learning objective in latent deformation spaces, promoting proximity among
representations of similar classes while ensuring separation of dissimilar
groups. In contrast to previous deep learning networks that require a reference
image as input to predict deformation changes, our approach eliminates this
dependency. Instead, template images are utilized solely as ground truth in the
loss function during the training process, making our model more flexible and
generalizable to a wide range of medical applications. We validate CoRLD on
diverse datasets, including real brain magnetic resonance images (MRIs) and
adrenal shapes derived from computed tomography (CT) scans. Experimental
results show that our model effectively extracts deformable shape features,
which can be easily integrated with existing classifiers to substantially boost
the classification accuracy. Our code is available on GitHub.
|
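The CoRLD record above hinges on a class-aware contrastive objective in a latent deformation space. For reference, the following is a minimal sketch of the standard supervised contrastive (SupCon-style) loss that such an objective typically resembles; the batch construction, temperature, and function name are assumptions, and this is not the authors' released code.

```python
# Minimal sketch of a class-aware supervised contrastive objective of the kind
# CoRLD applies to latent deformation features (generic SupCon-style loss, not
# the authors' exact code; temperature and batching are assumptions).
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """z: (B, D) latent deformation features, labels: (B,) integer class ids."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                       # (B, B) cosine similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, -1e9)                    # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positives: other samples in the batch sharing the anchor's class.
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~eye
    pos_counts = pos.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos.float()).sum(dim=1) / pos_counts
    # Average only over anchors that actually have positives in the batch.
    has_pos = pos.any(dim=1)
    return loss[has_pos].mean()

# Usage:
# loss = supervised_contrastive_loss(torch.randn(16, 128), torch.randint(0, 3, (16,)))
```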
2503.17167 | Huy Truong | Huy Truong and Andr\'es Tello and Alexander Lazovik and Victoria
Degeler | DiTEC-WDN: A Large-Scale Dataset of Hydraulic Scenarios across Multiple
Water Distribution Networks | Submitted to Nature Scientific Data. Huy Truong and Andr\'es Tello
contributed equally to this work. For the dataset, see
https://huggingface.co/datasets/rugds/ditec-wdn | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Privacy restrictions hinder the sharing of real-world Water Distribution
Network (WDN) models, limiting the application of emerging data-driven machine
learning, which typically requires extensive observations. To address this
challenge, we propose the dataset DiTEC-WDN that comprises 36,000 unique
scenarios simulated over either short-term (24 hours) or long-term (1 year)
periods. We constructed this dataset using an automated pipeline that optimizes
crucial parameters (e.g., pressure, flow rate, and demand patterns),
facilitates large-scale simulations, and records discrete, synthetic but
hydraulically realistic states under standard conditions via rule validation
and post-hoc analysis. With a total of 228 million generated graph-based
states, DiTEC-WDN can support a variety of machine-learning tasks, including
graph-level, node-level, and link-level regression, as well as time-series
forecasting. This contribution, released under a public license, encourages
open scientific research in the critical water sector, eliminates the risk of
exposing sensitive data, and fulfills the need for a large-scale water
distribution network benchmark for study comparisons and scenario analysis.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 14:14:03 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 14:40:40 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Truong",
"Huy",
""
],
[
"Tello",
"Andrés",
""
],
[
"Lazovik",
"Alexander",
""
],
[
"Degeler",
"Victoria",
""
]
] | TITLE: DiTEC-WDN: A Large-Scale Dataset of Hydraulic Scenarios across Multiple
Water Distribution Networks
ABSTRACT: Privacy restrictions hinder the sharing of real-world Water Distribution
Network (WDN) models, limiting the application of emerging data-driven machine
learning, which typically requires extensive observations. To address this
challenge, we propose the dataset DiTEC-WDN that comprises 36,000 unique
scenarios simulated over either short-term (24 hours) or long-term (1 year)
periods. We constructed this dataset using an automated pipeline that optimizes
crucial parameters (e.g., pressure, flow rate, and demand patterns),
facilitates large-scale simulations, and records discrete, synthetic but
hydraulically realistic states under standard conditions via rule validation
and post-hoc analysis. With a total of 228 million generated graph-based
states, DiTEC-WDN can support a variety of machine-learning tasks, including
graph-level, node-level, and link-level regression, as well as time-series
forecasting. This contribution, released under a public license, encourages
open scientific research in the critical water sector, eliminates the risk of
exposing sensitive data, and fulfills the need for a large-scale water
distribution network benchmark for study comparisons and scenario analysis.
|
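The DiTEC-WDN record above points to a public release at https://huggingface.co/datasets/rugds/ditec-wdn. The sketch below shows one hedged way to inspect that release with the `datasets` library without downloading the full corpus; the configuration and split names are not listed in the record, so the code discovers the configurations at run time and the split name is an assumption.

```python
# Minimal sketch for exploring the DiTEC-WDN release on the Hugging Face Hub
# (repo id taken from the record's comments field). Configuration names are
# discovered at run time; the split name "train" is an assumption and may need
# adjusting to whatever the release actually provides.
from datasets import get_dataset_config_names, load_dataset

repo_id = "rugds/ditec-wdn"
configs = get_dataset_config_names(repo_id)
print("available configurations:", configs)

# Stream one configuration to avoid downloading the full 228M-state corpus.
ds = load_dataset(repo_id, configs[0], split="train", streaming=True)
first = next(iter(ds))
print("columns in one record:", list(first.keys()))
```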
2503.17400 | Mohamed Elrefaie | Qian Chen, Mohamed Elrefaie, Angela Dai, Faez Ahmed | TripNet: Learning Large-scale High-fidelity 3D Car Aerodynamics with
Triplane Networks | null | null | null | null | physics.flu-dyn cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Computational Fluid Dynamics (CFD) simulations are essential in product
design, providing insights into fluid behavior around complex geometries in
aerospace and automotive applications. However, high-fidelity CFD simulations
are computationally expensive, making rapid design iterations challenging. To
address this, we propose TripNet (Triplane CFD Network), a machine
learning-based framework leveraging triplane representations to predict the
outcomes of large-scale, high-fidelity CFD simulations with significantly
reduced computation cost. Our method encodes 3D geometry into compact yet
information-rich triplane features, maintaining full geometry fidelity and
enabling accurate aerodynamic predictions. Unlike graph- and point cloud-based
models, which are inherently discrete and provide solutions only at the mesh
nodes, TripNet allows the solution to be queried at any point in the 3D space.
Validated on high-fidelity DrivAerNet and DrivAerNet++ car aerodynamics
datasets, TripNet achieves state-of-the-art performance in drag coefficient
prediction, surface field estimation, and full 3D flow field simulations of
industry-standard car designs. By utilizing a shared triplane backbone across
multiple tasks, our approach offers a scalable, accurate, and efficient
alternative to traditional CFD solvers.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 17:30:57 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Qian",
""
],
[
"Elrefaie",
"Mohamed",
""
],
[
"Dai",
"Angela",
""
],
[
"Ahmed",
"Faez",
""
]
] | TITLE: TripNet: Learning Large-scale High-fidelity 3D Car Aerodynamics with
Triplane Networks
ABSTRACT: Computational Fluid Dynamics (CFD) simulations are essential in product
design, providing insights into fluid behavior around complex geometries in
aerospace and automotive applications. However, high-fidelity CFD simulations
are computationally expensive, making rapid design iterations challenging. To
address this, we propose TripNet (Triplane CFD Network), a machine
learning-based framework leveraging triplane representations to predict the
outcomes of large-scale, high-fidelity CFD simulations with significantly
reduced computation cost. Our method encodes 3D geometry into compact yet
information-rich triplane features, maintaining full geometry fidelity and
enabling accurate aerodynamic predictions. Unlike graph- and point cloud-based
models, which are inherently discrete and provide solutions only at the mesh
nodes, TripNet allows the solution to be queried at any point in the 3D space.
Validated on high-fidelity DrivAerNet and DrivAerNet++ car aerodynamics
datasets, TripNet achieves state-of-the-art performance in drag coefficient
prediction, surface field estimation, and full 3D flow field simulations of
industry-standard car designs. By utilizing a shared triplane backbone across
multiple tasks, our approach offers a scalable, accurate, and efficient
alternative to traditional CFD solvers.
|
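The TripNet record above relies on querying triplane features at arbitrary 3D points rather than only at mesh nodes. The sketch below illustrates the generic triplane-query pattern (project onto three axis-aligned planes, bilinearly sample, decode with an MLP) that such models build on; it is not the authors' architecture, and the channel count, plane resolution, and decoder head are illustrative assumptions.

```python
# Minimal sketch of the generic triplane query pattern behind TripNet-style
# models (not the authors' code): a 3D point is projected onto three feature
# planes, sampled features are aggregated, and a small MLP decodes a field
# value such as pressure. All sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriplaneFieldDecoder(nn.Module):
    def __init__(self, channels=32, resolution=128, out_dim=1):
        super().__init__()
        # Three learnable feature planes: XY, XZ, YZ.
        self.planes = nn.Parameter(
            torch.randn(3, channels, resolution, resolution) * 0.01
        )
        self.mlp = nn.Sequential(
            nn.Linear(channels, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, xyz):
        # xyz: (P, 3) query coordinates normalized to [-1, 1].
        coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]  # 2D projections
        feats = 0.0
        for plane, uv in zip(self.planes, coords):
            grid = uv.view(1, 1, -1, 2)                        # (1, 1, P, 2) grid
            sampled = F.grid_sample(
                plane.unsqueeze(0), grid, mode="bilinear", align_corners=True
            )                                                  # (1, C, 1, P)
            feats = feats + sampled.squeeze(0).squeeze(1).t()  # accumulate (P, C)
        return self.mlp(feats)                                 # (P, out_dim) field values

# Usage: the field can be queried at any continuous point, not only mesh nodes.
# pressures = TriplaneFieldDecoder()(torch.rand(1024, 3) * 2 - 1)
```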