id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (list, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2411.11098 | Xi Fang | Xi Fang, Jiankun Wang, Xiaochen Cai, Shangqian Chen, Shuwen Yang,
Haoyi Tao, Nan Wang, Lin Yao, Linfeng Zhang, Guolin Ke | MolParser: End-to-end Visual Recognition of Molecule Structures in the
Wild | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent decades, chemistry publications and patents have increased rapidly.
A significant portion of key information is embedded in molecular structure
figures, complicating large-scale literature searches and limiting the
application of large language models in fields such as biology, chemistry, and
pharmaceuticals. The automatic extraction of precise chemical structures is of
critical importance. However, the presence of numerous Markush structures in
real-world documents, along with variations in molecular image quality, drawing
styles, and noise, significantly limits the performance of existing optical
chemical structure recognition (OCSR) methods. We present MolParser, a novel
end-to-end OCSR method that efficiently and accurately recognizes chemical
structures from real-world documents, including difficult Markush structures. We
use an extended SMILES encoding rule to annotate our training dataset. Under
this rule, we build MolParser-7M, the largest annotated molecular image dataset
to our knowledge. While utilizing a large amount of synthetic data, we employed
active learning methods to incorporate substantial in-the-wild data,
specifically samples cropped from real patents and scientific literature, into
the training process. We trained an end-to-end molecular image captioning
model, MolParser, using a curriculum learning approach. MolParser significantly
outperforms classical and learning-based methods across most scenarios, with
potential for broader downstream applications. The dataset is publicly
available.
| [
{
"version": "v1",
"created": "Sun, 17 Nov 2024 15:00:09 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 07:52:02 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Fang",
"Xi",
""
],
[
"Wang",
"Jiankun",
""
],
[
"Cai",
"Xiaochen",
""
],
[
"Chen",
"Shangqian",
""
],
[
"Yang",
"Shuwen",
""
],
[
"Tao",
"Haoyi",
""
],
[
"Wang",
"Nan",
""
],
[
"Yao",
"Lin",
""
],
[
"Zhang",
"Linfeng",
""
],
[
"Ke",
"Guolin",
""
]
]
| TITLE: MolParser: End-to-end Visual Recognition of Molecule Structures in the
Wild
ABSTRACT: In recent decades, chemistry publications and patents have increased rapidly.
A significant portion of key information is embedded in molecular structure
figures, complicating large-scale literature searches and limiting the
application of large language models in fields such as biology, chemistry, and
pharmaceuticals. The automatic extraction of precise chemical structures is of
critical importance. However, the presence of numerous Markush structures in
real-world documents, along with variations in molecular image quality, drawing
styles, and noise, significantly limits the performance of existing optical
chemical structure recognition (OCSR) methods. We present MolParser, a novel
end-to-end OCSR method that efficiently and accurately recognizes chemical
structures from real-world documents, including difficult Markush structures. We
use an extended SMILES encoding rule to annotate our training dataset. Under
this rule, we build MolParser-7M, the largest annotated molecular image dataset
to our knowledge. While utilizing a large amount of synthetic data, we employed
active learning methods to incorporate substantial in-the-wild data,
specifically samples cropped from real patents and scientific literature, into
the training process. We trained an end-to-end molecular image captioning
model, MolParser, using a curriculum learning approach. MolParser significantly
outperforms classical and learning-based methods across most scenarios, with
potential for broader downstream applications. The dataset is publicly
available.
| new_dataset | 0.883236 |
2411.11466 | Markus Sch\"on | Markus Sch\"on, Michael Buchholz, and Klaus Dietmayer | MGNiceNet: Unified Monocular Geometric Scene Understanding | null | Proceedings of the Asian Conference on Computer Vision (ACCV),
2024, pp. 1502-1519 | 10.1007/978-981-96-0966-6_20 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monocular geometric scene understanding combines panoptic segmentation and
self-supervised depth estimation, focusing on real-time application in
autonomous vehicles. We introduce MGNiceNet, a unified approach that uses a
linked kernel formulation for panoptic segmentation and self-supervised depth
estimation. MGNiceNet is based on the state-of-the-art real-time panoptic
segmentation method RT-K-Net and extends the architecture to cover both
panoptic segmentation and self-supervised monocular depth estimation. To this
end, we introduce a tightly coupled self-supervised depth estimation predictor
that explicitly uses information from the panoptic path for depth prediction.
Furthermore, we introduce a panoptic-guided motion masking method to improve
depth estimation without relying on video panoptic segmentation annotations. We
evaluate our method on two popular autonomous driving datasets, Cityscapes and
KITTI. Our model shows state-of-the-art results compared to other real-time
methods and closes the gap to computationally more demanding methods. Source
code and trained models are available at
https://github.com/markusschoen/MGNiceNet.
| [
{
"version": "v1",
"created": "Mon, 18 Nov 2024 11:01:25 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 15:37:59 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Schön",
"Markus",
""
],
[
"Buchholz",
"Michael",
""
],
[
"Dietmayer",
"Klaus",
""
]
]
| TITLE: MGNiceNet: Unified Monocular Geometric Scene Understanding
ABSTRACT: Monocular geometric scene understanding combines panoptic segmentation and
self-supervised depth estimation, focusing on real-time application in
autonomous vehicles. We introduce MGNiceNet, a unified approach that uses a
linked kernel formulation for panoptic segmentation and self-supervised depth
estimation. MGNiceNet is based on the state-of-the-art real-time panoptic
segmentation method RT-K-Net and extends the architecture to cover both
panoptic segmentation and self-supervised monocular depth estimation. To this
end, we introduce a tightly coupled self-supervised depth estimation predictor
that explicitly uses information from the panoptic path for depth prediction.
Furthermore, we introduce a panoptic-guided motion masking method to improve
depth estimation without relying on video panoptic segmentation annotations. We
evaluate our method on two popular autonomous driving datasets, Cityscapes and
KITTI. Our model shows state-of-the-art results compared to other real-time
methods and closes the gap to computationally more demanding methods. Source
code and trained models are available at
https://github.com/markusschoen/MGNiceNet.
| no_new_dataset | 0.949248 |
2411.12073 | Brian Moser | Arundhati S. Shanbhag, Brian B. Moser, Tobias C. Nauen, Stanislav
Frolov, Federico Raue, Andreas Dengel | Just Leaf It: Accelerating Diffusion Classifiers with Hierarchical Class
Pruning | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Diffusion models, celebrated for their generative capabilities, have recently
demonstrated surprising effectiveness in image classification tasks by using
Bayes' theorem. Yet, current diffusion classifiers must evaluate every label
candidate for each input, creating high computational costs that impede their
use in large-scale applications. To address this limitation, we propose a
Hierarchical Diffusion Classifier (HDC) that exploits hierarchical label
structures or well-defined parent-child relationships in the dataset. By
pruning irrelevant high-level categories and refining predictions only within
relevant subcategories (leaf nodes and sub-trees), HDC reduces the total number
of class evaluations. As a result, HDC can speed up inference by as much as 60%
while preserving and sometimes even improving classification accuracy. In
summary, our work provides a tunable control mechanism between speed and
precision, making diffusion-based classification more feasible for large-scale
applications.
| [
{
"version": "v1",
"created": "Mon, 18 Nov 2024 21:34:05 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 00:47:43 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Shanbhag",
"Arundhati S.",
""
],
[
"Moser",
"Brian B.",
""
],
[
"Nauen",
"Tobias C.",
""
],
[
"Frolov",
"Stanislav",
""
],
[
"Raue",
"Federico",
""
],
[
"Dengel",
"Andreas",
""
]
]
| TITLE: Just Leaf It: Accelerating Diffusion Classifiers with Hierarchical Class
Pruning
ABSTRACT: Diffusion models, celebrated for their generative capabilities, have recently
demonstrated surprising effectiveness in image classification tasks by using
Bayes' theorem. Yet, current diffusion classifiers must evaluate every label
candidate for each input, creating high computational costs that impede their
use in large-scale applications. To address this limitation, we propose a
Hierarchical Diffusion Classifier (HDC) that exploits hierarchical label
structures or well-defined parent-child relationships in the dataset. By
pruning irrelevant high-level categories and refining predictions only within
relevant subcategories (leaf nodes and sub-trees), HDC reduces the total number
of class evaluations. As a result, HDC can speed up inference by as much as 60%
while preserving and sometimes even improving classification accuracy. In
summary, our work provides a tunable control mechanism between speed and
precision, making diffusion-based classification more feasible for large-scale
applications.
| no_new_dataset | 0.952706 |
2411.13383 | Chen Bin | Bin Chen, Gehui Li, Rongyuan Wu, Xindong Zhang, Jie Chen, Jian Zhang,
Lei Zhang | Adversarial Diffusion Compression for Real-World Image Super-Resolution | Accepted by CVPR 2025 | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Real-world image super-resolution (Real-ISR) aims to reconstruct
high-resolution images from low-resolution inputs degraded by complex, unknown
processes. While many Stable Diffusion (SD)-based Real-ISR methods have
achieved remarkable success, their slow, multi-step inference hinders practical
deployment. Recent SD-based one-step networks like OSEDiff and S3Diff alleviate
this issue but still incur high computational costs due to their reliance on
large pretrained SD models. This paper proposes a novel Real-ISR method, AdcSR,
by distilling the one-step diffusion network OSEDiff into a streamlined
diffusion-GAN model under our Adversarial Diffusion Compression (ADC)
framework. We meticulously examine the modules of OSEDiff, categorizing them
into two types: (1) Removable (VAE encoder, prompt extractor, text encoder,
etc.) and (2) Prunable (denoising UNet and VAE decoder). Since direct removal
and pruning can degrade the model's generation capability, we pretrain our
pruned VAE decoder to restore its ability to decode images and employ
adversarial distillation to compensate for performance loss. This ADC-based
diffusion-GAN hybrid design effectively reduces complexity by 73% in inference
time, 78% in computation, and 74% in parameters, while preserving the model's
generation capability. Experiments demonstrate that our proposed AdcSR achieves
competitive recovery quality on both synthetic and real-world datasets,
offering up to 9.3$\times$ speedup over previous one-step diffusion-based
methods. Code and models are available at
https://github.com/Guaishou74851/AdcSR.
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2024 15:13:36 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 09:31:57 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Chen",
"Bin",
""
],
[
"Li",
"Gehui",
""
],
[
"Wu",
"Rongyuan",
""
],
[
"Zhang",
"Xindong",
""
],
[
"Chen",
"Jie",
""
],
[
"Zhang",
"Jian",
""
],
[
"Zhang",
"Lei",
""
]
]
| TITLE: Adversarial Diffusion Compression for Real-World Image Super-Resolution
ABSTRACT: Real-world image super-resolution (Real-ISR) aims to reconstruct
high-resolution images from low-resolution inputs degraded by complex, unknown
processes. While many Stable Diffusion (SD)-based Real-ISR methods have
achieved remarkable success, their slow, multi-step inference hinders practical
deployment. Recent SD-based one-step networks like OSEDiff and S3Diff alleviate
this issue but still incur high computational costs due to their reliance on
large pretrained SD models. This paper proposes a novel Real-ISR method, AdcSR,
by distilling the one-step diffusion network OSEDiff into a streamlined
diffusion-GAN model under our Adversarial Diffusion Compression (ADC)
framework. We meticulously examine the modules of OSEDiff, categorizing them
into two types: (1) Removable (VAE encoder, prompt extractor, text encoder,
etc.) and (2) Prunable (denoising UNet and VAE decoder). Since direct removal
and pruning can degrade the model's generation capability, we pretrain our
pruned VAE decoder to restore its ability to decode images and employ
adversarial distillation to compensate for performance loss. This ADC-based
diffusion-GAN hybrid design effectively reduces complexity by 73% in inference
time, 78% in computation, and 74% in parameters, while preserving the model's
generation capability. Experiments demonstrate that our proposed AdcSR achieves
competitive recovery quality on both synthetic and real-world datasets,
offering up to 9.3$\times$ speedup over previous one-step diffusion-based
methods. Code and models are available at
https://github.com/Guaishou74851/AdcSR.
| no_new_dataset | 0.949012 |
2411.13485 | John Hastings | John D. Hastings, Sherri Weitl-Harms, Joseph Doty, Zachary J. Myers,
Warren Thompson | Utilizing Large Language Models to Synthesize Product Desirability
Datasets | 9 pages, 2 figures, 6 tables, updated author list | 2024 IEEE International Conference on Big Data (IEEE BigData 2024) | 10.1109/BigData62323.2024.10826001 | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This research explores the application of large language models (LLMs) to
generate synthetic datasets for Product Desirability Toolkit (PDT) testing, a
key component in evaluating user sentiment and product experience. Utilizing
gpt-4o-mini, a cost-effective alternative to larger commercial LLMs, three
methods, Word+Review, Review+Word, and Supply-Word, were each used to
synthesize 1000 product reviews. The generated datasets were assessed for
sentiment alignment, textual diversity, and data generation cost. Results
demonstrated high sentiment alignment across all methods, with Pearson
correlations ranging from 0.93 to 0.97. Supply-Word exhibited the highest
diversity and coverage of PDT terms, although with increased generation costs.
Despite minor biases toward positive sentiments, in situations with limited
test data, LLM-generated synthetic data offers significant advantages,
including scalability, cost savings, and flexibility in dataset production.
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2024 17:35:21 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Nov 2024 15:24:07 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hastings",
"John D.",
""
],
[
"Weitl-Harms",
"Sherri",
""
],
[
"Doty",
"Joseph",
""
],
[
"Myers",
"Zachary J.",
""
],
[
"Thompson",
"Warren",
""
]
]
| TITLE: Utilizing Large Language Models to Synthesize Product Desirability
Datasets
ABSTRACT: This research explores the application of large language models (LLMs) to
generate synthetic datasets for Product Desirability Toolkit (PDT) testing, a
key component in evaluating user sentiment and product experience. Utilizing
gpt-4o-mini, a cost-effective alternative to larger commercial LLMs, three
methods, Word+Review, Review+Word, and Supply-Word, were each used to
synthesize 1000 product reviews. The generated datasets were assessed for
sentiment alignment, textual diversity, and data generation cost. Results
demonstrated high sentiment alignment across all methods, with Pearson
correlations ranging from 0.93 to 0.97. Supply-Word exhibited the highest
diversity and coverage of PDT terms, although with increased generation costs.
Despite minor biases toward positive sentiments, in situations with limited
test data, LLM-generated synthetic data offers significant advantages,
including scalability, cost savings, and flexibility in dataset production.
| no_new_dataset | 0.932392 |
2411.13610 | Hao Ju | Hao Ju, Shaofei Huang, Si Liu, Zhedong Zheng | Video2BEV: Transforming Drone Videos to BEVs for Video-based
Geo-localization | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing approaches to drone visual geo-localization predominantly adopt the
image-based setting, where a single drone-view snapshot is matched with images
from other platforms. Such task formulation, however, underutilizes the
inherent video output of the drone and is sensitive to occlusions and viewpoint
disparity. To address these limitations, we formulate a new video-based drone
geo-localization task and propose the Video2BEV paradigm. This paradigm
transforms the video into a Bird's Eye View (BEV), simplifying the subsequent
\textbf{inter-platform} matching process. In particular, we employ Gaussian
Splatting to reconstruct a 3D scene and obtain the BEV projection. Different
from the existing transform methods, \eg, polar transform, our BEVs preserve
more fine-grained details without significant distortion. To facilitate the
discriminative \textbf{intra-platform} representation learning, our Video2BEV
paradigm also incorporates a diffusion-based module for generating hard
negative samples. To validate our approach, we introduce UniV, a new
video-based geo-localization dataset that extends the image-based
University-1652 dataset. UniV features flight paths at $30^\circ$ and
$45^\circ$ elevation angles with increased frame rates of up to 10 frames per
second (FPS). Extensive experiments on the UniV dataset show that our Video2BEV
paradigm achieves competitive recall rates and outperforms conventional
video-based methods. Compared to other competitive methods, our proposed
approach exhibits robustness at lower elevations with more occlusions.
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2024 01:52:49 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 11:49:58 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ju",
"Hao",
""
],
[
"Huang",
"Shaofei",
""
],
[
"Liu",
"Si",
""
],
[
"Zheng",
"Zhedong",
""
]
]
| TITLE: Video2BEV: Transforming Drone Videos to BEVs for Video-based
Geo-localization
ABSTRACT: Existing approaches to drone visual geo-localization predominantly adopt the
image-based setting, where a single drone-view snapshot is matched with images
from other platforms. Such task formulation, however, underutilizes the
inherent video output of the drone and is sensitive to occlusions and viewpoint
disparity. To address these limitations, we formulate a new video-based drone
geo-localization task and propose the Video2BEV paradigm. This paradigm
transforms the video into a Bird's Eye View (BEV), simplifying the subsequent
\textbf{inter-platform} matching process. In particular, we employ Gaussian
Splatting to reconstruct a 3D scene and obtain the BEV projection. Different
from the existing transform methods, \eg, polar transform, our BEVs preserve
more fine-grained details without significant distortion. To facilitate the
discriminative \textbf{intra-platform} representation learning, our Video2BEV
paradigm also incorporates a diffusion-based module for generating hard
negative samples. To validate our approach, we introduce UniV, a new
video-based geo-localization dataset that extends the image-based
University-1652 dataset. UniV features flight paths at $30^\circ$ and
$45^\circ$ elevation angles with increased frame rates of up to 10 frames per
second (FPS). Extensive experiments on the UniV dataset show that our Video2BEV
paradigm achieves competitive recall rates and outperforms conventional
video-based methods. Compared to other competitive methods, our proposed
approach exhibits robustness at lower elevations with more occlusions.
| new_dataset | 0.972727 |
2411.13842 | Kaihong Wang | Kaihong Wang, Lingzhi Zhang, Jianming Zhang | Detecting Human Artifacts from Text-to-Image Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite recent advancements, text-to-image generation models often produce
images containing artifacts, especially in human figures. These artifacts
appear as poorly generated human bodies, including distorted, missing, or extra
body parts, leading to visual inconsistencies with typical human anatomy and
greatly impairing overall fidelity. In this study, we address this challenge by
curating Human Artifact Dataset (HAD), a diverse dataset specifically designed
to localize human artifacts. HAD comprises over 37,000 images generated by
several popular text-to-image models, annotated for human artifact
localization. Using this dataset, we train the Human Artifact Detection Models
(HADM), which can identify different artifacts across multiple generative
domains and demonstrate strong generalization, even on images from unseen
generators. Additionally, to further improve generators' perception of human
structural coherence, we use the predictions from our HADM as feedback for
diffusion model finetuning. Our experiments confirm a reduction in human
artifacts in the resulting model. Furthermore, we showcase a novel application
of our HADM in an iterative inpainting framework to correct human artifacts in
arbitrary images directly, demonstrating its utility in improving image
quality. Our dataset and detection models are available at:
https://github.com/wangkaihong/HADM.
| [
{
"version": "v1",
"created": "Thu, 21 Nov 2024 05:02:13 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 06:01:01 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Kaihong",
""
],
[
"Zhang",
"Lingzhi",
""
],
[
"Zhang",
"Jianming",
""
]
]
| TITLE: Detecting Human Artifacts from Text-to-Image Models
ABSTRACT: Despite recent advancements, text-to-image generation models often produce
images containing artifacts, especially in human figures. These artifacts
appear as poorly generated human bodies, including distorted, missing, or extra
body parts, leading to visual inconsistencies with typical human anatomy and
greatly impairing overall fidelity. In this study, we address this challenge by
curating Human Artifact Dataset (HAD), a diverse dataset specifically designed
to localize human artifacts. HAD comprises over 37,000 images generated by
several popular text-to-image models, annotated for human artifact
localization. Using this dataset, we train the Human Artifact Detection Models
(HADM), which can identify different artifacts across multiple generative
domains and demonstrate strong generalization, even on images from unseen
generators. Additionally, to further improve generators' perception of human
structural coherence, we use the predictions from our HADM as feedback for
diffusion model finetuning. Our experiments confirm a reduction in human
artifacts in the resulting model. Furthermore, we showcase a novel application
of our HADM in an iterative inpainting framework to correct human artifacts in
arbitrary images directly, demonstrating its utility in improving image
quality. Our dataset and detection models are available at:
https://github.com/wangkaihong/HADM.
| new_dataset | 0.960025 |
2411.13888 | Xiaorui Qi | Xiaorui Qi, Yanlong Wen, and Xiaojie Yuan | A Hierarchical Scale-free Graph Generator under Limited Resources | under review | null | null | null | cs.DM cs.SI | http://creativecommons.org/licenses/by/4.0/ | Graph generation is one of the most challenging tasks in recent years, and
its core is to learn the ground truth distribution hiding in the training data.
However, training data may not be available due to security concerns or
unaffordable costs, which severely hampers the learning models, especially the
deep generative models. The dilemma leads us to rethink non-learned generation
methods based on graph invariant features. Based on the observation of
scale-free property, we propose a hierarchical scale-free graph generation
algorithm. Specifically, we design a two-stage generation strategy. In the
first stage, we sample multiple anchor nodes to further guide the formation of
substructures, splitting the initial node set into multiple ones. Next, we
progressively generate edges by sampling nodes through a degree mixing
distribution, adjusting the tolerance towards exotic structures via two
thresholds. We provide theoretical guarantees for hierarchical generation and
verify the effectiveness of our method under 12 datasets of three categories.
Experimental results show that our method fits the ground truth distribution
better than various generation strategies and other distribution observations.
| [
{
"version": "v1",
"created": "Thu, 21 Nov 2024 07:03:10 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 02:44:24 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Qi",
"Xiaorui",
""
],
[
"Wen",
"Yanlong",
""
],
[
"Yuan",
"Xiaojie",
""
]
]
| TITLE: A Hierarchical Scale-free Graph Generator under Limited Resources
ABSTRACT: Graph generation is one of the most challenging tasks in recent years, and
its core is to learn the ground truth distribution hiding in the training data.
However, training data may not be available due to security concerns or
unaffordable costs, which severely hampers the learning models, especially the
deep generative models. The dilemma leads us to rethink non-learned generation
methods based on graph invariant features. Based on the observation of
scale-free property, we propose a hierarchical scale-free graph generation
algorithm. Specifically, we design a two-stage generation strategy. In the
first stage, we sample multiple anchor nodes to further guide the formation of
substructures, splitting the initial node set into multiple ones. Next, we
progressively generate edges by sampling nodes through a degree mixing
distribution, adjusting the tolerance towards exotic structures via two
thresholds. We provide theoretical guarantees for hierarchical generation and
verify the effectiveness of our method under 12 datasets of three categories.
Experimental results show that our method fits the ground truth distribution
better than various generation strategies and other distribution observations.
| no_new_dataset | 0.949576 |
2411.14717 | Binqian Xu | Binqian Xu, Xiangbo Shu, Haiyang Mei, Guosen Xie, Basura Fernando, and
Jinhui Tang | FedMLLM: Federated Fine-tuning MLLM on Multimodal Heterogeneity Data | null | null | null | null | cs.LG cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal Large Language Models (MLLMs) have made significant advancements,
demonstrating powerful capabilities in processing and understanding multimodal
data. Fine-tuning MLLMs with Federated Learning (FL) allows for expanding the
training data scope by including private data sources, thereby enhancing their
practical applicability in privacy-sensitive domains. However, current research
remains in the early stage, particularly in addressing the \textbf{multimodal
heterogeneities} in real-world applications. In this paper, we introduce a
benchmark to evaluate the performance of federated fine-tuning of MLLMs across
various multimodal heterogeneous scenarios, laying the groundwork for future
research in the field. Our benchmark includes two lightweight MLLMs, two
downstream tasks, three evaluation metrics, and five datasets across three
domains, along with six comparison baselines, covering over ten types of
modality heterogeneities across four multimodal scenarios. To address the
challenges posed by multimodal heterogeneity, we develop a general FedMLLM
framework that integrates classic FL methods alongside two modality-agnostic
strategies. Extensive experimental results show that our proposed FL paradigm
improves the performance of MLLMs by broadening the range of training data and
mitigating multimodal heterogeneity. Code is available in supplementary
materials.
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2024 04:09:23 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 13:10:57 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Xu",
"Binqian",
""
],
[
"Shu",
"Xiangbo",
""
],
[
"Mei",
"Haiyang",
""
],
[
"Xie",
"Guosen",
""
],
[
"Fernando",
"Basura",
""
],
[
"Tang",
"Jinhui",
""
]
]
| TITLE: FedMLLM: Federated Fine-tuning MLLM on Multimodal Heterogeneity Data
ABSTRACT: Multimodal Large Language Models (MLLMs) have made significant advancements,
demonstrating powerful capabilities in processing and understanding multimodal
data. Fine-tuning MLLMs with Federated Learning (FL) allows for expanding the
training data scope by including private data sources, thereby enhancing their
practical applicability in privacy-sensitive domains. However, current research
remains in the early stage, particularly in addressing the \textbf{multimodal
heterogeneities} in real-world applications. In this paper, we introduce a
benchmark to evaluate the performance of federated fine-tuning of MLLMs across
various multimodal heterogeneous scenarios, laying the groundwork for future
research in the field. Our benchmark includes two lightweight MLLMs, two
downstream tasks, three evaluation metrics, and five datasets across three
domains, along with six comparison baselines, covering over ten types of
modality heterogeneities across four multimodal scenarios. To address the
challenges posed by multimodal heterogeneity, we develop a general FedMLLM
framework that integrates classic FL methods alongside two modality-agnostic
strategies. Extensive experimental results show that our proposed FL paradigm
improves the performance of MLLMs by broadening the range of training data and
mitigating multimodal heterogeneity. Code is available in supplementary
materials.
| new_dataset | 0.903081 |
2411.14796 | Youwei Zhou | Youwei Zhou and Tianyang Xu and Cong Wu and Xiaojun Wu and Josef
Kittler | Adaptive Hyper-Graph Convolution Network for Skeleton-based Human Action
Recognition with Virtual Connections | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The shared topology of human skeletons motivated the recent investigation of
graph convolutional network (GCN) solutions for action recognition. However,
most of the existing GCNs rely on the binary connection of two neighboring
vertices (joints) formed by an edge (bone), overlooking the potential of
constructing multi-vertex convolution structures. Although some studies have
attempted to utilize hyper-graphs to represent the topology, they rely on a
fixed construction strategy, which limits their adaptivity in uncovering the
intricate latent relationships within the action. In this paper, we address
this oversight and explore the merits of an adaptive hyper-graph convolutional
network (Hyper-GCN) to achieve the aggregation of rich semantic information
conveyed by skeleton vertices. In particular, our Hyper-GCN adaptively
optimises the hyper-graphs during training, revealing the action-driven
multi-vertex relations. Besides, virtual connections are often designed to
support efficient feature aggregation, implicitly extending the spectrum of
dependencies within the skeleton. By injecting virtual connections into
hyper-graphs, the semantic clues of diverse action categories can be
highlighted. The results of experiments conducted on the NTU-60, NTU-120, and
NW-UCLA datasets demonstrate the merits of our Hyper-GCN, compared to the
state-of-the-art methods. Specifically, we outperform the existing solutions on
NTU-120, achieving 90.5\% and 91.7\% in terms of the top-1 recognition accuracy
on X-Sub and X-Set.
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2024 08:41:33 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 08:14:25 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhou",
"Youwei",
""
],
[
"Xu",
"Tianyang",
""
],
[
"Wu",
"Cong",
""
],
[
"Wu",
"Xiaojun",
""
],
[
"Kittler",
"Josef",
""
]
]
| TITLE: Adaptive Hyper-Graph Convolution Network for Skeleton-based Human Action
Recognition with Virtual Connections
ABSTRACT: The shared topology of human skeletons motivated the recent investigation of
graph convolutional network (GCN) solutions for action recognition. However,
most of the existing GCNs rely on the binary connection of two neighboring
vertices (joints) formed by an edge (bone), overlooking the potential of
constructing multi-vertex convolution structures. Although some studies have
attempted to utilize hyper-graphs to represent the topology, they rely on a
fixed construction strategy, which limits their adaptivity in uncovering the
intricate latent relationships within the action. In this paper, we address
this oversight and explore the merits of an adaptive hyper-graph convolutional
network (Hyper-GCN) to achieve the aggregation of rich semantic information
conveyed by skeleton vertices. In particular, our Hyper-GCN adaptively
optimises the hyper-graphs during training, revealing the action-driven
multi-vertex relations. Besides, virtual connections are often designed to
support efficient feature aggregation, implicitly extending the spectrum of
dependencies within the skeleton. By injecting virtual connections into
hyper-graphs, the semantic clues of diverse action categories can be
highlighted. The results of experiments conducted on the NTU-60, NTU-120, and
NW-UCLA datasets demonstrate the merits of our Hyper-GCN, compared to the
state-of-the-art methods. Specifically, we outperform the existing solutions on
NTU-120, achieving 90.5\% and 91.7\% in terms of the top-1 recognition accuracy
on X-Sub and X-Set.
| no_new_dataset | 0.948394 |
2411.15239 | Evelyn Mannix | Evelyn J. Mannix, Liam Hodgkinson and Howard Bondell | Preserving Angles Improves Feature Distillation of Foundation Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Knowledge distillation approaches compress models by training a student
network using the classification outputs of a high quality teacher model, but
can fail to effectively transfer the properties of computer vision foundation
models from the teacher to the student. While it has been recently shown that
feature distillation$\unicode{x2013}$where a teacher model's output features
are replicated instead$\unicode{x2013}$can reproduce performance for foundation
models across numerous downstream tasks, they fall short in matching critical
properties such as robustness and out-of-distribution (OOD) detection
performance. This paper overcomes this shortcoming by introducing
Cosine-similarity Preserving Compression (CosPress), a feature distillation
technique that learns a mapping to compress the latent space of the teacher
model into the smaller latent space of the student, by preserving the cosine
similarities between image embeddings. This enables direct optimisation of the
student network and produces a more faithful reproduction of the teacher's
properties. It is shown that distillation with CosPress on a variety of
datasets, including ImageNet, produces more accurate models with greater
performance on generalisability, robustness and OOD detection benchmarks, and
that this technique provides a competitive pathway for training highly
performant lightweight models on small datasets. Code is available at
https://github.com/emannix/cospress.
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2024 01:48:44 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 00:51:39 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Mannix",
"Evelyn J.",
""
],
[
"Hodgkinson",
"Liam",
""
],
[
"Bondell",
"Howard",
""
]
]
| TITLE: Preserving Angles Improves Feature Distillation of Foundation Models
ABSTRACT: Knowledge distillation approaches compress models by training a student
network using the classification outputs of a high quality teacher model, but
can fail to effectively transfer the properties of computer vision foundation
models from the teacher to the student. While it has been recently shown that
feature distillation$\unicode{x2013}$where a teacher model's output features
are replicated instead$\unicode{x2013}$can reproduce performance for foundation
models across numerous downstream tasks, they fall short in matching critical
properties such as robustness and out-of-distribution (OOD) detection
performance. This paper overcomes this shortcoming by introducing
Cosine-similarity Preserving Compression (CosPress), a feature distillation
technique that learns a mapping to compress the latent space of the teacher
model into the smaller latent space of the student, by preserving the cosine
similarities between image embeddings. This enables direct optimisation of the
student network and produces a more faithful reproduction of the teacher's
properties. It is shown that distillation with CosPress on a variety of
datasets, including ImageNet, produces more accurate models with greater
performance on generalisability, robustness and OOD detection benchmarks, and
that this technique provides a competitive pathway for training highly
performant lightweight models on small datasets. Code is available at
https://github.com/emannix/cospress.
| no_new_dataset | 0.948775 |
2411.15404 | Khalid Hasan | Khalid Hasan, Jamil Saquer | A Comparative Analysis of Transformer and LSTM Models for Detecting
Suicidal Ideation on Reddit | 23rd IEEE International Conference on Machine Learning and
Applications, ICMLA 2024 (camera-ready) | 2024 International Conference on Machine Learning and Applications
(ICMLA) | 10.1109/ICMLA61862.2024.00209 | null | cs.LG cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Suicide is a critical global health problem involving more than 700,000
deaths yearly, particularly among young adults. Many people express their
suicidal thoughts on social media platforms such as Reddit. This paper
evaluates the effectiveness of the deep learning transformer-based models BERT,
RoBERTa, DistilBERT, ALBERT, and ELECTRA and various Long Short-Term Memory
(LSTM) based models in detecting suicidal ideation from user posts on Reddit.
Toward this objective, we curated an extensive dataset from diverse subreddits
and conducted linguistic, topic modeling, and statistical analyses to ensure
data quality. Our results indicate that each model could reach high accuracy
and F1 scores, but among them, RoBERTa emerged as the most effective model with
an accuracy of 93.22% and F1 score of 93.14%. An LSTM model that uses attention
and BERT embeddings performed as the second best, with an accuracy of 92.65%
and an F1 score of 92.69%. Our findings show that transformer-based models have
the potential to improve suicide ideation detection, thereby providing a path
to develop robust mental health monitoring tools from social media. This
research, therefore, underlines the undeniable prospect of advanced techniques
in Natural Language Processing (NLP) while improving suicide prevention
efforts.
| [
{
"version": "v1",
"created": "Sat, 23 Nov 2024 01:17:43 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hasan",
"Khalid",
""
],
[
"Saquer",
"Jamil",
""
]
]
| TITLE: A Comparative Analysis of Transformer and LSTM Models for Detecting
Suicidal Ideation on Reddit
ABSTRACT: Suicide is a critical global health problem involving more than 700,000
deaths yearly, particularly among young adults. Many people express their
suicidal thoughts on social media platforms such as Reddit. This paper
evaluates the effectiveness of the deep learning transformer-based models BERT,
RoBERTa, DistilBERT, ALBERT, and ELECTRA and various Long Short-Term Memory
(LSTM) based models in detecting suicidal ideation from user posts on Reddit.
Toward this objective, we curated an extensive dataset from diverse subreddits
and conducted linguistic, topic modeling, and statistical analyses to ensure
data quality. Our results indicate that each model could reach high accuracy
and F1 scores, but among them, RoBERTa emerged as the most effective model with
an accuracy of 93.22% and F1 score of 93.14%. An LSTM model that uses attention
and BERT embeddings performed as the second best, with an accuracy of 92.65%
and an F1 score of 92.69%. Our findings show that transformer-based models have
the potential to improve suicide ideation detection, thereby providing a path
to develop robust mental health monitoring tools from social media. This
research, therefore, underlines the undeniable prospect of advanced techniques
in Natural Language Processing (NLP) while improving suicide prevention
efforts.
| no_new_dataset | 0.802091 |
2411.15447 | Wei Guo | Wei Guo, Heng Wang, Jianbo Ma, Weidong Cai | Gotta Hear Them All: Sound Source Aware Vision to Audio Generation | 18 pages, 13 figures, source code available at
https://github.com/wguo86/SSV2A | null | null | null | cs.MM cs.CV cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Vision-to-audio (V2A) synthesis has broad applications in multimedia. Recent
advancements of V2A methods have made it possible to generate relevant audios
from inputs of videos or still images. However, the immersiveness and
expressiveness of the generation are limited. One possible problem is that
existing methods solely rely on the global scene and overlook details of local
sounding objects (i.e., sound sources). To address this issue, we propose a
Sound Source-Aware V2A (SSV2A) generator. SSV2A is able to locally perceive
multimodal sound sources from a scene with visual detection and cross-modality
translation. It then contrastively learns a Cross-Modal Sound Source (CMSS)
Manifold to semantically disambiguate each source. Finally, we attentively mix
their CMSS semantics into a rich audio representation, from which a pretrained
audio generator outputs the sound. To model the CMSS manifold, we curate a
novel single-sound-source visual-audio dataset VGGS3 from VGGSound. We also
design a Sound Source Matching Score to measure localized audio relevance. By
addressing V2A generation at the sound-source level, SSV2A surpasses
state-of-the-art methods in both generation fidelity and relevance as evidenced
by extensive experiments. We further demonstrate SSV2A's ability to achieve
intuitive V2A control by compositing vision, text, and audio conditions. Our
generation can be tried and heard at https://ssv2a.github.io/SSV2A-demo .
| [
{
"version": "v1",
"created": "Sat, 23 Nov 2024 04:27:19 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Nov 2024 03:49:11 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Mar 2025 11:22:27 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Guo",
"Wei",
""
],
[
"Wang",
"Heng",
""
],
[
"Ma",
"Jianbo",
""
],
[
"Cai",
"Weidong",
""
]
]
| TITLE: Gotta Hear Them All: Sound Source Aware Vision to Audio Generation
ABSTRACT: Vision-to-audio (V2A) synthesis has broad applications in multimedia. Recent
advancements of V2A methods have made it possible to generate relevant audios
from inputs of videos or still images. However, the immersiveness and
expressiveness of the generation are limited. One possible problem is that
existing methods solely rely on the global scene and overlook details of local
sounding objects (i.e., sound sources). To address this issue, we propose a
Sound Source-Aware V2A (SSV2A) generator. SSV2A is able to locally perceive
multimodal sound sources from a scene with visual detection and cross-modality
translation. It then contrastively learns a Cross-Modal Sound Source (CMSS)
Manifold to semantically disambiguate each source. Finally, we attentively mix
their CMSS semantics into a rich audio representation, from which a pretrained
audio generator outputs the sound. To model the CMSS manifold, we curate a
novel single-sound-source visual-audio dataset VGGS3 from VGGSound. We also
design a Sound Source Matching Score to measure localized audio relevance. By
addressing V2A generation at the sound-source level, SSV2A surpasses
state-of-the-art methods in both generation fidelity and relevance as evidenced
by extensive experiments. We further demonstrate SSV2A's ability to achieve
intuitive V2A control by compositing vision, text, and audio conditions. Our
generation can be tried and heard at https://ssv2a.github.io/SSV2A-demo .
| new_dataset | 0.940134 |
2411.15867 | Teng Zhou | Teng Zhou, Xiaoyu Zhang, Yongchuan Tang | PanoLlama: Generating Endless and Coherent Panoramas with
Next-Token-Prediction LLMs | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Panoramic Image Generation (PIG) aims to create coherent images of arbitrary
lengths. Most existing methods fall in the joint diffusion paradigm, but their
complex and heuristic crop connection designs often limit their ability to
achieve multilevel coherence. By deconstructing this challenge into its core
components, we find it naturally aligns with next-token prediction, leading us
to adopt an autoregressive (AR) paradigm for PIG modeling. However, existing
visual AR (VAR) models are limited to fixed-size generation, lacking the
capability to produce panoramic images. In this paper, we propose PanoLlama, a
novel framework that achieves endless and coherent panorama generation with the
autoregressive paradigm. Our approach develops a training-free strategy that
utilizes token redirection to overcome the size limitations of existing VAR
models, enabling next-crop prediction in both horizontal and vertical
directions. This refreshes the PIG pipeline while achieving SOTA performance in
coherence (47.50\%), fidelity(28.16\%), and aesthetics (15\%). Additionally,
PanoLlama supports applications other PIG methods cannot achieve, including
mask-free layout control, multi-scale and multi-guidance synthesis. To
facilitate standardized evaluation, we also establish a dataset with 1,000
prompts spanning 100+ themes, providing a new testing benchmark for PIG
research.
| [
{
"version": "v1",
"created": "Sun, 24 Nov 2024 15:06:57 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 04:50:28 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhou",
"Teng",
""
],
[
"Zhang",
"Xiaoyu",
""
],
[
"Tang",
"Yongchuan",
""
]
]
| TITLE: PanoLlama: Generating Endless and Coherent Panoramas with
Next-Token-Prediction LLMs
ABSTRACT: Panoramic Image Generation (PIG) aims to create coherent images of arbitrary
lengths. Most existing methods fall in the joint diffusion paradigm, but their
complex and heuristic crop connection designs often limit their ability to
achieve multilevel coherence. By deconstructing this challenge into its core
components, we find it naturally aligns with next-token prediction, leading us
to adopt an autoregressive (AR) paradigm for PIG modeling. However, existing
visual AR (VAR) models are limited to fixed-size generation, lacking the
capability to produce panoramic images. In this paper, we propose PanoLlama, a
novel framework that achieves endless and coherent panorama generation with the
autoregressive paradigm. Our approach develops a training-free strategy that
utilizes token redirection to overcome the size limitations of existing VAR
models, enabling next-crop prediction in both horizontal and vertical
directions. This refreshes the PIG pipeline while achieving SOTA performance in
coherence (47.50\%), fidelity(28.16\%), and aesthetics (15\%). Additionally,
PanoLlama supports applications other PIG methods cannot achieve, including
mask-free layout control, multi-scale and multi-guidance synthesis. To
facilitate standardized evaluation, we also establish a dataset with 1,000
prompts spanning 100+ themes, providing a new testing benchmark for PIG
research.
| new_dataset | 0.943191 |
2411.15869 | Sule Bai | Sule Bai, Yong Liu, Yifei Han, Haoji Zhang, Yansong Tang | Self-Calibrated CLIP for Training-Free Open-Vocabulary Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in pre-trained vision-language models like CLIP, have
enabled the task of open-vocabulary segmentation. CLIP demonstrates impressive
zero-shot capabilities in various downstream tasks that require holistic image
understanding. However, due to its image-level pre-training, CLIP struggles to
capture local details, resulting in poor performance in segmentation tasks. Our
analysis reveals that anomaly tokens emerge during the forward pass, drawing
excessive attention from normal patch tokens, thereby diminishing spatial
awareness. To address this issue, we propose Self-Calibrated CLIP (SC-CLIP), a
training-free method that calibrates CLIP to produce finer representations
while preserving its original generalization ability, without introducing new
parameters or relying on additional backbones. Specifically, we first identify
and resolve the anomaly tokens to mitigate their negative impact. Next, we
enhance feature discriminability and attention correlation by leveraging the
semantic consistency found in CLIP's intermediate features. Furthermore, we
explore how to effectively employ multi-level feature fusion under the
training-free setting. Collectively, these strategies enhance CLIP's feature
representation with greater granularity and coherence. Experimental results
demonstrate the effectiveness of SC-CLIP, achieving state-of-the-art results
across all datasets and surpassing previous methods by 9.5%. Notably, SC-CLIP
boosts the performance of vanilla CLIP ViT-L/14 by 6.8 times. Our source code
is available at https://github.com/SuleBai/SC-CLIP.
| [
{
"version": "v1",
"created": "Sun, 24 Nov 2024 15:14:05 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 09:35:03 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Bai",
"Sule",
""
],
[
"Liu",
"Yong",
""
],
[
"Han",
"Yifei",
""
],
[
"Zhang",
"Haoji",
""
],
[
"Tang",
"Yansong",
""
]
]
| TITLE: Self-Calibrated CLIP for Training-Free Open-Vocabulary Segmentation
ABSTRACT: Recent advancements in pre-trained vision-language models like CLIP, have
enabled the task of open-vocabulary segmentation. CLIP demonstrates impressive
zero-shot capabilities in various downstream tasks that require holistic image
understanding. However, due to its image-level pre-training, CLIP struggles to
capture local details, resulting in poor performance in segmentation tasks. Our
analysis reveals that anomaly tokens emerge during the forward pass, drawing
excessive attention from normal patch tokens, thereby diminishing spatial
awareness. To address this issue, we propose Self-Calibrated CLIP (SC-CLIP), a
training-free method that calibrates CLIP to produce finer representations
while preserving its original generalization ability, without introducing new
parameters or relying on additional backbones. Specifically, we first identify
and resolve the anomaly tokens to mitigate their negative impact. Next, we
enhance feature discriminability and attention correlation by leveraging the
semantic consistency found in CLIP's intermediate features. Furthermore, we
explore how to effectively employ multi-level feature fusion under the
training-free setting. Collectively, these strategies enhance CLIP's feature
representation with greater granularity and coherence. Experimental results
demonstrate the effectiveness of SC-CLIP, achieving state-of-the-art results
across all datasets and surpassing previous methods by 9.5%. Notably, SC-CLIP
boosts the performance of vanilla CLIP ViT-L/14 by 6.8 times. Our source code
is available at https://github.com/SuleBai/SC-CLIP.
| no_new_dataset | 0.947284 |
2411.17376 | Ryo Fujii | Ryo Fujii, Hideo Saito and Ryo Hachiuma | RealTraj: Towards Real-World Pedestrian Trajectory Forecasting | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | This paper jointly addresses three key limitations in conventional pedestrian
trajectory forecasting: pedestrian perception errors, real-world data
collection costs, and person ID annotation costs. We propose a novel framework,
RealTraj, that enhances the real-world applicability of trajectory forecasting.
Our approach includes two training phases -- self-supervised pretraining on
synthetic data and weakly-supervised fine-tuning with limited real-world data
-- to minimize data collection efforts. To improve robustness to real-world
errors, we focus on both model design and training objectives. Specifically, we
present Det2TrajFormer, a trajectory forecasting model that remains invariant
to tracking noise by using past detections as inputs. Additionally, we pretrain
the model using multiple pretext tasks, which enhance robustness and improve
forecasting performance based solely on detection data. Unlike previous
trajectory forecasting methods, our approach fine-tunes the model using only
ground-truth detections, reducing the need for costly person ID annotations. In
the experiments, we comprehensively verify the effectiveness of the proposed
method against the limitations, and the method outperforms state-of-the-art
trajectory forecasting methods on multiple datasets. The code will be released
at https://fujiry0.github.io/RealTraj-project-page.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 12:35:26 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Nov 2024 06:08:02 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 13:26:35 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Fujii",
"Ryo",
""
],
[
"Saito",
"Hideo",
""
],
[
"Hachiuma",
"Ryo",
""
]
]
| TITLE: RealTraj: Towards Real-World Pedestrian Trajectory Forecasting
ABSTRACT: This paper jointly addresses three key limitations in conventional pedestrian
trajectory forecasting: pedestrian perception errors, real-world data
collection costs, and person ID annotation costs. We propose a novel framework,
RealTraj, that enhances the real-world applicability of trajectory forecasting.
Our approach includes two training phases -- self-supervised pretraining on
synthetic data and weakly-supervised fine-tuning with limited real-world data
-- to minimize data collection efforts. To improve robustness to real-world
errors, we focus on both model design and training objectives. Specifically, we
present Det2TrajFormer, a trajectory forecasting model that remains invariant
to tracking noise by using past detections as inputs. Additionally, we pretrain
the model using multiple pretext tasks, which enhance robustness and improve
forecasting performance based solely on detection data. Unlike previous
trajectory forecasting methods, our approach fine-tunes the model using only
ground-truth detections, reducing the need for costly person ID annotations. In
the experiments, we comprehensively verify the effectiveness of the proposed
method against the limitations, and the method outperforms state-of-the-art
trajectory forecasting methods on multiple datasets. The code will be released
at https://fujiry0.github.io/RealTraj-project-page.
| no_new_dataset | 0.95096 |
2411.17766 | Zhiming Xu | Zhiming Xu, Suorong Yang, Baile Xu, Jian Zhao, Furao Shen | Integrating Dual Prototypes for Task-Wise Adaption in Pre-Trained
Model-Based Class-Incremental Learning | 9 pages,6 figures,2 tables | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Class-incremental learning (CIL) aims to acquire new classes while conserving
historical knowledge incrementally. Despite existing pre-trained model (PTM)
based methods performing excellently in CIL, it is better to fine-tune them on
downstream incremental tasks with massive patterns unknown to PTMs. However,
using task streams for fine-tuning could lead to catastrophic forgetting that
will erase the knowledge in PTMs. This paper proposes the Dual Prototype
network for Task-wise Adaption (DPTA) of PTM-based CIL. For each incremental
learning task, a task-wise adapter module is built to fine-tune the PTM, where
the center-adapt loss forces the representation to be more centrally clustered
and class separable. The dual prototype network improves the prediction process
by enabling test-time adapter selection, where the raw prototypes deduce
several possible task indexes of test samples to select suitable adapter
modules for PTM, and the augmented prototypes that could separate highly
correlated classes are utilized to determine the final result. Experiments on
several benchmark datasets demonstrate the state-of-the-art performance of
DPTA. The code will be open-sourced after the paper is published.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 05:04:38 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 02:58:33 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Xu",
"Zhiming",
""
],
[
"Yang",
"Suorong",
""
],
[
"Xu",
"Baile",
""
],
[
"Zhao",
"Jian",
""
],
[
"Shen",
"Furao",
""
]
]
| TITLE: Integrating Dual Prototypes for Task-Wise Adaption in Pre-Trained
Model-Based Class-Incremental Learning
ABSTRACT: Class-incremental learning (CIL) aims to acquire new classes while conserving
historical knowledge incrementally. Although existing pre-trained model (PTM)
based methods perform excellently in CIL, it is still preferable to fine-tune them on
downstream incremental tasks containing massive patterns unknown to the PTMs. However,
using task streams for fine-tuning could lead to catastrophic forgetting that
will erase the knowledge in PTMs. This paper proposes the Dual Prototype
network for Task-wise Adaption (DPTA) of PTM-based CIL. For each incremental
learning task, a task-wise adapter module is built to fine-tune the PTM, where
the center-adapt loss forces the representation to be more centrally clustered
and class separable. The dual prototype network improves the prediction process
by enabling test-time adapter selection, where the raw prototypes deduce
several possible task indexes of test samples to select suitable adapter
modules for PTM, and the augmented prototypes that could separate highly
correlated classes are utilized to determine the final result. Experiments on
several benchmark datasets demonstrate the state-of-the-art performance of
DPTA. The code will be open-sourced after the paper is published.
| no_new_dataset | 0.948298 |
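As a rough illustration of the dual-prototype inference described in the DPTA abstract above, the sketch below uses raw prototypes to shortlist candidate task indexes and augmented prototypes to pick the final class. The feature dimensions, the dot-product similarity, and the omission of the per-task adapter forward pass are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch (not the authors' code) of prototype-driven test-time
# adapter selection: raw prototypes suggest candidate task ids, and the
# augmented prototypes of those tasks give the final class prediction.
def select_tasks(feature, raw_prototypes, top_k=2):
    """raw_prototypes: dict task_id -> (num_classes, dim) array."""
    scores = {t: float(np.max(protos @ feature)) for t, protos in raw_prototypes.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def classify(feature, candidate_tasks, augmented_prototypes):
    """augmented_prototypes: dict task_id -> dict class_id -> (dim,) array."""
    best_class, best_sim = None, -np.inf
    for t in candidate_tasks:
        for c, proto in augmented_prototypes[t].items():
            sim = float(proto @ feature)
            if sim > best_sim:
                best_class, best_sim = c, sim
    return best_class

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 8
    raw = {t: rng.normal(size=(3, dim)) for t in range(4)}
    aug = {t: {c: raw[t][c] + 0.1 * rng.normal(size=dim) for c in range(3)} for t in range(4)}
    x = rng.normal(size=dim)
    tasks = select_tasks(x, raw)
    print("candidate tasks:", tasks, "-> predicted class:", classify(x, tasks, aug))
```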
2411.17771 | Xinyu Zhang | Xinyu Zhang, Lingling Zhang, Yanrui Wu, Muye Huang, Wenjun Wu, Bo Li,
Shaowei Wang, Basura Fernando, Jun Liu | DiagramQG: Concept-Focused Diagram Question Generation via Hierarchical
Knowledge Integration | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Visual Question Generation (VQG) has gained significant attention due to its
potential in educational applications. However, VQG research mainly focuses on
natural images, largely neglecting diagrams in educational materials used to
assess students' conceptual understanding. To address this gap, we construct
DiagramQG, a dataset containing 8,372 diagrams and 19,475 questions across
various subjects. DiagramQG introduces concept and target text constraints,
guiding the model to generate concept-focused questions for educational
purposes. Meanwhile, we present the Hierarchical Knowledge Integration
framework for Diagram Question Generation (HKI-DQG) as a strong baseline. This
framework obtains multi-scale patches of diagrams and acquires knowledge using
a visual language model with frozen parameters. It then integrates knowledge,
text constraints, and patches to generate concept-focused questions. We
evaluate the performance of existing VQG models, open-source and closed-source
vision-language models, and HKI-DQG on the DiagramQG dataset. Our novel HKI-DQG
consistently outperforms existing methods, demonstrating that it serves as a
strong baseline. Furthermore, we apply HKI-DQG to four other VQG datasets of
natural images, namely VQG-COCO, K-VQG, OK-VQA, and A-OKVQA, achieving
state-of-the-art performance.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 08:27:50 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2025 15:16:17 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 07:48:31 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Xinyu",
""
],
[
"Zhang",
"Lingling",
""
],
[
"Wu",
"Yanrui",
""
],
[
"Huang",
"Muye",
""
],
[
"Wu",
"Wenjun",
""
],
[
"Li",
"Bo",
""
],
[
"Wang",
"Shaowei",
""
],
[
"Fernando",
"Basura",
""
],
[
"Liu",
"Jun",
""
]
]
| TITLE: DiagramQG: Concept-Focused Diagram Question Generation via Hierarchical
Knowledge Integration
ABSTRACT: Visual Question Generation (VQG) has gained significant attention due to its
potential in educational applications. However, VQG research mainly focuses on
natural images, largely neglecting diagrams in educational materials used to
assess students' conceptual understanding. To address this gap, we construct
DiagramQG, a dataset containing 8,372 diagrams and 19,475 questions across
various subjects. DiagramQG introduces concept and target text constraints,
guiding the model to generate concept-focused questions for educational
purposes. Meanwhile, we present the Hierarchical Knowledge Integration
framework for Diagram Question Generation (HKI-DQG) as a strong baseline. This
framework obtains multi-scale patches of diagrams and acquires knowledge using
a visual language model with frozen parameters. It then integrates knowledge,
text constraints, and patches to generate concept-focused questions. We
evaluate the performance of existing VQG models, open-source and closed-source
vision-language models, and HKI-DQG on the DiagramQG dataset. Our novel HKI-DQG
consistently outperforms existing methods, demonstrating that it serves as a
strong baseline. Furthermore, we apply HKI-DQG to four other VQG datasets of
natural images, namely VQG-COCO, K-VQG, OK-VQA, and A-OKVQA, achieving
state-of-the-art performance.
| new_dataset | 0.971966 |
2411.18104 | Yifan Zhang | Yifan Zhang | Training and Evaluating Language Models with Template-based Data
Generation | 9 pages, 2 figures | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid advancement of large language models (LLMs) such as GPT-3, PaLM,
and Llama has significantly transformed natural language processing, showcasing
remarkable capabilities in understanding and generating language. However,
these models often struggle with tasks requiring complex reasoning,
particularly in mathematical problem-solving, due in part to the scarcity of
large-scale, high-quality, domain-specific datasets necessary for training
sophisticated reasoning abilities. To address this limitation, we introduce
Template-based Data Generation (TDG), a novel approach that leverages LLMs
(GPT-4) to automatically generate parameterized meta-templates, which are then
used to synthesize a vast array of high-quality problems and solutions.
Leveraging TDG, we create TemplateMath Part I: TemplateGSM, a dataset
comprising over 7 million synthetically generated grade school math
problems--each accompanied by code-based and natural language solutions--with
the potential to generate an effectively unlimited number of additional problems. This dataset
alleviates the scarcity of large-scale mathematical datasets and serves as a
valuable resource for pre-training, fine-tuning, and evaluating LLMs in
mathematical reasoning. Our method not only enables the generation of virtually
infinite data but also elevates data augmentation to a new level by using GPT-4
for meta-template generation, ensuring diverse and high-quality problem
structures. The TemplateMath Part I: TemplateGSM dataset is publicly available
at https://huggingface.co/datasets/math-ai/TemplateGSM. The code is available
at https://github.com/iiis-ai/TemplateMath.
| [
{
"version": "v1",
"created": "Wed, 27 Nov 2024 07:32:56 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 05:54:29 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Mar 2025 01:18:23 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Yifan",
""
]
]
| TITLE: Training and Evaluating Language Models with Template-based Data
Generation
ABSTRACT: The rapid advancement of large language models (LLMs) such as GPT-3, PaLM,
and Llama has significantly transformed natural language processing, showcasing
remarkable capabilities in understanding and generating language. However,
these models often struggle with tasks requiring complex reasoning,
particularly in mathematical problem-solving, due in part to the scarcity of
large-scale, high-quality, domain-specific datasets necessary for training
sophisticated reasoning abilities. To address this limitation, we introduce
Template-based Data Generation (TDG), a novel approach that leverages LLMs
(GPT-4) to automatically generate parameterized meta-templates, which are then
used to synthesize a vast array of high-quality problems and solutions.
Leveraging TDG, we create TemplateMath Part I: TemplateGSM, a dataset
comprising over 7 million synthetically generated grade school math
problems--each accompanied by code-based and natural language solutions--with
the potential to generate an effectively unlimited number of additional problems. This dataset
alleviates the scarcity of large-scale mathematical datasets and serves as a
valuable resource for pre-training, fine-tuning, and evaluating LLMs in
mathematical reasoning. Our method not only enables the generation of virtually
infinite data but also elevates data augmentation to a new level by using GPT-4
for meta-template generation, ensuring diverse and high-quality problem
structures. The TemplateMath Part I: TemplateGSM dataset is publicly available
at https://huggingface.co/datasets/math-ai/TemplateGSM. The code is available
at https://github.com/iiis-ai/TemplateMath.
| new_dataset | 0.955277 |
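To make the template-based generation idea in the abstract above concrete, here is a minimal sketch that instantiates one hand-written meta-template into several problem/solution pairs. The template text, parameter ranges, and names are invented for illustration; TDG itself has GPT-4 author the parameterized meta-templates.

```python
import random

# A hand-written meta-template in the spirit of TDG; the real pipeline prompts
# GPT-4 to produce such templates, which are then instantiated at scale.
TEMPLATE = {
    "problem": "{name} buys {n} boxes of pencils. Each box holds {k} pencils. "
               "How many pencils does {name} have in total?",
    "solution_text": "{name} has {n} * {k} = {answer} pencils.",
    "solution_code": "def solve(n={n}, k={k}):\n    return n * k",
}

def instantiate(template, rng):
    """Sample parameters and render one problem/solution pair."""
    params = {
        "name": rng.choice(["Ava", "Ben", "Chen"]),
        "n": rng.randint(2, 20),
        "k": rng.randint(2, 12),
    }
    params["answer"] = params["n"] * params["k"]
    return {key: text.format(**params) for key, text in template.items()}

if __name__ == "__main__":
    rng = random.Random(0)
    for sample in (instantiate(TEMPLATE, rng) for _ in range(3)):
        print(sample["problem"])
        print(sample["solution_text"])
        print(sample["solution_code"])
        print("---")
```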
2411.18260 | Joanne Boisson | Joanne Boisson, Arif Mehmood and Jose Camacho-Collados | MetaphorShare: A Dynamic Collaborative Repository of Open Metaphor
Datasets | Accepted in NAACL 2025 system demonstration track | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The metaphor studies community has developed numerous valuable labelled
corpora in various languages over the years. Many of these resources are not
only unknown to the NLP community, but are also often not easily shared among
the researchers. Both in human sciences and in NLP, researchers could benefit
from a centralised database of labelled resources, easily accessible and
unified under an identical format. To facilitate this, we present
MetaphorShare, a website to integrate metaphor datasets making them open and
accessible. With this effort, our aim is to encourage researchers to share and
upload more datasets in any language in order to facilitate metaphor studies
and the development of future metaphor processing NLP systems. The website has
four main functionalities: upload, download, search and label metaphor
datasets. It is accessible at www.metaphorshare.com.
| [
{
"version": "v1",
"created": "Wed, 27 Nov 2024 11:58:34 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Dec 2024 16:28:19 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 12:09:20 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Boisson",
"Joanne",
""
],
[
"Mehmood",
"Arif",
""
],
[
"Camacho-Collados",
"Jose",
""
]
]
| TITLE: MetaphorShare: A Dynamic Collaborative Repository of Open Metaphor
Datasets
ABSTRACT: The metaphor studies community has developed numerous valuable labelled
corpora in various languages over the years. Many of these resources are not
only unknown to the NLP community, but are also often not easily shared among
the researchers. Both in human sciences and in NLP, researchers could benefit
from a centralised database of labelled resources, easily accessible and
unified under an identical format. To facilitate this, we present
MetaphorShare, a website to integrate metaphor datasets making them open and
accessible. With this effort, our aim is to encourage researchers to share and
upload more datasets in any language in order to facilitate metaphor studies
and the development of future metaphor processing NLP systems. The website has
four main functionalities: upload, download, search and label metaphor
datasets. It is accessible at www.metaphorshare.com.
| no_new_dataset | 0.946892 |
2411.19865 | Justin Chih-Yao Chen | Justin Chih-Yao Chen, Zifeng Wang, Hamid Palangi, Rujun Han, Sayna
Ebrahimi, Long Le, Vincent Perot, Swaroop Mishra, Mohit Bansal, Chen-Yu Lee,
Tomas Pfister | Reverse Thinking Makes LLMs Stronger Reasoners | Accepted to NAACL 2025 | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Reverse thinking plays a crucial role in human reasoning. Humans can reason
not only from a problem to a solution but also in reverse, i.e., start from the
solution and reason towards the problem. This often enhances overall reasoning
performance as it enables consistency checks between their forward and backward
thinking. To enable Large Language Models (LLMs) to perform reverse thinking,
we introduce Reverse-Enhanced Thinking (RevThink), a framework composed of data
augmentation and learning objectives. In RevThink, we augment the dataset by
collecting structured forward-backward reasoning from a teacher model,
consisting of: (1) the original question, (2) forward reasoning, (3) backward
question, and (4) backward reasoning. We then employ three objectives to train
a smaller student model in a multi-task learning fashion: (a) generate forward
reasoning from a question, (b) generate a backward question from a question,
and (c) generate backward reasoning from the backward question. Experiments
across 12 datasets covering commonsense, math, and logical reasoning show an
average 13.53% improvement over the student model's zero-shot performance and a
6.84% improvement over the strongest knowledge distillation baselines.
Moreover, our method demonstrates sample efficiency -- using only 10% of the
correct forward reasoning from the training data, it outperforms a standard
fine-tuning method trained on 10x more forward reasoning. RevThink also
exhibits strong generalization to out-of-distribution held-out datasets.
| [
{
"version": "v1",
"created": "Fri, 29 Nov 2024 17:27:05 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 20:33:35 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Chen",
"Justin Chih-Yao",
""
],
[
"Wang",
"Zifeng",
""
],
[
"Palangi",
"Hamid",
""
],
[
"Han",
"Rujun",
""
],
[
"Ebrahimi",
"Sayna",
""
],
[
"Le",
"Long",
""
],
[
"Perot",
"Vincent",
""
],
[
"Mishra",
"Swaroop",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Lee",
"Chen-Yu",
""
],
[
"Pfister",
"Tomas",
""
]
]
| TITLE: Reverse Thinking Makes LLMs Stronger Reasoners
ABSTRACT: Reverse thinking plays a crucial role in human reasoning. Humans can reason
not only from a problem to a solution but also in reverse, i.e., start from the
solution and reason towards the problem. This often enhances overall reasoning
performance as it enables consistency checks between their forward and backward
thinking. To enable Large Language Models (LLMs) to perform reverse thinking,
we introduce Reverse-Enhanced Thinking (RevThink), a framework composed of data
augmentation and learning objectives. In RevThink, we augment the dataset by
collecting structured forward-backward reasoning from a teacher model,
consisting of: (1) the original question, (2) forward reasoning, (3) backward
question, and (4) backward reasoning. We then employ three objectives to train
a smaller student model in a multi-task learning fashion: (a) generate forward
reasoning from a question, (b) generate a backward question from a question,
and (c) generate backward reasoning from the backward question. Experiments
across 12 datasets covering commonsense, math, and logical reasoning show an
average 13.53% improvement over the student model's zero-shot performance and a
6.84% improvement over the strongest knowledge distillation baselines.
Moreover, our method demonstrates sample efficiency -- using only 10% of the
correct forward reasoning from the training data, it outperforms a standard
fine-tuning method trained on 10x more forward reasoning. RevThink also
exhibits strong generalization to out-of-distribution held-out datasets.
| no_new_dataset | 0.947769 |
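A minimal sketch of how the augmented records described in the RevThink abstract could be expanded into the three multi-task training pairs (question to forward reasoning, question to backward question, backward question to backward reasoning). The field names and example content are assumptions for illustration, not the authors' exact data format.

```python
from dataclasses import dataclass

@dataclass
class AugmentedRecord:
    question: str
    forward_reasoning: str
    backward_question: str
    backward_reasoning: str

def to_multitask_pairs(rec: AugmentedRecord):
    """Yield (input_text, target_text) pairs for the three objectives:
    (a) question -> forward reasoning,
    (b) question -> backward question,
    (c) backward question -> backward reasoning."""
    yield rec.question, rec.forward_reasoning
    yield rec.question, rec.backward_question
    yield rec.backward_question, rec.backward_reasoning

if __name__ == "__main__":
    rec = AugmentedRecord(
        question="A shop sells 3 apples for $2. How much do 12 apples cost?",
        forward_reasoning="12 apples is 4 groups of 3, so 4 * $2 = $8.",
        backward_question="A shop sells 3 apples for $2. If someone paid $8, "
                          "how many apples did they buy?",
        backward_reasoning="$8 buys 4 groups of 3 apples, i.e. 12 apples.",
    )
    for src, tgt in to_multitask_pairs(rec):
        print(f"INPUT: {src}\nTARGET: {tgt}\n")
```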
2411.19903 | Prajwal Singh | Prajwal Singh, Ashish Tiwari, Gautam Vashishtha, Shanmuganathan Raman | Incremental Multi-Scene Modeling via Continual Neural Graphics
Primitives | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Neural radiance fields (NeRF) have revolutionized photorealistic rendering of
novel views for 3D scenes. Despite their growing popularity and efficiency as
3D resources, NeRFs face scalability challenges due to the need for separate
models per scene and the cumulative increase in training time for multiple
scenes. The potential for incrementally encoding multiple 3D scenes into a
single NeRF model remains largely unexplored. To address this, we introduce
Continual-Neural Graphics Primitives (C-NGP), a novel continual learning
framework that integrates multiple scenes incrementally into a single neural
radiance field. Using a generative replay approach, C-NGP adapts to new scenes
without requiring access to old data. We demonstrate that C-NGP can accommodate
multiple scenes without increasing the parameter count, producing high-quality
novel-view renderings on synthetic and real datasets. Notably, C-NGP models all
8 scenes from the Real-LLFF dataset together, with only a 2.2% drop in PSNR
compared to vanilla NeRF, which models each scene independently. Further, C-NGP
allows multiple style edits in the same network. The implementation details and
dynamic visualizations are in the supplementary material.
| [
{
"version": "v1",
"created": "Fri, 29 Nov 2024 18:05:16 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 23:06:49 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Singh",
"Prajwal",
""
],
[
"Tiwari",
"Ashish",
""
],
[
"Vashishtha",
"Gautam",
""
],
[
"Raman",
"Shanmuganathan",
""
]
]
| TITLE: Incremental Multi-Scene Modeling via Continual Neural Graphics
Primitives
ABSTRACT: Neural radiance fields (NeRF) have revolutionized photorealistic rendering of
novel views for 3D scenes. Despite their growing popularity and efficiency as
3D resources, NeRFs face scalability challenges due to the need for separate
models per scene and the cumulative increase in training time for multiple
scenes. The potential for incrementally encoding multiple 3D scenes into a
single NeRF model remains largely unexplored. To address this, we introduce
Continual-Neural Graphics Primitives (C-NGP), a novel continual learning
framework that integrates multiple scenes incrementally into a single neural
radiance field. Using a generative replay approach, C-NGP adapts to new scenes
without requiring access to old data. We demonstrate that C-NGP can accommodate
multiple scenes without increasing the parameter count, producing high-quality
novel-view renderings on synthetic and real datasets. Notably, C-NGP models all
8 scenes from the Real-LLFF dataset together, with only a 2.2% drop in PSNR
compared to vanilla NeRF, which models each scene independently. Further, C-NGP
allows multiple style edits in the same network. The implementation details and
dynamic visualizations are in the supplementary material.
| no_new_dataset | 0.947672 |
2412.00126 | Lei Zhou | Lei Zhou, Youwen Zhu, Qiao Xue, Ji Zhang and Pengfei Zhang | Streamlined Federated Unlearning: Unite as One to Be Highly Efficient | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the enactment of ``right to be forgotten'' laws and regulations has
imposed new privacy requirements on federated learning (FL). Researchers aim to
remove the influence of certain data from the trained model without training
from scratch through federated unlearning (FU). While current FU research has
shown progress in enhancing unlearning efficiency, it often results in degraded
model performance upon achieving the goal of data unlearning, necessitating
additional steps to recover the performance of the unlearned model. Moreover,
these approaches also suffer from many shortcomings such as high consumption of
computational and storage resources. To this end, we propose a streamlined
federated unlearning approach (SFU) aimed at effectively removing the influence
of the target data while preserving the model performance on the retained data
without degradation. We design a practical multi-teacher system that achieves
both target data influence removal and model performance preservation by
guiding the unlearned model through several distinct teacher models. SFU is
both computationally and storage-efficient, highly flexible, and generalizable.
We conduct extensive experiments on both image and text benchmark datasets. The
results demonstrate that SFU significantly improves time and communication
efficiency compared to the benchmark retraining method and significantly
outperforms existing SOTA methods. Additionally, we verify the effectiveness of
SFU using the backdoor attack.
| [
{
"version": "v1",
"created": "Thu, 28 Nov 2024 12:52:48 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 00:54:33 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhou",
"Lei",
""
],
[
"Zhu",
"Youwen",
""
],
[
"Xue",
"Qiao",
""
],
[
"Zhang",
"Ji",
""
],
[
"Zhang",
"Pengfei",
""
]
]
| TITLE: Streamlined Federated Unlearning: Unite as One to Be Highly Efficient
ABSTRACT: Recently, the enactment of ``right to be forgotten'' laws and regulations has
imposed new privacy requirements on federated learning (FL). Researchers aim to
remove the influence of certain data from the trained model without training
from scratch through federated unlearning (FU). While current FU research has
shown progress in enhancing unlearning efficiency, it often results in degraded
model performance upon achieving the goal of data unlearning, necessitating
additional steps to recover the performance of the unlearned model. Moreover,
these approaches also suffer from many shortcomings such as high consumption of
computational and storage resources. To this end, we propose a streamlined
federated unlearning approach (SFU) aimed at effectively removing the influence
of the target data while preserving the model performance on the retained data
without degradation. We design a practical multi-teacher system that achieves
both target data influence removal and model performance preservation by
guiding the unlearned model through several distinct teacher models. SFU is
both computationally and storage-efficient, highly flexible, and generalizable.
We conduct extensive experiments on both image and text benchmark datasets. The
results demonstrate that SFU significantly improves time and communication
efficiency compared to the benchmark retraining method and significantly
outperforms existing SOTA methods. Additionally, we verify the effectiveness of
SFU using the backdoor attack.
| no_new_dataset | 0.948965 |
2412.00136 | Wenda Shi | Wenda Shi and Yiren Song and Dengming Zhang and Jiaming Liu and
Xingxing Zou | FonTS: Text Rendering with Typography and Style Controls | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Visual text rendering is widespread in various real-world applications,
requiring careful font selection and typographic choices. Recent progress in
diffusion transformer (DiT)-based text-to-image (T2I) models shows promise in
automating these processes. However, these methods still encounter challenges
like inconsistent fonts, style variation, and limited fine-grained control,
particularly at the word-level. This paper proposes a two-stage DiT-based
pipeline to address these problems by enhancing controllability over typography
and style in text rendering. We introduce typography control fine-tuning
(TC-FT), an parameter-efficient fine-tuning method (on $5\%$ key parameters)
with enclosing typography control tokens (ETC-tokens), which enables precise
word-level application of typographic features. To further address style
inconsistency in text rendering, we propose a text-agnostic style control
adapter (SCA) that prevents content leakage while enhancing style consistency.
To implement TC-FT and SCA effectively, we incorporated HTML-render into the
data synthesis pipeline and proposed the first word-level controllable dataset.
Through comprehensive experiments, we demonstrate the effectiveness of our
approach in achieving superior word-level typographic control, font
consistency, and style consistency in text rendering tasks. The datasets and
models will be available for academic use.
| [
{
"version": "v1",
"created": "Thu, 28 Nov 2024 16:19:37 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 08:43:03 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Shi",
"Wenda",
""
],
[
"Song",
"Yiren",
""
],
[
"Zhang",
"Dengming",
""
],
[
"Liu",
"Jiaming",
""
],
[
"Zou",
"Xingxing",
""
]
]
| TITLE: FonTS: Text Rendering with Typography and Style Controls
ABSTRACT: Visual text rendering is widespread in various real-world applications,
requiring careful font selection and typographic choices. Recent progress in
diffusion transformer (DiT)-based text-to-image (T2I) models shows promise in
automating these processes. However, these methods still encounter challenges
like inconsistent fonts, style variation, and limited fine-grained control,
particularly at the word-level. This paper proposes a two-stage DiT-based
pipeline to address these problems by enhancing controllability over typography
and style in text rendering. We introduce typography control fine-tuning
(TC-FT), a parameter-efficient fine-tuning method (on $5\%$ of key parameters)
with enclosing typography control tokens (ETC-tokens), which enables precise
word-level application of typographic features. To further address style
inconsistency in text rendering, we propose a text-agnostic style control
adapter (SCA) that prevents content leakage while enhancing style consistency.
To implement TC-FT and SCA effectively, we incorporated HTML-render into the
data synthesis pipeline and proposed the first word-level controllable dataset.
Through comprehensive experiments, we demonstrate the effectiveness of our
approach in achieving superior word-level typographic control, font
consistency, and style consistency in text rendering tasks. The datasets and
models will be available for academic use.
| no_new_dataset | 0.941385 |
2412.00155 | Ruslan Rakhimov | Alexander Markin, Vadim Pryadilshchikov, Artem Komarichev, Ruslan
Rakhimov, Peter Wonka, Evgeny Burnaev | T-3DGS: Removing Transient Objects for 3D Scene Reconstruction | Project website at https://transient-3dgs.github.io/ | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Transient objects in video sequences can significantly degrade the quality of
3D scene reconstructions. To address this challenge, we propose T-3DGS, a novel
framework that robustly filters out transient distractors during 3D
reconstruction using Gaussian Splatting. Our framework consists of two steps.
First, we employ an unsupervised classification network that distinguishes
transient objects from static scene elements by leveraging their distinct
training dynamics within the reconstruction process. Second, we refine these
initial detections by integrating an off-the-shelf segmentation method with a
bidirectional tracking module, which together enhance boundary accuracy and
temporal coherence. Evaluations on both sparsely and densely captured video
datasets demonstrate that T-3DGS significantly outperforms state-of-the-art
approaches, enabling high-fidelity 3D reconstructions in challenging,
real-world scenarios.
| [
{
"version": "v1",
"created": "Fri, 29 Nov 2024 07:45:24 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 11:58:03 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Markin",
"Alexander",
""
],
[
"Pryadilshchikov",
"Vadim",
""
],
[
"Komarichev",
"Artem",
""
],
[
"Rakhimov",
"Ruslan",
""
],
[
"Wonka",
"Peter",
""
],
[
"Burnaev",
"Evgeny",
""
]
]
| TITLE: T-3DGS: Removing Transient Objects for 3D Scene Reconstruction
ABSTRACT: Transient objects in video sequences can significantly degrade the quality of
3D scene reconstructions. To address this challenge, we propose T-3DGS, a novel
framework that robustly filters out transient distractors during 3D
reconstruction using Gaussian Splatting. Our framework consists of two steps.
First, we employ an unsupervised classification network that distinguishes
transient objects from static scene elements by leveraging their distinct
training dynamics within the reconstruction process. Second, we refine these
initial detections by integrating an off-the-shelf segmentation method with a
bidirectional tracking module, which together enhance boundary accuracy and
temporal coherence. Evaluations on both sparsely and densely captured video
datasets demonstrate that T-3DGS significantly outperforms state-of-the-art
approaches, enabling high-fidelity 3D reconstructions in challenging,
real-world scenarios.
| no_new_dataset | 0.952838 |
2412.02241 | Kazuto Nakashima | Kazuto Nakashima, Xiaowen Liu, Tomoya Miyawaki, Yumi Iwashita, Ryo
Kurazume | Fast LiDAR Data Generation with Rectified Flows | ICRA 2025 | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building LiDAR generative models holds promise as powerful data priors for
restoration, scene manipulation, and scalable simulation in autonomous mobile
robots. In recent years, approaches using diffusion models have emerged,
significantly improving training stability and generation quality. Despite
their success, diffusion models require numerous iterations of running neural
networks to generate high-quality samples, making the increasing computational
cost a potential barrier for robotics applications. To address this challenge,
this paper presents R2Flow, a fast and high-fidelity generative model for LiDAR
data. Our method is based on rectified flows that learn straight trajectories,
simulating data generation with significantly fewer sampling steps compared to
diffusion models. We also propose an efficient Transformer-based model
architecture for processing the image representation of LiDAR range and
reflectance measurements. Our experiments on unconditional LiDAR data
generation using the KITTI-360 dataset demonstrate the effectiveness of our
approach in terms of both efficiency and quality.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2024 08:10:53 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 08:39:59 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Nakashima",
"Kazuto",
""
],
[
"Liu",
"Xiaowen",
""
],
[
"Miyawaki",
"Tomoya",
""
],
[
"Iwashita",
"Yumi",
""
],
[
"Kurazume",
"Ryo",
""
]
]
| TITLE: Fast LiDAR Data Generation with Rectified Flows
ABSTRACT: Building LiDAR generative models holds promise as powerful data priors for
restoration, scene manipulation, and scalable simulation in autonomous mobile
robots. In recent years, approaches using diffusion models have emerged,
significantly improving training stability and generation quality. Despite
their success, diffusion models require numerous iterations of running neural
networks to generate high-quality samples, making the increasing computational
cost a potential barrier for robotics applications. To address this challenge,
this paper presents R2Flow, a fast and high-fidelity generative model for LiDAR
data. Our method is based on rectified flows that learn straight trajectories,
simulating data generation with significantly fewer sampling steps compared to
diffusion models. We also propose an efficient Transformer-based model
architecture for processing the image representation of LiDAR range and
reflectance measurements. Our experiments on unconditional LiDAR data
generation using the KITTI-360 dataset demonstrate the effectiveness of our
approach in terms of both efficiency and quality.
| no_new_dataset | 0.951142 |
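The sketch below shows the generic rectified-flow objective and few-step Euler sampling that R2Flow builds on, applied to toy 2-D data with a tiny MLP. It is not the paper's range-image architecture or training configuration; network size, data, and step counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Generic rectified-flow training/sampling sketch on toy 2-D data.
class Velocity(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.SiLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def train_step(model, opt, x1):
    x0 = torch.randn_like(x1)                        # noise endpoint
    t = torch.rand(x1.size(0), 1)                    # uniform time
    xt = (1 - t) * x0 + t * x1                       # straight-line interpolation
    loss = ((model(xt, t) - (x1 - x0)) ** 2).mean()  # match the constant velocity
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def sample(model, n=256, steps=4, dim=2):
    x = torch.randn(n, dim)
    for k in range(steps):                           # few Euler steps suffice
        t = torch.full((n, 1), k / steps)
        x = x + model(x, t) / steps
    return x

if __name__ == "__main__":
    torch.manual_seed(0)
    model = Velocity()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    data = torch.randn(512, 2) * 0.5 + torch.tensor([2.0, -1.0])
    for _ in range(200):
        loss = train_step(model, opt, data)
    print("final loss:", round(loss, 4), "sample mean:", sample(model).mean(0))
```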
2412.02447 | Conghao Wong | Conghao Wong, Ziqian Zou, Beihao Xia, Xinge You | Resonance: Learning to Predict Social-Aware Pedestrian Trajectories as
Co-Vibrations | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning to forecast trajectories of intelligent agents has caught much more
attention recently. However, it remains a challenge to accurately account for
agents' intentions and social behaviors when forecasting, and in particular, to
simulate the unique randomness within each of those components in an
explainable and decoupled way. Inspired by vibration systems and their
resonance properties, we propose the Resonance (short for Re) model to encode
and forecast pedestrian trajectories in the form of ``co-vibrations''. It
decomposes trajectory modifications and randomnesses into multiple vibration
portions to simulate agents' reactions to each single cause, and forecasts
trajectories as the superposition of these independent vibrations separately.
Also, benefiting from such vibrations and their spectral properties,
representations of social interactions can be learned by emulating the
resonance phenomena, further enhancing its explainability. Experiments on
multiple datasets have verified its usefulness both quantitatively and
qualitatively.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2024 13:31:29 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 01:37:08 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wong",
"Conghao",
""
],
[
"Zou",
"Ziqian",
""
],
[
"Xia",
"Beihao",
""
],
[
"You",
"Xinge",
""
]
]
| TITLE: Resonance: Learning to Predict Social-Aware Pedestrian Trajectories as
Co-Vibrations
ABSTRACT: Learning to forecast trajectories of intelligent agents has caught much more
attention recently. However, it remains a challenge to accurately account for
agents' intentions and social behaviors when forecasting, and in particular, to
simulate the unique randomness within each of those components in an
explainable and decoupled way. Inspired by vibration systems and their
resonance properties, we propose the Resonance (short for Re) model to encode
and forecast pedestrian trajectories in the form of ``co-vibrations''. It
decomposes trajectory modifications and randomnesses into multiple vibration
portions to simulate agents' reactions to each single cause, and forecasts
trajectories as the superposition of these independent vibrations separately.
Also, benefiting from such vibrations and their spectral properties,
representations of social interactions can be learned by emulating the
resonance phenomena, further enhancing its explainability. Experiments on
multiple datasets have verified its usefulness both quantitatively and
qualitatively.
| no_new_dataset | 0.947962 |
2412.02837 | Sarthak Kumar Maharana | Sarthak Kumar Maharana, Baoming Zhang, Leonid Karlinsky, Rogerio
Feris, Yunhui Guo | $\texttt{BATCLIP}$: Bimodal Online Test-Time Adaptation for CLIP | Preprint. Under review | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Although open-vocabulary classification models like Contrastive Language
Image Pretraining (CLIP) have demonstrated strong zero-shot learning
capabilities, their robustness to common image corruptions remains poorly
understood. Through extensive experiments, we show that zero-shot CLIP lacks
robustness to common image corruptions during test-time, necessitating the
adaptation of CLIP to unlabeled corrupted images using test-time adaptation
(TTA). However, we found that existing TTA methods have severe limitations in
adapting CLIP due to their unimodal nature. To address these limitations, we
propose $\texttt{BATCLIP}$, a bimodal $\textbf{online}$ TTA method designed to
improve CLIP's robustness to common image corruptions. The key insight of our
approach is not only to adapt the visual encoders for improving image features
but also to strengthen the alignment between image and text features by
promoting a stronger association between the image class prototype, computed
using pseudo-labels, and the corresponding text feature. We evaluate our
approach on benchmark image corruption datasets and achieve state-of-the-art
results in online TTA for CLIP. Furthermore, we evaluate our proposed TTA
approach on various domain generalization datasets to demonstrate its
generalization capabilities.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2024 21:02:14 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 06:10:48 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Maharana",
"Sarthak Kumar",
""
],
[
"Zhang",
"Baoming",
""
],
[
"Karlinsky",
"Leonid",
""
],
[
"Feris",
"Rogerio",
""
],
[
"Guo",
"Yunhui",
""
]
]
| TITLE: $\texttt{BATCLIP}$: Bimodal Online Test-Time Adaptation for CLIP
ABSTRACT: Although open-vocabulary classification models like Contrastive Language
Image Pretraining (CLIP) have demonstrated strong zero-shot learning
capabilities, their robustness to common image corruptions remains poorly
understood. Through extensive experiments, we show that zero-shot CLIP lacks
robustness to common image corruptions during test-time, necessitating the
adaptation of CLIP to unlabeled corrupted images using test-time adaptation
(TTA). However, we found that existing TTA methods have severe limitations in
adapting CLIP due to their unimodal nature. To address these limitations, we
propose $\texttt{BATCLIP}$, a bimodal $\textbf{online}$ TTA method designed to
improve CLIP's robustness to common image corruptions. The key insight of our
approach is not only to adapt the visual encoders for improving image features
but also to strengthen the alignment between image and text features by
promoting a stronger association between the image class prototype, computed
using pseudo-labels, and the corresponding text feature. We evaluate our
approach on benchmark image corruption datasets and achieve state-of-the-art
results in online TTA for CLIP. Furthermore, we evaluate our proposed TTA
approach on various domain generalization datasets to demonstrate its
generalization capabilities.
| no_new_dataset | 0.944893 |
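A hedged sketch of the prototype-to-text alignment idea from the BATCLIP abstract: test images are pseudo-labeled with zero-shot similarities, per-class image prototypes are averaged, and each prototype is pulled toward its corresponding text feature. The loss form, dimensions, and random stand-in features are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def prototype_alignment_loss(img_feats, txt_feats):
    """Pull each pseudo-class image prototype toward its class text feature."""
    img_feats = F.normalize(img_feats, dim=-1)
    txt_feats = F.normalize(txt_feats, dim=-1)
    pseudo = (img_feats @ txt_feats.t()).argmax(dim=-1)   # zero-shot pseudo-labels
    loss, used = img_feats.new_zeros(()), 0
    for c in pseudo.unique():
        proto = F.normalize(img_feats[pseudo == c].mean(0), dim=-1)
        loss = loss + (1.0 - proto @ txt_feats[c])         # cosine alignment term
        used += 1
    return loss / max(used, 1)

if __name__ == "__main__":
    torch.manual_seed(0)
    img = torch.randn(32, 512, requires_grad=True)   # stand-in for visual-encoder outputs
    txt = torch.randn(10, 512)                       # stand-in for class text features
    loss = prototype_alignment_loss(img, txt)
    loss.backward()                                  # gradients would adapt the encoders
    print("alignment loss:", float(loss))
```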
2412.02930 | Quoc-Huy Tran | Fawad Javed Fateh, Umer Ahmed, Hamza Khan, M. Zeeshan Zia, Quoc-Huy
Tran | Video LLMs for Temporal Reasoning in Long Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces TemporalVLM, a video large language model (video LLM)
capable of effective temporal reasoning and fine-grained understanding in long
videos. At the core, our approach includes a visual encoder for mapping a
long-term input video into features which are time-aware and contain both local
and global cues. In particular, it first divides the input video into
short-term clips, which are jointly encoded with their timestamps into
time-sensitive local features. Next, the local features are passed through a
bidirectional long short-term memory (BiLSTM) module for global feature
aggregation. The extracted time-aware and multi-level features are important
for accurate temporal reasoning and fine-grained understanding in long videos.
Moreover, to facilitate the evaluation of TemporalVLM, we present a large-scale
long video dataset of industry assembly processes, namely IndustryASM, which
consists of videos recorded on factory floors with actions and timestamps
annotated by industrial engineers for time and motion studies and temporal
action segmentation evaluation. Finally, extensive experiments on datasets of
long videos, including TimeIT and IndustryASM, show that TemporalVLM achieves
superior performance to previous methods across temporal reasoning and
fine-grained understanding tasks, namely dense video captioning, temporal video
grounding, video highlight detection, and temporal action segmentation. To the
best of our knowledge, our work is the first to incorporate LSTMs into video
LLMs.
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2024 00:50:33 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 07:25:51 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Fateh",
"Fawad Javed",
""
],
[
"Ahmed",
"Umer",
""
],
[
"Khan",
"Hamza",
""
],
[
"Zia",
"M. Zeeshan",
""
],
[
"Tran",
"Quoc-Huy",
""
]
]
| TITLE: Video LLMs for Temporal Reasoning in Long Videos
ABSTRACT: This paper introduces TemporalVLM, a video large language model (video LLM)
capable of effective temporal reasoning and fine-grained understanding in long
videos. At the core, our approach includes a visual encoder for mapping a
long-term input video into features which are time-aware and contain both local
and global cues. In particular, it first divides the input video into
short-term clips, which are jointly encoded with their timestamps into
time-sensitive local features. Next, the local features are passed through a
bidirectional long short-term memory (BiLSTM) module for global feature
aggregation. The extracted time-aware and multi-level features are important
for accurate temporal reasoning and fine-grained understanding in long videos.
Moreover, to facilitate the evaluation of TemporalVLM, we present a large-scale
long video dataset of industry assembly processes, namely IndustryASM, which
consists of videos recorded on factory floors with actions and timestamps
annotated by industrial engineers for time and motion studies and temporal
action segmentation evaluation. Finally, extensive experiments on datasets of
long videos, including TimeIT and IndustryASM, show that TemporalVLM achieves
superior performance to previous methods across temporal reasoning and
fine-grained understanding tasks, namely dense video captioning, temporal video
grounding, video highlight detection, and temporal action segmentation. To the
best of our knowledge, our work is the first to incorporate LSTMs into video
LLMs.
| new_dataset | 0.95511 |
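A minimal sketch of the time-aware aggregation step described in the TemporalVLM abstract: clip features are concatenated with their timestamps and passed through a BiLSTM for global aggregation. The feature dimensions and the way timestamps are injected are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TimeAwareAggregator(nn.Module):
    """BiLSTM over time-stamped clip features (dimensions are illustrative)."""
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim + 1, hidden, batch_first=True,
                              bidirectional=True)

    def forward(self, clip_feats, timestamps):
        # clip_feats: (B, T, feat_dim); timestamps: (B, T) in seconds
        x = torch.cat([clip_feats, timestamps.unsqueeze(-1)], dim=-1)
        global_feats, _ = self.bilstm(x)             # (B, T, 2 * hidden)
        return global_feats

if __name__ == "__main__":
    agg = TimeAwareAggregator()
    clips = torch.randn(2, 30, 256)                  # 30 short-term clip features
    times = torch.arange(30).float().repeat(2, 1)    # one clip per second
    print(agg(clips, times).shape)                   # torch.Size([2, 30, 256])
```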
2412.03002 | Shouwei Ruan | Shouwei Ruan, Hanqing Liu, Yao Huang, Xiaoqi Wang, Caixin Kang, Hang
Su, Yinpeng Dong, Xingxing Wei | AdvDreamer Unveils: Are Vision-Language Models Truly Ready for
Real-World 3D Variations? | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision Language Models (VLMs) have exhibited remarkable generalization
capabilities, yet their robustness in dynamic real-world scenarios remains
largely unexplored. To systematically evaluate VLMs' robustness to real-world
3D variations, we propose AdvDreamer, the first framework capable of generating
physically reproducible Adversarial 3D Transformation (Adv-3DT) samples from
single-view observations. In AdvDreamer, we integrate three key innovations:
Firstly, to characterize real-world 3D variations with limited prior knowledge
precisely, we design a zero-shot Monocular Pose Manipulation pipeline built
upon generative 3D priors. Secondly, to ensure the visual quality of worst-case
Adv-3DT samples, we propose a Naturalness Reward Model that provides continuous
naturalness regularization during adversarial optimization, effectively
preventing convergence to hallucinated or unnatural elements. Thirdly, to
enable systematic evaluation across diverse VLM architectures and
visual-language tasks, we introduce the Inverse Semantic Probability loss as
the adversarial optimization objective, which solely operates in the
fundamental visual-textual alignment space. Based on the captured Adv-3DT
samples with high aggressiveness and transferability, we establish MM3DTBench,
the first VQA benchmark dataset tailored to evaluate VLM robustness under
challenging 3D variations. Extensive evaluations of representative VLMs with
varying architectures reveal that real-world 3D variations can pose severe
threats to model performance across various tasks.
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2024 03:42:39 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Dec 2024 08:14:13 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 13:26:29 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ruan",
"Shouwei",
""
],
[
"Liu",
"Hanqing",
""
],
[
"Huang",
"Yao",
""
],
[
"Wang",
"Xiaoqi",
""
],
[
"Kang",
"Caixin",
""
],
[
"Su",
"Hang",
""
],
[
"Dong",
"Yinpeng",
""
],
[
"Wei",
"Xingxing",
""
]
]
| TITLE: AdvDreamer Unveils: Are Vision-Language Models Truly Ready for
Real-World 3D Variations?
ABSTRACT: Vision Language Models (VLMs) have exhibited remarkable generalization
capabilities, yet their robustness in dynamic real-world scenarios remains
largely unexplored. To systematically evaluate VLMs' robustness to real-world
3D variations, we propose AdvDreamer, the first framework capable of generating
physically reproducible Adversarial 3D Transformation (Adv-3DT) samples from
single-view observations. In AdvDreamer, we integrate three key innovations:
Firstly, to characterize real-world 3D variations with limited prior knowledge
precisely, we design a zero-shot Monocular Pose Manipulation pipeline built
upon generative 3D priors. Secondly, to ensure the visual quality of worst-case
Adv-3DT samples, we propose a Naturalness Reward Model that provides continuous
naturalness regularization during adversarial optimization, effectively
preventing convergence to hallucinated or unnatural elements. Thirdly, to
enable systematic evaluation across diverse VLM architectures and
visual-language tasks, we introduce the Inverse Semantic Probability loss as
the adversarial optimization objective, which solely operates in the
fundamental visual-textual alignment space. Based on the captured Adv-3DT
samples with high aggressiveness and transferability, we establish MM3DTBench,
the first VQA benchmark dataset tailored to evaluate VLM robustness under
challenging 3D variations. Extensive evaluations of representative VLMs with
varying architectures reveal that real-world 3D variations can pose severe
threats to model performance across various tasks.
| new_dataset | 0.968321 |
2412.03059 | Runjian Chen | Runjian Chen, Hang Zhang, Avinash Ravichandran, Hyoungseob Park, Wenqi
Shao, Alex Wong, Ping Luo | CLAP: Unsupervised 3D Representation Learning for Fusion 3D Perception
via Curvature Sampling and Prototype Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised 3D representation learning reduces the burden of labeling
multimodal 3D data for fusion perception tasks. Among different pre-training
paradigms, differentiable-rendering-based methods have shown the most promise.
However, existing works separately conduct pre-training for each modality due
to the computational costs of processing large point clouds with images. As such,
the mutual benefit of high-level semantics (from image) and 3D structure (from
point cloud) has not been exploited. To address this gap, we propose a joint
unsupervised differentiable-rendering-based pre-training method for images and
point clouds, termed CLAP, short for Curvature sampLing and leArnable
Prototype. Specifically, our method overcomes the computational hurdle by
Curvature Sampling to select the more informative points/pixels for
pre-training. To uncover the performance benefits brought by their
complementarity, we propose to use learnable prototypes to represent parts of
the 3D scenes in a common feature space and an Expectation-Maximization
training scheme to associate embeddings of each modality to prototypes. We
further propose a swapping prediction loss that explores their interplay
through prototypes along with a Gram Matrix Regularization term to maintain
training stability. Experiments on NuScenes and Waymo datasets show that CLAP
achieves up to 100% more performance gain as compared to previous SOTA
pre-training methods. Codes and models will be released.
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2024 06:26:12 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 03:54:25 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Chen",
"Runjian",
""
],
[
"Zhang",
"Hang",
""
],
[
"Ravichandran",
"Avinash",
""
],
[
"Park",
"Hyoungseob",
""
],
[
"Shao",
"Wenqi",
""
],
[
"Wong",
"Alex",
""
],
[
"Luo",
"Ping",
""
]
]
| TITLE: CLAP: Unsupervised 3D Representation Learning for Fusion 3D Perception
via Curvature Sampling and Prototype Learning
ABSTRACT: Unsupervised 3D representation learning reduces the burden of labeling
multimodal 3D data for fusion perception tasks. Among different pre-training
paradigms, differentiable-rendering-based methods have shown the most promise.
However, existing works separately conduct pre-training for each modality due
to the computational costs of processing large point clouds with images. As such,
the mutual benefit of high-level semantics (from image) and 3D structure (from
point cloud) has not been exploited. To address this gap, we propose a joint
unsupervised differentiable-rendering-based pre-training method for images and
point clouds, termed CLAP, short for Curvature sampLing and leArnable
Prototype. Specifically, our method overcomes the computational hurdle by
Curvature Sampling to select the more informative points/pixels for
pre-training. To uncover the performance benefits brought by their
complementarity, we propose to use learnable prototypes to represent parts of
the 3D scenes in a common feature space and an Expectation-Maximization
training scheme to associate embeddings of each modality to prototypes. We
further propose a swapping prediction loss that explores their interplay
through prototypes along with a Gram Matrix Regularization term to maintain
training stability. Experiments on NuScenes and Waymo datasets show that CLAP
achieves up to 100% more performance gain as compared to previous SOTA
pre-training methods. Codes and models will be released.
| no_new_dataset | 0.949106 |
2412.03342 | Zhaopeng Gu | Zhaopeng Gu, Bingke Zhu, Guibo Zhu, Yingying Chen, Ming Tang, Jinqiao
Wang | UniVAD: A Training-free Unified Model for Few-shot Visual Anomaly
Detection | Accepted by CVPR 2025; Project page: https://uni-vad.github.io/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Visual Anomaly Detection (VAD) aims to identify abnormal samples in images
that deviate from normal patterns, covering multiple domains, including
industrial, logical, and medical fields. Due to the domain gaps between these
fields, existing VAD methods are typically tailored to each domain, with
specialized detection techniques and model architectures that are difficult to
generalize across different domains. Moreover, even within the same domain,
current VAD approaches often follow a "one-category-one-model" paradigm,
requiring large amounts of normal samples to train class-specific models,
resulting in poor generalizability and hindering unified evaluation across
domains. To address this issue, we propose a generalized few-shot VAD method,
UniVAD, capable of detecting anomalies across various domains, such as
industrial, logical, and medical anomalies, with a training-free unified model.
UniVAD only needs a few normal samples as references during testing to detect
anomalies in previously unseen objects, without training on the specific
domain. Specifically, UniVAD employs a Contextual Component Clustering ($C^3$)
module based on clustering and vision foundation models to segment components
within the image accurately, and leverages Component-Aware Patch Matching
(CAPM) and Graph-Enhanced Component Modeling (GECM) modules to detect anomalies
at different semantic levels, which are aggregated to produce the final
detection result. We conduct experiments on nine datasets spanning industrial,
logical, and medical fields, and the results demonstrate that UniVAD achieves
state-of-the-art performance in few-shot anomaly detection tasks across
multiple domains, outperforming domain-specific anomaly detection models. Code
is available at https://github.com/FantasticGNU/UniVAD.
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2024 14:20:27 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Dec 2024 03:31:40 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 10:03:18 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Gu",
"Zhaopeng",
""
],
[
"Zhu",
"Bingke",
""
],
[
"Zhu",
"Guibo",
""
],
[
"Chen",
"Yingying",
""
],
[
"Tang",
"Ming",
""
],
[
"Wang",
"Jinqiao",
""
]
]
| TITLE: UniVAD: A Training-free Unified Model for Few-shot Visual Anomaly
Detection
ABSTRACT: Visual Anomaly Detection (VAD) aims to identify abnormal samples in images
that deviate from normal patterns, covering multiple domains, including
industrial, logical, and medical fields. Due to the domain gaps between these
fields, existing VAD methods are typically tailored to each domain, with
specialized detection techniques and model architectures that are difficult to
generalize across different domains. Moreover, even within the same domain,
current VAD approaches often follow a "one-category-one-model" paradigm,
requiring large amounts of normal samples to train class-specific models,
resulting in poor generalizability and hindering unified evaluation across
domains. To address this issue, we propose a generalized few-shot VAD method,
UniVAD, capable of detecting anomalies across various domains, such as
industrial, logical, and medical anomalies, with a training-free unified model.
UniVAD only needs a few normal samples as references during testing to detect
anomalies in previously unseen objects, without training on the specific
domain. Specifically, UniVAD employs a Contextual Component Clustering ($C^3$)
module based on clustering and vision foundation models to segment components
within the image accurately, and leverages Component-Aware Patch Matching
(CAPM) and Graph-Enhanced Component Modeling (GECM) modules to detect anomalies
at different semantic levels, which are aggregated to produce the final
detection result. We conduct experiments on nine datasets spanning industrial,
logical, and medical fields, and the results demonstrate that UniVAD achieves
state-of-the-art performance in few-shot anomaly detection tasks across
multiple domains, outperforming domain-specific anomaly detection models. Code
is available at https://github.com/FantasticGNU/UniVAD.
| no_new_dataset | 0.951684 |
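A toy sketch in the spirit of the Component-Aware Patch Matching module named in the UniVAD abstract: each test-patch feature is scored by its distance to the nearest patch from the few normal reference images. Component segmentation and the graph-based module are omitted, and all shapes and the random features are assumptions.

```python
import numpy as np

def patch_anomaly_map(test_patches, reference_patches):
    """test_patches: (H*W, D); reference_patches: (N_ref, D). Returns (H*W,) scores
    equal to each test patch's distance to its nearest normal reference patch."""
    d2 = ((test_patches[:, None, :] - reference_patches[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.normal(size=(2 * 14 * 14, 64))        # patches from 2 normal support images
    test = rng.normal(size=(14 * 14, 64))
    test[:10] += 3.0                                # inject an "anomalous" region
    scores = patch_anomaly_map(test, ref)
    print("mean score, anomalous vs normal:", scores[:10].mean(), scores[10:].mean())
```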
2412.03442 | Clinton Cao | Clinton Cao and Agathe Blaise and Annibale Panichella and Sicco Verwer | State Frequency Estimation for Anomaly Detection | 12 pages | null | null | null | cs.LG cs.CR | http://creativecommons.org/licenses/by/4.0/ | Many works have studied the efficacy of state machines for detecting
anomalies within NetFlows. These works typically learn a model from unlabeled
data and compute anomaly scores for arbitrary traces based on their likelihood
of occurrence or how well they fit within the model. However, these methods do
not dynamically adapt their scores based on the traces seen at test time. This
becomes a problem when an adversary produces seemingly common traces in their
attack, causing the model to miss the detection by assigning low anomaly
scores. We propose SEQUENT, a new unsupervised approach that uses the state
visit frequency of a state machine to adapt its scoring dynamically for anomaly
detection. SEQUENT subsequently uses the scores to generate root causes for
anomalies. These allow the grouping of alarms and simplify the analysis of
anomalies. We evaluate SEQUENT's effectiveness in detecting network anomalies
on three publicly available NetFlow datasets and compare its performance
against various existing unsupervised anomaly detection methods. Our evaluation
shows promising results for using the state visit frequency of a state machine
to detect network anomalies.
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2024 16:30:35 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 13:19:15 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Cao",
"Clinton",
""
],
[
"Blaise",
"Agathe",
""
],
[
"Panichella",
"Annibale",
""
],
[
"Verwer",
"Sicco",
""
]
]
| TITLE: State Frequency Estimation for Anomaly Detection
ABSTRACT: Many works have studied the efficacy of state machines for detecting
anomalies within NetFlows. These works typically learn a model from unlabeled
data and compute anomaly scores for arbitrary traces based on their likelihood
of occurrence or how well they fit within the model. However, these methods do
not dynamically adapt their scores based on the traces seen at test time. This
becomes a problem when an adversary produces seemingly common traces in their
attack, causing the model to miss the detection by assigning low anomaly
scores. We propose SEQUENT, a new unsupervised approach that uses the state
visit frequency of a state machine to adapt its scoring dynamically for anomaly
detection. SEQUENT subsequently uses the scores to generate root causes for
anomalies. These allow the grouping of alarms and simplify the analysis of
anomalies. We evaluate SEQUENT's effectiveness in detecting network anomalies
on three publicly available NetFlow datasets and compare its performance
against various existing unsupervised anomaly detection methods. Our evaluation
shows promising results for using the state visit frequency of a state machine
to detect network anomalies.
| no_new_dataset | 0.952486 |
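A toy illustration of frequency-based scoring on a learned state machine, in the spirit of SEQUENT's state-visit-frequency adaptation described above: states visited rarely during training make a test trace more anomalous, and the score adapts as test-time counts accumulate. The transition map, smoothing, and score form are invented for illustration and are not the paper's exact formulation.

```python
from collections import Counter

def visit_counts(traces, transitions, start="q0"):
    """Count how often each state is visited while replaying training traces."""
    counts = Counter()
    for trace in traces:
        state = start
        for symbol in trace:
            state = transitions.get((state, symbol), "sink")
            counts[state] += 1
    return counts

def anomaly_score(trace, transitions, train_counts, test_counts, start="q0"):
    """Higher score = the trace visits states that are rare relative to training."""
    score, state = 0.0, start
    for symbol in trace:
        state = transitions.get((state, symbol), "sink")
        test_counts[state] += 1                      # dynamic test-time adaptation
        expected = train_counts[state] + 1           # Laplace smoothing
        score += test_counts[state] / expected
    return score / max(len(trace), 1)

if __name__ == "__main__":
    transitions = {("q0", "a"): "q1", ("q1", "b"): "q0", ("q1", "c"): "q2"}
    train = [["a", "b"] * 5, ["a", "c"]] * 20
    train_counts = visit_counts(train, transitions)
    test_counts = Counter()
    normal = ["a", "b", "a", "b"]
    odd = ["a", "c", "a", "c", "a", "c"]
    print("normal trace score:", anomaly_score(normal, transitions, train_counts, test_counts))
    print("odd trace score:   ", anomaly_score(odd, transitions, train_counts, test_counts))
```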
2412.04020 | Kangan Qian | Kangan Qian and Jinyu Miao and Xinyu Jiao and Ziang Luo and Zheng Fu
and Yining Shi and Yunlong Wang and Kun Jiang and Diange Yang | PriorMotion: Generative Class-Agnostic Motion Prediction with
Raster-Vector Motion Field Priors | 17 pages, 9 figures | null | null | null | cs.CV cs.PF cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reliable spatial and motion perception is essential for safe autonomous
navigation. Recently, class-agnostic motion prediction on bird's-eye view (BEV)
cell grids derived from LiDAR point clouds has gained significant attention.
However, existing frameworks typically perform cell classification and motion
prediction on a per-pixel basis, neglecting important motion field priors such
as rigidity constraints, temporal consistency, and future interactions between
agents. These limitations lead to degraded performance, particularly in sparse
and distant regions. To address these challenges, we introduce
\textbf{PriorMotion}, an innovative generative framework designed for
class-agnostic motion prediction that integrates essential motion priors by
modeling them as distributions within a structured latent space. Specifically,
our method captures structured motion priors using raster-vector
representations and employs a variational autoencoder with distinct dynamic and
static components to learn future motion distributions in the latent space.
Experiments on the nuScenes dataset demonstrate that \textbf{PriorMotion}
outperforms state-of-the-art methods across both traditional metrics and our
newly proposed evaluation criteria. Notably, we achieve improvements of
approximately 15.24\% in accuracy for fast-moving objects, a 3.59\% increase
in generalization, a reduction of 0.0163 in motion stability, and a 31.52\%
reduction in prediction errors in distant regions. Further validation on FMCW
LiDAR sensors confirms the robustness of our approach.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 09:56:24 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 13:44:04 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Qian",
"Kangan",
""
],
[
"Miao",
"Jinyu",
""
],
[
"Jiao",
"Xinyu",
""
],
[
"Luo",
"Ziang",
""
],
[
"Fu",
"Zheng",
""
],
[
"Shi",
"Yining",
""
],
[
"Wang",
"Yunlong",
""
],
[
"Jiang",
"Kun",
""
],
[
"Yang",
"Diange",
""
]
]
| TITLE: PriorMotion: Generative Class-Agnostic Motion Prediction with
Raster-Vector Motion Field Priors
ABSTRACT: Reliable spatial and motion perception is essential for safe autonomous
navigation. Recently, class-agnostic motion prediction on bird's-eye view (BEV)
cell grids derived from LiDAR point clouds has gained significant attention.
However, existing frameworks typically perform cell classification and motion
prediction on a per-pixel basis, neglecting important motion field priors such
as rigidity constraints, temporal consistency, and future interactions between
agents. These limitations lead to degraded performance, particularly in sparse
and distant regions. To address these challenges, we introduce
\textbf{PriorMotion}, an innovative generative framework designed for
class-agnostic motion prediction that integrates essential motion priors by
modeling them as distributions within a structured latent space. Specifically,
our method captures structured motion priors using raster-vector
representations and employs a variational autoencoder with distinct dynamic and
static components to learn future motion distributions in the latent space.
Experiments on the nuScenes dataset demonstrate that \textbf{PriorMotion}
outperforms state-of-the-art methods across both traditional metrics and our
newly proposed evaluation criteria. Notably, we achieve improvements of
approximately 15.24\% in accuracy for fast-moving objects, a 3.59\% increase
in generalization, a reduction of 0.0163 in motion stability, and a 31.52\%
reduction in prediction errors in distant regions. Further validation on FMCW
LiDAR sensors confirms the robustness of our approach.
| no_new_dataset | 0.951774 |
2412.04243 | Nicholas Konz | Yixin Zhang, Nicholas Konz, Kevin Kramer, Maciej A. Mazurowski | Quantifying the Limits of Segmentation Foundation Models: Modeling
Challenges in Segmenting Tree-Like and Low-Contrast Objects | Code: https://github.com/mazurowski-lab/SAM-TexturalConfusion-Metrics | null | null | null | cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | Image segmentation foundation models (SFMs) like Segment Anything Model (SAM)
have achieved impressive zero-shot and interactive segmentation across diverse
domains. However, they struggle to segment objects with certain structures,
particularly those with dense, tree-like morphology and low textural contrast
from their surroundings. These failure modes are crucial for understanding the
limitations of SFMs in real-world applications. To systematically study this
issue, we introduce interpretable metrics quantifying object tree-likeness and
textural separability. On carefully controlled synthetic experiments and
real-world datasets, we show that SFM performance (e.g., SAM, SAM 2, HQ-SAM)
noticeably correlates with these factors. We link these failures to "textural
confusion", where models misinterpret local structure as global texture,
causing over-segmentation or difficulty distinguishing objects from similar
backgrounds. Notably, targeted fine-tuning fails to resolve this issue,
indicating a fundamental limitation. Our study provides the first quantitative
framework for modeling the behavior of SFMs on challenging structures, offering
interpretable insights into their segmentation capabilities.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 15:25:51 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 14:42:44 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Yixin",
""
],
[
"Konz",
"Nicholas",
""
],
[
"Kramer",
"Kevin",
""
],
[
"Mazurowski",
"Maciej A.",
""
]
]
| TITLE: Quantifying the Limits of Segmentation Foundation Models: Modeling
Challenges in Segmenting Tree-Like and Low-Contrast Objects
ABSTRACT: Image segmentation foundation models (SFMs) like Segment Anything Model (SAM)
have achieved impressive zero-shot and interactive segmentation across diverse
domains. However, they struggle to segment objects with certain structures,
particularly those with dense, tree-like morphology and low textural contrast
from their surroundings. These failure modes are crucial for understanding the
limitations of SFMs in real-world applications. To systematically study this
issue, we introduce interpretable metrics quantifying object tree-likeness and
textural separability. On carefully controlled synthetic experiments and
real-world datasets, we show that SFM performance (e.g., SAM, SAM 2, HQ-SAM)
noticeably correlates with these factors. We link these failures to "textural
confusion", where models misinterpret local structure as global texture,
causing over-segmentation or difficulty distinguishing objects from similar
backgrounds. Notably, targeted fine-tuning fails to resolve this issue,
indicating a fundamental limitation. Our study provides the first quantitative
framework for modeling the behavior of SFMs on challenging structures, offering
interpretable insights into their segmentation capabilities.
| no_new_dataset | 0.950778 |
2412.04292 | Zhenglin Huang | Zhenglin Huang, Jinwei Hu, Xiangtai Li, Yiwei He, Xingyu Zhao, Bei
Peng, Baoyuan Wu, Xiaowei Huang, Guangliang Cheng | SIDA: Social Media Image Deepfake Detection, Localization and
Explanation with Large Multimodal Model | CVPR-2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid advancement of generative models in creating highly realistic
images poses substantial risks for misinformation dissemination. For instance,
a synthetic image, when shared on social media, can mislead extensive audiences
and erode trust in digital content, resulting in severe repercussions. Despite
some progress, academia has not yet created a large and diversified deepfake
detection dataset for social media, nor has it devised an effective solution to
address this issue. In this paper, we introduce the Social media Image
Detection dataSet (SID-Set), which offers three key advantages: (1) extensive
volume, featuring 300K AI-generated/tampered and authentic images with
comprehensive annotations, (2) broad diversity, encompassing fully synthetic
and tampered images across various classes, and (3) elevated realism, with
images that are predominantly indistinguishable from genuine ones through mere
visual inspection. Furthermore, leveraging the exceptional capabilities of
large multimodal models, we propose a new image deepfake detection,
localization, and explanation framework, named SIDA (Social media Image
Detection, localization, and explanation Assistant). SIDA not only discerns the
authenticity of images, but also delineates tampered regions through mask
prediction and provides textual explanations of the model's judgment criteria.
Compared with state-of-the-art deepfake detection models on SID-Set and other
benchmarks, extensive experiments demonstrate that SIDA achieves superior
performance among diversified settings. The code, model, and dataset will be
released.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 16:12:25 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 11:03:16 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Huang",
"Zhenglin",
""
],
[
"Hu",
"Jinwei",
""
],
[
"Li",
"Xiangtai",
""
],
[
"He",
"Yiwei",
""
],
[
"Zhao",
"Xingyu",
""
],
[
"Peng",
"Bei",
""
],
[
"Wu",
"Baoyuan",
""
],
[
"Huang",
"Xiaowei",
""
],
[
"Cheng",
"Guangliang",
""
]
]
| TITLE: SIDA: Social Media Image Deepfake Detection, Localization and
Explanation with Large Multimodal Model
ABSTRACT: The rapid advancement of generative models in creating highly realistic
images poses substantial risks for misinformation dissemination. For instance,
a synthetic image, when shared on social media, can mislead extensive audiences
and erode trust in digital content, resulting in severe repercussions. Despite
some progress, academia has not yet created a large and diversified deepfake
detection dataset for social media, nor has it devised an effective solution to
address this issue. In this paper, we introduce the Social media Image
Detection dataSet (SID-Set), which offers three key advantages: (1) extensive
volume, featuring 300K AI-generated/tampered and authentic images with
comprehensive annotations, (2) broad diversity, encompassing fully synthetic
and tampered images across various classes, and (3) elevated realism, with
images that are predominantly indistinguishable from genuine ones through mere
visual inspection. Furthermore, leveraging the exceptional capabilities of
large multimodal models, we propose a new image deepfake detection,
localization, and explanation framework, named SIDA (Social media Image
Detection, localization, and explanation Assistant). SIDA not only discerns the
authenticity of images, but also delineates tampered regions through mask
prediction and provides textual explanations of the model's judgment criteria.
Compared with state-of-the-art deepfake detection models on SID-Set and other
benchmarks, extensive experiments demonstrate that SIDA achieves superior
performance among diversified settings. The code, model, and dataset will be
released.
| new_dataset | 0.964321 |
2412.04532 | Md Khairul Islam | Md. Khairul Islam, Judy Fox | WinTSR: A Windowed Temporal Saliency Rescaling Method for Interpreting
Time Series Deep Learning Models | 11 pages, 14 figures, GitHub
https://github.com/khairulislam/Timeseries-Explained | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Interpreting complex time series forecasting models is challenging due to the
temporal dependencies between time steps and the dynamic relevance of input
features over time. Existing interpretation methods are limited by focusing
mostly on classification tasks, evaluating using custom baseline models instead
of the latest time series models, using simple synthetic datasets, and
requiring training another model. We introduce a novel interpretation method,
\textit{Windowed Temporal Saliency Rescaling (WinTSR)}, addressing these
limitations. WinTSR explicitly captures temporal dependencies among the past
time steps and efficiently scales the feature importance with this time
importance. We benchmark WinTSR against 10 recent interpretation techniques
with 5 state-of-the-art deep-learning models of different architectures,
including a time series foundation model. We use 3 real-world datasets for both
time-series classification and regression. Our comprehensive analysis shows
that WinTSR significantly outperforms other local interpretation methods in
overall performance. Finally, we provide a novel, open-source framework to
interpret the latest time series transformers and foundation models.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 17:15:07 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Feb 2025 16:41:01 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 03:16:36 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Islam",
"Md. Khairul",
""
],
[
"Fox",
"Judy",
""
]
]
| TITLE: WinTSR: A Windowed Temporal Saliency Rescaling Method for Interpreting
Time Series Deep Learning Models
ABSTRACT: Interpreting complex time series forecasting models is challenging due to the
temporal dependencies between time steps and the dynamic relevance of input
features over time. Existing interpretation methods are limited by focusing
mostly on classification tasks, evaluating using custom baseline models instead
of the latest time series models, using simple synthetic datasets, and
requiring training another model. We introduce a novel interpretation method,
\textit{Windowed Temporal Saliency Rescaling (WinTSR)}, addressing these
limitations. WinTSR explicitly captures temporal dependencies among the past
time steps and efficiently scales the feature importance with this time
importance. We benchmark WinTSR against 10 recent interpretation techniques
with 5 state-of-the-art deep-learning models of different architectures,
including a time series foundation model. We use 3 real-world datasets for both
time-series classification and regression. Our comprehensive analysis shows
that WinTSR significantly outperforms other local interpretation methods in
overall performance. Finally, we provide a novel, open-source framework to
interpret the latest time series transformers and foundation models.
| no_new_dataset | 0.945197 |
2412.04533 | Tianheng Cheng | Yongkang Li and Tianheng Cheng and Bin Feng and Wenyu Liu and Xinggang
Wang | Mask-Adapter: The Devil is in the Masks for Open-Vocabulary Segmentation | Accepted by CVPR 2025; Code & models:
https://github.com/hustvl/MaskAdapter | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent open-vocabulary segmentation methods adopt mask generators to predict
segmentation masks and leverage pre-trained vision-language models, e.g., CLIP,
to classify these masks via mask pooling. Although these approaches show
promising results, it is counterintuitive that accurate masks often fail to
yield accurate classification results through pooling CLIP image embeddings
within the mask regions. In this paper, we reveal the performance limitations
of mask pooling and introduce Mask-Adapter, a simple yet effective method to
address these challenges in open-vocabulary segmentation. Compared to directly
using proposal masks, our proposed Mask-Adapter extracts semantic activation
maps from proposal masks, providing richer contextual information and ensuring
alignment between masks and CLIP. Additionally, we propose a mask consistency
loss that encourages proposal masks with similar IoUs to obtain similar CLIP
embeddings to enhance models' robustness to varying predicted masks.
Mask-Adapter integrates seamlessly into open-vocabulary segmentation methods
based on mask pooling in a plug-and-play manner, delivering more accurate
classification results. Extensive experiments across several zero-shot
benchmarks demonstrate significant performance gains for the proposed
Mask-Adapter on several well-established methods. Notably, Mask-Adapter also
extends effectively to SAM and achieves impressive results on several
open-vocabulary segmentation datasets. Code and models are available at
https://github.com/hustvl/MaskAdapter.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 17:42:37 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 12:14:22 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Yongkang",
""
],
[
"Cheng",
"Tianheng",
""
],
[
"Feng",
"Bin",
""
],
[
"Liu",
"Wenyu",
""
],
[
"Wang",
"Xinggang",
""
]
]
| TITLE: Mask-Adapter: The Devil is in the Masks for Open-Vocabulary Segmentation
ABSTRACT: Recent open-vocabulary segmentation methods adopt mask generators to predict
segmentation masks and leverage pre-trained vision-language models, e.g., CLIP,
to classify these masks via mask pooling. Although these approaches show
promising results, it is counterintuitive that accurate masks often fail to
yield accurate classification results through pooling CLIP image embeddings
within the mask regions. In this paper, we reveal the performance limitations
of mask pooling and introduce Mask-Adapter, a simple yet effective method to
address these challenges in open-vocabulary segmentation. Compared to directly
using proposal masks, our proposed Mask-Adapter extracts semantic activation
maps from proposal masks, providing richer contextual information and ensuring
alignment between masks and CLIP. Additionally, we propose a mask consistency
loss that encourages proposal masks with similar IoUs to obtain similar CLIP
embeddings to enhance models' robustness to varying predicted masks.
Mask-Adapter integrates seamlessly into open-vocabulary segmentation methods
based on mask pooling in a plug-and-play manner, delivering more accurate
classification results. Extensive experiments across several zero-shot
benchmarks demonstrate significant performance gains for the proposed
Mask-Adapter on several well-established methods. Notably, Mask-Adapter also
extends effectively to SAM and achieves impressive results on several
open-vocabulary segmentation datasets. Code and models are available at
https://github.com/hustvl/MaskAdapter.
| no_new_dataset | 0.953362 |
2412.05829 | Naizhu Jin | Naizhu Jin, Zhong Li, Yinggang Guo, Chao Su, Tian Zhang and Qingkai
Zeng | SABER: Model-agnostic Backdoor Attack on Chain-of-Thought in Neural Code
Generation | UNDER REVIEW | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Recent studies have proposed integrating Chain-of-Thought (CoT) reasoning to
further enhance the reliability of Code Language Models (CLMs) in generating
code, a step-by-step approach that breaks down complex programming tasks into
manageable sub-problems. Advances in this area have introduced CoT models,
specifically designed to integrate CoT reasoning effectively into language
models, achieving notable improvements in code generation. Despite these
advancements, the security of CoT models has not been systematically studied.
In this study, we aim to fill this gap by investigating the vulnerability of
CoT models to backdoor injection in code generation tasks. To address this, we
propose a model-agnostic backdoor attack method SABER (Self-Attention-BasEd
backdooR) based on the self-attention mechanism. SABER begins by selecting a
malicious output as the backdoor using code mutation operations. It then
identifies the tokens most relevant to poisoned content by analyzing
self-attention scores in the CodeBERT model. Finally, it mimics user behavior
to generate adaptive and natural triggers. Our experiments on HumanEval-CoT and
OpenEval-CoT test sets demonstrate that CoT models are susceptible to backdoor
attacks via data poisoning. Taking the HumanEval-CoT dataset as an example,
SABER achieves an ASR of 80.95%, representing an improvement of 33.33% over
RIPPLe and a substantial 4.76% enhancement compared to BadPre. Further
evaluations using ONION for automated detection and human studies reveal that
SABER is stealthier and harder to detect, bypassing 61.90% of automated
detection, with a human detection rate of just 3.17%. Our findings reveal that
backdoors can be injected into CoT models to manipulate downstream code
generation tasks. This highlights the urgent need for further research to
understand and mitigate the security vulnerabilities in CoT models.
| [
{
"version": "v1",
"created": "Sun, 8 Dec 2024 06:36:00 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 16:31:10 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Jin",
"Naizhu",
""
],
[
"Li",
"Zhong",
""
],
[
"Guo",
"Yinggang",
""
],
[
"Su",
"Chao",
""
],
[
"Zhang",
"Tian",
""
],
[
"Zeng",
"Qingkai",
""
]
]
| TITLE: SABER: Model-agnostic Backdoor Attack on Chain-of-Thought in Neural Code
Generation
ABSTRACT: Recent studies have proposed integrating Chain-of-Thought (CoT) reasoning to
further enhance the reliability of Code Language Models (CLMs) in generating
code, a step-by-step approach that breaks down complex programming tasks into
manageable sub-problems. Advances in this area have introduced CoT models,
specifically designed to integrate CoT reasoning effectively into language
models, achieving notable improvements in code generation. Despite these
advancements, the security of CoT models has not been systematically studied.
In this study, we aim to fill this gap by investigating the vulnerability of
CoT models to backdoor injection in code generation tasks. To address this, we
propose a model-agnostic backdoor attack method SABER (Self-Attention-BasEd
backdooR) based on the self-attention mechanism. SABER begins by selecting a
malicious output as the backdoor using code mutation operations. It then
identifies the tokens most relevant to poisoned content by analyzing
self-attention scores in the CodeBERT model. Finally, it mimics user behavior
to generate adaptive and natural triggers. Our experiments on HumanEval-CoT and
OpenEval-CoT test sets demonstrate that CoT models are susceptible to backdoor
attacks via data poisoning. Taking the HumanEval-CoT dataset as an example,
SABER achieves an ASR of 80.95%, representing an improvement of 33.33% over
RIPPLe and a substantial 4.76% enhancement compared to BadPre. Further
evaluations using ONION for automated detection and human studies reveal that
SABER is stealthier and harder to detect, bypassing 61.90% of automated
detection, with a human detection rate of just 3.17%. Our findings reveal that
backdoors can be injected into CoT models to manipulate downstream code
generation tasks. This highlights the urgent need for further research to
understand and mitigate the security vulnerabilities in CoT models.
| no_new_dataset | 0.94366 |
2412.06244 | Yunheng Li | Yunheng Li, Yuxuan Li, Quansheng Zeng, Wenhai Wang, Qibin Hou,
Ming-Ming Cheng | Unbiased Region-Language Alignment for Open-Vocabulary Dense Prediction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pre-trained vision-language models (VLMs), such as CLIP, have demonstrated
impressive zero-shot recognition capability, but still underperform in dense
prediction tasks. Self-distillation recently is emerging as a promising
approach for fine-tuning VLMs to better adapt to local regions without
requiring extensive annotations. However, previous state-of-the-art approaches
often suffer from significant `foreground bias', where models tend to wrongly
identify background regions as foreground objects. To alleviate this issue, we
propose DenseVLM, a framework designed to learn unbiased region-language
alignment from powerful pre-trained VLM representations.
DenseVLM leverages the pre-trained VLM to retrieve categories for unlabeled
regions and then decouples the interference between foreground and background
features. We show that DenseVLM can directly replace the original VLM in
open-vocabulary object detection and image segmentation methods, leading to
notable performance improvements. Furthermore, it exhibits promising zero-shot
scalability when training on more extensive and diverse datasets. Our code is
available at https://github.com/HVision-NKU/DenseVLM.
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2024 06:34:23 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 07:19:10 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Yunheng",
""
],
[
"Li",
"Yuxuan",
""
],
[
"Zeng",
"Quansheng",
""
],
[
"Wang",
"Wenhai",
""
],
[
"Hou",
"Qibin",
""
],
[
"Cheng",
"Ming-Ming",
""
]
]
| TITLE: Unbiased Region-Language Alignment for Open-Vocabulary Dense Prediction
ABSTRACT: Pre-trained vision-language models (VLMs), such as CLIP, have demonstrated
impressive zero-shot recognition capability, but still underperform in dense
prediction tasks. Self-distillation recently is emerging as a promising
approach for fine-tuning VLMs to better adapt to local regions without
requiring extensive annotations. However, previous state-of-the-art approaches
often suffer from significant `foreground bias', where models tend to wrongly
identify background regions as foreground objects. To alleviate this issue, we
propose DenseVLM, a framework designed to learn unbiased region-language
alignment from powerful pre-trained VLM representations.
DenseVLM leverages the pre-trained VLM to retrieve categories for unlabeled
regions and then decouples the interference between foreground and background
features. We show that DenseVLM can directly replace the original VLM in
open-vocabulary object detection and image segmentation methods, leading to
notable performance improvements. Furthermore, it exhibits promising zero-shot
scalability when training on more extensive and diverse datasets. Our code is
available at https://github.com/HVision-NKU/DenseVLM.
| no_new_dataset | 0.941708 |
2412.06334 | Ilia Petrov | Ilya A. Petrov, Riccardo Marin, Julian Chibane, Gerard Pons-Moll | TriDi: Trilateral Diffusion of 3D Humans, Objects, and Interactions | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Modeling 3D human-object interaction (HOI) is a problem of great interest for
computer vision and a key enabler for virtual and mixed-reality applications.
Existing methods work in a one-way direction: some recover plausible human
interactions conditioned on a 3D object; others recover the object pose
conditioned on a human pose. Instead, we provide the first unified model -
TriDi which works in any direction. Concretely, we generate Human, Object, and
Interaction modalities simultaneously with a new three-way diffusion process,
allowing to model seven distributions with one network. We implement TriDi as a
transformer attending to the various modalities' tokens, thereby discovering
conditional relations between them. The user can control the interaction either
as a text description of HOI or a contact map. We embed these two
representations into a shared latent space, combining the practicality of text
descriptions with the expressiveness of contact maps. Using a single network,
TriDi unifies all the special cases of prior work and extends to new ones,
modeling a family of seven distributions. Remarkably, despite using a single
model, TriDi-generated samples surpass one-way specialized baselines on GRAB
and BEHAVE in terms of both qualitative and quantitative metrics, and
demonstrate better diversity. We show the applicability of TriDi to scene
population, generating objects for human-contact datasets, and generalization
to unseen object geometry. The project page is available at:
https://virtualhumans.mpi-inf.mpg.de/tridi.
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2024 09:35:05 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 15:19:30 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Petrov",
"Ilya A.",
""
],
[
"Marin",
"Riccardo",
""
],
[
"Chibane",
"Julian",
""
],
[
"Pons-Moll",
"Gerard",
""
]
]
| TITLE: TriDi: Trilateral Diffusion of 3D Humans, Objects, and Interactions
ABSTRACT: Modeling 3D human-object interaction (HOI) is a problem of great interest for
computer vision and a key enabler for virtual and mixed-reality applications.
Existing methods work in a one-way direction: some recover plausible human
interactions conditioned on a 3D object; others recover the object pose
conditioned on a human pose. Instead, we provide the first unified model -
TriDi which works in any direction. Concretely, we generate Human, Object, and
Interaction modalities simultaneously with a new three-way diffusion process,
allowing to model seven distributions with one network. We implement TriDi as a
transformer attending to the various modalities' tokens, thereby discovering
conditional relations between them. The user can control the interaction either
as a text description of HOI or a contact map. We embed these two
representations into a shared latent space, combining the practicality of text
descriptions with the expressiveness of contact maps. Using a single network,
TriDi unifies all the special cases of prior work and extends to new ones,
modeling a family of seven distributions. Remarkably, despite using a single
model, TriDi generated samples surpass one-way specialized baselines on GRAB
and BEHAVE in terms of both qualitative and quantitative metrics, and
demonstrating better diversity. We show the applicability of TriDi to scene
population, generating objects for human-contact datasets, and generalization
to unseen object geometry. The project page is available at:
https://virtualhumans.mpi-inf.mpg.de/tridi.
| no_new_dataset | 0.953794 |
2412.07385 | Nermin Samet | Ellington Kirby, Mickael Chen, Renaud Marlet, Nermin Samet | LOGen: Toward Lidar Object Generation by Point Diffusion | Project web page: https://nerminsamet.github.io/logen/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The generation of LiDAR scans is a growing area of research with diverse
applications to autonomous driving. However, scan generation remains
challenging, especially when compared to the rapid advancement of 2D and 3D
object generation. We introduce a novel task: LiDAR object generation,
requiring models to produce 3D objects as viewed by a LiDAR scan. This task
focuses LiDAR scan generation on the most interesting aspect of scenes, the
objects, while also benefiting from advancements in 3D object generative
methods. We introduce a novel diffusion-based model to produce LiDAR point
clouds of dataset objects, including intensity, and with an extensive control
of the generation via conditioning information. Our experiments on nuScenes
show the quality of our generations measured with new 3D metrics developed to
suit LiDAR objects.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 10:30:27 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 13:15:45 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kirby",
"Ellington",
""
],
[
"Chen",
"Mickael",
""
],
[
"Marlet",
"Renaud",
""
],
[
"Samet",
"Nermin",
""
]
]
| TITLE: LOGen: Toward Lidar Object Generation by Point Diffusion
ABSTRACT: The generation of LiDAR scans is a growing area of research with diverse
applications to autonomous driving. However, scan generation remains
challenging, especially when compared to the rapid advancement of 2D and 3D
object generation. We introduce a novel task: LiDAR object generation,
requiring models to produce 3D objects as viewed by a LiDAR scan. This task
focuses LiDAR scan generation on the most interesting aspect of scenes, the
objects, while also benefiting from advancements in 3D object generative
methods. We introduce a novel diffusion-based model to produce LiDAR point
clouds of dataset objects, including intensity, and with an extensive control
of the generation via conditioning information. Our experiments on nuScenes
show the quality of our generations measured with new 3D metrics developed to
suit LiDAR objects.
| no_new_dataset | 0.942665 |
2412.07808 | Myeongseob Ko | Myeongseob Ko, Henry Li, Zhun Wang, Jonathan Patsenker, Jiachen T.
Wang, Qinbin Li, Ming Jin, Dawn Song, Ruoxi Jia | Boosting Alignment for Post-Unlearning Text-to-Image Generative Models | The Thirty-Eighth Annual Conference on Neural Information Processing
Systems | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large-scale generative models have shown impressive image-generation
capabilities, propelled by massive data. However, this often inadvertently
leads to the generation of harmful or inappropriate content and raises
copyright concerns. Driven by these concerns, machine unlearning has become
crucial to effectively purge undesirable knowledge from models. While existing
literature has studied various unlearning techniques, these often suffer from
either poor unlearning quality or degradation in text-image alignment after
unlearning, due to the competitive nature of these objectives. To address these
challenges, we propose a framework that seeks an optimal model update at each
unlearning iteration, ensuring monotonic improvement on both objectives. We
further derive the characterization of such an update.
In addition, we design procedures to strategically diversify the unlearning
and remaining datasets to boost performance improvement. Our evaluation
demonstrates that our method effectively removes target classes from recent
diffusion-based generative models and concepts from stable diffusion models
while maintaining close alignment with the models' original trained states,
thus outperforming state-of-the-art baselines. Our code will be made available
at https://github.com/reds-lab/Restricted_gradient_diversity_unlearning.git.
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2024 21:36:10 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 22:38:02 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ko",
"Myeongseob",
""
],
[
"Li",
"Henry",
""
],
[
"Wang",
"Zhun",
""
],
[
"Patsenker",
"Jonathan",
""
],
[
"Wang",
"Jiachen T.",
""
],
[
"Li",
"Qinbin",
""
],
[
"Jin",
"Ming",
""
],
[
"Song",
"Dawn",
""
],
[
"Jia",
"Ruoxi",
""
]
]
| TITLE: Boosting Alignment for Post-Unlearning Text-to-Image Generative Models
ABSTRACT: Large-scale generative models have shown impressive image-generation
capabilities, propelled by massive data. However, this often inadvertently
leads to the generation of harmful or inappropriate content and raises
copyright concerns. Driven by these concerns, machine unlearning has become
crucial to effectively purge undesirable knowledge from models. While existing
literature has studied various unlearning techniques, these often suffer from
either poor unlearning quality or degradation in text-image alignment after
unlearning, due to the competitive nature of these objectives. To address these
challenges, we propose a framework that seeks an optimal model update at each
unlearning iteration, ensuring monotonic improvement on both objectives. We
further derive the characterization of such an update.
In addition, we design procedures to strategically diversify the unlearning
and remaining datasets to boost performance improvement. Our evaluation
demonstrates that our method effectively removes target classes from recent
diffusion-based generative models and concepts from stable diffusion models
while maintaining close alignment with the models' original trained states,
thus outperforming state-of-the-art baselines. Our code will be made available
at https://github.com/reds-lab/Restricted_gradient_diversity_unlearning.git.
| no_new_dataset | 0.941439 |
2412.08468 | Haosheng Li | Haosheng Li, Weixin Mao, Weipeng Deng, Chenyu Meng, Haoqiang Fan,
Tiancai Wang, Ping Tan, Hongan Wang, Xiaoming Deng | Multi-GraspLLM: A Multimodal LLM for Multi-Hand Semantic Guided Grasp
Generation | 16 pages, 10 figures | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-hand semantic grasp generation aims to generate feasible and
semantically appropriate grasp poses for different robotic hands based on
natural language instructions. Although the task is highly valuable, due to the
lack of multi-hand grasp datasets with fine-grained contact descriptions between
robotic hands and objects, it remains a long-standing and difficult task. In this
paper, we present Multi-GraspSet, the first large-scale multi-hand grasp
dataset with automatic contact annotations. Based on Multi-GraspSet, we
propose Multi-GraspLLM, a unified language-guided grasp generation framework,
which leverages large language models (LLM) to handle variable-length
sequences, generating grasp poses for diverse robotic hands in a single unified
architecture. Multi-GraspLLM first aligns the encoded point cloud features and
text features into a unified semantic space. It then generates grasp bin tokens
that are subsequently converted into grasp poses for each robotic hand via
hand-aware linear mapping. The experimental results demonstrate that our
approach significantly outperforms existing methods in both real-world
experiments and simulation. More information can be found on our project page
https://multi-graspllm.github.io.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 15:33:35 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 12:25:32 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Haosheng",
""
],
[
"Mao",
"Weixin",
""
],
[
"Deng",
"Weipeng",
""
],
[
"Meng",
"Chenyu",
""
],
[
"Fan",
"Haoqiang",
""
],
[
"Wang",
"Tiancai",
""
],
[
"Tan",
"Ping",
""
],
[
"Wang",
"Hongan",
""
],
[
"Deng",
"Xiaoming",
""
]
]
| TITLE: Multi-GraspLLM: A Multimodal LLM for Multi-Hand Semantic Guided Grasp
Generation
ABSTRACT: Multi-hand semantic grasp generation aims to generate feasible and
semantically appropriate grasp poses for different robotic hands based on
natural language instructions. Although the task is highly valuable, due to the
lack of multi-hand grasp datasets with fine-grained contact descriptions between
robotic hands and objects, it remains a long-standing and difficult task. In this
paper, we present Multi-GraspSet, the first large-scale multi-hand grasp
dataset with automatic contact annotations. Based on Multi-GraspSet, we
propose Multi-GraspLLM, a unified language-guided grasp generation framework,
which leverages large language models (LLM) to handle variable-length
sequences, generating grasp poses for diverse robotic hands in a single unified
architecture. Multi-GraspLLM first aligns the encoded point cloud features and
text features into a unified semantic space. It then generates grasp bin tokens
that are subsequently converted into grasp poses for each robotic hand via
hand-aware linear mapping. The experimental results demonstrate that our
approach significantly outperforms existing methods in both real-world
experiments and simulation. More information can be found on our project page
https://multi-graspllm.github.io.
| new_dataset | 0.954009 |
2412.08619 | Mohammadmehdi Ataei | Vahid Balazadeh, Mohammadmehdi Ataei, Hyunmin Cheong, Amir Hosein
Khasahmadi, Rahul G. Krishnan | Physics Context Builders: A Modular Framework for Physical Reasoning in
Vision-Language Models | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Physical reasoning, which involves interpreting object behaviors within
dynamic environments, remains a significant challenge for Vision-Language
Models (VLMs). The limitations in physical reasoning arise from an inability to
translate learned knowledge into predictions about physical behavior. We
perform a careful study to show how continual fine-tuning can mitigate this
issue. However, fine-tuning is expensive for large models and impractical to
repeatedly perform for every task. This necessitates the creation of modular
and scalable ways to teach VLMs about physical reasoning. To that end, we
introduce Physics Context Builders (PCBs), a novel modular framework where
specialized VLMs are fine-tuned to generate detailed physical scene
descriptions. These can be used as physical contexts for larger VLMs to enhance
their reasoning capabilities. PCBs enable the separation of visual perception
from reasoning, allowing us to analyze their relative contributions to physical
understanding. We perform careful experiments on CLEVRER and on Falling Tower,
a stability detection dataset with both simulated and real-world scenes, to
demonstrate that PCBs provide substantial performance improvements, increasing
average accuracy by up to 13.8% on complex physical reasoning tasks. Notably,
PCBs show strong Sim2Real transfer, successfully generalizing from simulated
training data to real-world scenes. Our work demonstrates that enhancing visual
perception through modular, simulation-trained components offers a practical
approach to improving physical reasoning in VLMs, while providing insights into
the factors affecting physical understanding in these models.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 18:40:16 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 17:01:51 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Balazadeh",
"Vahid",
""
],
[
"Ataei",
"Mohammadmehdi",
""
],
[
"Cheong",
"Hyunmin",
""
],
[
"Khasahmadi",
"Amir Hosein",
""
],
[
"Krishnan",
"Rahul G.",
""
]
]
| TITLE: Physics Context Builders: A Modular Framework for Physical Reasoning in
Vision-Language Models
ABSTRACT: Physical reasoning, which involves interpreting object behaviors within
dynamic environments, remains a significant challenge for Vision-Language
Models (VLMs). The limitations in physical reasoning arise from an inability to
translate learned knowledge into predictions about physical behavior. We
perform a careful study to show how continual fine-tuning can mitigate this
issue. However, fine-tuning is expensive for large models and impractical to
repeatedly perform for every task. This necessitates the creation of modular
and scalable ways to teach VLMs about physical reasoning. To that end, we
introduce Physics Context Builders (PCBs), a novel modular framework where
specialized VLMs are fine-tuned to generate detailed physical scene
descriptions. These can be used as physical contexts for larger VLMs to enhance
their reasoning capabilities. PCBs enable the separation of visual perception
from reasoning, allowing us to analyze their relative contributions to physical
understanding. We perform careful experiments on CLEVRER and on Falling Tower,
a stability detection dataset with both simulated and real-world scenes, to
demonstrate that PCBs provide substantial performance improvements, increasing
average accuracy by up to 13.8% on complex physical reasoning tasks. Notably,
PCBs show strong Sim2Real transfer, successfully generalizing from simulated
training data to real-world scenes. Our work demonstrates that enhancing visual
perception through modular, simulation-trained components offers a practical
approach to improving physical reasoning in VLMs, while providing insights into
the factors affecting physical understanding in these models.
| new_dataset | 0.642993 |
2412.09256 | Fabrizio Boninsegna | Fabrizio Boninsegna, Francesco Silvestri | Differentially Private Release of Hierarchical Origin/Destination Data
with a TopDown Approach | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel method for generating differentially private
tabular datasets for hierarchical data, specifically focusing on
origin-destination (O/D) trips. The approach builds upon the TopDown algorithm,
a constraint-based mechanism developed by the U.S. Census to incorporate
invariant queries into tabular data. O/D hierarchical data refers to datasets
representing trips between geographical areas organized in a hierarchical
structure (e.g., region $\rightarrow$ province $\rightarrow$ city). The
proposed method is designed to improve the accuracy of queries covering broader
geographical areas, which are derived through aggregation. This feature
provides a "zoom-in" effect on the dataset, ensuring that when zoomed back out,
the overall picture is preserved. Furthermore, the approach aims to reduce
false positive detection. These characteristics can strengthen practitioners'
and decision-makers' confidence in adopting differential privacy datasets. The
main technical contributions of this paper include a novel TopDown algorithm
that employs constrained optimization with Chebyshev distance minimization,
with theoretical guarantees on the maximum absolute error. Additionally, we
propose a new integer optimization algorithm that significantly reduces the
incidence of false positives. The effectiveness of the proposed approach is
validated using real-world and synthetic O/D datasets, demonstrating its
ability to generate private data with high utility and a reduced number of
false positives. Our experiments focus on O/D datasets with a single trip as a
unit of privacy: nevertheless, the proposed approach supports other units of
privacy and also applies to any tabular data with a hierarchical structure.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2024 13:14:15 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 13:55:47 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Boninsegna",
"Fabrizio",
""
],
[
"Silvestri",
"Francesco",
""
]
]
| TITLE: Differentially Private Release of Hierarchical Origin/Destination Data
with a TopDown Approach
ABSTRACT: This paper presents a novel method for generating differentially private
tabular datasets for hierarchical data, specifically focusing on
origin-destination (O/D) trips. The approach builds upon the TopDown algorithm,
a constraint-based mechanism developed by the U.S. Census to incorporate
invariant queries into tabular data. O/D hierarchical data refers to datasets
representing trips between geographical areas organized in a hierarchical
structure (e.g., region $\rightarrow$ province $\rightarrow$ city). The
proposed method is designed to improve the accuracy of queries covering broader
geographical areas, which are derived through aggregation. This feature
provides a "zoom-in" effect on the dataset, ensuring that when zoomed back out,
the overall picture is preserved. Furthermore, the approach aims to reduce
false positive detection. These characteristics can strengthen practitioners'
and decision-makers' confidence in adopting differential privacy datasets. The
main technical contributions of this paper include a novel TopDown algorithm
that employs constrained optimization with Chebyshev distance minimization,
with theoretical guarantees on the maximum absolute error. Additionally, we
propose a new integer optimization algorithm that significantly reduces the
incidence of false positives. The effectiveness of the proposed approach is
validated using real-world and synthetic O/D datasets, demonstrating its
ability to generate private data with high utility and a reduced number of
false positives. Our experiments focus on O/D datasets with a single trip as a
unit of privacy: nevertheless, the proposed approach supports other units of
privacy and also applies to any tabular data with a hierarchical structure.
| no_new_dataset | 0.947624 |
2412.09921 | Jaehwan Jeong | Jaehwan Jeong, Sumin In, Sieun Kim, Hannie Shin, Jongheon Jeong, Sang
Ho Yoon, Jaewook Chung, Sangpil Kim | FaceShield: Defending Facial Image against Deepfake Threats | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rising use of deepfakes in criminal activities presents a significant
issue, inciting widespread controversy. While numerous studies have tackled
this problem, most primarily focus on deepfake detection. These reactive
solutions are insufficient as a fundamental approach for crimes where
authenticity is disregarded. Existing proactive defenses also have limitations,
as they are effective only for deepfake models based on specific Generative
Adversarial Networks (GANs), making them less applicable in light of recent
advancements in diffusion-based models. In this paper, we propose a proactive
defense method named FaceShield, which introduces novel defense strategies
targeting deepfakes generated by Diffusion Models (DMs) and facilitates
defenses on various existing GAN-based deepfake models through facial feature
extractor manipulations. Our approach consists of three main components: (i)
manipulating the attention mechanism of DMs to exclude protected facial
features during the denoising process, (ii) targeting prominent facial feature
extraction models to enhance the robustness of our adversarial perturbation,
and (iii) employing Gaussian blur and low-pass filtering techniques to improve
imperceptibility while enhancing robustness against JPEG compression.
Experimental results on the CelebA-HQ and VGGFace2-HQ datasets demonstrate that
our method achieves state-of-the-art performance against the latest deepfake
models based on DMs, while also exhibiting transferability to GANs and
showcasing greater imperceptibility of noise along with enhanced robustness.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2024 07:20:35 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 08:36:55 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Jeong",
"Jaehwan",
""
],
[
"In",
"Sumin",
""
],
[
"Kim",
"Sieun",
""
],
[
"Shin",
"Hannie",
""
],
[
"Jeong",
"Jongheon",
""
],
[
"Yoon",
"Sang Ho",
""
],
[
"Chung",
"Jaewook",
""
],
[
"Kim",
"Sangpil",
""
]
]
| TITLE: FaceShield: Defending Facial Image against Deepfake Threats
ABSTRACT: The rising use of deepfakes in criminal activities presents a significant
issue, inciting widespread controversy. While numerous studies have tackled
this problem, most primarily focus on deepfake detection. These reactive
solutions are insufficient as a fundamental approach for crimes where
authenticity is disregarded. Existing proactive defenses also have limitations,
as they are effective only for deepfake models based on specific Generative
Adversarial Networks (GANs), making them less applicable in light of recent
advancements in diffusion-based models. In this paper, we propose a proactive
defense method named FaceShield, which introduces novel defense strategies
targeting deepfakes generated by Diffusion Models (DMs) and facilitates
defenses on various existing GAN-based deepfake models through facial feature
extractor manipulations. Our approach consists of three main components: (i)
manipulating the attention mechanism of DMs to exclude protected facial
features during the denoising process, (ii) targeting prominent facial feature
extraction models to enhance the robustness of our adversarial perturbation,
and (iii) employing Gaussian blur and low-pass filtering techniques to improve
imperceptibility while enhancing robustness against JPEG compression.
Experimental results on the CelebA-HQ and VGGFace2-HQ datasets demonstrate that
our method achieves state-of-the-art performance against the latest deepfake
models based on DMs, while also exhibiting transferability to GANs and
showcasing greater imperceptibility of noise along with enhanced robustness.
| no_new_dataset | 0.945901 |
2412.09959 | Xinhao Zhong | Xinhao Zhong, Shuoyang Sun, Xulin Gu, Zhaoyang Xu, Yaowei Wang, Min
Zhang, Bin Chen | Efficient Dataset Distillation via Diffusion-Driven Patch Selection for
Improved Generalization | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Dataset distillation offers an efficient way to reduce memory and
computational costs by optimizing a smaller dataset with performance comparable
to the full-scale original. However, for large datasets and complex deep
networks (e.g., ImageNet-1K with ResNet-101), the extensive optimization space
limits performance, reducing its practicality. Recent approaches employ
pre-trained diffusion models to generate informative images directly, avoiding
pixel-level optimization and achieving notable results. However, these methods
often face challenges due to distribution shifts between pre-trained models and
target datasets, along with the need for multiple distillation steps across
varying settings. To address these issues, we propose a novel framework
orthogonal to existing diffusion-based distillation methods, leveraging
diffusion models for selection rather than generation. Our method starts by
predicting noise generated by the diffusion model based on input images and
text prompts (with or without label text), then calculates the corresponding
loss for each pair. With the loss differences, we identify distinctive regions
of the original images. Additionally, we perform intra-class clustering and
ranking on selected patches to maintain diversity constraints. This streamlined
framework enables a single-step distillation process, and extensive experiments
demonstrate that our approach outperforms state-of-the-art methods across
various metrics.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2024 08:34:46 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Feb 2025 16:11:13 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 09:32:43 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhong",
"Xinhao",
""
],
[
"Sun",
"Shuoyang",
""
],
[
"Gu",
"Xulin",
""
],
[
"Xu",
"Zhaoyang",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Zhang",
"Min",
""
],
[
"Chen",
"Bin",
""
]
]
| TITLE: Efficient Dataset Distillation via Diffusion-Driven Patch Selection for
Improved Generalization
ABSTRACT: Dataset distillation offers an efficient way to reduce memory and
computational costs by optimizing a smaller dataset with performance comparable
to the full-scale original. However, for large datasets and complex deep
networks (e.g., ImageNet-1K with ResNet-101), the extensive optimization space
limits performance, reducing its practicality. Recent approaches employ
pre-trained diffusion models to generate informative images directly, avoiding
pixel-level optimization and achieving notable results. However, these methods
often face challenges due to distribution shifts between pre-trained models and
target datasets, along with the need for multiple distillation steps across
varying settings. To address these issues, we propose a novel framework
orthogonal to existing diffusion-based distillation methods, leveraging
diffusion models for selection rather than generation. Our method starts by
predicting noise generated by the diffusion model based on input images and
text prompts (with or without label text), then calculates the corresponding
loss for each pair. With the loss differences, we identify distinctive regions
of the original images. Additionally, we perform intra-class clustering and
ranking on selected patches to maintain diversity constraints. This streamlined
framework enables a single-step distillation process, and extensive experiments
demonstrate that our approach outperforms state-of-the-art methods across
various metrics.
| no_new_dataset | 0.946399 |
2412.10601 | Jingtao Min | Jingtao Min and Alexander Grayver | A decade of the fast-varying ionospheric and magnetospheric magnetic
fields from ground and multi-satellite observations | 49 pages, 21 figures | Geophysical Journal International, 241(2), 797-825 (2025) | 10.1093/gji/ggaf065 | null | physics.space-ph physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The time-varying geomagnetic field is a superposition of contributions from
multiple internal and external current systems. A major source of geomagnetic
variations at periods less than a few years are current systems external to the
solid Earth, namely the ionospheric and magnetospheric currents, as well as
associated induced currents. The separation of these three sources is
mathematically underdetermined using either ground or satellite measurements
alone, but becomes tractable when the two datasets are combined. Based on this
concept, we developed a new geomagnetic field modelling approach that allows us
to simultaneously characterise the mid-latitude ionospheric, magnetospheric and
the internal induced magnetic fields using ground and satellite observations
for all local times and magnetic conditions, and without prescribing any
harmonic behaviour on these current systems in time, as is typical in other
models. By applying this new method to a 10-year dataset of ground observatory
and multi-satellite measurements from 2014 to 2023, we obtained the time series
of the spherical harmonic coefficients of the ionospheric, magnetospheric and
induced fields. These new time series allow the study of complex non-periodic
dynamics of the external magnetic fields during global geomagnetic storms, as
well as periodicities in the magnetospheric coefficients linked to solar
activities and periodic ionospheric magnetic fields linked to lunar daily
variations, contributing to a more complete picture of the dynamics of the
external currents and magnetosphere-ionosphere interactions, and facilitating
more accurate space weather nowcast and forecast. Finally, the new approach
allows for a better characterisation of internal induced field sources, leading
to higher quality electromagnetic transfer functions.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2024 23:07:02 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 11:41:46 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Min",
"Jingtao",
""
],
[
"Grayver",
"Alexander",
""
]
]
| TITLE: A decade of the fast-varying ionospheric and magnetospheric magnetic
fields from ground and multi-satellite observations
ABSTRACT: The time-varying geomagnetic field is a superposition of contributions from
multiple internal and external current systems. Major sources of geomagnetic
variations at periods of less than a few years are the current systems external to the
solid Earth, namely the ionospheric and magnetospheric currents, as well as
associated induced currents. The separation of these three sources is
mathematically underdetermined using either ground or satellite measurements
alone, but becomes tractable when the two datasets are combined. Based on this
concept, we developed a new geomagnetic field modelling approach that allows us
to simultaneously characterise the mid-latitude ionospheric, magnetospheric and
the internal induced magnetic fields using ground and satellite observations
for all local times and magnetic conditions, and without prescribing any
harmonic behaviour on these current systems in time, as is typical in other
models. By applying this new method to a 10-year dataset of ground observatory
and multi-satellite measurements from 2014 to 2023, we obtained the time series
of the spherical harmonic coefficients of the ionospheric, magnetospheric and
induced fields. These new time series allow the study of complex non-periodic
dynamics of the external magnetic fields during global geomagnetic storms, as
well as periodicities in the magnetospheric coefficients linked to solar
activities and periodic ionospheric magnetic fields linked to lunar daily
variations, contributing to a more complete picture of the dynamics of the
external currents and magnetosphere-ionosphere interactions, and facilitating
more accurate space weather nowcast and forecast. Finally, the new approach
allows for a better characterisation of internal induced field sources, leading
to higher quality electromagnetic transfer functions.
| no_new_dataset | 0.953013 |
2412.11934 | Jingyu Peng | Jingyu Peng, Maolin Wang, Xiangyu Zhao, Kai Zhang, Wanyu Wang, Pengyue
Jia, Qidong Liu, Ruocheng Guo, Qi Liu | Stepwise Reasoning Error Disruption Attack of LLMs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have made remarkable strides in complex
reasoning tasks, but their safety and robustness in reasoning processes remain
underexplored. Existing attacks on LLM reasoning are constrained by specific
settings or lack of imperceptibility, limiting their feasibility and
generalizability. To address these challenges, we propose the Stepwise
rEasoning Error Disruption (SEED) attack, which subtly injects errors into
prior reasoning steps to mislead the model into producing incorrect subsequent
reasoning and final answers. Unlike previous methods, SEED is compatible with
zero-shot and few-shot settings, maintains the natural reasoning flow, and
ensures covert execution without modifying the instruction. Extensive
experiments on four datasets across four different models demonstrate SEED's
effectiveness, revealing the vulnerabilities of LLMs to disruptions in
reasoning processes. These findings underscore the need for greater attention
to the robustness of LLM reasoning to ensure safety in practical applications.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 16:20:41 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Dec 2024 03:55:40 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 06:22:15 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Peng",
"Jingyu",
""
],
[
"Wang",
"Maolin",
""
],
[
"Zhao",
"Xiangyu",
""
],
[
"Zhang",
"Kai",
""
],
[
"Wang",
"Wanyu",
""
],
[
"Jia",
"Pengyue",
""
],
[
"Liu",
"Qidong",
""
],
[
"Guo",
"Ruocheng",
""
],
[
"Liu",
"Qi",
""
]
]
| TITLE: Stepwise Reasoning Error Disruption Attack of LLMs
ABSTRACT: Large language models (LLMs) have made remarkable strides in complex
reasoning tasks, but their safety and robustness in reasoning processes remain
underexplored. Existing attacks on LLM reasoning are constrained by specific
settings or lack of imperceptibility, limiting their feasibility and
generalizability. To address these challenges, we propose the Stepwise
rEasoning Error Disruption (SEED) attack, which subtly injects errors into
prior reasoning steps to mislead the model into producing incorrect subsequent
reasoning and final answers. Unlike previous methods, SEED is compatible with
zero-shot and few-shot settings, maintains the natural reasoning flow, and
ensures covert execution without modifying the instruction. Extensive
experiments on four datasets across four different models demonstrate SEED's
effectiveness, revealing the vulnerabilities of LLMs to disruptions in
reasoning processes. These findings underscore the need for greater attention
to the robustness of LLM reasoning to ensure safety in practical applications.
| no_new_dataset | 0.945951 |
2412.12778 | Huihui Fang Miss | Chengzhou Yu (South China University of Technology), Huihui Fang
(Pazhou Laboratory), Hongqiu Wang (The Hong Kong University of Science and
Technology (Guangzhou)), Ting Deng (South China University of Technology),
Qing Du (South China University of Technology), Yanwu Xu (South China
University of Technology), and Weihua Yang (Shenzhen Eye Hospital) | Rethinking Diffusion-Based Image Generators for Fundus Fluorescein
Angiography Synthesis on Limited Data | The first author has a conflict with the data access authority | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Fundus imaging is a critical tool in ophthalmology, with different imaging
modalities offering unique advantages. For instance, fundus fluorescein
angiography (FFA) can accurately identify eye diseases. However, traditional
invasive FFA involves the injection of sodium fluorescein, which can cause
discomfort and risks. Generating corresponding FFA images from non-invasive
fundus images holds significant practical value but also presents challenges.
First, limited datasets constrain the performance and effectiveness of models.
Second, previous studies have primarily focused on generating FFA for single
diseases or single modalities, often resulting in poor performance for patients
with various ophthalmic conditions. To address these issues, we propose a novel
latent diffusion model-based framework, Diffusion, which introduces a
fine-tuning protocol to overcome the challenge of limited medical data and
unleash the generative capabilities of diffusion models. Furthermore, we
designed a new approach to tackle the challenges of generating across different
modalities and disease types. On limited datasets, our framework achieves
state-of-the-art results compared to existing methods, offering significant
potential to enhance ophthalmic diagnostics and patient care. Our code will be
released soon to support further research in this field.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 10:37:46 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 02:53:38 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Yu",
"Chengzhou",
"",
"South China University of Technology"
],
[
"Fang",
"Huihui",
"",
"Pazhou Laboratory"
],
[
"Wang",
"Hongqiu",
"",
"The Hong Kong University of Science and\n Technology"
],
[
"Deng",
"Ting",
"",
"South China University of Technology"
],
[
"Du",
"Qing",
"",
"South China University of Technology"
],
[
"Xu",
"Yanwu",
"",
"South China\n University of Technology"
],
[
"Yang",
"Weihua",
"",
"Shenzhen Eye Hospital"
]
]
| TITLE: Rethinking Diffusion-Based Image Generators for Fundus Fluorescein
Angiography Synthesis on Limited Data
ABSTRACT: Fundus imaging is a critical tool in ophthalmology, with different imaging
modalities offering unique advantages. For instance, fundus fluorescein
angiography (FFA) can accurately identify eye diseases. However, traditional
invasive FFA involves the injection of sodium fluorescein, which can cause
discomfort and risks. Generating corresponding FFA images from non-invasive
fundus images holds significant practical value but also presents challenges.
First, limited datasets constrain the performance and effectiveness of models.
Second, previous studies have primarily focused on generating FFA for single
diseases or single modalities, often resulting in poor performance for patients
with various ophthalmic conditions. To address these issues, we propose a novel
latent diffusion model-based framework, Diffusion, which introduces a
fine-tuning protocol to overcome the challenge of limited medical data and
unleash the generative capabilities of diffusion models. Furthermore, we
designed a new approach to tackle the challenges of generating across different
modalities and disease types. On limited datasets, our framework achieves
state-of-the-art results compared to existing methods, offering significant
potential to enhance ophthalmic diagnostics and patient care. Our code will be
released soon to support further research in this field.
| no_new_dataset | 0.948442 |
2412.12892 | Xing Liufu | Xing Liufu, Chaolei Tan, Xiaotong Lin, Yonggang Qi, Jinxuan Li,
Jian-Fang Hu | SAUGE: Taming SAM for Uncertainty-Aligned Multi-Granularity Edge
Detection | Accepted to AAAI 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Edge labels are typically at various granularity levels owing to the varying
preferences of annotators; thus, handling the subjectivity of per-pixel labels
has been a focal point for edge detection. Previous methods often employ a
simple voting strategy to diminish such label uncertainty or impose a strong
assumption of labels with a pre-defined distribution, e.g., Gaussian. In this
work, we unveil that the segment anything model (SAM) provides strong prior
knowledge to model the uncertainty in edge labels. Our key insight is that the
intermediate SAM features inherently correspond to object edges at various
granularities, which reflects different edge options due to uncertainty.
Therefore, we attempt to align uncertainty with granularity by regressing
intermediate SAM features from different layers to object edges at
multi-granularity levels. In doing so, the model can fully and explicitly
explore diverse ``uncertainties'' in a data-driven fashion. Specifically, we
inject a lightweight module (~ 1.5% additional parameters) into the frozen SAM
to progressively fuse and adapt its intermediate features to estimate edges
from coarse to fine. It is crucial to normalize the granularity level of human
edge labels to match their innate uncertainty. For this, we simply perform
linear blending to the real edge labels at hand to create pseudo labels with
varying granularities. Consequently, our uncertainty-aligned edge detector can
flexibly produce edges at any desired granularity (including an optimal one).
Thanks to SAM, our model uniquely demonstrates strong generalizability for
cross-dataset edge detection. Extensive experimental results on BSDS500,
Multicue and NYUDv2 validate our model's superiority.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 13:18:41 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 17:43:15 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liufu",
"Xing",
""
],
[
"Tan",
"Chaolei",
""
],
[
"Lin",
"Xiaotong",
""
],
[
"Qi",
"Yonggang",
""
],
[
"Li",
"Jinxuan",
""
],
[
"Hu",
"Jian-Fang",
""
]
]
| TITLE: SAUGE: Taming SAM for Uncertainty-Aligned Multi-Granularity Edge
Detection
ABSTRACT: Edge labels are typically at various granularity levels owing to the varying
preferences of annotators; thus, handling the subjectivity of per-pixel labels
has been a focal point for edge detection. Previous methods often employ a
simple voting strategy to diminish such label uncertainty or impose a strong
assumption of labels with a pre-defined distribution, e.g., Gaussian. In this
work, we unveil that the segment anything model (SAM) provides strong prior
knowledge to model the uncertainty in edge labels. Our key insight is that the
intermediate SAM features inherently correspond to object edges at various
granularities, which reflects different edge options due to uncertainty.
Therefore, we attempt to align uncertainty with granularity by regressing
intermediate SAM features from different layers to object edges at
multi-granularity levels. In doing so, the model can fully and explicitly
explore diverse ``uncertainties'' in a data-driven fashion. Specifically, we
inject a lightweight module (~ 1.5% additional parameters) into the frozen SAM
to progressively fuse and adapt its intermediate features to estimate edges
from coarse to fine. It is crucial to normalize the granularity level of human
edge labels to match their innate uncertainty. For this, we simply perform
linear blending to the real edge labels at hand to create pseudo labels with
varying granularities. Consequently, our uncertainty-aligned edge detector can
flexibly produce edges at any desired granularity (including an optimal one).
Thanks to SAM, our model uniquely demonstrates strong generalizability for
cross-dataset edge detection. Extensive experimental results on BSDS500,
Multicue and NYUDv2 validate our model's superiority.
| no_new_dataset | 0.945701 |
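The "linear blending" step mentioned in the abstract above is easy to illustrate. One simple realization, shown below, averages several annotators' binary edge maps (a linear blend with equal weights) and thresholds the agreement map at different levels to obtain pseudo labels of varying granularity; the weights and thresholds are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def blended_pseudo_labels(annotations, thresholds):
    """Blend per-annotator edge maps and threshold at several levels.

    annotations: (A, H, W) binary edge maps from A annotators.
    A low threshold keeps fine, rarely-agreed edges; a high threshold keeps only
    edges most annotators marked, i.e. a coarser granularity.
    """
    agreement = annotations.mean(axis=0)                      # linear blend with equal weights
    return {t: (agreement >= t).astype(np.float32) for t in thresholds}

rng = np.random.default_rng(0)
annos = (rng.random((5, 64, 64)) > 0.85).astype(np.float32)   # 5 toy annotators
pseudo = blended_pseudo_labels(annos, thresholds=(0.2, 0.4, 0.8))
for t, label in pseudo.items():
    print(f"threshold {t}: {int(label.sum())} edge pixels")
```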
2412.13178 | Sheng Yin | Sheng Yin, Xianghe Pang, Yuanzhuo Ding, Menglan Chen, Yutong Bi,
Yichen Xiong, Wenhao Huang, Zhen Xiang, Jing Shao, and Siheng Chen | SafeAgentBench: A Benchmark for Safe Task Planning of Embodied LLM
Agents | 23 pages, 17 tables, 14 figures | null | null | null | cs.CR cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the integration of large language models (LLMs), embodied agents have
strong capabilities to understand and plan complicated natural language
instructions. However, a foreseeable issue is that those embodied agents can
also flawlessly execute some hazardous tasks, potentially causing damage in
the real world. Existing benchmarks predominantly overlook critical safety
risks, focusing solely on planning performance, while a few evaluate LLMs'
safety awareness only on non-interactive image-text data. To address this gap,
we present SafeAgentBench, the first benchmark for safety-aware task planning of
embodied LLM agents in interactive simulation environments. SafeAgentBench
includes: (1) an executable, diverse, and high-quality dataset of 750 tasks,
rigorously curated to cover 10 potential hazards and 3 task types; (2)
SafeAgentEnv, a universal embodied environment with a low-level controller,
supporting multi-agent execution with 17 high-level actions for 8
state-of-the-art baselines; and (3) reliable evaluation methods from both
execution and semantic perspectives. Experimental results show that, although
agents based on different design frameworks exhibit substantial differences in
task success rates, their overall safety awareness remains weak. The most
safety-conscious baseline achieves only a 10\% rejection rate for detailed
hazardous tasks. Moreover, simply replacing the LLM driving the agent does not
lead to notable improvements in safety awareness. More details and code are
available at https://github.com/shengyin1224/SafeAgentBench.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 18:55:58 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Dec 2024 14:00:02 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Feb 2025 09:20:21 GMT"
},
{
"version": "v4",
"created": "Mon, 10 Mar 2025 12:13:09 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Yin",
"Sheng",
""
],
[
"Pang",
"Xianghe",
""
],
[
"Ding",
"Yuanzhuo",
""
],
[
"Chen",
"Menglan",
""
],
[
"Bi",
"Yutong",
""
],
[
"Xiong",
"Yichen",
""
],
[
"Huang",
"Wenhao",
""
],
[
"Xiang",
"Zhen",
""
],
[
"Shao",
"Jing",
""
],
[
"Chen",
"Siheng",
""
]
]
| TITLE: SafeAgentBench: A Benchmark for Safe Task Planning of Embodied LLM
Agents
ABSTRACT: With the integration of large language models (LLMs), embodied agents have
strong capabilities to understand and plan complicated natural language
instructions. However, a foreseeable issue is that those embodied agents can
also flawlessly execute some hazardous tasks, potentially causing damage in
the real world. Existing benchmarks predominantly overlook critical safety
risks, focusing solely on planning performance, while a few evaluate LLMs'
safety awareness only on non-interactive image-text data. To address this gap,
we present SafeAgentBench, the first benchmark for safety-aware task planning of
embodied LLM agents in interactive simulation environments. SafeAgentBench
includes: (1) an executable, diverse, and high-quality dataset of 750 tasks,
rigorously curated to cover 10 potential hazards and 3 task types; (2)
SafeAgentEnv, a universal embodied environment with a low-level controller,
supporting multi-agent execution with 17 high-level actions for 8
state-of-the-art baselines; and (3) reliable evaluation methods from both
execution and semantic perspectives. Experimental results show that, although
agents based on different design frameworks exhibit substantial differences in
task success rates, their overall safety awareness remains weak. The most
safety-conscious baseline achieves only a 10\% rejection rate for detailed
hazardous tasks. Moreover, simply replacing the LLM driving the agent does not
lead to notable improvements in safety awareness. More details and code are
available at https://github.com/shengyin1224/SafeAgentBench.
| new_dataset | 0.91837 |
2412.13654 | Yuning Peng | Yuning Peng, Haiping Wang, Yuan Liu, Chenglu Wen, Zhen Dong, Bisheng
Yang | GAGS: Granularity-Aware Feature Distillation for Language Gaussian
Splatting | Project page: https://pz0826.github.io/GAGS-Webpage/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | 3D open-vocabulary scene understanding, which accurately perceives complex
semantic properties of objects in space, has gained significant attention in
recent years. In this paper, we propose GAGS, a framework that distills 2D CLIP
features into 3D Gaussian splatting, enabling open-vocabulary queries for
renderings on arbitrary viewpoints. The main challenge of distilling 2D
features for 3D fields lies in the multiview inconsistency of extracted 2D
features, which provides unstable supervision for the 3D feature field. GAGS
addresses this challenge with two novel strategies. First, GAGS associates the
prompt point density of SAM with the camera distances, which significantly
improves the multiview consistency of segmentation results. Second, GAGS
further decodes a granularity factor to guide the distillation process and this
granularity factor can be learned in an unsupervised manner to select only the
multiview consistent 2D features in the distillation process. Experimental
results on two datasets demonstrate significant performance and stability
improvements of GAGS in visual grounding and semantic segmentation, with an
inference speed 2$\times$ faster than baseline methods. The code and additional
results are available at https://pz0826.github.io/GAGS-Webpage/ .
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2024 09:33:20 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 13:37:13 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Peng",
"Yuning",
""
],
[
"Wang",
"Haiping",
""
],
[
"Liu",
"Yuan",
""
],
[
"Wen",
"Chenglu",
""
],
[
"Dong",
"Zhen",
""
],
[
"Yang",
"Bisheng",
""
]
]
| TITLE: GAGS: Granularity-Aware Feature Distillation for Language Gaussian
Splatting
ABSTRACT: 3D open-vocabulary scene understanding, which accurately perceives complex
semantic properties of objects in space, has gained significant attention in
recent years. In this paper, we propose GAGS, a framework that distills 2D CLIP
features into 3D Gaussian splatting, enabling open-vocabulary queries for
renderings on arbitrary viewpoints. The main challenge of distilling 2D
features for 3D fields lies in the multiview inconsistency of extracted 2D
features, which provides unstable supervision for the 3D feature field. GAGS
addresses this challenge with two novel strategies. First, GAGS associates the
prompt point density of SAM with the camera distances, which significantly
improves the multiview consistency of segmentation results. Second, GAGS
further decodes a granularity factor to guide the distillation process and this
granularity factor can be learned in an unsupervised manner to select only the
multiview consistent 2D features in the distillation process. Experimental
results on two datasets demonstrate significant performance and stability
improvements of GAGS in visual grounding and semantic segmentation, with an
inference speed 2$\times$ faster than baseline methods. The code and additional
results are available at https://pz0826.github.io/GAGS-Webpage/ .
| no_new_dataset | 0.948106 |
2412.14833 | Hao Huang | Hao Huang, Yujie Lin, Siyu Chen, Haiyang Liu | Synchronized and Fine-Grained Head for Skeleton-Based Ambiguous Action
Recognition | 25pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Skeleton-based action recognition using GCNs has achieved remarkable
performance, but recognizing ambiguous actions, such as "waving" and
"saluting", remains a significant challenge. Existing methods typically rely on
a serial combination of GCNs and TCNs, where spatial and temporal features are
extracted independently, leading to unbalanced spatial-temporal information,
which hinders accurate action recognition. Moreover, existing methods for
ambiguous actions often overemphasize local details, resulting in the loss of
crucial global context, which further complicates the task of differentiating
ambiguous actions. To address these challenges, we propose a lightweight
plug-and-play module called SF-Head, inserted between GCN and TCN layers.
SF-Head first conducts SSTE with a Feature Redundancy Loss (F-RL), ensuring a
balanced interaction. It then performs AC-FA, with a Feature Consistency Loss
(F-CL), which aligns the aggregated features with their original
spatial-temporal features. Experimental results on NTU RGB+D 60, NTU RGB+D 120,
NW-UCLA and PKU-MMD I datasets demonstrate significant improvements in
distinguishing ambiguous actions.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2024 13:21:04 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 09:43:50 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Huang",
"Hao",
""
],
[
"Lin",
"Yujie",
""
],
[
"Chen",
"Siyu",
""
],
[
"Liu",
"Haiyang",
""
]
]
| TITLE: Synchronized and Fine-Grained Head for Skeleton-Based Ambiguous Action
Recognition
ABSTRACT: Skeleton-based action recognition using GCNs has achieved remarkable
performance, but recognizing ambiguous actions, such as "waving" and
"saluting", remains a significant challenge. Existing methods typically rely on
a serial combination of GCNs and TCNs, where spatial and temporal features are
extracted independently, leading to unbalanced spatial-temporal information,
which hinders accurate action recognition. Moreover, existing methods for
ambiguous actions often overemphasize local details, resulting in the loss of
crucial global context, which further complicates the task of differentiating
ambiguous actions. To address these challenges, we propose a lightweight
plug-and-play module called SF-Head, inserted between GCN and TCN layers.
SF-Head first conducts SSTE with a Feature Redundancy Loss (F-RL), ensuring a
balanced interaction. It then performs AC-FA, with a Feature Consistency Loss
(F-CL), which aligns the aggregated features with their original
spatial-temporal features. Experimental results on NTU RGB+D 60, NTU RGB+D 120,
NW-UCLA and PKU-MMD I datasets demonstrate significant improvements in
distinguishing ambiguous actions.
| no_new_dataset | 0.951278 |
2412.17210 | Hongsong Wang | Hongsong Wang, Andi Xu, Pinle Ding, Jie Gui | Dual Conditioned Motion Diffusion for Pose-Based Video Anomaly Detection | Code is on https://github.com/guijiejie/DCMD-main | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Video Anomaly Detection (VAD) is essential for computer vision research.
Existing VAD methods utilize either reconstruction-based or prediction-based
frameworks. The former excels at detecting irregular patterns or structures,
whereas the latter is capable of spotting abnormal deviations or trends. We
address pose-based video anomaly detection and introduce a novel framework
called Dual Conditioned Motion Diffusion (DCMD), which enjoys the advantages of
both approaches. The DCMD integrates conditioned motion and conditioned
embedding to comprehensively utilize the pose characteristics and latent
semantics of observed movements, respectively. In the reverse diffusion
process, a motion transformer is proposed to capture potential correlations
from multi-layered characteristics within the spectrum space of human motion.
To enhance the discriminability between normal and abnormal instances, we
design a novel United Association Discrepancy (UAD) regularization that
primarily relies on a Gaussian kernel-based time association and a
self-attention-based global association. Finally, a mask completion strategy is
introduced during the inference stage of the reverse diffusion process to
enhance the utilization of conditioned motion for the prediction branch of
anomaly detection. Extensive experiments on four datasets demonstrate that our
method dramatically outperforms state-of-the-art methods and exhibits superior
generalization performance.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2024 01:31:39 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 11:09:18 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Hongsong",
""
],
[
"Xu",
"Andi",
""
],
[
"Ding",
"Pinle",
""
],
[
"Gui",
"Jie",
""
]
]
| TITLE: Dual Conditioned Motion Diffusion for Pose-Based Video Anomaly Detection
ABSTRACT: Video Anomaly Detection (VAD) is essential for computer vision research.
Existing VAD methods utilize either reconstruction-based or prediction-based
frameworks. The former excels at detecting irregular patterns or structures,
whereas the latter is capable of spotting abnormal deviations or trends. We
address pose-based video anomaly detection and introduce a novel framework
called Dual Conditioned Motion Diffusion (DCMD), which enjoys the advantages of
both approaches. The DCMD integrates conditioned motion and conditioned
embedding to comprehensively utilize the pose characteristics and latent
semantics of observed movements, respectively. In the reverse diffusion
process, a motion transformer is proposed to capture potential correlations
from multi-layered characteristics within the spectrum space of human motion.
To enhance the discriminability between normal and abnormal instances, we
design a novel United Association Discrepancy (UAD) regularization that
primarily relies on a Gaussian kernel-based time association and a
self-attention-based global association. Finally, a mask completion strategy is
introduced during the inference stage of the reverse diffusion process to
enhance the utilization of conditioned motion for the prediction branch of
anomaly detection. Extensive experiments on four datasets demonstrate that our
method dramatically outperforms state-of-the-art methods and exhibits superior
generalization performance.
| no_new_dataset | 0.944022 |
2412.17804 | Yidi Shao | Yidi Shao, Mu Huang, Chen Change Loy, Bo Dai | GausSim: Foreseeing Reality by Gaussian Simulator for Elastic Objects | Project page: https://www.mmlab-ntu.com/project/gausim/index.html | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce GausSim, a novel neural network-based simulator designed to
capture the dynamic behaviors of real-world elastic objects represented through
Gaussian kernels. We leverage continuum mechanics and treat each kernel as a
Center of Mass System (CMS) that represents a continuous piece of matter,
accounting for realistic deformations without idealized assumptions. To improve
computational efficiency and fidelity, we employ a hierarchical structure that
further organizes kernels into CMSs with explicit formulations, enabling a
coarse-to-fine simulation approach. This structure significantly reduces
computational overhead while preserving detailed dynamics. In addition, GausSim
incorporates explicit physics constraints, such as mass and momentum
conservation, ensuring interpretable results and robust, physically plausible
simulations. To validate our approach, we present a new dataset, READY,
containing multi-view videos of real-world elastic deformations. Experimental
results demonstrate that GausSim achieves superior performance compared to
existing physics-driven baselines, offering a practical and accurate solution
for simulating complex dynamic behaviors. Code and model will be released.
Project page: https://www.mmlab-ntu.com/project/gausim/index.html .
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2024 18:58:17 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 17:50:32 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Shao",
"Yidi",
""
],
[
"Huang",
"Mu",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Dai",
"Bo",
""
]
]
| TITLE: GausSim: Foreseeing Reality by Gaussian Simulator for Elastic Objects
ABSTRACT: We introduce GausSim, a novel neural network-based simulator designed to
capture the dynamic behaviors of real-world elastic objects represented through
Gaussian kernels. We leverage continuum mechanics and treat each kernel as a
Center of Mass System (CMS) that represents a continuous piece of matter,
accounting for realistic deformations without idealized assumptions. To improve
computational efficiency and fidelity, we employ a hierarchical structure that
further organizes kernels into CMSs with explicit formulations, enabling a
coarse-to-fine simulation approach. This structure significantly reduces
computational overhead while preserving detailed dynamics. In addition, GausSim
incorporates explicit physics constraints, such as mass and momentum
conservation, ensuring interpretable results and robust, physically plausible
simulations. To validate our approach, we present a new dataset, READY,
containing multi-view videos of real-world elastic deformations. Experimental
results demonstrate that GausSim achieves superior performance compared to
existing physics-driven baselines, offering a practical and accurate solution
for simulating complex dynamic behaviors. Code and model will be released.
Project page: https://www.mmlab-ntu.com/project/gausim/index.html .
| new_dataset | 0.955981 |
2412.20268 | Laslo Hunhold | Laslo Hunhold, James Quinlan | Evaluation of Bfloat16, Posit, and Takum Arithmetics in Sparse Linear
Solvers | 8 pages, 6 figures | null | null | null | math.NA cs.NA | http://creativecommons.org/licenses/by/4.0/ | Solving sparse linear systems lies at the core of numerous computational
applications. Consequently, understanding the performance of recently proposed
alternatives to the established IEEE 754 floating-point numbers, such as
bfloat16 and the tapered-precision posit and takum machine number formats, is
of significant interest. This paper examines these formats in the context of
widely used solvers, namely LU, QR, and GMRES, with incomplete LU
preconditioning and mixed precision iterative refinement (MPIR). This contrasts
with the prevailing emphasis on designing specialized algorithms tailored to
new arithmetic formats.
This paper presents an extensive and unprecedented evaluation based on the
SuiteSparse Matrix Collection -- a dataset of real-world matrices with diverse
sizes and condition numbers. A key contribution is the faithful reproduction of
SuiteSparse's UMFPACK multifrontal LU factorization and SPQR multifrontal QR
factorization for machine number formats beyond single and double-precision
IEEE 754. Tapered-precision posit and takum formats show better accuracy in
direct solvers and reduced iteration counts in indirect solvers. Takum
arithmetic, in particular, exhibits exceptional stability, even at low
precision.
| [
{
"version": "v1",
"created": "Sat, 28 Dec 2024 20:49:46 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 10:13:42 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hunhold",
"Laslo",
""
],
[
"Quinlan",
"James",
""
]
]
| TITLE: Evaluation of Bfloat16, Posit, and Takum Arithmetics in Sparse Linear
Solvers
ABSTRACT: Solving sparse linear systems lies at the core of numerous computational
applications. Consequently, understanding the performance of recently proposed
alternatives to the established IEEE 754 floating-point numbers, such as
bfloat16 and the tapered-precision posit and takum machine number formats, is
of significant interest. This paper examines these formats in the context of
widely used solvers, namely LU, QR, and GMRES, with incomplete LU
preconditioning and mixed precision iterative refinement (MPIR). This contrasts
with the prevailing emphasis on designing specialized algorithms tailored to
new arithmetic formats.
This paper presents an extensive and unprecedented evaluation based on the
SuiteSparse Matrix Collection -- a dataset of real-world matrices with diverse
sizes and condition numbers. A key contribution is the faithful reproduction of
SuiteSparse's UMFPACK multifrontal LU factorization and SPQR multifrontal QR
factorization for machine number formats beyond single and double-precision
IEEE 754. Tapered-precision posit and takum formats show better accuracy in
direct solvers and reduced iteration counts in indirect solvers. Takum
arithmetic, in particular, exhibits exceptional stability, even at low
precision.
| new_dataset | 0.962532 |
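Of the solver components named in the abstract above, mixed precision iterative refinement (MPIR) is the easiest to illustrate: solve in a low-precision format, then correct the solution with residuals computed in higher precision. NumPy ships no posit or takum types, so the sketch below uses float32 as the "low" format and dense solves as a stand-in for the sparse LU/QR factorizations; it is an illustration of the refinement loop, not the paper's setup.

```python
import numpy as np

def mpir_solve(A, b, iters=5):
    """Mixed precision iterative refinement: float32 solves, float64 residuals."""
    A_lo = A.astype(np.float32)
    x = np.linalg.solve(A_lo, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                     # residual in high precision
        dx = np.linalg.solve(A_lo, r.astype(np.float32))  # correction in low precision
        x += dx.astype(np.float64)                        # (a real solver would reuse the LU factors)
    return x

rng = np.random.default_rng(0)
n = 300
A = rng.normal(size=(n, n)) + n * np.eye(n)               # well-conditioned test matrix
x_true = rng.normal(size=n)
b = A @ x_true
x = mpir_solve(A, b)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```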
2412.20436 | Shonosuke Harada | Shonosuke Harada, Ryosuke Yoneda, Hisashi Kashima | Treatment Effect Estimation for Graph-Structured Targets | update | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Treatment effect estimation, which helps understand the causality between
treatment and outcome variables, is a central task in decision-making across
various domains. While most studies focus on treatment effect estimation on
individual targets, in specific contexts, there is a necessity to comprehend
the treatment effect on a group of targets, especially those that have
relationships represented as a graph structure between them. In such cases, the
focus of treatment assignment is prone to depend on a particular node of the
graph, such as the one with the highest degree, thus resulting in an
observational bias from a small part of the entire graph. Whereas a bias tends
to be caused by the small part, straightforward extensions of previous studies
cannot provide efficient bias mitigation owing to the use of the entire graph
information. In this study, we propose Graph-target Treatment Effect Estimation
(GraphTEE), a framework designed to estimate treatment effects specifically on
graph-structured targets. GraphTEE aims to mitigate observational bias by
focusing on confounding variable sets and considering a new regularization
framework. Additionally, we provide a theoretical analysis on how GraphTEE
performs better in terms of bias mitigation. Experiments on synthetic and
semi-synthetic datasets demonstrate the effectiveness of our proposed method.
| [
{
"version": "v1",
"created": "Sun, 29 Dec 2024 11:21:17 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 14:36:33 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Harada",
"Shonosuke",
""
],
[
"Yoneda",
"Ryosuke",
""
],
[
"Kashima",
"Hisashi",
""
]
]
| TITLE: Treatment Effect Estimation for Graph-Structured Targets
ABSTRACT: Treatment effect estimation, which helps understand the causality between
treatment and outcome variables, is a central task in decision-making across
various domains. While most studies focus on treatment effect estimation on
individual targets, in specific contexts, there is a necessity to comprehend
the treatment effect on a group of targets, especially those that have
relationships represented as a graph structure between them. In such cases, the
focus of treatment assignment is prone to depend on a particular node of the
graph, such as the one with the highest degree, thus resulting in an
observational bias from a small part of the entire graph. Whereas a bias tends
to be caused by the small part, straightforward extensions of previous studies
cannot provide efficient bias mitigation owing to the use of the entire graph
information. In this study, we propose Graph-target Treatment Effect Estimation
(GraphTEE), a framework designed to estimate treatment effects specifically on
graph-structured targets. GraphTEE aims to mitigate observational bias by
focusing on confounding variable sets and considering a new regularization
framework. Additionally, we provide a theoretical analysis on how GraphTEE
performs better in terms of bias mitigation. Experiments on synthetic and
semi-synthetic datasets demonstrate the effectiveness of our proposed method.
| no_new_dataset | 0.944638 |
2501.00574 | Xinhao Li | Xinhao Li, Yi Wang, Jiashuo Yu, Xiangyu Zeng, Yuhan Zhu, Haian Huang,
Jianfei Gao, Kunchang Li, Yinan He, Chenting Wang, Yu Qiao, Yali Wang, Limin
Wang | VideoChat-Flash: Hierarchical Compression for Long-Context Video
Modeling | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Long-context video modeling is critical for multimodal large language models
(MLLMs), enabling them to process movies, online video streams, and so on.
Despite its advances, handling long videos remains challenging due to the
difficulty in efficiently understanding the extremely long video context. This
paper aims to address this issue from aspects of model architecture, training
data, training strategy and evaluation benchmark. First, we propose a novel
Hierarchical video token Compression (HiCo) method, which leverages visual
redundancy in long videos to compress long video context from Clip-level to
Video-level, reducing the computation significantly while preserving essential
details, achieving an extreme compression ratio of approximately 1/50 with
almost no performance loss. Second, we introduce a multi-stage short-to-long
learning scheme, a large-scale dataset of real-world long videos named LongVid,
and a challenging ``Multi-Hop Needle-In-A-Video-Haystack'' benchmark. Finally,
we build a powerful video MLLM named VideoChat-Flash, which shows a leading
performance on both mainstream long and short video benchmarks at the 2B and 7B
model scale. It is the first among open-source models to reach 99.1% accuracy
over 10,000 frames in NIAH.
| [
{
"version": "v1",
"created": "Tue, 31 Dec 2024 18:01:23 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jan 2025 12:00:51 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 07:32:35 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Xinhao",
""
],
[
"Wang",
"Yi",
""
],
[
"Yu",
"Jiashuo",
""
],
[
"Zeng",
"Xiangyu",
""
],
[
"Zhu",
"Yuhan",
""
],
[
"Huang",
"Haian",
""
],
[
"Gao",
"Jianfei",
""
],
[
"Li",
"Kunchang",
""
],
[
"He",
"Yinan",
""
],
[
"Wang",
"Chenting",
""
],
[
"Qiao",
"Yu",
""
],
[
"Wang",
"Yali",
""
],
[
"Wang",
"Limin",
""
]
]
| TITLE: VideoChat-Flash: Hierarchical Compression for Long-Context Video
Modeling
ABSTRACT: Long-context video modeling is critical for multimodal large language models
(MLLMs), enabling them to process movies, online video streams, and so on.
Despite its advances, handling long videos remains challenging due to the
difficulty in efficiently understanding the extremely long video context. This
paper aims to address this issue from aspects of model architecture, training
data, training strategy and evaluation benchmark. First, we propose a novel
Hierarchical video token Compression (HiCo) method, which leverages visual
redundancy in long videos to compress long video context from Clip-level to
Video-level, reducing the computation significantly while preserving essential
details, achieving an extreme compression ratio of approximately 1/50 with
almost no performance loss. Second, we introduce a multi-stage short-to-long
learning scheme, a large-scale dataset of real-world long videos named LongVid,
and a challenging ``Multi-Hop Needle-In-A-Video-Haystack'' benchmark. Finally,
we build a powerful video MLLM named VideoChat-Flash, which shows a leading
performance on both mainstream long and short video benchmarks at the 2B and 7B
model scale. It is the first among open-source models to reach 99.1% accuracy
over 10,000 frames in NIAH.
| new_dataset | 0.953319 |
2501.02229 | S M Mostaq Hossain | S M Mostaq Hossain, Amani Altarawneh and Jesse Roberts | Leveraging Large Language Models and Machine Learning for Smart Contract
Vulnerability Detection | 7 pages, 4 figures, 1 table. This paper has accepted in 2025 IEEE
15th Annual Computing and Communication Workshop and Conference (CCWC) | null | 10.1109/CCWC62904.2025.10903869 | null | cs.CR | http://creativecommons.org/licenses/by-sa/4.0/ | As blockchain technology and smart contracts become widely adopted, securing
them throughout every stage of the transaction process is essential. The
concern of improved security for smart contracts is to find and detect
vulnerabilities using classical Machine Learning (ML) models and fine-tuned
Large Language Models (LLM). The robustness of such work rests on a labeled
smart contract dataset that includes annotated vulnerabilities on which several
LLMs, alongside various traditional machine learning algorithms such as the
DistilBERT model, are trained and tested. We train and test machine learning
algorithms to classify smart contract code according to vulnerability types in
order to compare model performance. Fine-tuning the LLMs specifically for
smart contract code classification should help achieve better results when
detecting several types of well-known vulnerabilities, such as Reentrancy,
Integer Overflow, Timestamp Dependency and Dangerous Delegatecall. From our
initial experimental results, it can be seen that our fine-tuned LLM surpasses
the accuracy of any other model by achieving an accuracy of over 90%, and this
advances the existing vulnerability detection benchmarks. Such performance
provides a great deal of evidence for LLMs' ability to describe the subtle
patterns in the code that traditional ML models could miss. Thus, we compared
each of the ML and LLM models to give a good overview of each model's strengths,
from which we can choose the most effective one for real-world applications in
smart contract security. Our research combines machine learning and large
language models to provide a rich and interpretable framework for detecting
different smart contract vulnerabilities, which lays a foundation for a more
secure blockchain ecosystem.
| [
{
"version": "v1",
"created": "Sat, 4 Jan 2025 08:32:53 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hossain",
"S M Mostaq",
""
],
[
"Altarawneh",
"Amani",
""
],
[
"Roberts",
"Jesse",
""
]
]
| TITLE: Leveraging Large Language Models and Machine Learning for Smart Contract
Vulnerability Detection
ABSTRACT: As blockchain technology and smart contracts become widely adopted, securing
them throughout every stage of the transaction process is essential. The
concern of improved security for smart contracts is to find and detect
vulnerabilities using classical Machine Learning (ML) models and fine-tuned
Large Language Models (LLM). The robustness of such work rests on a labeled
smart contract dataset that includes annotated vulnerabilities on which several
LLMs, alongside various traditional machine learning algorithms such as the
DistilBERT model, are trained and tested. We train and test machine learning
algorithms to classify smart contract code according to vulnerability types in
order to compare model performance. Fine-tuning the LLMs specifically for
smart contract code classification should help achieve better results when
detecting several types of well-known vulnerabilities, such as Reentrancy,
Integer Overflow, Timestamp Dependency and Dangerous Delegatecall. From our
initial experimental results, it can be seen that our fine-tuned LLM surpasses
the accuracy of any other model by achieving an accuracy of over 90%, and this
advances the existing vulnerability detection benchmarks. Such performance
provides a great deal of evidence for LLMs' ability to describe the subtle
patterns in the code that traditional ML models could miss. Thus, we compared
each of the ML and LLM models to give a good overview of each model's strengths,
from which we can choose the most effective one for real-world applications in
smart contract security. Our research combines machine learning and large
language models to provide a rich and interpretable framework for detecting
different smart contract vulnerabilities, which lays a foundation for a more
secure blockchain ecosystem.
| new_dataset | 0.972831 |
2501.02766 | Fei Gao | Fei Gao and Ruyue Xin and Xiaocui Li and Yaqiang Zhang | Are GNNs Actually Effective for Multimodal Fault Diagnosis in
Microservice Systems? | 6 pages, 5 figures, submitted to conference | null | null | null | cs.SE cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) are widely adopted for fault diagnosis in
microservice systems, premised on their ability to model service dependencies.
However, the necessity of explicit graph structures remains underexamined, as
existing evaluations conflate preprocessing with architectural contributions.
To isolate the true value of GNNs, we propose DiagMLP, a deliberately minimal,
topology-agnostic baseline that retains multimodal fusion capabilities while
excluding graph modeling. Through ablation experiments across five datasets,
DiagMLP achieves performance parity with state-of-the-art GNN-based methods in
fault detection, localization, and classification. These findings challenge the
prevailing assumption that graph structures are indispensable, revealing that:
(i) preprocessing pipelines already encode critical dependency information, and
(ii) GNN modules contribute marginally beyond multimodality fusion. Our work
advocates for systematic re-evaluation of architectural complexity and
highlights the need for standardized baseline protocols to validate model
innovations.
| [
{
"version": "v1",
"created": "Mon, 6 Jan 2025 05:18:13 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 09:51:12 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Gao",
"Fei",
""
],
[
"Xin",
"Ruyue",
""
],
[
"Li",
"Xiaocui",
""
],
[
"Zhang",
"Yaqiang",
""
]
]
| TITLE: Are GNNs Actually Effective for Multimodal Fault Diagnosis in
Microservice Systems?
ABSTRACT: Graph Neural Networks (GNNs) are widely adopted for fault diagnosis in
microservice systems, premised on their ability to model service dependencies.
However, the necessity of explicit graph structures remains underexamined, as
existing evaluations conflate preprocessing with architectural contributions.
To isolate the true value of GNNs, we propose DiagMLP, a deliberately minimal,
topology-agnostic baseline that retains multimodal fusion capabilities while
excluding graph modeling. Through ablation experiments across five datasets,
DiagMLP achieves performance parity with state-of-the-art GNN-based methods in
fault detection, localization, and classification. These findings challenge the
prevailing assumption that graph structures are indispensable, revealing that:
(i) preprocessing pipelines already encode critical dependency information, and
(ii) GNN modules contribute marginally beyond multimodality fusion. Our work
advocates for systematic re-evaluation of architectural complexity and
highlights the need for standardized baseline protocols to validate model
innovations.
| no_new_dataset | 0.941277 |
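The baseline argued for in the abstract above is deliberately simple, which makes it easy to sketch: project each preprocessed modality, concatenate, and classify with a plain MLP, with no graph structure anywhere. The feature dimensions, layer sizes, and class count below are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

class TopologyAgnosticMLP(nn.Module):
    """Fuse preprocessed multimodal features with a plain MLP (no graph structure)."""
    def __init__(self, modality_dims, hidden=128, num_classes=5):
        super().__init__()
        self.projectors = nn.ModuleList([nn.Linear(d, hidden) for d in modality_dims])
        self.classifier = nn.Sequential(
            nn.Linear(hidden * len(modality_dims), hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, modality_feats):
        fused = torch.cat(
            [proj(feat) for proj, feat in zip(self.projectors, modality_feats)], dim=-1
        )
        return self.classifier(fused)

# Toy usage: metric (32-d), log (64-d) and trace (16-d) features for 8 service instances.
model = TopologyAgnosticMLP(modality_dims=[32, 64, 16])
feats = [torch.randn(8, d) for d in (32, 64, 16)]
print(model(feats).shape)  # torch.Size([8, 5])
```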
2501.04696 | Ulindu De Silva | Ulindu De Silva, Didula Samaraweera, Sasini Wanigathunga, Kavindu
Kariyawasam, Kanchana Ranasinghe, Muzammal Naseer, Ranga Rodrigo | Test-Time Optimization for Domain Adaptive Open Vocabulary Segmentation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present Seg-TTO, a novel framework for zero-shot, open-vocabulary semantic
segmentation (OVSS), designed to excel in specialized domain tasks. While
current open-vocabulary approaches show impressive performance on standard
segmentation benchmarks under zero-shot settings, they fall short of supervised
counterparts on highly domain-specific datasets. We focus on
segmentation-specific test-time optimization to address this gap. Segmentation
requires an understanding of multiple concepts within a single image while
retaining the locality and spatial structure of representations. We propose a
novel self-supervised objective adhering to these requirements and use it to
align the model parameters with input images at test time. In the textual
modality, we learn multiple embeddings for each category to capture diverse
concepts within an image, while in the visual modality, we calculate
pixel-level losses followed by embedding aggregation operations specific to
preserving spatial structure. Our resulting framework termed Seg-TTO is a
plug-and-play module. We integrate Seg-TTO with three state-of-the-art OVSS
approaches and evaluate across 22 challenging OVSS tasks covering a range of
specialized domains. Our Seg-TTO demonstrates clear performance improvements
(up to 27% mIoU increase on some datasets) establishing new state-of-the-art.
Our code and models will be released publicly.
| [
{
"version": "v1",
"created": "Wed, 8 Jan 2025 18:58:24 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 11:17:47 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"De Silva",
"Ulindu",
""
],
[
"Samaraweera",
"Didula",
""
],
[
"Wanigathunga",
"Sasini",
""
],
[
"Kariyawasam",
"Kavindu",
""
],
[
"Ranasinghe",
"Kanchana",
""
],
[
"Naseer",
"Muzammal",
""
],
[
"Rodrigo",
"Ranga",
""
]
]
| TITLE: Test-Time Optimization for Domain Adaptive Open Vocabulary Segmentation
ABSTRACT: We present Seg-TTO, a novel framework for zero-shot, open-vocabulary semantic
segmentation (OVSS), designed to excel in specialized domain tasks. While
current open-vocabulary approaches show impressive performance on standard
segmentation benchmarks under zero-shot settings, they fall short of supervised
counterparts on highly domain-specific datasets. We focus on
segmentation-specific test-time optimization to address this gap. Segmentation
requires an understanding of multiple concepts within a single image while
retaining the locality and spatial structure of representations. We propose a
novel self-supervised objective adhering to these requirements and use it to
align the model parameters with input images at test time. In the textual
modality, we learn multiple embeddings for each category to capture diverse
concepts within an image, while in the visual modality, we calculate
pixel-level losses followed by embedding aggregation operations specific to
preserving spatial structure. Our resulting framework termed Seg-TTO is a
plug-and-play module. We integrate Seg-TTO with three state-of-the-art OVSS
approaches and evaluate across 22 challenging OVSS tasks covering a range of
specialized domains. Our Seg-TTO demonstrates clear performance improvements
(up to 27% mIoU increase on some datasets) establishing new state-of-the-art.
Our code and models will be released publicly.
| no_new_dataset | 0.945601 |
2501.07643 | Andrew Larkoski | Andrew J. Larkoski | A Step Toward Interpretability: Smearing the Likelihood | 16+1 pages, 3 figures; v2: JHEP version, added more motivation and
context in introduction, added more future directions and follow-ups in
conclusion, fixed some typos | null | null | null | hep-ph cs.LG hep-ex stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of interpretability of machine learning architecture in particle
physics has no agreed-upon definition, much less any proposed solution. We
present a first modest step toward these goals by proposing a definition and
corresponding practical method for isolation and identification of relevant
physical energy scales exploited by the machine. This is accomplished by
smearing or averaging over all input events that lie within a prescribed metric
energy distance of one another and correspondingly renders any quantity
measured on a finite, discrete dataset continuous over the dataspace. Within
this approach, we are able to explicitly demonstrate that (approximate) scaling
laws are a consequence of extreme value theory applied to analysis of the
distribution of the irreducible minimal distance over which a machine must
extrapolate given a finite dataset. As an example, we study quark versus gluon
jet identification, construct the smeared likelihood, and show that
discrimination power steadily increases as resolution decreases, indicating
that the true likelihood for the problem is sensitive to emissions at all
scales.
| [
{
"version": "v1",
"created": "Mon, 13 Jan 2025 19:09:42 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 16:35:05 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Larkoski",
"Andrew J.",
""
]
]
| TITLE: A Step Toward Interpretability: Smearing the Likelihood
ABSTRACT: The problem of interpretability of machine learning architecture in particle
physics has no agreed-upon definition, much less any proposed solution. We
present a first modest step toward these goals by proposing a definition and
corresponding practical method for isolation and identification of relevant
physical energy scales exploited by the machine. This is accomplished by
smearing or averaging over all input events that lie within a prescribed metric
energy distance of one another and correspondingly renders any quantity
measured on a finite, discrete dataset continuous over the dataspace. Within
this approach, we are able to explicitly demonstrate that (approximate) scaling
laws are a consequence of extreme value theory applied to analysis of the
distribution of the irreducible minimal distance over which a machine must
extrapolate given a finite dataset. As an example, we study quark versus gluon
jet identification, construct the smeared likelihood, and show that
discrimination power steadily increases as resolution decreases, indicating
that the true likelihood for the problem is sensitive to emissions at all
scales.
| no_new_dataset | 0.946843 |
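The smearing construction in the abstract above is generic enough to sketch: replace a per-event quantity by its average over all events within a metric distance eps. The Euclidean metric and random observables below are stand-ins for the paper's pairwise event distance and likelihood, so the snippet only illustrates the averaging step and how a larger eps (coarser resolution) smooths the quantity.

```python
import numpy as np

def smear(values, events, eps):
    """Average `values` over every event within metric distance eps of each event."""
    d = np.linalg.norm(events[:, None, :] - events[None, :, :], axis=-1)  # pairwise distances
    mask = (d <= eps).astype(float)
    return (mask @ values) / mask.sum(axis=1)  # each neighborhood includes the event itself

rng = np.random.default_rng(0)
events = rng.normal(size=(500, 3))   # toy event representations
values = rng.normal(size=500)        # toy per-event quantity, e.g. a log-likelihood ratio
for eps in (0.1, 0.5, 2.0):
    print(f"eps={eps}: smeared std = {smear(values, events, eps).std():.3f}")
```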
2501.08521 | Huy Le Quang | Huy Q. Le, Ye Lin Tun, Yu Qiao, Minh N. H. Nguyen, Keon Oh Kim, Choong
Seon Hong | Mitigating Domain Shift in Federated Learning via Intra- and
Inter-Domain Prototypes | 13 pages, 11 figures, 7 tables | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning (FL) has emerged as a decentralized machine learning
technique, allowing clients to train a global model collaboratively without
sharing private data. However, most FL studies ignore the crucial challenge of
heterogeneous domains where each client has a distinct feature distribution,
which is popular in real-world scenarios. Prototype learning, which leverages
the mean feature vectors within the same classes, has become a prominent
solution for federated learning under domain shift. However, existing federated
prototype learning methods focus solely on inter-domain prototypes and neglect
intra-domain perspectives. In this work, we introduce a novel federated
prototype learning method, namely I$^2$PFL, which incorporates
$\textbf{I}$ntra-domain and $\textbf{I}$nter-domain $\textbf{P}$rototypes, to
mitigate domain shift from both perspectives and learn a generalized global
model across multiple domains in federated learning. To construct intra-domain
prototypes, we propose feature alignment with MixUp-based augmented prototypes
to capture the diversity within local domains and enhance the generalization of
local features. Additionally, we introduce a reweighting mechanism for
inter-domain prototypes to generate generalized prototypes that reduce domain
shift while providing inter-domain knowledge across multiple clients. Extensive
experiments on the Digits, Office-10, and PACS datasets illustrate the superior
performance of our method compared to other baselines.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2025 02:17:38 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 02:01:38 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Le",
"Huy Q.",
""
],
[
"Tun",
"Ye Lin",
""
],
[
"Qiao",
"Yu",
""
],
[
"Nguyen",
"Minh N. H.",
""
],
[
"Kim",
"Keon Oh",
""
],
[
"Hong",
"Choong Seon",
""
]
]
| TITLE: Mitigating Domain Shift in Federated Learning via Intra- and
Inter-Domain Prototypes
ABSTRACT: Federated Learning (FL) has emerged as a decentralized machine learning
technique, allowing clients to train a global model collaboratively without
sharing private data. However, most FL studies ignore the crucial challenge of
heterogeneous domains where each client has a distinct feature distribution,
which is common in real-world scenarios. Prototype learning, which leverages
the mean feature vectors within the same classes, has become a prominent
solution for federated learning under domain shift. However, existing federated
prototype learning methods focus solely on inter-domain prototypes and neglect
intra-domain perspectives. In this work, we introduce a novel federated
prototype learning method, namely I$^2$PFL, which incorporates
$\textbf{I}$ntra-domain and $\textbf{I}$nter-domain $\textbf{P}$rototypes, to
mitigate domain shift from both perspectives and learn a generalized global
model across multiple domains in federated learning. To construct intra-domain
prototypes, we propose feature alignment with MixUp-based augmented prototypes
to capture the diversity within local domains and enhance the generalization of
local features. Additionally, we introduce a reweighting mechanism for
inter-domain prototypes to generate generalized prototypes that reduce domain
shift while providing inter-domain knowledge across multiple clients. Extensive
experiments on the Digits, Office-10, and PACS datasets illustrate the superior
performance of our method compared to other baselines.
| no_new_dataset | 0.944587 |
2501.08654 | Xianqi Wang | Xianqi Wang, Hao Yang, Gangwei Xu, Junda Cheng, Min Lin, Yong Deng,
Jinliang Zang, Yurui Chen, Xin Yang | ZeroStereo: Zero-shot Stereo Matching from Single Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-of-the-art supervised stereo matching methods have achieved remarkable
performance on various benchmarks. However, their generalization to real-world
scenarios remains challenging due to the scarcity of annotated real-world
stereo data. In this paper, we propose ZeroStereo, a novel stereo image
generation pipeline for zero-shot stereo matching. Our approach synthesizes
high-quality right images from arbitrary single images by leveraging pseudo
disparities generated by a monocular depth estimation model. Unlike previous
methods that address occluded regions by filling missing areas with neighboring
pixels or random backgrounds, we fine-tune a diffusion inpainting model to
recover missing details while preserving semantic structure. Additionally, we
propose Training-Free Confidence Generation, which mitigates the impact of
unreliable pseudo labels without additional training, and Adaptive Disparity
Selection, which ensures a diverse and realistic disparity distribution while
preventing excessive occlusion and foreground distortion. Experiments
demonstrate that models trained with our pipeline achieve state-of-the-art
zero-shot generalization across multiple datasets with only a dataset volume
comparable to Scene Flow. Code: https://github.com/Windsrain/ZeroStereo.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2025 08:43:48 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 09:29:56 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Xianqi",
""
],
[
"Yang",
"Hao",
""
],
[
"Xu",
"Gangwei",
""
],
[
"Cheng",
"Junda",
""
],
[
"Lin",
"Min",
""
],
[
"Deng",
"Yong",
""
],
[
"Zang",
"Jinliang",
""
],
[
"Chen",
"Yurui",
""
],
[
"Yang",
"Xin",
""
]
]
| TITLE: ZeroStereo: Zero-shot Stereo Matching from Single Images
ABSTRACT: State-of-the-art supervised stereo matching methods have achieved remarkable
performance on various benchmarks. However, their generalization to real-world
scenarios remains challenging due to the scarcity of annotated real-world
stereo data. In this paper, we propose ZeroStereo, a novel stereo image
generation pipeline for zero-shot stereo matching. Our approach synthesizes
high-quality right images from arbitrary single images by leveraging pseudo
disparities generated by a monocular depth estimation model. Unlike previous
methods that address occluded regions by filling missing areas with neighboring
pixels or random backgrounds, we fine-tune a diffusion inpainting model to
recover missing details while preserving semantic structure. Additionally, we
propose Training-Free Confidence Generation, which mitigates the impact of
unreliable pseudo labels without additional training, and Adaptive Disparity
Selection, which ensures a diverse and realistic disparity distribution while
preventing excessive occlusion and foreground distortion. Experiments
demonstrate that models trained with our pipeline achieve state-of-the-art
zero-shot generalization across multiple datasets with only a dataset volume
comparable to Scene Flow. Code: https://github.com/Windsrain/ZeroStereo.
| no_new_dataset | 0.952794 |
2501.09347 | Shiu-Hong Kao | Shiu-hong Kao, Xiao Li, Jinglu Wang, Yang Li, Chi-Keung Tang, Yu-Wing
Tai, Yan Lu | UVRM: A Scalable 3D Reconstruction Model from Unposed Videos | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Large Reconstruction Models (LRMs) have recently become a popular method for
creating 3D foundational models. Training 3D reconstruction models with 2D
visual data traditionally requires prior knowledge of camera poses for the
training samples, a process that is both time-consuming and prone to errors.
Consequently, 3D reconstruction training has been confined to either synthetic
3D datasets or small-scale datasets with annotated poses. In this study, we
investigate the feasibility of 3D reconstruction using unposed video data of
various objects. We introduce UVRM, a novel 3D reconstruction model capable of
being trained and evaluated on monocular videos without requiring any
information about the pose. UVRM uses a transformer network to implicitly
aggregate video frames into a pose-invariant latent feature space, which is
then decoded into a tri-plane 3D representation. To obviate the need for
ground-truth pose annotations during training, UVRM employs a combination of
the score distillation sampling (SDS) method and an analysis-by-synthesis
approach, progressively synthesizing pseudo novel-views using a pre-trained
diffusion model. We qualitatively and quantitatively evaluate UVRM's
performance on the G-Objaverse and CO3D datasets without relying on pose
information. Extensive experiments show that UVRM is capable of effectively and
efficiently reconstructing a wide range of 3D objects from unposed videos.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2025 08:00:17 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 14:55:33 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kao",
"Shiu-hong",
""
],
[
"Li",
"Xiao",
""
],
[
"Wang",
"Jinglu",
""
],
[
"Li",
"Yang",
""
],
[
"Tang",
"Chi-Keung",
""
],
[
"Tai",
"Yu-Wing",
""
],
[
"Lu",
"Yan",
""
]
]
| TITLE: UVRM: A Scalable 3D Reconstruction Model from Unposed Videos
ABSTRACT: Large Reconstruction Models (LRMs) have recently become a popular method for
creating 3D foundational models. Training 3D reconstruction models with 2D
visual data traditionally requires prior knowledge of camera poses for the
training samples, a process that is both time-consuming and prone to errors.
Consequently, 3D reconstruction training has been confined to either synthetic
3D datasets or small-scale datasets with annotated poses. In this study, we
investigate the feasibility of 3D reconstruction using unposed video data of
various objects. We introduce UVRM, a novel 3D reconstruction model capable of
being trained and evaluated on monocular videos without requiring any
information about the pose. UVRM uses a transformer network to implicitly
aggregate video frames into a pose-invariant latent feature space, which is
then decoded into a tri-plane 3D representation. To obviate the need for
ground-truth pose annotations during training, UVRM employs a combination of
the score distillation sampling (SDS) method and an analysis-by-synthesis
approach, progressively synthesizing pseudo novel-views using a pre-trained
diffusion model. We qualitatively and quantitatively evaluate UVRM's
performance on the G-Objaverse and CO3D datasets without relying on pose
information. Extensive experiments show that UVRM is capable of effectively and
efficiently reconstructing a wide range of 3D objects from unposed videos.
| no_new_dataset | 0.947137 |
2501.09481 | Jan Skvrna | Jan Skvrna, Lukas Neumann | MonoSOWA: Scalable monocular 3D Object detector Without human
Annotations | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Inferring object 3D position and orientation from a single RGB camera is a
foundational task in computer vision with many important applications.
Traditionally, 3D object detection methods are trained in a fully-supervised
setup, requiring LiDAR and vast amounts of human annotations, which are
laborious, costly, and do not scale well with the ever-increasing amounts of
data being captured.
We present a novel method to train a 3D object detector from a single RGB
camera without domain-specific human annotations, making orders of magnitude
more data available for training. The method uses a newly proposed Local Object
Motion Model to disentangle the source of object movement between subsequent
frames, is approximately 700 times faster than previous work, and compensates
for camera focal length differences to aggregate multiple datasets.
The method is evaluated on three public datasets, where despite using no
human labels, it outperforms prior work by a significant margin. It also shows
its versatility as a pre-training tool for fully-supervised training and shows
that combining pseudo-labels from multiple datasets can achieve comparable
accuracy to using human labels from a single dataset. The source code and model
will be published soon.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2025 11:35:22 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 12:27:10 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Skvrna",
"Jan",
""
],
[
"Neumann",
"Lukas",
""
]
]
| TITLE: MonoSOWA: Scalable monocular 3D Object detector Without human
Annotations
ABSTRACT: Inferring object 3D position and orientation from a single RGB camera is a
foundational task in computer vision with many important applications.
Traditionally, 3D object detection methods are trained in a fully-supervised
setup, requiring LiDAR and vast amounts of human annotations, which are
laborious, costly, and do not scale well with the ever-increasing amounts of
data being captured.
We present a novel method to train a 3D object detector from a single RGB
camera without domain-specific human annotations, making orders of magnitude
more data available for training. The method uses a newly proposed Local Object
Motion Model to disentangle the source of object movement between subsequent
frames, is approximately 700 times faster than previous work, and compensates
for camera focal length differences to aggregate multiple datasets.
The method is evaluated on three public datasets, where despite using no
human labels, it outperforms prior work by a significant margin. It also shows
its versatility as a pre-training tool for fully-supervised training and shows
that combining pseudo-labels from multiple datasets can achieve comparable
accuracy to using human labels from a single dataset. The source code and model
will be published soon.
| no_new_dataset | 0.947721 |
2501.10105 | Jianxiong Li | Jinliang Zheng, Jianxiong Li, Dongxiu Liu, Yinan Zheng, Zhihao Wang,
Zhonghong Ou, Yu Liu, Jingjing Liu, Ya-Qin Zhang, Xianyuan Zhan | Universal Actions for Enhanced Embodied Foundation Models | CVPR 2025 | null | null | null | cs.RO cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Training on diverse, internet-scale data is a key factor in the success of
recent large foundation models. Yet, using the same recipe for building
embodied agents has faced noticeable difficulties. Despite the availability of
many crowd-sourced embodied datasets, their action spaces often exhibit
significant heterogeneity due to distinct physical embodiment and control
interfaces for different robots, causing substantial challenges in developing
embodied foundation models using cross-domain data. In this paper, we introduce
UniAct, a new embodied foundation modeling framework operating in a Universal
Action Space. Our learned universal actions capture the generic atomic
behaviors across diverse robots by exploiting their shared structural features,
and enable enhanced cross-domain data utilization and cross-embodiment
generalizations by eliminating the notorious heterogeneity. The universal
actions can be efficiently translated back to heterogeneous actionable commands
by simply adding embodiment-specific details, from which fast adaptation to new
robots becomes simple and straightforward. Our 0.5B instantiation of UniAct
outperforms 14X larger SOTA embodied foundation models in extensive evaluations
on various real-world and simulation robots, showcasing exceptional
cross-embodiment control and adaptation capability, highlighting the crucial
benefit of adopting universal actions. Project page:
https://github.com/2toinf/UniAct
| [
{
"version": "v1",
"created": "Fri, 17 Jan 2025 10:45:22 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 13:55:48 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zheng",
"Jinliang",
""
],
[
"Li",
"Jianxiong",
""
],
[
"Liu",
"Dongxiu",
""
],
[
"Zheng",
"Yinan",
""
],
[
"Wang",
"Zhihao",
""
],
[
"Ou",
"Zhonghong",
""
],
[
"Liu",
"Yu",
""
],
[
"Liu",
"Jingjing",
""
],
[
"Zhang",
"Ya-Qin",
""
],
[
"Zhan",
"Xianyuan",
""
]
]
| TITLE: Universal Actions for Enhanced Embodied Foundation Models
ABSTRACT: Training on diverse, internet-scale data is a key factor in the success of
recent large foundation models. Yet, using the same recipe for building
embodied agents has faced noticeable difficulties. Despite the availability of
many crowd-sourced embodied datasets, their action spaces often exhibit
significant heterogeneity due to distinct physical embodiment and control
interfaces for different robots, causing substantial challenges in developing
embodied foundation models using cross-domain data. In this paper, we introduce
UniAct, a new embodied foundation modeling framework operating in a Universal
Action Space. Our learned universal actions capture the generic atomic
behaviors across diverse robots by exploiting their shared structural features,
and enable enhanced cross-domain data utilization and cross-embodiment
generalizations by eliminating the notorious heterogeneity. The universal
actions can be efficiently translated back to heterogeneous actionable commands
by simply adding embodiment-specific details, from which fast adaptation to new
robots becomes simple and straightforward. Our 0.5B instantiation of UniAct
outperforms 14X larger SOTA embodied foundation models in extensive evaluations
on various real-world and simulation robots, showcasing exceptional
cross-embodiment control and adaptation capability, highlighting the crucial
benefit of adopting universal actions. Project page:
https://github.com/2toinf/UniAct
| no_new_dataset | 0.942454 |
2501.12295 | Wenxin Ma | Wenxin Ma, Qingsong Yao, Xiang Zhang, Zhelong Huang, Zihang Jiang, S.
Kevin Zhou | Towards Accurate Unified Anomaly Segmentation | 8 pages, 5 figures | WACV 2025 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised anomaly detection (UAD) from images strives to model normal data
distributions, creating discriminative representations to distinguish and
precisely localize anomalies. Despite recent advancements in the efficient and
unified one-for-all scheme, challenges persist in accurately segmenting
anomalies for further monitoring. Moreover, this problem is obscured by the
widely-used AUROC metric under imbalanced UAD settings. This motivates us to
emphasize the significance of precise segmentation of anomaly pixels using pAP
and DSC as metrics. To address the unsolved segmentation task, we introduce the
Unified Anomaly Segmentation (UniAS). UniAS presents a multi-level hybrid
pipeline that progressively enhances normal information from coarse to fine,
incorporating a novel multi-granularity gated CNN (MGG-CNN) into Transformer
layers to explicitly aggregate local details from different granularities.
UniAS achieves state-of-the-art anomaly segmentation performance, attaining
65.12/59.33 and 40.06/32.50 in pAP/DSC on the MVTec-AD and VisA datasets,
respectively, surpassing previous methods significantly. The codes are shared
at https://github.com/Mwxinnn/UniAS.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 17:02:51 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ma",
"Wenxin",
""
],
[
"Yao",
"Qingsong",
""
],
[
"Zhang",
"Xiang",
""
],
[
"Huang",
"Zhelong",
""
],
[
"Jiang",
"Zihang",
""
],
[
"Zhou",
"S. Kevin",
""
]
]
| TITLE: Towards Accurate Unified Anomaly Segmentation
ABSTRACT: Unsupervised anomaly detection (UAD) from images strives to model normal data
distributions, creating discriminative representations to distinguish and
precisely localize anomalies. Despite recent advancements in the efficient and
unified one-for-all scheme, challenges persist in accurately segmenting
anomalies for further monitoring. Moreover, this problem is obscured by the
widely-used AUROC metric under imbalanced UAD settings. This motivates us to
emphasize the significance of precise segmentation of anomaly pixels using pAP
and DSC as metrics. To address the unsolved segmentation task, we introduce the
Unified Anomaly Segmentation (UniAS). UniAS presents a multi-level hybrid
pipeline that progressively enhances normal information from coarse to fine,
incorporating a novel multi-granularity gated CNN (MGG-CNN) into Transformer
layers to explicitly aggregate local details from different granularities.
UniAS achieves state-of-the-art anomaly segmentation performance, attaining
65.12/59.33 and 40.06/32.50 in pAP/DSC on the MVTec-AD and VisA datasets,
respectively, surpassing previous methods significantly. The codes are shared
at https://github.com/Mwxinnn/UniAS.
| no_new_dataset | 0.945399 |
2501.13340 | Hao Fang | Hao Fang, Xiaohang Sui, Hongyao Yu, Kuofeng Gao, Jiawei Kong, Sijin
Yu, Bin Chen, Hao Wu, Shu-Tao Xia | Retrievals Can Be Detrimental: A Contrastive Backdoor Attack Paradigm on
Retrieval-Augmented Diffusion Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Diffusion models (DMs) have recently demonstrated remarkable generation
capability. However, their training generally requires huge computational
resources and large-scale datasets. To solve these, recent studies empower DMs
with the advanced Retrieval-Augmented Generation (RAG) technique and propose
retrieval-augmented diffusion models (RDMs). By incorporating rich knowledge
from an auxiliary database, RAG enhances diffusion models' generation and
generalization ability while significantly reducing model parameters. Despite
the great success, RAG may introduce novel security issues that warrant further
investigation. In this paper, we reveal that the RDM is susceptible to backdoor
attacks by proposing a multimodal contrastive attack approach named BadRDM. Our
framework fully considers RAG's characteristics and is devised to manipulate
the retrieved items for given text triggers, thereby further controlling the
generated contents. Specifically, we first insert a tiny portion of images into
the retrieval database as target toxicity surrogates. Subsequently, a malicious
variant of contrastive learning is adopted to inject backdoors into the
retriever, which builds shortcuts from triggers to the toxicity surrogates.
Furthermore, we enhance the attacks through novel entropy-based selection and
generative augmentation strategies that can derive better toxicity surrogates.
Extensive experiments on two mainstream tasks demonstrate the proposed BadRDM
achieves outstanding attack effects while preserving the model's benign
utility.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2025 02:42:28 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 06:55:26 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Fang",
"Hao",
""
],
[
"Sui",
"Xiaohang",
""
],
[
"Yu",
"Hongyao",
""
],
[
"Gao",
"Kuofeng",
""
],
[
"Kong",
"Jiawei",
""
],
[
"Yu",
"Sijin",
""
],
[
"Chen",
"Bin",
""
],
[
"Wu",
"Hao",
""
],
[
"Xia",
"Shu-Tao",
""
]
]
| TITLE: Retrievals Can Be Detrimental: A Contrastive Backdoor Attack Paradigm on
Retrieval-Augmented Diffusion Models
ABSTRACT: Diffusion models (DMs) have recently demonstrated remarkable generation
capability. However, their training generally requires huge computational
resources and large-scale datasets. To solve these, recent studies empower DMs
with the advanced Retrieval-Augmented Generation (RAG) technique and propose
retrieval-augmented diffusion models (RDMs). By incorporating rich knowledge
from an auxiliary database, RAG enhances diffusion models' generation and
generalization ability while significantly reducing model parameters. Despite
the great success, RAG may introduce novel security issues that warrant further
investigation. In this paper, we reveal that the RDM is susceptible to backdoor
attacks by proposing a multimodal contrastive attack approach named BadRDM. Our
framework fully considers RAG's characteristics and is devised to manipulate
the retrieved items for given text triggers, thereby further controlling the
generated contents. Specifically, we first insert a tiny portion of images into
the retrieval database as target toxicity surrogates. Subsequently, a malicious
variant of contrastive learning is adopted to inject backdoors into the
retriever, which builds shortcuts from triggers to the toxicity surrogates.
Furthermore, we enhance the attacks through novel entropy-based selection and
generative augmentation strategies that can derive better toxicity surrogates.
Extensive experiments on two mainstream tasks demonstrate the proposed BadRDM
achieves outstanding attack effects while preserving the model's benign
utility.
| no_new_dataset | 0.945701 |
2501.13802 | Mowafak Allaham | Mowafak Allaham, Ayse D. Lokmanoglu, P. Sol Hart, Erik C. Nisbet | Enhancing LLMs for Governance with Human Oversight: Evaluating and
Aligning LLMs on Expert Classification of Climate Misinformation for
Detecting False or Misleading Claims about Climate Change | International Workshop on AI Governance: Alignment, Morality and Law
(AIGOV) 2025. AAAI Conference on Artificial Intelligence | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Climate misinformation is a problem that has the potential to be
substantially aggravated by the development of Large Language Models (LLMs). In
this study we evaluate the potential for LLMs to be part of the solution for
mitigating online dis/misinformation rather than the problem. Employing a
public expert-annotated dataset and a curated sample of social media content, we
evaluate the performance of proprietary vs. open-source LLMs on the climate
misinformation classification task, comparing them to existing climate-focused
computer-assisted tools and expert assessments. Results show (1) open-source
models substantially under-perform in classifying climate misinformation
compared to proprietary models, (2) existing climate-focused computer-assisted
tools leveraging expert-annotated datasets continue to outperform many of
proprietary models, including GPT-4o, and (3) demonstrate the efficacy and
generalizability of fine-tuning GPT-3.5-turbo on an expert-annotated dataset in
classifying claims about climate change at the equivalency of climate change
experts with over 20 years of experience in climate communication. These
findings highlight 1) the importance of incorporating human-oversight, such as
incorporating expert-annotated datasets in training LLMs, for governance tasks
that require subject-matter expertise like classifying climate misinformation,
and 2) the potential for LLMs in facilitating civil society organizations to
engage in various governance tasks such as classifying false or misleading
claims in domains beyond climate change such as politics and health science.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2025 16:21:15 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 16:39:06 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Allaham",
"Mowafak",
""
],
[
"Lokmanoglu",
"Ayse D.",
""
],
[
"Hart",
"P. Sol",
""
],
[
"Nisbet",
"Erik C.",
""
]
]
| TITLE: Enhancing LLMs for Governance with Human Oversight: Evaluating and
Aligning LLMs on Expert Classification of Climate Misinformation for
Detecting False or Misleading Claims about Climate Change
ABSTRACT: Climate misinformation is a problem that has the potential to be
substantially aggravated by the development of Large Language Models (LLMs). In
this study we evaluate the potential for LLMs to be part of the solution for
mitigating online dis/misinformation rather than the problem. Employing a
public expert-annotated dataset and a curated sample of social media content, we
evaluate the performance of proprietary vs. open-source LLMs on the climate
misinformation classification task, comparing them to existing climate-focused
computer-assisted tools and expert assessments. Results show (1) open-source
models substantially under-perform in classifying climate misinformation
compared to proprietary models, (2) existing climate-focused computer-assisted
tools leveraging expert-annotated datasets continue to outperform many of
proprietary models, including GPT-4o, and (3) demonstrate the efficacy and
generalizability of fine-tuning GPT-3.5-turbo on an expert-annotated dataset in
classifying claims about climate change at the equivalency of climate change
experts with over 20 years of experience in climate communication. These
findings highlight 1) the importance of incorporating human-oversight, such as
incorporating expert-annotated datasets in training LLMs, for governance tasks
that require subject-matter expertise like classifying climate misinformation,
and 2) the potential for LLMs in facilitating civil society organizations to
engage in various governance tasks such as classifying false or misleading
claims in domains beyond climate change such as politics and health science.
| no_new_dataset | 0.953794 |
2501.14951 | Hongbo Zheng | Hongbo Zheng, Suyuan Wang, Neeraj Gangwar, Nickvash Kani | E-Gen: Leveraging E-Graphs to Improve Continuous Representations of
Symbolic Expressions | null | null | null | null | cs.LG cs.CL cs.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vector representations have been pivotal in advancing natural language
processing (NLP), with prior research focusing on embedding techniques for
mathematical expressions using mathematically equivalent formulations. While
effective, these approaches are constrained by the size and diversity of
training data. In this work, we address these limitations by introducing E-Gen,
a novel e-graph-based dataset generation scheme that synthesizes large and
diverse mathematical expression datasets, surpassing prior methods in size and
operator variety. Leveraging this dataset, we train embedding models using two
strategies: (1) generating mathematically equivalent expressions, and (2)
contrastive learning to explicitly group equivalent expressions. We evaluate
these embeddings on both in-distribution and out-of-distribution mathematical
language processing tasks, comparing them against prior methods. Finally, we
demonstrate that our embedding-based approach outperforms state-of-the-art
large language models (LLMs) on several tasks, underscoring the necessity of
optimizing embedding methods for the mathematical data modality. The source
code and datasets are available at https://github.com/MLPgroup/E-Gen.
| [
{
"version": "v1",
"created": "Fri, 24 Jan 2025 22:39:08 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 20:31:19 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zheng",
"Hongbo",
""
],
[
"Wang",
"Suyuan",
""
],
[
"Gangwar",
"Neeraj",
""
],
[
"Kani",
"Nickvash",
""
]
]
| TITLE: E-Gen: Leveraging E-Graphs to Improve Continuous Representations of
Symbolic Expressions
ABSTRACT: Vector representations have been pivotal in advancing natural language
processing (NLP), with prior research focusing on embedding techniques for
mathematical expressions using mathematically equivalent formulations. While
effective, these approaches are constrained by the size and diversity of
training data. In this work, we address these limitations by introducing E-Gen,
a novel e-graph-based dataset generation scheme that synthesizes large and
diverse mathematical expression datasets, surpassing prior methods in size and
operator variety. Leveraging this dataset, we train embedding models using two
strategies: (1) generating mathematically equivalent expressions, and (2)
contrastive learning to explicitly group equivalent expressions. We evaluate
these embeddings on both in-distribution and out-of-distribution mathematical
language processing tasks, comparing them against prior methods. Finally, we
demonstrate that our embedding-based approach outperforms state-of-the-art
large language models (LLMs) on several tasks, underscoring the necessity of
optimizing embedding methods for the mathematical data modality. The source
code and datasets are available at https://github.com/MLPgroup/E-Gen.
| no_new_dataset | 0.939359 |
2501.15211 | Yuanze Hu | Siqi Wang, Yuanze Hu, Xinwang Liu, Siwei Wang, Guangpu Wang, Chuanfu
Xu, Jie Liu, Ping Chen | "Stones from Other Hills can Polish Jade": Zero-shot Anomaly Image
Synthesis via Cross-domain Anomaly Injection | 10 pages, 7 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Industrial image anomaly detection (IAD) is a pivotal topic with huge value.
Due to anomaly's nature, real anomalies in a specific modern industrial domain
(i.e. domain-specific anomalies) are usually too rare to collect, which
severely hinders IAD. Thus, zero-shot anomaly synthesis (ZSAS), which
synthesizes pseudo anomaly images without any domain-specific anomaly, emerges
as a vital technique for IAD. However, existing solutions are either unable to
synthesize authentic pseudo anomalies, or require cumbersome training. Thus, we
focus on ZSAS and propose a brand-new paradigm that can realize both authentic
and training-free ZSAS. It is based on a chronically-ignored fact: Although
domain-specific anomalies are rare, real anomalies from other domains (i.e.
cross-domain anomalies) are actually abundant and directly applicable to ZSAS.
Specifically, our new ZSAS paradigm makes three-fold contributions: First, we
propose a novel method named Cross-domain Anomaly Injection (CAI), which
directly exploits cross-domain anomalies to enable highly authentic ZSAS in a
training-free manner. Second, to supply CAI with sufficient cross-domain
anomalies, we build, to the best of our knowledge, the first Domain-agnostic
Anomaly Dataset, which provides ZSAS with abundant real anomaly patterns. Third, we
propose a CAI-guided Diffusion Mechanism, which further breaks the quantity
limit of real anomalies and enables unlimited anomaly synthesis. Our
head-to-head comparison with existing ZSAS solutions justifies our paradigm's
superior performance for IAD and demonstrates it as an effective and pragmatic
ZSAS solution.
| [
{
"version": "v1",
"created": "Sat, 25 Jan 2025 13:30:03 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 12:58:44 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Siqi",
""
],
[
"Hu",
"Yuanze",
""
],
[
"Liu",
"Xinwang",
""
],
[
"Wang",
"Siwei",
""
],
[
"Wang",
"Guangpu",
""
],
[
"Xu",
"Chuanfu",
""
],
[
"Liu",
"Jie",
""
],
[
"Chen",
"Ping",
""
]
]
| TITLE: "Stones from Other Hills can Polish Jade": Zero-shot Anomaly Image
Synthesis via Cross-domain Anomaly Injection
ABSTRACT: Industrial image anomaly detection (IAD) is a pivotal topic with huge value.
Due to anomaly's nature, real anomalies in a specific modern industrial domain
(i.e. domain-specific anomalies) are usually too rare to collect, which
severely hinders IAD. Thus, zero-shot anomaly synthesis (ZSAS), which
synthesizes pseudo anomaly images without any domain-specific anomaly, emerges
as a vital technique for IAD. However, existing solutions are either unable to
synthesize authentic pseudo anomalies, or require cumbersome training. Thus, we
focus on ZSAS and propose a brand-new paradigm that can realize both authentic
and training-free ZSAS. It is based on a chronically-ignored fact: Although
domain-specific anomalies are rare, real anomalies from other domains (i.e.
cross-domain anomalies) are actually abundant and directly applicable to ZSAS.
Specifically, our new ZSAS paradigm makes three-fold contributions: First, we
propose a novel method named Cross-domain Anomaly Injection (CAI), which
directly exploits cross-domain anomalies to enable highly authentic ZSAS in a
training-free manner. Second, to supply CAI with sufficient cross-domain
anomalies, we build, to the best of our knowledge, the first Domain-agnostic
Anomaly Dataset, which provides ZSAS with abundant real anomaly patterns. Third, we
propose a CAI-guided Diffusion Mechanism, which further breaks the quantity
limit of real anomalies and enables unlimited anomaly synthesis. Our
head-to-head comparison with existing ZSAS solutions justifies our paradigm's
superior performance for IAD and demonstrates it as an effective and pragmatic
ZSAS solution.
| no_new_dataset | 0.948202 |
2501.15572 | Mahshid Shiri | Mahshid Shiri, Chandra Bortolotto, Alessandro Bruno, Alessio Consonni,
Daniela Maria Grasso, Leonardo Brizzi, Daniele Loiacono, Lorenzo Preda | Comparative clinical evaluation of "memory-efficient" synthetic 3d
generative adversarial networks (gan) head-to-head to state of art: results
on computed tomography of the chest | null | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Introduction: Generative Adversarial Networks (GANs) are increasingly used to
generate synthetic medical images, addressing the critical shortage of
annotated data for training Artificial Intelligence (AI) systems. This study
introduces a novel memory-efficient GAN architecture, incorporating Conditional
Random Fields (CRFs) to generate high-resolution 3D medical images and
evaluates its performance against the state-of-the-art hierarchical (HA)-GAN
model.
Materials and Methods: The CRF-GAN was trained using the open-source lung CT
LUNA16 dataset. The architecture was compared to HA-GAN through a quantitative
evaluation, using Frechet Inception Distance (FID) and Maximum Mean Discrepancy
(MMD) metrics, and a qualitative evaluation, through a two-alternative forced
choice (2AFC) test completed by a pool of 12 resident radiologists, in order to
assess the realism of the generated images.
Results: CRF-GAN outperformed HA-GAN with lower FID (0.047 vs. 0.061) and MMD
(0.084 vs. 0.086) scores, indicating better image fidelity. The 2AFC test
showed a significant preference for images generated by CRF-GAN over those
generated by HA-GAN with a p-value of 1.93e-05. Additionally, CRF-GAN
demonstrated 9.34% lower memory usage at 256 resolution and achieved up to
14.6% faster training speeds, offering substantial computational savings.
Discussion: The CRF-GAN model successfully generates high-resolution 3D medical
images with non-inferior quality to conventional models, while being more
memory-efficient and faster. Computational power and time saved can be used to
improve the spatial resolution and anatomical accuracy of generated images,
which is still a critical factor limiting their direct clinical applicability.
| [
{
"version": "v1",
"created": "Sun, 26 Jan 2025 15:57:44 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 09:46:24 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Shiri",
"Mahshid",
""
],
[
"Bortolotto",
"Chandra",
""
],
[
"Bruno",
"Alessandro",
""
],
[
"Consonni",
"Alessio",
""
],
[
"Grasso",
"Daniela Maria",
""
],
[
"Brizzi",
"Leonardo",
""
],
[
"Loiacono",
"Daniele",
""
],
[
"Preda",
"Lorenzo",
""
]
]
| TITLE: Comparative clinical evaluation of "memory-efficient" synthetic 3d
generative adversarial networks (gan) head-to-head to state of art: results
on computed tomography of the chest
ABSTRACT: Introduction: Generative Adversarial Networks (GANs) are increasingly used to
generate synthetic medical images, addressing the critical shortage of
annotated data for training Artificial Intelligence (AI) systems. This study
introduces a novel memory-efficient GAN architecture, incorporating Conditional
Random Fields (CRFs) to generate high-resolution 3D medical images and
evaluates its performance against the state-of-the-art hierarchical (HA)-GAN
model.
Materials and Methods: The CRF-GAN was trained using the open-source lung CT
LUNA16 dataset. The architecture was compared to HA-GAN through a quantitative
evaluation, using Frechet Inception Distance (FID) and Maximum Mean Discrepancy
(MMD) metrics, and a qualitative evaluation, through a two-alternative forced
choice (2AFC) test completed by a pool of 12 resident radiologists, in order to
assess the realism of the generated images.
Results: CRF-GAN outperformed HA-GAN with lower FID (0.047 vs. 0.061) and MMD
(0.084 vs. 0.086) scores, indicating better image fidelity. The 2AFC test
showed a significant preference for images generated by CRF-GAN over those
generated by HA-GAN with a p-value of 1.93e-05. Additionally, CRF-GAN
demonstrated 9.34% lower memory usage at 256 resolution and achieved up to
14.6% faster training speeds, offering substantial computational savings.
Discussion: The CRF-GAN model successfully generates high-resolution 3D medical
images with non-inferior quality to conventional models, while being more
memory-efficient and faster. Computational power and time saved can be used to
improve the spatial resolution and anatomical accuracy of generated images,
which is still a critical factor limiting their direct clinical applicability.
| no_new_dataset | 0.955899 |
2501.17304 | Shalev Shaer | Igor Abramovski, Alon Vinnikov, Shalev Shaer, Naoyuki Kanda, Xiaofei
Wang, Amir Ivry, Eyal Krupka | Summary of the NOTSOFAR-1 Challenge: Highlights and Learnings | null | null | null | null | cs.SD cs.LG eess.AS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The first Natural Office Talkers in Settings of Far-field Audio Recordings
(NOTSOFAR-1) Challenge is a pivotal initiative that sets new benchmarks by
offering datasets more representative of the needs of real-world business
applications than those previously available. The challenge provides a unique
combination of 280 recorded meetings across 30 diverse environments, capturing
real-world acoustic conditions and conversational dynamics, and a 1000-hour
simulated training dataset, synthesized with enhanced authenticity for
real-world generalization, incorporating 15,000 real acoustic transfer
functions. In this paper, we provide an overview of the systems submitted to
the challenge and analyze the top-performing approaches, hypothesizing the
factors behind their success. Additionally, we highlight promising directions
left unexplored by participants. By presenting key findings and actionable
insights, this work aims to drive further innovation and progress in DASR
research and applications.
| [
{
"version": "v1",
"created": "Tue, 28 Jan 2025 21:25:08 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 08:01:06 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Abramovski",
"Igor",
""
],
[
"Vinnikov",
"Alon",
""
],
[
"Shaer",
"Shalev",
""
],
[
"Kanda",
"Naoyuki",
""
],
[
"Wang",
"Xiaofei",
""
],
[
"Ivry",
"Amir",
""
],
[
"Krupka",
"Eyal",
""
]
]
| TITLE: Summary of the NOTSOFAR-1 Challenge: Highlights and Learnings
ABSTRACT: The first Natural Office Talkers in Settings of Far-field Audio Recordings
(NOTSOFAR-1) Challenge is a pivotal initiative that sets new benchmarks by
offering datasets more representative of the needs of real-world business
applications than those previously available. The challenge provides a unique
combination of 280 recorded meetings across 30 diverse environments, capturing
real-world acoustic conditions and conversational dynamics, and a 1000-hour
simulated training dataset, synthesized with enhanced authenticity for
real-world generalization, incorporating 15,000 real acoustic transfer
functions. In this paper, we provide an overview of the systems submitted to
the challenge and analyze the top-performing approaches, hypothesizing the
factors behind their success. Additionally, we highlight promising directions
left unexplored by participants. By presenting key findings and actionable
insights, this work aims to drive further innovation and progress in DASR
research and applications.
| new_dataset | 0.950041 |
2501.17823 | Md Kaykobad Reza | Md Kaykobad Reza, Ameya Patil, Mashhour Solh, M. Salman Asif | Robust Multimodal Learning via Cross-Modal Proxy Tokens | 17 Pages, 10 Figures, 6 Tables | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal models often experience a significant performance drop when one or
more modalities are missing during inference. To address this challenge, we
propose a simple yet effective approach that enhances robustness to missing
modalities while maintaining strong performance when all modalities are
available. Our method introduces cross-modal proxy tokens (CMPTs), which
approximate the class token of a missing modality by attending only to the
tokens of the available modality. To efficiently learn the approximation for
the missing modality via CMPTs with minimal computational overhead, we employ
low-rank adapters in frozen unimodal encoders and jointly optimize an alignment
loss with a task-specific loss. Extensive experiments on five multimodal
datasets show that our method outperforms state-of-the-art baselines across
various missing rates while achieving competitive results in complete-modality
settings. Overall, our method offers a flexible and efficient solution for
robust multimodal learning. The code and pretrained models will be released on
GitHub.
| [
{
"version": "v1",
"created": "Wed, 29 Jan 2025 18:15:49 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 01:34:24 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Reza",
"Md Kaykobad",
""
],
[
"Patil",
"Ameya",
""
],
[
"Solh",
"Mashhour",
""
],
[
"Asif",
"M. Salman",
""
]
]
| TITLE: Robust Multimodal Learning via Cross-Modal Proxy Tokens
ABSTRACT: Multimodal models often experience a significant performance drop when one or
more modalities are missing during inference. To address this challenge, we
propose a simple yet effective approach that enhances robustness to missing
modalities while maintaining strong performance when all modalities are
available. Our method introduces cross-modal proxy tokens (CMPTs), which
approximate the class token of a missing modality by attending only to the
tokens of the available modality. To efficiently learn the approximation for
the missing modality via CMPTs with minimal computational overhead, we employ
low-rank adapters in frozen unimodal encoders and jointly optimize an alignment
loss with a task-specific loss. Extensive experiments on five multimodal
datasets show that our method outperforms state-of-the-art baselines across
various missing rates while achieving competitive results in complete-modality
settings. Overall, our method offers a flexible and efficient solution for
robust multimodal learning. The code and pretrained models will be released on
GitHub.
| no_new_dataset | 0.947962 |
2501.18328 | Yicheng Wu | Yicheng Wu, Tao Song, Zhonghua Wu, Jin Ye, Zongyuan Ge, Zhaolin Chen,
Jianfei Cai | CodeBrain: Imputing Any Brain MRI via Modality- and Instance-Specific
Codes | CodeBrain v2 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Unified MRI imputation, which can adapt to diverse imputation scenarios, is
highly desirable as it reduces scanning costs and provides comprehensive MRI
information for improved clinical diagnosis. Existing unified MRI imputation
methods either rely on specific prompts to guide their transformation network
or require multiple modality-specific modules. However, these approaches
struggle to capture large modality and instance variations or become too
complex to generalize effectively. To address these limitations, we propose
CodeBrain, a fundamentally different pipeline for unified brain MRI imputation.
Our key idea is to reframe various inter-modality transformations as a
full-modality code prediction task via a two-stage framework. In the first
stage, CodeBrain reconstructs a target modality from any other modalities by
learning a compact scalar-quantized code for each instance and modality. Any
target modality can then be reconstructed with high fidelity by combining the
corresponding code with shared features extracted from any available modality.
In the second stage, a projection encoder is trained to predict full-modality
compact codes from any incomplete MRI samples, effectively simulating various
imputation scenarios. We evaluate our CodeBrain on two public brain MRI
datasets (i.e., IXI and BraTS 2023). Extensive experiments demonstrate that
CodeBrain outperforms state-of-the-art methods, setting a new benchmark for
unified brain MRI imputation. Our code will be released.
| [
{
"version": "v1",
"created": "Thu, 30 Jan 2025 13:14:40 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 02:55:58 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wu",
"Yicheng",
""
],
[
"Song",
"Tao",
""
],
[
"Wu",
"Zhonghua",
""
],
[
"Ye",
"Jin",
""
],
[
"Ge",
"Zongyuan",
""
],
[
"Chen",
"Zhaolin",
""
],
[
"Cai",
"Jianfei",
""
]
]
| TITLE: CodeBrain: Imputing Any Brain MRI via Modality- and Instance-Specific
Codes
ABSTRACT: Unified MRI imputation, which can adapt to diverse imputation scenarios, is
highly desirable as it reduces scanning costs and provides comprehensive MRI
information for improved clinical diagnosis. Existing unified MRI imputation
methods either rely on specific prompts to guide their transformation network
or require multiple modality-specific modules. However, these approaches
struggle to capture large modality and instance variations or become too
complex to generalize effectively. To address these limitations, we propose
CodeBrain, a fundamentally different pipeline for unified brain MRI imputation.
Our key idea is to reframe various inter-modality transformations as a
full-modality code prediction task via a two-stage framework. In the first
stage, CodeBrain reconstructs a target modality from any other modalities by
learning a compact scalar-quantized code for each instance and modality. Any
target modality can then be reconstructed with high fidelity by combining the
corresponding code with shared features extracted from any available modality.
In the second stage, a projection encoder is trained to predict full-modality
compact codes from any incomplete MRI samples, effectively simulating various
imputation scenarios. We evaluate our CodeBrain on two public brain MRI
datasets (i.e., IXI and BraTS 2023). Extensive experiments demonstrate that
CodeBrain outperforms state-of-the-art methods, setting a new benchmark for
unified brain MRI imputation. Our code will be released.
| no_new_dataset | 0.941975 |
2501.19017 | Yinxuan Gui | Bin Zhu, Huiyan Qi, Yinxuan Gui, Jingjing Chen, Chong-Wah Ngo, Ee-Peng
Lim | Calling a Spade a Heart: Gaslighting Multimodal Large Language Models
via Negation | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Multimodal Large Language Models (MLLMs) have exhibited remarkable
advancements in integrating different modalities, excelling in complex
understanding and generation tasks. Despite their success, MLLMs remain
vulnerable to conversational adversarial inputs, particularly negation
arguments. This paper systematically evaluates state-of-the-art MLLMs across
diverse benchmarks, revealing significant performance drops when negation
arguments are introduced to initially correct responses. Notably, we introduce
the first benchmark GaslightingBench, specifically designed to evaluate the
vulnerability of MLLMs to negation arguments. GaslightingBench consists of
multiple-choice questions curated from existing datasets, along with generated
negation prompts across 20 diverse categories. Through extensive evaluation,
we find that proprietary models such as Gemini-1.5-flash, GPT-4o and
Claude-3.5-Sonnet demonstrate better resilience compared to open-source
counterparts like Qwen2-VL and LLaVA. However, all evaluated MLLMs struggle to
maintain logical consistency under negation arguments during conversation. Our
findings provide critical insights for improving the robustness of MLLMs
against negation inputs, contributing to the development of more reliable and
trustworthy multimodal AI systems.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2025 10:37:48 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 13:50:13 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhu",
"Bin",
""
],
[
"Qi",
"Huiyan",
""
],
[
"Gui",
"Yinxuan",
""
],
[
"Chen",
"Jingjing",
""
],
[
"Ngo",
"Chong-Wah",
""
],
[
"Lim",
"Ee-Peng",
""
]
]
| TITLE: Calling a Spade a Heart: Gaslighting Multimodal Large Language Models
via Negation
ABSTRACT: Multimodal Large Language Models (MLLMs) have exhibited remarkable
advancements in integrating different modalities, excelling in complex
understanding and generation tasks. Despite their success, MLLMs remain
vulnerable to conversational adversarial inputs, particularly negation
arguments. This paper systematically evaluates state-of-the-art MLLMs across
diverse benchmarks, revealing significant performance drops when negation
arguments are introduced to initially correct responses. Notably, we introduce
the first benchmark GaslightingBench, specifically designed to evaluate the
vulnerability of MLLMs to negation arguments. GaslightingBench consists of
multiple-choice questions curated from existing datasets, along with generated
negation prompts across 20 diverse categories. Through extensive evaluation,
we find that proprietary models such as Gemini-1.5-flash, GPT-4o and
Claude-3.5-Sonnet demonstrate better resilience compared to open-source
counterparts like Qwen2-VL and LLaVA. However, all evaluated MLLMs struggle to
maintain logical consistency under negation arguments during conversation. Our
findings provide critical insights for improving the robustness of MLLMs
against negation inputs, contributing to the development of more reliable and
trustworthy multimodal AI systems.
| new_dataset | 0.954563 |
2501.19083 | Lei Jiang | Lei Jiang and Ye Wei and Hao Ni | MotionPCM: Real-Time Motion Synthesis with Phased Consistency Model | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion models have become a popular choice for human motion synthesis due
to their powerful generative capabilities. However, their high computational
complexity and large sampling steps pose challenges for real-time applications.
Fortunately, the Consistency Model (CM) provides a solution to greatly reduce
the number of sampling steps from hundreds to a few, typically fewer than four,
significantly accelerating the synthesis of diffusion models. However, applying
CM to text-conditioned human motion synthesis in latent space yields
unsatisfactory generation results. In this paper, we introduce
\textbf{MotionPCM}, a phased consistency model-based approach designed to
improve the quality and efficiency for real-time motion synthesis in latent
space. Experimental results on the HumanML3D dataset show that our model
achieves real-time inference at over 30 frames per second in a single sampling
step while outperforming the previous state-of-the-art with a 38.9\%
improvement in FID. The code will be available for reproduction.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2025 12:17:04 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 15:06:47 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Jiang",
"Lei",
""
],
[
"Wei",
"Ye",
""
],
[
"Ni",
"Hao",
""
]
]
| TITLE: MotionPCM: Real-Time Motion Synthesis with Phased Consistency Model
ABSTRACT: Diffusion models have become a popular choice for human motion synthesis due
to their powerful generative capabilities. However, their high computational
complexity and large sampling steps pose challenges for real-time applications.
Fortunately, the Consistency Model (CM) provides a solution to greatly reduce
the number of sampling steps from hundreds to a few, typically fewer than four,
significantly accelerating the synthesis of diffusion models. However, applying
CM to text-conditioned human motion synthesis in latent space yields
unsatisfactory generation results. In this paper, we introduce
\textbf{MotionPCM}, a phased consistency model-based approach designed to
improve the quality and efficiency for real-time motion synthesis in latent
space. Experimental results on the HumanML3D dataset show that our model
achieves real-time inference at over 30 frames per second in a single sampling
step while outperforming the previous state-of-the-art with a 38.9\%
improvement in FID. The code will be available for reproduction.
| no_new_dataset | 0.95018 |
2501.19172 | Georgia Channing | Aqib Mahfuz, Georgia Channing, Mark van der Wilk, Philip Torr, Fabio
Pizzati, Christian Schroeder de Witt | PSyDUCK: Training-Free Steganography for Latent Diffusion | null | null | null | null | cs.LG cs.CR | http://creativecommons.org/licenses/by/4.0/ | Recent advances in generative AI have opened promising avenues for
steganography, which can securely protect sensitive information for individuals
operating in hostile environments, such as journalists, activists, and
whistleblowers. However, existing methods for generative steganography have
significant limitations, particularly in scalability and their dependence on
retraining diffusion models. We introduce PSyDUCK, a training-free,
model-agnostic steganography framework specifically designed for latent
diffusion models. PSyDUCK leverages controlled divergence and local mixing
within the latent denoising process, enabling high-capacity, secure message
embedding without compromising visual fidelity. Our method dynamically adapts
embedding strength to balance accuracy and detectability, significantly
improving upon existing pixel-space approaches. Crucially, PSyDUCK extends
generative steganography to latent-space video diffusion models, surpassing
previous methods in both encoding capacity and robustness. Extensive
experiments demonstrate PSyDUCK's superiority over state-of-the-art techniques,
achieving higher transmission accuracy and lower detectability rates across
diverse image and video datasets. By overcoming the key challenges associated
with latent diffusion model architectures, PSyDUCK sets a new standard for
generative steganography, paving the way for scalable, real-world
steganographic applications.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2025 14:39:12 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 19:32:30 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Mahfuz",
"Aqib",
""
],
[
"Channing",
"Georgia",
""
],
[
"van der Wilk",
"Mark",
""
],
[
"Torr",
"Philip",
""
],
[
"Pizzati",
"Fabio",
""
],
[
"de Witt",
"Christian Schroeder",
""
]
]
| TITLE: PSyDUCK: Training-Free Steganography for Latent Diffusion
ABSTRACT: Recent advances in generative AI have opened promising avenues for
steganography, which can securely protect sensitive information for individuals
operating in hostile environments, such as journalists, activists, and
whistleblowers. However, existing methods for generative steganography have
significant limitations, particularly in scalability and their dependence on
retraining diffusion models. We introduce PSyDUCK, a training-free,
model-agnostic steganography framework specifically designed for latent
diffusion models. PSyDUCK leverages controlled divergence and local mixing
within the latent denoising process, enabling high-capacity, secure message
embedding without compromising visual fidelity. Our method dynamically adapts
embedding strength to balance accuracy and detectability, significantly
improving upon existing pixel-space approaches. Crucially, PSyDUCK extends
generative steganography to latent-space video diffusion models, surpassing
previous methods in both encoding capacity and robustness. Extensive
experiments demonstrate PSyDUCK's superiority over state-of-the-art techniques,
achieving higher transmission accuracy and lower detectability rates across
diverse image and video datasets. By overcoming the key challenges associated
with latent diffusion model architectures, PSyDUCK sets a new standard for
generative steganography, paving the way for scalable, real-world
steganographic applications.
| no_new_dataset | 0.940953 |
2501.19255 | Mian Muhammad Naeem Abid | Mian Muhammad Naeem Abid, Nancy Mehta, Zongwei Wu, Radu Timofte | ContextFormer: Redefining Efficiency in Semantic Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic segmentation assigns labels to pixels in images, a critical yet
challenging task in computer vision. Convolutional methods, although capturing
local dependencies well, struggle with long-range relationships. Vision
Transformers (ViTs) excel in global context capture but are hindered by high
computational demands, especially for high-resolution inputs. Most research
optimizes the encoder architecture, leaving the bottleneck underexplored - a
key area for enhancing performance and efficiency. We propose ContextFormer, a
hybrid framework leveraging the strengths of CNNs and ViTs in the bottleneck to
balance efficiency, accuracy, and robustness for real-time semantic
segmentation. The framework's efficiency is driven by three synergistic
modules: the Token Pyramid Extraction Module (TPEM) for hierarchical
multi-scale representation, the Transformer and Branched DepthwiseConv
(Trans-BDC) block for dynamic scale-aware feature modeling, and the Feature
Merging Module (FMM) for robust integration with enhanced spatial and
contextual consistency. Extensive experiments on ADE20K, Pascal Context,
CityScapes, and COCO-Stuff datasets show ContextFormer significantly
outperforms existing models, achieving state-of-the-art mIoU scores, setting a
new benchmark for efficiency and performance. The code will be made publicly
available upon acceptance.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2025 16:11:04 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 14:00:08 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Abid",
"Mian Muhammad Naeem",
""
],
[
"Mehta",
"Nancy",
""
],
[
"Wu",
"Zongwei",
""
],
[
"Timofte",
"Radu",
""
]
]
| TITLE: ContextFormer: Redefining Efficiency in Semantic Segmentation
ABSTRACT: Semantic segmentation assigns labels to pixels in images, a critical yet
challenging task in computer vision. Convolutional methods, although capturing
local dependencies well, struggle with long-range relationships. Vision
Transformers (ViTs) excel in global context capture but are hindered by high
computational demands, especially for high-resolution inputs. Most research
optimizes the encoder architecture, leaving the bottleneck underexplored - a
key area for enhancing performance and efficiency. We propose ContextFormer, a
hybrid framework leveraging the strengths of CNNs and ViTs in the bottleneck to
balance efficiency, accuracy, and robustness for real-time semantic
segmentation. The framework's efficiency is driven by three synergistic
modules: the Token Pyramid Extraction Module (TPEM) for hierarchical
multi-scale representation, the Transformer and Branched DepthwiseConv
(Trans-BDC) block for dynamic scale-aware feature modeling, and the Feature
Merging Module (FMM) for robust integration with enhanced spatial and
contextual consistency. Extensive experiments on ADE20K, Pascal Context,
CityScapes, and COCO-Stuff datasets show ContextFormer significantly
outperforms existing models, achieving state-of-the-art mIoU scores, setting a
new benchmark for efficiency and performance. The code will be made publicly
available upon acceptance.
| no_new_dataset | 0.947962 |
2502.05928 | Hongyu Ge | Hongyu Ge, Longkun Hao, Zihui Xu, Zhenxin Lin, Bin Li, Shoujun Zhou,
Hongjin Zhao, Yihang Liu | ClinKD: Cross-Modal Clinical Knowledge Distiller For Multi-Task Medical
Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical Visual Question Answering (Med-VQA) represents a critical and
challenging subtask within the general VQA domain. Despite significant
advancements in general Visual Question Answering (VQA), multimodal large
language models (MLLMs) still exhibit substantial limitations when handling
multi-task VQA scenarios. These limitations manifest through erroneous spatial
localization and misinterpretation of medical images, which primarily arise
from two fundamental issues: inadequate image-text alignment and insufficient
medical knowledge in general-purpose MLLMs for specialized medical
applications. To address these issues, we introduce the Cross-Modal Clinical
Knowledge Distiller (ClinKD), an innovative framework designed to enhance
image-text alignment and establish more effective medical knowledge adaptation
mechanisms, which enables MLLMs to adapt to medical knowledge. Our extensive
experimental evaluations demonstrate that the ClinKD achieves state-of-the-art
performance on the Med-GRIT-270k dataset, a challenging medical benchmark
containing fine-grained multi-task QA pairs. The results indicate that our
approach not only significantly improves image-text alignment but also
effectively enables MLLMs to adapt to medical knowledge. The source code
for ClinKD is available at: https://github.com/overloadedHenry/ClinKD.
| [
{
"version": "v1",
"created": "Sun, 9 Feb 2025 15:08:10 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 15:52:19 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ge",
"Hongyu",
""
],
[
"Hao",
"Longkun",
""
],
[
"Xu",
"Zihui",
""
],
[
"Lin",
"Zhenxin",
""
],
[
"Li",
"Bin",
""
],
[
"Zhou",
"Shoujun",
""
],
[
"Zhao",
"Hongjin",
""
],
[
"Liu",
"Yihang",
""
]
]
| TITLE: ClinKD: Cross-Modal Clinical Knowledge Distiller For Multi-Task Medical
Images
ABSTRACT: Medical Visual Question Answering (Med-VQA) represents a critical and
challenging subtask within the general VQA domain. Despite significant
advancements in general Visual Question Answering (VQA), multimodal large
language models (MLLMs) still exhibit substantial limitations when handling
multi-task VQA scenarios. These limitations manifest through erroneous spatial
localization and misinterpretation of medical images, which primarily arise
from two fundamental issues: inadequate image-text alignment and insufficient
medical knowledge in general-purpose MLLMs for specialized medical
applications. To address these issues, we introduce the Cross-Modal Clinical
Knowledge Distiller (ClinKD), an innovative framework designed to enhance
image-text alignment and establish more effective medical knowledge adaptation
mechanisms, which enables MLLMs to adapt to medical knowledge. Our extensive
experimental evaluations demonstrate that the ClinKD achieves state-of-the-art
performance on the Med-GRIT-270k dataset, a challenging medical benchmark
containing fine-grained multi-task QA pairs. The results indicate that our
approach not only significantly improves image-text alignment but also
effectively enables MLLMs to adapt to medical knowledge. The source code
for ClinKD is available at: https://github.com/overloadedHenry/ClinKD.
| no_new_dataset | 0.945248 |
2502.07972 | Zach Nussbaum | Zach Nussbaum, Brandon Duderstadt | Training Sparse Mixture Of Experts Text Embedding Models | null | null | null | null | cs.CL cs.AI cs.IR | http://creativecommons.org/licenses/by/4.0/ | Transformer-based text embedding models have improved their performance on
benchmarks like MIRACL and BEIR by increasing their parameter counts. However,
this scaling approach introduces significant deployment challenges, including
increased inference latency and memory usage. These challenges are particularly
severe in retrieval-augmented generation (RAG) applications, where large
models' increased memory requirements constrain dataset ingestion capacity, and
their higher latency directly impacts query-time performance. While causal
language models have addressed similar efficiency challenges using Mixture of
Experts (MoE) architectures, this approach hasn't been successfully adapted to
the general text embedding setting. In this paper, we introduce Nomic Embed v2,
the first general purpose MoE text embedding model. Our model outperforms
models in the same parameter class on both monolingual and multilingual
benchmarks while also maintaining competitive performance with models twice its
size. We open-source all code, models, and evaluation data to ensure full
reproducibility of our training pipeline at
\href{https://github.com/nomic-ai/contrastors}{https://github.com/nomic-ai/contrastors}.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 21:36:31 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Feb 2025 01:23:29 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 19:39:00 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Nussbaum",
"Zach",
""
],
[
"Duderstadt",
"Brandon",
""
]
]
| TITLE: Training Sparse Mixture Of Experts Text Embedding Models
ABSTRACT: Transformer-based text embedding models have improved their performance on
benchmarks like MIRACL and BEIR by increasing their parameter counts. However,
this scaling approach introduces significant deployment challenges, including
increased inference latency and memory usage. These challenges are particularly
severe in retrieval-augmented generation (RAG) applications, where large
models' increased memory requirements constrain dataset ingestion capacity, and
their higher latency directly impacts query-time performance. While causal
language models have addressed similar efficiency challenges using Mixture of
Experts (MoE) architectures, this approach hasn't been successfully adapted to
the general text embedding setting. In this paper, we introduce Nomic Embed v2,
the first general purpose MoE text embedding model. Our model outperforms
models in the same parameter class on both monolingual and multilingual
benchmarks while also maintaining competitive performance with models twice its
size. We open-source all code, models, and evaluation data to ensure full
reproducibility of our training pipeline at
\href{https://github.com/nomic-ai/contrastors}{https://github.com/nomic-ai/contrastors}.
| no_new_dataset | 0.947866 |
2502.08649 | Jun Yan | David Tussey and Jun Yan | Principles for Open Data Curation: A Case Study with the New York City
311 Service Request Data | null | null | null | null | cs.DB cs.CY stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the early 21st century, the open data movement began to transform
societies and governments by promoting transparency, innovation, and public
engagement. The City of New York (NYC) has been at the forefront of this
movement since the enactment of the Open Data Law in 2012, creating the NYC
Open Data portal. The portal currently hosts 2,700 datasets, serving as a
crucial resource for research across various domains, including health, urban
development, and transportation. However, the effective use of open data relies
heavily on data quality and usability, challenges that remain insufficiently
addressed in the literature. This paper examines these challenges via a case
study of the NYC 311 Service Request dataset, identifying key issues in data
validity, consistency, and curation efficiency. We propose a set of data
curation principles, tailored for government-released open data, to address
these challenges. Our findings highlight the importance of harmonized field
definitions, streamlined storage, and automated quality checks, offering
practical guidelines for improving the reliability and utility of open
datasets.
| [
{
"version": "v1",
"created": "Tue, 14 Jan 2025 12:06:20 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 02:07:39 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Tussey",
"David",
""
],
[
"Yan",
"Jun",
""
]
]
| TITLE: Principles for Open Data Curation: A Case Study with the New York City
311 Service Request Data
ABSTRACT: In the early 21st century, the open data movement began to transform
societies and governments by promoting transparency, innovation, and public
engagement. The City of New York (NYC) has been at the forefront of this
movement since the enactment of the Open Data Law in 2012, creating the NYC
Open Data portal. The portal currently hosts 2,700 datasets, serving as a
crucial resource for research across various domains, including health, urban
development, and transportation. However, the effective use of open data relies
heavily on data quality and usability, challenges that remain insufficiently
addressed in the literature. This paper examines these challenges via a case
study of the NYC 311 Service Request dataset, identifying key issues in data
validity, consistency, and curation efficiency. We propose a set of data
curation principles, tailored for government-released open data, to address
these challenges. Our findings highlight the importance of harmonized field
definitions, streamlined storage, and automated quality checks, offering
practical guidelines for improving the reliability and utility of open
datasets.
| no_new_dataset | 0.956594 |
2502.09564 | Massimiliano Ciranni M.Sc. | Massimiliano Ciranni, Vito Paolo Pastore, Roberto Di Via, Enzo
Tartaglione, Francesca Odone, Vittorio Murino | Diffusing DeBias: Synthetic Bias Amplification for Model Debiasing | 27 Pages, 12 Figures | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning model effectiveness in classification tasks is often challenged
by the quality and quantity of training data whenever they are affected by
strong spurious correlations between specific attributes and target labels.
This results in a form of bias affecting training data, which typically leads
to unrecoverable weak generalization in prediction. This paper aims at facing
this problem by leveraging bias amplification with generated synthetic data: we
introduce Diffusing DeBias (DDB), a novel approach acting as a plug-in for
common methods of unsupervised model debiasing exploiting the inherent
bias-learning tendency of diffusion models in data generation. Specifically,
our approach adopts conditional diffusion models to generate synthetic
bias-aligned images, which replace the original training set for learning an
effective bias amplifier model that we subsequently incorporate into an
end-to-end and a two-step unsupervised debiasing approach. By tackling the
fundamental issue of bias-conflicting training samples memorization in learning
auxiliary models, typical of this type of technique, our proposed method beats
the current state-of-the-art on multiple benchmark datasets, demonstrating its
potential as a versatile and effective tool for tackling bias in deep learning
models.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2025 18:17:03 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Feb 2025 22:42:41 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 18:41:50 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ciranni",
"Massimiliano",
""
],
[
"Pastore",
"Vito Paolo",
""
],
[
"Di Via",
"Roberto",
""
],
[
"Tartaglione",
"Enzo",
""
],
[
"Odone",
"Francesca",
""
],
[
"Murino",
"Vittorio",
""
]
]
| TITLE: Diffusing DeBias: Synthetic Bias Amplification for Model Debiasing
ABSTRACT: Deep learning model effectiveness in classification tasks is often challenged
by the quality and quantity of training data whenever they are affected by
strong spurious correlations between specific attributes and target labels.
This results in a form of bias affecting training data, which typically leads
to unrecoverable weak generalization in prediction. This paper aims at facing
this problem by leveraging bias amplification with generated synthetic data: we
introduce Diffusing DeBias (DDB), a novel approach acting as a plug-in for
common methods of unsupervised model debiasing exploiting the inherent
bias-learning tendency of diffusion models in data generation. Specifically,
our approach adopts conditional diffusion models to generate synthetic
bias-aligned images, which replace the original training set for learning an
effective bias amplifier model that we subsequently incorporate into an
end-to-end and a two-step unsupervised debiasing approach. By tackling the
fundamental issue of bias-conflicting training samples memorization in learning
auxiliary models, typical of this type of technique, our proposed method beats
the current state-of-the-art on multiple benchmark datasets, demonstrating its
potential as a versatile and effective tool for tackling bias in deep learning
models.
| no_new_dataset | 0.945551 |
2502.10868 | Chompakorn Chaksangchaichot | Pawitsapak Akarajaradwong, Pirat Pothavorn, Chompakorn
Chaksangchaichot, Panuthep Tasawong, Thitiwat Nopparatbundit, Sarana Nutanong | NitiBench: A Comprehensive Study of LLM Framework Capabilities for Thai
Legal Question Answering | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The application of large language models (LLMs) in the legal domain holds
significant potential for information retrieval and question answering, yet
Thai legal QA systems face challenges due to a lack of standardized evaluation
benchmarks and the complexity of Thai legal structures. This paper introduces
NitiBench, a benchmark comprising two datasets: the NitiBench-CCL, covering
general Thai financial law, and the NitiBench-Tax, which includes real-world
tax law cases requiring advanced legal reasoning. We evaluate
retrieval-augmented generation (RAG) and long-context LLM-based approaches to
address three key research questions: the impact of domain-specific components
like section-based chunking and cross-referencing, the comparative performance
of different retrievers and LLMs, and the viability of long-context LLMs as an
alternative to RAG. Our results show that section-based chunking significantly
improves retrieval and end-to-end performance, current retrievers struggle with
complex queries, and long-context LLMs still underperform RAG-based systems in
Thai legal QA. To support fair evaluation, we propose tailored multi-label
retrieval metrics and an LLM-as-judge method for coverage and contradiction
detection. These findings highlight the limitations of current Thai legal NLP
solutions and provide a foundation for future research in the field. We also
open-source our code and dataset publicly.
| [
{
"version": "v1",
"created": "Sat, 15 Feb 2025 17:52:14 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 06:45:23 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Mar 2025 05:11:53 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Akarajaradwong",
"Pawitsapak",
""
],
[
"Pothavorn",
"Pirat",
""
],
[
"Chaksangchaichot",
"Chompakorn",
""
],
[
"Tasawong",
"Panuthep",
""
],
[
"Nopparatbundit",
"Thitiwat",
""
],
[
"Nutanong",
"Sarana",
""
]
]
| TITLE: NitiBench: A Comprehensive Study of LLM Framework Capabilities for Thai
Legal Question Answering
ABSTRACT: The application of large language models (LLMs) in the legal domain holds
significant potential for information retrieval and question answering, yet
Thai legal QA systems face challenges due to a lack of standardized evaluation
benchmarks and the complexity of Thai legal structures. This paper introduces
NitiBench, a benchmark comprising two datasets: the NitiBench-CCL, covering
general Thai financial law, and the NitiBench-Tax, which includes real-world
tax law cases requiring advanced legal reasoning. We evaluate
retrieval-augmented generation (RAG) and long-context LLM-based approaches to
address three key research questions: the impact of domain-specific components
like section-based chunking and cross-referencing, the comparative performance
of different retrievers and LLMs, and the viability of long-context LLMs as an
alternative to RAG. Our results show that section-based chunking significantly
improves retrieval and end-to-end performance, current retrievers struggle with
complex queries, and long-context LLMs still underperform RAG-based systems in
Thai legal QA. To support fair evaluation, we propose tailored multi-label
retrieval metrics and an LLM-as-judge method for coverage and contradiction
detection. These findings highlight the limitations of current Thai legal NLP
solutions and provide a foundation for future research in the field. We also
open-source our code and dataset publicly.
| new_dataset | 0.974797 |
2502.11418 | Geon Lee | Geon Lee, Wenchao Yu, Kijung Shin, Wei Cheng, Haifeng Chen | TimeCAP: Learning to Contextualize, Augment, and Predict Time Series
Events with Large Language Model Agents | AAAI 2025 | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series data is essential in various applications, including climate
modeling, healthcare monitoring, and financial analytics. Understanding the
contextual information associated with real-world time series data is often
essential for accurate and reliable event predictions. In this paper, we
introduce TimeCAP, a time-series processing framework that creatively employs
Large Language Models (LLMs) as contextualizers of time series data, extending
their typical usage as predictors. TimeCAP incorporates two independent LLM
agents: one generates a textual summary capturing the context of the time
series, while the other uses this enriched summary to make more informed
predictions. In addition, TimeCAP employs a multi-modal encoder that synergizes
with the LLM agents, enhancing predictive performance through mutual
augmentation of inputs with in-context examples. Experimental results on
real-world datasets demonstrate that TimeCAP outperforms state-of-the-art
methods for time series event prediction, including those utilizing LLMs as
predictors, achieving an average improvement of 28.75% in F1 score.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 04:17:27 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 04:15:20 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Lee",
"Geon",
""
],
[
"Yu",
"Wenchao",
""
],
[
"Shin",
"Kijung",
""
],
[
"Cheng",
"Wei",
""
],
[
"Chen",
"Haifeng",
""
]
]
| TITLE: TimeCAP: Learning to Contextualize, Augment, and Predict Time Series
Events with Large Language Model Agents
ABSTRACT: Time series data is essential in various applications, including climate
modeling, healthcare monitoring, and financial analytics. Understanding the
contextual information associated with real-world time series data is often
essential for accurate and reliable event predictions. In this paper, we
introduce TimeCAP, a time-series processing framework that creatively employs
Large Language Models (LLMs) as contextualizers of time series data, extending
their typical usage as predictors. TimeCAP incorporates two independent LLM
agents: one generates a textual summary capturing the context of the time
series, while the other uses this enriched summary to make more informed
predictions. In addition, TimeCAP employs a multi-modal encoder that synergizes
with the LLM agents, enhancing predictive performance through mutual
augmentation of inputs with in-context examples. Experimental results on
real-world datasets demonstrate that TimeCAP outperforms state-of-the-art
methods for time series event prediction, including those utilizing LLMs as
predictors, achieving an average improvement of 28.75% in F1 score.
| no_new_dataset | 0.949482 |
2502.11925 | Yi Fang | Yi Fang, Bowen Jin, Jiacheng Shen, Sirui Ding, Qiaoyu Tan, Jiawei Han | GRAPHGPT-O: Synergistic Multimodal Comprehension and Generation on
Graphs | null | null | null | null | cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The rapid development of Multimodal Large Language Models (MLLMs) has enabled
the integration of multiple modalities, including texts and images, within the
large language model (LLM) framework. However, texts and images are usually
interconnected, forming a multimodal attributed graph (MMAG). How MLLMs can
incorporate the relational information (\textit{i.e.}, graph structure) and
semantic information (\textit{i.e.}, texts and images) on such graphs for
multimodal comprehension and generation remains underexplored. In this
paper, we propose GraphGPT-o, which supports omni-multimodal understanding and
creation on MMAGs. We first comprehensively study linearization variants to
transform semantic and structural information as input for MLLMs. Then, we
propose a hierarchical aligner that enables deep graph encoding, bridging the
gap between MMAGs and MLLMs. Finally, we explore the inference choices,
adapting MLLM to interleaved text and image generation in graph scenarios.
Extensive experiments on three datasets from different domains demonstrate the
effectiveness of our proposed method. Datasets and codes will be open-sourced
upon acceptance.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 15:35:36 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 02:59:52 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Fang",
"Yi",
""
],
[
"Jin",
"Bowen",
""
],
[
"Shen",
"Jiacheng",
""
],
[
"Ding",
"Sirui",
""
],
[
"Tan",
"Qiaoyu",
""
],
[
"Han",
"Jiawei",
""
]
]
| TITLE: GRAPHGPT-O: Synergistic Multimodal Comprehension and Generation on
Graphs
ABSTRACT: The rapid development of Multimodal Large Language Models (MLLMs) has enabled
the integration of multiple modalities, including texts and images, within the
large language model (LLM) framework. However, texts and images are usually
interconnected, forming a multimodal attributed graph (MMAG). How MLLMs can
incorporate the relational information (\textit{i.e.}, graph structure) and
semantic information (\textit{i.e.}, texts and images) on such graphs for
multimodal comprehension and generation remains underexplored. In this
paper, we propose GraphGPT-o, which supports omni-multimodal understanding and
creation on MMAGs. We first comprehensively study linearization variants to
transform semantic and structural information as input for MLLMs. Then, we
propose a hierarchical aligner that enables deep graph encoding, bridging the
gap between MMAGs and MLLMs. Finally, we explore the inference choices,
adapting MLLM to interleaved text and image generation in graph scenarios.
Extensive experiments on three datasets from different domains demonstrate the
effectiveness of our proposed method. Datasets and codes will be open-sourced
upon acceptance.
| no_new_dataset | 0.946892 |
2502.11926 | Nedjma Ousidhoum | Shamsuddeen Hassan Muhammad, Nedjma Ousidhoum, Idris Abdulmumin, Jan
Philip Wahle, Terry Ruas, Meriem Beloucif, Christine de Kock, Nirmal Surange,
Daniela Teodorescu, Ibrahim Said Ahmad, David Ifeoluwa Adelani, Alham Fikri
Aji, Felermino D. M. A. Ali, Ilseyar Alimova, Vladimir Araujo, Nikolay
Babakov, Naomi Baes, Ana-Maria Bucur, Andiswa Bukula, Guanqun Cao, Rodrigo
Tufino Cardenas, Rendi Chevi, Chiamaka Ijeoma Chukwuneke, Alexandra
Ciobotaru, Daryna Dementieva, Murja Sani Gadanya, Robert Geislinger, Bela
Gipp, Oumaima Hourrane, Oana Ignat, Falalu Ibrahim Lawan, Rooweither Mabuya,
Rahmad Mahendra, Vukosi Marivate, Andrew Piper, Alexander Panchenko, Charles
Henrique Porto Ferreira, Vitaly Protasov, Samuel Rutunda, Manish Shrivastava,
Aura Cristina Udrea, Lilian Diana Awuor Wanzare, Sophie Wu, Florian Valentin
Wunderlich, Hanif Muhammad Zhafran, Tianhui Zhang, Yi Zhou, Saif M. Mohammad | BRIGHTER: BRIdging the Gap in Human-Annotated Textual Emotion
Recognition Datasets for 28 Languages | 20 pages, under review | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | People worldwide use language in subtle and complex ways to express emotions.
While emotion recognition -- an umbrella term for several NLP tasks --
significantly impacts different applications in NLP and other fields, most work
in the area is focused on high-resource languages. Therefore, this has led to
major disparities in research and proposed solutions, especially for
low-resource languages that suffer from the lack of high-quality datasets. In
this paper, we present BRIGHTER -- a collection of multilabeled
emotion-annotated datasets in 28 different languages. BRIGHTER covers
predominantly low-resource languages from Africa, Asia, Eastern Europe, and
Latin America, with instances from various domains annotated by fluent
speakers. We describe the data collection and annotation processes and the
challenges of building these datasets. Then, we report different experimental
results for monolingual and crosslingual multi-label emotion identification, as
well as intensity-level emotion recognition. We investigate results with and
without using LLMs and analyse the large variability in performance across
languages and text domains. We show that BRIGHTER datasets are a step towards
bridging the gap in text-based emotion recognition and discuss their impact and
utility.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 15:39:50 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 12:20:14 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Muhammad",
"Shamsuddeen Hassan",
""
],
[
"Ousidhoum",
"Nedjma",
""
],
[
"Abdulmumin",
"Idris",
""
],
[
"Wahle",
"Jan Philip",
""
],
[
"Ruas",
"Terry",
""
],
[
"Beloucif",
"Meriem",
""
],
[
"de Kock",
"Christine",
""
],
[
"Surange",
"Nirmal",
""
],
[
"Teodorescu",
"Daniela",
""
],
[
"Ahmad",
"Ibrahim Said",
""
],
[
"Adelani",
"David Ifeoluwa",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Ali",
"Felermino D. M. A.",
""
],
[
"Alimova",
"Ilseyar",
""
],
[
"Araujo",
"Vladimir",
""
],
[
"Babakov",
"Nikolay",
""
],
[
"Baes",
"Naomi",
""
],
[
"Bucur",
"Ana-Maria",
""
],
[
"Bukula",
"Andiswa",
""
],
[
"Cao",
"Guanqun",
""
],
[
"Cardenas",
"Rodrigo Tufino",
""
],
[
"Chevi",
"Rendi",
""
],
[
"Chukwuneke",
"Chiamaka Ijeoma",
""
],
[
"Ciobotaru",
"Alexandra",
""
],
[
"Dementieva",
"Daryna",
""
],
[
"Gadanya",
"Murja Sani",
""
],
[
"Geislinger",
"Robert",
""
],
[
"Gipp",
"Bela",
""
],
[
"Hourrane",
"Oumaima",
""
],
[
"Ignat",
"Oana",
""
],
[
"Lawan",
"Falalu Ibrahim",
""
],
[
"Mabuya",
"Rooweither",
""
],
[
"Mahendra",
"Rahmad",
""
],
[
"Marivate",
"Vukosi",
""
],
[
"Piper",
"Andrew",
""
],
[
"Panchenko",
"Alexander",
""
],
[
"Ferreira",
"Charles Henrique Porto",
""
],
[
"Protasov",
"Vitaly",
""
],
[
"Rutunda",
"Samuel",
""
],
[
"Shrivastava",
"Manish",
""
],
[
"Udrea",
"Aura Cristina",
""
],
[
"Wanzare",
"Lilian Diana Awuor",
""
],
[
"Wu",
"Sophie",
""
],
[
"Wunderlich",
"Florian Valentin",
""
],
[
"Zhafran",
"Hanif Muhammad",
""
],
[
"Zhang",
"Tianhui",
""
],
[
"Zhou",
"Yi",
""
],
[
"Mohammad",
"Saif M.",
""
]
]
| TITLE: BRIGHTER: BRIdging the Gap in Human-Annotated Textual Emotion
Recognition Datasets for 28 Languages
ABSTRACT: People worldwide use language in subtle and complex ways to express emotions.
While emotion recognition -- an umbrella term for several NLP tasks --
significantly impacts different applications in NLP and other fields, most work
in the area is focused on high-resource languages. Therefore, this has led to
major disparities in research and proposed solutions, especially for
low-resource languages that suffer from the lack of high-quality datasets. In
this paper, we present BRIGHTER -- a collection of multilabeled
emotion-annotated datasets in 28 different languages. BRIGHTER covers
predominantly low-resource languages from Africa, Asia, Eastern Europe, and
Latin America, with instances from various domains annotated by fluent
speakers. We describe the data collection and annotation processes and the
challenges of building these datasets. Then, we report different experimental
results for monolingual and crosslingual multi-label emotion identification, as
well as intensity-level emotion recognition. We investigate results with and
without using LLMs and analyse the large variability in performance across
languages and text domains. We show that BRIGHTER datasets are a step towards
bridging the gap in text-based emotion recognition and discuss their impact and
utility.
| new_dataset | 0.951863 |
2502.13833 | Milton Nicol\'as Plasencia Palacios | Milton Nicol\'as Plasencia Palacios, Sebastiano Saccani, Gabriele
Sgroi, Alexander Boudewijn and Luca Bortolussi | Contrastive Learning-Based privacy metrics in Tabular Synthetic Datasets | null | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthetic data has garnered attention as a Privacy Enhancing Technology (PET)
in sectors such as healthcare and finance. When using synthetic data in
practical applications, it is important to provide protection guarantees. In
the literature, two families of approaches are proposed for tabular data: on the
one hand, Similarity-based methods aim at finding the level of similarity
between training and synthetic data. Indeed, a privacy breach can occur if the
generated data is consistently too similar or even identical to the train data.
On the other hand, Attack-based methods conduct deliberate attacks on synthetic
datasets. The success rates of these attacks reveal how secure the synthetic
datasets are.
In this paper, we introduce a contrastive method that improves privacy
assessment of synthetic datasets by embedding the data in a more representative
space. This overcomes obstacles surrounding the multitude of data types and
attributes. It also makes the use of intuitive distance metrics possible for
similarity measurements and as an attack vector. In a series of experiments
with publicly available datasets, we compare the performances of
similarity-based and attack-based methods, both with and without use of the
contrastive learning-based embeddings. Our results show that relatively
efficient, easy-to-implement privacy metrics can perform as well as more
advanced metrics explicitly modeling conditions for privacy referred to by the
GDPR.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2025 15:52:23 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 09:01:19 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Palacios",
"Milton Nicolás Plasencia",
""
],
[
"Saccani",
"Sebastiano",
""
],
[
"Sgroi",
"Gabriele",
""
],
[
"Boudewijn",
"Alexander",
""
],
[
"Bortolussi",
"Luca",
""
]
]
| TITLE: Contrastive Learning-Based privacy metrics in Tabular Synthetic Datasets
ABSTRACT: Synthetic data has garnered attention as a Privacy Enhancing Technology (PET)
in sectors such as healthcare and finance. When using synthetic data in
practical applications, it is important to provide protection guarantees. In
the literature, two families of approaches are proposed for tabular data: on the
one hand, Similarity-based methods aim at finding the level of similarity
between training and synthetic data. Indeed, a privacy breach can occur if the
generated data is consistently too similar or even identical to the train data.
On the other hand, Attack-based methods conduct deliberate attacks on synthetic
datasets. The success rates of these attacks reveal how secure the synthetic
datasets are.
In this paper, we introduce a contrastive method that improves privacy
assessment of synthetic datasets by embedding the data in a more representative
space. This overcomes obstacles surrounding the multitude of data types and
attributes. It also makes the use of intuitive distance metrics possible for
similarity measurements and as an attack vector. In a series of experiments
with publicly available datasets, we compare the performances of
similarity-based and attack-based methods, both with and without use of the
contrastive learning-based embeddings. Our results show that relatively
efficient, easy-to-implement privacy metrics can perform as well as more
advanced metrics explicitly modeling conditions for privacy referred to by the
GDPR.
| no_new_dataset | 0.949623 |
2502.15027 | Zhao Hengyuan | Henry Hengyuan Zhao, Wenqi Pei, Yifei Tao, Haiyang Mei, Mike Zheng
Shou | InterFeedback: Unveiling Interactive Intelligence of Large Multimodal
Models via Human Feedback | 18 pages, 10 figures | null | null | null | cs.CL cs.AI cs.CV cs.HC | http://creativecommons.org/licenses/by/4.0/ | Existing benchmarks do not test Large Multimodal Models (LMMs) on their
interactive intelligence with human users, which is vital for developing
general-purpose AI assistants. We design InterFeedback, an interactive
framework, which can be applied to any LMM and dataset to assess this ability
autonomously. On top of this, we introduce InterFeedback-Bench which evaluates
interactive intelligence using two representative datasets, MMMU-Pro and
MathVerse, to test 10 different open-source LMMs. Additionally, we present
InterFeedback-Human, a newly collected dataset of 120 cases designed for
manually testing interactive performance in leading models such as OpenAI-o1
and Claude-3.5-Sonnet. Our evaluation results indicate that even the
state-of-the-art LMM, OpenAI-o1, struggles to refine its responses based on
human feedback, achieving an average score of less than 50%. Our findings point
to the need for methods that can enhance LMMs' capabilities to interpret and
benefit from feedback.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 20:27:06 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 01:07:59 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhao",
"Henry Hengyuan",
""
],
[
"Pei",
"Wenqi",
""
],
[
"Tao",
"Yifei",
""
],
[
"Mei",
"Haiyang",
""
],
[
"Shou",
"Mike Zheng",
""
]
]
| TITLE: InterFeedback: Unveiling Interactive Intelligence of Large Multimodal
Models via Human Feedback
ABSTRACT: Existing benchmarks do not test Large Multimodal Models (LMMs) on their
interactive intelligence with human users, which is vital for developing
general-purpose AI assistants. We design InterFeedback, an interactive
framework, which can be applied to any LMM and dataset to assess this ability
autonomously. On top of this, we introduce InterFeedback-Bench which evaluates
interactive intelligence using two representative datasets, MMMU-Pro and
MathVerse, to test 10 different open-source LMMs. Additionally, we present
InterFeedback-Human, a newly collected dataset of 120 cases designed for
manually testing interactive performance in leading models such as OpenAI-o1
and Claude-3.5-Sonnet. Our evaluation results indicate that even the
state-of-the-art LMM, OpenAI-o1, struggles to refine its responses based on
human feedback, achieving an average score of less than 50%. Our findings point
to the need for methods that can enhance LMMs' capabilities to interpret and
benefit from feedback.
| new_dataset | 0.958069 |
2502.16660 | Haiteng Zhao | Haiteng Zhao, Chang Ma, Fangzhi Xu, Lingpeng Kong, Zhi-Hong Deng | BioMaze: Benchmarking and Enhancing Large Language Models for Biological
Pathway Reasoning | null | null | null | null | cs.LG cs.AI q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The applications of large language models (LLMs) in various biological
domains have been explored recently, but their reasoning ability in complex
biological systems, such as pathways, remains underexplored, which is crucial
for predicting biological phenomena, formulating hypotheses, and designing
experiments. This work explores the potential of LLMs in pathway reasoning. We
introduce BioMaze, a dataset with 5.1K complex pathway problems derived from
real research, covering various biological contexts including natural dynamic
changes, disturbances, additional intervention conditions, and multi-scale
research targets. Our evaluation of methods such as CoT and graph-augmented
reasoning shows that LLMs struggle with pathway reasoning, especially in
perturbed systems. To address this, we propose PathSeeker, an LLM agent that
enhances reasoning through interactive subgraph-based navigation, enabling a
more effective approach to handling the complexities of biological systems in a
scientifically aligned manner. The dataset and code are available at
https://github.com/zhao-ht/BioMaze.
| [
{
"version": "v1",
"created": "Sun, 23 Feb 2025 17:38:10 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2025 17:17:08 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 04:21:05 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhao",
"Haiteng",
""
],
[
"Ma",
"Chang",
""
],
[
"Xu",
"Fangzhi",
""
],
[
"Kong",
"Lingpeng",
""
],
[
"Deng",
"Zhi-Hong",
""
]
]
| TITLE: BioMaze: Benchmarking and Enhancing Large Language Models for Biological
Pathway Reasoning
ABSTRACT: The applications of large language models (LLMs) in various biological
domains have been explored recently, but their reasoning ability in complex
biological systems, such as pathways, remains underexplored, which is crucial
for predicting biological phenomena, formulating hypotheses, and designing
experiments. This work explores the potential of LLMs in pathway reasoning. We
introduce BioMaze, a dataset with 5.1K complex pathway problems derived from
real research, covering various biological contexts including natural dynamic
changes, disturbances, additional intervention conditions, and multi-scale
research targets. Our evaluation of methods such as CoT and graph-augmented
reasoning shows that LLMs struggle with pathway reasoning, especially in
perturbed systems. To address this, we propose PathSeeker, an LLM agent that
enhances reasoning through interactive subgraph-based navigation, enabling a
more effective approach to handling the complexities of biological systems in a
scientifically aligned manner. The dataset and code are available at
https://github.com/zhao-ht/BioMaze.
| new_dataset | 0.957715 |
2502.18041 | Yunpeng Gao | Yunpeng Gao, Chenhui Li, Zhongrui You, Junli Liu, Zhen Li, Pengan
Chen, Qizhi Chen, Zhonghan Tang, Liansheng Wang, Penghui Yang, Yiwen Tang,
Yuhang Tang, Shuai Liang, Songyi Zhu, Ziqin Xiong, Yifei Su, Xinyi Ye, Jianan
Li, Yan Ding, Dong Wang, Zhigang Wang, Bin Zhao, Xuelong Li | OpenFly: A Versatile Toolchain and Large-scale Benchmark for Aerial
Vision-Language Navigation | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-Language Navigation (VLN) aims to guide agents through an environment
by leveraging both language instructions and visual cues, playing a pivotal
role in embodied AI. Indoor VLN has been extensively studied, whereas outdoor
aerial VLN remains underexplored. The potential reason is that outdoor aerial
view encompasses vast areas, making data collection more challenging, which
results in a lack of benchmarks. To address this problem, we propose OpenFly, a
platform comprising a versatile toolchain and large-scale benchmark for aerial
VLN. Firstly, we develop a highly automated toolchain for data collection,
enabling automatic point cloud acquisition, scene semantic segmentation, flight
trajectory creation, and instruction generation. Secondly, based on the
toolchain, we construct a large-scale aerial VLN dataset with 100k
trajectories, covering diverse heights and lengths across 18 scenes. The
corresponding visual data are generated using various rendering engines and
advanced techniques, including Unreal Engine, GTA V, Google Earth, and 3D
Gaussian Splatting (3D GS). All data exhibit high visual quality. Particularly,
3D GS supports real-to-sim rendering, further enhancing the realism of the
dataset. Thirdly, we propose OpenFly-Agent, a keyframe-aware VLN model, which
takes language instructions, current observations, and historical keyframes as
input, and outputs flight actions directly. Extensive analyses and experiments
are conducted, showcasing the superiority of our OpenFly platform and
OpenFly-Agent. The toolchain, dataset, and codes will be open-sourced.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 09:57:18 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 02:10:39 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 08:38:58 GMT"
},
{
"version": "v4",
"created": "Sat, 8 Mar 2025 10:11:32 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Gao",
"Yunpeng",
""
],
[
"Li",
"Chenhui",
""
],
[
"You",
"Zhongrui",
""
],
[
"Liu",
"Junli",
""
],
[
"Li",
"Zhen",
""
],
[
"Chen",
"Pengan",
""
],
[
"Chen",
"Qizhi",
""
],
[
"Tang",
"Zhonghan",
""
],
[
"Wang",
"Liansheng",
""
],
[
"Yang",
"Penghui",
""
],
[
"Tang",
"Yiwen",
""
],
[
"Tang",
"Yuhang",
""
],
[
"Liang",
"Shuai",
""
],
[
"Zhu",
"Songyi",
""
],
[
"Xiong",
"Ziqin",
""
],
[
"Su",
"Yifei",
""
],
[
"Ye",
"Xinyi",
""
],
[
"Li",
"Jianan",
""
],
[
"Ding",
"Yan",
""
],
[
"Wang",
"Dong",
""
],
[
"Wang",
"Zhigang",
""
],
[
"Zhao",
"Bin",
""
],
[
"Li",
"Xuelong",
""
]
]
| TITLE: OpenFly: A Versatile Toolchain and Large-scale Benchmark for Aerial
Vision-Language Navigation
ABSTRACT: Vision-Language Navigation (VLN) aims to guide agents through an environment
by leveraging both language instructions and visual cues, playing a pivotal
role in embodied AI. Indoor VLN has been extensively studied, whereas outdoor
aerial VLN remains underexplored. The potential reason is that outdoor aerial
view encompasses vast areas, making data collection more challenging, which
results in a lack of benchmarks. To address this problem, we propose OpenFly, a
platform comprising a versatile toolchain and large-scale benchmark for aerial
VLN. Firstly, we develop a highly automated toolchain for data collection,
enabling automatic point cloud acquisition, scene semantic segmentation, flight
trajectory creation, and instruction generation. Secondly, based on the
toolchain, we construct a large-scale aerial VLN dataset with 100k
trajectories, covering diverse heights and lengths across 18 scenes. The
corresponding visual data are generated using various rendering engines and
advanced techniques, including Unreal Engine, GTA V, Google Earth, and 3D
Gaussian Splatting (3D GS). All data exhibit high visual quality. Particularly,
3D GS supports real-to-sim rendering, further enhancing the realism of the
dataset. Thirdly, we propose OpenFly-Agent, a keyframe-aware VLN model, which
takes language instructions, current observations, and historical keyframes as
input, and outputs flight actions directly. Extensive analyses and experiments
are conducted, showcasing the superiority of our OpenFly platform and
OpenFly-Agent. The toolchain, dataset, and codes will be open-sourced.
| new_dataset | 0.961171 |
2502.18101 | Yuxuan Cao | Cao Yuxuan, Wu Jiayang, Alistair Cheong Liang Chuen, Bryan Shan
Guanrong, Theodore Lee Chong Jen, Sherman Chann Zhi Shen | Detecting Offensive Memes with Social Biases in Singapore Context Using
Multimodal Large Language Models | Accepted at 3rd Workshop on Cross-Cultural Considerations in NLP
(C3NLP), co-located with NAACL 2025. This is an extended version with some
appendix moved to the main body | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | Traditional online content moderation systems struggle to classify modern
multimodal means of communication, such as memes, a highly nuanced and
information-dense medium. This task is especially hard in a culturally diverse
society like Singapore, where low-resource languages are used and extensive
knowledge on local context is needed to interpret online content. We curate a
large collection of 112K memes labeled by GPT-4V for fine-tuning a VLM to
classify offensive memes in Singapore context. We show the effectiveness of
fine-tuned VLMs on our dataset, and propose a pipeline containing OCR,
translation and a 7-billion parameter-class VLM. Our solutions reach 80.62%
accuracy and 0.8192 AUROC on a held-out test set, and can greatly aid humans in
moderating online content. The dataset, code, and model weights have been
open-sourced at https://github.com/aliencaocao/vlm-for-memes-aisg.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 11:15:49 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 08:35:02 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Yuxuan",
"Cao",
""
],
[
"Jiayang",
"Wu",
""
],
[
"Chuen",
"Alistair Cheong Liang",
""
],
[
"Guanrong",
"Bryan Shan",
""
],
[
"Jen",
"Theodore Lee Chong",
""
],
[
"Shen",
"Sherman Chann Zhi",
""
]
]
| TITLE: Detecting Offensive Memes with Social Biases in Singapore Context Using
Multimodal Large Language Models
ABSTRACT: Traditional online content moderation systems struggle to classify modern
multimodal means of communication, such as memes, a highly nuanced and
information-dense medium. This task is especially hard in a culturally diverse
society like Singapore, where low-resource languages are used and extensive
knowledge on local context is needed to interpret online content. We curate a
large collection of 112K memes labeled by GPT-4V for fine-tuning a VLM to
classify offensive memes in Singapore context. We show the effectiveness of
fine-tuned VLMs on our dataset, and propose a pipeline containing OCR,
translation and a 7-billion parameter-class VLM. Our solutions reach 80.62%
accuracy and 0.8192 AUROC on a held-out test set, and can greatly aid humans in
moderating online content. The dataset, code, and model weights have been
open-sourced at https://github.com/aliencaocao/vlm-for-memes-aisg.
| new_dataset | 0.972152 |
2502.18150 | Marco Pesavento | Ayushi Dutta, Marco Pesavento, Marco Volino, Adrian Hilton, Armin
Mustafa | Realistic Clothed Human and Object Joint Reconstruction from a Single
Image | null | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | Recent approaches to jointly reconstruct 3D humans and objects from a single
RGB image represent 3D shapes with template-based or coarse models, which fail
to capture details of loose clothing on human bodies. In this paper, we
introduce a novel implicit approach for jointly reconstructing realistic 3D
clothed humans and objects from a monocular view. For the first time, we model
both the human and the object with an implicit representation, allowing us to
capture more realistic details such as clothing. This task is extremely
challenging due to human-object occlusions and the lack of 3D information in 2D
images, often leading to poor detail reconstruction and depth ambiguity. To
address these problems, we propose a novel attention-based neural implicit
model that leverages image pixel alignment from both the input human-object
image for a global understanding of the human-object scene and from local
separate views of the human and object images to improve realism with, for
example, clothing details. Additionally, the network is conditioned on semantic
features derived from an estimated human-object pose prior, which provides 3D
spatial information about the shared space of humans and objects. To handle
human occlusion caused by objects, we use a generative diffusion model that
inpaints the occluded regions, recovering otherwise lost details. For training
and evaluation, we introduce a synthetic dataset featuring rendered scenes of
inter-occluded 3D human scans and diverse objects. Extensive evaluation on both
synthetic and real-world datasets demonstrates the superior quality of the
proposed human-object reconstructions over competitive methods.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 12:26:36 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 12:51:25 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Dutta",
"Ayushi",
""
],
[
"Pesavento",
"Marco",
""
],
[
"Volino",
"Marco",
""
],
[
"Hilton",
"Adrian",
""
],
[
"Mustafa",
"Armin",
""
]
]
| TITLE: Realistic Clothed Human and Object Joint Reconstruction from a Single
Image
ABSTRACT: Recent approaches to jointly reconstruct 3D humans and objects from a single
RGB image represent 3D shapes with template-based or coarse models, which fail
to capture details of loose clothing on human bodies. In this paper, we
introduce a novel implicit approach for jointly reconstructing realistic 3D
clothed humans and objects from a monocular view. For the first time, we model
both the human and the object with an implicit representation, allowing us to
capture more realistic details such as clothing. This task is extremely
challenging due to human-object occlusions and the lack of 3D information in 2D
images, often leading to poor detail reconstruction and depth ambiguity. To
address these problems, we propose a novel attention-based neural implicit
model that leverages image pixel alignment from both the input human-object
image for a global understanding of the human-object scene and from local
separate views of the human and object images to improve realism with, for
example, clothing details. Additionally, the network is conditioned on semantic
features derived from an estimated human-object pose prior, which provides 3D
spatial information about the shared space of humans and objects. To handle
human occlusion caused by objects, we use a generative diffusion model that
inpaints the occluded regions, recovering otherwise lost details. For training
and evaluation, we introduce a synthetic dataset featuring rendered scenes of
inter-occluded 3D human scans and diverse objects. Extensive evaluation on both
synthetic and real-world datasets demonstrates the superior quality of the
proposed human-object reconstructions over competitive methods.
| new_dataset | 0.958421 |
2502.18786 | Jun-En Ding | Jun-En Ding, Dongsheng Luo, Anna Zilverstand, Feng Liu | NeuroTree: Hierarchical Functional Brain Pathway Decoding for Mental
Health Disorders | null | null | null | null | cs.NE cs.AI q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Analyzing functional brain networks using functional magnetic resonance
imaging (fMRI) is crucial for understanding psychiatric disorders and addictive
behaviors. While existing fMRI-based graph convolutional networks (GCNs) show
considerable promise for feature extraction, they often fall short in
characterizing complex relationships between brain regions and demographic
factors and accounting for interpretable variables linked to psychiatric
conditions. We propose NeuroTree to overcome these limitations, integrating a
k-hop AGE-GCN with neural ordinary differential equations (ODEs). This
framework leverages an attention mechanism to optimize functional connectivity
(FC), thereby enhancing dynamic FC feature learning for brain disease
classification. Furthermore, NeuroTree effectively decodes fMRI network
features into tree structures, which improves the capture of high-order brain
regional pathway features and enables the identification of hierarchical neural
behavioral patterns essential for understanding disease-related brain
subnetworks. Our empirical evaluations demonstrate that NeuroTree achieves
state-of-the-art performance across two distinct mental disorder datasets and
provides valuable insights into age-related deterioration patterns. These
findings underscore the model's efficacy in predicting psychiatric disorders
and elucidating their underlying neural mechanisms.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 03:42:58 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 03:03:09 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ding",
"Jun-En",
""
],
[
"Luo",
"Dongsheng",
""
],
[
"Zilverstand",
"Anna",
""
],
[
"Liu",
"Feng",
""
]
]
| TITLE: NeuroTree: Hierarchical Functional Brain Pathway Decoding for Mental
Health Disorders
ABSTRACT: Analyzing functional brain networks using functional magnetic resonance
imaging (fMRI) is crucial for understanding psychiatric disorders and addictive
behaviors. While existing fMRI-based graph convolutional networks (GCNs) show
considerable promise for feature extraction, they often fall short in
characterizing complex relationships between brain regions and demographic
factors and accounting for interpretable variables linked to psychiatric
conditions. We propose NeuroTree to overcome these limitations, integrating a
k-hop AGE-GCN with neural ordinary differential equations (ODEs). This
framework leverages an attention mechanism to optimize functional connectivity
(FC), thereby enhancing dynamic FC feature learning for brain disease
classification. Furthermore, NeuroTree effectively decodes fMRI network
features into tree structures, which improves the capture of high-order brain
regional pathway features and enables the identification of hierarchical neural
behavioral patterns essential for understanding disease-related brain
subnetworks. Our empirical evaluations demonstrate that NeuroTree achieves
state-of-the-art performance across two distinct mental disorder datasets and
provides valuable insights into age-related deterioration patterns. These
findings underscore the model's efficacy in predicting psychiatric disorders
and elucidating their underlying neural mechanisms.
| no_new_dataset | 0.944638 |
2502.18889 | Tianyun Liu | Tianyun Liu | Clip-TTS: Contrastive Text-content and Mel-spectrogram, A High-Quality
Text-to-Speech Method based on Contextual Semantic Understanding | null | null | null | null | cs.SD cs.AI cs.CL cs.HC cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional text-to-speech (TTS) methods primarily focus on establishing a
mapping between phonemes and mel-spectrograms. However, during the phoneme
encoding stage, there is often a lack of real mel-spectrogram auxiliary
information, which results in the encoding process lacking true semantic
understanding. At the same time, traditional TTS systems often struggle to
balance the inference speed of the model with the quality of the synthesized
speech. Methods that generate high-quality synthesized speech tend to have
slower inference speeds, while faster inference methods often sacrifice speech
quality. In this paper, I propose Clip-TTS, a TTS method based on the Clip
architecture. This method uses the Clip framework to establish a connection
between text content and real mel-spectrograms during the text encoding stage,
enabling the text encoder to directly learn the true semantics of the global
context, thereby ensuring the quality of the synthesized speech. In terms of
model architecture, I adopt the basic structure of Transformer, which allows
Clip-TTS to achieve fast inference speeds. Experimental results show that on
the LJSpeech and Baker datasets, the speech generated by Clip-TTS achieves
state-of-the-art MOS scores, and it also performs excellently on multi-emotion
datasets. Audio samples are available at: https://ltydd1314.github.io/.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 07:09:33 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 09:24:53 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liu",
"Tianyun",
""
]
]
| TITLE: Clip-TTS: Contrastive Text-content and Mel-spectrogram, A High-Quality
Text-to-Speech Method based on Contextual Semantic Understanding
ABSTRACT: Traditional text-to-speech (TTS) methods primarily focus on establishing a
mapping between phonemes and mel-spectrograms. However, during the phoneme
encoding stage, there is often a lack of real mel-spectrogram auxiliary
information, which results in the encoding process lacking true semantic
understanding. At the same time, traditional TTS systems often struggle to
balance the inference speed of the model with the quality of the synthesized
speech. Methods that generate high-quality synthesized speech tend to have
slower inference speeds, while faster inference methods often sacrifice speech
quality. In this paper, I propose Clip-TTS, a TTS method based on the Clip
architecture. This method uses the Clip framework to establish a connection
between text content and real mel-spectrograms during the text encoding stage,
enabling the text encoder to directly learn the true semantics of the global
context, thereby ensuring the quality of the synthesized speech. In terms of
model architecture, I adopt the basic structure of Transformer, which allows
Clip-TTS to achieve fast inference speeds. Experimental results show that on
the LJSpeech and Baker datasets, the speech generated by Clip-TTS achieves
state-of-the-art MOS scores, and it also performs excellently on multi-emotion
datasets. Audio samples are available at: https://ltydd1314.github.io/.
| no_new_dataset | 0.953579 |
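The core of the Clip-TTS record above is a CLIP-style contrastive objective that aligns text-content embeddings with real mel-spectrogram embeddings during text encoding. Below is a minimal sketch of such a symmetric contrastive loss; the temperature value, embedding sizes, and the stand-in random inputs are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(text_emb: torch.Tensor,
                          mel_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired text / mel embeddings."""
    text_emb = F.normalize(text_emb, dim=-1)
    mel_emb = F.normalize(mel_emb, dim=-1)
    logits = text_emb @ mel_emb.t() / temperature        # (B, B) cosine similarities
    targets = torch.arange(logits.size(0))               # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +     # text -> mel direction
                  F.cross_entropy(logits.t(), targets))  # mel -> text direction

# Toy usage: a batch of 8 paired 256-dimensional embeddings.
loss = clip_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```

In a full system the text embeddings would come from the Transformer text encoder and the mel embeddings from ground-truth spectrograms; only the loss shape is shown here.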
2502.18978 | Jie Li | Hongyi Cai, Jie Li, Wenzhen Dong | Low-Confidence Gold: Refining Low-Confidence Samples for Efficient
Instruction Tuning | 8 pages | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The effectiveness of instruction fine-tuning for Large Language Models is
fundamentally constrained by the quality and efficiency of training datasets.
This work introduces Low-Confidence Gold (LCG), a novel filtering framework
that employs centroid-based clustering and confidence-guided selection for
identifying valuable instruction pairs. Through a semi-supervised approach
using a lightweight classifier trained on representative samples, LCG curates
high-quality subsets while preserving data diversity. Experimental evaluation
demonstrates that models fine-tuned on LCG-filtered subsets of 6K samples
achieve superior performance compared to existing methods, with substantial
improvements on MT-bench and consistent gains across comprehensive evaluation
metrics. The framework's efficacy in curating compact training subsets while
maintaining model performance establishes a promising direction for efficient
instruction tuning.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 09:37:21 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2025 03:20:03 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Mar 2025 09:47:20 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Cai",
"Hongyi",
""
],
[
"Li",
"Jie",
""
],
[
"Dong",
"Wenzhen",
""
]
]
| TITLE: Low-Confidence Gold: Refining Low-Confidence Samples for Efficient
Instruction Tuning
ABSTRACT: The effectiveness of instruction fine-tuning for Large Language Models is
fundamentally constrained by the quality and efficiency of training datasets.
This work introduces Low-Confidence Gold (LCG), a novel filtering framework
that employs centroid-based clustering and confidence-guided selection for
identifying valuable instruction pairs. Through a semi-supervised approach
using a lightweight classifier trained on representative samples, LCG curates
high-quality subsets while preserving data diversity. Experimental evaluation
demonstrates that models fine-tuned on LCG-filtered subsets of 6K samples
achieve superior performance compared to existing methods, with substantial
improvements on MT-bench and consistent gains across comprehensive evaluation
metrics. The framework's efficacy in curating compact training subsets while
maintaining model performance establishes a promising direction for efficient
instruction tuning.
| no_new_dataset | 0.949106 |
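The LCG record above combines centroid-based clustering with confidence-guided selection from a lightweight classifier. The sketch below illustrates that general pipeline with scikit-learn; the cluster count, seed-set size, per-cluster budget, and the choice to keep the lowest-confidence samples (suggested by the method's name) are assumptions rather than the paper's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(6000, 64))        # toy embeddings of instruction pairs
y_seed = rng.integers(0, 2, size=200)  # labels for a small representative seed set

# 1) Centroid-based clustering preserves diversity across the candidate pool.
clusters = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(X)

# 2) A lightweight classifier trained on the seed set scores every sample.
clf = LogisticRegression(max_iter=1000).fit(X[:200], y_seed)
conf = clf.predict_proba(X)[:, 1]      # confidence that a pair is "high quality"

# 3) Confidence-guided selection: keep a fixed budget per cluster.
budget_per_cluster = 15
selected = []
for c in range(20):
    idx = np.where(clusters == c)[0]
    order = np.argsort(conf[idx])      # lowest-confidence samples first
    selected.extend(idx[order[:budget_per_cluster]].tolist())

print(len(selected), "samples kept for instruction tuning")
```

The selected subset would then replace the full pool for supervised fine-tuning.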
2502.19068 | Guoqiang Zhong | Huiqiang Wang, Mingchen Song, Guoqiang Zhong | Dynamic Degradation Decomposition Network for All-in-One Image
Restoration | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Currently, restoring clean images from a variety of degradation types using a
single model is still a challenging task. Existing all-in-one image restoration
approaches struggle with addressing complex and ambiguously defined degradation
types. In this paper, we introduce a dynamic degradation decomposition network
for all-in-one image restoration, named D$^3$Net. D$^3$Net achieves
degradation-adaptive image restoration with guided prompts through cross-domain
interaction and dynamic degradation decomposition. Concretely, in D$^3$Net, the
proposed Cross-Domain Degradation Analyzer (CDDA) engages in deep interaction
between frequency domain degradation characteristics and spatial domain image
features to identify and model variations of different degradation types on the
image manifold, generating a degradation correction prompt and a strategy prompt
that guide the subsequent decomposition process. Furthermore, we introduce the
prompt-based Dynamic Decomposition Mechanism (DDM) for progressive degradation
decomposition, which encourages the network to adaptively select restoration
strategies using the two-level prompt generated by the CDDA. Thanks to the
synergistic cooperation between CDDA and DDM, D$^3$Net achieves superior
flexibility and scalability in handling unknown degradation, while effectively
reducing unnecessary computational overhead. Extensive experiments on multiple
image restoration tasks demonstrate that D$^3$Net significantly outperforms the
state-of-the-art approaches, especially improving PSNR by 5.47dB and 3.30dB on
the SOTS-Outdoor and GoPro datasets, respectively.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 11:49:58 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 14:50:19 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Huiqiang",
""
],
[
"Song",
"Mingchen",
""
],
[
"Zhong",
"Guoqiang",
""
]
]
| TITLE: Dynamic Degradation Decomposition Network for All-in-One Image
Restoration
ABSTRACT: Currently, restoring clean images from a variety of degradation types using a
single model is still a challenging task. Existing all-in-one image restoration
approaches struggle with addressing complex and ambiguously defined degradation
types. In this paper, we introduce a dynamic degradation decomposition network
for all-in-one image restoration, named D$^3$Net. D$^3$Net achieves
degradation-adaptive image restoration with guided prompts through cross-domain
interaction and dynamic degradation decomposition. Concretely, in D$^3$Net, the
proposed Cross-Domain Degradation Analyzer (CDDA) engages in deep interaction
between frequency domain degradation characteristics and spatial domain image
features to identify and model variations of different degradation types on the
image manifold, generating a degradation correction prompt and a strategy prompt
that guide the subsequent decomposition process. Furthermore, we introduce the
prompt-based Dynamic Decomposition Mechanism (DDM) for progressive degradation
decomposition, which encourages the network to adaptively select restoration
strategies using the two-level prompt generated by the CDDA. Thanks to the
synergistic cooperation between CDDA and DDM, D$^3$Net achieves superior
flexibility and scalability in handling unknown degradation, while effectively
reducing unnecessary computational overhead. Extensive experiments on multiple
image restoration tasks demonstrate that D$^3$Net significantly outperforms the
state-of-the-art approaches, especially improving PSNR by 5.47dB and 3.30dB on
the SOTS-Outdoor and GoPro datasets, respectively.
| no_new_dataset | 0.945349 |
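The D$^3$Net record above centres on a Cross-Domain Degradation Analyzer that mixes frequency-domain and spatial-domain cues to produce degradation prompts. The toy module below sketches only that cross-domain idea; the layer sizes, FFT log-amplitude features, and fusion by concatenation are illustrative guesses, not the published architecture.

```python
import torch
import torch.nn as nn

class ToyDegradationAnalyzer(nn.Module):
    """Fuse spatial and frequency statistics of an image into a prompt vector."""
    def __init__(self, channels: int = 3, prompt_dim: int = 64):
        super().__init__()
        self.spatial = nn.Conv2d(channels, 16, kernel_size=3, padding=1)
        self.freq = nn.Conv2d(channels, 16, kernel_size=3, padding=1)
        self.to_prompt = nn.Linear(32, prompt_dim)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # Frequency-domain view: log-amplitude of the 2D FFT of the image.
        amp = torch.log1p(torch.abs(torch.fft.fft2(img)))
        s = self.spatial(img).mean(dim=(-2, -1))  # global spatial statistics
        f = self.freq(amp).mean(dim=(-2, -1))     # global spectral statistics
        return self.to_prompt(torch.cat([s, f], dim=-1))  # degradation prompt

# Toy usage on a random RGB image.
prompt = ToyDegradationAnalyzer()(torch.rand(1, 3, 64, 64))
print(prompt.shape)  # torch.Size([1, 64])
```

In the described system such a prompt would steer the dynamic decomposition of restoration strategies; that stage is not sketched here.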