id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
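As a hedged illustration (not part of the dataset itself), the Python sketch below shows how a single record with the schema above might be handled: the field values are copied from the first row that follows, the loading hint in the comments assumes a hypothetical file or repository path since none is given here, and `display_authors` simply rebuilds display names from the `authors_parsed` triples.

```python
# Minimal sketch for working with one record of the schema above.
# The loading path mentioned below is hypothetical; rows could equally come from
# pandas.read_json("<dump>.jsonl", lines=True) or a Hugging Face datasets repo.
record = {
    "id": "2503.19897",
    "submitter": "Lifu Wang",
    "authors": "Lifu Wang, Daqing Liu, Xinchen Liu, Xiaodong He",
    "title": "Scaling Down Text Encoders of Text-to-Image Diffusion Models",
    "categories": "cs.CV",
    "versions": [{"version": "v1", "created": "Tue, 25 Mar 2025 17:55:20 GMT"}],
    "update_date": "2025-03-26T00:00:00",
    "authors_parsed": [["Wang", "Lifu", ""], ["Liu", "Daqing", ""],
                       ["Liu", "Xinchen", ""], ["He", "Xiaodong", ""]],
}

def display_authors(parsed):
    """Rebuild 'First Last Suffix' names from [last, first, suffix] triples."""
    return ", ".join(
        " ".join(part for part in (first, last, suffix) if part)
        for last, first, suffix in parsed
    )

print(display_authors(record["authors_parsed"]))
# -> Lifu Wang, Daqing Liu, Xinchen Liu, Xiaodong He
print(record["versions"][-1]["created"])  # timestamp of the latest version
```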
2503.19897 | Lifu Wang | Lifu Wang, Daqing Liu, Xinchen Liu, Xiaodong He | Scaling Down Text Encoders of Text-to-Image Diffusion Models | accepted by CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Text encoders in diffusion models have rapidly evolved, transitioning from
CLIP to T5-XXL. Although this evolution has significantly enhanced the models'
ability to understand complex prompts and generate text, it also leads to a
substantial increase in the number of parameters. Despite T5 series encoders
being trained on the C4 natural language corpus, which includes a significant
amount of non-visual data, diffusion models with T5 encoder do not respond to
those non-visual prompts, indicating redundancy in representational power.
Therefore, it raises an important question: "Do we really need such a large
text encoder?" In pursuit of an answer, we employ vision-based knowledge
distillation to train a series of T5 encoder models. To fully inherit its
capabilities, we constructed our dataset based on three criteria: image
quality, semantic understanding, and text-rendering. Our results demonstrate
a scaling-down pattern: the distilled T5-base model can generate images
of comparable quality to those produced by T5-XXL, while being 50 times smaller
in size. This reduction in model size significantly lowers the GPU requirements
for running state-of-the-art models such as FLUX and SD3, making high-quality
text-to-image generation more accessible.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 17:55:20 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Wang",
"Lifu",
""
],
[
"Liu",
"Daqing",
""
],
[
"Liu",
"Xinchen",
""
],
[
"He",
"Xiaodong",
""
]
] | TITLE: Scaling Down Text Encoders of Text-to-Image Diffusion Models
ABSTRACT: Text encoders in diffusion models have rapidly evolved, transitioning from
CLIP to T5-XXL. Although this evolution has significantly enhanced the models'
ability to understand complex prompts and generate text, it also leads to a
substantial increase in the number of parameters. Despite T5 series encoders
being trained on the C4 natural language corpus, which includes a significant
amount of non-visual data, diffusion models with T5 encoder do not respond to
those non-visual prompts, indicating redundancy in representational power.
Therefore, it raises an important question: "Do we really need such a large
text encoder?" In pursuit of an answer, we employ vision-based knowledge
distillation to train a series of T5 encoder models. To fully inherit its
capabilities, we constructed our dataset based on three criteria: image
quality, semantic understanding, and text-rendering. Our results demonstrate
a scaling-down pattern: the distilled T5-base model can generate images
of comparable quality to those produced by T5-XXL, while being 50 times smaller
in size. This reduction in model size significantly lowers the GPU requirements
for running state-of-the-art models such as FLUX and SD3, making high-quality
text-to-image generation more accessible.
|
2503.19910 | Chuong Huynh | Chuong Huynh, Jinyu Yang, Ashish Tawari, Mubarak Shah, Son Tran,
Raffay Hamid, Trishul Chilimbi, Abhinav Shrivastava | CoLLM: A Large Language Model for Composed Image Retrieval | CVPR 2025. Project page: https://collm-cvpr25.github.io/ | null | null | null | cs.CV cs.IR | http://creativecommons.org/licenses/by/4.0/ | Composed Image Retrieval (CIR) is a complex task that aims to retrieve images
based on a multimodal query. Typical training data consists of triplets
containing a reference image, a textual description of desired modifications,
and the target image, which are expensive and time-consuming to acquire. The
scarcity of CIR datasets has led to zero-shot approaches utilizing synthetic
triplets or leveraging vision-language models (VLMs) with ubiquitous
web-crawled image-caption pairs. However, these methods have significant
limitations: synthetic triplets suffer from limited scale, lack of diversity,
and unnatural modification text, while image-caption pairs hinder joint
embedding learning of the multimodal query due to the absence of triplet data.
Moreover, existing approaches struggle with complex and nuanced modification
texts that demand sophisticated fusion and understanding of vision and language
modalities. We present CoLLM, a one-stop framework that effectively addresses
these limitations. Our approach generates triplets on-the-fly from
image-caption pairs, enabling supervised training without manual annotation. We
leverage Large Language Models (LLMs) to generate joint embeddings of reference
images and modification texts, facilitating deeper multimodal fusion.
Additionally, we introduce Multi-Text CIR (MTCIR), a large-scale dataset
comprising 3.4M samples, and refine existing CIR benchmarks (CIRR and
Fashion-IQ) to enhance evaluation reliability. Experimental results demonstrate
that CoLLM achieves state-of-the-art performance across multiple CIR benchmarks
and settings. MTCIR yields competitive results, with up to 15% performance
improvement. Our refined benchmarks provide more reliable evaluation metrics
for CIR models, contributing to the advancement of this important field.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 17:59:50 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Huynh",
"Chuong",
""
],
[
"Yang",
"Jinyu",
""
],
[
"Tawari",
"Ashish",
""
],
[
"Shah",
"Mubarak",
""
],
[
"Tran",
"Son",
""
],
[
"Hamid",
"Raffay",
""
],
[
"Chilimbi",
"Trishul",
""
],
[
"Shrivastava",
"Abhinav",
""
]
] | TITLE: CoLLM: A Large Language Model for Composed Image Retrieval
ABSTRACT: Composed Image Retrieval (CIR) is a complex task that aims to retrieve images
based on a multimodal query. Typical training data consists of triplets
containing a reference image, a textual description of desired modifications,
and the target image, which are expensive and time-consuming to acquire. The
scarcity of CIR datasets has led to zero-shot approaches utilizing synthetic
triplets or leveraging vision-language models (VLMs) with ubiquitous
web-crawled image-caption pairs. However, these methods have significant
limitations: synthetic triplets suffer from limited scale, lack of diversity,
and unnatural modification text, while image-caption pairs hinder joint
embedding learning of the multimodal query due to the absence of triplet data.
Moreover, existing approaches struggle with complex and nuanced modification
texts that demand sophisticated fusion and understanding of vision and language
modalities. We present CoLLM, a one-stop framework that effectively addresses
these limitations. Our approach generates triplets on-the-fly from
image-caption pairs, enabling supervised training without manual annotation. We
leverage Large Language Models (LLMs) to generate joint embeddings of reference
images and modification texts, facilitating deeper multimodal fusion.
Additionally, we introduce Multi-Text CIR (MTCIR), a large-scale dataset
comprising 3.4M samples, and refine existing CIR benchmarks (CIRR and
Fashion-IQ) to enhance evaluation reliability. Experimental results demonstrate
that CoLLM achieves state-of-the-art performance across multiple CIR benchmarks
and settings. MTCIR yields competitive results, with up to 15% performance
improvement. Our refined benchmarks provide more reliable evaluation metrics
for CIR models, contributing to the advancement of this important field.
|
2503.19912 | Lingdong Kong | Xiang Xu and Lingdong Kong and Hui Shuai and Wenwei Zhang and Liang
Pan and Kai Chen and Ziwei Liu and Qingshan Liu | SuperFlow++: Enhanced Spatiotemporal Consistency for Cross-Modal Data
Pretraining | Preprint; 15 pages, 6 figures, 10 tables; Code at
https://github.com/Xiangxu-0103/SuperFlow | null | null | null | cs.CV cs.LG cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | LiDAR representation learning has emerged as a promising approach to reducing
reliance on costly and labor-intensive human annotations. While existing
methods primarily focus on spatial alignment between LiDAR and camera sensors,
they often overlook the temporal dynamics critical for capturing motion and
scene continuity in driving scenarios. To address this limitation, we propose
SuperFlow++, a novel framework that integrates spatiotemporal cues in both
pretraining and downstream tasks using consecutive LiDAR-camera pairs.
SuperFlow++ introduces four key components: (1) a view consistency alignment
module to unify semantic information across camera views, (2) a dense-to-sparse
consistency regularization mechanism to enhance feature robustness across
varying point cloud densities, (3) a flow-based contrastive learning approach
that models temporal relationships for improved scene understanding, and (4) a
temporal voting strategy that propagates semantic information across LiDAR
scans to improve prediction consistency. Extensive evaluations on 11
heterogeneous LiDAR datasets demonstrate that SuperFlow++ outperforms
state-of-the-art methods across diverse tasks and driving conditions.
Furthermore, by scaling both 2D and 3D backbones during pretraining, we uncover
emergent properties that provide deeper insights into developing scalable 3D
foundation models. With strong generalizability and computational efficiency,
SuperFlow++ establishes a new benchmark for data-efficient LiDAR-based
perception in autonomous driving. The code is publicly available at
https://github.com/Xiangxu-0103/SuperFlow
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 17:59:57 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Xu",
"Xiang",
""
],
[
"Kong",
"Lingdong",
""
],
[
"Shuai",
"Hui",
""
],
[
"Zhang",
"Wenwei",
""
],
[
"Pan",
"Liang",
""
],
[
"Chen",
"Kai",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Liu",
"Qingshan",
""
]
] | TITLE: SuperFlow++: Enhanced Spatiotemporal Consistency for Cross-Modal Data
Pretraining
ABSTRACT: LiDAR representation learning has emerged as a promising approach to reducing
reliance on costly and labor-intensive human annotations. While existing
methods primarily focus on spatial alignment between LiDAR and camera sensors,
they often overlook the temporal dynamics critical for capturing motion and
scene continuity in driving scenarios. To address this limitation, we propose
SuperFlow++, a novel framework that integrates spatiotemporal cues in both
pretraining and downstream tasks using consecutive LiDAR-camera pairs.
SuperFlow++ introduces four key components: (1) a view consistency alignment
module to unify semantic information across camera views, (2) a dense-to-sparse
consistency regularization mechanism to enhance feature robustness across
varying point cloud densities, (3) a flow-based contrastive learning approach
that models temporal relationships for improved scene understanding, and (4) a
temporal voting strategy that propagates semantic information across LiDAR
scans to improve prediction consistency. Extensive evaluations on 11
heterogeneous LiDAR datasets demonstrate that SuperFlow++ outperforms
state-of-the-art methods across diverse tasks and driving conditions.
Furthermore, by scaling both 2D and 3D backbones during pretraining, we uncover
emergent properties that provide deeper insights into developing scalable 3D
foundation models. With strong generalizability and computational efficiency,
SuperFlow++ establishes a new benchmark for data-efficient LiDAR-based
perception in autonomous driving. The code is publicly available at
https://github.com/Xiangxu-0103/SuperFlow
|
2503.19913 | Mingju Gao | Mingju Gao, Yike Pan, Huan-ang Gao, Zongzheng Zhang, Wenyi Li, Hao
Dong, Hao Tang, Li Yi, Hao Zhao | PartRM: Modeling Part-Level Dynamics with Large Cross-State
Reconstruction Model | Accepted to CVPR 2025. Project Page: https://partrm.c7w.tech/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As interest grows in world models that predict future states from current
observations and actions, accurately modeling part-level dynamics has become
increasingly relevant for various applications. Existing approaches, such as
Puppet-Master, rely on fine-tuning large-scale pre-trained video diffusion
models, which are impractical for real-world use due to the limitations of 2D
video representation and slow processing times. To overcome these challenges,
we present PartRM, a novel 4D reconstruction framework that simultaneously
models appearance, geometry, and part-level motion from multi-view images of a
static object. PartRM builds upon large 3D Gaussian reconstruction models,
leveraging their extensive knowledge of appearance and geometry in static
objects. To address data scarcity in 4D, we introduce the PartDrag-4D dataset,
providing multi-view observations of part-level dynamics across over 20,000
states. We enhance the model's understanding of interaction conditions with a
multi-scale drag embedding module that captures dynamics at varying
granularities. To prevent catastrophic forgetting during fine-tuning, we
implement a two-stage training process that focuses sequentially on motion and
appearance learning. Experimental results show that PartRM establishes a new
state-of-the-art in part-level motion learning and can be applied in
manipulation tasks in robotics. Our code, data, and models are publicly
available to facilitate future research.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 17:59:58 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Gao",
"Mingju",
""
],
[
"Pan",
"Yike",
""
],
[
"Gao",
"Huan-ang",
""
],
[
"Zhang",
"Zongzheng",
""
],
[
"Li",
"Wenyi",
""
],
[
"Dong",
"Hao",
""
],
[
"Tang",
"Hao",
""
],
[
"Yi",
"Li",
""
],
[
"Zhao",
"Hao",
""
]
] | TITLE: PartRM: Modeling Part-Level Dynamics with Large Cross-State
Reconstruction Model
ABSTRACT: As interest grows in world models that predict future states from current
observations and actions, accurately modeling part-level dynamics has become
increasingly relevant for various applications. Existing approaches, such as
Puppet-Master, rely on fine-tuning large-scale pre-trained video diffusion
models, which are impractical for real-world use due to the limitations of 2D
video representation and slow processing times. To overcome these challenges,
we present PartRM, a novel 4D reconstruction framework that simultaneously
models appearance, geometry, and part-level motion from multi-view images of a
static object. PartRM builds upon large 3D Gaussian reconstruction models,
leveraging their extensive knowledge of appearance and geometry in static
objects. To address data scarcity in 4D, we introduce the PartDrag-4D dataset,
providing multi-view observations of part-level dynamics across over 20,000
states. We enhance the model's understanding of interaction conditions with a
multi-scale drag embedding module that captures dynamics at varying
granularities. To prevent catastrophic forgetting during fine-tuning, we
implement a two-stage training process that focuses sequentially on motion and
appearance learning. Experimental results show that PartRM establishes a new
state-of-the-art in part-level motion learning and can be applied in
manipulation tasks in robotics. Our code, data, and models are publicly
available to facilitate future research.
|
2503.19914 | Sangwon Beak | Sangwon Beak, Hyeonwoo Kim, Hanbyul Joo | Learning 3D Object Spatial Relationships from Pre-trained 2D Diffusion
Models | Project Page: https://tlb-miss.github.io/oor/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present a method for learning 3D spatial relationships between object
pairs, referred to as object-object spatial relationships (OOR), by leveraging
synthetically generated 3D samples from pre-trained 2D diffusion models. We
hypothesize that images synthesized by 2D diffusion models inherently capture
plausible and realistic OOR cues, enabling efficient ways to collect a 3D
dataset to learn OOR for various unbounded object categories. Our approach
begins by synthesizing diverse images that capture plausible OOR cues, which we
then uplift into 3D samples. Leveraging our diverse collection of plausible 3D
samples for the object pairs, we train a score-based OOR diffusion model to
learn the distribution of their relative spatial relationships. Additionally,
we extend our pairwise OOR to multi-object OOR by enforcing consistency across
pairwise relations and preventing object collisions. Extensive experiments
demonstrate the robustness of our method across various object-object spatial
relationships, along with its applicability to real-world 3D scene arrangement
tasks using the OOR diffusion model.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 17:59:58 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Beak",
"Sangwon",
""
],
[
"Kim",
"Hyeonwoo",
""
],
[
"Joo",
"Hanbyul",
""
]
] | TITLE: Learning 3D Object Spatial Relationships from Pre-trained 2D Diffusion
Models
ABSTRACT: We present a method for learning 3D spatial relationships between object
pairs, referred to as object-object spatial relationships (OOR), by leveraging
synthetically generated 3D samples from pre-trained 2D diffusion models. We
hypothesize that images synthesized by 2D diffusion models inherently capture
plausible and realistic OOR cues, enabling efficient ways to collect a 3D
dataset to learn OOR for various unbounded object categories. Our approach
begins by synthesizing diverse images that capture plausible OOR cues, which we
then uplift into 3D samples. Leveraging our diverse collection of plausible 3D
samples for the object pairs, we train a score-based OOR diffusion model to
learn the distribution of their relative spatial relationships. Additionally,
we extend our pairwise OOR to multi-object OOR by enforcing consistency across
pairwise relations and preventing object collisions. Extensive experiments
demonstrate the robustness of our method across various object-object spatial
relationships, along with its applicability to real-world 3D scene arrangement
tasks using the OOR diffusion model.
|
1801.07691 | Junfeng Liu | Yicheng He, Junfeng Liu and Xia Ning | Drug Selection via Joint Push and Learning to Rank | null | IEEE/ACM Transactions on Computational Biology and Bioinformatics
( Volume: 17, Issue: 1, 01 Jan.-Feb. 2020) | 10.1109/TCBB.2018.2848908 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Selecting the right drugs for the right patients is a primary goal of
precision medicine. In this manuscript, we consider the problem of cancer drug
selection in a learning-to-rank framework. We have formulated the cancer drug
selection problem as accurately predicting (1) the ranking positions of
sensitive drugs and (2) the ranking orders among sensitive drugs in cancer cell
lines based on their responses to cancer drugs. We have developed a new
learning-to-rank method, denoted as pLETORg, that predicts drug ranking
structures in each cell line using drug latent vectors and cell line latent
vectors. The pLETORg method learns such latent vectors through explicitly
enforcing that, in the drug ranking list of each cell line, the sensitive drugs
are pushed above insensitive drugs, and meanwhile the ranking orders among
sensitive drugs are correct. Genomics information on cell lines is leveraged in
learning the latent vectors. Our experimental results on a benchmark cell
line-drug response dataset demonstrate that the new pLETORg significantly
outperforms the state-of-the-art method in prioritizing new sensitive drugs.
| [
{
"version": "v1",
"created": "Tue, 23 Jan 2018 18:26:54 GMT"
},
{
"version": "v2",
"created": "Fri, 18 May 2018 22:50:38 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"He",
"Yicheng",
""
],
[
"Liu",
"Junfeng",
""
],
[
"Ning",
"Xia",
""
]
] | TITLE: Drug Selection via Joint Push and Learning to Rank
ABSTRACT: Selecting the right drugs for the right patients is a primary goal of
precision medicine. In this manuscript, we consider the problem of cancer drug
selection in a learning-to-rank framework. We have formulated the cancer drug
selection problem as accurately predicting (1) the ranking positions of
sensitive drugs and (2) the ranking orders among sensitive drugs in cancer cell
lines based on their responses to cancer drugs. We have developed a new
learning-to-rank method, denoted as pLETORg, that predicts drug ranking
structures in each cell line using drug latent vectors and cell line latent
vectors. The pLETORg method learns such latent vectors through explicitly
enforcing that, in the drug ranking list of each cell line, the sensitive drugs
are pushed above insensitive drugs, and meanwhile the ranking orders among
sensitive drugs are correct. Genomics information on cell lines is leveraged in
learning the latent vectors. Our experimental results on a benchmark cell
line-drug response dataset demonstrate that the new pLETORg significantly
outperforms the state-of-the-art method in prioritizing new sensitive drugs.
|
2108.13898 | Wenjie Yin | Wenjie Yin, Rabab Alkhalifa, Arkaitz Zubiaga | The emojification of sentiment on social media: Collection and analysis
of a longitudinal Twitter sentiment dataset | corrected typo in appendix | null | null | null | cs.SI cs.CL cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social media, as a means for computer-mediated communication, has been
extensively used to study the sentiment expressed by users around events or
topics. There is however a gap in the longitudinal study of how sentiment
evolved in social media over the years. To fill this gap, we develop TM-Senti,
a new large-scale, distantly supervised Twitter sentiment dataset with over 184
million tweets and covering a time period of over seven years. We describe and
assess our methodology to put together a large-scale, emoticon- and emoji-based
labelled sentiment analysis dataset, along with an analysis of the resulting
dataset. Our analysis highlights interesting temporal changes, among others in
the increasing use of emojis over emoticons. We publicly release the dataset
for further research in tasks including sentiment analysis and text
classification of tweets. The dataset can be fully rehydrated including tweet
metadata and without missing tweets thanks to the archive of tweets publicly
available on the Internet Archive, which the dataset is based on.
| [
{
"version": "v1",
"created": "Tue, 31 Aug 2021 14:54:46 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Feb 2023 18:11:10 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 17:29:20 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yin",
"Wenjie",
""
],
[
"Alkhalifa",
"Rabab",
""
],
[
"Zubiaga",
"Arkaitz",
""
]
] | TITLE: The emojification of sentiment on social media: Collection and analysis
of a longitudinal Twitter sentiment dataset
ABSTRACT: Social media, as a means for computer-mediated communication, has been
extensively used to study the sentiment expressed by users around events or
topics. There is however a gap in the longitudinal study of how sentiment
evolved in social media over the years. To fill this gap, we develop TM-Senti,
a new large-scale, distantly supervised Twitter sentiment dataset with over 184
million tweets and covering a time period of over seven years. We describe and
assess our methodology to put together a large-scale, emoticon- and emoji-based
labelled sentiment analysis dataset, along with an analysis of the resulting
dataset. Our analysis highlights interesting temporal changes, among others in
the increasing use of emojis over emoticons. We publicly release the dataset
for further research in tasks including sentiment analysis and text
classification of tweets. The dataset can be fully rehydrated including tweet
metadata and without missing tweets thanks to the archive of tweets publicly
available on the Internet Archive, which the dataset is based on.
|
2205.05469 | Alhasan Abdellatif | Alhasan Abdellatif, Ahmed H. Elsheikh, Daniel Busby, Philippe Berthet | Generation of non-stationary stochastic fields using Generative
Adversarial Networks | null | null | 10.3389/feart.2025.1545002 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of generating geological facies conditioned on observed data,
samples corresponding to all possible conditions are not generally available in
the training set and hence the generation of these realizations depends primarily
on the generalization capability of the trained generative model. The problem
becomes more complex when applied to non-stationary fields. In this work, we
investigate the problem of using Generative Adversarial Networks (GANs) to
generate non-stationary geological channelized patterns and examine the
models' generalization capability at new spatial modes that were never seen in
the given training set. The developed training method based on
spatial-conditioning allowed for effective learning of the correlation between
the spatial conditions (i.e. non-stationary maps) and the realizations
implicitly without using additional loss terms or solving optimization problems
for every new given data after training. In addition, our models can be trained
on 2D and 3D samples. The results on real and artificial datasets show that we
were able to generate geologically-plausible realizations beyond the training
samples and with a strong correlation with the target maps.
| [
{
"version": "v1",
"created": "Wed, 11 May 2022 13:09:47 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 13:21:23 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Abdellatif",
"Alhasan",
""
],
[
"Elsheikh",
"Ahmed H.",
""
],
[
"Busby",
"Daniel",
""
],
[
"Berthet",
"Philippe",
""
]
] | TITLE: Generation of non-stationary stochastic fields using Generative
Adversarial Networks
ABSTRACT: In the context of generating geological facies conditioned on observed data,
samples corresponding to all possible conditions are not generally available in
the training set and hence the generation of these realizations depends primarily
on the generalization capability of the trained generative model. The problem
becomes more complex when applied to non-stationary fields. In this work, we
investigate the problem of using Generative Adversarial Networks (GANs) to
generate non-stationary geological channelized patterns and examine the
models' generalization capability at new spatial modes that were never seen in
the given training set. The developed training method based on
spatial-conditioning allowed for effective learning of the correlation between
the spatial conditions (i.e. non-stationary maps) and the realizations
implicitly without using additional loss terms or solving optimization problems
for every new given data after training. In addition, our models can be trained
on 2D and 3D samples. The results on real and artificial datasets show that we
were able to generate geologically-plausible realizations beyond the training
samples and with a strong correlation with the target maps.
|
2212.03699 | Junfeng Liu | Junfeng Liu, Christopher Symons, Ranga Raju Vatsavai | Persona-Based Conversational AI: State of the Art and Challenges | 2022 International Conference on Data Mining Workshops (ICDMW) | 2022 IEEE International Conference on Data Mining Workshops
(ICDMW) | 10.1109/ICDMW58026.2022.00129 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Conversational AI has become an increasingly prominent and practical
application of machine learning. However, existing conversational AI techniques
still suffer from various limitations. One such limitation is a lack of
well-developed methods for incorporating auxiliary information that could help
a model understand conversational context better. In this paper, we explore how
persona-based information could help improve the quality of response generation
in conversations. First, we provide a literature review focusing on the current
state-of-the-art methods that utilize persona information. We evaluate two
strong baseline methods, the Ranking Profile Memory Network and the
Poly-Encoder, on the NeurIPS ConvAI2 benchmark dataset. Our analysis elucidates
the importance of incorporating persona information into conversational
systems. Additionally, our study highlights several limitations with current
state-of-the-art methods and outlines challenges and future research directions
for advancing personalized conversational AI technology.
| [
{
"version": "v1",
"created": "Sun, 4 Dec 2022 18:16:57 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Junfeng",
""
],
[
"Symons",
"Christopher",
""
],
[
"Vatsavai",
"Ranga Raju",
""
]
] | TITLE: Persona-Based Conversational AI: State of the Art and Challenges
ABSTRACT: Conversational AI has become an increasingly prominent and practical
application of machine learning. However, existing conversational AI techniques
still suffer from various limitations. One such limitation is a lack of
well-developed methods for incorporating auxiliary information that could help
a model understand conversational context better. In this paper, we explore how
persona-based information could help improve the quality of response generation
in conversations. First, we provide a literature review focusing on the current
state-of-the-art methods that utilize persona information. We evaluate two
strong baseline methods, the Ranking Profile Memory Network and the
Poly-Encoder, on the NeurIPS ConvAI2 benchmark dataset. Our analysis elucidates
the importance of incorporating persona information into conversational
systems. Additionally, our study highlights several limitations with current
state-of-the-art methods and outlines challenges and future research directions
for advancing personalized conversational AI technology.
|
2302.05614 | Xin Liu | Xin Liu, Yaran Chen, Haoran Li, Boyu Li and Dongbin Zhao | Cross-domain Random Pre-training with Prototypes for Reinforcement
Learning | This work has been submitted to the IEEE for possible publication | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work has been submitted to the IEEE for possible publication. Copyright
may be transferred without notice, after which this version may no longer be
accessible. Unsupervised cross-domain Reinforcement Learning (RL) pre-training
shows great potential for challenging continuous visual control but poses a big
challenge. In this paper, we propose \textbf{C}ross-domain \textbf{R}andom
\textbf{P}re-\textbf{T}raining with \textbf{pro}totypes (CRPTpro), a novel,
efficient, and effective self-supervised cross-domain RL pre-training
framework. CRPTpro decouples data sampling from encoder pre-training, proposing
decoupled random collection to easily and quickly generate a qualified
cross-domain pre-training dataset. Moreover, a novel prototypical
self-supervised algorithm is proposed to pre-train an effective visual encoder
that is generic across different domains. Without finetuning, the cross-domain
encoder can be implemented for challenging downstream tasks defined in
different domains, either seen or unseen. Compared with recent advanced
methods, CRPTpro achieves better performance on downstream policy learning
without extra training on exploration agents for data collection, greatly
reducing the burden of pre-training. We conduct extensive experiments across
eight challenging continuous visual-control domains, including balance control,
robot locomotion, and manipulation. CRPTpro significantly outperforms the next
best Proto-RL(C) on 11/12 cross-domain downstream tasks with only 54.5\%
wall-clock pre-training time, exhibiting state-of-the-art pre-training
performance with greatly improved pre-training efficiency.
| [
{
"version": "v1",
"created": "Sat, 11 Feb 2023 06:32:28 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Mar 2024 07:29:42 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Mar 2024 09:34:11 GMT"
},
{
"version": "v4",
"created": "Sat, 22 Feb 2025 07:56:47 GMT"
},
{
"version": "v5",
"created": "Mon, 24 Mar 2025 07:52:21 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Xin",
""
],
[
"Chen",
"Yaran",
""
],
[
"Li",
"Haoran",
""
],
[
"Li",
"Boyu",
""
],
[
"Zhao",
"Dongbin",
""
]
] | TITLE: Cross-domain Random Pre-training with Prototypes for Reinforcement
Learning
ABSTRACT: This work has been submitted to the IEEE for possible publication. Copyright
may be transferred without notice, after which this version may no longer be
accessible. Unsupervised cross-domain Reinforcement Learning (RL) pre-training
shows great potential for challenging continuous visual control but poses a big
challenge. In this paper, we propose \textbf{C}ross-domain \textbf{R}andom
\textbf{P}re-\textbf{T}raining with \textbf{pro}totypes (CRPTpro), a novel,
efficient, and effective self-supervised cross-domain RL pre-training
framework. CRPTpro decouples data sampling from encoder pre-training, proposing
decoupled random collection to easily and quickly generate a qualified
cross-domain pre-training dataset. Moreover, a novel prototypical
self-supervised algorithm is proposed to pre-train an effective visual encoder
that is generic across different domains. Without finetuning, the cross-domain
encoder can be implemented for challenging downstream tasks defined in
different domains, either seen or unseen. Compared with recent advanced
methods, CRPTpro achieves better performance on downstream policy learning
without extra training on exploration agents for data collection, greatly
reducing the burden of pre-training. We conduct extensive experiments across
eight challenging continuous visual-control domains, including balance control,
robot locomotion, and manipulation. CRPTpro significantly outperforms the next
best Proto-RL(C) on 11/12 cross-domain downstream tasks with only 54.5\%
wall-clock pre-training time, exhibiting state-of-the-art pre-training
performance with greatly improved pre-training efficiency.
|
2306.07532 | Xuying Zhang | Xuying Zhang, Bowen Yin, Zheng Lin, Qibin Hou, Deng-Ping Fan,
Ming-Ming Cheng | Referring Camouflaged Object Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of referring camouflaged object detection (Ref-COD),
a new task that aims to segment specified camouflaged objects based on a small
set of referring images with salient target objects. We first assemble a
large-scale dataset, called R2C7K, which consists of 7K images covering 64
object categories in real-world scenarios. Then, we develop a simple but strong
dual-branch framework, dubbed R2CNet, with a reference branch embedding the
common representations of target objects from referring images and a
segmentation branch identifying and segmenting camouflaged objects under the
guidance of the common representations. In particular, we design a Referring
Mask Generation module to generate pixel-level prior mask and a Referring
Feature Enrichment module to enhance the capability of identifying specified
camouflaged objects. Extensive experiments show the superiority of our Ref-COD
methods over their COD counterparts in segmenting specified camouflaged objects
and identifying the main body of target objects. Our code and dataset are
publicly available at https://github.com/zhangxuying1004/RefCOD.
| [
{
"version": "v1",
"created": "Tue, 13 Jun 2023 04:15:37 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Jul 2023 05:15:34 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Mar 2025 14:12:57 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Xuying",
""
],
[
"Yin",
"Bowen",
""
],
[
"Lin",
"Zheng",
""
],
[
"Hou",
"Qibin",
""
],
[
"Fan",
"Deng-Ping",
""
],
[
"Cheng",
"Ming-Ming",
""
]
] | TITLE: Referring Camouflaged Object Detection
ABSTRACT: We consider the problem of referring camouflaged object detection (Ref-COD),
a new task that aims to segment specified camouflaged objects based on a small
set of referring images with salient target objects. We first assemble a
large-scale dataset, called R2C7K, which consists of 7K images covering 64
object categories in real-world scenarios. Then, we develop a simple but strong
dual-branch framework, dubbed R2CNet, with a reference branch embedding the
common representations of target objects from referring images and a
segmentation branch identifying and segmenting camouflaged objects under the
guidance of the common representations. In particular, we design a Referring
Mask Generation module to generate pixel-level prior mask and a Referring
Feature Enrichment module to enhance the capability of identifying specified
camouflaged objects. Extensive experiments show the superiority of our Ref-COD
methods over their COD counterparts in segmenting specified camouflaged objects
and identifying the main body of target objects. Our code and dataset are
publicly available at https://github.com/zhangxuying1004/RefCOD.
|
2306.14348 | Raed Al Kontar | Xubo Yue, Raed Al Kontar, Albert S. Berahas, Yang Liu, Blake N.
Johnson | Collaborative and Distributed Bayesian Optimization via Consensus:
Showcasing the Power of Collaboration for Optimal Design | 41 pages | IEEE Transactions on Automation Science and Engineering, 2025 | 10.1109/TASE.2025.3529349 | null | cs.LG cs.DC | http://creativecommons.org/licenses/by/4.0/ | Optimal design is a critical yet challenging task within many applications.
This challenge arises from the need for extensive trial and error, often done
through simulations or running field experiments. Fortunately, sequential
optimal design, also referred to as Bayesian optimization when using surrogates
with a Bayesian flavor, has played a key role in accelerating the design
process through efficient sequential sampling strategies. However, a key
opportunity exists nowadays. The increased connectivity of edge devices sets
forth a new collaborative paradigm for Bayesian optimization. A paradigm
whereby different clients collaboratively borrow strength from each other by
effectively distributing their experimentation efforts to improve and
fast-track their optimal design process. To this end, we bring the notion of
consensus to Bayesian optimization, where clients agree (i.e., reach a
consensus) on their next-to-sample designs. Our approach provides a generic and
flexible framework that can incorporate different collaboration mechanisms. In
lieu of this, we propose transitional collaborative mechanisms where clients
initially rely more on each other to maneuver through the early stages with
scant data, then, at the late stages, focus on their own objectives to get
client-specific solutions. Theoretically, we show the sub-linear growth in
regret for our proposed framework. Empirically, through simulated datasets and
a real-world collaborative sensor design experiment, we show that our framework
can effectively accelerate and improve the optimal design process and benefit
all participants.
| [
{
"version": "v1",
"created": "Sun, 25 Jun 2023 21:43:05 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Mar 2024 23:37:40 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yue",
"Xubo",
""
],
[
"Kontar",
"Raed Al",
""
],
[
"Berahas",
"Albert S.",
""
],
[
"Liu",
"Yang",
""
],
[
"Johnson",
"Blake N.",
""
]
] | TITLE: Collaborative and Distributed Bayesian Optimization via Consensus:
Showcasing the Power of Collaboration for Optimal Design
ABSTRACT: Optimal design is a critical yet challenging task within many applications.
This challenge arises from the need for extensive trial and error, often done
through simulations or running field experiments. Fortunately, sequential
optimal design, also referred to as Bayesian optimization when using surrogates
with a Bayesian flavor, has played a key role in accelerating the design
process through efficient sequential sampling strategies. However, a key
opportunity exists nowadays. The increased connectivity of edge devices sets
forth a new collaborative paradigm for Bayesian optimization. A paradigm
whereby different clients collaboratively borrow strength from each other by
effectively distributing their experimentation efforts to improve and
fast-track their optimal design process. To this end, we bring the notion of
consensus to Bayesian optimization, where clients agree (i.e., reach a
consensus) on their next-to-sample designs. Our approach provides a generic and
flexible framework that can incorporate different collaboration mechanisms. In
lieu of this, we propose transitional collaborative mechanisms where clients
initially rely more on each other to maneuver through the early stages with
scant data, then, at the late stages, focus on their own objectives to get
client-specific solutions. Theoretically, we show the sub-linear growth in
regret for our proposed framework. Empirically, through simulated datasets and
a real-world collaborative sensor design experiment, we show that our framework
can effectively accelerate and improve the optimal design process and benefit
all participants.
|
2307.01069 | Konstantin Pakulev Stanislavovich | Konstantin Pakulev, Alexander Vakhitov, Gonzalo Ferrer | NeSS-ST: Detecting Good and Stable Keypoints with a Neural Stability
Score and the Shi-Tomasi Detector | Camera-ready version of ICCV 2023 paper | null | 10.1109/ICCV51070.2023.00878 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning a feature point detector presents a challenge both due to the
ambiguity of the definition of a keypoint and, correspondingly, the need for
specially prepared ground truth labels for such points. In our work, we address
both of these issues by utilizing a combination of a hand-crafted Shi-Tomasi
detector, a specially designed metric that assesses the quality of keypoints,
the stability score (SS), and a neural network. We build on the principled and
localized keypoints provided by the Shi-Tomasi detector and learn the neural
network to select good feature points via the stability score. The neural
network incorporates the knowledge from the training targets in the form of the
neural stability score (NeSS). Therefore, our method is named NeSS-ST since it
combines the Shi-Tomasi detector and the properties of the neural stability
score. It only requires sets of images for training without dataset
pre-labeling or the need for reconstructed correspondence labels. We evaluate
NeSS-ST on HPatches, ScanNet, MegaDepth and IMC-PT demonstrating
state-of-the-art performance and good generalization on downstream tasks.
| [
{
"version": "v1",
"created": "Mon, 3 Jul 2023 14:50:14 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 14:04:21 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Pakulev",
"Konstantin",
""
],
[
"Vakhitov",
"Alexander",
""
],
[
"Ferrer",
"Gonzalo",
""
]
] | TITLE: NeSS-ST: Detecting Good and Stable Keypoints with a Neural Stability
Score and the Shi-Tomasi Detector
ABSTRACT: Learning a feature point detector presents a challenge both due to the
ambiguity of the definition of a keypoint and, correspondingly, the need for
specially prepared ground truth labels for such points. In our work, we address
both of these issues by utilizing a combination of a hand-crafted Shi-Tomasi
detector, a specially designed metric that assesses the quality of keypoints,
the stability score (SS), and a neural network. We build on the principled and
localized keypoints provided by the Shi-Tomasi detector and learn the neural
network to select good feature points via the stability score. The neural
network incorporates the knowledge from the training targets in the form of the
neural stability score (NeSS). Therefore, our method is named NeSS-ST since it
combines the Shi-Tomasi detector and the properties of the neural stability
score. It only requires sets of images for training without dataset
pre-labeling or the need for reconstructed correspondence labels. We evaluate
NeSS-ST on HPatches, ScanNet, MegaDepth and IMC-PT demonstrating
state-of-the-art performance and good generalization on downstream tasks.
|
2307.06608 | Jiaming Zhang | Jiaming Zhang, Lingyu Qiu, Qi Yi, Yige Li, Jitao Sang, Changsheng Xu,
and Dit-Yan Yeung | MF-CLIP: Leveraging CLIP as Surrogate Models for No-box Adversarial
Attacks | null | null | null | null | cs.LG cs.AI cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The vulnerability of Deep Neural Networks (DNNs) to adversarial attacks poses
a significant challenge to their deployment in safety-critical applications.
While extensive research has addressed various attack scenarios, the no-box
attack setting where adversaries have no prior knowledge, including access to
training data of the target model, remains relatively underexplored despite its
practical relevance. This work presents a systematic investigation into
leveraging large-scale Vision-Language Models (VLMs), particularly CLIP, as
surrogate models for executing no-box attacks. Our theoretical and empirical
analyses reveal a key limitation in the execution of no-box attacks stemming
from insufficient discriminative capabilities for direct application of vanilla
CLIP as a surrogate model. To address this limitation, we propose MF-CLIP: a
novel framework that enhances CLIP's effectiveness as a surrogate model through
margin-aware feature space optimization. Comprehensive evaluations across
diverse architectures and datasets demonstrate that MF-CLIP substantially
advances the state-of-the-art in no-box attacks, surpassing existing baselines
by 15.23% on standard models and achieving a 9.52% improvement on adversarially
trained models. Our code will be made publicly available to facilitate
reproducibility and future research in this direction.
| [
{
"version": "v1",
"created": "Thu, 13 Jul 2023 08:10:48 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jul 2023 01:27:57 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 15:27:02 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Jiaming",
""
],
[
"Qiu",
"Lingyu",
""
],
[
"Yi",
"Qi",
""
],
[
"Li",
"Yige",
""
],
[
"Sang",
"Jitao",
""
],
[
"Xu",
"Changsheng",
""
],
[
"Yeung",
"Dit-Yan",
""
]
] | TITLE: MF-CLIP: Leveraging CLIP as Surrogate Models for No-box Adversarial
Attacks
ABSTRACT: The vulnerability of Deep Neural Networks (DNNs) to adversarial attacks poses
a significant challenge to their deployment in safety-critical applications.
While extensive research has addressed various attack scenarios, the no-box
attack setting where adversaries have no prior knowledge, including access to
training data of the target model, remains relatively underexplored despite its
practical relevance. This work presents a systematic investigation into
leveraging large-scale Vision-Language Models (VLMs), particularly CLIP, as
surrogate models for executing no-box attacks. Our theoretical and empirical
analyses reveal a key limitation in the execution of no-box attacks stemming
from insufficient discriminative capabilities for direct application of vanilla
CLIP as a surrogate model. To address this limitation, we propose MF-CLIP: a
novel framework that enhances CLIP's effectiveness as a surrogate model through
margin-aware feature space optimization. Comprehensive evaluations across
diverse architectures and datasets demonstrate that MF-CLIP substantially
advances the state-of-the-art in no-box attacks, surpassing existing baselines
by 15.23% on standard models and achieving a 9.52% improvement on adversarially
trained models. Our code will be made publicly available to facilitate
reproducibility and future research in this direction.
|
2307.10299 | Xinwei Shen | Xinwei Shen, Peter B\"uhlmann, Armeen Taeb | Causality-oriented robustness: exploiting general noise interventions | null | null | null | null | stat.ME cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since distribution shifts are common in real-world applications, there is a
pressing need to develop prediction models that are robust against such shifts.
Existing frameworks, such as empirical risk minimization or distributionally
robust optimization, either lack generalizability for unseen distributions or
rely on postulated distance measures. Alternatively, causality offers a
data-driven and structural perspective to robust predictions. However, the
assumptions necessary for causal inference can be overly stringent, and the
robustness offered by such causal models often lacks flexibility. In this
paper, we focus on causality-oriented robustness and propose Distributional
Robustness via Invariant Gradients (DRIG), a method that exploits general noise
interventions in training data for robust predictions against unseen
interventions, and naturally interpolates between in-distribution prediction
and causality. In a linear setting, we prove that DRIG yields predictions that
are robust among a data-dependent class of distribution shifts. Furthermore, we
show that our framework includes anchor regression as a special case, and that
it yields prediction models that protect against more diverse perturbations. We
establish finite-sample results and extend our approach to semi-supervised
domain adaptation to further improve prediction performance. Finally, we
empirically validate our methods on synthetic simulations and on single-cell
and intensive health care datasets.
| [
{
"version": "v1",
"created": "Tue, 18 Jul 2023 16:22:50 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 15:37:46 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Shen",
"Xinwei",
""
],
[
"Bühlmann",
"Peter",
""
],
[
"Taeb",
"Armeen",
""
]
] | TITLE: Causality-oriented robustness: exploiting general noise interventions
ABSTRACT: Since distribution shifts are common in real-world applications, there is a
pressing need to develop prediction models that are robust against such shifts.
Existing frameworks, such as empirical risk minimization or distributionally
robust optimization, either lack generalizability for unseen distributions or
rely on postulated distance measures. Alternatively, causality offers a
data-driven and structural perspective to robust predictions. However, the
assumptions necessary for causal inference can be overly stringent, and the
robustness offered by such causal models often lacks flexibility. In this
paper, we focus on causality-oriented robustness and propose Distributional
Robustness via Invariant Gradients (DRIG), a method that exploits general noise
interventions in training data for robust predictions against unseen
interventions, and naturally interpolates between in-distribution prediction
and causality. In a linear setting, we prove that DRIG yields predictions that
are robust among a data-dependent class of distribution shifts. Furthermore, we
show that our framework includes anchor regression as a special case, and that
it yields prediction models that protect against more diverse perturbations. We
establish finite-sample results and extend our approach to semi-supervised
domain adaptation to further improve prediction performance. Finally, we
empirically validate our methods on synthetic simulations and on single-cell
and intensive health care datasets.
|
2307.10926 | Olivier Colliot | R. El Jurdi, G. Varoquaux, O. Colliot | Confidence Intervals for Performance Estimates in Brain MRI Segmentation | null | null | null | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Medical segmentation models are evaluated empirically. As such an evaluation
is based on a limited set of example images, it is unavoidably noisy. Beyond a
mean performance measure, reporting confidence intervals is thus crucial.
However, this is rarely done in medical image segmentation. The width of the
confidence interval depends on the test set size and on the spread of the
performance measure (its standard-deviation across the test set). For
classification, many test images are needed to avoid wide confidence intervals.
Segmentation, however, has not been studied, and it differs by the amount of
information brought by a given test image. In this paper, we study the typical
confidence intervals in the context of segmentation in 3D brain magnetic
resonance imaging (MRI). We carry out experiments using the standard nnU-net
framework, two datasets from the Medical Decathlon challenge that concern brain
MRI (hippocampus and brain tumor segmentation) and two performance measures:
the Dice Similarity Coefficient and the Hausdorff distance. We show that the
parametric confidence intervals are reasonable approximations of the bootstrap
estimates for varying test set sizes and spread of the performance metric.
Importantly, we show that the test size needed to achieve a given precision is
often much lower than for classification tasks. Typically, a 1\% wide
confidence interval requires about 100-200 test samples when the spread is low
(standard-deviation around 3\%). More difficult segmentation tasks may lead to
higher spreads and require over 1000 samples.
| [
{
"version": "v1",
"created": "Thu, 20 Jul 2023 14:52:45 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 09:47:01 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Mar 2025 18:22:12 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Jurdi",
"R. El",
""
],
[
"Varoquaux",
"G.",
""
],
[
"Colliot",
"O.",
""
]
] | TITLE: Confidence Intervals for Performance Estimates in Brain MRI Segmentation
ABSTRACT: Medical segmentation models are evaluated empirically. As such an evaluation
is based on a limited set of example images, it is unavoidably noisy. Beyond a
mean performance measure, reporting confidence intervals is thus crucial.
However, this is rarely done in medical image segmentation. The width of the
confidence interval depends on the test set size and on the spread of the
performance measure (its standard-deviation across the test set). For
classification, many test images are needed to avoid wide confidence intervals.
Segmentation, however, has not been studied, and it differs by the amount of
information brought by a given test image. In this paper, we study the typical
confidence intervals in the context of segmentation in 3D brain magnetic
resonance imaging (MRI). We carry out experiments using the standard nnU-net
framework, two datasets from the Medical Decathlon challenge that concern brain
MRI (hippocampus and brain tumor segmentation) and two performance measures:
the Dice Similarity Coefficient and the Hausdorff distance. We show that the
parametric confidence intervals are reasonable approximations of the bootstrap
estimates for varying test set sizes and spread of the performance metric.
Importantly, we show that the test size needed to achieve a given precision is
often much lower than for classification tasks. Typically, a 1\% wide
confidence interval requires about 100-200 test samples when the spread is low
(standard-deviation around 3\%). More difficult segmentation tasks may lead to
higher spreads and require over 1000 samples.
|
2309.01274 | Mohsen Zand | Mohsen Zand, Ali Etemad, Michael Greenspan | Diffusion Models with Deterministic Normalizing Flow Priors | 17 pages, 7 figures, Published in Transactions on Machine Learning
Research (TMLR) | https://openreview.net/pdf?id=ACMNVwcR6v, Transactions on Machine
Learning Research (TMLR), 2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For faster sampling and higher sample quality, we propose DiNof
($\textbf{Di}$ffusion with $\textbf{No}$rmalizing $\textbf{f}$low priors), a
technique that makes use of normalizing flows and diffusion models. We use
normalizing flows to parameterize the noisy data at any arbitrary step of the
diffusion process and utilize it as the prior in the reverse diffusion process.
More specifically, the forward noising process turns a data distribution into
partially noisy data, which are subsequently transformed into a Gaussian
distribution by a nonlinear process. The backward denoising procedure begins
with a prior created by sampling from the Gaussian distribution and applying
the invertible normalizing flow transformations deterministically. To generate
the data distribution, the prior then undergoes the remaining diffusion
stochastic denoising procedure. Through the reduction of the number of total
diffusion steps, we are able to speed up both the forward and backward
processes. More importantly, we improve the expressive power of diffusion
models by employing both deterministic and stochastic mappings. Experiments on
standard image generation datasets demonstrate the advantage of the proposed
method over existing approaches. On the unconditional CIFAR10 dataset, for
example, we achieve an FID of 2.01 and an Inception score of 9.96. Our method
also demonstrates competitive performance on CelebA-HQ-256 dataset as it
obtains an FID score of 7.11. Code is available at
https://github.com/MohsenZand/DiNof.
| [
{
"version": "v1",
"created": "Sun, 3 Sep 2023 21:26:56 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 03:57:35 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zand",
"Mohsen",
""
],
[
"Etemad",
"Ali",
""
],
[
"Greenspan",
"Michael",
""
]
] | TITLE: Diffusion Models with Deterministic Normalizing Flow Priors
ABSTRACT: For faster sampling and higher sample quality, we propose DiNof
($\textbf{Di}$ffusion with $\textbf{No}$rmalizing $\textbf{f}$low priors), a
technique that makes use of normalizing flows and diffusion models. We use
normalizing flows to parameterize the noisy data at any arbitrary step of the
diffusion process and utilize it as the prior in the reverse diffusion process.
More specifically, the forward noising process turns a data distribution into
partially noisy data, which are subsequently transformed into a Gaussian
distribution by a nonlinear process. The backward denoising procedure begins
with a prior created by sampling from the Gaussian distribution and applying
the invertible normalizing flow transformations deterministically. To generate
the data distribution, the prior then undergoes the remaining diffusion
stochastic denoising procedure. Through the reduction of the number of total
diffusion steps, we are able to speed up both the forward and backward
processes. More importantly, we improve the expressive power of diffusion
models by employing both deterministic and stochastic mappings. Experiments on
standard image generation datasets demonstrate the advantage of the proposed
method over existing approaches. On the unconditional CIFAR10 dataset, for
example, we achieve an FID of 2.01 and an Inception score of 9.96. Our method
also demonstrates competitive performance on CelebA-HQ-256 dataset as it
obtains an FID score of 7.11. Code is available at
https://github.com/MohsenZand/DiNof.
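To make the sampling recipe above concrete, the following is a loose sketch of DiNof-style generation; the callables and the partial step k are assumed interfaces for illustration, not the released code.

```python
import torch

@torch.no_grad()
def dinof_style_sample(flow_inverse, denoise_step, shape, k, device="cpu"):
    """Loose sketch (assumed interfaces): a normalizing flow maps Gaussian noise
    deterministically to the partially-noisy distribution at diffusion step k,
    then the usual stochastic reverse diffusion runs only for the remaining k steps.

    flow_inverse: maps z ~ N(0, I) to x_k (the flow's inverse pass)
    denoise_step: one reverse-diffusion update, (x_t, t) -> x_{t-1}
    """
    z = torch.randn(shape, device=device)   # sample the Gaussian prior
    x = flow_inverse(z)                     # deterministic flow prior -> x_k
    for t in range(k, 0, -1):               # remaining stochastic denoising steps
        x = denoise_step(x, t)
    return x
```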
|
2309.05186 | Xinpeng Ding | Xinpeng Ding, Jianhua Han, Hang Xu, Wei Zhang, Xiaomeng Li | HiLM-D: Enhancing MLLMs with Multi-Scale High-Resolution Details for
Autonomous Driving | Accepted by IJCV | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent efforts to use natural language for interpretable driving focus mainly
on planning, neglecting perception tasks. In this paper, we address this gap by
introducing ROLISP (Risk Object Localization and Intention and Suggestion
Prediction), which aims at interpretable risk object detection and suggestion
for ego-car motions. Accurate ROLISP implementation requires extensive
reasoning to identify critical traffic objects and infer their intentions,
prompting us to explore the capabilities of multimodal large language models
(MLLMs). However, the CLIP-ViT vision encoders in existing MLLMs have limited
perception performance and struggle to capture essential visual perception
information, e.g., high-resolution, multi-scale, and vision-related inductive
biases, which are important for autonomous driving. Addressing these
challenges, we introduce HiLM-D, a resource-efficient framework that enhances
visual information processing in MLLMs for ROLISP. Our method is motivated by
the fact that the primary variations in autonomous driving scenarios are the
motion trajectories rather than the semantic or appearance information (e.g.,
the shapes and colors) of objects. Hence, the visual process of HiLM-D is a
two-stream framework: (i) a temporal reasoning stream, receiving low-resolution
dynamic video content, to capture temporal semantics, and (ii) a spatial
perception stream, receiving a single high-resolution frame, to capture
holistic visual perception-related information. The spatial perception stream
is kept very lightweight by a well-designed P-Adapter, which is
training-efficient and easily integrated into existing MLLMs.
Experiments on the DRAMA-ROLISP dataset show HiLM-D's significant improvements
over current MLLMs, with gains of 3.7% in BLEU-4 for captioning and 8.7% in mIoU for
detection.
| [
{
"version": "v1",
"created": "Mon, 11 Sep 2023 01:24:13 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 07:07:59 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ding",
"Xinpeng",
""
],
[
"Han",
"Jianhua",
""
],
[
"Xu",
"Hang",
""
],
[
"Zhang",
"Wei",
""
],
[
"Li",
"Xiaomeng",
""
]
] | TITLE: HiLM-D: Enhancing MLLMs with Multi-Scale High-Resolution Details for
Autonomous Driving
ABSTRACT: Recent efforts to use natural language for interpretable driving focus mainly
on planning, neglecting perception tasks. In this paper, we address this gap by
introducing ROLISP (Risk Object Localization and Intention and Suggestion
Prediction), which aims at interpretable risk object detection and suggestion
for ego-car motions. Accurate ROLISP implementation requires extensive
reasoning to identify critical traffic objects and infer their intentions,
prompting us to explore the capabilities of multimodal large language models
(MLLMs). However, the CLIP-ViT vision encoders in existing MLLMs have limited
perception performance and struggle to capture essential visual perception
information, e.g., high-resolution, multi-scale, and vision-related inductive
biases, which are important for autonomous driving. Addressing these
challenges, we introduce HiLM-D, a resource-efficient framework that enhances
visual information processing in MLLMs for ROLISP. Our method is motivated by
the fact that the primary variations in autonomous driving scenarios are the
motion trajectories rather than the semantic or appearance information (e.g.,
the shapes and colors) of objects. Hence, the visual process of HiLM-D is a
two-stream framework: (i) a temporal reasoning stream, receiving low-resolution
dynamic video content, to capture temporal semantics, and (ii) a spatial
perception stream, receiving a single high-resolution frame, to capture
holistic visual perception-related information. The spatial perception stream
is kept very lightweight by a well-designed P-Adapter, which is
training-efficient and easily integrated into existing MLLMs.
Experiments on the DRAMA-ROLISP dataset show HiLM-D's significant improvements
over current MLLMs, with gains of 3.7% in BLEU-4 for captioning and 8.7% in mIoU for
detection.
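A rough sketch of the two-stream idea is given below; the module choices, dimensions, and the stand-in for the P-Adapter are assumptions for illustration and not the HiLM-D implementation.

```python
import torch
import torch.nn as nn

class TwoStreamSketch(nn.Module):
    """Rough sketch (assumed modules and sizes): a temporal stream consumes
    low-resolution video for motion/context, a spatial stream consumes one
    high-resolution frame, and a small adapter (standing in for the P-Adapter)
    projects the high-res features into the token space of an MLLM."""
    def __init__(self, dim=256, llm_dim=4096):
        super().__init__()
        self.temporal = nn.Conv3d(3, dim, kernel_size=3, stride=2, padding=1)
        self.spatial = nn.Conv2d(3, dim, kernel_size=16, stride=16)   # patchify
        self.adapter = nn.Sequential(nn.Linear(dim, dim // 4), nn.GELU(),
                                     nn.Linear(dim // 4, llm_dim))    # lightweight

    def forward(self, low_res_video, high_res_frame):
        # low_res_video: (B, 3, T, H, W); high_res_frame: (B, 3, H', W')
        temp = self.temporal(low_res_video).flatten(2).transpose(1, 2)
        spat = self.spatial(high_res_frame).flatten(2).transpose(1, 2)
        return temp, self.adapter(spat)   # temporal tokens, adapted spatial tokens
```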
|
2309.16633 | Zijian Dong | Yilei Wu, Zijian Dong, Chongyao Chen, Wangchunshu Zhou, Juan Helen
Zhou | SupReMix: Supervised Contrastive Learning for Medical Imaging Regression
with Mixup | The first two authors equally contributed to this work. Previously
titled "Mixup Your Own Pair", content extended and revised | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | In medical image analysis, regression plays a critical role in computer-aided
diagnosis. It enables quantitative measurements such as age prediction from
structural imaging, cardiac function quantification, and molecular measurement
from PET scans. While deep learning has shown promise for these tasks, most
approaches focus solely on optimizing regression loss or model architecture,
neglecting the quality of learned feature representations which are crucial for
robust clinical predictions. Directly applying representation learning
techniques designed for classification to regression often results in
fragmented representations in the latent space, yielding sub-optimal
performance. In this paper, we argue that the potential of contrastive learning
for medical image regression has been overshadowed due to the neglect of two
crucial aspects: ordinality-awareness and hardness. To address these
challenges, we propose Supervised Contrastive Learning for Medical Imaging
Regression with Mixup (SupReMix). It takes anchor-inclusive mixtures (mixup of
the anchor and a distinct negative sample) as hard negative pairs and
anchor-exclusive mixtures (mixup of two distinct negative samples) as hard
positive pairs at the embedding level. This strategy formulates harder
contrastive pairs by integrating richer ordinal information. Through
theoretical analysis and extensive experiments on six datasets spanning MRI,
X-ray, ultrasound, and PET modalities, we demonstrate that SupReMix fosters
continuous ordered representations, significantly improving regression
performance.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 17:38:59 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 04:22:54 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 19:37:46 GMT"
},
{
"version": "v4",
"created": "Sun, 23 Mar 2025 08:28:08 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wu",
"Yilei",
""
],
[
"Dong",
"Zijian",
""
],
[
"Chen",
"Chongyao",
""
],
[
"Zhou",
"Wangchunshu",
""
],
[
"Zhou",
"Juan Helen",
""
]
] | TITLE: SupReMix: Supervised Contrastive Learning for Medical Imaging Regression
with Mixup
ABSTRACT: In medical image analysis, regression plays a critical role in computer-aided
diagnosis. It enables quantitative measurements such as age prediction from
structural imaging, cardiac function quantification, and molecular measurement
from PET scans. While deep learning has shown promise for these tasks, most
approaches focus solely on optimizing regression loss or model architecture,
neglecting the quality of learned feature representations which are crucial for
robust clinical predictions. Directly applying representation learning
techniques designed for classification to regression often results in
fragmented representations in the latent space, yielding sub-optimal
performance. In this paper, we argue that the potential of contrastive learning
for medical image regression has been overshadowed due to the neglect of two
crucial aspects: ordinality-awareness and hardness. To address these
challenges, we propose Supervised Contrastive Learning for Medical Imaging
Regression with Mixup (SupReMix). It takes anchor-inclusive mixtures (mixup of
the anchor and a distinct negative sample) as hard negative pairs and
anchor-exclusive mixtures (mixup of two distinct negative samples) as hard
positive pairs at the embedding level. This strategy formulates harder
contrastive pairs by integrating richer ordinal information. Through
theoretical analysis and extensive experiments on six datasets spanning MRI,
X-ray, ultrasound, and PET modalities, we demonstrate that SupReMix fosters
continuous ordered representations, significantly improving regression
performance.
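The mixture construction can be sketched in a few lines; the interface and the fixed mixing coefficient below are assumptions rather than the paper's exact recipe.

```python
import torch

def supremix_style_pairs(anchor, negatives, lam=0.5):
    """Minimal sketch (assumed interface): given an anchor embedding (d,) and
    negative embeddings (N, d) with different labels, build anchor-inclusive
    mixtures as hard negatives and anchor-exclusive mixtures as hard positives,
    all in embedding space."""
    # Hard negatives: mix the anchor with each distinct negative.
    hard_neg = lam * anchor.unsqueeze(0) + (1 - lam) * negatives
    # Hard positives: mix shuffled pairs of negatives (anchor excluded;
    # occasional fixed points of the permutation are ignored in this sketch).
    idx = torch.randperm(negatives.size(0))
    hard_pos = lam * negatives + (1 - lam) * negatives[idx]
    return hard_pos, hard_neg
```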
|
2309.16770 | Junfeng Liu | Junfeng Liu, Christopher Symons, Ranga Raju Vatsavai | Persona-Coded Poly-Encoder: Persona-Guided Multi-Stream Conversational
Sentence Scoring | The 35th IEEE International Conference on Tools with Artificial
Intelligence (ICTAI) | 2023 IEEE 35th International Conference on Tools with Artificial
Intelligence (ICTAI) | 10.1109/ICTAI59109.2023.00044 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent advances in machine learning and deep learning have led to the
widespread use of Conversational AI in many practical applications. However, it
is still very challenging to leverage auxiliary information that can provide
conversational context or personalized tuning to improve the quality of
conversations. For example, there has only been limited research on using an
individual's persona information to improve conversation quality, and even
state-of-the-art conversational AI techniques are unable to effectively
leverage signals from heterogeneous sources of auxiliary data, such as
multi-modal interaction data, demographics, SDOH data, etc. In this paper, we
present a novel Persona-Coded Poly-Encoder method that leverages persona
information in a multi-stream encoding scheme to improve the quality of
response generation for conversations. To show the efficacy of the proposed
method, we evaluate our method on two different persona-based conversational
datasets and compare it against two state-of-the-art methods. Our experimental
results and analysis demonstrate that our method can improve conversation
quality over the baseline method Poly-Encoder by 3.32% and 2.94% in terms of
BLEU score and HR@1, respectively. More significantly, our method offers a path
to better utilization of multi-modal data in conversational tasks. Lastly, our
study outlines several challenges and future research directions for advancing
personalized conversational AI technology.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 18:07:01 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Dec 2023 18:45:12 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Junfeng",
""
],
[
"Symons",
"Christopher",
""
],
[
"Vatsavai",
"Ranga Raju",
""
]
] | TITLE: Persona-Coded Poly-Encoder: Persona-Guided Multi-Stream Conversational
Sentence Scoring
ABSTRACT: Recent advances in machine learning and deep learning have led to the
widespread use of Conversational AI in many practical applications. However, it
is still very challenging to leverage auxiliary information that can provide
conversational context or personalized tuning to improve the quality of
conversations. For example, there has only been limited research on using an
individual's persona information to improve conversation quality, and even
state-of-the-art conversational AI techniques are unable to effectively
leverage signals from heterogeneous sources of auxiliary data, such as
multi-modal interaction data, demographics, SDOH data, etc. In this paper, we
present a novel Persona-Coded Poly-Encoder method that leverages persona
information in a multi-stream encoding scheme to improve the quality of
response generation for conversations. To show the efficacy of the proposed
method, we evaluate our method on two different persona-based conversational
datasets and compare it against two state-of-the-art methods. Our experimental
results and analysis demonstrate that our method can improve conversation
quality over the baseline method Poly-Encoder by 3.32% and 2.94% in terms of
BLEU score and HR@1, respectively. More significantly, our method offers a path
to better utilization of multi-modal data in conversational tasks. Lastly, our
study outlines several challenges and future research directions for advancing
personalized conversational AI technology.
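A toy sketch of persona-guided multi-stream scoring is shown below; the shapes and the fusion rule are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def persona_aware_score(context_vecs, persona_vecs, candidate_vec):
    """Toy sketch (assumed shapes and fusion): the candidate response embedding
    (d,) attends separately over context codes and persona codes (each (N, d)),
    and the two attended summaries are averaged before the final dot-product score."""
    def attend(codes):
        w = F.softmax(codes @ candidate_vec, dim=0)      # attention weights (N,)
        return (w.unsqueeze(1) * codes).sum(0)           # attended summary (d,)
    fused = 0.5 * (attend(context_vecs) + attend(persona_vecs))
    return fused @ candidate_vec                         # scalar relevance score
```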
|
2310.00093 | Ahmad Sajedi | Ahmad Sajedi, Samir Khaki, Ehsan Amjadian, Lucy Z. Liu, Yuri A.
Lawryshyn, Konstantinos N. Plataniotis | DataDAM: Efficient Dataset Distillation with Attention Matching | Accepted in International Conference in Computer Vision (ICCV) 2023 | Proceedings of the IEEE/CVF International Conference on Computer
Vision (ICCV), October 2023, pages 17097-17107 | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Researchers have long tried to minimize training costs in deep learning while
maintaining strong generalization across diverse datasets. Emerging research on
dataset distillation aims to reduce training costs by creating a small
synthetic set that contains the information of a larger real dataset and
ultimately achieves test accuracy equivalent to a model trained on the whole
dataset. Unfortunately, the synthetic data generated by previous methods are
not guaranteed to distribute and discriminate as well as the original training
data, and they incur significant computational costs. Despite promising
results, there still exists a significant performance gap between models
trained on condensed synthetic sets and those trained on the whole dataset. In
this paper, we address these challenges using efficient Dataset Distillation
with Attention Matching (DataDAM), achieving state-of-the-art performance while
reducing training costs. Specifically, we learn synthetic images by matching
the spatial attention maps of real and synthetic data generated by different
layers within a family of randomly initialized neural networks. Our method
outperforms the prior methods on several datasets, including CIFAR10/100,
TinyImageNet, ImageNet-1K, and subsets of ImageNet-1K across most of the
settings, and achieves improvements of up to 6.5% and 4.1% on CIFAR100 and
ImageNet-1K, respectively. We also show that our high-quality distilled images
have practical benefits for downstream applications, such as continual learning
and neural architecture search.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 19:07:48 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Oct 2023 16:23:34 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Mar 2025 19:43:19 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Sajedi",
"Ahmad",
""
],
[
"Khaki",
"Samir",
""
],
[
"Amjadian",
"Ehsan",
""
],
[
"Liu",
"Lucy Z.",
""
],
[
"Lawryshyn",
"Yuri A.",
""
],
[
"Plataniotis",
"Konstantinos N.",
""
]
] | TITLE: DataDAM: Efficient Dataset Distillation with Attention Matching
ABSTRACT: Researchers have long tried to minimize training costs in deep learning while
maintaining strong generalization across diverse datasets. Emerging research on
dataset distillation aims to reduce training costs by creating a small
synthetic set that contains the information of a larger real dataset and
ultimately achieves test accuracy equivalent to a model trained on the whole
dataset. Unfortunately, the synthetic data generated by previous methods are
not guaranteed to distribute and discriminate as well as the original training
data, and they incur significant computational costs. Despite promising
results, there still exists a significant performance gap between models
trained on condensed synthetic sets and those trained on the whole dataset. In
this paper, we address these challenges using efficient Dataset Distillation
with Attention Matching (DataDAM), achieving state-of-the-art performance while
reducing training costs. Specifically, we learn synthetic images by matching
the spatial attention maps of real and synthetic data generated by different
layers within a family of randomly initialized neural networks. Our method
outperforms the prior methods on several datasets, including CIFAR10/100,
TinyImageNet, ImageNet-1K, and subsets of ImageNet-1K across most of the
settings, and achieves improvements of up to 6.5% and 4.1% on CIFAR100 and
ImageNet-1K, respectively. We also show that our high-quality distilled images
have practical benefits for downstream applications, such as continual learning
and neural architecture search.
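An attention-matching objective in this spirit can be sketched as follows; the layer weighting and normalization choices are assumptions rather than the DataDAM release.

```python
import torch
import torch.nn.functional as F

def spatial_attention(feat, p=2):
    """Channel-pooled spatial attention map from a conv feature map (B, C, H, W)."""
    attn = feat.abs().pow(p).sum(dim=1)            # (B, H, W)
    return F.normalize(attn.flatten(1), dim=1)     # per-sample L2 normalization

def attention_matching_loss(real_feats, syn_feats):
    """Sketch of an attention-matching objective (layer weighting and
    normalization are assumptions): match spatial attention maps of real and
    synthetic batches at every layer of a randomly initialized network."""
    return sum(F.mse_loss(spatial_attention(r), spatial_attention(s))
               for r, s in zip(real_feats, syn_feats))
```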
|
2311.10116 | Chong Wang | Chong Wang, Cheng Xu, Adeel Akram, Zhong Wang, Zhilin Shan, Qixing
Zhang | Wildfire Smoke Detection System: Model Architecture, Training Mechanism,
and Dataset | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vanilla Transformers focus on semantic relevance between mid- to high-level
features and are not good at extracting smoke features as they overlook subtle
changes in low-level features like color, transparency, and texture which are
essential for smoke recognition. To address this, we propose the Cross Contrast
Patch Embedding (CCPE) module based on the Swin Transformer. This module
leverages multi-scale spatial contrast information in both vertical and
horizontal directions to enhance the network's discrimination of underlying
details. By combining Cross Contrast with Transformer, we exploit the
advantages of Transformer in global receptive field and context modeling while
compensating for its inability to capture very low-level details, resulting in
a more powerful backbone network tailored for smoke recognition tasks.
Additionally, we introduce the Separable Negative Sampling Mechanism (SNSM) to
address supervision signal confusion during training and release the
SKLFS-WildFire Test dataset, the largest real-world wildfire test set to date,
for systematic evaluation. Extensive testing and evaluation on the benchmark
dataset FIgLib and the SKLFS-WildFire Test dataset show significant performance
improvements of the proposed method over baseline detection models. The code
and data are available at github.com/WCUSTC/CCPE.
| [
{
"version": "v1",
"created": "Thu, 16 Nov 2023 06:53:03 GMT"
},
{
"version": "v2",
"created": "Sun, 31 Dec 2023 09:40:10 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Mar 2025 00:34:39 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wang",
"Chong",
""
],
[
"Xu",
"Cheng",
""
],
[
"Akram",
"Adeel",
""
],
[
"Wang",
"Zhong",
""
],
[
"Shan",
"Zhilin",
""
],
[
"Zhang",
"Qixing",
""
]
] | TITLE: Wildfire Smoke Detection System: Model Architecture, Training Mechanism,
and Dataset
ABSTRACT: Vanilla Transformers focus on semantic relevance between mid- to high-level
features and are not good at extracting smoke features as they overlook subtle
changes in low-level features like color, transparency, and texture which are
essential for smoke recognition. To address this, we propose the Cross Contrast
Patch Embedding (CCPE) module based on the Swin Transformer. This module
leverages multi-scale spatial contrast information in both vertical and
horizontal directions to enhance the network's discrimination of underlying
details. By combining Cross Contrast with Transformer, we exploit the
advantages of Transformer in global receptive field and context modeling while
compensating for its inability to capture very low-level details, resulting in
a more powerful backbone network tailored for smoke recognition tasks.
Additionally, we introduce the Separable Negative Sampling Mechanism (SNSM) to
address supervision signal confusion during training and release the
SKLFS-WildFire Test dataset, the largest real-world wildfire test set to date,
for systematic evaluation. Extensive testing and evaluation on the benchmark
dataset FIgLib and the SKLFS-WildFire Test dataset show significant performance
improvements of the proposed method over baseline detection models. The code
and data are available at github.com/WCUSTC/CCPE.
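A very rough sketch of the cross-contrast idea appears below; the kernel sizes and the way contrast maps are injected are assumptions, not the released CCPE module.

```python
import torch
import torch.nn as nn

class CrossContrastSketch(nn.Module):
    """Very rough sketch (assumed design): horizontal and vertical difference
    maps expose subtle low-level changes in color/transparency/texture and are
    concatenated with a plain patch embedding before a Transformer backbone."""
    def __init__(self, dim=96, patch=4):
        super().__init__()
        self.embed = nn.Conv2d(3 + 6, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                          # x: (B, 3, H, W)
        dx = x - torch.roll(x, shifts=1, dims=3)   # horizontal contrast
        dy = x - torch.roll(x, shifts=1, dims=2)   # vertical contrast
        return self.embed(torch.cat([x, dx, dy], dim=1))
```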
|
2311.13186 | Somayeh Hussaini | Somayeh Hussaini, Michael Milford, Tobias Fischer | Applications of Spiking Neural Networks in Visual Place Recognition | 20 pages, 10 figures, IEEE Transactions on Robotics (TRO) | IEEE Transactions on Robotics 41 (2025) 518-537 | 10.1109/TRO.2024.3508053 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In robotics, Spiking Neural Networks (SNNs) are increasingly recognized for
their largely unrealized potential for energy efficiency and low latency,
particularly when implemented on neuromorphic hardware. Our paper highlights
three advancements for SNNs in Visual Place Recognition (VPR). Firstly, we
propose Modular SNNs, where each SNN represents a set of non-overlapping
geographically distinct places, enabling scalable networks for large
environments. Secondly, we present Ensembles of Modular SNNs, where multiple
networks represent the same place, significantly enhancing accuracy compared to
single-network models. Each of our Modular SNN modules is compact, comprising
only 1500 neurons and 474k synapses, making them ideally suited for ensembling
due to their small size. Lastly, we investigate the role of sequence matching
in SNN-based VPR, a technique where consecutive images are used to refine place
recognition. We demonstrate competitive performance of our method on a range of
datasets, including higher responsiveness to ensembling compared to
conventional VPR techniques and higher R@1 improvements with sequence matching
than VPR techniques with comparable baseline performance. Our contributions
highlight the viability of SNNs for VPR, offering scalable and robust
solutions, and paving the way for their application in various energy-sensitive
robotic tasks.
| [
{
"version": "v1",
"created": "Wed, 22 Nov 2023 06:26:24 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Aug 2024 00:56:39 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Nov 2024 06:53:21 GMT"
},
{
"version": "v4",
"created": "Mon, 24 Mar 2025 02:51:37 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hussaini",
"Somayeh",
""
],
[
"Milford",
"Michael",
""
],
[
"Fischer",
"Tobias",
""
]
] | TITLE: Applications of Spiking Neural Networks in Visual Place Recognition
ABSTRACT: In robotics, Spiking Neural Networks (SNNs) are increasingly recognized for
their largely unrealized potential for energy efficiency and low latency,
particularly when implemented on neuromorphic hardware. Our paper highlights
three advancements for SNNs in Visual Place Recognition (VPR). Firstly, we
propose Modular SNNs, where each SNN represents a set of non-overlapping
geographically distinct places, enabling scalable networks for large
environments. Secondly, we present Ensembles of Modular SNNs, where multiple
networks represent the same place, significantly enhancing accuracy compared to
single-network models. Each of our Modular SNN modules is compact, comprising
only 1500 neurons and 474k synapses, making them ideally suited for ensembling
due to their small size. Lastly, we investigate the role of sequence matching
in SNN-based VPR, a technique where consecutive images are used to refine place
recognition. We demonstrate competitive performance of our method on a range of
datasets, including higher responsiveness to ensembling compared to
conventional VPR techniques and higher R@1 improvements with sequence matching
than VPR techniques with comparable baseline performance. Our contributions
highlight the viability of SNNs for VPR, offering scalable and robust
solutions, and paving the way for their application in various energy-sensitive
robotic tasks.
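The sequence-matching step can be sketched as a simple diagonal search over a query-by-reference distance matrix; the exact matcher used in the paper may differ.

```python
import numpy as np

def sequence_match(dist, seq_len=5):
    """Sketch of single-image-to-sequence refinement (a simple constant-velocity
    diagonal sum; the paper's matcher may differ): given a query-by-reference
    distance matrix, score each reference by summing distances along a diagonal
    of length `seq_len` ending at that reference."""
    q, r = dist.shape
    scores = np.full(r, np.inf)
    for j in range(r - seq_len + 1):
        # assume queries q-seq_len..q-1 align with references j..j+seq_len-1
        scores[j + seq_len - 1] = sum(dist[q - seq_len + k, j + k]
                                      for k in range(seq_len))
    return int(np.argmin(scores))   # index of the best-matching reference
```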
|
2311.13811 | Ling Feng | Ling Feng, Tianhao Wu, Xiangrong Ren, Zhi Jing, Xuliang Duan | Education distillation: getting student models to learn in schools | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a new knowledge distillation method, called education
distillation (ED), which is inspired by the structured and progressive nature
of human learning. ED mimics the educational stages of primary school, middle
school, and university and designs teaching reference blocks. The student model
is split into a main body and multiple teaching reference blocks to learn from
teachers step by step. This promotes efficient knowledge distillation while
maintaining the architecture of the student model. Experimental results on the
CIFAR100, Tiny Imagenet, Caltech and Food-101 datasets show that the teaching
reference blocks can effectively avoid the problem of forgetting. Compared with
conventional single-teacher and multi-teacher knowledge distillation methods,
ED significantly improves the accuracy and generalization ability of the
student model. These findings highlight the potential of ED to improve model
performance across different architectures and datasets, indicating its value
in various deep learning scenarios. Code examples can be obtained at:
https://github.com/Revolutioner1/ED.git.
| [
{
"version": "v1",
"created": "Thu, 23 Nov 2023 05:20:18 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Nov 2023 02:32:54 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 01:49:29 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Feng",
"Ling",
""
],
[
"Wu",
"Tianhao",
""
],
[
"Ren",
"Xiangrong",
""
],
[
"Jing",
"Zhi",
""
],
[
"Duan",
"Xuliang",
""
]
] | TITLE: Education distillation: getting student models to learn in schools
ABSTRACT: This paper introduces a new knowledge distillation method, called education
distillation (ED), which is inspired by the structured and progressive nature
of human learning. ED mimics the educational stages of primary school, middle
school, and university and designs teaching reference blocks. The student model
is split into a main body and multiple teaching reference blocks to learn from
teachers step by step. This promotes efficient knowledge distillation while
maintaining the architecture of the student model. Experimental results on the
CIFAR100, Tiny Imagenet, Caltech and Food-101 datasets show that the teaching
reference blocks can effectively avoid the problem of forgetting. Compared with
conventional single-teacher and multi-teacher knowledge distillation methods,
ED significantly improves the accuracy and generalization ability of the
student model. These findings highlight the potential of ED to improve model
performance across different architectures and datasets, indicating its value
in various deep learning scenarios. Code examples can be obtained at:
https://github.com/Revolutioner1/ED.git.
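A sketch of the staged layout is given below; how the teaching reference blocks attach and when stages unlock are assumptions for illustration, not the released ED code.

```python
import torch.nn as nn

class StagedStudent(nn.Module):
    """Sketch (assumed layout): the student's main body is split into stages,
    and each stage carries a temporary teaching reference block so it can be
    supervised against the teacher before later stages are unlocked; the
    reference blocks are discarded after training, leaving the architecture
    of the student unchanged."""
    def __init__(self, stages, reference_blocks):
        super().__init__()
        self.stages = nn.ModuleList(stages)            # main body, kept
        self.refs = nn.ModuleList(reference_blocks)    # auxiliary heads, dropped later

    def forward(self, x, active_stages):
        for stage in self.stages[:active_stages]:      # "primary school" -> "university"
            x = stage(x)
        return self.refs[active_stages - 1](x)         # distill against the teacher here
```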
|
2312.07352 | Chinedu Nwoye | Chinedu Innocent Nwoye, Kareem Elgohary, Anvita Srinivas, Fauzan Zaid,
Jo\"el L. Lavanchy, Nicolas Padoy | CholecTrack20: A Multi-Perspective Tracking Dataset for Surgical Tools | Surgical tool tracking dataset paper, 11 pages, 10 figures, 3 tables,
CVPR 2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Tool tracking in surgical videos is essential for advancing computer-assisted
interventions, such as skill assessment, safety zone estimation, and
human-machine collaboration. However, the lack of context-rich datasets limits
AI applications in this field. Existing datasets rely on overly generic
tracking formalizations that fail to capture surgical-specific dynamics, such
as tools moving out of the camera's view or exiting the body. This results in
less clinically relevant trajectories and a lack of flexibility for real-world
surgical applications. Methods trained on these datasets often struggle with
visual challenges such as smoke, reflection, and bleeding, further exposing the
limitations of current approaches. We introduce CholecTrack20, a specialized
dataset for multi-class, multi-tool tracking in surgical procedures. It
redefines tracking formalization with three perspectives: (i) intraoperative,
(ii) intracorporeal, and (iii) visibility, enabling adaptable and clinically
meaningful tool trajectories. The dataset comprises 20 full-length surgical
videos, annotated at 1 fps, yielding over 35K frames and 65K labeled tool
instances. Annotations include spatial location, category, identity, operator,
phase, and scene visual challenge. Benchmarking state-of-the-art methods on
CholecTrack20 reveals significant performance gaps, with current approaches (<
45\% HOTA) failing to meet the accuracy required for clinical translation.
These findings motivate the need for advanced and intuitive tracking algorithms
and establish CholecTrack20 as a foundation for developing robust AI-driven
surgical assistance systems.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 15:18:15 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 14:12:43 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Nwoye",
"Chinedu Innocent",
""
],
[
"Elgohary",
"Kareem",
""
],
[
"Srinivas",
"Anvita",
""
],
[
"Zaid",
"Fauzan",
""
],
[
"Lavanchy",
"Joël L.",
""
],
[
"Padoy",
"Nicolas",
""
]
] | TITLE: CholecTrack20: A Multi-Perspective Tracking Dataset for Surgical Tools
ABSTRACT: Tool tracking in surgical videos is essential for advancing computer-assisted
interventions, such as skill assessment, safety zone estimation, and
human-machine collaboration. However, the lack of context-rich datasets limits
AI applications in this field. Existing datasets rely on overly generic
tracking formalizations that fail to capture surgical-specific dynamics, such
as tools moving out of the camera's view or exiting the body. This results in
less clinically relevant trajectories and a lack of flexibility for real-world
surgical applications. Methods trained on these datasets often struggle with
visual challenges such as smoke, reflection, and bleeding, further exposing the
limitations of current approaches. We introduce CholecTrack20, a specialized
dataset for multi-class, multi-tool tracking in surgical procedures. It
redefines tracking formalization with three perspectives: (i) intraoperative,
(ii) intracorporeal, and (iii) visibility, enabling adaptable and clinically
meaningful tool trajectories. The dataset comprises 20 full-length surgical
videos, annotated at 1 fps, yielding over 35K frames and 65K labeled tool
instances. Annotations include spatial location, category, identity, operator,
phase, and scene visual challenge. Benchmarking state-of-the-art methods on
CholecTrack20 reveals significant performance gaps, with current approaches (<
45\% HOTA) failing to meet the accuracy required for clinical translation.
These findings motivate the need for advanced and intuitive tracking algorithms
and establish CholecTrack20 as a foundation for developing robust AI-driven
surgical assistance systems.
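One way to picture the three tracking perspectives is a per-detection record carrying a separate identity under each formalization; the field names below are illustrative assumptions, not the official annotation schema.

```python
from dataclasses import dataclass

@dataclass
class ToolInstance:
    """Illustrative record layout (assumed field names): one detection can carry
    a separate track identity under each of the three formalizations above."""
    frame: int
    bbox: tuple                     # (x, y, w, h) in pixels
    category: str                   # e.g. a tool class label
    operator: str                   # who is handling the tool
    phase: str                      # surgical phase label
    track_id_intraoperative: int    # persists across the whole procedure
    track_id_intracorporeal: int    # resets when the tool leaves the body
    track_id_visibility: int        # resets when the tool leaves the camera view
```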
|
2312.09799 | Tian-Sheuan Chang | Yu-Han Sun, Chiang Lo-Hsuan Lee and Tian-Sheuan Chang | IQNet: Image Quality Assessment Guided Just Noticeable Difference
Prefiltering For Versatile Video Coding | null | in IEEE Open Journal of Circuits and Systems, vol. 5, pp. 17-27,
2024 | 10.1109/OJCAS.2023.3344094 | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Image prefiltering with just noticeable distortion (JND) improves coding
efficiency in a visually lossless way by filtering the perceptually redundant
information prior to compression. However, real JND cannot be well modeled with
inaccurate masking equations in traditional approaches or image-level
subjective tests in deep learning approaches. Thus, this paper proposes a fine-grained JND
prefiltering dataset guided by image quality assessment for accurate
block-level JND modeling. The dataset is constructed from decoded images to
include coding effects and is also perceptually enhanced with block overlap and
edge preservation. Furthermore, based on this dataset, we propose a lightweight
JND prefiltering network, IQNet, which can be applied directly to different
quantization cases with the same model and only needs 3K parameters. The
experimental results show that the proposed approach to Versatile Video Coding
could yield maximum/average bitrate savings of 41\%/15\% and 53\%/19\% for
all-intra and low-delay P configurations, respectively, with negligible
subjective quality loss. Our method demonstrates higher perceptual quality and
a model size that is an order of magnitude smaller than previous deep learning
methods.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 13:58:10 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Sun",
"Yu-Han",
""
],
[
"Lee",
"Chiang Lo-Hsuan",
""
],
[
"Chang",
"Tian-Sheuan",
""
]
] | TITLE: IQNet: Image Quality Assessment Guided Just Noticeable Difference
Prefiltering For Versatile Video Coding
ABSTRACT: Image prefiltering with just noticeable distortion (JND) improves coding
efficiency in a visually lossless way by filtering the perceptually redundant
information prior to compression. However, real JND cannot be well modeled with
inaccurate masking equations in traditional approaches or image-level
subjective tests in deep learning approaches. Thus, this paper proposes a fine-grained JND
prefiltering dataset guided by image quality assessment for accurate
block-level JND modeling. The dataset is constructed from decoded images to
include coding effects and is also perceptually enhanced with block overlap and
edge preservation. Furthermore, based on this dataset, we propose a lightweight
JND prefiltering network, IQNet, which can be applied directly to different
quantization cases with the same model and only needs 3K parameters. The
experimental results show that the proposed approach to Versatile Video Coding
could yield maximum/average bitrate savings of 41\%/15\% and 53\%/19\% for
all-intra and low-delay P configurations, respectively, with negligible
subjective quality loss. Our method demonstrates higher perceptual quality and
a model size that is an order of magnitude smaller than previous deep learning
methods.
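A toy sketch of JND-guided prefiltering is shown below; the tiny network and the way the JND bound is applied are assumptions, not the 3K-parameter IQNet.

```python
import torch
import torch.nn as nn

class JNDPrefilterSketch(nn.Module):
    """Toy sketch (assumed design): a tiny CNN predicts a per-pixel adjustment,
    which is clipped to a predicted just-noticeable threshold so the filtered
    frame stays visually lossless while becoming cheaper to compress."""
    def __init__(self, ch=8):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, 2, 3, padding=1))

    def forward(self, luma):                         # luma: (B, 1, H, W) in [0, 1]
        delta, jnd = self.body(luma).chunk(2, dim=1)
        jnd = torch.sigmoid(jnd) * 0.05              # small per-pixel JND budget
        delta = torch.minimum(torch.maximum(delta, -jnd), jnd)
        return luma + delta                          # prefiltered frame for the encoder
```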
|
2401.02764 | Hugo Chan-To-Hing | Hugo Chan-To-Hing, Bharadwaj Veeravalli | Fus-MAE: A cross-attention-based data fusion approach for Masked
Autoencoders in remote sensing | null | IGARSS 2024-2024 IEEE International Geoscience and Remote Sensing
Symposium. IEEE, 2024 | 10.1109/IGARSS53475.2024.10642424 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Self-supervised frameworks for representation learning have recently stirred
up interest among the remote sensing community, given their potential to
mitigate the high labeling costs associated with curating large satellite image
datasets. In the realm of multimodal data fusion, while the often-used
contrastive learning methods can help bridge the domain gap between different
sensor types, they rely on data augmentation techniques that require expertise
and careful design, especially for multispectral remote sensing data. A
possible but rather scarcely studied way to circumvent these limitations is to
use a masked image modelling-based pretraining strategy. In this paper, we
introduce Fus-MAE, a self-supervised learning framework based on masked
autoencoders that uses cross-attention to perform early and feature-level data
fusion between synthetic aperture radar and multispectral optical data - two
modalities with a significant domain gap. Our empirical findings demonstrate
that Fus-MAE can effectively compete with contrastive learning strategies
tailored for SAR-optical data fusion and outperforms other masked-autoencoders
frameworks trained on a larger corpus.
| [
{
"version": "v1",
"created": "Fri, 5 Jan 2024 11:36:21 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 21:58:13 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chan-To-Hing",
"Hugo",
""
],
[
"Veeravalli",
"Bharadwaj",
""
]
] | TITLE: Fus-MAE: A cross-attention-based data fusion approach for Masked
Autoencoders in remote sensing
ABSTRACT: Self-supervised frameworks for representation learning have recently stirred
up interest among the remote sensing community, given their potential to
mitigate the high labeling costs associated with curating large satellite image
datasets. In the realm of multimodal data fusion, while the often-used
contrastive learning methods can help bridge the domain gap between different
sensor types, they rely on data augmentation techniques that require expertise
and careful design, especially for multispectral remote sensing data. A
possible but rather scarcely studied way to circumvent these limitations is to
use a masked image modelling-based pretraining strategy. In this paper, we
introduce Fus-MAE, a self-supervised learning framework based on masked
autoencoders that uses cross-attention to perform early and feature-level data
fusion between synthetic aperture radar and multispectral optical data - two
modalities with a significant domain gap. Our empirical findings demonstrate
that Fus-MAE can effectively compete with contrastive learning strategies
tailored for SAR-optical data fusion and outperforms other masked-autoencoders
frameworks trained on a larger corpus.
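A bare-bones sketch of cross-attention fusion between the two modalities follows; dimensions and its placement in the encoder are assumptions rather than the Fus-MAE release.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Bare-bones sketch (assumed dimensions and placement): optical tokens query
    SAR tokens so the fused representation can borrow information across the
    SAR-optical domain gap."""
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, optical_tokens, sar_tokens):
        fused, _ = self.attn(query=optical_tokens, key=sar_tokens, value=sar_tokens)
        return self.norm(optical_tokens + fused)   # residual connection + norm
```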
|
2401.13488 | Jian Luo | Shenshen Chen, Jian Luo, Dong Guo, Kai Gao, Yang Richard Yang | Fast Inverse Model Transformation: Algebraic Framework for Fast Data
Plane Verification | The paper is being polished | null | null | null | cs.NI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Data plane verification (DPV) analyzes routing tables and detects routing
abnormalities and policy violations during network operation and planning.
Thus, it has become an important tool to harden the networking infrastructure
and the computing systems building on top. Substantial advancements have been
made in the last decade and state-of-the-art DPV systems can achieve sub-us
verification for an update of a single forwarding rule.
In this paper, we introduce fast inverse model transformation (FIMT), the
first theoretical framework to systematically model and analyze centralized DPV
systems. FIMT reveals the algebraic structure in the model update process, a
key step in fast DPV systems. Thus, it can systematically analyze the
correctness of several DPV systems, using algebraic properties. The theory also
guides the design and implementation of NeoFlash, a refactored version of Flash
with new optimization techniques. Evaluations show that NeoFlash outperforms
existing state-of-the-art centralized DPV systems on various datasets and
reveals insights into key techniques for fast DPV.
| [
{
"version": "v1",
"created": "Wed, 24 Jan 2024 14:36:08 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Feb 2024 14:09:06 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Mar 2025 02:52:40 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Shenshen",
""
],
[
"Luo",
"Jian",
""
],
[
"Guo",
"Dong",
""
],
[
"Gao",
"Kai",
""
],
[
"Yang",
"Yang Richard",
""
]
] | TITLE: Fast Inverse Model Transformation: Algebraic Framework for Fast Data
Plane Verification
ABSTRACT: Data plane verification (DPV) analyzes routing tables and detects routing
abnormalities and policy violations during network operation and planning.
Thus, it has become an important tool to harden the networking infrastructure
and the computing systems building on top. Substantial advancements have been
made in the last decade and state-of-the-art DPV systems can achieve sub-us
verification for an update of a single forwarding rule.
In this paper, we introduce fast inverse model transformation (FIMT), the
first theoretical framework to systematically model and analyze centralized DPV
systems. FIMT reveals the algebraic structure in the model update process, a
key step in fast DPV systems. Thus, it can systematically analyze the
correctness of several DPV systems, using algebraic properties. The theory also
guides the design and implementation of NeoFlash, a refactored version of Flash
with new optimization techniques. Evaluations show that NeoFlash outperforms
existing state-of-the-art centralized DPV systems on various datasets and
reveals insights into key techniques for fast DPV.
|
2401.16458 | Javier Arroyo | Mario Sanz-Guerrero, Javier Arroyo | Credit Risk Meets Large Language Models: Building a Risk Indicator from
Loan Descriptions in P2P Lending | null | Inteligencia Artificial, 28(75) (2025), 220-247 | 10.4114/intartif.vol28iss75pp220-247 | null | q-fin.RM cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Peer-to-peer (P2P) lending connects borrowers and lenders through online
platforms but suffers from significant information asymmetry, as lenders often
lack sufficient data to assess borrowers' creditworthiness. This paper
addresses this challenge by leveraging BERT, a Large Language Model (LLM) known
for its ability to capture contextual nuances in text, to generate a risk score
based on borrowers' loan descriptions using a dataset from the Lending Club
platform. We fine-tune BERT to distinguish between defaulted and non-defaulted
loans using the loan descriptions provided by the borrowers. The resulting
BERT-generated risk score is then integrated as an additional feature into an
XGBoost classifier used at the loan granting stage, where decision-makers have
limited information available to guide their decisions. This integration
enhances predictive performance, with improvements in balanced accuracy and
AUC, highlighting the value of textual features in complementing traditional
inputs. Moreover, we find that the incorporation of the BERT score alters how
classification models utilize traditional input variables, with these changes
varying by loan purpose. These findings suggest that BERT discerns meaningful
patterns in loan descriptions, encompassing borrower-specific features,
specific purposes, and linguistic characteristics. However, the inherent
opacity of LLMs and their potential biases underscore the need for transparent
frameworks to ensure regulatory compliance and foster trust. Overall, this
study demonstrates how LLM-derived insights interact with traditional features
in credit risk modeling, opening new avenues to enhance the explainability and
fairness of these models.
| [
{
"version": "v1",
"created": "Mon, 29 Jan 2024 10:11:05 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Aug 2024 07:59:19 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Mar 2025 09:42:11 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Sanz-Guerrero",
"Mario",
""
],
[
"Arroyo",
"Javier",
""
]
] | TITLE: Credit Risk Meets Large Language Models: Building a Risk Indicator from
Loan Descriptions in P2P Lending
ABSTRACT: Peer-to-peer (P2P) lending connects borrowers and lenders through online
platforms but suffers from significant information asymmetry, as lenders often
lack sufficient data to assess borrowers' creditworthiness. This paper
addresses this challenge by leveraging BERT, a Large Language Model (LLM) known
for its ability to capture contextual nuances in text, to generate a risk score
based on borrowers' loan descriptions using a dataset from the Lending Club
platform. We fine-tune BERT to distinguish between defaulted and non-defaulted
loans using the loan descriptions provided by the borrowers. The resulting
BERT-generated risk score is then integrated as an additional feature into an
XGBoost classifier used at the loan granting stage, where decision-makers have
limited information available to guide their decisions. This integration
enhances predictive performance, with improvements in balanced accuracy and
AUC, highlighting the value of textual features in complementing traditional
inputs. Moreover, we find that the incorporation of the BERT score alters how
classification models utilize traditional input variables, with these changes
varying by loan purpose. These findings suggest that BERT discerns meaningful
patterns in loan descriptions, encompassing borrower-specific features,
specific purposes, and linguistic characteristics. However, the inherent
opacity of LLMs and their potential biases underscore the need for transparent
frameworks to ensure regulatory compliance and foster trust. Overall, this
study demonstrates how LLM-derived insights interact with traditional features
in credit risk modeling, opening new avenues to enhance the explainability and
fairness of these models.
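The two-stage pipeline can be sketched as follows; the model checkpoint, column handling, and hyper-parameters are assumptions, and the text model is assumed to have been fine-tuned on defaulted vs. non-defaulted descriptions beforehand.

```python
# Loose sketch of the two-stage pipeline (checkpoint names, column handling,
# and hyper-parameters are assumptions): a fine-tuned text classifier turns
# each loan description into a default-risk score, which is appended to the
# tabular features before fitting a gradient-boosted tree.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from xgboost import XGBClassifier

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)     # assumed already fine-tuned on loan texts

@torch.no_grad()
def risk_score(descriptions):
    batch = tok(descriptions, padding=True, truncation=True, return_tensors="pt")
    return torch.softmax(bert(**batch).logits, dim=-1)[:, 1].numpy()

# Hypothetical usage: X_tab holds numeric loan features, texts the descriptions,
# y the default labels.
#   X = np.column_stack([X_tab, risk_score(texts)])
#   XGBClassifier(n_estimators=300).fit(X, y)
```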
|
2402.06196 | Shervin Minaee | Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Chenaghlu,
Richard Socher, Xavier Amatriain, Jianfeng Gao | Large Language Models: A Survey | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have drawn a lot of attention due to their
strong performance on a wide range of natural language tasks, since the release
of ChatGPT in November 2022. LLMs' ability of general-purpose language
understanding and generation is acquired by training billions of model's
parameters on massive amounts of text data, as predicted by scaling laws
\cite{kaplan2020scaling,hoffmann2022training}. The research area of LLMs, while
very recent, is evolving rapidly in many different ways. In this paper, we
review some of the most prominent LLMs, including three popular LLM families
(GPT, LLaMA, PaLM), and discuss their characteristics, contributions and
limitations. We also give an overview of techniques developed to build and
augment LLMs. We then survey popular datasets prepared for LLM training,
fine-tuning, and evaluation, review widely used LLM evaluation metrics, and
compare the performance of several popular LLMs on a set of representative
benchmarks. Finally, we conclude the paper by discussing open challenges and
future research directions.
| [
{
"version": "v1",
"created": "Fri, 9 Feb 2024 05:37:09 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Feb 2024 13:33:49 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Mar 2025 14:51:01 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Minaee",
"Shervin",
""
],
[
"Mikolov",
"Tomas",
""
],
[
"Nikzad",
"Narjes",
""
],
[
"Chenaghlu",
"Meysam",
""
],
[
"Socher",
"Richard",
""
],
[
"Amatriain",
"Xavier",
""
],
[
"Gao",
"Jianfeng",
""
]
] | TITLE: Large Language Models: A Survey
ABSTRACT: Large Language Models (LLMs) have drawn a lot of attention due to their
strong performance on a wide range of natural language tasks, since the release
of ChatGPT in November 2022. LLMs' ability of general-purpose language
understanding and generation is acquired by training billions of model's
parameters on massive amounts of text data, as predicted by scaling laws
\cite{kaplan2020scaling,hoffmann2022training}. The research area of LLMs, while
very recent, is evolving rapidly in many different ways. In this paper, we
review some of the most prominent LLMs, including three popular LLM families
(GPT, LLaMA, PaLM), and discuss their characteristics, contributions and
limitations. We also give an overview of techniques developed to build and
augment LLMs. We then survey popular datasets prepared for LLM training,
fine-tuning, and evaluation, review widely used LLM evaluation metrics, and
compare the performance of several popular LLMs on a set of representative
benchmarks. Finally, we conclude the paper by discussing open challenges and
future research directions.
|
2402.07625 | Yifan Zhang | Yifan Zhang, Yifan Luo, Yang Yuan, Andrew Chi-Chih Yao | Autonomous Data Selection with Zero-shot Generative Classifiers for
Mathematical Texts | 24 pages, 8 figures. arXiv admin note: text overlap with
arXiv:0808.2664, arXiv:0806.2159, arXiv:1703.08834, arXiv:math/0610707 by
other authors | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Autonomous Data Selection (AutoDS), a method that leverages base
language models themselves as zero-shot "generative classifiers" to
automatically curate high-quality mathematical texts. Unlike prior approaches
that require human annotations or training a dedicated data filter, AutoDS
relies solely on a model's logits to determine whether a given passage is
mathematically informative and educational. By integrating AutoDS into a
continual pretraining pipeline, we substantially boost downstream performance
on challenging math benchmarks (MATH, GSM8K, and BBH) while using far fewer
tokens than previous methods. Empirically, our approach achieves roughly a
twofold improvement in pretraining token efficiency over strong baselines,
underscoring the potential of self-directed data selection in enhancing
mathematical reasoning. We release our curated AutoMathText dataset to
facilitate future research in automated domain-specific data curation. The
AutoMathText dataset is available at
https://huggingface.co/datasets/math-ai/AutoMathText. The code is available at
https://github.com/yifanzhang-pro/AutoMathText.
| [
{
"version": "v1",
"created": "Mon, 12 Feb 2024 13:09:21 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Apr 2024 04:17:30 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Oct 2024 22:08:22 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Feb 2025 01:02:54 GMT"
},
{
"version": "v5",
"created": "Sun, 23 Mar 2025 02:11:03 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Yifan",
""
],
[
"Luo",
"Yifan",
""
],
[
"Yuan",
"Yang",
""
],
[
"Yao",
"Andrew Chi-Chih",
""
]
] | TITLE: Autonomous Data Selection with Zero-shot Generative Classifiers for
Mathematical Texts
ABSTRACT: We present Autonomous Data Selection (AutoDS), a method that leverages base
language models themselves as zero-shot "generative classifiers" to
automatically curate high-quality mathematical texts. Unlike prior approaches
that require human annotations or training a dedicated data filter, AutoDS
relies solely on a model's logits to determine whether a given passage is
mathematically informative and educational. By integrating AutoDS into a
continual pretraining pipeline, we substantially boost downstream performance
on challenging math benchmarks (MATH, GSM8K, and BBH) while using far fewer
tokens than previous methods. Empirically, our approach achieves roughly a
twofold improvement in pretraining token efficiency over strong baselines,
underscoring the potential of self-directed data selection in enhancing
mathematical reasoning. We release our curated AutoMathText dataset to
facilitate future research in automated domain-specific data curation. The
AutoMathText dataset is available at
https://huggingface.co/datasets/math-ai/AutoMathText. The code is available at
https://github.com/yifanzhang-pro/AutoMathText.
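A simplified sketch of scoring a passage with a base model's logits is shown below; the prompt wording, the YES/NO readout, and the stand-in checkpoint are assumptions that may differ from the released AutoDS code.

```python
# Simplified sketch (assumed prompt wording and readout): compare the next-token
# logits of "YES" vs "NO" after asking whether the passage is mathematically
# informative and educational, and keep passages that score highly.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")          # stand-in base model
lm = AutoModelForCausalLM.from_pretrained("gpt2")

@torch.no_grad()
def autods_like_score(passage):
    prompt = (f"Text:\n{passage}\n\nIs this text mathematically informative "
              "and educational? Answer YES or NO: ")
    ids = tok(prompt, return_tensors="pt").input_ids
    logits = lm(ids).logits[0, -1]                   # next-token logits
    yes = logits[tok(" YES").input_ids[0]]           # first sub-token of " YES"
    no = logits[tok(" NO").input_ids[0]]             # first sub-token of " NO"
    return torch.sigmoid(yes - no).item()            # in (0, 1); higher = keep
```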
|
2402.18128 | Han Guo | Han Guo, Ramtin Hosseini, Ruiyi Zhang, Sai Ashish Somayajula, Ranak
Roy Chowdhury, Rajesh K. Gupta, Pengtao Xie | Downstream Task Guided Masking Learning in Masked Autoencoders Using
Multi-Level Optimization | Published in Transactions on Machine Learning Research (TMLR) | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Masked Autoencoder (MAE) is a notable method for self-supervised pretraining
in visual representation learning. It operates by randomly masking image
patches and reconstructing these masked patches using the unmasked ones. A key
limitation of MAE lies in its disregard for the varying informativeness of
different patches, as it uniformly selects patches to mask. To overcome this,
some approaches propose masking based on patch informativeness. However, these
methods often do not consider the specific requirements of downstream tasks,
potentially leading to suboptimal representations for these tasks. In response,
we introduce the Multi-level Optimized Mask Autoencoder (MLO-MAE), a novel
framework that leverages end-to-end feedback from downstream tasks to learn an
optimal masking strategy during pretraining. Our experimental findings
highlight MLO-MAE's significant advancements in visual representation learning.
Compared to existing methods, it demonstrates remarkable improvements across
diverse datasets and tasks, showcasing its adaptability and efficiency.
| [
{
"version": "v1",
"created": "Wed, 28 Feb 2024 07:37:26 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 19:12:25 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Guo",
"Han",
""
],
[
"Hosseini",
"Ramtin",
""
],
[
"Zhang",
"Ruiyi",
""
],
[
"Somayajula",
"Sai Ashish",
""
],
[
"Chowdhury",
"Ranak Roy",
""
],
[
"Gupta",
"Rajesh K.",
""
],
[
"Xie",
"Pengtao",
""
]
] | TITLE: Downstream Task Guided Masking Learning in Masked Autoencoders Using
Multi-Level Optimization
ABSTRACT: Masked Autoencoder (MAE) is a notable method for self-supervised pretraining
in visual representation learning. It operates by randomly masking image
patches and reconstructing these masked patches using the unmasked ones. A key
limitation of MAE lies in its disregard for the varying informativeness of
different patches, as it uniformly selects patches to mask. To overcome this,
some approaches propose masking based on patch informativeness. However, these
methods often do not consider the specific requirements of downstream tasks,
potentially leading to suboptimal representations for these tasks. In response,
we introduce the Multi-level Optimized Mask Autoencoder (MLO-MAE), a novel
framework that leverages end-to-end feedback from downstream tasks to learn an
optimal masking strategy during pretraining. Our experimental findings
highlight MLO-MAE's significant advancements in visual representation learning.
Compared to existing methods, it demonstrates remarkable improvements across
diverse datasets and tasks, showcasing its adaptability and efficiency.
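A highly simplified, alternating approximation of the multi-level idea can be written as a loop; all interfaces below are assumptions, and the real method optimizes the levels end to end rather than in this loose alternation.

```python
def mlo_mae_style_loop(mask_scorer, mae_step, downstream_step, val_loss,
                       scorer_opt, epochs=10):
    """Highly simplified alternating approximation (assumed interfaces; the real
    method is an end-to-end multi-level optimization): (1) pretrain the MAE with
    masks drawn from the current scorer, (2) fit the downstream head on MAE
    features, (3) nudge the mask scorer to reduce downstream validation loss.
    Assumes `val_loss()` returns a tensor differentiable w.r.t. the scorer."""
    for _ in range(epochs):
        mae_step(mask_scorer)     # level 1: MAE reconstruction with learned masks
        downstream_step()         # level 2: downstream task head
        scorer_opt.zero_grad()
        val_loss().backward()     # level 3: feedback into the masking strategy
        scorer_opt.step()
```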
|
2403.03372 | Jay Patrikar | Jay Patrikar, Joao Dantas, Brady Moon, Milad Hamidi, Sourish Ghosh,
Nikhil Keetha, Ian Higgins, Atharva Chandak, Takashi Yoneyama, and Sebastian
Scherer | TartanAviation: Image, Speech, and ADS-B Trajectory Datasets for
Terminal Airspace Operations | 8 pages, 6 figures, 2 tables | Scientific Data volume 12, Article number: 468 (2025) | 10.1038/s41597-025-04775-6 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We introduce TartanAviation, an open-source multi-modal dataset focused on
terminal-area airspace operations. TartanAviation provides a holistic view of
the airport environment by concurrently collecting image, speech, and ADS-B
trajectory data using setups installed inside airport boundaries. The datasets
were collected at both towered and non-towered airfields across multiple months
to capture diversity in aircraft operations, seasons, aircraft types, and
weather conditions. In total, TartanAviation provides 3.1M images, 3374 hours
of Air Traffic Control speech data, and 661 days of ADS-B trajectory data. The
data was filtered, processed, and validated to create a curated dataset. In
addition to the dataset, we also open-source the code-base used to collect and
pre-process the dataset, further enhancing accessibility and usability. We
believe this dataset has many potential use cases and would be particularly
vital in allowing AI and machine learning technologies to be integrated into
air traffic control systems and advance the adoption of autonomous aircraft in
the airspace.
| [
{
"version": "v1",
"created": "Tue, 5 Mar 2024 23:37:43 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Patrikar",
"Jay",
""
],
[
"Dantas",
"Joao",
""
],
[
"Moon",
"Brady",
""
],
[
"Hamidi",
"Milad",
""
],
[
"Ghosh",
"Sourish",
""
],
[
"Keetha",
"Nikhil",
""
],
[
"Higgins",
"Ian",
""
],
[
"Chandak",
"Atharva",
""
],
[
"Yoneyama",
"Takashi",
""
],
[
"Scherer",
"Sebastian",
""
]
] | TITLE: TartanAviation: Image, Speech, and ADS-B Trajectory Datasets for
Terminal Airspace Operations
ABSTRACT: We introduce TartanAviation, an open-source multi-modal dataset focused on
terminal-area airspace operations. TartanAviation provides a holistic view of
the airport environment by concurrently collecting image, speech, and ADS-B
trajectory data using setups installed inside airport boundaries. The datasets
were collected at both towered and non-towered airfields across multiple months
to capture diversity in aircraft operations, seasons, aircraft types, and
weather conditions. In total, TartanAviation provides 3.1M images, 3374 hours
of Air Traffic Control speech data, and 661 days of ADS-B trajectory data. The
data was filtered, processed, and validated to create a curated dataset. In
addition to the dataset, we also open-source the code-base used to collect and
pre-process the dataset, further enhancing accessibility and usability. We
believe this dataset has many potential use cases and would be particularly
vital in allowing AI and machine learning technologies to be integrated into
air traffic control systems and advance the adoption of autonomous aircraft in
the airspace.
|
2403.05451 | Amir Mohammad Mansourian | Amir M. Mansourian, Arya Jalali, Rozhan Ahmadi, Shohreh Kasaei | Attention-guided Feature Distillation for Semantic Segmentation | 26 pages, 10 figures, and 6 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning models have achieved significant results across various
computer vision tasks. However, due to the large number of parameters in these
models, deploying them in real-time scenarios is a critical challenge,
specifically in dense prediction tasks such as semantic segmentation. Knowledge
distillation has emerged as a successful technique for addressing this problem
by transferring knowledge from a cumbersome model (teacher) to a lighter model
(student). In contrast to existing complex methodologies commonly employed for
distilling knowledge from a teacher to a student, this paper showcases the
efficacy of a simple yet powerful method for utilizing refined feature maps to
transfer attention. The proposed method has proven to be effective in
distilling rich information, outperforming existing methods in semantic
segmentation as a dense prediction task. The proposed Attention-guided Feature
Distillation (AttnFD) method employs the Convolutional Block Attention Module
(CBAM), which refines feature maps by taking into account both channel-specific
and spatial information content. Simply using the Mean Squared Error (MSE) loss
function between the refined feature maps of the teacher and the student,
AttnFD demonstrates outstanding performance in semantic segmentation, achieving
state-of-the-art results in terms of improving the mean Intersection over Union
(mIoU) of the student network on the PascalVoc 2012, Cityscapes, COCO, and
CamVid datasets.
| [
{
"version": "v1",
"created": "Fri, 8 Mar 2024 16:57:47 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Aug 2024 13:58:16 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 06:34:34 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Mansourian",
"Amir M.",
""
],
[
"Jalali",
"Arya",
""
],
[
"Ahmadi",
"Rozhan",
""
],
[
"Kasaei",
"Shohreh",
""
]
] | TITLE: Attention-guided Feature Distillation for Semantic Segmentation
ABSTRACT: Deep learning models have achieved significant results across various
computer vision tasks. However, due to the large number of parameters in these
models, deploying them in real-time scenarios is a critical challenge,
specifically in dense prediction tasks such as semantic segmentation. Knowledge
distillation has emerged as a successful technique for addressing this problem
by transferring knowledge from a cumbersome model (teacher) to a lighter model
(student). In contrast to existing complex methodologies commonly employed for
distilling knowledge from a teacher to a student, this paper showcases the
efficacy of a simple yet powerful method for utilizing refined feature maps to
transfer attention. The proposed method has proven to be effective in
distilling rich information, outperforming existing methods in semantic
segmentation as a dense prediction task. The proposed Attention-guided Feature
Distillation (AttnFD) method employs the Convolutional Block Attention Module
(CBAM), which refines feature maps by taking into account both channel-specific
and spatial information content. Simply using the Mean Squared Error (MSE) loss
function between the refined feature maps of the teacher and the student,
AttnFD demonstrates outstanding performance in semantic segmentation, achieving
state-of-the-art results in terms of improving the mean Intersection over Union
(mIoU) of the student network on the PascalVoc 2012, Cityscapes, COCO, and
CamVid datasets.
|
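A minimal PyTorch sketch of the distillation idea summarized in the AttnFD record above: refine teacher and student feature maps with a CBAM-style attention module, then match the refined maps with an MSE loss. This is not the authors' implementation; the reduction ratio, spatial kernel size, and the toy tensors are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAM(nn.Module):
    """CBAM-style refinement: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Shared MLP over average- and max-pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Convolution over concatenated channel-wise average/max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        ca = torch.sigmoid(self.mlp(F.adaptive_avg_pool2d(x, 1)) +
                           self.mlp(F.adaptive_max_pool2d(x, 1)))
        x = x * ca
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(dim=1, keepdim=True),
             x.max(dim=1, keepdim=True).values], dim=1)))
        return x * sa

def attention_distillation_loss(student_feat, teacher_feat,
                                student_cbam, teacher_cbam):
    # MSE between attention-refined feature maps; teacher side is detached.
    return F.mse_loss(student_cbam(student_feat),
                      teacher_cbam(teacher_feat).detach())

# Toy usage: matching shapes are assumed; in practice a 1x1 conv would align
# the student's channel count with the teacher's.
s = torch.randn(2, 64, 32, 32)
t = torch.randn(2, 64, 32, 32)
loss = attention_distillation_loss(s, t, CBAM(64), CBAM(64))
```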
2403.07376 | Bingqian Lin | Bingqian Lin, Yunshuang Nie, Ziming Wei, Jiaqi Chen, Shikui Ma,
Jianhua Han, Hang Xu, Xiaojun Chang, Xiaodan Liang | NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning
Disentangled Reasoning | Accepted by TPAMI 2025 | null | null | null | cs.CV cs.AI cs.CL cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-and-Language Navigation (VLN), as a crucial research problem of
Embodied AI, requires an embodied agent to navigate through complex 3D
environments following natural language instructions. Recent research has
highlighted the promising capacity of large language models (LLMs) in VLN by
improving navigational reasoning accuracy and interpretability. However, their
predominant use in an offline manner usually suffers from substantial domain
gap between the VLN task and the LLM training corpus. This paper introduces a
novel strategy called Navigational Chain-of-Thought (NavCoT), where we fulfill
parameter-efficient in-domain training to enable self-guided navigational
decision, leading to a significant mitigation of the domain gap in a
cost-effective manner. Specifically, at each timestep, the LLM is prompted to
forecast the navigational chain-of-thought by: 1) acting as a world model to
imagine the next observation according to the instruction, 2) selecting the
candidate observation that best aligns with the imagination, and 3) determining
the action based on the reasoning from the prior steps. Through constructing
formalized labels for training, the LLM can learn to generate desired and
reasonable chain-of-thought outputs for improving the action decision.
Experimental results across various training settings and popular VLN
benchmarks (e.g., Room-to-Room (R2R), Room-across-Room (RxR), Room-for-Room
(R4R)) show the significant superiority of NavCoT over the direct action
prediction variants. Through simple parameter-efficient finetuning, our NavCoT
outperforms a recent GPT4-based approach with ~7% relative improvement on the
R2R dataset. We believe that NavCoT will help unlock more task-adaptive and
scalable LLM-based embodied agents, which are helpful for developing real-world
robotics applications. Code is available at
https://github.com/expectorlin/NavCoT.
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2024 07:27:02 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 11:04:36 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lin",
"Bingqian",
""
],
[
"Nie",
"Yunshuang",
""
],
[
"Wei",
"Ziming",
""
],
[
"Chen",
"Jiaqi",
""
],
[
"Ma",
"Shikui",
""
],
[
"Han",
"Jianhua",
""
],
[
"Xu",
"Hang",
""
],
[
"Chang",
"Xiaojun",
""
],
[
"Liang",
"Xiaodan",
""
]
] | TITLE: NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning
Disentangled Reasoning
ABSTRACT: Vision-and-Language Navigation (VLN), as a crucial research problem of
Embodied AI, requires an embodied agent to navigate through complex 3D
environments following natural language instructions. Recent research has
highlighted the promising capacity of large language models (LLMs) in VLN by
improving navigational reasoning accuracy and interpretability. However, their
predominant use in an offline manner usually suffers from substantial domain
gap between the VLN task and the LLM training corpus. This paper introduces a
novel strategy called Navigational Chain-of-Thought (NavCoT), where we fulfill
parameter-efficient in-domain training to enable self-guided navigational
decision, leading to a significant mitigation of the domain gap in a
cost-effective manner. Specifically, at each timestep, the LLM is prompted to
forecast the navigational chain-of-thought by: 1) acting as a world model to
imagine the next observation according to the instruction, 2) selecting the
candidate observation that best aligns with the imagination, and 3) determining
the action based on the reasoning from the prior steps. Through constructing
formalized labels for training, the LLM can learn to generate desired and
reasonable chain-of-thought outputs for improving the action decision.
Experimental results across various training settings and popular VLN
benchmarks (e.g., Room-to-Room (R2R), Room-across-Room (RxR), Room-for-Room
(R4R)) show the significant superiority of NavCoT over the direct action
prediction variants. Through simple parameter-efficient finetuning, our NavCoT
outperforms a recent GPT4-based approach with ~7% relative improvement on the
R2R dataset. We believe that NavCoT will help unlock more task-adaptive and
scalable LLM-based embodied agents, which are helpful for developing real-world
robotics applications. Code is available at
https://github.com/expectorlin/NavCoT.
|
2403.14103 | Bin Xie | Bin Xie, Hao Tang, Bin Duan, Dawen Cai, Yan Yan, Gady Agam | MaskSAM: Towards Auto-prompt SAM with Mask Classification for Volumetric
Medical Image Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Segment Anything Model (SAM), a prompt-driven foundation model for natural
image segmentation, has demonstrated impressive zero-shot performance. However,
SAM does not work when directly applied to medical image segmentation, since
SAM lacks the ability to predict semantic labels, requires additional prompts,
and presents suboptimal performance. To address these issues, we propose
MaskSAM, a novel mask classification prompt-free SAM adaptation framework for
medical image segmentation. We design a prompt generator combined with the
image encoder in SAM to generate a set of auxiliary classifier tokens,
auxiliary binary masks, and auxiliary bounding boxes. Each pair of auxiliary
mask and box prompts can solve the requirements of extra prompts. The semantic
label prediction can be addressed by the sum of the auxiliary classifier tokens
and the learnable global classifier tokens in the mask decoder of SAM.
Meanwhile, we design a 3D depth-convolution adapter for image embeddings and a
3D depth-MLP adapter for prompt embeddings to efficiently fine-tune SAM. Our
method achieves state-of-the-art performance on AMOS2022, 90.52% Dice, which
improved by 2.7% compared to nnUNet. Our method surpasses nnUNet by 1.7% on
ACDC and 1.0% on Synapse datasets.
| [
{
"version": "v1",
"created": "Thu, 21 Mar 2024 03:28:24 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 17:02:53 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xie",
"Bin",
""
],
[
"Tang",
"Hao",
""
],
[
"Duan",
"Bin",
""
],
[
"Cai",
"Dawen",
""
],
[
"Yan",
"Yan",
""
],
[
"Agam",
"Gady",
""
]
] | TITLE: MaskSAM: Towards Auto-prompt SAM with Mask Classification for Volumetric
Medical Image Segmentation
ABSTRACT: Segment Anything Model (SAM), a prompt-driven foundation model for natural
image segmentation, has demonstrated impressive zero-shot performance. However,
SAM does not work when directly applied to medical image segmentation, since
SAM lacks the ability to predict semantic labels, requires additional prompts,
and presents suboptimal performance. To address these issues, we propose
MaskSAM, a novel mask classification prompt-free SAM adaptation framework for
medical image segmentation. We design a prompt generator combined with the
image encoder in SAM to generate a set of auxiliary classifier tokens,
auxiliary binary masks, and auxiliary bounding boxes. Each pair of auxiliary
mask and box prompts can solve the requirements of extra prompts. The semantic
label prediction can be addressed by the sum of the auxiliary classifier tokens
and the learnable global classifier tokens in the mask decoder of SAM.
Meanwhile, we design a 3D depth-convolution adapter for image embeddings and a
3D depth-MLP adapter for prompt embeddings to efficiently fine-tune SAM. Our
method achieves state-of-the-art performance on AMOS2022, 90.52% Dice, which
improved by 2.7% compared to nnUNet. Our method surpasses nnUNet by 1.7% on
ACDC and 1.0% on Synapse datasets.
|
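The MaskSAM record above reports its headline results as Dice coefficients (e.g. 90.52% on AMOS2022). The snippet below is a generic per-class Dice computation for labelled volumes, not the benchmark's official evaluation code; the background-class convention and the smoothing term are assumptions.

```python
import numpy as np

def dice_per_class(pred, gt, num_classes, eps=1e-7):
    """pred, gt: integer label volumes of identical shape, e.g. (D, H, W)."""
    scores = []
    for c in range(1, num_classes):           # class 0 treated as background
        p = (pred == c)
        g = (gt == c)
        inter = np.logical_and(p, g).sum()
        scores.append((2.0 * inter + eps) / (p.sum() + g.sum() + eps))
    return np.array(scores)

pred = np.random.randint(0, 3, size=(8, 64, 64))
gt = np.random.randint(0, 3, size=(8, 64, 64))
print("mean Dice over foreground classes:", dice_per_class(pred, gt, 3).mean())
```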
2403.14368 | Aram Davtyan | Aram Davtyan, Sepehr Sameni, Bj\"orn Ommer, Paolo Favaro | CAGE: Unsupervised Visual Composition and Animation for Controllable
Video Generation | Published at AAAI2025; Project website:
https://araachie.github.io/cage | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The field of video generation has expanded significantly in recent years,
with controllable and compositional video generation garnering considerable
interest. Most methods rely on leveraging annotations such as text, objects'
bounding boxes, and motion cues, which require substantial human effort and
thus limit their scalability. In contrast, we address the challenge of
controllable and compositional video generation without any annotations by
introducing a novel unsupervised approach. Our model is trained from scratch on
a dataset of unannotated videos. At inference time, it can compose plausible
novel scenes and animate objects by placing object parts at the desired
locations in space and time. The core innovation of our method lies in the
unified control format and the training process, where video generation is
conditioned on a randomly selected subset of pre-trained self-supervised local
features. This conditioning compels the model to learn how to inpaint the
missing information in the video both spatially and temporally, thereby
learning the inherent compositionality of a scene and the dynamics of moving
objects. The abstraction level and the imposed invariance of the conditioning
input to minor visual perturbations enable control over object motion by simply
using the same features at all the desired future locations. We call our model
CAGE, which stands for visual Composition and Animation for video GEneration.
We conduct extensive experiments to validate the effectiveness of CAGE across
various scenarios, demonstrating its capability to accurately follow the
control and to generate high-quality videos that exhibit coherent scene
composition and realistic animation.
| [
{
"version": "v1",
"created": "Thu, 21 Mar 2024 12:50:15 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 14:21:55 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Davtyan",
"Aram",
""
],
[
"Sameni",
"Sepehr",
""
],
[
"Ommer",
"Björn",
""
],
[
"Favaro",
"Paolo",
""
]
] | TITLE: CAGE: Unsupervised Visual Composition and Animation for Controllable
Video Generation
ABSTRACT: The field of video generation has expanded significantly in recent years,
with controllable and compositional video generation garnering considerable
interest. Most methods rely on leveraging annotations such as text, objects'
bounding boxes, and motion cues, which require substantial human effort and
thus limit their scalability. In contrast, we address the challenge of
controllable and compositional video generation without any annotations by
introducing a novel unsupervised approach. Our model is trained from scratch on
a dataset of unannotated videos. At inference time, it can compose plausible
novel scenes and animate objects by placing object parts at the desired
locations in space and time. The core innovation of our method lies in the
unified control format and the training process, where video generation is
conditioned on a randomly selected subset of pre-trained self-supervised local
features. This conditioning compels the model to learn how to inpaint the
missing information in the video both spatially and temporally, thereby
learning the inherent compositionality of a scene and the dynamics of moving
objects. The abstraction level and the imposed invariance of the conditioning
input to minor visual perturbations enable control over object motion by simply
using the same features at all the desired future locations. We call our model
CAGE, which stands for visual Composition and Animation for video GEneration.
We conduct extensive experiments to validate the effectiveness of CAGE across
various scenarios, demonstrating its capability to accurately follow the
control and to generate high-quality videos that exhibit coherent scene
composition and realistic animation.
|
2404.10353 | Haodong Wen | Haodong Wen, Bodong Du, Ruixun Liu, Deyu Meng, Xiangyong Cao | Rethinking the Graph Polynomial Filter via Positive and Negative
Coupling Analysis | 13 pages, 8 figures, 6 tables | null | null | null | cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the optimization of polynomial filters within Spectral Graph Neural
Networks (GNNs) has emerged as a prominent research focus. Existing spectral
GNNs mainly emphasize polynomial properties in filter design, introducing
computational overhead and neglecting the integration of crucial graph
structure information. We argue that incorporating graph information into basis
construction can enhance understanding of polynomial basis, and further
facilitate simplified polynomial filter design. Motivated by this, we first
propose a Positive and Negative Coupling Analysis (PNCA) framework, where the
concepts of positive and negative activation are defined and their respective
and mixed effects are analysed. Then, we explore PNCA from the message
propagation perspective, revealing the subtle information hidden in the
activation process. Subsequently, PNCA is used to analyze the mainstream
polynomial filters, and a novel simple basis that decouples the positive and
negative activation and fully utilizes graph structure information is designed.
Finally, a simple GNN (called GSCNet) is proposed based on the new basis.
Experimental results on the benchmark datasets for node classification verify
that our GSCNet obtains better or comparable results compared with existing
state-of-the-art GNNs while demanding relatively less computational time.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 07:41:29 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wen",
"Haodong",
""
],
[
"Du",
"Bodong",
""
],
[
"Liu",
"Ruixun",
""
],
[
"Meng",
"Deyu",
""
],
[
"Cao",
"Xiangyong",
""
]
] | TITLE: Rethinking the Graph Polynomial Filter via Positive and Negative
Coupling Analysis
ABSTRACT: Recently, the optimization of polynomial filters within Spectral Graph Neural
Networks (GNNs) has emerged as a prominent research focus. Existing spectral
GNNs mainly emphasize polynomial properties in filter design, introducing
computational overhead and neglecting the integration of crucial graph
structure information. We argue that incorporating graph information into basis
construction can enhance understanding of polynomial basis, and further
facilitate simplified polynomial filter design. Motivated by this, we first
propose a Positive and Negative Coupling Analysis (PNCA) framework, where the
concepts of positive and negative activation are defined and their respective
and mixed effects are analysed. Then, we explore PNCA from the message
propagation perspective, revealing the subtle information hidden in the
activation process. Subsequently, PNCA is used to analyze the mainstream
polynomial filters, and a novel simple basis that decouples the positive and
negative activation and fully utilizes graph structure information is designed.
Finally, a simple GNN (called GSCNet) is proposed based on the new basis.
Experimental results on the benchmark datasets for node classification verify
that our GSCNet obtains better or comparable results compared with existing
state-of-the-art GNNs while demanding relatively less computational time.
|
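The record above concerns polynomial filters in spectral GNNs. As a point of reference, the sketch below applies a generic monomial-basis filter y = sum_k theta_k * A_hat^k X with a symmetrically normalized adjacency (self-loops added); the coefficients are arbitrary and the paper's PNCA-derived basis is not reproduced here.

```python
import numpy as np

def normalized_adjacency(A):
    A = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def polynomial_filter(A, X, theta):
    """Apply sum_k theta[k] * A_hat^k to node features X (n x f)."""
    A_hat = normalized_adjacency(A)
    out = theta[0] * X
    Z = X
    for coeff in theta[1:]:
        Z = A_hat @ Z                          # next power, applied to features
        out = out + coeff * Z
    return out

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.randn(3, 4)
Y = polynomial_filter(A, X, theta=[0.5, 0.3, 0.2])
```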
2404.17736 | Mingyu Yang | Mingyu Yang, Bowen Liu, Boyang Wang, Hun-Seok Kim | Diffusion-Aided Joint Source Channel Coding For High Realism Wireless
Image Transmission | null | null | null | null | eess.SP cs.CV cs.IT eess.IV math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning-based joint source-channel coding (deep JSCC) has been
demonstrated to be an effective approach for wireless image transmission.
Nevertheless, most existing work adopts an autoencoder framework to optimize
conventional criteria such as Mean Squared Error (MSE) and Structural
Similarity Index (SSIM) which do not suffice to maintain the perceptual quality
of reconstructed images. Such an issue is more prominent under stringent
bandwidth constraints or low signal-to-noise ratio (SNR) conditions. To tackle
this challenge, we propose DiffJSCC, a novel framework that leverages the prior
knowledge of the pre-trained Stable Diffusion model to produce high-realism
images via the conditional diffusion denoising process. Our DiffJSCC first
extracts multimodal spatial and textual features from the noisy channel symbols
in the generation phase. Then, it produces an initial reconstructed image as an
intermediate representation to aid robust feature extraction and a stable
training process. In the following diffusion step, DiffJSCC uses the derived
multimodal features, together with channel state information such as the
signal-to-noise ratio (SNR), as conditions to guide the denoising diffusion
process, which converts the initial random noise to the final reconstruction.
DiffJSCC employs a novel control module to fine-tune the Stable Diffusion model
and adjust it to the multimodal conditions. Extensive experiments on diverse
datasets reveal that our method significantly surpasses prior deep JSCC
approaches on both perceptual metrics and downstream task performance,
showcasing its ability to preserve the semantics of the original transmitted
images. Notably, DiffJSCC can achieve highly realistic reconstructions for
768x512 pixel Kodak images with only 3072 symbols (<0.008 symbols per pixel)
under 1dB SNR channels.
| [
{
"version": "v1",
"created": "Sat, 27 Apr 2024 00:12:13 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Jul 2024 05:33:10 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Mar 2025 00:52:41 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yang",
"Mingyu",
""
],
[
"Liu",
"Bowen",
""
],
[
"Wang",
"Boyang",
""
],
[
"Kim",
"Hun-Seok",
""
]
] | TITLE: Diffusion-Aided Joint Source Channel Coding For High Realism Wireless
Image Transmission
ABSTRACT: Deep learning-based joint source-channel coding (deep JSCC) has been
demonstrated to be an effective approach for wireless image transmission.
Nevertheless, most existing work adopts an autoencoder framework to optimize
conventional criteria such as Mean Squared Error (MSE) and Structural
Similarity Index (SSIM) which do not suffice to maintain the perceptual quality
of reconstructed images. Such an issue is more prominent under stringent
bandwidth constraints or low signal-to-noise ratio (SNR) conditions. To tackle
this challenge, we propose DiffJSCC, a novel framework that leverages the prior
knowledge of the pre-trained Stable Diffusion model to produce high-realism
images via the conditional diffusion denoising process. Our DiffJSCC first
extracts multimodal spatial and textual features from the noisy channel symbols
in the generation phase. Then, it produces an initial reconstructed image as an
intermediate representation to aid robust feature extraction and a stable
training process. In the following diffusion step, DiffJSCC uses the derived
multimodal features, together with channel state information such as the
signal-to-noise ratio (SNR), as conditions to guide the denoising diffusion
process, which converts the initial random noise to the final reconstruction.
DiffJSCC employs a novel control module to fine-tune the Stable Diffusion model
and adjust it to the multimodal conditions. Extensive experiments on diverse
datasets reveal that our method significantly surpasses prior deep JSCC
approaches on both perceptual metrics and downstream task performance,
showcasing its ability to preserve the semantics of the original transmitted
images. Notably, DiffJSCC can achieve highly realistic reconstructions for
768x512 pixel Kodak images with only 3072 symbols (<0.008 symbols per pixel)
under 1dB SNR channels.
|
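A quick check of the bandwidth figure quoted in the DiffJSCC record above (3072 channel symbols for a 768x512 Kodak image):

```python
symbols = 3072
pixels = 768 * 512
print(symbols / pixels)   # 0.0078125 symbols per pixel, i.e. < 0.008 as stated
```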
2404.18433 | Guoyang Xie | Zhuohao Li, Guoyang Xie, Guannan Jiang and Zhichao Lu | ShadowMaskFormer: Mask Augmented Patch Embeddings for Shadow Removal | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Transformer recently emerged as the de facto model for computer vision tasks
and has also been successfully applied to shadow removal. However, these
existing methods heavily rely on intricate modifications to the attention
mechanisms within the transformer blocks while using a generic patch embedding.
As a result, it often leads to complex architectural designs requiring
additional computation resources. In this work, we aim to explore the efficacy
of incorporating shadow information within the early processing stage.
Accordingly, we propose a transformer-based framework with a novel patch
embedding that is tailored for shadow removal, dubbed ShadowMaskFormer.
Specifically, we present a simple and effective mask-augmented patch embedding
to integrate shadow information and promote the model's emphasis on acquiring
knowledge for shadow regions. Extensive experiments conducted on the ISTD,
ISTD+, and SRD benchmark datasets demonstrate the efficacy of our method
against state-of-the-art approaches while using fewer model parameters. Our
implementation is available at
https://github.com/lizhh268/ShadowMaskFormer.
| [
{
"version": "v1",
"created": "Mon, 29 Apr 2024 05:17:33 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Apr 2024 15:42:25 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Mar 2025 12:24:41 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Zhuohao",
""
],
[
"Xie",
"Guoyang",
""
],
[
"Jiang",
"Guannan",
""
],
[
"Lu",
"Zhichao",
""
]
] | TITLE: ShadowMaskFormer: Mask Augmented Patch Embeddings for Shadow Removal
ABSTRACT: Transformer recently emerged as the de facto model for computer vision tasks
and has also been successfully applied to shadow removal. However, these
existing methods heavily rely on intricate modifications to the attention
mechanisms within the transformer blocks while using a generic patch embedding.
As a result, it often leads to complex architectural designs requiring
additional computation resources. In this work, we aim to explore the efficacy
of incorporating shadow information within the early processing stage.
Accordingly, we propose a transformer-based framework with a novel patch
embedding that is tailored for shadow removal, dubbed ShadowMaskFormer.
Specifically, we present a simple and effective mask-augmented patch embedding
to integrate shadow information and promote the model's emphasis on acquiring
knowledge for shadow regions. Extensive experiments conducted on the ISTD,
ISTD+, and SRD benchmark datasets demonstrate the efficacy of our method
against state-of-the-art approaches while using fewer model parameters. Our
implementation is available at
https://github.com/lizhh268/ShadowMaskFormer.
|
2405.01373 | Ahmad Sajedi | Samir Khaki, Ahmad Sajedi, Kai Wang, Lucy Z. Liu, Yuri A. Lawryshyn,
Konstantinos N. Plataniotis | ATOM: Attention Mixer for Efficient Dataset Distillation | Accepted for an oral presentation in CVPR-DD 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent works in dataset distillation seek to minimize training expenses by
generating a condensed synthetic dataset that encapsulates the information
present in a larger real dataset. These approaches ultimately aim to attain
test accuracy levels akin to those achieved by models trained on the entirety
of the original dataset. Previous studies in feature and distribution matching
have achieved significant results without incurring the costs of bi-level
optimization in the distillation process. Despite their convincing efficiency,
many of these methods suffer from marginal downstream performance improvements,
limited distillation of contextual information, and subpar cross-architecture
generalization. To address these challenges in dataset distillation, we propose
the ATtentiOn Mixer (ATOM) module to efficiently distill large datasets using a
mixture of channel and spatial-wise attention in the feature matching process.
Spatial-wise attention helps guide the learning process based on consistent
localization of classes in their respective images, allowing for distillation
from a broader receptive field. Meanwhile, channel-wise attention captures the
contextual information associated with the class itself, thus making the
synthetic image more informative for training. By integrating both types of
attention, our ATOM module demonstrates superior performance across various
computer vision datasets, including CIFAR10/100 and TinyImagenet. Notably, our
method significantly improves performance in scenarios with a low number of
images per class, thereby enhancing its potential. Furthermore, we maintain the
improvement in cross-architectures and applications such as neural architecture
search.
| [
{
"version": "v1",
"created": "Thu, 2 May 2024 15:15:01 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 19:37:27 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Khaki",
"Samir",
""
],
[
"Sajedi",
"Ahmad",
""
],
[
"Wang",
"Kai",
""
],
[
"Liu",
"Lucy Z.",
""
],
[
"Lawryshyn",
"Yuri A.",
""
],
[
"Plataniotis",
"Konstantinos N.",
""
]
] | TITLE: ATOM: Attention Mixer for Efficient Dataset Distillation
ABSTRACT: Recent works in dataset distillation seek to minimize training expenses by
generating a condensed synthetic dataset that encapsulates the information
present in a larger real dataset. These approaches ultimately aim to attain
test accuracy levels akin to those achieved by models trained on the entirety
of the original dataset. Previous studies in feature and distribution matching
have achieved significant results without incurring the costs of bi-level
optimization in the distillation process. Despite their convincing efficiency,
many of these methods suffer from marginal downstream performance improvements,
limited distillation of contextual information, and subpar cross-architecture
generalization. To address these challenges in dataset distillation, we propose
the ATtentiOn Mixer (ATOM) module to efficiently distill large datasets using a
mixture of channel and spatial-wise attention in the feature matching process.
Spatial-wise attention helps guide the learning process based on consistent
localization of classes in their respective images, allowing for distillation
from a broader receptive field. Meanwhile, channel-wise attention captures the
contextual information associated with the class itself, thus making the
synthetic image more informative for training. By integrating both types of
attention, our ATOM module demonstrates superior performance across various
computer vision datasets, including CIFAR10/100 and TinyImagenet. Notably, our
method significantly improves performance in scenarios with a low number of
images per class, thereby enhancing its potential. Furthermore, we maintain the
improvement in cross-architectures and applications such as neural architecture
search.
|
2405.10591 | Wenbin Wu | Xin Tan, Wenbin Wu, Zhiwei Zhang, Chaojie Fan, Yong Peng, Zhizhong
Zhang, Yuan Xie, Lizhuang Ma | GEOcc: Geometrically Enhanced 3D Occupancy Network with
Implicit-Explicit Depth Fusion and Contextual Self-Supervision | This work has been accepted for publication in IEEE Transactions on
Intelligent Transportation Systems | IEEE Transactions on Intelligent Transportation Systems, pp. 1-12,
March 2025 | 10.1109/TITS.2025.3539627 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D occupancy perception holds a pivotal role in recent vision-centric
autonomous driving systems by converting surround-view images into integrated
geometric and semantic representations within dense 3D grids. Nevertheless,
current models still encounter two main challenges: modeling depth accurately
in the 2D-3D view transformation stage, and overcoming the lack of
generalizability issues due to sparse LiDAR supervision. To address these
issues, this paper presents GEOcc, a Geometric-Enhanced Occupancy network
tailored for vision-only surround-view perception. Our approach is three-fold:
1) Integration of explicit lift-based depth prediction and implicit
projection-based transformers for depth modeling, enhancing the density and
robustness of view transformation. 2) Utilization of mask-based encoder-decoder
architecture for fine-grained semantic predictions; 3) Adoption of
context-aware self-training loss functions in the pretraining stage to
complement LiDAR supervision, involving the re-rendering of 2D depth maps from
3D occupancy features and leveraging image reconstruction loss to obtain denser
depth supervision besides sparse LiDAR ground-truths. Our approach achieves
State-Of-The-Art performance on the Occ3D-nuScenes dataset with the least image
resolution needed and the most lightweight image backbone compared with current
models, marking an improvement of 3.3% due to our proposed contributions.
Comprehensive experimentation also demonstrates the consistent superiority of
our method over baselines and alternative approaches.
| [
{
"version": "v1",
"created": "Fri, 17 May 2024 07:31:20 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 07:30:41 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Tan",
"Xin",
""
],
[
"Wu",
"Wenbin",
""
],
[
"Zhang",
"Zhiwei",
""
],
[
"Fan",
"Chaojie",
""
],
[
"Peng",
"Yong",
""
],
[
"Zhang",
"Zhizhong",
""
],
[
"Xie",
"Yuan",
""
],
[
"Ma",
"Lizhuang",
""
]
] | TITLE: GEOcc: Geometrically Enhanced 3D Occupancy Network with
Implicit-Explicit Depth Fusion and Contextual Self-Supervision
ABSTRACT: 3D occupancy perception holds a pivotal role in recent vision-centric
autonomous driving systems by converting surround-view images into integrated
geometric and semantic representations within dense 3D grids. Nevertheless,
current models still encounter two main challenges: modeling depth accurately
in the 2D-3D view transformation stage, and overcoming the lack of
generalizability issues due to sparse LiDAR supervision. To address these
issues, this paper presents GEOcc, a Geometric-Enhanced Occupancy network
tailored for vision-only surround-view perception. Our approach is three-fold:
1) Integration of explicit lift-based depth prediction and implicit
projection-based transformers for depth modeling, enhancing the density and
robustness of view transformation. 2) Utilization of mask-based encoder-decoder
architecture for fine-grained semantic predictions; 3) Adoption of
context-aware self-training loss functions in the pretraining stage to
complement LiDAR supervision, involving the re-rendering of 2D depth maps from
3D occupancy features and leveraging image reconstruction loss to obtain denser
depth supervision besides sparse LiDAR ground-truths. Our approach achieves
State-Of-The-Art performance on the Occ3D-nuScenes dataset with the least image
resolution needed and the most lightweight image backbone compared with current
models, marking an improvement of 3.3% due to our proposed contributions.
Comprehensive experimentation also demonstrates the consistent superiority of
our method over baselines and alternative approaches.
|
2405.13098 | Max Kerr Winter | Max Kerr Winter, Liesbeth M. C. Janssen | Glassy dynamics in deep neural networks: A structural comparison | 17 pages, 18 figures | null | null | null | physics.comp-ph cond-mat.dis-nn cond-mat.stat-mech | http://creativecommons.org/licenses/by/4.0/ | Deep Neural Networks (DNNs) share important similarities with structural
glasses. Both have many degrees of freedom, and their dynamics are governed by
a high-dimensional, non-convex landscape representing either the loss or
energy, respectively. Furthermore, both experience gradient descent dynamics
subject to noise. In this work we investigate, by performing quantitative
measurements on realistic networks trained on the MNIST and CIFAR-10 datasets,
the extent to which this qualitative similarity gives rise to glass-like
dynamics in neural networks. We demonstrate the existence of a Topology
Trivialisation Transition as well as the previously studied
under-to-overparameterised transition analogous to jamming. By training DNNs
with overdamped Langevin dynamics in the resulting disordered phases, we do not
observe diverging relaxation times at non-zero temperature, nor do we observe
any caging effects, in contrast to glass phenomenology. However, the weight
overlap function follows a power law in time, with exponent $\approx -0.5$, in
agreement with the Mode-Coupling Theory of structural glasses. In addition, the
DNN dynamics obey a form of time-temperature superposition. Finally, dynamic
heterogeneity and ageing are observed at low temperatures. These results
highlight important and surprising points of both difference and agreement
between the behaviour of DNNs and structural glasses.
| [
{
"version": "v1",
"created": "Tue, 21 May 2024 14:43:02 GMT"
},
{
"version": "v2",
"created": "Fri, 24 May 2024 17:10:34 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Nov 2024 19:57:16 GMT"
},
{
"version": "v4",
"created": "Mon, 24 Mar 2025 10:58:02 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Winter",
"Max Kerr",
""
],
[
"Janssen",
"Liesbeth M. C.",
""
]
] | TITLE: Glassy dynamics in deep neural networks: A structural comparison
ABSTRACT: Deep Neural Networks (DNNs) share important similarities with structural
glasses. Both have many degrees of freedom, and their dynamics are governed by
a high-dimensional, non-convex landscape representing either the loss or
energy, respectively. Furthermore, both experience gradient descent dynamics
subject to noise. In this work we investigate, by performing quantitative
measurements on realistic networks trained on the MNIST and CIFAR-10 datasets,
the extent to which this qualitative similarity gives rise to glass-like
dynamics in neural networks. We demonstrate the existence of a Topology
Trivialisation Transition as well as the previously studied
under-to-overparameterised transition analogous to jamming. By training DNNs
with overdamped Langevin dynamics in the resulting disordered phases, we do not
observe diverging relaxation times at non-zero temperature, nor do we observe
any caging effects, in contrast to glass phenomenology. However, the weight
overlap function follows a power law in time, with exponent $\approx -0.5$, in
agreement with the Mode-Coupling Theory of structural glasses. In addition, the
DNN dynamics obey a form of time-temperature superposition. Finally, dynamic
heterogeneity and ageing are observed at low temperatures. These results
highlight important and surprising points of both difference and agreement
between the behaviour of DNNs and structural glasses.
|
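The record above reports that the weight overlap of the trained networks decays as a power law in time with exponent of roughly -0.5. The snippet below shows one way such an exponent can be read off via a log-log fit; the data are synthetic and merely mimic that behaviour, they are not taken from the paper.

```python
import numpy as np

t = np.logspace(0, 4, 50)
overlap = t ** -0.5 * (1 + 0.05 * np.random.default_rng(1).normal(size=t.size))
slope, _ = np.polyfit(np.log(t), np.log(overlap), 1)
print(f"fitted exponent: {slope:.2f}")        # close to -0.5
```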
2405.14701 | Yibin Wang | Yibin Wang and Weizhong Zhang and Honghui Xu and Cheng Jin | DreamText: High Fidelity Scene Text Synthesis | Code: https://github.com/CodeGoat24/DreamText, Project page:
https://codegoat24.github.io/DreamText/ | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene text synthesis involves rendering specified texts onto arbitrary
images. Current methods typically formulate this task in an end-to-end manner
but lack effective character-level guidance during training. Besides, their
text encoders, pre-trained on a single font type, struggle to adapt to the
diverse font styles encountered in practical applications. Consequently, these
methods suffer from character distortion, repetition, and absence, particularly
in polystylistic scenarios. To this end, this paper proposes DreamText for
high-fidelity scene text synthesis. Our key idea is to reconstruct the
diffusion training process, introducing more refined guidance tailored to this
task, to expose and rectify the model's attention at the character level and
strengthen its learning of text regions. This transformation poses a hybrid
optimization challenge, involving both discrete and continuous variables. To
effectively tackle this challenge, we employ a heuristic alternate optimization
strategy. Meanwhile, we jointly train the text encoder and generator to
comprehensively learn and utilize the diverse fonts present in the training
dataset. This joint training is seamlessly integrated into the alternate
optimization process, fostering a synergistic relationship between learning
character embedding and re-estimating character attention. Specifically, in
each step, we first encode potential character-generated position information
from cross-attention maps into latent character masks. These masks are then
utilized to update the representation of specific characters in the current
step, which, in turn, enables the generator to correct the character's
attention in the subsequent steps. Both qualitative and quantitative results
demonstrate the superiority of our method to the state of the art.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 15:35:48 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Aug 2024 11:31:23 GMT"
},
{
"version": "v3",
"created": "Mon, 18 Nov 2024 03:52:26 GMT"
},
{
"version": "v4",
"created": "Thu, 6 Mar 2025 04:38:23 GMT"
},
{
"version": "v5",
"created": "Mon, 24 Mar 2025 06:13:16 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wang",
"Yibin",
""
],
[
"Zhang",
"Weizhong",
""
],
[
"Xu",
"Honghui",
""
],
[
"Jin",
"Cheng",
""
]
] | TITLE: DreamText: High Fidelity Scene Text Synthesis
ABSTRACT: Scene text synthesis involves rendering specified texts onto arbitrary
images. Current methods typically formulate this task in an end-to-end manner
but lack effective character-level guidance during training. Besides, their
text encoders, pre-trained on a single font type, struggle to adapt to the
diverse font styles encountered in practical applications. Consequently, these
methods suffer from character distortion, repetition, and absence, particularly
in polystylistic scenarios. To this end, this paper proposes DreamText for
high-fidelity scene text synthesis. Our key idea is to reconstruct the
diffusion training process, introducing more refined guidance tailored to this
task, to expose and rectify the model's attention at the character level and
strengthen its learning of text regions. This transformation poses a hybrid
optimization challenge, involving both discrete and continuous variables. To
effectively tackle this challenge, we employ a heuristic alternate optimization
strategy. Meanwhile, we jointly train the text encoder and generator to
comprehensively learn and utilize the diverse fonts present in the training
dataset. This joint training is seamlessly integrated into the alternate
optimization process, fostering a synergistic relationship between learning
character embedding and re-estimating character attention. Specifically, in
each step, we first encode potential character-generated position information
from cross-attention maps into latent character masks. These masks are then
utilized to update the representation of specific characters in the current
step, which, in turn, enables the generator to correct the character's
attention in the subsequent steps. Both qualitative and quantitative results
demonstrate the superiority of our method to the state of the art.
|
2406.00028 | Seyed Moein Ayyoubzadeh | Seyed Moein Ayyoubzadeh, Kourosh Shahnazari | Word Sense Disambiguation in Persian: Can AI Finally Get It Right? | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Homograph disambiguation, the task of distinguishing words with identical
spellings but different meanings, poses a substantial challenge in natural
language processing. In this study, we introduce a novel dataset tailored for
Persian homograph disambiguation. Our work encompasses a thorough exploration
of various embeddings, evaluated through the cosine similarity method and their
efficacy in downstream tasks like classification. Our investigation entails
training a diverse array of lightweight machine learning and deep learning
models for homograph disambiguation. We scrutinize the models' performance in
terms of Accuracy, Recall, and F1 Score, thereby gaining insights into their
respective strengths and limitations. The outcomes of our research underscore
three key contributions. First, we present a newly curated Persian dataset,
providing a solid foundation for future research in homograph disambiguation.
Second, our comparative analysis of embeddings highlights their utility in
different contexts, enriching the understanding of their capabilities. Third,
by training and evaluating a spectrum of models, we extend valuable guidance
for practitioners in selecting suitable strategies for homograph disambiguation
tasks. In summary, our study unveils a new dataset, scrutinizes embeddings
through diverse perspectives, and benchmarks various models for homograph
disambiguation. These findings empower researchers and practitioners to
navigate the intricate landscape of homograph-related challenges effectively.
| [
{
"version": "v1",
"created": "Fri, 24 May 2024 14:56:36 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Oct 2024 17:34:54 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Mar 2025 02:44:23 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ayyoubzadeh",
"Seyed Moein",
""
],
[
"Shahnazari",
"Kourosh",
""
]
] | TITLE: Word Sense Disambiguation in Persian: Can AI Finally Get It Right?
ABSTRACT: Homograph disambiguation, the task of distinguishing words with identical
spellings but different meanings, poses a substantial challenge in natural
language processing. In this study, we introduce a novel dataset tailored for
Persian homograph disambiguation. Our work encompasses a thorough exploration
of various embeddings, evaluated through the cosine similarity method and their
efficacy in downstream tasks like classification. Our investigation entails
training a diverse array of lightweight machine learning and deep learning
models for homograph disambiguation. We scrutinize the models' performance in
terms of Accuracy, Recall, and F1 Score, thereby gaining insights into their
respective strengths and limitations. The outcomes of our research underscore
three key contributions. First, we present a newly curated Persian dataset,
providing a solid foundation for future research in homograph disambiguation.
Second, our comparative analysis of embeddings highlights their utility in
different contexts, enriching the understanding of their capabilities. Third,
by training and evaluating a spectrum of models, we extend valuable guidance
for practitioners in selecting suitable strategies for homograph disambiguation
tasks. In summary, our study unveils a new dataset, scrutinizes embeddings
through diverse perspectives, and benchmarks various models for homograph
disambiguation. These findings empower researchers and practitioners to
navigate the intricate landscape of homograph-related challenges effectively.
|
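The record above evaluates word embeddings with cosine similarity for homograph disambiguation. A minimal illustration with toy vectors follows; the embeddings and the two-sense setup are hypothetical placeholders, not the released Persian dataset.

```python
import numpy as np

def cosine_similarity(u, v, eps=1e-12):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

sense_a = np.array([0.9, 0.1, 0.3])   # hypothetical embedding for sense A
sense_b = np.array([0.1, 0.8, 0.4])   # hypothetical embedding for sense B
query   = np.array([0.85, 0.15, 0.2]) # embedding of the ambiguous occurrence
print("closer to sense A" if cosine_similarity(query, sense_a) >
      cosine_similarity(query, sense_b) else "closer to sense B")
```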
2406.00492 | Baixiang Huang | Baixiang Huang, Yu Luo, Guangyu Wei, Songyan He, Yushuang Shao,
Xueying Zeng | A Deep Learning Model for Coronary Artery Segmentation and Quantitative
Stenosis Detection in Angiographic Images | null | null | null | null | eess.IV cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coronary artery disease (CAD) is a leading cause of cardiovascular-related
mortality, and accurate stenosis detection is crucial for effective clinical
decision-making. Coronary angiography remains the gold standard for diagnosing
CAD, but manual analysis of angiograms is prone to errors and subjectivity.
This study aims to develop a deep learning-based approach for the automatic
segmentation of coronary arteries from angiographic images and the quantitative
detection of stenosis, thereby improving the accuracy and efficiency of CAD
diagnosis. We propose a novel deep learning-based method for the automatic
segmentation of coronary arteries in angiographic images, coupled with a
dynamic cohort method for stenosis detection. The segmentation model combines
the MedSAM and VM-UNet architectures to achieve high-performance results. After
segmentation, the vascular centerline is extracted, vessel diameter is
computed, and the degree of stenosis is measured with high precision, enabling
accurate identification of arterial stenosis. On the mixed dataset (including
the ARCADE, DCA1, and GH datasets), the model achieved an average IoU of
0.6308, with sensitivity and specificity of 0.9772 and 0.9903, respectively. On
the ARCADE dataset, the average IoU was 0.6303, with sensitivity of 0.9832 and
specificity of 0.9933. Additionally, the stenosis detection algorithm achieved
a true positive rate (TPR) of 0.5867 and a positive predictive value (PPV) of
0.5911, demonstrating the effectiveness of our model in analyzing coronary
angiography images. SAM-VMNet offers a promising tool for the automated
segmentation and detection of coronary artery stenosis. The model's high
accuracy and robustness provide significant clinical value for the early
diagnosis and treatment planning of CAD. The code and examples are available at
https://github.com/qimingfan10/SAM-VMNet.
| [
{
"version": "v1",
"created": "Sat, 1 Jun 2024 16:45:33 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 07:17:05 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Huang",
"Baixiang",
""
],
[
"Luo",
"Yu",
""
],
[
"Wei",
"Guangyu",
""
],
[
"He",
"Songyan",
""
],
[
"Shao",
"Yushuang",
""
],
[
"Zeng",
"Xueying",
""
]
] | TITLE: A Deep Learning Model for Coronary Artery Segmentation and Quantitative
Stenosis Detection in Angiographic Images
ABSTRACT: Coronary artery disease (CAD) is a leading cause of cardiovascular-related
mortality, and accurate stenosis detection is crucial for effective clinical
decision-making. Coronary angiography remains the gold standard for diagnosing
CAD, but manual analysis of angiograms is prone to errors and subjectivity.
This study aims to develop a deep learning-based approach for the automatic
segmentation of coronary arteries from angiographic images and the quantitative
detection of stenosis, thereby improving the accuracy and efficiency of CAD
diagnosis. We propose a novel deep learning-based method for the automatic
segmentation of coronary arteries in angiographic images, coupled with a
dynamic cohort method for stenosis detection. The segmentation model combines
the MedSAM and VM-UNet architectures to achieve high-performance results. After
segmentation, the vascular centerline is extracted, vessel diameter is
computed, and the degree of stenosis is measured with high precision, enabling
accurate identification of arterial stenosis. On the mixed dataset (including
the ARCADE, DCA1, and GH datasets), the model achieved an average IoU of
0.6308, with sensitivity and specificity of 0.9772 and 0.9903, respectively. On
the ARCADE dataset, the average IoU was 0.6303, with sensitivity of 0.9832 and
specificity of 0.9933. Additionally, the stenosis detection algorithm achieved
a true positive rate (TPR) of 0.5867 and a positive predictive value (PPV) of
0.5911, demonstrating the effectiveness of our model in analyzing coronary
angiography images. SAM-VMNet offers a promising tool for the automated
segmentation and detection of coronary artery stenosis. The model's high
accuracy and robustness provide significant clinical value for the early
diagnosis and treatment planning of CAD. The code and examples are available at
https://github.com/qimingfan10/SAM-VMNet.
|
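The record above measures the degree of stenosis from vessel diameters computed along the extracted centerline. A common definition is percent diameter stenosis relative to a reference (healthy) diameter; the sketch below takes the mean of the widest centerline samples as that reference, which is an assumption rather than the paper's exact rule.

```python
import numpy as np

def percent_diameter_stenosis(diameters_mm, n_ref=5):
    d = np.asarray(diameters_mm, dtype=float)
    d_min = d.min()                            # narrowest point along the vessel
    d_ref = np.sort(d)[-n_ref:].mean()         # reference from widest samples
    return 100.0 * (1.0 - d_min / d_ref)

profile = [3.1, 3.0, 2.9, 1.4, 2.8, 3.0, 3.2]  # toy diameter profile (mm)
print(f"{percent_diameter_stenosis(profile):.1f}% diameter stenosis")
```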
2406.00684 | Yuliang Liu | Haisu Guan, Huanxin Yang, Xinyu Wang, Shengwei Han, Yongge Liu,
Lianwen Jin, Xiang Bai, Yuliang Liu | Deciphering Oracle Bone Language with Diffusion Models | ACL 2024 Best Paper | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Originating from China's Shang Dynasty approximately 3,000 years ago, the
Oracle Bone Script (OBS) is a cornerstone in the annals of linguistic history,
predating many established writing systems. Despite the discovery of thousands
of inscriptions, a vast expanse of OBS remains undeciphered, casting a veil of
mystery over this ancient language. The emergence of modern AI technologies
presents a novel frontier for OBS decipherment, challenging traditional NLP
methods that rely heavily on large textual corpora, a luxury not afforded by
historical languages. This paper introduces a novel approach by adopting image
generation techniques, specifically through the development of Oracle Bone
Script Decipher (OBSD). Utilizing a conditional diffusion-based strategy, OBSD
generates vital clues for decipherment, charting a new course for AI-assisted
analysis of ancient languages. To validate its efficacy, extensive experiments
were conducted on an oracle bone script dataset, with quantitative results
demonstrating the effectiveness of OBSD. Code and decipherment results will be
made available at https://github.com/guanhaisu/OBSD.
| [
{
"version": "v1",
"created": "Sun, 2 Jun 2024 09:42:23 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jan 2025 03:28:31 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Mar 2025 03:03:33 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Guan",
"Haisu",
""
],
[
"Yang",
"Huanxin",
""
],
[
"Wang",
"Xinyu",
""
],
[
"Han",
"Shengwei",
""
],
[
"Liu",
"Yongge",
""
],
[
"Jin",
"Lianwen",
""
],
[
"Bai",
"Xiang",
""
],
[
"Liu",
"Yuliang",
""
]
] | TITLE: Deciphering Oracle Bone Language with Diffusion Models
ABSTRACT: Originating from China's Shang Dynasty approximately 3,000 years ago, the
Oracle Bone Script (OBS) is a cornerstone in the annals of linguistic history,
predating many established writing systems. Despite the discovery of thousands
of inscriptions, a vast expanse of OBS remains undeciphered, casting a veil of
mystery over this ancient language. The emergence of modern AI technologies
presents a novel frontier for OBS decipherment, challenging traditional NLP
methods that rely heavily on large textual corpora, a luxury not afforded by
historical languages. This paper introduces a novel approach by adopting image
generation techniques, specifically through the development of Oracle Bone
Script Decipher (OBSD). Utilizing a conditional diffusion-based strategy, OBSD
generates vital clues for decipherment, charting a new course for AI-assisted
analysis of ancient languages. To validate its efficacy, extensive experiments
were conducted on an oracle bone script dataset, with quantitative results
demonstrating the effectiveness of OBSD. Code and decipherment results will be
made available at https://github.com/guanhaisu/OBSD.
|
2406.01652 | Tal Korem | George I. Austin, Itsik Pe'er, Tal Korem | Distributional bias compromises leave-one-out cross-validation | 29 pages, 6 figures, supplementary information | null | null | null | stat.ME cs.LG q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | Cross-validation is a common method for estimating the predictive performance
of machine learning models. In a data-scarce regime, where one typically wishes
to maximize the number of instances used for training the model, an approach
called "leave-one-out cross-validation" is often used. In this design, a
separate model is built for predicting each data instance after training on all
other instances. Since this results in a single test instance available per
model trained, predictions are aggregated across the entire dataset to
calculate common performance metrics such as the area under the receiver
operating characteristic or R2 scores. In this work, we demonstrate that this
approach creates a negative correlation between the average label of each
training fold and the label of its corresponding test instance, a phenomenon
that we term distributional bias. As machine learning models tend to regress to
the mean of their training data, this distributional bias tends to negatively
impact performance evaluation and hyperparameter optimization. We show that
this effect generalizes to leave-P-out cross-validation and persists across a
wide range of modeling and evaluation approaches, and that it can lead to a
bias against stronger regularization. To address this, we propose a
generalizable rebalanced cross-validation approach that corrects for
distributional bias for both classification and regression. We demonstrate that
our approach improves cross-validation performance evaluation in synthetic
simulations, across machine learning benchmarks, and in several published
leave-one-out analyses.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2024 15:47:34 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 03:35:13 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Austin",
"George I.",
""
],
[
"Pe'er",
"Itsik",
""
],
[
"Korem",
"Tal",
""
]
] | TITLE: Distributional bias compromises leave-one-out cross-validation
ABSTRACT: Cross-validation is a common method for estimating the predictive performance
of machine learning models. In a data-scarce regime, where one typically wishes
to maximize the number of instances used for training the model, an approach
called "leave-one-out cross-validation" is often used. In this design, a
separate model is built for predicting each data instance after training on all
other instances. Since this results in a single test instance available per
model trained, predictions are aggregated across the entire dataset to
calculate common performance metrics such as the area under the receiver
operating characteristic or R2 scores. In this work, we demonstrate that this
approach creates a negative correlation between the average label of each
training fold and the label of its corresponding test instance, a phenomenon
that we term distributional bias. As machine learning models tend to regress to
the mean of their training data, this distributional bias tends to negatively
impact performance evaluation and hyperparameter optimization. We show that
this effect generalizes to leave-P-out cross-validation and persists across a
wide range of modeling and evaluation approaches, and that it can lead to a
bias against stronger regularization. To address this, we propose a
generalizable rebalanced cross-validation approach that corrects for
distributional bias for both classification and regression. We demonstrate that
our approach improves cross-validation performance evaluation in synthetic
simulations, across machine learning benchmarks, and in several published
leave-one-out analyses.
|
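The distributional-bias effect described in the record above is easy to reproduce numerically. The following minimal sketch (our illustration on synthetic labels, not the authors' code) computes the mean label of each leave-one-out training fold and correlates it with the corresponding held-out label; because the fold mean is an affine, decreasing function of the held-out label, the correlation is exactly -1 in this idealized setting.

```python
# Minimal illustration (synthetic data, not the paper's code) of the negative
# correlation between each leave-one-out training fold's mean label and its
# held-out test label.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=200)                  # labels of a hypothetical dataset

# Mean label of the training fold when instance i is left out:
fold_means = (y.sum() - y) / (len(y) - 1)

r = np.corrcoef(fold_means, y)[0, 1]
print(f"correlation(fold mean, held-out label) = {r:.3f}")  # -1.000 here
```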
2406.04940 | Matthew Fortier | Matthew Fortier and Mats L. Richter and Oliver Sonnentag and Chris Pal | CarbonSense: A Multimodal Dataset and Baseline for Carbon Flux Modelling | 9 content pages, 11 reference pages, 9 appendix pages | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Terrestrial carbon fluxes provide vital information about our biosphere's
health and its capacity to absorb anthropogenic CO$_2$ emissions. The
importance of predicting carbon fluxes has led to the emerging field of
data-driven carbon flux modelling (DDCFM), which uses statistical techniques to
predict carbon fluxes from biophysical data. However, the field lacks a
standardized dataset to promote comparisons between models. To address this
gap, we present CarbonSense, the first machine learning-ready dataset for
DDCFM. CarbonSense integrates measured carbon fluxes, meteorological
predictors, and satellite imagery from 385 locations across the globe, offering
comprehensive coverage and facilitating robust model training. Additionally, we
provide a baseline model using a current state-of-the-art DDCFM approach and a
novel transformer-based model. Our experiments illustrate the potential gains
that multimodal deep learning techniques can bring to this domain. By providing
these resources, we aim to lower the barrier to entry for other deep learning
researchers to develop new models and drive new advances in carbon flux
modelling.
| [
{
"version": "v1",
"created": "Fri, 7 Jun 2024 13:47:40 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 15:37:27 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Fortier",
"Matthew",
""
],
[
"Richter",
"Mats L.",
""
],
[
"Sonnentag",
"Oliver",
""
],
[
"Pal",
"Chris",
""
]
] | TITLE: CarbonSense: A Multimodal Dataset and Baseline for Carbon Flux Modelling
ABSTRACT: Terrestrial carbon fluxes provide vital information about our biosphere's
health and its capacity to absorb anthropogenic CO$_2$ emissions. The
importance of predicting carbon fluxes has led to the emerging field of
data-driven carbon flux modelling (DDCFM), which uses statistical techniques to
predict carbon fluxes from biophysical data. However, the field lacks a
standardized dataset to promote comparisons between models. To address this
gap, we present CarbonSense, the first machine learning-ready dataset for
DDCFM. CarbonSense integrates measured carbon fluxes, meteorological
predictors, and satellite imagery from 385 locations across the globe, offering
comprehensive coverage and facilitating robust model training. Additionally, we
provide a baseline model using a current state-of-the-art DDCFM approach and a
novel transformer-based model. Our experiments illustrate the potential gains
that multimodal deep learning techniques can bring to this domain. By providing
these resources, we aim to lower the barrier to entry for other deep learning
researchers to develop new models and drive new advances in carbon flux
modelling.
|
2406.05475 | Jingchao Peng | Jingchao Peng, Thomas Bashford-Rogers, Francesco Banterle, Haitao
Zhao, Kurt Debattista | HDRT: A Large-Scale Dataset for Infrared-Guided HDR Imaging | null | Information Fusion, 120(2025), pp. 103109 | 10.1016/j.inffus.2025.103109 | null | cs.CV cs.GR eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Capturing images with enough details to solve imaging tasks is a
long-standing challenge in imaging, particularly due to the limitations of
standard dynamic range (SDR) images which often lose details in underexposed or
overexposed regions. Traditional high dynamic range (HDR) methods, like
multi-exposure fusion or inverse tone mapping, struggle with ghosting and
incomplete data reconstruction. Infrared (IR) imaging offers a unique advantage
by being less affected by lighting conditions, providing consistent detail
capture regardless of visible light intensity. In this paper, we introduce the
HDRT dataset, the first comprehensive dataset that consists of HDR and thermal
IR images. The HDRT dataset comprises 50,000 images captured across three
seasons over six months in eight cities, providing a diverse range of lighting
conditions and environmental contexts. Leveraging this dataset, we propose
HDRTNet, a novel deep neural method that fuses IR and SDR content to generate
HDR images. Extensive experiments validate HDRTNet against the
state-of-the-art, showing substantial quantitative and qualitative quality
improvements. The HDRT dataset not only advances IR-guided HDR imaging but also
offers significant potential for broader research in HDR imaging, multi-modal
fusion, domain transfer, and beyond. The dataset is available at
https://huggingface.co/datasets/jingchao-peng/HDRTDataset.
| [
{
"version": "v1",
"created": "Sat, 8 Jun 2024 13:43:44 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 10:17:16 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Peng",
"Jingchao",
""
],
[
"Bashford-Rogers",
"Thomas",
""
],
[
"Banterle",
"Francesco",
""
],
[
"Zhao",
"Haitao",
""
],
[
"Debattista",
"Kurt",
""
]
] | TITLE: HDRT: A Large-Scale Dataset for Infrared-Guided HDR Imaging
ABSTRACT: Capturing images with enough details to solve imaging tasks is a
long-standing challenge in imaging, particularly due to the limitations of
standard dynamic range (SDR) images which often lose details in underexposed or
overexposed regions. Traditional high dynamic range (HDR) methods, like
multi-exposure fusion or inverse tone mapping, struggle with ghosting and
incomplete data reconstruction. Infrared (IR) imaging offers a unique advantage
by being less affected by lighting conditions, providing consistent detail
capture regardless of visible light intensity. In this paper, we introduce the
HDRT dataset, the first comprehensive dataset that consists of HDR and thermal
IR images. The HDRT dataset comprises 50,000 images captured across three
seasons over six months in eight cities, providing a diverse range of lighting
conditions and environmental contexts. Leveraging this dataset, we propose
HDRTNet, a novel deep neural method that fuses IR and SDR content to generate
HDR images. Extensive experiments validate HDRTNet against the
state-of-the-art, showing substantial quantitative and qualitative quality
improvements. The HDRT dataset not only advances IR-guided HDR imaging but also
offers significant potential for broader research in HDR imaging, multi-modal
fusion, domain transfer, and beyond. The dataset is available at
https://huggingface.co/datasets/jingchao-peng/HDRTDataset.
|
2406.05821 | Size Wu | Size Wu, Sheng Jin, Wenwei Zhang, Lumin Xu, Wentao Liu, Wei Li, Chen
Change Loy | F-LMM: Grounding Frozen Large Multimodal Models | Project Page: https://github.com/wusize/F-LMM | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Endowing Large Multimodal Models (LMMs) with visual grounding capability can
significantly enhance AIs' understanding of the visual world and their
interaction with humans. However, existing methods typically fine-tune the
parameters of LMMs to learn additional segmentation tokens and overfit
grounding and segmentation datasets. Such a design would inevitably cause a
catastrophic diminution in the indispensable conversational capability of
general AI assistants. In this paper, we comprehensively evaluate
state-of-the-art grounding LMMs across a suite of multimodal question-answering
benchmarks, observing drastic performance drops that indicate vanishing general
knowledge comprehension and weakened instruction following ability. To address
this issue, we present F-LMM -- grounding frozen off-the-shelf LMMs in human-AI
conversations -- a straightforward yet effective design based on the fact that
word-pixel correspondences conducive to visual grounding inherently exist in
the attention mechanism of well-trained LMMs. Using only a few trainable CNN
layers, we can translate word-pixel attention weights to mask logits, which a
SAM-based mask refiner can further optimise. Our F-LMM neither learns special
segmentation tokens nor utilises high-quality grounded instruction-tuning data,
but achieves competitive performance on referring expression segmentation and
panoptic narrative grounding benchmarks while completely preserving LMMs'
original conversational ability. Additionally, with instruction-following
ability preserved and grounding ability obtained, F-LMM can be directly applied
to complex tasks like reasoning segmentation, grounded conversation generation
and visual chain-of-thought reasoning. Our code can be found at
https://github.com/wusize/F-LMM.
| [
{
"version": "v1",
"created": "Sun, 9 Jun 2024 15:14:26 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 07:20:35 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wu",
"Size",
""
],
[
"Jin",
"Sheng",
""
],
[
"Zhang",
"Wenwei",
""
],
[
"Xu",
"Lumin",
""
],
[
"Liu",
"Wentao",
""
],
[
"Li",
"Wei",
""
],
[
"Loy",
"Chen Change",
""
]
] | TITLE: F-LMM: Grounding Frozen Large Multimodal Models
ABSTRACT: Endowing Large Multimodal Models (LMMs) with visual grounding capability can
significantly enhance AIs' understanding of the visual world and their
interaction with humans. However, existing methods typically fine-tune the
parameters of LMMs to learn additional segmentation tokens and overfit
grounding and segmentation datasets. Such a design would inevitably cause a
catastrophic diminution in the indispensable conversational capability of
general AI assistants. In this paper, we comprehensively evaluate
state-of-the-art grounding LMMs across a suite of multimodal question-answering
benchmarks, observing drastic performance drops that indicate vanishing general
knowledge comprehension and weakened instruction following ability. To address
this issue, we present F-LMM -- grounding frozen off-the-shelf LMMs in human-AI
conversations -- a straightforward yet effective design based on the fact that
word-pixel correspondences conducive to visual grounding inherently exist in
the attention mechanism of well-trained LMMs. Using only a few trainable CNN
layers, we can translate word-pixel attention weights to mask logits, which a
SAM-based mask refiner can further optimise. Our F-LMM neither learns special
segmentation tokens nor utilises high-quality grounded instruction-tuning data,
but achieves competitive performance on referring expression segmentation and
panoptic narrative grounding benchmarks while completely preserving LMMs'
original conversational ability. Additionally, with instruction-following
ability preserved and grounding ability obtained, F-LMM can be directly applied
to complex tasks like reasoning segmentation, grounded conversation generation
and visual chain-of-thought reasoning. Our code can be found at
https://github.com/wusize/F-LMM.
|
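As a rough schematic of the mechanism described in the F-LMM record above (our sketch, not the paper's architecture; shapes, layer sizes, and the random attention tensor are made up), word-to-pixel attention maps from a frozen LMM can be stacked over attention heads as channels and passed through a few trainable convolutional layers to produce per-word mask logits:

```python
# Schematic sketch (not the paper's architecture): frozen-LMM attention maps in,
# per-word mask logits out, with only a small conv head being trainable.
import torch
import torch.nn as nn

num_heads, num_words, H, W = 16, 4, 24, 24
attn = torch.rand(num_heads, num_words, H * W)      # placeholder for frozen LMM attention

# Treat words as the batch dimension and heads as input channels.
attn_maps = attn.permute(1, 0, 2).reshape(num_words, num_heads, H, W)

mask_head = nn.Sequential(                          # the only trainable part
    nn.Conv2d(num_heads, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)
mask_logits = mask_head(attn_maps)                  # (num_words, 1, H, W)
print(mask_logits.shape)
```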
2406.10819 | Dongping Chen | Dongping Chen, Yue Huang, Siyuan Wu, Jingyu Tang, Liuyi Chen, Yilin
Bai, Zhigang He, Chenlong Wang, Huichi Zhou, Yiqiang Li, Tianshuo Zhou, Yue
Yu, Chujie Gao, Qihui Zhang, Yi Gui, Zhen Li, Yao Wan, Pan Zhou, Jianfeng
Gao, Lichao Sun | GUI-World: A Video Benchmark and Dataset for Multimodal GUI-oriented
Understanding | Accepted by ICLR 2025 | null | null | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recently, Multimodal Large Language Models (MLLMs) have been used as agents
to control keyboard and mouse inputs by directly perceiving the Graphical User
Interface (GUI) and generating corresponding commands. However, current agents
primarily demonstrate strong understanding capabilities in static environments
and are mainly applied to relatively simple domains, such as Web or mobile
interfaces. We argue that a robust GUI agent should be capable of perceiving
temporal information on the GUI, including dynamic Web content and multi-step
tasks. Additionally, it should possess a comprehensive understanding of various
GUI scenarios, including desktop software and multi-window interactions. To
this end, this paper introduces a new dataset, termed GUI-World, which features
meticulously crafted Human-MLLM annotations, extensively covering six GUI
scenarios and eight types of GUI-oriented questions in three formats. We
evaluate the capabilities of current state-of-the-art MLLMs, including Image
LLMs and Video LLMs, in understanding various types of GUI content, especially
dynamic and sequential content. Our findings reveal that current models
struggle with dynamic GUI content without manually annotated keyframes or
operation history. On the other hand, Video LLMs fall short in all GUI-oriented
tasks given the sparse GUI video dataset. Therefore, we take the initial step
of leveraging a fine-tuned Video LLM, GUI-Vid, as a GUI-oriented assistant,
demonstrating an improved understanding of various GUI tasks. However, due to
the limitations in the performance of base LLMs, we conclude that using video
LLMs as GUI agents remains a significant challenge. We believe our work
provides valuable insights for future research in dynamic GUI content
understanding. All the dataset and code are publicly available at:
https://gui-world.github.io.
| [
{
"version": "v1",
"created": "Sun, 16 Jun 2024 06:56:53 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 11:46:14 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Dongping",
""
],
[
"Huang",
"Yue",
""
],
[
"Wu",
"Siyuan",
""
],
[
"Tang",
"Jingyu",
""
],
[
"Chen",
"Liuyi",
""
],
[
"Bai",
"Yilin",
""
],
[
"He",
"Zhigang",
""
],
[
"Wang",
"Chenlong",
""
],
[
"Zhou",
"Huichi",
""
],
[
"Li",
"Yiqiang",
""
],
[
"Zhou",
"Tianshuo",
""
],
[
"Yu",
"Yue",
""
],
[
"Gao",
"Chujie",
""
],
[
"Zhang",
"Qihui",
""
],
[
"Gui",
"Yi",
""
],
[
"Li",
"Zhen",
""
],
[
"Wan",
"Yao",
""
],
[
"Zhou",
"Pan",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Sun",
"Lichao",
""
]
] | TITLE: GUI-World: A Video Benchmark and Dataset for Multimodal GUI-oriented
Understanding
ABSTRACT: Recently, Multimodal Large Language Models (MLLMs) have been used as agents
to control keyboard and mouse inputs by directly perceiving the Graphical User
Interface (GUI) and generating corresponding commands. However, current agents
primarily demonstrate strong understanding capabilities in static environments
and are mainly applied to relatively simple domains, such as Web or mobile
interfaces. We argue that a robust GUI agent should be capable of perceiving
temporal information on the GUI, including dynamic Web content and multi-step
tasks. Additionally, it should possess a comprehensive understanding of various
GUI scenarios, including desktop software and multi-window interactions. To
this end, this paper introduces a new dataset, termed GUI-World, which features
meticulously crafted Human-MLLM annotations, extensively covering six GUI
scenarios and eight types of GUI-oriented questions in three formats. We
evaluate the capabilities of current state-of-the-art MLLMs, including Image
LLMs and Video LLMs, in understanding various types of GUI content, especially
dynamic and sequential content. Our findings reveal that current models
struggle with dynamic GUI content without manually annotated keyframes or
operation history. On the other hand, Video LLMs fall short in all GUI-oriented
tasks given the sparse GUI video dataset. Therefore, we take the initial step
of leveraging a fine-tuned Video LLM, GUI-Vid, as a GUI-oriented assistant,
demonstrating an improved understanding of various GUI tasks. However, due to
the limitations in the performance of base LLMs, we conclude that using video
LLMs as GUI agents remains a significant challenge. We believe our work
provides valuable insights for future research in dynamic GUI content
understanding. All the dataset and code are publicly available at:
https://gui-world.github.io.
|
2406.11148 | Tian Liu | Tian Liu, Huixin Zhang, Shubham Parashar, Shu Kong | Few-Shot Recognition via Stage-Wise Retrieval-Augmented Finetuning | Accepted to CVPR 2025. Website and code:
https://tian1327.github.io/SWAT/ | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Few-shot recognition (FSR) aims to train a classification model with only a
few labeled examples of each concept concerned by a downstream task, where data
annotation cost can be prohibitively high. We develop methods to solve FSR by
leveraging a pretrained Vision-Language Model (VLM). We particularly explore
retrieval-augmented learning (RAL), which retrieves open data, e.g., the VLM's
pretraining dataset, to learn models for better serving downstream tasks. RAL
has been studied in zero-shot recognition but remains under-explored in FSR.
Although applying RAL to FSR may seem straightforward, we observe interesting
and novel challenges and opportunities. First, somewhat surprisingly,
finetuning a VLM on a large amount of retrieved data underperforms
state-of-the-art zero-shot methods. This is due to the imbalanced distribution
of retrieved data and its domain gaps with the few-shot examples in the
downstream task. Second, more surprisingly, we find that simply finetuning a
VLM solely on few-shot examples significantly outperforms previous FSR methods,
and finetuning on the mix of retrieved and few-shot data yields even better
results. Third, to mitigate the imbalanced distribution and domain gap issues,
we propose Stage-Wise retrieval-Augmented fineTuning (SWAT), which involves
end-to-end finetuning on mixed data in the first stage and retraining the
classifier on the few-shot data in the second stage. Extensive experiments on
nine popular benchmarks demonstrate that SWAT significantly outperforms
previous methods by >6% accuracy.
| [
{
"version": "v1",
"created": "Mon, 17 Jun 2024 02:27:14 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Nov 2024 00:25:45 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Mar 2025 20:56:08 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Tian",
""
],
[
"Zhang",
"Huixin",
""
],
[
"Parashar",
"Shubham",
""
],
[
"Kong",
"Shu",
""
]
] | TITLE: Few-Shot Recognition via Stage-Wise Retrieval-Augmented Finetuning
ABSTRACT: Few-shot recognition (FSR) aims to train a classification model with only a
few labeled examples of each concept concerned by a downstream task, where data
annotation cost can be prohibitively high. We develop methods to solve FSR by
leveraging a pretrained Vision-Language Model (VLM). We particularly explore
retrieval-augmented learning (RAL), which retrieves open data, e.g., the VLM's
pretraining dataset, to learn models for better serving downstream tasks. RAL
has been studied in zero-shot recognition but remains under-explored in FSR.
Although applying RAL to FSR may seem straightforward, we observe interesting
and novel challenges and opportunities. First, somewhat surprisingly,
finetuning a VLM on a large amount of retrieved data underperforms
state-of-the-art zero-shot methods. This is due to the imbalanced distribution
of retrieved data and its domain gaps with the few-shot examples in the
downstream task. Second, more surprisingly, we find that simply finetuning a
VLM solely on few-shot examples significantly outperforms previous FSR methods,
and finetuning on the mix of retrieved and few-shot data yields even better
results. Third, to mitigate the imbalanced distribution and domain gap issues,
we propose Stage-Wise retrieval-Augmented fineTuning (SWAT), which involves
end-to-end finetuning on mixed data in the first stage and retraining the
classifier on the few-shot data in the second stage. Extensive experiments on
nine popular benchmarks demonstrate that SWAT significantly outperforms
previous methods by >6% accuracy.
|
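A schematic of the two-stage recipe summarized in the record above, written as a hedged PyTorch sketch (the backbone, classifier, data loaders, and hyperparameters are placeholders; this is not the authors' implementation): stage 1 finetunes end to end on the mix of retrieved and few-shot data, and stage 2 freezes the backbone and retrains only the classifier on the few-shot examples.

```python
# Hedged sketch of stage-wise finetuning; `backbone`, `classifier`, and the
# loaders are assumed to be provided by the caller.
import torch

def stagewise_finetuning(backbone, classifier, mixed_loader, fewshot_loader,
                         epochs_stage1=10, epochs_stage2=10, lr=1e-4):
    loss_fn = torch.nn.CrossEntropyLoss()

    # Stage 1: end-to-end finetuning on retrieved + few-shot data.
    params = list(backbone.parameters()) + list(classifier.parameters())
    opt = torch.optim.AdamW(params, lr=lr)
    for _ in range(epochs_stage1):
        for images, labels in mixed_loader:
            opt.zero_grad()
            loss_fn(classifier(backbone(images)), labels).backward()
            opt.step()

    # Stage 2: freeze the backbone and retrain the classifier on few-shot data
    # only, counteracting the imbalance and domain gap of the retrieved data.
    for p in backbone.parameters():
        p.requires_grad_(False)
    opt = torch.optim.AdamW(classifier.parameters(), lr=lr)
    for _ in range(epochs_stage2):
        for images, labels in fewshot_loader:
            opt.zero_grad()
            loss_fn(classifier(backbone(images)), labels).backward()
            opt.step()
    return backbone, classifier
```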
2406.14855 | Jie Ren | Jie Ren, Kangrui Chen, Yingqian Cui, Shenglai Zeng, Hui Liu, Yue Xing,
Jiliang Tang, Lingjuan Lyu | Six-CD: Benchmarking Concept Removals for Benign Text-to-image Diffusion
Models | Accepted by CVPR 2025 | null | null | null | cs.CV cs.CR | http://creativecommons.org/licenses/by/4.0/ | Text-to-image (T2I) diffusion models have shown exceptional capabilities in
generating images that closely correspond to textual prompts. However, the
advancement of T2I diffusion models presents significant risks, as the models
could be exploited for malicious purposes, such as generating images with
violence or nudity, or creating unauthorized portraits of public figures in
inappropriate contexts. To mitigate these risks, concept removal methods have
been proposed. These methods aim to modify diffusion models to prevent the
generation of malicious and unwanted concepts. Despite these efforts, existing
research faces several challenges: (1) a lack of consistent comparisons on a
comprehensive dataset, (2) ineffective prompts in harmful and nudity concepts,
(3) overlooked evaluation of the ability to generate the benign part within
prompts containing malicious concepts. To address these gaps, we propose to
benchmark the concept removal methods by introducing a new dataset, Six-CD,
along with a novel evaluation metric. In this benchmark, we conduct a thorough
evaluation of concept removals, with the experimental observations and
discussions offering valuable insights in the field.
| [
{
"version": "v1",
"created": "Fri, 21 Jun 2024 03:58:44 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 19:56:34 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ren",
"Jie",
""
],
[
"Chen",
"Kangrui",
""
],
[
"Cui",
"Yingqian",
""
],
[
"Zeng",
"Shenglai",
""
],
[
"Liu",
"Hui",
""
],
[
"Xing",
"Yue",
""
],
[
"Tang",
"Jiliang",
""
],
[
"Lyu",
"Lingjuan",
""
]
] | TITLE: Six-CD: Benchmarking Concept Removals for Benign Text-to-image Diffusion
Models
ABSTRACT: Text-to-image (T2I) diffusion models have shown exceptional capabilities in
generating images that closely correspond to textual prompts. However, the
advancement of T2I diffusion models presents significant risks, as the models
could be exploited for malicious purposes, such as generating images with
violence or nudity, or creating unauthorized portraits of public figures in
inappropriate contexts. To mitigate these risks, concept removal methods have
been proposed. These methods aim to modify diffusion models to prevent the
generation of malicious and unwanted concepts. Despite these efforts, existing
research faces several challenges: (1) a lack of consistent comparisons on a
comprehensive dataset, (2) ineffective prompts in harmful and nudity concepts,
(3) overlooked evaluation of the ability to generate the benign part within
prompts containing malicious concepts. To address these gaps, we propose to
benchmark the concept removal methods by introducing a new dataset, Six-CD,
along with a novel evaluation metric. In this benchmark, we conduct a thorough
evaluation of concept removals, with the experimental observations and
discussions offering valuable insights in the field.
|
2406.18158 | Shengyi Qian | Shengyi Qian, Kaichun Mo, Valts Blukis, David F. Fouhey, Dieter Fox,
Ankit Goyal | 3D-MVP: 3D Multiview Pretraining for Robotic Manipulation | CVPR 2025 | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent works have shown that visual pretraining on egocentric datasets using
masked autoencoders (MAE) can improve generalization for downstream robotics
tasks. However, these approaches pretrain only on 2D images, while many
robotics applications require 3D scene understanding. In this work, we propose
3D-MVP, a novel approach for 3D Multi-View Pretraining using masked
autoencoders. We leverage Robotic View Transformer (RVT), which uses a
multi-view transformer to understand the 3D scene and predict gripper pose
actions. We split RVT's multi-view transformer into visual encoder and action
decoder, and pretrain its visual encoder using masked autoencoding on
large-scale 3D datasets such as Objaverse. We evaluate 3D-MVP on a suite of
virtual robot manipulation tasks and demonstrate improved performance over
baselines. Our results suggest that 3D-aware pretraining is a promising
approach to improve generalization of vision-based robotic manipulation
policies. Project site: https://jasonqsy.github.io/3DMVP
| [
{
"version": "v1",
"created": "Wed, 26 Jun 2024 08:17:59 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 00:39:57 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Qian",
"Shengyi",
""
],
[
"Mo",
"Kaichun",
""
],
[
"Blukis",
"Valts",
""
],
[
"Fouhey",
"David F.",
""
],
[
"Fox",
"Dieter",
""
],
[
"Goyal",
"Ankit",
""
]
] | TITLE: 3D-MVP: 3D Multiview Pretraining for Robotic Manipulation
ABSTRACT: Recent works have shown that visual pretraining on egocentric datasets using
masked autoencoders (MAE) can improve generalization for downstream robotics
tasks. However, these approaches pretrain only on 2D images, while many
robotics applications require 3D scene understanding. In this work, we propose
3D-MVP, a novel approach for 3D Multi-View Pretraining using masked
autoencoders. We leverage Robotic View Transformer (RVT), which uses a
multi-view transformer to understand the 3D scene and predict gripper pose
actions. We split RVT's multi-view transformer into visual encoder and action
decoder, and pretrain its visual encoder using masked autoencoding on
large-scale 3D datasets such as Objaverse. We evaluate 3D-MVP on a suite of
virtual robot manipulation tasks and demonstrate improved performance over
baselines. Our results suggest that 3D-aware pretraining is a promising
approach to improve generalization of vision-based robotic manipulation
policies. Project site: https://jasonqsy.github.io/3DMVP
|
2406.20085 | Yicheng Chen | Yicheng Chen, Xiangtai Li, Yining Li, Yanhong Zeng, Jianzong Wu,
Xiangyu Zhao, Kai Chen | Auto Cherry-Picker: Learning from High-quality Generative Data Driven by
Language | Accepted to CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion models can generate realistic and diverse images, potentially
facilitating data availability for data-intensive perception tasks. However,
leveraging these models to boost performance on downstream tasks with synthetic
data poses several challenges, including aligning with real data distribution,
scaling synthetic sample volumes, and ensuring their quality. To bridge these
gaps, we present \textbf{A}uto \textbf{C}herry-\textbf{P}icker (ACP), a novel
framework that generates high-quality cross-modality training samples at scale
to augment perception and multi-modal training. ACP first uses LLMs to sample
descriptions and layouts based on object combinations from real data priors,
eliminating the need for ground truth image captions or annotations. Next, we
use an off-the-shelf controllable diffusion model to generate multiple images.
Then, the generated data are refined using a comprehensively designed metric,
Composite Layout and Image Score (CLIS), to ensure quality. Our customized
synthetic high-quality samples boost performance in various scenarios,
especially in addressing challenges associated with long-tailed distribution
and imbalanced datasets. Experiment results on downstream tasks demonstrate
that ACP can significantly improve the performance of existing models. In
addition, we find a positive correlation between CLIS and performance gains in
downstream tasks. This finding highlights the potential of such evaluation
metrics for various visual perception and MLLM tasks.
| [
{
"version": "v1",
"created": "Fri, 28 Jun 2024 17:53:18 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Nov 2024 17:13:15 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 09:58:24 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Yicheng",
""
],
[
"Li",
"Xiangtai",
""
],
[
"Li",
"Yining",
""
],
[
"Zeng",
"Yanhong",
""
],
[
"Wu",
"Jianzong",
""
],
[
"Zhao",
"Xiangyu",
""
],
[
"Chen",
"Kai",
""
]
] | TITLE: Auto Cherry-Picker: Learning from High-quality Generative Data Driven by
Language
ABSTRACT: Diffusion models can generate realistic and diverse images, potentially
facilitating data availability for data-intensive perception tasks. However,
leveraging these models to boost performance on downstream tasks with synthetic
data poses several challenges, including aligning with real data distribution,
scaling synthetic sample volumes, and ensuring their quality. To bridge these
gaps, we present \textbf{A}uto \textbf{C}herry-\textbf{P}icker (ACP), a novel
framework that generates high-quality cross-modality training samples at scale
to augment perception and multi-modal training. ACP first uses LLMs to sample
descriptions and layouts based on object combinations from real data priors,
eliminating the need for ground truth image captions or annotations. Next, we
use an off-the-shelf controllable diffusion model to generate multiple images.
Then, the generated data are refined using a comprehensively designed metric,
Composite Layout and Image Score (CLIS), to ensure quality. Our customized
synthetic high-quality samples boost performance in various scenarios,
especially in addressing challenges associated with long-tailed distribution
and imbalanced datasets. Experiment results on downstream tasks demonstrate
that ACP can significantly improve the performance of existing models. In
addition, we find a positive correlation between CLIS and performance gains in
downstream tasks. This finding highlights the potential of such evaluation
metrics for various visual perception and MLLM tasks.
|
2407.00916 | Junfan Li | Junfan Li, Shizhong Liao | Learnability in Online Kernel Selection with Memory Constraint via
Data-dependent Regret Analysis | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online kernel selection is a fundamental problem of online kernel methods. In
this paper, we study online kernel selection with memory constraint in which
the memory of kernel selection and online prediction procedures is limited to a
fixed budget. An essential question is what is the intrinsic relationship among
online learnability, memory constraint, and data complexity? To answer the
question, it is necessary to show the trade-offs between regret and memory
constraint. Previous work gives a worst-case lower bound depending on the data
size, and shows learning is impossible within a small memory constraint. In
contrast, we present distinct results by offering data-dependent upper bounds
that rely on two data complexities: kernel alignment and the cumulative losses
of competitive hypothesis. We propose an algorithmic framework giving
data-dependent upper bounds for two types of loss functions. For the hinge loss
function, our algorithm achieves an expected upper bound depending on kernel
alignment. For smooth loss functions, our algorithm achieves a high-probability
upper bound depending on the cumulative losses of competitive hypothesis. We
also prove a matching lower bound for smooth loss functions. Our results show
that if the two data complexities are sub-linear, then learning is possible
within a small memory constraint. Our algorithmic framework depends on a new
buffer maintaining framework and a reduction from online kernel selection to
prediction with expert advice. Finally, we empirically verify the prediction
performance of our algorithms on benchmark datasets.
| [
{
"version": "v1",
"created": "Mon, 1 Jul 2024 02:42:27 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Jul 2024 03:42:46 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 14:42:58 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Junfan",
""
],
[
"Liao",
"Shizhong",
""
]
] | TITLE: Learnability in Online Kernel Selection with Memory Constraint via
Data-dependent Regret Analysis
ABSTRACT: Online kernel selection is a fundamental problem of online kernel
methods. In this paper, we study online kernel selection with memory constraint
in which the memory of kernel selection and online prediction procedures is
limited to a fixed budget. An essential question is what is the intrinsic
relationship among online learnability, memory constraint, and data complexity?
To answer the question, it is necessary to show the trade-offs between regret
and memory constraint. Previous work gives a worst-case lower bound depending
on the data size, and shows learning is impossible within a small memory
constraint. In contrast, we present distinct results by offering data-dependent
upper bounds that rely on two data complexities: kernel alignment and the
cumulative losses of competitive hypothesis. We propose an algorithmic
framework giving data-dependent upper bounds for two types of loss functions.
For the hinge loss function, our algorithm achieves an expected upper bound
depending on kernel alignment. For smooth loss functions, our algorithm
achieves a high-probability upper bound depending on the cumulative losses of
competitive hypothesis. We also prove a matching lower bound for smooth loss
functions. Our results show that if the two data complexities are sub-linear,
then learning is possible within a small memory constraint. Our algorithmic
framework depends on a new buffer maintaining framework and a reduction from
online kernel selection to prediction with expert advice. Finally, we
empirically verify the prediction performance of our algorithms on benchmark
datasets.
|
2407.03695 | Xinyu Yang | Xinyu Yang, Xiaochen Ma, Xuekang Zhu, Bo Du, Lei Su, Bingkui Tong,
Zeyu Lei, Jizhe Zhou | M^3:Manipulation Mask Manufacturer for Arbitrary-Scale Super-Resolution
Mask | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of image manipulation localization (IML), the small quantity and
poor quality of existing datasets have always been major issues. A dataset
containing various types of manipulations will greatly help improve the
accuracy of IML models. Images on the internet (such as those on Baidu Tieba's
PS Bar) are manipulated using various techniques, and creating a dataset from
these images will significantly enrich the types of manipulations in our data.
However, images on the internet suffer from resolution and clarity issues, and
the masks obtained by simply subtracting the manipulated image from the
original contain various noises. These noises are difficult to remove,
rendering the masks unusable for IML models. Inspired by the field of change
detection, we treat the original and manipulated images as changes over time
for the same image and view the data generation task as a change detection
task. However, due to clarity issues between images, conventional change
detection models perform poorly. Therefore, we introduced a super-resolution
module and proposed the Manipulation Mask Manufacturer (MMM) framework. It
enhances the resolution of both the original and tampered images, thereby
improving image details for better comparison. Simultaneously, the framework
converts the original and tampered images into feature embeddings and
concatenates them, effectively modeling the context. Additionally, we created
the Manipulation Mask Manufacturer Dataset (MMMD), a dataset that covers a wide
range of manipulation techniques. We aim to contribute to the fields of image
forensics and manipulation detection by providing more realistic manipulation
data through MMM and MMMD. The code and datasets, together with detailed
information about MMMD and its download link, will be made available.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2024 07:30:41 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 10:50:57 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yang",
"Xinyu",
""
],
[
"Ma",
"Xiaochen",
""
],
[
"Zhu",
"Xuekang",
""
],
[
"Du",
"Bo",
""
],
[
"Su",
"Lei",
""
],
[
"Tong",
"Bingkui",
""
],
[
"Lei",
"Zeyu",
""
],
[
"Zhou",
"Jizhe",
""
]
] | TITLE: M^3:Manipulation Mask Manufacturer for Arbitrary-Scale Super-Resolution
Mask
ABSTRACT: In the field of image manipulation localization (IML), the small quantity and
poor quality of existing datasets have always been major issues. A dataset
containing various types of manipulations will greatly help improve the
accuracy of IML models. Images on the internet (such as those on Baidu Tieba's
PS Bar) are manipulated using various techniques, and creating a dataset from
these images will significantly enrich the types of manipulations in our data.
However, images on the internet suffer from resolution and clarity issues, and
the masks obtained by simply subtracting the manipulated image from the
original contain various noises. These noises are difficult to remove,
rendering the masks unusable for IML models. Inspired by the field of change
detection, we treat the original and manipulated images as changes over time
for the same image and view the data generation task as a change detection
task. However, due to clarity issues between images, conventional change
detection models perform poorly. Therefore, we introduced a super-resolution
module and proposed the Manipulation Mask Manufacturer (MMM) framework. It
enhances the resolution of both the original and tampered images, thereby
improving image details for better comparison. Simultaneously, the framework
converts the original and tampered images into feature embeddings and
concatenates them, effectively modeling the context. Additionally, we created
the Manipulation Mask Manufacturer Dataset (MMMD), a dataset that covers a wide
range of manipulation techniques. We aim to contribute to the fields of image
forensics and manipulation detection by providing more realistic manipulation
data through MMM and MMMD. The code and datasets, together with detailed
information about MMMD and its download link, will be made available.
|
2407.11905 | Andrej \v{C}op | Andrej \v{C}op, Bla\v{z} Bertalani\v{c}, Carolina Fortuna | An Overview and Solution for Democratizing AI Workflows at the Network
Edge | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the process of democratization of the network edge, hardware and
software for networks are becoming available to the public, overcoming the
confines of traditional cloud providers and network operators. This trend,
coupled with the increasing importance of AI in 6G and beyond cellular
networks, presents opportunities for innovative AI applications and systems at
the network edge. While AI models and services are well-managed in cloud
systems, achieving similar maturity for serving network needs remains an open
challenge. Existing open solutions are emerging and are yet to consider
democratization requirements. In this work, we identify key requirements for
democratization and propose NAOMI, a solution for democratizing AI/ML workflows
at the network edge designed based on those requirements. Guided by the
functionality and overlap analysis of the O-RAN AI/ML workflow architecture and
MLOps systems, coupled with the survey of open-source AI/ML tools, we develop a
modular, scalable, and distributed hardware architecture-independent solution.
NAOMI leverages state-of-the-art open-source tools and can be deployed on
distributed clusters of heterogeneous devices. The results show that NAOMI
performs up to 40% better in deployment time and up to 73% faster in AI/ML
workflow execution for larger datasets compared to AI/ML Framework, a
representative open network access solution, while performing inference and
utilizing resources on par with its counterpart.
| [
{
"version": "v1",
"created": "Tue, 16 Jul 2024 16:38:47 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Feb 2025 13:04:20 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 14:30:31 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Čop",
"Andrej",
""
],
[
"Bertalanič",
"Blaž",
""
],
[
"Fortuna",
"Carolina",
""
]
] | TITLE: An Overview and Solution for Democratizing AI Workflows at the Network
Edge
ABSTRACT: With the process of democratization of the network edge, hardware and
software for networks are becoming available to the public, overcoming the
confines of traditional cloud providers and network operators. This trend,
coupled with the increasing importance of AI in 6G and beyond cellular
networks, presents opportunities for innovative AI applications and systems at
the network edge. While AI models and services are well-managed in cloud
systems, achieving similar maturity for serving network needs remains an open
challenge. Existing open solutions are emerging and are yet to consider
democratization requirements. In this work, we identify key requirements for
democratization and propose NAOMI, a solution for democratizing AI/ML workflows
at the network edge designed based on those requirements. Guided by the
functionality and overlap analysis of the O-RAN AI/ML workflow architecture and
MLOps systems, coupled with the survey of open-source AI/ML tools, we develop a
modular, scalable, and distributed hardware architecture-independent solution.
NAOMI leverages state-of-the-art open-source tools and can be deployed on
distributed clusters of heterogeneous devices. The results show that NAOMI
performs up to 40% better in deployment time and up to 73% faster in AI/ML
workflow execution for larger datasets compared to AI/ML Framework, a
representative open network access solution, while performing inference and
utilizing resources on par with its counterpart.
|
2407.12781 | Sherwin Bahmani | Sherwin Bahmani, Ivan Skorokhodov, Aliaksandr Siarohin, Willi
Menapace, Guocheng Qian, Michael Vasilkovsky, Hsin-Ying Lee, Chaoyang Wang,
Jiaxu Zou, Andrea Tagliasacchi, David B. Lindell, Sergey Tulyakov | VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control | ICLR 2025; Project Page: https://snap-research.github.io/vd3d/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Modern text-to-video synthesis models demonstrate coherent, photorealistic
generation of complex videos from a text description. However, most existing
models lack fine-grained control over camera movement, which is critical for
downstream applications related to content creation, visual effects, and 3D
vision. Recently, new methods demonstrate the ability to generate videos with
controllable camera poses; these techniques leverage pre-trained U-Net-based
diffusion models that explicitly disentangle spatial and temporal generation.
Still, no existing approach enables camera control for new, transformer-based
video diffusion models that process spatial and temporal information jointly.
Here, we propose to tame video transformers for 3D camera control using a
ControlNet-like conditioning mechanism that incorporates spatiotemporal camera
embeddings based on Pl\"ucker coordinates. The approach demonstrates
state-of-the-art performance for controllable video generation after
fine-tuning on the RealEstate10K dataset. To the best of our knowledge, our
work is the first to enable camera control for transformer-based video
diffusion models.
| [
{
"version": "v1",
"created": "Wed, 17 Jul 2024 17:59:05 GMT"
},
{
"version": "v2",
"created": "Sat, 20 Jul 2024 19:43:10 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Mar 2025 15:40:42 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Bahmani",
"Sherwin",
""
],
[
"Skorokhodov",
"Ivan",
""
],
[
"Siarohin",
"Aliaksandr",
""
],
[
"Menapace",
"Willi",
""
],
[
"Qian",
"Guocheng",
""
],
[
"Vasilkovsky",
"Michael",
""
],
[
"Lee",
"Hsin-Ying",
""
],
[
"Wang",
"Chaoyang",
""
],
[
"Zou",
"Jiaxu",
""
],
[
"Tagliasacchi",
"Andrea",
""
],
[
"Lindell",
"David B.",
""
],
[
"Tulyakov",
"Sergey",
""
]
] | TITLE: VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control
ABSTRACT: Modern text-to-video synthesis models demonstrate coherent, photorealistic
generation of complex videos from a text description. However, most existing
models lack fine-grained control over camera movement, which is critical for
downstream applications related to content creation, visual effects, and 3D
vision. Recently, new methods demonstrate the ability to generate videos with
controllable camera poses; these techniques leverage pre-trained U-Net-based
diffusion models that explicitly disentangle spatial and temporal generation.
Still, no existing approach enables camera control for new, transformer-based
video diffusion models that process spatial and temporal information jointly.
Here, we propose to tame video transformers for 3D camera control using a
ControlNet-like conditioning mechanism that incorporates spatiotemporal camera
embeddings based on Pl\"ucker coordinates. The approach demonstrates
state-of-the-art performance for controllable video generation after
fine-tuning on the RealEstate10K dataset. To the best of our knowledge, our
work is the first to enable camera control for transformer-based video
diffusion models.
|
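For the Plücker-coordinate camera embedding mentioned in the VD3D record above, a common construction (our sketch under standard pinhole-camera conventions, not the paper's code) encodes each pixel's camera ray with origin o and direction d as the 6-vector (d, o x d):

```python
# Per-pixel Plucker ray embedding from intrinsics K and a camera-to-world pose.
import numpy as np

def plucker_embedding(K, c2w, height, width):
    """Return a (height, width, 6) array of Plucker coordinates for every pixel."""
    i, j = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.stack([i + 0.5, j + 0.5, np.ones_like(i)], axis=-1).astype(np.float64)

    # Back-project pixels to camera-space directions, then rotate to world space.
    dirs = (pix @ np.linalg.inv(K).T) @ c2w[:3, :3].T
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    origins = np.broadcast_to(c2w[:3, 3], dirs.shape)
    moment = np.cross(origins, dirs)                 # o x d
    return np.concatenate([dirs, moment], axis=-1)

# Example with an identity pose and a simple pinhole intrinsic matrix.
K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
emb = plucker_embedding(K, np.eye(4), 64, 64)
print(emb.shape)  # (64, 64, 6)
```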
2407.15946 | Davide Cugini | Davide Cugini and Andr\'e Timpanaro and Giacomo Livan and Giacomo
Guarnieri | Universal emergence of local Zipf's law | 6+4 pages, 3+1 figures | null | null | null | physics.soc-ph physics.data-an | http://creativecommons.org/licenses/by/4.0/ | A plethora of natural and socio-economic phenomena share a striking
statistical regularity, that is, the magnitude of elements decreases with a
power law as a function of their position in a ranking of magnitude. Such
regularity is known as Zipf-Mandelbrot law (ZM), and plenty of problem-specific
explanations for its emergence have been provided in different fields. Yet, an
explanation for ZM ubiquity is currently lacking. In this paper we first
provide an analytical expression for the cumulants of any ranked sample of
i.i.d. random variables once sorted in decreasing order. Then we make use of
this result to rigorously demonstrate that, whenever a small fraction of such
ranked dataset is considered, it becomes statistically indistinguishable from a
ZM law. We finally validate our results against several relevant examples.
| [
{
"version": "v1",
"created": "Mon, 22 Jul 2024 18:00:22 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 14:12:31 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Cugini",
"Davide",
""
],
[
"Timpanaro",
"André",
""
],
[
"Livan",
"Giacomo",
""
],
[
"Guarnieri",
"Giacomo",
""
]
] | TITLE: Universal emergence of local Zipf's law
ABSTRACT: A plethora of natural and socio-economic phenomena share a striking
statistical regularity, that is, the magnitude of elements decreases with a
power law as a function of their position in a ranking of magnitude. Such
regularity is known as Zipf-Mandelbrot law (ZM), and plenty of problem-specific
explanations for its emergence have been provided in different fields. Yet, an
explanation for ZM ubiquity is currently lacking. In this paper we first
provide an analytical expression for the cumulants of any ranked sample of
i.i.d. random variables once sorted in decreasing order. Then we make use of
this result to rigorously demonstrate that, whenever a small fraction of such
ranked dataset is considered, it becomes statistically indistinguishable from a
ZM law. We finally validate our results against several relevant examples.
|
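The claim in the record above can be eyeballed numerically. The short sketch below (our illustration, with an arbitrary choice of lognormal samples) ranks i.i.d. draws in decreasing order, keeps only the top 1% of the ranking, and fits a line to magnitude versus rank in log-log coordinates; over that small fraction the relation is close to linear, i.e. approximately Zipf-Mandelbrot.

```python
# Numerical illustration (not the paper's code) of local Zipf-like behaviour in
# the top fraction of a ranked i.i.d. sample.
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.lognormal(size=100_000))[::-1]   # ranked i.i.d. sample, largest first
top = x[: len(x) // 100]                         # keep a small fraction of the ranking
ranks = np.arange(1, len(top) + 1)

slope, intercept = np.polyfit(np.log(ranks), np.log(top), 1)
print(f"fitted log-log slope on the top 1%: {slope:.2f}")
```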
2407.16272 | Pakizar Shamoi Dr | Malika Ziyada and Pakizar Shamoi | Video Popularity in Social Media: Impact of Emotions, Raw Features and
Viewer Comments | the paper has been submitted to IEEE SCIS ISIS 2024 for consideration | 2024 Joint 13th International Conference on Soft Computing and
Intelligent Systems and 25th International Symposium on Advanced Intelligent
Systems (SCIS&ISIS), Himeji, Japan, 2024, pp. 1-7 | 10.1109/SCISISIS61014.2024.10759978 | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Internet has significantly contributed to the increase in social media users.
Nowadays, informative content is presented along with entertainment on the web.
Highlighting environmental issues on social networks is crucial, given their
significance as major global problems. This study examines the popularity
determinants for short environmental videos on social media, focusing on the
comparative influence of raw video features and viewer engagement metrics. We
collected a dataset of videos along with associated popularity metrics such as
likes, views, shares, and comments per day. We also extracted video
characteristics, including duration, text post length, emotional and sentiment
analysis using the VADER and text2emotion models, and color palette brightness.
Our analysis consisted of two main experiments: one evaluating the correlation
between raw video features and popularity metrics and another assessing the
impact of viewer comments and their sentiments and emotions on video
popularity. We employed a ridge regression classifier with standard scaling to
predict the popularity, categorizing videos as popular or not based on the
median views and likes per day. The findings reveal that viewer comments and
reactions (accuracy of 0.8) have a more substantial influence on video
popularity compared to raw video features (accuracy of 0.67). Significant
correlations include a positive relationship between the emotion of sadness in
posts and the number of likes and negative correlations between sentiment
scores, and both likes and shares. This research highlights the complex
relationship between content features and public perception in shaping the
popularity of environmental messages on social media.
| [
{
"version": "v1",
"created": "Tue, 23 Jul 2024 08:25:18 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ziyada",
"Malika",
""
],
[
"Shamoi",
"Pakizar",
""
]
] | TITLE: Video Popularity in Social Media: Impact of Emotions, Raw Features and
Viewer Comments
ABSTRACT: The Internet has significantly contributed to the increase in social media users.
Nowadays, informative content is presented along with entertainment on the web.
Highlighting environmental issues on social networks is crucial, given their
significance as major global problems. This study examines the popularity
determinants for short environmental videos on social media, focusing on the
comparative influence of raw video features and viewer engagement metrics. We
collected a dataset of videos along with associated popularity metrics such as
likes, views, shares, and comments per day. We also extracted video
characteristics, including duration, text post length, emotional and sentiment
analysis using the VADER and text2emotion models, and color palette brightness.
Our analysis consisted of two main experiments: one evaluating the correlation
between raw video features and popularity metrics and another assessing the
impact of viewer comments and their sentiments and emotions on video
popularity. We employed a ridge regression classifier with standard scaling to
predict the popularity, categorizing videos as popular or not based on the
median views and likes per day. The findings reveal that viewer comments and
reactions (accuracy of 0.8) have a more substantial influence on video
popularity compared to raw video features (accuracy of 0.67). Significant
correlations include a positive relationship between the emotion of sadness in
posts and the number of likes and negative correlations between sentiment
scores, and both likes and shares. This research highlights the complex
relationship between content features and public perception in shaping the
popularity of environmental messages on social media.
|
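The prediction protocol described in the record above is essentially a median-split classification with a scaled ridge classifier. A hedged sketch with made-up feature names and synthetic data (not the authors' pipeline):

```python
# Median-split "popular vs. not" labels, then StandardScaler + RidgeClassifier.
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(5, 60, n),       # hypothetical: video duration (s)
    rng.integers(0, 300, n),     # hypothetical: text post length
    rng.uniform(0, 1, n),        # hypothetical: sadness score of comments
])
views_per_day = rng.lognormal(mean=3.0, sigma=1.0, size=n)
y = (views_per_day > np.median(views_per_day)).astype(int)   # median split

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), RidgeClassifier())
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```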
2407.16985 | Junjing Zheng | Junjing Zheng, Xinyu Zhang, Weidong Jiang, Xiangfeng Qiu, Mingjian Ren | Sparse Tensor PCA via Tensor Decomposition for Unsupervised Feature
Selection | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recently, introducing Tensor Decomposition (TD) techniques into unsupervised
feature selection (UFS) has been an emerging research topic. A tensor structure
is beneficial for mining the relations between different modes and helps
relieve the computation burden. However, while existing methods exploit TD to
preserve the data tensor structure, they do not consider the influence of data
orientation and thus have difficulty in handling orientation-specific data such
as time series. To solve the above problem, we utilize the
orientation-dependent tensor-tensor product from Tensor Singular Value
Decomposition based on *M-product (T-SVDM) and extend the one-dimensional
Sparse Principal Component Analysis (SPCA) to a tensor form. The proposed
sparse tensor PCA model can constrain sparsity at the specified mode and yield
sparse tensor principal components, enhancing flexibility and accuracy in
learning feature relations. To ensure fast convergence and a flexible
description of feature correlation, we develop a convex version specially
designed for general UFS tasks and propose an efficient slice-by-slice
algorithm that performs dual optimization in the transform domain. Experimental
results on real-world datasets demonstrate the effectiveness and remarkable
computational efficiency of the proposed method for tensor data of diverse
structures over state-of-the-art methods. With a proper combination of data
orientation and transform domain, our method is promising for various
applications. The codes related to our proposed methods and the experiments are
available at https://github.com/zjj20212035/STPCA.git.
| [
{
"version": "v1",
"created": "Wed, 24 Jul 2024 04:04:56 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 09:55:59 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zheng",
"Junjing",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Jiang",
"Weidong",
""
],
[
"Qiu",
"Xiangfeng",
""
],
[
"Ren",
"Mingjian",
""
]
] | TITLE: Sparse Tensor PCA via Tensor Decomposition for Unsupervised Feature
Selection
ABSTRACT: Recently, introducing Tensor Decomposition (TD) techniques into unsupervised
feature selection (UFS) has been an emerging research topic. A tensor structure
is beneficial for mining the relations between different modes and helps
relieve the computation burden. However, while existing methods exploit TD to
preserve the data tensor structure, they do not consider the influence of data
orientation and thus have difficulty in handling orientation-specific data such
as time series. To solve the above problem, we utilize the
orientation-dependent tensor-tensor product from Tensor Singular Value
Decomposition based on *M-product (T-SVDM) and extend the one-dimensional
Sparse Principal Component Analysis (SPCA) to a tensor form. The proposed
sparse tensor PCA model can constrain sparsity at the specified mode and yield
sparse tensor principal components, enhancing flexibility and accuracy in
learning feature relations. To ensure fast convergence and a flexible
description of feature correlation, we develop a convex version specially
designed for general UFS tasks and propose an efficient slice-by-slice
algorithm that performs dual optimization in the transform domain. Experimental
results on real-world datasets demonstrate the effectiveness and remarkable
computational efficiency of the proposed method for tensor data of diverse
structures over state-of-the-art methods. With a proper combination of data
orientation and transform domain, our method is promising for various
applications. The codes related to our proposed methods and the experiments are
available at https://github.com/zjj20212035/STPCA.git.
|
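The record above generalizes one-dimensional Sparse PCA to tensors via the T-SVDM product. As background only, and not the proposed tensor method, the following is a minimal sketch of the classical matrix sparse-PCA building block it extends: a rank-one SVD fit with an L1 soft-threshold on the loading vector. The penalty value, data sizes, and the alternating scheme are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, lam):
    """Element-wise soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_pc(X, lam=1.0, n_iter=100, tol=1e-6):
    """Leading sparse principal component of a centered data matrix X
    (n_samples x n_features) via rank-one alternating minimization with
    an L1 penalty on the loading vector."""
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    v = vt[0]                                  # ordinary leading loading as a start
    for _ in range(n_iter):
        u = X @ v
        u /= np.linalg.norm(u) + 1e-12
        v_new = soft_threshold(X.T @ u, lam)
        if np.linalg.norm(v_new) < 1e-12:      # penalty zeroed everything out
            return v_new
        v_new /= np.linalg.norm(v_new)
        if np.linalg.norm(v_new - v) < tol:
            return v_new
        v = v_new
    return v

# toy usage: only the first 5 features share a common factor
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
X[:, :5] += 3 * rng.normal(size=(200, 1))
loadings = sparse_pc(X - X.mean(axis=0), lam=3.0)
print(np.nonzero(np.abs(loadings) > 1e-8)[0])  # typically (close to) [0 1 2 3 4]
```

Unsupervised feature selection then amounts to ranking features by the magnitude of their sparse loadings; the paper's contribution is doing this at a chosen tensor mode in a transform domain rather than on a flattened matrix.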
2408.01536 | Daniel Musekamp | Daniel Musekamp, Marimuthu Kalimuthu, David Holzm\"uller, Makoto
Takamoto, Mathias Niepert | Active Learning for Neural PDE Solvers | null | null | null | null | cs.LG cs.AI cs.CE cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solving partial differential equations (PDEs) is a fundamental problem in
science and engineering. While neural PDE solvers can be more efficient than
established numerical solvers, they often require large amounts of training
data that is costly to obtain. Active learning (AL) could help surrogate models
reach the same accuracy with smaller training sets by querying classical
solvers with more informative initial conditions and PDE parameters. While AL
is more common in other domains, it has yet to be studied extensively for
neural PDE solvers. To bridge this gap, we introduce AL4PDE, a modular and
extensible active learning benchmark. It provides multiple parametric PDEs and
state-of-the-art surrogate models for the solver-in-the-loop setting, enabling
the evaluation of existing and the development of new AL methods for neural PDE
solving. We use the benchmark to evaluate batch active learning algorithms such
as uncertainty- and feature-based methods. We show that AL reduces the average
error by up to 71% compared to random sampling and significantly reduces
worst-case errors. Moreover, AL generates similar datasets across repeated
runs, with consistent distributions over the PDE parameters and initial
conditions. The acquired datasets are reusable, providing benefits for
surrogate models not involved in the data generation.
| [
{
"version": "v1",
"created": "Fri, 2 Aug 2024 18:48:58 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 10:31:44 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Musekamp",
"Daniel",
""
],
[
"Kalimuthu",
"Marimuthu",
""
],
[
"Holzmüller",
"David",
""
],
[
"Takamoto",
"Makoto",
""
],
[
"Niepert",
"Mathias",
""
]
] | TITLE: Active Learning for Neural PDE Solvers
ABSTRACT: Solving partial differential equations (PDEs) is a fundamental problem in
science and engineering. While neural PDE solvers can be more efficient than
established numerical solvers, they often require large amounts of training
data that is costly to obtain. Active learning (AL) could help surrogate models
reach the same accuracy with smaller training sets by querying classical
solvers with more informative initial conditions and PDE parameters. While AL
is more common in other domains, it has yet to be studied extensively for
neural PDE solvers. To bridge this gap, we introduce AL4PDE, a modular and
extensible active learning benchmark. It provides multiple parametric PDEs and
state-of-the-art surrogate models for the solver-in-the-loop setting, enabling
the evaluation of existing and the development of new AL methods for neural PDE
solving. We use the benchmark to evaluate batch active learning algorithms such
as uncertainty- and feature-based methods. We show that AL reduces the average
error by up to 71% compared to random sampling and significantly reduces
worst-case errors. Moreover, AL generates similar datasets across repeated
runs, with consistent distributions over the PDE parameters and initial
conditions. The acquired datasets are reusable, providing benefits for
surrogate models not involved in the data generation.
|
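The abstract above evaluates batch active learning for neural PDE surrogates. The sketch below is a generic pool-based loop with an ensemble-variance acquisition, not the AL4PDE benchmark or its API; the analytic "solver", polynomial surrogate, batch size, and round count are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def solver(theta):
    """Stand-in for an expensive classical PDE solve: maps a PDE
    parameter to a scalar quantity of interest."""
    return np.sin(3 * theta) + 0.1 * theta ** 2

def fit_ensemble(x, y, n_members=5, deg=4):
    """Bootstrap ensemble of cheap polynomial surrogates."""
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(x), len(x))
        members.append(np.polyfit(x[idx], y[idx], deg))
    return members

def predict(members, x):
    return np.stack([np.polyval(c, x) for c in members])      # (E, N)

# small initial design plus a pool of unlabeled candidate parameters
x_train = rng.uniform(-2, 2, 12)
y_train = solver(x_train)
pool = rng.uniform(-2, 2, 500)

for _ in range(5):                                             # AL rounds
    members = fit_ensemble(x_train, y_train)
    scores = predict(members, pool).var(axis=0)                # uncertainty
    picked = np.argsort(-scores)[:8]                           # batch query
    x_train = np.concatenate([x_train, pool[picked]])
    y_train = np.concatenate([y_train, solver(pool[picked])])  # expensive labels
    pool = np.delete(pool, picked)

test_x = np.linspace(-2, 2, 200)
test_err = np.abs(predict(fit_ensemble(x_train, y_train), test_x).mean(axis=0)
                  - solver(test_x))
print(f"mean abs error after active learning: {test_err.mean():.3f}")
```

In AL4PDE the candidates are PDE parameters and initial conditions and the surrogates are neural solvers, but the selection loop has the same shape; swapping the acquisition for random sampling gives the baseline the abstract compares against.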
2408.06010 | Jisoo Kim | Jisoo Kim, Jungbin Cho, Joonho Park, Soonmin Hwang, Da Eun Kim, Geon
Kim, Youngjae Yu | DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D
Face Animation | First two authors contributed equally. This is a revised version of
the original submission, which has been accepted for publication at AAAI 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Speech-driven 3D facial animation has garnered lots of attention thanks to
its broad range of applications. Despite recent advancements in achieving
realistic lip motion, current methods fail to capture the nuanced emotional
undertones conveyed through speech and produce monotonous facial motion. These
limitations result in blunt and repetitive facial animations, reducing user
engagement and hindering their applicability. To address these challenges, we
introduce DEEPTalk, a novel approach that generates diverse and emotionally
rich 3D facial expressions directly from speech inputs. To achieve this, we
first train DEE (Dynamic Emotion Embedding), which employs probabilistic
contrastive learning to forge a joint emotion embedding space for both speech
and facial motion. This probabilistic framework captures the uncertainty in
interpreting emotions from speech and facial motion, enabling the derivation of
emotion vectors from its multifaceted space. Moreover, to generate dynamic
facial motion, we design TH-VQVAE (Temporally Hierarchical VQ-VAE) as an
expressive and robust motion prior overcoming limitations of VAEs and VQ-VAEs.
Utilizing these strong priors, we develop DEEPTalk, a talking head generator
that non-autoregressively predicts codebook indices to create dynamic facial
motion, incorporating a novel emotion consistency loss. Extensive experiments
on various datasets demonstrate the effectiveness of our approach in creating
diverse, emotionally expressive talking faces that maintain accurate lip-sync.
Our project page is available at https://whwjdqls.github.io/deeptalk_website/
| [
{
"version": "v1",
"created": "Mon, 12 Aug 2024 08:56:49 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Dec 2024 09:48:08 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 06:37:25 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Kim",
"Jisoo",
""
],
[
"Cho",
"Jungbin",
""
],
[
"Park",
"Joonho",
""
],
[
"Hwang",
"Soonmin",
""
],
[
"Kim",
"Da Eun",
""
],
[
"Kim",
"Geon",
""
],
[
"Yu",
"Youngjae",
""
]
] | TITLE: DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D
Face Animation
ABSTRACT: Speech-driven 3D facial animation has garnered lots of attention thanks to
its broad range of applications. Despite recent advancements in achieving
realistic lip motion, current methods fail to capture the nuanced emotional
undertones conveyed through speech and produce monotonous facial motion. These
limitations result in blunt and repetitive facial animations, reducing user
engagement and hindering their applicability. To address these challenges, we
introduce DEEPTalk, a novel approach that generates diverse and emotionally
rich 3D facial expressions directly from speech inputs. To achieve this, we
first train DEE (Dynamic Emotion Embedding), which employs probabilistic
contrastive learning to forge a joint emotion embedding space for both speech
and facial motion. This probabilistic framework captures the uncertainty in
interpreting emotions from speech and facial motion, enabling the derivation of
emotion vectors from its multifaceted space. Moreover, to generate dynamic
facial motion, we design TH-VQVAE (Temporally Hierarchical VQ-VAE) as an
expressive and robust motion prior overcoming limitations of VAEs and VQ-VAEs.
Utilizing these strong priors, we develop DEEPTalk, a talking head generator
that non-autoregressively predicts codebook indices to create dynamic facial
motion, incorporating a novel emotion consistency loss. Extensive experiments
on various datasets demonstrate the effectiveness of our approach in creating
diverse, emotionally expressive talking faces that maintain accurate lip-sync.
Our project page is available at https://whwjdqls.github.io/deeptalk_website/
|
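The DEEPTalk record above builds on a temporally hierarchical VQ-VAE motion prior. Purely as orientation, here is the core vector-quantization step that any VQ-VAE relies on: nearest-codebook lookup of latent vectors. Codebook size, latent dimension, and the toy inputs are assumptions, and nothing here reflects the paper's hierarchy or training losses.

```python
import numpy as np

def vector_quantize(z, codebook):
    """Replace each latent vector with its nearest codebook entry and
    return the entries plus their indices (the 'codebook indices' a
    talking-head generator could then predict)."""
    # z: (T, d) latents, codebook: (K, d)
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)   # (T, K)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 32))      # K = 256 codes of dimension 32
latents = rng.normal(size=(50, 32))        # e.g. 50 frames of motion latents
quantized, indices = vector_quantize(latents, codebook)
print(quantized.shape, indices[:5])
```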
2408.10878 | Sang-Ki Ko | Han-Jun Choi, Hyunsung Kim, Minho Lee, Minchul Jeong, Chang-Jo Kim,
Jinsung Yoon, Sang-Ki Ko | Trajectory Imputation in Multi-Agent Sports with Derivative-Accumulating
Self-Ensemble | null | null | null | null | cs.AI cs.LG cs.MA | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Multi-agent trajectory data collected from domains such as team sports often
suffer from missing values due to various factors. While many imputation
methods have been proposed for spatiotemporal data, they are not well-suited
for multi-agent sports scenarios where player movements are highly dynamic and
inter-agent interactions continuously evolve. To address these challenges, we
propose MIDAS (Multi-agent Imputer with Derivative-Accumulating Self-ensemble),
a framework that imputes multi-agent trajectories with high accuracy and
physical plausibility. It jointly predicts positions, velocities, and
accelerations through a Set Transformer-based neural network and generates
alternative estimates by recursively accumulating predicted velocity and
acceleration values. These predictions are then combined using a learnable
weighted ensemble to produce final imputed trajectories. Experiments on three
sports datasets demonstrate that MIDAS significantly outperforms existing
baselines in both positional accuracy and physical plausibility. Lastly, we
showcase use cases of MIDAS, such as approximating total distance and pass
success probability, to highlight its applicability to practical downstream
tasks that require complete tracking data.
| [
{
"version": "v1",
"created": "Tue, 20 Aug 2024 14:08:16 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Aug 2024 01:27:46 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Mar 2025 17:12:21 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Choi",
"Han-Jun",
""
],
[
"Kim",
"Hyunsung",
""
],
[
"Lee",
"Minho",
""
],
[
"Jeong",
"Minchul",
""
],
[
"Kim",
"Chang-Jo",
""
],
[
"Yoon",
"Jinsung",
""
],
[
"Ko",
"Sang-Ki",
""
]
] | TITLE: Trajectory Imputation in Multi-Agent Sports with Derivative-Accumulating
Self-Ensemble
ABSTRACT: Multi-agent trajectory data collected from domains such as team sports often
suffer from missing values due to various factors. While many imputation
methods have been proposed for spatiotemporal data, they are not well-suited
for multi-agent sports scenarios where player movements are highly dynamic and
inter-agent interactions continuously evolve. To address these challenges, we
propose MIDAS (Multi-agent Imputer with Derivative-Accumulating Self-ensemble),
a framework that imputes multi-agent trajectories with high accuracy and
physical plausibility. It jointly predicts positions, velocities, and
accelerations through a Set Transformer-based neural network and generates
alternative estimates by recursively accumulating predicted velocity and
acceleration values. These predictions are then combined using a learnable
weighted ensemble to produce final imputed trajectories. Experiments on three
sports datasets demonstrate that MIDAS significantly outperforms existing
baselines in both positional accuracy and physical plausibility. Lastly, we
showcase use cases of MIDAS, such as approximating total distance and pass
success probability, to highlight its applicability to practical downstream
tasks that require complete tracking data.
|
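To make the "derivative-accumulating self-ensemble" idea from the record above concrete, here is a simplified sketch: alternative position estimates are built by recursively accumulating predicted velocities and accelerations from an anchor state, then blended with the directly predicted positions. The time step, anchor values, Euler integration, and fixed blending weights are assumptions; MIDAS predicts these quantities with a Set Transformer and learns the ensemble weights.

```python
import numpy as np

def derivative_accumulating_estimates(p_pred, v_pred, a_pred, p0, v0, dt=0.1):
    """Candidate trajectories from predicted positions, velocities, and
    accelerations. Shapes: (T, D) for the predictions, (D,) for p0/v0."""
    p_from_v = p0 + np.cumsum(v_pred * dt, axis=0)     # integrate velocity
    v_from_a = v0 + np.cumsum(a_pred * dt, axis=0)     # integrate acceleration
    p_from_a = p0 + np.cumsum(v_from_a * dt, axis=0)   # ...then integrate again
    return p_pred, p_from_v, p_from_a

def weighted_ensemble(estimates, weights):
    """Convex combination of the candidates (weights are learned in MIDAS,
    fixed here for illustration)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * e for wi, e in zip(w, estimates))

# toy usage on a circular trajectory with a biased position prediction
T, dt = 50, 0.1
t = np.arange(1, T + 1)[:, None] * dt
p_true = np.concatenate([np.sin(t), np.cos(t)], axis=1)
v_true = np.concatenate([np.cos(t), -np.sin(t)], axis=1)
ests = derivative_accumulating_estimates(p_true + 0.05, v_true, -p_true,
                                          p0=np.array([0.0, 1.0]),
                                          v0=np.array([1.0, 0.0]), dt=dt)
fused = weighted_ensemble(ests, weights=[0.4, 0.3, 0.3])
print(f"mean abs error of fused estimate: {np.abs(fused - p_true).mean():.3f}")
```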
2409.01688 | Jerry Yao-Chieh Hu | Erzhi Liu, Jerry Yao-Chieh Hu, Alex Reneau, Zhao Song, Han Liu | Differentially Private Kernel Density Estimation | v2: Appendix added. v3: Numerical validations added | null | null | null | cs.DS cs.AI cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We introduce a refined differentially private (DP) data structure for kernel
density estimation (KDE), offering not only improved privacy-utility tradeoff
but also better efficiency over prior results. Specifically, we study the
mathematical problem: given a similarity function $f$ (or DP KDE) and a private
dataset $X \subset \mathbb{R}^d$, our goal is to preprocess $X$ so that for any
query $y\in\mathbb{R}^d$, we approximate $\sum_{x \in X} f(x, y)$ in a
differentially private fashion. The best previous algorithm for $f(x,y) =\| x -
y \|_1$ is the node-contaminated balanced binary tree by [Backurs, Lin,
Mahabadi, Silwal, and Tarnawski, ICLR 2024]. Their algorithm requires $O(nd)$
space and time for preprocessing with $n=|X|$. For any query point, the query
time is $d \log n$, with an error guarantee of $(1+\alpha)$-approximation and
$\epsilon^{-1} \alpha^{-0.5} d^{1.5} R \log^{1.5} n$.
In this paper, we improve the best previous result [Backurs, Lin, Mahabadi,
Silwal, and Tarnawski, ICLR 2024] in three aspects:
- We reduce query time by a factor of $\alpha^{-1} \log n$.
- We improve the approximation ratio from $\alpha$ to 1.
- We reduce the error dependence by a factor of $\alpha^{-0.5}$.
From a technical perspective, our method of constructing the search tree
differs from previous work [Backurs, Lin, Mahabadi, Silwal, and Tarnawski, ICLR
2024]. In prior work, for each query, the answer is split into $\alpha^{-1}
\log n$ numbers, each derived from the summation of $\log n$ values in interval
tree counts. In contrast, we construct the tree differently, splitting the
answer into $\log n$ numbers, where each is a smart combination of two distance
values, two counting values, and $y$ itself. We believe our tree structure may
be of independent interest.
| [
{
"version": "v1",
"created": "Tue, 3 Sep 2024 08:01:19 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Nov 2024 01:47:36 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 00:13:57 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Erzhi",
""
],
[
"Hu",
"Jerry Yao-Chieh",
""
],
[
"Reneau",
"Alex",
""
],
[
"Song",
"Zhao",
""
],
[
"Liu",
"Han",
""
]
] | TITLE: Differentially Private Kernel Density Estimation
ABSTRACT: We introduce a refined differentially private (DP) data structure for kernel
density estimation (KDE), offering not only improved privacy-utility tradeoff
but also better efficiency over prior results. Specifically, we study the
mathematical problem: given a similarity function $f$ (or DP KDE) and a private
dataset $X \subset \mathbb{R}^d$, our goal is to preprocess $X$ so that for any
query $y\in\mathbb{R}^d$, we approximate $\sum_{x \in X} f(x, y)$ in a
differentially private fashion. The best previous algorithm for $f(x,y) =\| x -
y \|_1$ is the node-contaminated balanced binary tree by [Backurs, Lin,
Mahabadi, Silwal, and Tarnawski, ICLR 2024]. Their algorithm requires $O(nd)$
space and time for preprocessing with $n=|X|$. For any query point, the query
time is $d \log n$, with an error guarantee of $(1+\alpha)$-approximation and
$\epsilon^{-1} \alpha^{-0.5} d^{1.5} R \log^{1.5} n$.
In this paper, we improve the best previous result [Backurs, Lin, Mahabadi,
Silwal, and Tarnawski, ICLR 2024] in three aspects:
- We reduce query time by a factor of $\alpha^{-1} \log n$.
- We improve the approximation ratio from $\alpha$ to 1.
- We reduce the error dependence by a factor of $\alpha^{-0.5}$.
From a technical perspective, our method of constructing the search tree
differs from previous work [Backurs, Lin, Mahabadi, Silwal, and Tarnawski, ICLR
2024]. In prior work, for each query, the answer is split into $\alpha^{-1}
\log n$ numbers, each derived from the summation of $\log n$ values in interval
tree counts. In contrast, we construct the tree differently, splitting the
answer into $\log n$ numbers, where each is a smart combination of two distance
values, two counting values, and $y$ itself. We believe our tree structure may
be of independent interest.
|
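For context on the problem statement in the record above (privately answering $\sum_{x \in X} f(x, y)$), the snippet below shows only the naive per-query Laplace-mechanism baseline for $f(x,y) = \|x-y\|_1$, not the tree-based data structure the paper improves. The L1-radius bound R and the privacy parameter are assumptions, and answering many queries this way spends epsilon per query, which is precisely the inefficiency such data structures avoid.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_kde_query(X, y, eps, R):
    """One differentially private answer to sum_x ||x - y||_1 via the
    Laplace mechanism. Assumes every point lies within L1 distance R of
    y, so adding/removing one point changes the sum by at most R."""
    true_val = np.abs(X - y).sum()
    return true_val + rng.laplace(scale=R / eps)

# toy usage: 10k points in [-1, 1]^5, query at the origin
d, n = 5, 10_000
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.zeros(d)
print(dp_kde_query(X, y, eps=1.0, R=2.0 * d))   # exact value is about n * d / 2
```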
2409.05399 | Tristan Stevens | Tristan S.W. Stevens, Ois\'in Nolan, Jean-Luc Robert, Ruud J.G. van
Sloun | Sequential Posterior Sampling with Diffusion Models | 5 pages, 4 figures, preprint | 2025 IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP) | 10.1109/ICASSP49660.2025.10889752 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion models have quickly risen in popularity for their ability to model
complex distributions and perform effective posterior sampling. Unfortunately,
the iterative nature of these generative models makes them computationally
expensive and unsuitable for real-time sequential inverse problems such as
ultrasound imaging. Considering the strong temporal structure across sequences
of frames, we propose a novel approach that models the transition dynamics to
improve the efficiency of sequential diffusion posterior sampling in
conditional image synthesis. Through modeling sequence data using a video
vision transformer (ViViT) transition model based on previous diffusion
outputs, we can initialize the reverse diffusion trajectory at a lower noise
scale, greatly reducing the number of iterations required for convergence. We
demonstrate the effectiveness of our approach on a real-world dataset of high
frame rate cardiac ultrasound images and show that it achieves the same
performance as a full diffusion trajectory while accelerating inference
25$\times$, enabling real-time posterior sampling. Furthermore, we show that
the addition of a transition model improves the PSNR up to 8\% in cases with
severe motion. Our method opens up new possibilities for real-time applications
of diffusion models in imaging and other domains requiring real-time inference.
| [
{
"version": "v1",
"created": "Mon, 9 Sep 2024 07:55:59 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Stevens",
"Tristan S. W.",
""
],
[
"Nolan",
"Oisín",
""
],
[
"Robert",
"Jean-Luc",
""
],
[
"van Sloun",
"Ruud J. G.",
""
]
] | TITLE: Sequential Posterior Sampling with Diffusion Models
ABSTRACT: Diffusion models have quickly risen in popularity for their ability to model
complex distributions and perform effective posterior sampling. Unfortunately,
the iterative nature of these generative models makes them computationally
expensive and unsuitable for real-time sequential inverse problems such as
ultrasound imaging. Considering the strong temporal structure across sequences
of frames, we propose a novel approach that models the transition dynamics to
improve the efficiency of sequential diffusion posterior sampling in
conditional image synthesis. Through modeling sequence data using a video
vision transformer (ViViT) transition model based on previous diffusion
outputs, we can initialize the reverse diffusion trajectory at a lower noise
scale, greatly reducing the number of iterations required for convergence. We
demonstrate the effectiveness of our approach on a real-world dataset of high
frame rate cardiac ultrasound images and show that it achieves the same
performance as a full diffusion trajectory while accelerating inference
25$\times$, enabling real-time posterior sampling. Furthermore, we show that
the addition of a transition model improves the PSNR up to 8\% in cases with
severe motion. Our method opens up new possibilities for real-time applications
of diffusion models in imaging and other domains requiring real-time inference.
|
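The record above accelerates sequential posterior sampling by starting the reverse diffusion at a lower noise scale from a transition model's prediction of the next frame. Below is a minimal DDPM-style sketch of that warm-start mechanic only: the linear noise schedule, the zero-returning placeholder noise predictor, and the step counts are assumptions, and nothing here implements the paper's ViViT transition model or ultrasound conditioning.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def eps_model(x_t, t):
    """Placeholder noise predictor; in practice a trained (conditional)
    diffusion model."""
    return np.zeros_like(x_t)

def warm_start_sample(x_init, t_start):
    """Noise the transition model's frame estimate to level t_start and
    run only the last t_start + 1 reverse steps instead of all T."""
    x = (np.sqrt(alpha_bar[t_start]) * x_init
         + np.sqrt(1.0 - alpha_bar[t_start]) * rng.normal(size=x_init.shape))
    for t in range(t_start, -1, -1):
        eps = eps_model(x, t)
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.normal(size=x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

# roughly a 25x reduction: ~40 reverse steps instead of 1000
predicted_next_frame = np.zeros((64, 64))
sample = warm_start_sample(predicted_next_frame, t_start=39)
print(sample.shape)
```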
2409.05595 | Haoyu Zhang | Haoyu Zhang, Raghavendra Ramachandra, Kiran Raja, Christoph Busch | SynMorph: Generating Synthetic Face Morphing Dataset with Mated Samples | This preprint has been further published in IEEE Access. Print ISSN:
2169-3536. Online ISSN: 2169-3536. Digital Object Identifier:
10.1109/ACCESS.2025.3548957 | IEEE Access 2025 | 10.1109/ACCESS.2025.3548957 | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Face morphing attack detection (MAD) algorithms have become essential to
overcome the vulnerability of face recognition systems. To solve the lack of
large-scale and public-available datasets due to privacy concerns and
restrictions, in this work we propose a new method to generate a synthetic face
morphing dataset with 2450 identities and more than 100k morphs. The proposed
synthetic face morphing dataset is unique for its high-quality samples,
different types of morphing algorithms, and the generalization for both single
and differential morphing attack detection algorithms. For experiments, we
apply face image quality assessment and vulnerability analysis to evaluate the
proposed synthetic face morphing dataset from the perspective of biometric
sample quality and morphing attack potential on face recognition systems. The
results are benchmarked against an existing SOTA synthetic dataset and a
representative non-synthetic dataset, and indicate an improvement over the SOTA.
Additionally, we design different protocols and study the applicability of
the proposed synthetic dataset for training morphing attack detection
algorithms.
| [
{
"version": "v1",
"created": "Mon, 9 Sep 2024 13:29:53 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 17:21:22 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Haoyu",
""
],
[
"Ramachandra",
"Raghavendra",
""
],
[
"Raja",
"Kiran",
""
],
[
"Busch",
"Christoph",
""
]
] | TITLE: SynMorph: Generating Synthetic Face Morphing Dataset with Mated Samples
ABSTRACT: Face morphing attack detection (MAD) algorithms have become essential to
overcome the vulnerability of face recognition systems. To solve the lack of
large-scale and public-available datasets due to privacy concerns and
restrictions, in this work we propose a new method to generate a synthetic face
morphing dataset with 2450 identities and more than 100k morphs. The proposed
synthetic face morphing dataset is unique for its high-quality samples,
different types of morphing algorithms, and the generalization for both single
and differential morphing attack detection algorithms. For experiments, we
apply face image quality assessment and vulnerability analysis to evaluate the
proposed synthetic face morphing dataset from the perspective of biometric
sample quality and morphing attack potential on face recognition systems. The
results are benchmarked against an existing SOTA synthetic dataset and a
representative non-synthetic dataset, and indicate an improvement over the SOTA.
Additionally, we design different protocols and study the applicability of
the proposed synthetic dataset for training morphing attack detection
algorithms.
|
2409.07931 | Jie Wen | Lian Zhao, Jie Wen, Xiaohuan Lu, Wai Keung Wong, Jiang Long, Wulin Xie | Task-Augmented Cross-View Imputation Network for Partial Multi-View
Incomplete Multi-Label Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In real-world scenarios, multi-view multi-label learning often encounters the
challenge of incomplete training data due to limitations in data collection and
unreliable annotation processes. The absence of multi-view features impairs the
comprehensive understanding of samples, omitting crucial details essential for
classification. To address this issue, we present a task-augmented cross-view
imputation network (TACVI-Net) for the purpose of handling partial multi-view
incomplete multi-label classification. Specifically, we employ a two-stage
network to derive highly task-relevant features to recover the missing views.
In the first stage, we leverage the information bottleneck theory to obtain a
discriminative representation of each view by extracting task-relevant
information through a view-specific encoder-classifier architecture. In the
second stage, an autoencoder-based multi-view reconstruction network is
utilized to extract high-level semantic representation of the augmented
features and recover the missing data, thereby aiding the final classification
task. Extensive experiments on five datasets demonstrate that our TACVI-Net
outperforms other state-of-the-art methods.
| [
{
"version": "v1",
"created": "Thu, 12 Sep 2024 10:56:11 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 04:22:28 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhao",
"Lian",
""
],
[
"Wen",
"Jie",
""
],
[
"Lu",
"Xiaohuan",
""
],
[
"Wong",
"Wai Keung",
""
],
[
"Long",
"Jiang",
""
],
[
"Xie",
"Wulin",
""
]
] | TITLE: Task-Augmented Cross-View Imputation Network for Partial Multi-View
Incomplete Multi-Label Classification
ABSTRACT: In real-world scenarios, multi-view multi-label learning often encounters the
challenge of incomplete training data due to limitations in data collection and
unreliable annotation processes. The absence of multi-view features impairs the
comprehensive understanding of samples, omitting crucial details essential for
classification. To address this issue, we present a task-augmented cross-view
imputation network (TACVI-Net) for the purpose of handling partial multi-view
incomplete multi-label classification. Specifically, we employ a two-stage
network to derive highly task-relevant features to recover the missing views.
In the first stage, we leverage the information bottleneck theory to obtain a
discriminative representation of each view by extracting task-relevant
information through a view-specific encoder-classifier architecture. In the
second stage, an autoencoder-based multi-view reconstruction network is
utilized to extract high-level semantic representation of the augmented
features and recover the missing data, thereby aiding the final classification
task. Extensive experiments on five datasets demonstrate that our TACVI-Net
outperforms other state-of-the-art methods.
|
2409.10141 | Peng Li | Peng Li, Wangguandong Zheng, Yuan Liu, Tao Yu, Yangguang Li, Xingqun
Qi, Xiaowei Chi, Siyu Xia, Yan-Pei Cao, Wei Xue, Wenhan Luo, Yike Guo | PSHuman: Photorealistic Single-image 3D Human Reconstruction using
Cross-Scale Multiview Diffusion and Explicit Remeshing | CVPR2025, Project page: https://penghtyx.github.io/PSHuman | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Detailed and photorealistic 3D human modeling is essential for various
applications and has seen tremendous progress. However, full-body
reconstruction from a monocular RGB image remains challenging due to the
ill-posed nature of the problem and sophisticated clothing topology with
self-occlusions. In this paper, we propose PSHuman, a novel framework that
explicitly reconstructs human meshes utilizing priors from the multiview
diffusion model. It is found that directly applying multiview diffusion on
single-view human images leads to severe geometric distortions, especially on
generated faces. To address it, we propose a cross-scale diffusion that models
the joint probability distribution of global full-body shape and local facial
characteristics, enabling detailed and identity-preserved novel-view generation
without any geometric distortion. Moreover, to enhance cross-view body shape
consistency of varied human poses, we condition the generative model on
parametric models like SMPL-X, which provide body priors and prevent unnatural
views inconsistent with human anatomy. Leveraging the generated multi-view
normal and color images, we present SMPLX-initialized explicit human carving to
recover realistic textured human meshes efficiently. Extensive experimental
results and quantitative evaluations on CAPE and THuman2.1 datasets demonstrate
PSHuman's superiority in geometry details, texture fidelity, and generalization
capability.
| [
{
"version": "v1",
"created": "Mon, 16 Sep 2024 10:13:06 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 12:27:12 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Peng",
""
],
[
"Zheng",
"Wangguandong",
""
],
[
"Liu",
"Yuan",
""
],
[
"Yu",
"Tao",
""
],
[
"Li",
"Yangguang",
""
],
[
"Qi",
"Xingqun",
""
],
[
"Chi",
"Xiaowei",
""
],
[
"Xia",
"Siyu",
""
],
[
"Cao",
"Yan-Pei",
""
],
[
"Xue",
"Wei",
""
],
[
"Luo",
"Wenhan",
""
],
[
"Guo",
"Yike",
""
]
] | TITLE: PSHuman: Photorealistic Single-image 3D Human Reconstruction using
Cross-Scale Multiview Diffusion and Explicit Remeshing
ABSTRACT: Detailed and photorealistic 3D human modeling is essential for various
applications and has seen tremendous progress. However, full-body
reconstruction from a monocular RGB image remains challenging due to the
ill-posed nature of the problem and sophisticated clothing topology with
self-occlusions. In this paper, we propose PSHuman, a novel framework that
explicitly reconstructs human meshes utilizing priors from the multiview
diffusion model. It is found that directly applying multiview diffusion on
single-view human images leads to severe geometric distortions, especially on
generated faces. To address it, we propose a cross-scale diffusion that models
the joint probability distribution of global full-body shape and local facial
characteristics, enabling detailed and identity-preserved novel-view generation
without any geometric distortion. Moreover, to enhance cross-view body shape
consistency of varied human poses, we condition the generative model on
parametric models like SMPL-X, which provide body priors and prevent unnatural
views inconsistent with human anatomy. Leveraging the generated multi-view
normal and color images, we present SMPLX-initialized explicit human carving to
recover realistic textured human meshes efficiently. Extensive experimental
results and quantitative evaluations on CAPE and THuman2.1 datasets demonstrate
PSHuman's superiority in geometry details, texture fidelity, and generalization
capability.
|
2409.11110 | Hassan Keshvarikhojasteh | Hassan Keshvarikhojasteh | Quantitative Evaluation of Multiple Instance Learning Reliability For
WSIs Classification | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Machine learning models have become integral to many fields, but their
reliability, particularly in high-stakes domains, remains a critical concern.
Reliability refers to the quality of being dependable and trustworthy. Reliable
models consistently provide predictions aligned with basic domain knowledge,
making their development and deployment particularly critical in healthcare
applications. However, Multiple Instance Learning (MIL) models designed for
Whole Slide Image (WSI) classification in computational pathology are rarely
evaluated in terms of reliability. In this paper, we address this gap by
comparing the reliability of MIL models using three proposed metrics, applied
across three region-wise annotated datasets. Our findings indicate that the
mean pooling instance (MEAN-POOL-INS) model demonstrates superior reliability
compared to other networks, despite its simple architectural design and
computational efficiency. The code for reproducing our results is available at
github.com/tueimage/MIL-Reliability. Keywords: Machine learning, Reliability,
Whole Slide Image, Multiple Instance Learning, MEAN-POOL-INS.
| [
{
"version": "v1",
"created": "Tue, 17 Sep 2024 12:04:18 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 16:49:03 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Keshvarikhojasteh",
"Hassan",
""
]
] | TITLE: Quantitative Evaluation of Multiple Instance Learning Reliability For
WSIs Classification
ABSTRACT: Machine learning models have become integral to many fields, but their
reliability, particularly in high-stakes domains, remains a critical concern.
Reliability refers to the quality of being dependable and trustworthy. Reliable
models consistently provide predictions aligned with basic domain knowledge,
making their development and deployment particularly critical in healthcare
applications. However, Multiple Instance Learning (MIL) models designed for
Whole Slide Image (WSI) classification in computational pathology are rarely
evaluated in terms of reliability. In this paper, we address this gap by
comparing the reliability of MIL models using three proposed metrics, applied
across three region-wise annotated datasets. Our findings indicate that the
mean pooling instance (MEAN-POOL-INS) model demonstrates superior reliability
compared to other networks, despite its simple architectural design and
computational efficiency. The code for reproducing our results is available at
github.com/tueimage/MIL-Reliability. Keywords: Machine learning, Reliability,
Whole Slide Image, Multiple Instance Learning, MEAN-POOL-INS.
|
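The record above finds a mean-pooling instance model the most reliable MIL baseline. The sketch below shows what such a model boils down to in its simplest form: average the instance (patch) features of a bag and feed the pooled vector to a linear classifier. The synthetic bag generator, feature dimension, and plain gradient-descent training are assumptions, and the paper's reliability metrics are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bag(positive, n_inst=100, d=16):
    """Synthetic stand-in for a WSI: a bag of patch features where only
    a small fraction of instances in positive bags carry the signal."""
    X = rng.normal(size=(n_inst, d))
    if positive:
        X[: n_inst // 10, 0] += 3.0          # 10% 'tumor' patches
    return X

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# MEAN-POOL-INS-style model: mean over instances, then a linear head
bags = [(make_bag(y == 1), y) for y in rng.integers(0, 2, 200)]
w, b, lr = np.zeros(16), 0.0, 0.5
for _ in range(500):                          # full-batch logistic regression
    grad_w, grad_b = np.zeros_like(w), 0.0
    for X, y in bags:
        pooled = X.mean(axis=0)
        p = sigmoid(w @ pooled + b)
        grad_w += (p - y) * pooled
        grad_b += p - y
    w -= lr * grad_w / len(bags)
    b -= lr * grad_b / len(bags)

acc = np.mean([(sigmoid(w @ X.mean(axis=0) + b) > 0.5) == y for X, y in bags])
print(f"training accuracy of the mean-pooling bag classifier: {acc:.2f}")
```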
2409.17213 | Joshua Ashkinaze | Joshua Ashkinaze, Emily Fry, Narendra Edara, Eric Gilbert, Ceren Budak | Plurals: A System for Guiding LLMs Via Simulated Social Ensembles | CHI 2025 | null | 10.1145/3706598.3713675 | null | cs.CL cs.AI cs.CY cs.HC cs.MA | http://creativecommons.org/licenses/by/4.0/ | Recent debates raised concerns that language models may favor certain
viewpoints. But what if the solution is not to aim for a 'view from nowhere'
but rather to leverage different viewpoints? We introduce Plurals, a system and
Python library for pluralistic AI deliberation. Plurals consists of Agents
(LLMs, optionally with personas) which deliberate within customizable
Structures, with Moderators overseeing deliberation. Plurals is a generator of
simulated social ensembles. Plurals integrates with government datasets to
create nationally representative personas, includes deliberation templates
inspired by deliberative democracy, and allows users to customize both
information-sharing structures and deliberation behavior within Structures. Six
case studies demonstrate fidelity to theoretical constructs and efficacy. Three
randomized experiments show simulated focus groups produced output resonant
with an online sample of the relevant audiences (chosen over zero-shot
generation in 75% of trials). Plurals is both a paradigm and a concrete system
for pluralistic AI. The Plurals library is available at
https://github.com/josh-ashkinaze/plurals and will be continually updated.
| [
{
"version": "v1",
"created": "Wed, 25 Sep 2024 17:38:39 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Sep 2024 12:12:44 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Oct 2024 01:11:54 GMT"
},
{
"version": "v4",
"created": "Fri, 1 Nov 2024 02:08:03 GMT"
},
{
"version": "v5",
"created": "Tue, 19 Nov 2024 15:37:57 GMT"
},
{
"version": "v6",
"created": "Sat, 22 Mar 2025 20:30:18 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ashkinaze",
"Joshua",
""
],
[
"Fry",
"Emily",
""
],
[
"Edara",
"Narendra",
""
],
[
"Gilbert",
"Eric",
""
],
[
"Budak",
"Ceren",
""
]
] | TITLE: Plurals: A System for Guiding LLMs Via Simulated Social Ensembles
ABSTRACT: Recent debates raised concerns that language models may favor certain
viewpoints. But what if the solution is not to aim for a 'view from nowhere'
but rather to leverage different viewpoints? We introduce Plurals, a system and
Python library for pluralistic AI deliberation. Plurals consists of Agents
(LLMs, optionally with personas) which deliberate within customizable
Structures, with Moderators overseeing deliberation. Plurals is a generator of
simulated social ensembles. Plurals integrates with government datasets to
create nationally representative personas, includes deliberation templates
inspired by deliberative democracy, and allows users to customize both
information-sharing structures and deliberation behavior within Structures. Six
case studies demonstrate fidelity to theoretical constructs and efficacy. Three
randomized experiments show simulated focus groups produced output resonant
with an online sample of the relevant audiences (chosen over zero-shot
generation in 75% of trials). Plurals is both a paradigm and a concrete system
for pluralistic AI. The Plurals library is available at
https://github.com/josh-ashkinaze/plurals and will be continually updated.
|
2409.18473 | Yi Zhou | Zhenxiang Xu, Yiping Liu, Yi Zhou, Yimin Hao, Zhengren Wang | Efficient Top-k s-Biplexes Search over Large Bipartite Graphs | null | null | null | null | cs.IR cs.DS | http://creativecommons.org/licenses/by/4.0/ | In a bipartite graph, a subgraph is an $s$-biplex if each vertex of the
subgraph is adjacent to all but at most $s$ vertices on the opposite set. The
enumeration of $s$-biplexes from a given graph is a fundamental problem in
bipartite graph analysis. However, in real-world data engineering, finding all
$s$-biplexes is neither necessary nor computationally affordable. A more
realistic problem is to identify some of the largest $s$-biplexes from the
large input graph. We formulate the problem as the {\em top-$k$ $s$-biplex
search (TBS) problem}, which aims to find the top-$k$ maximal $s$-biplexes with
the most vertices, where $k$ is an input parameter. We prove that the TBS
problem is NP-hard for any fixed $k\ge 1$. Then, we propose a branching
algorithm, named MVBP, that breaks the simple $2^n$ enumeration algorithm.
Furthermore, from a practical perspective, we investigate three techniques to
improve the performance of MVBP: 2-hop decomposition, single-side bounds, and
progressive search. Complexity analysis shows that the improved algorithm,
named FastMVBP, has a running time $O^*(\gamma_s^{d_2})$, where $\gamma_s<2$,
and $d_2$ is a parameter much smaller than the number of vertices in the sparse
real-world graphs, e.g. $d_2$ is only $67$ in the AmazonRatings dataset which
has more than $3$ million vertices. Finally, we conducted extensive experiments
on eight real-world and synthetic datasets to demonstrate the empirical
efficiency of the proposed algorithms. In particular, FastMVBP outperforms the
benchmark algorithms by up to three orders of magnitude in several instances.
| [
{
"version": "v1",
"created": "Fri, 27 Sep 2024 06:23:29 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 11:03:40 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xu",
"Zhenxiang",
""
],
[
"Liu",
"Yiping",
""
],
[
"Zhou",
"Yi",
""
],
[
"Hao",
"Yimin",
""
],
[
"Wang",
"Zhengren",
""
]
] | TITLE: Efficient Top-k s-Biplexes Search over Large Bipartite Graphs
ABSTRACT: In a bipartite graph, a subgraph is an $s$-biplex if each vertex of the
subgraph is adjacent to all but at most $s$ vertices on the opposite set. The
enumeration of $s$-biplexes from a given graph is a fundamental problem in
bipartite graph analysis. However, in real-world data engineering, finding all
$s$-biplexes is neither necessary nor computationally affordable. A more
realistic problem is to identify some of the largest $s$-biplexes from the
large input graph. We formulate the problem as the {\em top-$k$ $s$-biplex
search (TBS) problem}, which aims to find the top-$k$ maximal $s$-biplexes with
the most vertices, where $k$ is an input parameter. We prove that the TBS
problem is NP-hard for any fixed $k\ge 1$. Then, we propose a branching
algorithm, named MVBP, that breaks the simple $2^n$ enumeration algorithm.
Furthermore, from a practical perspective, we investigate three techniques to
improve the performance of MVBP: 2-hop decomposition, single-side bounds, and
progressive search. Complexity analysis shows that the improved algorithm,
named FastMVBP, has a running time $O^*(\gamma_s^{d_2})$, where $\gamma_s<2$,
and $d_2$ is a parameter much smaller than the number of vertices in the sparse
real-world graphs, e.g. $d_2$ is only $67$ in the AmazonRatings dataset which
has more than $3$ million vertices. Finally, we conducted extensive experiments
on eight real-world and synthetic datasets to demonstrate the empirical
efficiency of the proposed algorithms. In particular, FastMVBP outperforms the
benchmark algorithms by up to three orders of magnitude in several instances.
|
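Since the record above hinges on the s-biplex definition, a small checker makes it concrete: a pair of vertex sets forms an s-biplex when every vertex is non-adjacent to at most s vertices on the opposite side. The adjacency-dict representation and the toy graph are assumptions; the branching algorithms MVBP and FastMVBP are not reproduced.

```python
def is_s_biplex(adj, left, right, s):
    """Check whether the vertex sets (left, right) induce an s-biplex:
    every vertex must be non-adjacent to at most s vertices on the
    opposite side. `adj` maps each vertex to its neighbour set."""
    for u in left:
        if len(right - adj[u]) > s:
            return False
    for v in right:
        if len(left - adj[v]) > s:
            return False
    return True

# toy bipartite graph: left = {a, b}, right = {x, y, z}
adj = {
    "a": {"x", "y", "z"},
    "b": {"x", "y"},            # b misses z
    "x": {"a", "b"},
    "y": {"a", "b"},
    "z": {"a"},                 # z misses b
}
print(is_s_biplex(adj, {"a", "b"}, {"x", "y", "z"}, s=0))  # False
print(is_s_biplex(adj, {"a", "b"}, {"x", "y", "z"}, s=1))  # True
```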
2409.19425 | Mayug Maniparambil | Mayug Maniparambil, Raiymbek Akshulakov, Yasser Abdelaziz Dahou
Djilali, Sanath Narayan, Ankit Singh, Noel E. O'Connor | Harnessing Frozen Unimodal Encoders for Flexible Multimodal Alignment | Accepted CVPR 2025; First two authors contributed equally; | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent contrastive multimodal vision-language models like CLIP have
demonstrated robust open-world semantic understanding, becoming the standard
image backbones for vision-language applications. However, recent findings
suggest high semantic similarity between well-trained unimodal encoders, which
raises a key question: Is there a plausible way to connect unimodal backbones
for vision-language tasks? To this end, we propose a novel framework that
aligns vision and language using frozen unimodal encoders. It involves
selecting semantically similar encoders in the latent space, curating a
concept-rich dataset of image-caption pairs, and training simple MLP
projectors. We evaluated our approach on 12 zero-shot classification datasets
and 2 image-text retrieval datasets. Our best model, utilizing DINOv2 and
All-Roberta-Large text encoder, achieves 76\(\%\) accuracy on ImageNet with a
20-fold reduction in data and 65-fold reduction in compute requirements
compared to multi-modal alignment where models are trained from scratch. The
proposed framework enhances the accessibility of multimodal model development
while enabling flexible adaptation across diverse scenarios. Code and curated
datasets are available at \texttt{github.com/mayug/freeze-align}.
| [
{
"version": "v1",
"created": "Sat, 28 Sep 2024 17:57:32 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 14:00:30 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Maniparambil",
"Mayug",
""
],
[
"Akshulakov",
"Raiymbek",
""
],
[
"Djilali",
"Yasser Abdelaziz Dahou",
""
],
[
"Narayan",
"Sanath",
""
],
[
"Singh",
"Ankit",
""
],
[
"O'Connor",
"Noel E.",
""
]
] | TITLE: Harnessing Frozen Unimodal Encoders for Flexible Multimodal Alignment
ABSTRACT: Recent contrastive multimodal vision-language models like CLIP have
demonstrated robust open-world semantic understanding, becoming the standard
image backbones for vision-language applications. However, recent findings
suggest high semantic similarity between well-trained unimodal encoders, which
raises a key question: Is there a plausible way to connect unimodal backbones
for vision-language tasks? To this end, we propose a novel framework that
aligns vision and language using frozen unimodal encoders. It involves
selecting semantically similar encoders in the latent space, curating a
concept-rich dataset of image-caption pairs, and training simple MLP
projectors. We evaluated our approach on 12 zero-shot classification datasets
and 2 image-text retrieval datasets. Our best model, utilizing DINOv2 and
All-Roberta-Large text encoder, achieves 76\(\%\) accuracy on ImageNet with a
20-fold reduction in data and 65-fold reduction in compute requirements
compared to multi-modal alignment where models are trained from scratch. The
proposed framework enhances the accessibility of multimodal model development
while enabling flexible adaptation across diverse scenarios. Code and curated
datasets are available at \texttt{github.com/mayug/freeze-align}.
|
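The record above trains only small MLP projectors on top of frozen unimodal encoders. The sketch below shows one plausible way to do that with a CLIP-style symmetric contrastive objective on precomputed features; the objective, temperature, dimensions, and random stand-in features are assumptions, since the abstract does not spell out the exact training recipe.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# stand-ins for frozen unimodal features (e.g. precomputed image and
# sentence embeddings for paired image-caption data)
d_img, d_txt, d_shared, n = 768, 1024, 256, 512
img_feats = torch.randn(n, d_img)
txt_feats = torch.randn(n, d_txt)

# the only trainable parts: small MLP projectors into a shared space
proj_img = torch.nn.Sequential(torch.nn.Linear(d_img, 512), torch.nn.GELU(),
                               torch.nn.Linear(512, d_shared))
proj_txt = torch.nn.Sequential(torch.nn.Linear(d_txt, 512), torch.nn.GELU(),
                               torch.nn.Linear(512, d_shared))
opt = torch.optim.AdamW(list(proj_img.parameters()) + list(proj_txt.parameters()),
                        lr=1e-3)

for step in range(100):
    z_i = F.normalize(proj_img(img_feats), dim=-1)
    z_t = F.normalize(proj_txt(txt_feats), dim=-1)
    logits = z_i @ z_t.t() / 0.07                        # CLIP-style temperature
    targets = torch.arange(n)
    loss = (F.cross_entropy(logits, targets)             # image -> text
            + F.cross_entropy(logits.t(), targets)) / 2  # text -> image
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final contrastive loss: {loss.item():.3f}")
```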
2410.01376 | Alejandro Casta\~neda Garcia | Alejandro Casta\~neda Garcia, Jan van Gemert, Daan Brinks and Nergis
T\"omen | Learning Physics From Video: Unsupervised Physical Parameter Estimation
for Continuous Dynamical Systems | null | null | null | null | cs.CV physics.comp-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Extracting physical dynamical system parameters from recorded observations is
key in natural science. Current methods for automatic parameter estimation from
video train supervised deep networks on large datasets. Such datasets require
labels, which are difficult to acquire. While some unsupervised
techniques--which depend on frame prediction--exist, they suffer from long
training times and initialization instabilities, only consider motion-based
dynamical systems, and are evaluated mainly on synthetic data. In this work, we
propose an unsupervised method to estimate the physical parameters of known,
continuous governing equations from single videos suitable for different
dynamical systems beyond motion and robust to initialization. Moreover, we
remove the need for frame prediction by implementing a KL-divergence-based loss
function in the latent space, which avoids convergence to trivial solutions and
reduces model size and compute. We first evaluate our model on synthetic data,
as is commonly done. We then take the field closer to reality by recording
Delfys75: our own real-world dataset of 75 videos covering five different types
of dynamical systems, on which we evaluate our method and others. Our method
compares favorably to the existing approaches. Code and data are available
online: https://github.com/Alejandro-neuro/Learning_physics_from_video.
| [
{
"version": "v1",
"created": "Wed, 2 Oct 2024 09:44:54 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 13:02:03 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Garcia",
"Alejandro Castañeda",
""
],
[
"van Gemert",
"Jan",
""
],
[
"Brinks",
"Daan",
""
],
[
"Tömen",
"Nergis",
""
]
] | TITLE: Learning Physics From Video: Unsupervised Physical Parameter Estimation
for Continuous Dynamical Systems
ABSTRACT: Extracting physical dynamical system parameters from recorded observations is
key in natural science. Current methods for automatic parameter estimation from
video train supervised deep networks on large datasets. Such datasets require
labels, which are difficult to acquire. While some unsupervised
techniques--which depend on frame prediction--exist, they suffer from long
training times and initialization instabilities, only consider motion-based
dynamical systems, and are evaluated mainly on synthetic data. In this work, we
propose an unsupervised method to estimate the physical parameters of known,
continuous governing equations from single videos suitable for different
dynamical systems beyond motion and robust to initialization. Moreover, we
remove the need for frame prediction by implementing a KL-divergence-based loss
function in the latent space, which avoids convergence to trivial solutions and
reduces model size and compute. We first evaluate our model on synthetic data,
as is commonly done. We then take the field closer to reality by recording
Delfys75: our own real-world dataset of 75 videos covering five different types
of dynamical systems, on which we evaluate our method and others. Our method
compares favorably to the existing approaches. Code and data are available
online: https://github.com/Alejandro-neuro/Learning_physics_from_video.
|
2410.04324 | Xiang Li | Xiang Li, Pin-Yu Chen, Wenqi Wei | Where are we in audio deepfake detection? A systematic analysis over
generative and detection models | null | null | null | null | cs.SD cs.AI eess.AS | http://creativecommons.org/licenses/by/4.0/ | Recent advances in Text-to-Speech (TTS) and Voice-Conversion (VC) using
generative Artificial Intelligence (AI) technology have made it possible to
generate high-quality and realistic human-like audio. This poses growing
challenges in distinguishing AI-synthesized speech from the genuine human voice
and could raise concerns about misuse for impersonation, fraud, spreading
misinformation, and scams. However, existing detection methods for
AI-synthesized audio have not kept pace and often fail to generalize across
diverse datasets. In this paper, we introduce SONAR, a synthetic AI-Audio
Detection Framework and Benchmark, aiming to provide a comprehensive evaluation
for distinguishing cutting-edge AI-synthesized auditory content. SONAR includes
a novel evaluation dataset sourced from 9 diverse audio synthesis platforms,
including leading TTS providers and state-of-the-art TTS models. It is the
first framework to uniformly benchmark AI-audio detection across both
traditional and foundation model-based detection systems. Through extensive
experiments, (1) we reveal the limitations of existing detection methods and
demonstrate that foundation models exhibit stronger generalization
capabilities, likely due to their model size and the scale and quality of
pretraining data. (2) Speech foundation models demonstrate robust cross-lingual
generalization capabilities, maintaining strong performance across diverse
languages despite being fine-tuned solely on English speech data. This finding
also suggests that the primary challenges in audio deepfake detection are more
closely tied to the realism and quality of synthetic audio rather than
language-specific characteristics. (3) We explore the effectiveness and
efficiency of few-shot fine-tuning in improving generalization, highlighting
its potential for tailored applications, such as personalized detection systems
for specific entities or individuals.
| [
{
"version": "v1",
"created": "Sun, 6 Oct 2024 01:03:42 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Oct 2024 16:32:49 GMT"
},
{
"version": "v3",
"created": "Thu, 10 Oct 2024 05:34:21 GMT"
},
{
"version": "v4",
"created": "Sat, 22 Mar 2025 01:10:56 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Xiang",
""
],
[
"Chen",
"Pin-Yu",
""
],
[
"Wei",
"Wenqi",
""
]
] | TITLE: Where are we in audio deepfake detection? A systematic analysis over
generative and detection models
ABSTRACT: Recent advances in Text-to-Speech (TTS) and Voice-Conversion (VC) using
generative Artificial Intelligence (AI) technology have made it possible to
generate high-quality and realistic human-like audio. This poses growing
challenges in distinguishing AI-synthesized speech from the genuine human voice
and could raise concerns about misuse for impersonation, fraud, spreading
misinformation, and scams. However, existing detection methods for
AI-synthesized audio have not kept pace and often fail to generalize across
diverse datasets. In this paper, we introduce SONAR, a synthetic AI-Audio
Detection Framework and Benchmark, aiming to provide a comprehensive evaluation
for distinguishing cutting-edge AI-synthesized auditory content. SONAR includes
a novel evaluation dataset sourced from 9 diverse audio synthesis platforms,
including leading TTS providers and state-of-the-art TTS models. It is the
first framework to uniformly benchmark AI-audio detection across both
traditional and foundation model-based detection systems. Through extensive
experiments, (1) we reveal the limitations of existing detection methods and
demonstrate that foundation models exhibit stronger generalization
capabilities, likely due to their model size and the scale and quality of
pretraining data. (2) Speech foundation models demonstrate robust cross-lingual
generalization capabilities, maintaining strong performance across diverse
languages despite being fine-tuned solely on English speech data. This finding
also suggests that the primary challenges in audio deepfake detection are more
closely tied to the realism and quality of synthetic audio rather than
language-specific characteristics. (3) We explore the effectiveness and
efficiency of few-shot fine-tuning in improving generalization, highlighting
its potential for tailored applications, such as personalized detection systems
for specific entities or individuals.
|
2410.05869 | Subhransu S. Bhattacharjee Mr. | Subhransu S. Bhattacharjee and Dylan Campbell and Rahul Shome | Believing is Seeing: Unobserved Object Detection using Generative Models | IEEE/CVF Computer Vision and Pattern Recognition 2025; 22 pages | null | null | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Can objects that are not visible in an image -- but are in the vicinity of
the camera -- be detected? This study introduces the novel tasks of 2D, 2.5D
and 3D unobserved object detection for predicting the location of nearby
objects that are occluded or lie outside the image frame. We adapt several
state-of-the-art pre-trained generative models to address this task, including
2D and 3D diffusion models and vision-language models, and show that they can
be used to infer the presence of objects that are not directly observed. To
benchmark this task, we propose a suite of metrics that capture different
aspects of performance. Our empirical evaluation on indoor scenes from the
RealEstate10k and NYU Depth v2 datasets demonstrates results that motivate the
use of generative models for the unobserved object detection task.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 09:57:14 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Nov 2024 23:47:03 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 09:56:24 GMT"
},
{
"version": "v4",
"created": "Mon, 24 Mar 2025 13:41:43 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Bhattacharjee",
"Subhransu S.",
""
],
[
"Campbell",
"Dylan",
""
],
[
"Shome",
"Rahul",
""
]
] | TITLE: Believing is Seeing: Unobserved Object Detection using Generative Models
ABSTRACT: Can objects that are not visible in an image -- but are in the vicinity of
the camera -- be detected? This study introduces the novel tasks of 2D, 2.5D
and 3D unobserved object detection for predicting the location of nearby
objects that are occluded or lie outside the image frame. We adapt several
state-of-the-art pre-trained generative models to address this task, including
2D and 3D diffusion models and vision-language models, and show that they can
be used to infer the presence of objects that are not directly observed. To
benchmark this task, we propose a suite of metrics that capture different
aspects of performance. Our empirical evaluation on indoor scenes from the
RealEstate10k and NYU Depth v2 datasets demonstrates results that motivate the
use of generative models for the unobserved object detection task.
|
2410.06380 | Mateus Karvat | Mateus Karvat, Sidney Givigi | Adver-City: Open-Source Multi-Modal Dataset for Collaborative Perception
Under Adverse Weather Conditions | 13 pages | null | null | null | cs.CV cs.LG cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | Adverse weather conditions pose a significant challenge to the widespread
adoption of Autonomous Vehicles (AVs) by impacting sensors like LiDARs and
cameras. Even though Collaborative Perception (CP) improves AV perception in
difficult conditions, existing CP datasets lack adverse weather conditions. To
address this, we introduce Adver-City, the first open-source synthetic CP
dataset focused on adverse weather conditions. Simulated in CARLA with OpenCDA,
it contains over 24 thousand frames, over 890 thousand annotations, and 110
unique scenarios across six different weather conditions: clear weather, soft
rain, heavy rain, fog, foggy heavy rain and, for the first time in a synthetic
CP dataset, glare. It has six object categories including pedestrians and
cyclists, and uses data from vehicles and roadside units featuring LiDARs, RGB
and semantic segmentation cameras, GNSS, and IMUs. Its scenarios, based on real
crash reports, depict the most relevant road configurations for adverse weather
and poor visibility conditions, varying in object density, with both dense and
sparse scenes, allowing for novel testing conditions of CP models. Benchmarks
run on the dataset show that weather conditions created challenging conditions
for perception models, with CoBEVT scoring 58.30/52.44/38.90 (AP@30/50/70). The
dataset, code and documentation are available at
https://labs.cs.queensu.ca/quarrg/datasets/adver-city/.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 21:26:22 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 20:59:38 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Karvat",
"Mateus",
""
],
[
"Givigi",
"Sidney",
""
]
] | TITLE: Adver-City: Open-Source Multi-Modal Dataset for Collaborative Perception
Under Adverse Weather Conditions
ABSTRACT: Adverse weather conditions pose a significant challenge to the widespread
adoption of Autonomous Vehicles (AVs) by impacting sensors like LiDARs and
cameras. Even though Collaborative Perception (CP) improves AV perception in
difficult conditions, existing CP datasets lack adverse weather conditions. To
address this, we introduce Adver-City, the first open-source synthetic CP
dataset focused on adverse weather conditions. Simulated in CARLA with OpenCDA,
it contains over 24 thousand frames, over 890 thousand annotations, and 110
unique scenarios across six different weather conditions: clear weather, soft
rain, heavy rain, fog, foggy heavy rain and, for the first time in a synthetic
CP dataset, glare. It has six object categories including pedestrians and
cyclists, and uses data from vehicles and roadside units featuring LiDARs, RGB
and semantic segmentation cameras, GNSS, and IMUs. Its scenarios, based on real
crash reports, depict the most relevant road configurations for adverse weather
and poor visibility conditions, varying in object density, with both dense and
sparse scenes, allowing for novel testing conditions of CP models. Benchmarks
run on the dataset show that weather conditions created challenging conditions
for perception models, with CoBEVT scoring 58.30/52.44/38.90 (AP@30/50/70). The
dataset, code and documentation are available at
https://labs.cs.queensu.ca/quarrg/datasets/adver-city/.
|
2410.07511 | Zeyu Zhang | Yiru Pan, Xingyu Ji, Jiaqi You, Lu Li, Zhenping Liu, Xianlong Zhang,
Zeyu Zhang and Maojun Wang | CSGDN: Contrastive Signed Graph Diffusion Network for Predicting Crop
Gene-phenotype Associations | Under review | null | 10.1093/bib/bbaf062 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Positive and negative association prediction between gene and phenotype helps
to illustrate the underlying mechanism of complex traits in organisms. The
transcription and regulation activity of specific genes will be adjusted
accordingly in different cell types, developmental stages, and physiological
states. Two problems arise in obtaining the positive/negative
associations between gene and trait: 1) High-throughput DNA/RNA sequencing and
phenotyping are expensive and time-consuming due to the need to process large
sample sizes; 2) experiments introduce both random and systematic errors, and,
meanwhile, calculations or predictions using software or models may produce
noise. To address these two issues, we propose a Contrastive Signed Graph
Diffusion Network, CSGDN, to learn robust node representations with fewer
training samples to achieve higher link prediction accuracy. CSGDN employs a
signed graph diffusion method to uncover the underlying regulatory associations
between genes and phenotypes. Then, stochastic perturbation strategies are used
to create two views for both original and diffusive graphs. Lastly, a
multi-view contrastive learning paradigm loss is designed to unify the node
presentations learned from the two views to resist interference and reduce
noise. We conduct experiments to validate the performance of CSGDN on three
crop datasets: Gossypium hirsutum, Brassica napus, and Triticum turgidum. The
results demonstrate that the proposed model outperforms state-of-the-art
methods by up to 9.28% AUC for link sign prediction on the G. hirsutum dataset.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2024 01:01:10 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Oct 2024 02:50:55 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Pan",
"Yiru",
""
],
[
"Ji",
"Xingyu",
""
],
[
"You",
"Jiaqi",
""
],
[
"Li",
"Lu",
""
],
[
"Liu",
"Zhenping",
""
],
[
"Zhang",
"Xianlong",
""
],
[
"Zhang",
"Zeyu",
""
],
[
"Wang",
"Maojun",
""
]
] | TITLE: CSGDN: Contrastive Signed Graph Diffusion Network for Predicting Crop
Gene-phenotype Associations
ABSTRACT: Positive and negative association prediction between gene and phenotype helps
to illustrate the underlying mechanism of complex traits in organisms. The
transcription and regulation activity of specific genes will be adjusted
accordingly in different cell types, developmental stages, and physiological
states. Two problems arise in obtaining the positive/negative
associations between gene and trait: 1) High-throughput DNA/RNA sequencing and
phenotyping are expensive and time-consuming due to the need to process large
sample sizes; 2) experiments introduce both random and systematic errors, and,
meanwhile, calculations or predictions using software or models may produce
noise. To address these two issues, we propose a Contrastive Signed Graph
Diffusion Network, CSGDN, to learn robust node representations with fewer
training samples to achieve higher link prediction accuracy. CSGDN employs a
signed graph diffusion method to uncover the underlying regulatory associations
between genes and phenotypes. Then, stochastic perturbation strategies are used
to create two views for both original and diffusive graphs. Lastly, a
multi-view contrastive learning paradigm loss is designed to unify the node
presentations learned from the two views to resist interference and reduce
noise. We conduct experiments to validate the performance of CSGDN on three
crop datasets: Gossypium hirsutum, Brassica napus, and Triticum turgidum. The
results demonstrate that the proposed model outperforms state-of-the-art
methods by up to 9.28% AUC for link sign prediction on the G. hirsutum dataset.
|
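The CSGDN record above unifies node representations from two perturbed graph views with a multi-view contrastive loss. As an illustration only (the paper's exact objective is not given in the abstract), the sketch below shows a standard InfoNCE-style node-level contrastive loss between two views; the embedding sizes and temperature are assumptions.

```python
# Illustrative sketch of a multi-view, node-level contrastive (InfoNCE-style) loss.
# This is NOT CSGDN's exact objective; it only shows the generic idea of pulling a
# node's embeddings from two augmented views together and pushing other nodes apart.
import torch
import torch.nn.functional as F

def multiview_contrastive_loss(z1, z2, temperature=0.5):
    """z1, z2: [num_nodes, dim] embeddings of the same nodes from two graph views."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # pairwise cosine similarities
    targets = torch.arange(z1.size(0))      # node i in view 1 matches node i in view 2
    # Symmetrized cross-entropy over both matching directions
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Random embeddings standing in for the original- and diffusive-graph views
z_view1, z_view2 = torch.randn(128, 64), torch.randn(128, 64)
loss = multiview_contrastive_loss(z_view1, z_view2)
```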
2410.09006 | Zhuohao Jerry Zhang | Zhuohao Jerry Zhang, Eldon Schoop, Jeffrey Nichols, Anuj Mahajan,
Amanda Swearngin | From Interaction to Impact: Towards Safer AI Agents Through
Understanding and Evaluating Mobile UI Operation Impacts | null | null | 10.1145/3708359.3712153 | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | With advances in generative AI, there is increasing work towards creating
autonomous agents that can manage daily tasks by operating user interfaces
(UIs). While prior research has studied the mechanics of how AI agents might
navigate UIs and understand UI structure, the effects of agents and their
autonomous actions-particularly those that may be risky or irreversible-remain
under-explored. In this work, we investigate the real-world impacts and
consequences of mobile UI actions taken by AI agents. We began by developing a
taxonomy of the impacts of mobile UI actions through a series of workshops with
domain experts. Following this, we conducted a data synthesis study to gather
realistic mobile UI screen traces and action data that users perceive as
impactful. We then used our impact categories to annotate our collected data
and data repurposed from existing mobile UI navigation datasets. Our
quantitative evaluations of different large language models (LLMs) and variants
demonstrate how well different LLMs can understand the impacts of mobile UI
actions that might be taken by an agent. We show that our taxonomy enhances the
reasoning capabilities of these LLMs for understanding the impacts of mobile UI
actions, but our findings also reveal significant gaps in their ability to
reliably classify more nuanced or complex categories of impact.
| [
{
"version": "v1",
"created": "Fri, 11 Oct 2024 17:24:00 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 18:01:37 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Zhuohao Jerry",
""
],
[
"Schoop",
"Eldon",
""
],
[
"Nichols",
"Jeffrey",
""
],
[
"Mahajan",
"Anuj",
""
],
[
"Swearngin",
"Amanda",
""
]
] | TITLE: From Interaction to Impact: Towards Safer AI Agents Through
Understanding and Evaluating Mobile UI Operation Impacts
ABSTRACT: With advances in generative AI, there is increasing work towards creating
autonomous agents that can manage daily tasks by operating user interfaces
(UIs). While prior research has studied the mechanics of how AI agents might
navigate UIs and understand UI structure, the effects of agents and their
autonomous actions-particularly those that may be risky or irreversible-remain
under-explored. In this work, we investigate the real-world impacts and
consequences of mobile UI actions taken by AI agents. We began by developing a
taxonomy of the impacts of mobile UI actions through a series of workshops with
domain experts. Following this, we conducted a data synthesis study to gather
realistic mobile UI screen traces and action data that users perceive as
impactful. We then used our impact categories to annotate our collected data
and data repurposed from existing mobile UI navigation datasets. Our
quantitative evaluations of different large language models (LLMs) and variants
demonstrate how well different LLMs can understand the impacts of mobile UI
actions that might be taken by an agent. We show that our taxonomy enhances the
reasoning capabilities of these LLMs for understanding the impacts of mobile UI
actions, but our findings also reveal significant gaps in their ability to
reliably classify more nuanced or complex categories of impact.
|
2410.10636 | Adyasha Maharana | Adyasha Maharana, Jaehong Yoon, Tianlong Chen, Mohit Bansal | Adapt-$\infty$: Scalable Continual Multimodal Instruction Tuning via
Dynamic Data Selection | First two authors contributed equally. Code:
https://github.com/adymaharana/adapt-inf | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual instruction datasets from various distributors are released at
different times and often contain a significant number of semantically
redundant text-image pairs, depending on their task compositions (i.e., skills)
or reference sources. This redundancy greatly limits the efficient deployment
of continually adaptable multimodal large language models, hindering their
ability to refine existing skills and acquire new competencies over time. We
reframe the problem of lifelong Instruction Tuning (LiIT) via data selection,
where the model automatically selects beneficial samples to learn from earlier
and new datasets based on the current state of acquired knowledge in the model.
We propose Adapt-$\infty$, a new multi-way and adaptive data selection approach
that dynamically balances sample efficiency and effectiveness during LiIT. We
first construct pseudo-skill clusters by grouping gradient-based sample
vectors. Next, we select the best-performing data selector for each skill
cluster from a pool of selector experts, including our newly proposed scoring
function, Image Grounding score. This data selector samples a subset of the
most important samples from each skill cluster for training. To prevent the
continuous increase in the size of the dataset pool during LiIT, we introduce a
cluster-wise permanent data pruning strategy to remove the most semantically
redundant samples from each cluster, keeping computational requirements
manageable. We validate the effectiveness and efficiency of Adapt-$\infty$ over
a sequence of multimodal instruction tuning datasets with various tasks,
including (Knowledge) VQA, multilingual, grounding, reasoning, language-only,
and multi-image comprehension. Training with samples selected by Adapt-$\infty$
alleviates catastrophic forgetting, especially for rare tasks, and promotes
forward transfer across the continuum using only a fraction of the original
data.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 15:48:09 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 09:17:13 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Maharana",
"Adyasha",
""
],
[
"Yoon",
"Jaehong",
""
],
[
"Chen",
"Tianlong",
""
],
[
"Bansal",
"Mohit",
""
]
] | TITLE: Adapt-$\infty$: Scalable Continual Multimodal Instruction Tuning via
Dynamic Data Selection
ABSTRACT: Visual instruction datasets from various distributors are released at
different times and often contain a significant number of semantically
redundant text-image pairs, depending on their task compositions (i.e., skills)
or reference sources. This redundancy greatly limits the efficient deployment
of continually adaptable multimodal large language models, hindering their
ability to refine existing skills and acquire new competencies over time. We
reframe the problem of lifelong Instruction Tuning (LiIT) via data selection,
where the model automatically selects beneficial samples to learn from earlier
and new datasets based on the current state of acquired knowledge in the model.
We propose Adapt-$\infty$, a new multi-way and adaptive data selection approach
that dynamically balances sample efficiency and effectiveness during LiIT. We
first construct pseudo-skill clusters by grouping gradient-based sample
vectors. Next, we select the best-performing data selector for each skill
cluster from a pool of selector experts, including our newly proposed scoring
function, Image Grounding score. This data selector samples a subset of the
most important samples from each skill cluster for training. To prevent the
continuous increase in the size of the dataset pool during LiIT, we introduce a
cluster-wise permanent data pruning strategy to remove the most semantically
redundant samples from each cluster, keeping computational requirements
manageable. We validate the effectiveness and efficiency of Adapt-$\infty$ over
a sequence of multimodal instruction tuning datasets with various tasks,
including (Knowledge) VQA, multilingual, grounding, reasoning, language-only,
and multi-image comprehension. Training with samples selected by Adapt-$\infty$
alleviates catastrophic forgetting, especially for rare tasks, and promotes
forward transfer across the continuum using only a fraction of the original
data.
|
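The Adapt-$\infty$ record above clusters gradient-based sample vectors into pseudo-skill clusters and then keeps the most important samples within each cluster. The sketch below illustrates that cluster-then-select pattern with k-means and a placeholder importance score; the actual selector pool and Image Grounding score are not reproduced, and `keep_frac` is an assumption.

```python
# Minimal cluster-then-select sketch: k-means over per-sample feature vectors
# (e.g., gradient-based), then keep the highest-scoring fraction per cluster.
# The `scores` here are a hypothetical stand-in for a real data-selector's output.
import numpy as np
from sklearn.cluster import KMeans

def select_per_cluster(features, scores, n_clusters=8, keep_frac=0.25):
    """features: [N, d] sample vectors; scores: [N] importance; returns kept indices."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    keep = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        k = max(1, int(keep_frac * len(idx)))
        keep.extend(idx[np.argsort(scores[idx])[-k:]])   # best samples per pseudo-skill cluster
    return np.sort(np.array(keep))

rng = np.random.default_rng(0)
feats, scores = rng.normal(size=(1000, 32)), rng.random(1000)
selected = select_per_cluster(feats, scores)
```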
2410.11774 | Konstantinos Panagiotis Alexandridis Mr | Konstantinos Panagiotis Alexandridis, Ismail Elezi, Jiankang Deng, Anh
Nguyen and Shan Luo | Fractal Calibration for long-tailed object detection | CVPR2025 (camera-ready) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-world datasets follow an imbalanced distribution, which poses
significant challenges in rare-category object detection. Recent studies tackle
this problem by developing re-weighting and re-sampling methods that utilise
the class frequencies of the dataset. However, these techniques focus solely on
the frequency statistics and ignore the distribution of the classes in image
space, missing important information. In contrast to them, we propose FRActal
CALibration (FRACAL): a novel post-calibration method for long-tailed object
detection. FRACAL devises a logit adjustment method that utilises the fractal
dimension to estimate how uniformly classes are distributed in image space.
During inference, it uses the fractal dimension to inversely downweight the
probabilities of uniformly spaced class predictions achieving balance in two
axes: between frequent and rare categories, and between uniformly spaced and
sparsely spaced classes. FRACAL is a post-processing method and it does not
require any training; it can also be combined with many off-the-shelf models
such as one-stage sigmoid detectors and two-stage instance segmentation models.
FRACAL boosts the rare class performance by up to 8.6% and surpasses all
previous methods on the LVIS dataset, while showing good generalisation to other
datasets such as COCO, V3Det and OpenImages. We provide the code at
https://github.com/kostas1515/FRACAL.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 16:55:10 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 17:57:48 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 10:25:29 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Alexandridis",
"Konstantinos Panagiotis",
""
],
[
"Elezi",
"Ismail",
""
],
[
"Deng",
"Jiankang",
""
],
[
"Nguyen",
"Anh",
""
],
[
"Luo",
"Shan",
""
]
] | TITLE: Fractal Calibration for long-tailed object detection
ABSTRACT: Real-world datasets follow an imbalanced distribution, which poses
significant challenges in rare-category object detection. Recent studies tackle
this problem by developing re-weighting and re-sampling methods that utilise
the class frequencies of the dataset. However, these techniques focus solely on
the frequency statistics and ignore the distribution of the classes in image
space, missing important information. In contrast to them, we propose FRActal
CALibration (FRACAL): a novel post-calibration method for long-tailed object
detection. FRACAL devises a logit adjustment method that utilises the fractal
dimension to estimate how uniformly classes are distributed in image space.
During inference, it uses the fractal dimension to inversely downweight the
probabilities of uniformly spaced class predictions achieving balance in two
axes: between frequent and rare categories, and between uniformly spaced and
sparsely spaced classes. FRACAL is a post-processing method and it does not
require any training; it can also be combined with many off-the-shelf models
such as one-stage sigmoid detectors and two-stage instance segmentation models.
FRACAL boosts the rare class performance by up to 8.6% and surpasses all
previous methods on the LVIS dataset, while showing good generalisation to other
datasets such as COCO, V3Det and OpenImages. We provide the code at
https://github.com/kostas1515/FRACAL.
|
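FRACAL, described in the record above, estimates how uniformly a class is spread over image space via the fractal dimension and downweights uniformly spaced classes at inference. The sketch below shows a standard box-counting estimate of the fractal dimension for 2D detection centres plus an illustrative inverse downweighting; the exponent `alpha` and the adjustment formula are assumptions, not the paper's exact calibration rule.

```python
import numpy as np

def box_counting_dimension(points, sizes=(2, 4, 8, 16, 32)):
    """Box-counting fractal dimension of 2D points normalised to the unit square."""
    counts = [len(np.unique((np.clip(points, 0, 1 - 1e-9) * s).astype(int), axis=0))
              for s in sizes]
    # N(s) ~ s^D, so the slope of log(count) vs log(grid size) estimates D
    return np.polyfit(np.log(sizes), np.log(counts), 1)[0]

def fractal_downweight(probs, fractal_dims, alpha=1.0):
    """Illustrative post-hoc adjustment: uniformly spread classes (high D) get downweighted."""
    adjusted = np.asarray(probs) / (np.asarray(fractal_dims) ** alpha + 1e-9)
    return adjusted / adjusted.sum()

rng = np.random.default_rng(0)
spread_class = rng.random((500, 2))              # uniformly spread -> dimension near 2
clustered_class = 0.05 * rng.random((500, 2))    # tightly clustered -> much lower dimension
dims = [box_counting_dimension(spread_class), box_counting_dimension(clustered_class)]
print(fractal_downweight([0.6, 0.4], dims))
```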
2410.14225 | Li Yuan | Li Yuan, Yi Cai, Junsheng Huang | Few-Shot Joint Multimodal Entity-Relation Extraction via
Knowledge-Enhanced Cross-modal Prompt Model | accepted by ACM MM 2024 | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Joint Multimodal Entity-Relation Extraction (JMERE) is a challenging task
that aims to extract entities and their relations from text-image pairs in
social media posts. Existing methods for JMERE require large amounts of labeled
data. However, gathering and annotating fine-grained multimodal data for JMERE
poses significant challenges. Initially, we construct diverse and comprehensive
multimodal few-shot datasets fitted to the original data distribution. To
address the insufficient information in the few-shot setting, we introduce the
\textbf{K}nowledge-\textbf{E}nhanced \textbf{C}ross-modal \textbf{P}rompt
\textbf{M}odel (KECPM) for JMERE. This method can effectively address the
problem of insufficient information in the few-shot setting by guiding a large
language model to generate supplementary background knowledge. Our proposed
method comprises two stages: (1) a knowledge ingestion stage that dynamically
formulates prompts based on semantic similarity to guide ChatGPT in generating
relevant knowledge and employs self-reflection to refine the knowledge; (2) a
knowledge-enhanced language model stage that merges the auxiliary knowledge
with the original input and utilizes a transformer-based model to align with
JMERE's required output format. We extensively evaluate our approach on a
few-shot dataset derived from the JMERE dataset, demonstrating its superiority
over strong baselines in terms of both micro and macro F$_1$ scores.
Additionally, we present qualitative analyses and case studies to elucidate the
effectiveness of our model.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2024 07:14:54 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 02:01:21 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yuan",
"Li",
""
],
[
"Cai",
"Yi",
""
],
[
"Huang",
"Junsheng",
""
]
] | TITLE: Few-Shot Joint Multimodal Entity-Relation Extraction via
Knowledge-Enhanced Cross-modal Prompt Model
ABSTRACT: Joint Multimodal Entity-Relation Extraction (JMERE) is a challenging task
that aims to extract entities and their relations from text-image pairs in
social media posts. Existing methods for JMERE require large amounts of labeled
data. However, gathering and annotating fine-grained multimodal data for JMERE
poses significant challenges. Initially, we construct diverse and comprehensive
multimodal few-shot datasets fitted to the original data distribution. To
address the insufficient information in the few-shot setting, we introduce the
\textbf{K}nowledge-\textbf{E}nhanced \textbf{C}ross-modal \textbf{P}rompt
\textbf{M}odel (KECPM) for JMERE. This method can effectively address the
problem of insufficient information in the few-shot setting by guiding a large
language model to generate supplementary background knowledge. Our proposed
method comprises two stages: (1) a knowledge ingestion stage that dynamically
formulates prompts based on semantic similarity guide ChatGPT generating
relevant knowledge and employs self-reflection to refine the knowledge; (2) a
knowledge-enhanced language model stage that merges the auxiliary knowledge
with the original input and utilizes a transformer-based model to align with
JMERE's required output format. We extensively evaluate our approach on a
few-shot dataset derived from the JMERE dataset, demonstrating its superiority
over strong baselines in terms of both micro and macro F$_1$ scores.
Additionally, we present qualitative analyses and case studies to elucidate the
effectiveness of our model.
|
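The KECPM record above formulates knowledge-generation prompts by retrieving the most semantically similar examples. A minimal retrieval-and-prompt sketch follows; the sentence embeddings, prompt wording, and `k` are placeholders not specified in the abstract, and the call to ChatGPT itself is omitted.

```python
import numpy as np

def top_k_similar(query_emb, support_embs, k=3):
    """Indices of the k support examples most cosine-similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    s = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    return np.argsort(s @ q)[-k:][::-1]

def build_knowledge_prompt(query_text, query_emb, support_embs, support_texts, k=3):
    demos = "\n".join(f"Example: {support_texts[i]}"
                      for i in top_k_similar(query_emb, support_embs, k))
    return (f"{demos}\n"
            f"New post: {query_text}\n"
            "Using the examples above, generate background knowledge about the "
            "entities and relations mentioned in the new post.")

# Toy vectors standing in for a real sentence encoder (hypothetical)
rng = np.random.default_rng(0)
support = rng.normal(size=(10, 64))
texts = [f"post {i}" for i in range(10)]
print(build_knowledge_prompt("a tweet with an image", rng.normal(size=64), support, texts))
```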
2410.15392 | Bohao Liao | Bohao Liao, Wei Zhai, Zengyu Wan, Zhixin Cheng, Wenfei Yang, Tianzhu
Zhang, Yang Cao and Zheng-Jun Zha | EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting | Project Page: https://lbh666.github.io/ef-3dgs/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene reconstruction from casually captured videos has wide applications in
real-world scenarios. With recent advancements in differentiable rendering
techniques, several methods have attempted to simultaneously optimize scene
representations (NeRF or 3DGS) and camera poses. Despite recent progress,
existing methods relying on traditional camera input tend to fail in high-speed
(or equivalently low-frame-rate) scenarios. Event cameras, inspired by
biological vision, record pixel-wise intensity changes asynchronously with high
temporal resolution, providing valuable scene and motion information in blind
inter-frame intervals. In this paper, we introduce the event camera to aid
scene reconstruction from a casually captured video for the first time, and
propose Event-Aided Free-Trajectory 3DGS, called EF-3DGS, which seamlessly
integrates the advantages of event cameras into 3DGS through three key
components. First, we leverage the Event Generation Model (EGM) to fuse events
and frames, supervising the rendered views observed by the event stream.
Second, we adopt the Contrast Maximization (CMax) framework in a piece-wise
manner to extract motion information by maximizing the contrast of the Image of
Warped Events (IWE), thereby calibrating the estimated poses. Besides, based on
the Linear Event Generation Model (LEGM), the brightness information encoded in
the IWE is also utilized to constrain the 3DGS in the gradient domain. Third,
to mitigate the absence of color information of events, we introduce
photometric bundle adjustment (PBA) to ensure view consistency across events
and frames. We evaluate our method on the public Tanks and Temples benchmark
and a newly collected real-world dataset, RealEv-DAVIS. Our project page is
https://lbh666.github.io/ef-3dgs/.
| [
{
"version": "v1",
"created": "Sun, 20 Oct 2024 13:44:24 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Oct 2024 18:22:20 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Mar 2025 13:41:06 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liao",
"Bohao",
""
],
[
"Zhai",
"Wei",
""
],
[
"Wan",
"Zengyu",
""
],
[
"Cheng",
"Zhixin",
""
],
[
"Yang",
"Wenfei",
""
],
[
"Zhang",
"Tianzhu",
""
],
[
"Cao",
"Yang",
""
],
[
"Zha",
"Zheng-Jun",
""
]
] | TITLE: EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting
ABSTRACT: Scene reconstruction from casually captured videos has wide applications in
real-world scenarios. With recent advancements in differentiable rendering
techniques, several methods have attempted to simultaneously optimize scene
representations (NeRF or 3DGS) and camera poses. Despite recent progress,
existing methods relying on traditional camera input tend to fail in high-speed
(or equivalently low-frame-rate) scenarios. Event cameras, inspired by
biological vision, record pixel-wise intensity changes asynchronously with high
temporal resolution, providing valuable scene and motion information in blind
inter-frame intervals. In this paper, we introduce the event camera to aid
scene reconstruction from a casually captured video for the first time, and
propose Event-Aided Free-Trajectory 3DGS, called EF-3DGS, which seamlessly
integrates the advantages of event cameras into 3DGS through three key
components. First, we leverage the Event Generation Model (EGM) to fuse events
and frames, supervising the rendered views observed by the event stream.
Second, we adopt the Contrast Maximization (CMax) framework in a piece-wise
manner to extract motion information by maximizing the contrast of the Image of
Warped Events (IWE), thereby calibrating the estimated poses. Besides, based on
the Linear Event Generation Model (LEGM), the brightness information encoded in
the IWE is also utilized to constrain the 3DGS in the gradient domain. Third,
to mitigate the absence of color information of events, we introduce
photometric bundle adjustment (PBA) to ensure view consistency across events
and frames. We evaluate our method on the public Tanks and Temples benchmark
and a newly collected real-world dataset, RealEv-DAVIS. Our project page is
https://lbh666.github.io/ef-3dgs/.
|
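EF-3DGS above adopts the Contrast Maximization (CMax) framework: events are warped by a candidate motion, accumulated into an Image of Warped Events (IWE), and the motion that maximises image contrast is kept. The sketch below shows that core objective for a constant 2D flow on synthetic events; the paper's piece-wise scheme and pose calibration are not reproduced.

```python
import numpy as np

def iwe_contrast(xs, ys, ts, flow, img_shape=(64, 64), t_ref=0.0):
    """Accumulate events warped by a constant 2D flow and return the IWE variance (contrast)."""
    h, w = img_shape
    wx = np.clip(np.round(xs - flow[0] * (ts - t_ref)).astype(int), 0, w - 1)
    wy = np.clip(np.round(ys - flow[1] * (ts - t_ref)).astype(int), 0, h - 1)
    iwe = np.zeros(img_shape)
    np.add.at(iwe, (wy, wx), 1.0)    # image of warped events
    return iwe.var()                 # CMax: sharper event alignment -> higher contrast

# Synthetic events from a point moving with flow (5, 3) pixels per unit time
rng = np.random.default_rng(0)
ts = rng.uniform(0, 1, 2000)
xs = 20 + 5 * ts + rng.normal(0, 0.3, ts.size)
ys = 20 + 3 * ts + rng.normal(0, 0.3, ts.size)
assert iwe_contrast(xs, ys, ts, flow=(5, 3)) > iwe_contrast(xs, ys, ts, flow=(0, 0))
```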
2410.15959 | Zhi Hou | Zhi Hou, Tianyi Zhang, Yuwen Xiong, Hengjun Pu, Chengyang Zhao,
Ronglei Tong, Yu Qiao, Jifeng Dai, Yuntao Chen | Diffusion Transformer Policy | preprint; New Project Page: https://robodita.github.io; revert
unsuitable replacement | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent large vision-language-action models pretrained on diverse robot
datasets have demonstrated the potential for generalizing to new environments
with a few in-domain data. However, those approaches usually predict individual
discretized or continuous action by a small action head, which limits the
ability in handling diverse action spaces. In contrast, we model the continuous
action sequence with a large multi-modal diffusion transformer, dubbed as
Diffusion Transformer Policy, in which we directly denoise action chunks by a
large transformer model rather than a small action head for action embedding.
By leveraging the scaling capability of transformers, the proposed approach can
effectively model continuous end-effector actions across large diverse robot
datasets, and achieve better generalization performance. Extensive experiments
demonstrate the effectiveness and generalization of Diffusion Transformer
Policy on Maniskill2, Libero, Calvin and SimplerEnv, as well as the real-world
Franka arm, achieving consistently better performance on the Real-to-Sim benchmark
SimplerEnv, real-world Franka Arm and Libero compared to OpenVLA and Octo.
Specifically, without bells and whistles, the proposed approach achieves
state-of-the-art performance with only a single third-view camera stream in the
Calvin task ABC->D, improving the average number of tasks completed in a row (out
of 5) to 3.6, and the pretraining stage significantly increases the average
length of successful task sequences on Calvin by over 1.2.
| [
{
"version": "v1",
"created": "Mon, 21 Oct 2024 12:43:54 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Feb 2025 07:20:30 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Feb 2025 15:38:06 GMT"
},
{
"version": "v4",
"created": "Fri, 14 Mar 2025 15:30:07 GMT"
},
{
"version": "v5",
"created": "Mon, 17 Mar 2025 11:45:52 GMT"
},
{
"version": "v6",
"created": "Sun, 23 Mar 2025 05:03:59 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hou",
"Zhi",
""
],
[
"Zhang",
"Tianyi",
""
],
[
"Xiong",
"Yuwen",
""
],
[
"Pu",
"Hengjun",
""
],
[
"Zhao",
"Chengyang",
""
],
[
"Tong",
"Ronglei",
""
],
[
"Qiao",
"Yu",
""
],
[
"Dai",
"Jifeng",
""
],
[
"Chen",
"Yuntao",
""
]
] | TITLE: Diffusion Transformer Policy
ABSTRACT: Recent large vision-language-action models pretrained on diverse robot
datasets have demonstrated the potential for generalizing to new environments
with a few in-domain data. However, those approaches usually predict individual
discretized or continuous action by a small action head, which limits the
ability in handling diverse action spaces. In contrast, we model the continuous
action sequence with a large multi-modal diffusion transformer, dubbed as
Diffusion Transformer Policy, in which we directly denoise action chunks by a
large transformer model rather than a small action head for action embedding.
By leveraging the scaling capability of transformers, the proposed approach can
effectively model continuous end-effector actions across large diverse robot
datasets, and achieve better generalization performance. Extensive experiments
demonstrate the effectiveness and generalization of Diffusion Transformer
Policy on Maniskill2, Libero, Calvin and SimplerEnv, as well as the real-world
Franka arm, achieving consistently better performance on the Real-to-Sim benchmark
SimplerEnv, real-world Franka Arm and Libero compared to OpenVLA and Octo.
Specifically, without bells and whistles, the proposed approach achieves
state-of-the-art performance with only a single third-view camera stream in the
Calvin task ABC->D, improving the average number of tasks completed in a row (out
of 5) to 3.6, and the pretraining stage significantly increases the average
length of successful task sequences on Calvin by over 1.2.
|
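The Diffusion Transformer Policy record above denoises continuous action chunks with a large transformer rather than predicting actions through a small head. As a rough, observation-free illustration of that training objective (not the paper's architecture), the sketch below runs one DDPM-style noise-prediction step on an action chunk with a tiny transformer encoder; all sizes and the noise schedule are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChunkDenoiser(nn.Module):
    """Tiny stand-in for a transformer that predicts the noise added to an action chunk."""
    def __init__(self, act_dim=7, width=128):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=width, nhead=4, batch_first=True)
        self.inp = nn.Linear(act_dim + 1, width)   # action + (scalar) timestep feature
        self.enc = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(width, act_dim)

    def forward(self, noisy_actions, t):
        t_feat = (t.float() / 1000.0).view(-1, 1, 1).expand(-1, noisy_actions.size(1), 1)
        return self.out(self.enc(self.inp(torch.cat([noisy_actions, t_feat], dim=-1))))

T = 1000
alphas_bar = torch.cumprod(1.0 - torch.linspace(1e-4, 0.02, T), dim=0)

def ddpm_step(model, actions):
    """One noise-prediction training step on action chunks of shape [B, chunk_len, act_dim]."""
    t = torch.randint(0, T, (actions.size(0),))
    noise = torch.randn_like(actions)
    ab = alphas_bar[t].view(-1, 1, 1)
    noisy = ab.sqrt() * actions + (1 - ab).sqrt() * noise
    return F.mse_loss(model(noisy, t), noise)

loss = ddpm_step(ChunkDenoiser(), torch.randn(8, 16, 7))
```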
2410.18359 | Yiqing Xie | Yiqing Xie, Wenxuan Zhou, Pradyot Prakash, Di Jin, Yuning Mao, Quintin
Fettes, Arya Talebzadeh, Sinong Wang, Han Fang, Carolyn Rose, Daniel Fried,
Hejia Zhang | Improving Model Factuality with Fine-grained Critique-based Evaluator | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Factuality evaluation aims to detect factual errors produced by language
models (LMs) and hence guide the development of more factual models. Towards
this goal, we train a factuality evaluator, FenCE, that provides LM generators
with claim-level factuality feedback. We conduct data augmentation on a
combination of public judgment datasets to train FenCE to (1) generate textual
critiques along with scores and (2) make claim-level judgment based on diverse
source documents obtained by various tools. We then present a framework that
leverages FenCE to improve the factuality of LM generators by constructing
training data. Specifically, we generate a set of candidate responses, leverage
FenCE to revise and score each response without introducing lesser-known facts,
and train the generator by preferring highly scored revised responses.
Experiments show that our data augmentation methods improve the evaluator's
accuracy by 2.9% on LLM-AggreFact. With FenCE, we improve Llama2-7B-chat and
Llama3-8B-chat's factuality rate by 16.86% and 14.45% on FActScore,
outperforming state-of-the-art factuality finetuning methods by 8.83% and
6.96%.
| [
{
"version": "v1",
"created": "Thu, 24 Oct 2024 01:41:02 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 19:57:02 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xie",
"Yiqing",
""
],
[
"Zhou",
"Wenxuan",
""
],
[
"Prakash",
"Pradyot",
""
],
[
"Jin",
"Di",
""
],
[
"Mao",
"Yuning",
""
],
[
"Fettes",
"Quintin",
""
],
[
"Talebzadeh",
"Arya",
""
],
[
"Wang",
"Sinong",
""
],
[
"Fang",
"Han",
""
],
[
"Rose",
"Carolyn",
""
],
[
"Fried",
"Daniel",
""
],
[
"Zhang",
"Hejia",
""
]
] | TITLE: Improving Model Factuality with Fine-grained Critique-based Evaluator
ABSTRACT: Factuality evaluation aims to detect factual errors produced by language
models (LMs) and hence guide the development of more factual models. Towards
this goal, we train a factuality evaluator, FenCE, that provides LM generators
with claim-level factuality feedback. We conduct data augmentation on a
combination of public judgment datasets to train FenCE to (1) generate textual
critiques along with scores and (2) make claim-level judgment based on diverse
source documents obtained by various tools. We then present a framework that
leverages FenCE to improve the factuality of LM generators by constructing
training data. Specifically, we generate a set of candidate responses, leverage
FenCE to revise and score each response without introducing lesser-known facts,
and train the generator by preferring highly scored revised responses.
Experiments show that our data augmentation methods improve the evaluator's
accuracy by 2.9% on LLM-AggreFact. With FenCE, we improve Llama2-7B-chat and
Llama3-8B-chat's factuality rate by 16.86% and 14.45% on FActScore,
outperforming state-of-the-art factuality finetuning methods by 8.83% and
6.96%.
|
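FenCE, described in the record above, scores and revises candidate responses and then trains the generator to prefer highly scored ones. The tiny sketch below shows only the preference-pair construction pattern; `score_fn` is a hypothetical placeholder for a claim-level factuality evaluator, and the revision step is omitted.

```python
def build_preference_pair(prompt, candidates, score_fn):
    """Rank candidate responses with an evaluator score and pair best vs. worst
    for preference-style finetuning. `score_fn` is a placeholder evaluator."""
    ranked = sorted(candidates, key=score_fn, reverse=True)
    return {"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]}

# Toy usage with a dummy scorer (longer answer = "more supported claims")
pair = build_preference_pair("Who wrote the paper?", ["A.", "Alice and Bob.", "Bob."], score_fn=len)
```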
2410.20579 | Shi-Ang Qi | Shi-ang Qi, Yakun Yu, Russell Greiner | Toward Conditional Distribution Calibration in Survival Prediction | Accepted to NeurIPS 2024. 41 pages, 23 figures | null | null | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/by/4.0/ | Survival prediction often involves estimating the time-to-event distribution
from censored datasets. Previous approaches have focused on enhancing
discrimination and marginal calibration. In this paper, we highlight the
significance of conditional calibration for real-world applications --
especially its role in individual decision-making. We propose a method based on
conformal prediction that uses the model's predicted individual survival
probability at that instance's observed time. This method effectively improves
the model's marginal and conditional calibration, without compromising
discrimination. We provide asymptotic theoretical guarantees for both marginal
and conditional calibration and test it extensively across 15 diverse
real-world datasets, demonstrating the method's practical effectiveness and
versatility in various settings.
| [
{
"version": "v1",
"created": "Sun, 27 Oct 2024 20:19:46 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Feb 2025 16:47:09 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Mar 2025 00:04:00 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Qi",
"Shi-ang",
""
],
[
"Yu",
"Yakun",
""
],
[
"Greiner",
"Russell",
""
]
] | TITLE: Toward Conditional Distribution Calibration in Survival Prediction
ABSTRACT: Survival prediction often involves estimating the time-to-event distribution
from censored datasets. Previous approaches have focused on enhancing
discrimination and marginal calibration. In this paper, we highlight the
significance of conditional calibration for real-world applications --
especially its role in individual decision-making. We propose a method based on
conformal prediction that uses the model's predicted individual survival
probability at that instance's observed time. This method effectively improves
the model's marginal and conditional calibration, without compromising
discrimination. We provide asymptotic theoretical guarantees for both marginal
and conditional calibration and test it extensively across 15 diverse
real-world datasets, demonstrating the method's practical effectiveness and
versatility in various settings.
|
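The calibration method in the record above is based on conformal prediction and uses the model's predicted survival probability at each instance's observed time. As a heavily simplified sketch (it ignores censoring and is not the paper's exact procedure), the code below recalibrates new survival predictions through the empirical distribution of those calibration scores so that they become closer to uniform.

```python
import numpy as np

def recalibrate_survival(cal_surv_at_event, pred_surv):
    """Map predicted survival probabilities through the empirical CDF of the
    calibration scores S_i(t_i); a perfectly calibrated model leaves them uniform."""
    scores = np.sort(np.asarray(cal_surv_at_event))
    ranks = np.searchsorted(scores, pred_surv, side="right")
    return ranks / (len(scores) + 1.0)

# Toy calibration set from a mis-calibrated model: S_i(t_i) skewed low instead of uniform
rng = np.random.default_rng(0)
cal_scores = rng.beta(2, 5, size=500)
print(recalibrate_survival(cal_scores, np.array([0.1, 0.3, 0.5, 0.8])))
```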
2410.21349 | Zeyuan Li | Zeyuan Li, Yangfan He, Lewei He, Jianhui Wang, Tianyu Shi, Bin Lei,
Yuchen Li, Qiuwu Chen | FALCON: Feedback-driven Adaptive Long/short-term memory reinforced
Coding Optimization system | 20 pages, 7 figures | null | null | null | cs.LG cs.AI cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, large language models (LLMs) have achieved significant progress in
automated code generation. Despite their strong instruction-following
capabilities, these models frequently struggled to align with user intent in
coding scenarios. In particular, they were hampered by datasets that lacked
diversity and failed to address specialized tasks or edge cases. Furthermore,
challenges in supervised fine-tuning (SFT) and reinforcement learning from
human feedback (RLHF) led to failures in generating precise,
human-intent-aligned code. To tackle these challenges and improve the code
generation performance for automated programming systems, we propose
Feedback-driven Adaptive Long/short-term memory reinforced Coding Optimization
(i.e., FALCON). FALCON is structured into two hierarchical levels. From the
global level, long-term memory improves code quality by retaining and applying
learned knowledge. At the local level, short-term memory allows for the
incorporation of immediate feedback from compilers and AI systems.
Additionally, we introduce meta-reinforcement learning with feedback rewards to
solve the global-local bi-level optimization problem and enhance the model's
adaptability across diverse code generation tasks. Extensive experiments
demonstrate that our technique achieves state-of-the-art performance, leading
other reinforcement learning methods by more than 4.5 percentage points on the
MBPP benchmark and 6.1 percentage points on the Humaneval benchmark. The
open-sourced code is publicly available at https://github.com/titurte/FALCON.
| [
{
"version": "v1",
"created": "Mon, 28 Oct 2024 12:18:22 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Nov 2024 16:50:05 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Jan 2025 11:16:32 GMT"
},
{
"version": "v4",
"created": "Sun, 23 Mar 2025 17:12:25 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Zeyuan",
""
],
[
"He",
"Yangfan",
""
],
[
"He",
"Lewei",
""
],
[
"Wang",
"Jianhui",
""
],
[
"Shi",
"Tianyu",
""
],
[
"Lei",
"Bin",
""
],
[
"Li",
"Yuchen",
""
],
[
"Chen",
"Qiuwu",
""
]
] | TITLE: FALCON: Feedback-driven Adaptive Long/short-term memory reinforced
Coding Optimization system
ABSTRACT: Recently, large language models (LLMs) have achieved significant progress in
automated code generation. Despite their strong instruction-following
capabilities, these models frequently struggled to align with user intent in
coding scenarios. In particular, they were hampered by datasets that lacked
diversity and failed to address specialized tasks or edge cases. Furthermore,
challenges in supervised fine-tuning (SFT) and reinforcement learning from
human feedback (RLHF) led to failures in generating precise,
human-intent-aligned code. To tackle these challenges and improve the code
generation performance for automated programming systems, we propose
Feedback-driven Adaptive Long/short-term memory reinforced Coding Optimization
(i.e., FALCON). FALCON is structured into two hierarchical levels. From the
global level, long-term memory improves code quality by retaining and applying
learned knowledge. At the local level, short-term memory allows for the
incorporation of immediate feedback from compilers and AI systems.
Additionally, we introduce meta-reinforcement learning with feedback rewards to
solve the global-local bi-level optimization problem and enhance the model's
adaptability across diverse code generation tasks. Extensive experiments
demonstrate that our technique achieves state-of-the-art performance, leading
other reinforcement learning methods by more than 4.5 percentage points on the
MBPP benchmark and 6.1 percentage points on the Humaneval benchmark. The
open-sourced code is publicly available at https://github.com/titurte/FALCON.
|
2410.23073 | Hongyu Chen | Hongyu Chen, Chengcheng Chen, Fei Wang, Yugang Chang, Yuhu Shi, and
Weiming Zeng | RSNet: A Light Framework for The Detection of Multi-scale Remote Sensing
Targets | null | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in synthetic aperture radar (SAR) ship detection using
deep learning have significantly improved accuracy and speed, yet effectively
detecting small objects in complex backgrounds with fewer parameters remains a
challenge. This letter introduces RSNet, a lightweight framework constructed to
enhance ship detection in SAR imagery. To ensure accuracy with fewer
parameters, we proposed Waveletpool-ContextGuided (WCG) as its backbone,
guiding global context understanding through multi-scale wavelet features for
effective detection in complex scenes. Additionally, Waveletpool-StarFusion
(WSF) is introduced as the neck, employing a residual wavelet element-wise
multiplication structure to achieve higher dimensional nonlinear features
without increasing network width. The Lightweight-Shared (LS) module is
designed as the detection component to achieve efficient detection through a
lightweight shared convolutional structure and multi-format compatibility.
Experiments on the SAR Ship Detection Dataset (SSDD) and High-Resolution SAR
Image Dataset (HRSID) demonstrate that RSNet achieves a strong balance between
lightweight design and detection performance, surpassing many state-of-the-art
detectors, reaching 72.5\% and 67.6\% in mAP$_{.50:.95}$, respectively, with
1.49M parameters. Our code will be released soon.
| [
{
"version": "v1",
"created": "Wed, 30 Oct 2024 14:46:35 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Nov 2024 09:09:37 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Nov 2024 02:31:09 GMT"
},
{
"version": "v4",
"created": "Sun, 2 Feb 2025 03:05:48 GMT"
},
{
"version": "v5",
"created": "Wed, 19 Feb 2025 14:13:25 GMT"
},
{
"version": "v6",
"created": "Sat, 22 Mar 2025 05:12:05 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Hongyu",
""
],
[
"Chen",
"Chengcheng",
""
],
[
"Wang",
"Fei",
""
],
[
"Chang",
"Yugang",
""
],
[
"Shi",
"Yuhu",
""
],
[
"Zeng",
"Weiming",
""
]
] | TITLE: RSNet: A Light Framework for The Detection of Multi-scale Remote Sensing
Targets
ABSTRACT: Recent advancements in synthetic aperture radar (SAR) ship detection using
deep learning have significantly improved accuracy and speed, yet effectively
detecting small objects in complex backgrounds with fewer parameters remains a
challenge. This letter introduces RSNet, a lightweight framework constructed to
enhance ship detection in SAR imagery. To ensure accuracy with fewer
parameters, we proposed Waveletpool-ContextGuided (WCG) as its backbone,
guiding global context understanding through multi-scale wavelet features for
effective detection in complex scenes. Additionally, Waveletpool-StarFusion
(WSF) is introduced as the neck, employing a residual wavelet element-wise
multiplication structure to achieve higher dimensional nonlinear features
without increasing network width. The Lightweight-Shared (LS) module is
designed as the detection component to achieve efficient detection through a
lightweight shared convolutional structure and multi-format compatibility.
Experiments on the SAR Ship Detection Dataset (SSDD) and High-Resolution SAR
Image Dataset (HRSID) demonstrate that RSNet achieves a strong balance between
lightweight design and detection performance, surpassing many state-of-the-art
detectors, reaching 72.5\% and 67.6\% in mAP$_{.50:.95}$, respectively, with
1.49M parameters. Our code will be released soon.
|
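RSNet above builds its backbone and neck around wavelet pooling. The sketch below implements a plain 2x2 Haar wavelet decomposition usable as a downsampling step (one approximation and three detail subbands); it is a generic building block, not the paper's Waveletpool-ContextGuided or Waveletpool-StarFusion modules.

```python
import torch

def haar_wavelet_pool(x):
    """2x2 Haar decomposition of x: [B, C, H, W] (H, W even) into four subbands
    of shape [B, C, H/2, W/2]: one approximation (LL) and three details."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    d1 = (a + b - c - d) / 2
    d2 = (a - b + c - d) / 2
    d3 = (a - b - c + d) / 2
    return ll, d1, d2, d3

subbands = haar_wavelet_pool(torch.randn(1, 16, 64, 64))   # four tensors of [1, 16, 32, 32]
```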
2411.00865 | Kapu Nirmal Joshua | Nirmal Joshua Kapu and Mihit Sreejith | Demo-Craft: Using In-Context Learning to Improve Code Generation in
Large Language Models | Accepted at IEEE ICIITCEE 2025. Presented on 16th January 2025 in
Bengaluru, India | null | 10.1109/IITCEE64140.2025.10915349 | null | cs.SE cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Generating executable code from natural language instructions using Large
Language Models (LLMs) poses challenges such as semantic ambiguity and
understanding task-specific contexts. To address these issues, we propose a
system called DemoCraft, which enhances code generation by leveraging
in-context learning and demonstration selection, combined with latent concept
learning. Latent concept learning introduces additional concept tokens, which
are trainable embeddings that capture task-specific knowledge. We then test our
system on two major datasets: MBPP and Humaneval. Our experimental results
demonstrate that the proposed system achieves an approximate 2x increase in the
pass@k metric compared to baseline models. Furthermore, we introduce two novel
evaluation metrics: correctness@k and similarity@k. Our empirical studies
indicate that our system attains nearly a 3x improvement in these metrics as
well.
| [
{
"version": "v1",
"created": "Wed, 30 Oct 2024 19:45:50 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 05:52:26 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Kapu",
"Nirmal Joshua",
""
],
[
"Sreejith",
"Mihit",
""
]
] | TITLE: Demo-Craft: Using In-Context Learning to Improve Code Generation in
Large Language Models
ABSTRACT: Generating executable code from natural language instructions using Large
Language Models (LLMs) poses challenges such as semantic ambiguity and
understanding task-specific contexts. To address these issues, we propose a
system called DemoCraft, which enhances code generation by leveraging
in-context learning and demonstration selection, combined with latent concept
learning. Latent concept learning introduces additional concept tokens, which
are trainable embeddings that capture task-specific knowledge. We then test our
system on two major datasets: MBPP and Humaneval. Our experimental results
demonstrate that the proposed system achieves an approximate 2x increase in the
pass@k metric compared to baseline models. Furthermore, we introduce two novel
evaluation metrics: correctness@k and similarity@k. Our empirical studies
indicate that our system attains nearly a 3x improvement in these metrics as
well.
|
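Demo-Craft above reports its gains with the pass@k metric. For reference, the snippet below implements the standard unbiased pass@k estimator (Chen et al., 2021) commonly used for code-generation evaluation; the paper's correctness@k and similarity@k variants are not reproduced here.

```python
import numpy as np

def pass_at_k(n, c, k):
    """Unbiased pass@k: probability that at least one of k samples drawn from n
    generations, c of which are correct, solves the problem."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g. 200 generations per problem, 37 correct
print(pass_at_k(200, 37, 1), pass_at_k(200, 37, 10))   # ~0.185 and ~0.88
```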
2411.01839 | Rina Carines Cabral | Rina Carines Cabral, Soyeon Caren Han, Areej Alhassan, Riza
Batista-Navarro, Goran Nenadic, Josiah Poon | TriG-NER: Triplet-Grid Framework for Discontinuous Named Entity
Recognition | Accepted at The ACM Web Conference WWW'25. Code available at
https://github.com/adlnlp/trig_ner | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Discontinuous Named Entity Recognition (DNER) presents a challenging problem
where entities may be scattered across multiple non-adjacent tokens, making
traditional sequence labelling approaches inadequate. Existing methods
predominantly rely on custom tagging schemes to handle these discontinuous
entities, resulting in models tightly coupled to specific tagging strategies
and lacking generalisability across diverse datasets. To address these
challenges, we propose TriG-NER, a novel Triplet-Grid Framework that introduces
a generalisable approach to learning robust token-level representations for
discontinuous entity extraction. Our framework applies triplet loss at the
token level, where similarity is defined by word pairs existing within the same
entity, effectively pulling together similar and pushing apart dissimilar ones.
This approach enhances entity boundary detection and reduces the dependency on
specific tagging schemes by focusing on word-pair relationships within a
flexible grid structure. We evaluate TriG-NER on three benchmark DNER datasets
and demonstrate significant improvements over existing grid-based
architectures. These results underscore our framework's effectiveness in
capturing complex entity structures and its adaptability to various tagging
schemes, setting a new benchmark for discontinuous entity extraction.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2024 06:26:09 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Jan 2025 14:37:03 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 05:45:37 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Cabral",
"Rina Carines",
""
],
[
"Han",
"Soyeon Caren",
""
],
[
"Alhassan",
"Areej",
""
],
[
"Batista-Navarro",
"Riza",
""
],
[
"Nenadic",
"Goran",
""
],
[
"Poon",
"Josiah",
""
]
] | TITLE: TriG-NER: Triplet-Grid Framework for Discontinuous Named Entity
Recognition
ABSTRACT: Discontinuous Named Entity Recognition (DNER) presents a challenging problem
where entities may be scattered across multiple non-adjacent tokens, making
traditional sequence labelling approaches inadequate. Existing methods
predominantly rely on custom tagging schemes to handle these discontinuous
entities, resulting in models tightly coupled to specific tagging strategies
and lacking generalisability across diverse datasets. To address these
challenges, we propose TriG-NER, a novel Triplet-Grid Framework that introduces
a generalisable approach to learning robust token-level representations for
discontinuous entity extraction. Our framework applies triplet loss at the
token level, where similarity is defined by word pairs existing within the same
entity, effectively pulling together similar and pushing apart dissimilar ones.
This approach enhances entity boundary detection and reduces the dependency on
specific tagging schemes by focusing on word-pair relationships within a
flexible grid structure. We evaluate TriG-NER on three benchmark DNER datasets
and demonstrate significant improvements over existing grid-based
architectures. These results underscore our framework's effectiveness in
capturing complex entity structures and its adaptability to various tagging
schemes, setting a new benchmark for discontinuous entity extraction.
|
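TriG-NER above applies triplet loss at the token level, with positives defined as word pairs inside the same entity. The sketch below shows a batch-hard token-level triplet margin loss in that spirit; the grid structure and candidate mining used by the paper are not reproduced, and the margin value is an assumption.

```python
import torch
import torch.nn.functional as F

def token_triplet_loss(token_embs, entity_ids, margin=1.0):
    """token_embs: [N, d]; entity_ids: [N], -1 for tokens outside any entity.
    For each entity token, pull its hardest same-entity token closer than its
    hardest other token by `margin` (batch-hard triplet mining)."""
    n = token_embs.size(0)
    dists = torch.cdist(token_embs, token_embs)
    eye = torch.eye(n, dtype=torch.bool)
    same = (entity_ids[:, None] == entity_ids[None, :]) & (entity_ids[:, None] >= 0) & ~eye
    diff = ~same & ~eye
    losses = []
    for a in range(n):
        if not same[a].any() or not diff[a].any():
            continue
        hardest_pos = dists[a][same[a]].max()
        hardest_neg = dists[a][diff[a]].min()
        losses.append(F.relu(hardest_pos - hardest_neg + margin))
    return torch.stack(losses).mean() if losses else token_embs.sum() * 0.0

embs = torch.randn(12, 32, requires_grad=True)
ids = torch.tensor([0, 0, -1, 1, -1, 1, -1, -1, 2, 2, 2, -1])   # three (discontinuous) entities
loss = token_triplet_loss(embs, ids)
```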
2411.07747 | Xi Cheng | Xi Cheng, Ruiqi Lei, Di Huang, Zhichao Liao, Fengyuan Piao, Yan Chen,
Pingfa Feng, Long Zeng | Constraint-Aware Feature Learning for Parametric Point Cloud | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parametric point clouds are sampled from CAD shapes and are becoming
increasingly common in industrial manufacturing. Most existing CAD-specific
deep learning methods only focus on geometric features, while overlooking
constraints which are inherent and important in CAD shapes. This limits their
ability to discern CAD shapes with similar appearance but different
constraints. To tackle this challenge, we first analyze the constraint
importance via a simple validation experiment. Then, we introduce a deep
learning-friendly constraints representation with three vectorized components,
and design a constraint-aware feature learning network (CstNet), which includes
two stages. Stage 1 extracts constraint features from B-Rep data or point clouds
based on local shape information, which enables better generalization to unseen
datasets after model pre-training. Stage 2 employs attention layers to
adaptively adjust the weights of the three constraint components, facilitating
the effective utilization of constraints. In addition, we built the first
multi-modal parametric-purpose dataset, i.e. Param20K, comprising about 20K
shape instances of 75 classes. On this dataset, we performed the classification
and rotation robustness experiments, and CstNet achieved 3.52\% and 26.17\%
absolute improvements in instance accuracy over the state-of-the-art methods,
respectively. To the best of our knowledge, CstNet is the first
constraint-aware deep learning method tailored for parametric point cloud
analysis in the CAD domain.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 12:18:18 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Nov 2024 07:10:52 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Nov 2024 13:56:33 GMT"
},
{
"version": "v4",
"created": "Sat, 8 Mar 2025 10:27:31 GMT"
},
{
"version": "v5",
"created": "Mon, 24 Mar 2025 10:22:01 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Cheng",
"Xi",
""
],
[
"Lei",
"Ruiqi",
""
],
[
"Huang",
"Di",
""
],
[
"Liao",
"Zhichao",
""
],
[
"Piao",
"Fengyuan",
""
],
[
"Chen",
"Yan",
""
],
[
"Feng",
"Pingfa",
""
],
[
"Zeng",
"Long",
""
]
] | TITLE: Constraint-Aware Feature Learning for Parametric Point Cloud
ABSTRACT: Parametric point clouds are sampled from CAD shapes and are becoming
increasingly common in industrial manufacturing. Most existing CAD-specific
deep learning methods only focus on geometric features, while overlooking
constraints which are inherent and important in CAD shapes. This limits their
ability to discern CAD shapes with similar appearance but different
constraints. To tackle this challenge, we first analyze the constraint
importance via a simple validation experiment. Then, we introduce a deep
learning-friendly constraints representation with three vectorized components,
and design a constraint-aware feature learning network (CstNet), which includes
two stages. Stage 1 extracts constraint features from B-Rep data or point clouds
based on local shape information, which enables better generalization to unseen
datasets after model pre-training. Stage 2 employs attention layers to
adaptively adjust the weights of the three constraint components, facilitating
the effective utilization of constraints. In addition, we built the first
multi-modal parametric-purpose dataset, i.e. Param20K, comprising about 20K
shape instances of 75 classes. On this dataset, we performed the classification
and rotation robustness experiments, and CstNet achieved 3.52\% and 26.17\%
absolute improvements in instance accuracy over the state-of-the-art methods,
respectively. To the best of our knowledge, CstNet is the first
constraint-aware deep learning method tailored for parametric point cloud
analysis in the CAD domain.
|
2411.13626 | Xinyue Hao | Xinyue Hao, Gen Li, Shreyank N Gowda, Robert B Fisher, Jonathan Huang,
Anurag Arnab, Laura Sevilla-Lara | Principles of Visual Tokens for Efficient Video Understanding | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video understanding has made huge strides in recent years, relying largely on
the power of transformers. As this architecture is notoriously expensive and
video data is highly redundant, research into improving efficiency has become
particularly relevant. Some creative solutions include token selection and
merging. While most methods succeed in reducing the cost of the model and
maintaining accuracy, an interesting pattern arises: most methods do not
outperform the baseline of randomly discarding tokens. In this paper we take a
closer look at this phenomenon and observe 5 principles of the nature of visual
tokens. For example, we observe that the value of tokens follows a clear
Pareto-distribution where most tokens have remarkably low value, and just a few
carry most of the perceptual information. We build on these and further
insights to propose a lightweight video model, LITE, that can select a small
number of tokens effectively, outperforming state-of-the-art and existing
baselines across datasets (Kinetics-400 and Something-Something-V2) in the
challenging trade-off of computation (GFLOPs) vs accuracy. Experiments also
show that LITE generalizes across datasets and even other tasks without the
need for retraining.
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2024 14:09:47 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 19:09:19 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hao",
"Xinyue",
""
],
[
"Li",
"Gen",
""
],
[
"Gowda",
"Shreyank N",
""
],
[
"Fisher",
"Robert B",
""
],
[
"Huang",
"Jonathan",
""
],
[
"Arnab",
"Anurag",
""
],
[
"Sevilla-Lara",
"Laura",
""
]
] | TITLE: Principles of Visual Tokens for Efficient Video Understanding
ABSTRACT: Video understanding has made huge strides in recent years, relying largely on
the power of transformers. As this architecture is notoriously expensive and
video data is highly redundant, research into improving efficiency has become
particularly relevant. Some creative solutions include token selection and
merging. While most methods succeed in reducing the cost of the model and
maintaining accuracy, an interesting pattern arises: most methods do not
outperform the baseline of randomly discarding tokens. In this paper we take a
closer look at this phenomenon and observe 5 principles of the nature of visual
tokens. For example, we observe that the value of tokens follows a clear
Pareto-distribution where most tokens have remarkably low value, and just a few
carry most of the perceptual information. We build on these and further
insights to propose a lightweight video model, LITE, that can select a small
number of tokens effectively, outperforming state-of-the-art and existing
baselines across datasets (Kinetics-400 and Something-Something-V2) in the
challenging trade-off of computation (GFLOPs) vs accuracy. Experiments also
show that LITE generalizes across datasets and even other tasks without the
need for retraining.
|
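The LITE model above keeps only a small number of high-value visual tokens, exploiting the observation that token value is Pareto-distributed. The sketch below shows a generic top-k token selection step with a placeholder value score; LITE's actual scorer is not reproduced, and `keep_ratio` is an assumption.

```python
import torch

def select_tokens(tokens, scores, keep_ratio=0.2):
    """tokens: [B, N, d]; scores: [B, N] estimated token value.
    Keep the top `keep_ratio` fraction of tokens per sample."""
    b, n, d = tokens.shape
    k = max(1, int(keep_ratio * n))
    idx = scores.topk(k, dim=1).indices                              # [B, k]
    return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, d))

tokens = torch.randn(2, 196, 768)
scores = torch.distributions.Pareto(1.0, 2.0).sample((2, 196))       # heavy-tailed token values
kept = select_tokens(tokens, scores)                                  # [2, 39, 768]
```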
2411.13927 | Xueying Jiang | Xueying Jiang, Lewei Lu, Ling Shao, Shijian Lu | Multimodal 3D Reasoning Segmentation with Complex Scenes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent development in multimodal learning has greatly advanced the
research in 3D scene understanding in various real-world tasks such as embodied
AI. However, most existing work shares two typical constraints: 1) they are
short of reasoning ability for interaction and interpretation of human
intention, and 2) they focus on scenarios with single-category objects only,
which leads to over-simplified textual descriptions due to the neglect of
multi-object scenarios and spatial relations among objects. We bridge the
research gaps by proposing a 3D reasoning segmentation task for multiple
objects in scenes. The task allows producing 3D segmentation masks and detailed
textual explanations as enriched by 3D spatial relations among objects. To this
end, we create ReasonSeg3D, a large-scale and high-quality benchmark that
integrates 3D segmentation masks and 3D spatial relations with generated
question-answer pairs. In addition, we design MORE3D, a novel 3D reasoning
network that works with queries of multiple objects and tailored 3D scene
understanding designs. MORE3D learns detailed explanations on 3D relations and
employs them to capture spatial information of objects and reason textual
outputs. Extensive experiments show that MORE3D excels in reasoning and
segmenting complex multi-object 3D scenes, and the created ReasonSeg3D offers a
valuable platform for future exploration of 3D reasoning segmentation. The
dataset and code will be released.
| [
{
"version": "v1",
"created": "Thu, 21 Nov 2024 08:22:45 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 08:50:00 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Jiang",
"Xueying",
""
],
[
"Lu",
"Lewei",
""
],
[
"Shao",
"Ling",
""
],
[
"Lu",
"Shijian",
""
]
] | TITLE: Multimodal 3D Reasoning Segmentation with Complex Scenes
ABSTRACT: The recent development in multimodal learning has greatly advanced the
research in 3D scene understanding in various real-world tasks such as embodied
AI. However, most existing work shares two typical constraints: 1) they are
short of reasoning ability for interaction and interpretation of human
intention, and 2) they focus on scenarios with single-category objects only,
which leads to over-simplified textual descriptions due to the neglect of
multi-object scenarios and spatial relations among objects. We bridge the
research gaps by proposing a 3D reasoning segmentation task for multiple
objects in scenes. The task allows producing 3D segmentation masks and detailed
textual explanations as enriched by 3D spatial relations among objects. To this
end, we create ReasonSeg3D, a large-scale and high-quality benchmark that
integrates 3D segmentation masks and 3D spatial relations with generated
question-answer pairs. In addition, we design MORE3D, a novel 3D reasoning
network that works with queries of multiple objects and tailored 3D scene
understanding designs. MORE3D learns detailed explanations of 3D relations and
employs them to capture the spatial information of objects and produce reasoned
textual outputs. Extensive experiments show that MORE3D excels in reasoning and
segmenting complex multi-object 3D scenes, and the created ReasonSeg3D offers a
valuable platform for future exploration of 3D reasoning segmentation. The
dataset and code will be released.
|
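Part of what the ReasonSeg3D record above describes is enriching question-answer pairs with 3D spatial relations among objects. The sketch below shows one way such relations could be derived from object centroids; the relation vocabulary, the camera-aligned axis convention, and the separation threshold are illustrative assumptions, not the benchmark's actual generation pipeline.

```python
import numpy as np

def pairwise_spatial_relations(centroids: dict[str, np.ndarray],
                               min_gap: float = 0.05) -> list[str]:
    """Describe coarse 3D relations between every pair of labeled objects.

    Assumes a camera-aligned frame: +x is right, +y is up, +z is away from the
    viewer. Only axes separated by more than `min_gap` (meters) contribute, so
    near-ties do not produce spurious relations.
    """
    relations = []
    names = sorted(centroids)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            dx, dy, dz = centroids[b] - centroids[a]
            if abs(dx) > min_gap:
                relations.append(f"{b} is {'right of' if dx > 0 else 'left of'} {a}")
            if abs(dy) > min_gap:
                relations.append(f"{b} is {'above' if dy > 0 else 'below'} {a}")
            if abs(dz) > min_gap:
                relations.append(f"{b} is {'behind' if dz > 0 else 'in front of'} {a}")
    return relations

# Toy scene: three objects with hand-picked centroids (meters)
scene = {
    "chair": np.array([0.0, 0.0, 2.0]),
    "lamp":  np.array([0.6, 1.2, 2.1]),
    "rug":   np.array([0.1, -0.4, 1.5]),
}
for rel in pairwise_spatial_relations(scene):
    print(rel)
```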
2411.14299 | Jitendra Bhandari | Jitendra Bhandari, Vineet Bhat, Yuheng He, Hamed Rahmani, Siddharth
Garg and Ramesh Karri | Masala-CHAI: A Large-Scale SPICE Netlist Dataset for Analog Circuits by
Harnessing AI | null | null | null | null | cs.AR | http://creativecommons.org/licenses/by/4.0/ | Masala-CHAI is a fully automated framework leveraging large language models
(LLMs) to generate Simulation Programs with Integrated Circuit Emphasis (SPICE)
netlists. It addresses a long-standing challenge in circuit design automation:
automating netlist generation for analog circuits. Automating this workflow
could accelerate the creation of fine-tuned LLMs for analog circuit design and
verification. In this work, we identify key challenges in automated netlist
generation and evaluate multimodal capabilities of state-of-the-art LLMs,
particularly GPT-4, in addressing them. We propose a three-step workflow to
overcome existing limitations: labeling analog circuits, prompt tuning, and
netlist verification. This approach enables end-to-end SPICE netlist generation
from circuit schematic images, tackling the persistent challenge of accurate
netlist generation. We utilize Masala-CHAI to collect a corpus of 7,500
schematics that span varying complexities across 10 textbooks and benchmark
various open-source and proprietary LLMs. Models fine-tuned on Masala-CHAI,
when used in LLM-agentic frameworks such as AnalogCoder, achieve a notable 46%
improvement in Pass@1 scores. We open-source our dataset and code for
community-driven
development.
| [
{
"version": "v1",
"created": "Thu, 21 Nov 2024 16:50:11 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Nov 2024 20:42:40 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Feb 2025 18:52:39 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Mar 2025 15:22:28 GMT"
},
{
"version": "v5",
"created": "Sun, 23 Mar 2025 15:39:58 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Bhandari",
"Jitendra",
""
],
[
"Bhat",
"Vineet",
""
],
[
"He",
"Yuheng",
""
],
[
"Rahmani",
"Hamed",
""
],
[
"Garg",
"Siddharth",
""
],
[
"Karri",
"Ramesh",
""
]
] | TITLE: Masala-CHAI: A Large-Scale SPICE Netlist Dataset for Analog Circuits by
Harnessing AI
ABSTRACT: Masala-CHAI is a fully automated framework leveraging large language models
(LLMs) to generate Simulation Programs with Integrated Circuit Emphasis (SPICE)
netlists. It addresses a long-standing challenge in circuit design automation:
automating netlist generation for analog circuits. Automating this workflow
could accelerate the creation of fine-tuned LLMs for analog circuit design and
verification. In this work, we identify key challenges in automated netlist
generation and evaluate multimodal capabilities of state-of-the-art LLMs,
particularly GPT-4, in addressing them. We propose a three-step workflow to
overcome existing limitations: labeling analog circuits, prompt tuning, and
netlist verification. This approach enables end-to-end SPICE netlist generation
from circuit schematic images, tackling the persistent challenge of accurate
netlist generation. We utilize Masala-CHAI to collect a corpus of 7,500
schematics that span varying complexities across 10 textbooks and benchmark
various open-source and proprietary LLMs. Models fine-tuned on Masala-CHAI,
when used in LLM-agentic frameworks such as AnalogCoder, achieve a notable 46%
improvement in Pass@1 scores. We open-source our dataset and code for
community-driven
development.
|
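The third step of the workflow in the Masala-CHAI record above is netlist verification. A minimal structural check on a generated SPICE netlist is sketched below; the rules (two-terminal elements name two nodes plus a value, and no node other than ground is left floating) are generic sanity checks assumed for illustration, not the paper's verification procedure.

```python
from collections import Counter

def check_spice_netlist(netlist: str) -> list[str]:
    """Return a list of structural problems found in a flat SPICE netlist."""
    problems, node_counts = [], Counter()
    for line in netlist.strip().splitlines():
        tokens = line.split()
        if not tokens or tokens[0].startswith("*") or tokens[0].startswith("."):
            continue  # skip comments and control cards such as .tran / .end
        name, args = tokens[0], tokens[1:]
        if name[0].upper() in "RCLV":
            if len(args) < 3:
                problems.append(f"{name}: expected two nodes and a value, got {args}")
                continue
            node_counts.update(args[:2])
        else:
            node_counts.update(args[:2])  # crude handling of other element types
    for node, count in node_counts.items():
        if node != "0" and count < 2:
            problems.append(f"node {node} is floating (referenced once)")
    return problems

example = """
* simple RC low-pass filter
V1 in 0 AC 1
R1 in out 1k
C1 out 0 1u
.end
"""
print(check_spice_netlist(example))  # [] means the structural checks passed
```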
2411.15076 | Wentao Huang | Wentao Huang, Meilong Xu, Xiaoling Hu, Shahira Abousamra, Aniruddha
Ganguly, Saarthak Kapse, Alisa Yurovsky, Prateek Prasanna, Tahsin Kurc, Joel
Saltz, Michael L. Miller, Chao Chen | RankByGene: Gene-Guided Histopathology Representation Learning Through
Cross-Modal Ranking Consistency | 18 pages, 9 figures | null | null | null | eess.IV cs.CV q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Spatial transcriptomics (ST) provides essential spatial context by mapping
gene expression within tissue, enabling detailed study of cellular
heterogeneity and tissue organization. However, aligning ST data with histology
images poses challenges due to inherent spatial distortions and
modality-specific variations. Existing methods largely rely on direct
alignment, which often fails to capture complex cross-modal relationships. To
address these limitations, we propose a novel framework that aligns gene and
image features using a ranking-based alignment loss, preserving relative
similarity across modalities and enabling robust multi-scale alignment. To
further enhance the alignment's stability, we employ self-supervised knowledge
distillation with a teacher-student network architecture, effectively
mitigating disruptions from high dimensionality, sparsity, and noise in gene
expression data. Extensive experiments on seven public datasets that encompass
gene expression prediction, slide-level classification, and survival analysis
demonstrate the efficacy of our method, showing improved alignment and
predictive performance over existing methods.
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2024 17:08:28 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 06:41:14 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Huang",
"Wentao",
""
],
[
"Xu",
"Meilong",
""
],
[
"Hu",
"Xiaoling",
""
],
[
"Abousamra",
"Shahira",
""
],
[
"Ganguly",
"Aniruddha",
""
],
[
"Kapse",
"Saarthak",
""
],
[
"Yurovsky",
"Alisa",
""
],
[
"Prasanna",
"Prateek",
""
],
[
"Kurc",
"Tahsin",
""
],
[
"Saltz",
"Joel",
""
],
[
"Miller",
"Michael L.",
""
],
[
"Chen",
"Chao",
""
]
] | TITLE: RankByGene: Gene-Guided Histopathology Representation Learning Through
Cross-Modal Ranking Consistency
ABSTRACT: Spatial transcriptomics (ST) provides essential spatial context by mapping
gene expression within tissue, enabling detailed study of cellular
heterogeneity and tissue organization. However, aligning ST data with histology
images poses challenges due to inherent spatial distortions and
modality-specific variations. Existing methods largely rely on direct
alignment, which often fails to capture complex cross-modal relationships. To
address these limitations, we propose a novel framework that aligns gene and
image features using a ranking-based alignment loss, preserving relative
similarity across modalities and enabling robust multi-scale alignment. To
further enhance the alignment's stability, we employ self-supervised knowledge
distillation with a teacher-student network architecture, effectively
mitigating disruptions from high dimensionality, sparsity, and noise in gene
expression data. Extensive experiments on seven public datasets that encompass
gene expression prediction, slide-level classification, and survival analysis
demonstrate the efficacy of our method, showing improved alignment and
predictive performance over existing methods.
|
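The central loss in the RankByGene record above aligns modalities by preserving relative similarity rather than forcing features to match directly. A hedged sketch of one way to express such a ranking consistency objective is given below; the margin-based hinge formulation and the cosine similarity choice are assumptions for illustration, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def ranking_consistency_loss(img_feats: torch.Tensor,
                             gene_feats: torch.Tensor,
                             margin: float = 0.1) -> torch.Tensor:
    """Encourage the image-space similarity ordering to match the gene-space ordering.

    For an anchor spot i and two other spots (j, k): if j is more similar to i
    than k is in gene space, the same ordering should hold (by at least
    `margin`) in image-feature space. Violations are penalized with a hinge.
    """
    img = F.normalize(img_feats, dim=-1)
    gene = F.normalize(gene_feats, dim=-1)
    sim_img = img @ img.T          # (n, n) cosine similarities between spots
    sim_gene = gene @ gene.T
    n = img.shape[0]

    # order[i, j, k] = sign of the gene-space preference of j over k for anchor i
    order = torch.sign(sim_gene.unsqueeze(2) - sim_gene.unsqueeze(1))   # (n, n, n)
    diff = sim_img.unsqueeze(2) - sim_img.unsqueeze(1)                  # (n, n, n)
    hinge = F.relu(margin - order * diff)

    # ignore degenerate triplets where any two indices coincide
    neq = ~torch.eye(n, dtype=torch.bool)
    mask = neq.unsqueeze(2) & neq.unsqueeze(1) & neq.unsqueeze(0)
    return (hinge * mask).sum() / mask.sum()

# Toy batch of 16 spots with image and gene features projected to the same width
loss = ranking_consistency_loss(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```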
2411.15648 | Elad Amrani | Elad Amrani, Leonid Karlinsky, Alex Bronstein | Sample- and Parameter-Efficient Auto-Regressive Image Models | CVPR 2025 camera-ready with supplementary. For code see
https://github.com/elad-amrani/xtra | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We introduce XTRA, a vision model pre-trained with a novel auto-regressive
objective that significantly enhances both sample and parameter efficiency
compared to previous auto-regressive image models. Unlike contrastive or masked
image modeling methods, which have not been demonstrated as having consistent
scaling behavior on unbalanced internet data, auto-regressive vision models
exhibit scalable and promising performance as model and dataset size increase.
In contrast to standard auto-regressive models, XTRA employs a Block Causal
Mask, where each block represents k $\times$ k tokens, rather than relying on a
standard causal mask. By reconstructing pixel values block by block, XTRA
captures higher-level structural patterns over larger image regions. Predicting
on blocks allows the model to learn relationships across broader areas of
pixels, enabling more abstract and semantically meaningful representations than
traditional next-token prediction. This simple modification yields two key
results. First, XTRA is sample-efficient. Despite being trained on 152$\times$
fewer samples (13.1M vs. 2B), XTRA ViT-H/14 surpasses the top-1 average
accuracy of the previous state-of-the-art auto-regressive model across 15
diverse image recognition benchmarks. Second, XTRA is parameter-efficient.
Compared to auto-regressive models trained on ImageNet-1k, XTRA ViT-B/16
outperforms in linear and attentive probing tasks, using 7-16$\times$ fewer
parameters (85M vs. 1.36B/0.63B).
| [
{
"version": "v1",
"created": "Sat, 23 Nov 2024 20:40:46 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 21:23:43 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Amrani",
"Elad",
""
],
[
"Karlinsky",
"Leonid",
""
],
[
"Bronstein",
"Alex",
""
]
] | TITLE: Sample- and Parameter-Efficient Auto-Regressive Image Models
ABSTRACT: We introduce XTRA, a vision model pre-trained with a novel auto-regressive
objective that significantly enhances both sample and parameter efficiency
compared to previous auto-regressive image models. Unlike contrastive or masked
image modeling methods, which have not been demonstrated as having consistent
scaling behavior on unbalanced internet data, auto-regressive vision models
exhibit scalable and promising performance as model and dataset size increase.
In contrast to standard auto-regressive models, XTRA employs a Block Causal
Mask, where each block represents k $\times$ k tokens, rather than relying on a
standard causal mask. By reconstructing pixel values block by block, XTRA
captures higher-level structural patterns over larger image regions. Predicting
on blocks allows the model to learn relationships across broader areas of
pixels, enabling more abstract and semantically meaningful representations than
traditional next-token prediction. This simple modification yields two key
results. First, XTRA is sample-efficient. Despite being trained on 152$\times$
fewer samples (13.1M vs. 2B), XTRA ViT-H/14 surpasses the top-1 average
accuracy of the previous state-of-the-art auto-regressive model across 15
diverse image recognition benchmarks. Second, XTRA is parameter-efficient.
Compared to auto-regressive models trained on ImageNet-1k, XTRA ViT-B/16
outperforms in linear and attentive probing tasks, using 7-16$\times$ fewer
parameters (85M vs. 1.36B/0.63B).
|
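The key change described in the XTRA record above is replacing the standard causal mask with a block causal mask over k $\times$ k token blocks. A small sketch of how such a mask could be constructed is given below; the row-major block ordering and the example sizes are illustrative assumptions.

```python
import torch

def block_causal_mask(num_blocks: int, tokens_per_block: int) -> torch.Tensor:
    """Build an attention mask where tokens attend to all tokens in their own
    block and in every earlier block, but never to later blocks.

    Returns a boolean matrix of shape (N, N) with N = num_blocks * tokens_per_block,
    where True means "may attend". To use it with scaled dot-product attention,
    set the False positions to -inf before the softmax.
    """
    # Block-level lower-triangular structure: block i sees blocks 0..i
    block_visibility = torch.tril(torch.ones(num_blocks, num_blocks)).bool()
    # Expand each block entry to a tokens_per_block x tokens_per_block tile
    mask = block_visibility.repeat_interleave(tokens_per_block, dim=0)
    mask = mask.repeat_interleave(tokens_per_block, dim=1)
    return mask

# Example: an image split into 4 blocks of k*k = 2*2 = 4 tokens each
mask = block_causal_mask(num_blocks=4, tokens_per_block=4)
print(mask.shape)   # torch.Size([16, 16])
print(mask.int())   # dense tiles on and below the block diagonal
```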
2411.16443 | Byeongjun Park | Hyojun Go, Byeongjun Park, Jiho Jang, Jin-Young Kim, Soonwoo Kwon,
Changick Kim | SplatFlow: Multi-View Rectified Flow Model for 3D Gaussian Splatting
Synthesis | Project Page: https://gohyojun15.github.io/SplatFlow/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Text-based generation and editing of 3D scenes hold significant potential for
streamlining content creation through intuitive user interactions. While recent
advances leverage 3D Gaussian Splatting (3DGS) for high-fidelity and real-time
rendering, existing methods are often specialized and task-focused, lacking a
unified framework for both generation and editing. In this paper, we introduce
SplatFlow, a comprehensive framework that addresses this gap by enabling direct
3DGS generation and editing. SplatFlow comprises two main components: a
multi-view rectified flow (RF) model and a Gaussian Splatting Decoder
(GSDecoder). The multi-view RF model operates in latent space, generating
multi-view images, depths, and camera poses simultaneously, conditioned on text
prompts, thus addressing challenges like diverse scene scales and complex
camera trajectories in real-world settings. Then, the GSDecoder efficiently
translates these latent outputs into 3DGS representations through a
feed-forward 3DGS method. Leveraging training-free inversion and inpainting
techniques, SplatFlow enables seamless 3DGS editing and supports a broad range
of 3D tasks, including object editing, novel view synthesis, and camera pose
estimation, within a unified framework without requiring additional complex
pipelines. We validate SplatFlow's capabilities on the MVImgNet and DL3DV-7K
datasets, demonstrating its versatility and effectiveness in various 3D
generation, editing, and inpainting-based tasks.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 14:46:17 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 05:52:21 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Go",
"Hyojun",
""
],
[
"Park",
"Byeongjun",
""
],
[
"Jang",
"Jiho",
""
],
[
"Kim",
"Jin-Young",
""
],
[
"Kwon",
"Soonwoo",
""
],
[
"Kim",
"Changick",
""
]
] | TITLE: SplatFlow: Multi-View Rectified Flow Model for 3D Gaussian Splatting
Synthesis
ABSTRACT: Text-based generation and editing of 3D scenes hold significant potential for
streamlining content creation through intuitive user interactions. While recent
advances leverage 3D Gaussian Splatting (3DGS) for high-fidelity and real-time
rendering, existing methods are often specialized and task-focused, lacking a
unified framework for both generation and editing. In this paper, we introduce
SplatFlow, a comprehensive framework that addresses this gap by enabling direct
3DGS generation and editing. SplatFlow comprises two main components: a
multi-view rectified flow (RF) model and a Gaussian Splatting Decoder
(GSDecoder). The multi-view RF model operates in latent space, generating
multi-view images, depths, and camera poses simultaneously, conditioned on text
prompts, thus addressing challenges like diverse scene scales and complex
camera trajectories in real-world settings. Then, the GSDecoder efficiently
translates these latent outputs into 3DGS representations through a
feed-forward 3DGS method. Leveraging training-free inversion and inpainting
techniques, SplatFlow enables seamless 3DGS editing and supports a broad range
of 3D tasks, including object editing, novel view synthesis, and camera pose
estimation, within a unified framework without requiring additional complex
pipelines. We validate SplatFlow's capabilities on the MVImgNet and DL3DV-7K
datasets, demonstrating its versatility and effectiveness in various 3D
generation, editing, and inpainting-based tasks.
|
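The multi-view rectified flow model in the SplatFlow record above is sampled, as rectified flow models generally are, by integrating a learned velocity field from noise to data. A generic Euler sampler is sketched below purely as an illustration; the velocity-network interface, the omission of text and camera conditioning, and the step count are assumptions, not SplatFlow's actual implementation.

```python
import torch

@torch.no_grad()
def rectified_flow_sample(velocity_net, latent_shape, num_steps: int = 50,
                          device: str = "cpu") -> torch.Tensor:
    """Integrate dx/dt = v(x, t) from t=0 (noise) to t=1 (data) with Euler steps.

    `velocity_net(x, t)` is assumed to return the predicted velocity for latents
    `x` at time `t`; conditioning inputs are omitted to keep the sketch minimal.
    """
    x = torch.randn(latent_shape, device=device)    # start from Gaussian noise
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((latent_shape[0],), i * dt, device=device)
        x = x + velocity_net(x, t) * dt             # straight-line Euler update
    return x

# Toy velocity field standing in for the multi-view latent denoiser
toy_net = lambda x, t: -x                           # pulls latents toward zero
sample = rectified_flow_sample(toy_net, latent_shape=(2, 4, 32, 32), num_steps=10)
print(sample.shape)  # torch.Size([2, 4, 32, 32])
```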
2411.16683 | Yao-Chih Lee | Yao-Chih Lee, Erika Lu, Sarah Rumbley, Michal Geyer, Jia-Bin Huang,
Tali Dekel, Forrester Cole | Generative Omnimatte: Learning to Decompose Video into Layers | CVPR 2025. Project page: https://gen-omnimatte.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a video and a set of input object masks, an omnimatte method aims to
decompose the video into semantically meaningful layers containing individual
objects along with their associated effects, such as shadows and reflections.
Existing omnimatte methods assume a static background or accurate pose and
depth estimation and produce poor decompositions when these assumptions are
violated. Furthermore, due to the lack of generative prior on natural videos,
existing methods cannot complete dynamic occluded regions. We present a novel
generative layered video decomposition framework to address the omnimatte
problem. Our method does not assume a stationary scene or require camera pose
or depth information and produces clean, complete layers, including convincing
completions of occluded dynamic regions. Our core idea is to train a video
diffusion model to identify and remove scene effects caused by a specific
object. We show that this model can be finetuned from an existing video
inpainting model with a small, carefully curated dataset, and demonstrate
high-quality decompositions and editing results for a wide range of casually
captured videos containing soft shadows, glossy reflections, splashing water,
and more.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 18:59:57 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 16:08:09 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lee",
"Yao-Chih",
""
],
[
"Lu",
"Erika",
""
],
[
"Rumbley",
"Sarah",
""
],
[
"Geyer",
"Michal",
""
],
[
"Huang",
"Jia-Bin",
""
],
[
"Dekel",
"Tali",
""
],
[
"Cole",
"Forrester",
""
]
] | TITLE: Generative Omnimatte: Learning to Decompose Video into Layers
ABSTRACT: Given a video and a set of input object masks, an omnimatte method aims to
decompose the video into semantically meaningful layers containing individual
objects along with their associated effects, such as shadows and reflections.
Existing omnimatte methods assume a static background or accurate pose and
depth estimation and produce poor decompositions when these assumptions are
violated. Furthermore, due to the lack of generative prior on natural videos,
existing methods cannot complete dynamic occluded regions. We present a novel
generative layered video decomposition framework to address the omnimatte
problem. Our method does not assume a stationary scene or require camera pose
or depth information and produces clean, complete layers, including convincing
completions of occluded dynamic regions. Our core idea is to train a video
diffusion model to identify and remove scene effects caused by a specific
object. We show that this model can be finetuned from an existing video
inpainting model with a small, carefully curated dataset, and demonstrate
high-quality decompositions and editing results for a wide range of casually
captured videos containing soft shadows, glossy reflections, splashing water,
and more.
|