id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2403.16526 | Haiqiao Wang | Haiqiao Wang, Zhuoyuan Wang, Dong Ni, Yi Wang | ModeTv2: GPU-accelerated Motion Decomposition Transformer for Pairwise
Optimization in Medical Image Registration | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Deformable image registration plays a crucial role in medical imaging, aiding
in disease diagnosis and image-guided interventions. Traditional iterative
methods are slow, while deep learning (DL) accelerates solutions but faces
usability and precision challenges. This study introduces a pyramid network
with the enhanced motion decomposition Transformer (ModeTv2) operator,
showcasing superior pairwise optimization (PO) akin to traditional methods. We
re-implement the ModeT operator with CUDA extensions to enhance its
computational efficiency. We further propose the RegHead module, which refines
deformation fields, improves the realism of deformations, and reduces
parameters. By adopting PO,
the proposed network balances accuracy, efficiency, and generalizability.
Extensive experiments on three public brain MRI datasets and one abdominal CT
dataset demonstrate the network's suitability for PO, providing a DL model with
enhanced usability and interpretability. The code is publicly available at
https://github.com/ZAX130/ModeTv2.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 08:09:22 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 01:56:29 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Haiqiao",
""
],
[
"Wang",
"Zhuoyuan",
""
],
[
"Ni",
"Dong",
""
],
[
"Wang",
"Yi",
""
]
] | TITLE: ModeTv2: GPU-accelerated Motion Decomposition Transformer for Pairwise
Optimization in Medical Image Registration
ABSTRACT: Deformable image registration plays a crucial role in medical imaging, aiding
in disease diagnosis and image-guided interventions. Traditional iterative
methods are slow, while deep learning (DL) accelerates solutions but faces
usability and precision challenges. This study introduces a pyramid network
with the enhanced motion decomposition Transformer (ModeTv2) operator,
showcasing superior pairwise optimization (PO) akin to traditional methods. We
re-implement the ModeT operator with CUDA extensions to enhance its
computational efficiency. We further propose the RegHead module, which refines
deformation fields, improves the realism of deformations, and reduces
parameters. By adopting PO,
the proposed network balances accuracy, efficiency, and generalizability.
Extensive experiments on three public brain MRI datasets and one abdominal CT
dataset demonstrate the network's suitability for PO, providing a DL model with
enhanced usability and interpretability. The code is publicly available at
https://github.com/ZAX130/ModeTv2.
|
2403.18771 | Yukyung Lee | Yukyung Lee, Joonghoon Kim, Jaehee Kim, Hyowon Cho, Jaewook Kang,
Pilsung Kang, Najoung Kim | CheckEval: A reliable LLM-as-a-Judge framework for evaluating text
generation using checklists | Extended version currently under review (Workshop version: HEAL at
CHI 2024) | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Existing LLM-as-a-Judge approaches for evaluating text generation suffer from
rating inconsistencies, with low agreement and high rating variance across
different evaluator models. We attribute this to subjective evaluation criteria
combined with Likert scale scoring in existing protocols. To address this
issue, we introduce CheckEval, a checklist-based evaluation framework that
improves rating reliability via decomposed binary questions. Through
experiments with 12 evaluator models across multiple datasets, we first
demonstrate that CheckEval strongly correlates with human judgments, improving
the average correlation with human judgments by 0.10. More importantly,
CheckEval dramatically improves the average agreement across evaluator models
by 0.45 and reduces the score variance. CheckEval scores furthermore have the
benefit of being more interpretable because the framework decomposes evaluation criteria
into traceable binary decisions, allowing analyses of specific attributes
driving quality judgments.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 17:20:39 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 00:07:06 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lee",
"Yukyung",
""
],
[
"Kim",
"Joonghoon",
""
],
[
"Kim",
"Jaehee",
""
],
[
"Cho",
"Hyowon",
""
],
[
"Kang",
"Jaewook",
""
],
[
"Kang",
"Pilsung",
""
],
[
"Kim",
"Najoung",
""
]
] | TITLE: CheckEval: A reliable LLM-as-a-Judge framework for evaluating text
generation using checklists
ABSTRACT: Existing LLM-as-a-Judge approaches for evaluating text generation suffer from
rating inconsistencies, with low agreement and high rating variance across
different evaluator models. We attribute this to subjective evaluation criteria
combined with Likert scale scoring in existing protocols. To address this
issue, we introduce CheckEval, a checklist-based evaluation framework that
improves rating reliability via decomposed binary questions. Through
experiments with 12 evaluator models across multiple datasets, we first
demonstrate that CheckEval strongly correlates with human judgments, improving
the average correlation with human judgments by 0.10. More importantly,
CheckEval dramatically improves the average agreement across evaluator models
by 0.45 and reduces the score variance. CheckEval scores furthermore have the
benefit of being more interpretable because the framework decomposes evaluation criteria
into traceable binary decisions, allowing analyses of specific attributes
driving quality judgments.
|
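The CheckEval record above describes decomposing an evaluation criterion into binary checklist questions and aggregating the yes/no answers into a score. As a minimal sketch of that aggregation step only, assuming the binary answers have already been produced by an evaluator model (the item texts, weights, and scoring rule below are illustrative, not the authors' implementation):

```python
# Minimal sketch of checklist-based scoring: each decomposed binary question has
# already been answered yes/no by an evaluator LLM; we only aggregate here.
# Item texts and weights are hypothetical, not taken from the paper.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str     # decomposed binary question
    answer: bool      # evaluator LLM's yes/no judgement
    weight: float = 1.0

def checklist_score(items: list[ChecklistItem]) -> float:
    """Aggregate binary decisions into a single score in [0, 1]."""
    total = sum(i.weight for i in items)
    return sum(i.weight for i in items if i.answer) / total if total else 0.0

items = [
    ChecklistItem("Does the summary mention the main event?", True),
    ChecklistItem("Is the summary free of unsupported claims?", False),
    ChecklistItem("Is the summary fluent?", True),
]
print(checklist_score(items))  # 0.666...
```

Because each item is a traceable binary decision, a per-attribute analysis of what drove a low score falls out of the same structure.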
2404.00521 | Yao Ni | Yao Ni, Piotr Koniusz | CHAIN: Enhancing Generalization in Data-Efficient GANs via lipsCHitz
continuity constrAIned Normalization | Accepted by CVPR 2024. 26 pages. Code:
https://github.com/MaxwellYaoNi/CHAIN | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative Adversarial Networks (GANs) significantly advanced image
generation but their performance heavily depends on abundant training data. In
scenarios with limited data, GANs often struggle with discriminator overfitting
and unstable training. Batch Normalization (BN), despite being known for
enhancing generalization and training stability, has rarely been used in the
discriminator of Data-Efficient GANs. Our work addresses this gap by
identifying a critical flaw in BN: the tendency for gradient explosion during
the centering and scaling steps. To tackle this issue, we present CHAIN
(lipsCHitz continuity constrAIned Normalization), which replaces the
conventional centering step with zero-mean regularization and integrates a
Lipschitz continuity constraint in the scaling step. CHAIN further enhances GAN
training by adaptively interpolating the normalized and unnormalized features,
effectively avoiding discriminator overfitting. Our theoretical analyses firmly
establish CHAIN's effectiveness in reducing gradients in latent features and
weights, improving stability and generalization in GAN training. Empirical
evidence supports our theory. CHAIN achieves state-of-the-art results in
data-limited scenarios on CIFAR-10/100, ImageNet, five low-shot and seven
high-resolution few-shot image datasets. Code:
https://github.com/MaxwellYaoNi/CHAIN
| [
{
"version": "v1",
"created": "Sun, 31 Mar 2024 01:41:36 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Apr 2024 07:15:34 GMT"
},
{
"version": "v3",
"created": "Sun, 7 Apr 2024 15:04:47 GMT"
},
{
"version": "v4",
"created": "Sat, 1 Jun 2024 16:22:54 GMT"
},
{
"version": "v5",
"created": "Sat, 2 Nov 2024 03:14:15 GMT"
},
{
"version": "v6",
"created": "Sat, 15 Mar 2025 06:11:42 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ni",
"Yao",
""
],
[
"Koniusz",
"Piotr",
""
]
] | TITLE: CHAIN: Enhancing Generalization in Data-Efficient GANs via lipsCHitz
continuity constrAIned Normalization
ABSTRACT: Generative Adversarial Networks (GANs) significantly advanced image
generation but their performance heavily depends on abundant training data. In
scenarios with limited data, GANs often struggle with discriminator overfitting
and unstable training. Batch Normalization (BN), despite being known for
enhancing generalization and training stability, has rarely been used in the
discriminator of Data-Efficient GANs. Our work addresses this gap by
identifying a critical flaw in BN: the tendency for gradient explosion during
the centering and scaling steps. To tackle this issue, we present CHAIN
(lipsCHitz continuity constrAIned Normalization), which replaces the
conventional centering step with zero-mean regularization and integrates a
Lipschitz continuity constraint in the scaling step. CHAIN further enhances GAN
training by adaptively interpolating the normalized and unnormalized features,
effectively avoiding discriminator overfitting. Our theoretical analyses firmly
establish CHAIN's effectiveness in reducing gradients in latent features and
weights, improving stability and generalization in GAN training. Empirical
evidence supports our theory. CHAIN achieves state-of-the-art results in
data-limited scenarios on CIFAR-10/100, ImageNet, five low-shot and seven
high-resolution few-shot image datasets. Code:
https://github.com/MaxwellYaoNi/CHAIN
|
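The CHAIN abstract above pinpoints the centering and scaling steps of batch normalization as the source of gradient explosion, replacing them with a zero-mean penalty and a Lipschitz-constrained scaling that is blended adaptively with the unnormalized features. A loose sketch of what such a layer could look like, assuming a simple sigmoid-gated blend and a hard clamp on the gain (the paper's actual penalty weighting, constraint, and interpolation scheme may differ):

```python
# Schematic normalization layer in the spirit of the abstract: no explicit
# centering (a zero-mean penalty is returned instead), a clamped gain to bound
# the scaling step, and an adaptive blend of normalized and raw features.
# All design details here are assumptions, not the paper's exact method.
import torch
import torch.nn as nn

class ChainLikeNorm(nn.Module):
    def __init__(self, num_features: int, max_gain: float = 5.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))  # blend parameter
        self.max_gain = max_gain

    def forward(self, x: torch.Tensor):
        # x: (N, C, H, W); per-channel statistics over the batch.
        mean = x.mean(dim=(0, 2, 3))
        var = x.var(dim=(0, 2, 3), unbiased=False)
        zero_mean_penalty = (mean ** 2).mean()            # replaces centering
        gain = torch.clamp(torch.rsqrt(var + 1e-5), max=self.max_gain)
        x_norm = x * gain.view(1, -1, 1, 1)               # scaling only
        a = torch.sigmoid(self.alpha)
        return a * x_norm + (1 - a) * x, zero_mean_penalty

x = torch.randn(8, 16, 4, 4)
out, penalty = ChainLikeNorm(16)(x)  # add `penalty` to the discriminator loss
```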
2404.05583 | Yue-Hua Han | Yue-Hua Han, Tai-Ming Huang, Kai-Lung Hua, Jun-Cheng Chen | Towards More General Video-based Deepfake Detection through Facial
Component Guided Adaptation for Foundation Model | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative models have enabled the creation of highly realistic
facial-synthetic images, raising significant concerns due to their potential
for misuse. Despite rapid advancements in the field of deepfake detection,
developing efficient approaches to leverage foundation models for improved
generalizability to unseen forgery samples remains challenging. To address this
challenge, we propose a novel side-network-based decoder that extracts spatial
and temporal cues using the CLIP image encoder for generalized video-based
Deepfake detection. Additionally, we introduce Facial Component Guidance (FCG)
to enhance spatial learning generalizability by encouraging the model to focus
on key facial regions. By leveraging the generic features of a vision-language
foundation model, our approach demonstrates promising generalizability on
challenging Deepfake datasets while also exhibiting superiority in training
data efficiency, parameter efficiency, and model robustness.
| [
{
"version": "v1",
"created": "Mon, 8 Apr 2024 14:58:52 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jun 2024 06:29:37 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 17:10:58 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Han",
"Yue-Hua",
""
],
[
"Huang",
"Tai-Ming",
""
],
[
"Hua",
"Kai-Lung",
""
],
[
"Chen",
"Jun-Cheng",
""
]
] | TITLE: Towards More General Video-based Deepfake Detection through Facial
Component Guided Adaptation for Foundation Model
ABSTRACT: Generative models have enabled the creation of highly realistic
facial-synthetic images, raising significant concerns due to their potential
for misuse. Despite rapid advancements in the field of deepfake detection,
developing efficient approaches to leverage foundation models for improved
generalizability to unseen forgery samples remains challenging. To address this
challenge, we propose a novel side-network-based decoder that extracts spatial
and temporal cues using the CLIP image encoder for generalized video-based
Deepfake detection. Additionally, we introduce Facial Component Guidance (FCG)
to enhance spatial learning generalizability by encouraging the model to focus
on key facial regions. By leveraging the generic features of a vision-language
foundation model, our approach demonstrates promising generalizability on
challenging Deepfake datasets while also exhibiting superiority in training
data efficiency, parameter efficiency, and model robustness.
|
2404.10757 | Yuyang Li | Yu-Yang Li, Yu Bai, Cunshi Wang, Mengwei Qu, Ziteng Lu, Roberto Soria,
Jifeng Liu | Deep Learning and LLM-based Methods Applied to Stellar Lightcurve
Classification | 35 pages, 20 figures | Intell Comput. 2025;4:0110 | 10.34133/icomputing.0110 | null | astro-ph.IM astro-ph.SR cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Light curves serve as a valuable source of information on stellar formation
and evolution. With the rapid advancement of machine learning techniques, they
can be effectively processed to extract astronomical patterns and information.
In this study, we present a comprehensive evaluation of deep-learning and large
language model (LLM) based models for the automatic classification of variable
star light curves, based on large datasets from the Kepler and K2 missions.
Special emphasis is placed on Cepheids, RR Lyrae, and eclipsing binaries,
examining the influence of observational cadence and phase distribution on
classification precision. Employing AutoDL optimization, we achieve striking
performance with the 1D-Convolution+BiLSTM architecture and the Swin
Transformer, reaching accuracies of 94\% and 99\%, respectively, with the
latter demonstrating a notable 83\% accuracy in discerning the elusive Type II
Cepheids, which comprise merely 0.02\% of the total dataset. We unveil
StarWhisper LightCurve (LC), an innovative series comprising three LLM-based
models: LLM,
multimodal large language model (MLLM), and Large Audio Language Model (LALM).
Each model is fine-tuned with strategic prompt engineering and customized
training methods to explore the emergent abilities of these models for
astronomical data. Remarkably, the StarWhisper LC series exhibits high accuracies
around 90\%, significantly reducing the need for explicit feature engineering,
thereby paving the way for streamlined parallel data processing and the
progression of multifaceted multimodal models in astronomical applications. The
study furnishes two detailed catalogs illustrating the impacts of phase and
sampling intervals on deep learning classification accuracy, showing that a
substantial decrease of up to 14\% in observation duration and 21\% in sampling
points can be realized without compromising accuracy by more than 10\%.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 17:35:25 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Feb 2025 00:25:01 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Li",
"Yu-Yang",
""
],
[
"Bai",
"Yu",
""
],
[
"Wang",
"Cunshi",
""
],
[
"Qu",
"Mengwei",
""
],
[
"Lu",
"Ziteng",
""
],
[
"Soria",
"Roberto",
""
],
[
"Liu",
"Jifeng",
""
]
] | TITLE: Deep Learning and LLM-based Methods Applied to Stellar Lightcurve
Classification
ABSTRACT: Light curves serve as a valuable source of information on stellar formation
and evolution. With the rapid advancement of machine learning techniques, they
can be effectively processed to extract astronomical patterns and information.
In this study, we present a comprehensive evaluation of deep-learning and large
language model (LLM) based models for the automatic classification of variable
star light curves, based on large datasets from the Kepler and K2 missions.
Special emphasis is placed on Cepheids, RR Lyrae, and eclipsing binaries,
examining the influence of observational cadence and phase distribution on
classification precision. Employing AutoDL optimization, we achieve striking
performance with the 1D-Convolution+BiLSTM architecture and the Swin
Transformer, reaching accuracies of 94\% and 99\%, respectively, with the
latter demonstrating a notable 83\% accuracy in discerning the elusive Type II
Cepheids, which comprise merely 0.02\% of the total dataset. We unveil
StarWhisper LightCurve (LC), an innovative series comprising three LLM-based
models: LLM,
multimodal large language model (MLLM), and Large Audio Language Model (LALM).
Each model is fine-tuned with strategic prompt engineering and customized
training methods to explore the emergent abilities of these models for
astronomical data. Remarkably, the StarWhisper LC series exhibits high accuracies
around 90\%, significantly reducing the need for explicit feature engineering,
thereby paving the way for streamlined parallel data processing and the
progression of multifaceted multimodal models in astronomical applications. The
study furnishes two detailed catalogs illustrating the impacts of phase and
sampling intervals on deep learning classification accuracy, showing that a
substantial decrease of up to 14\% in observation duration and 21\% in sampling
points can be realized without compromising accuracy by more than 10\%.
|
2404.15786 | Christian Ledig | Sebastian Doerrich, Francesco Di Salvo, Julius Brockmann, Christian
Ledig | Rethinking model prototyping through the MedMNIST+ dataset collection | null | Scientific Reports 15, 7669 (2025) | 10.1038/s41598-025-92156-9 | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The integration of deep learning based systems in clinical practice is often
impeded by challenges rooted in limited and heterogeneous medical datasets. In
addition, the field has increasingly prioritized marginal performance gains on
a few, narrowly scoped benchmarks over clinical applicability, slowing down
meaningful algorithmic progress. This trend often results in excessive
fine-tuning of existing methods on selected datasets rather than fostering
clinically relevant innovations. In response, this work introduces a
comprehensive benchmark for the MedMNIST+ dataset collection, designed to
diversify the evaluation landscape across several imaging modalities,
anatomical regions, classification tasks and sample sizes. We systematically
reassess commonly used Convolutional Neural Networks (CNNs) and Vision
Transformer (ViT) architectures across distinct medical datasets, training
methodologies, and input resolutions to validate and refine existing
assumptions about model effectiveness and development. Our findings suggest
that computationally efficient training schemes and modern foundation models
offer viable alternatives to costly end-to-end training. Additionally, we
observe that higher image resolutions do not consistently improve performance
beyond a certain threshold. This highlights the potential benefits of using
lower resolutions, particularly in prototyping stages, to reduce computational
demands without sacrificing accuracy. Notably, our analysis reaffirms the
competitiveness of CNNs compared to ViTs, emphasizing the importance of
comprehending the intrinsic capabilities of different architectures. Finally,
by establishing a standardized evaluation framework, we aim to enhance
transparency, reproducibility, and comparability within the MedMNIST+ dataset
collection. Code is available at
https://github.com/sdoerrich97/rethinking-model-prototyping-MedMNISTPlus .
| [
{
"version": "v1",
"created": "Wed, 24 Apr 2024 10:19:25 GMT"
},
{
"version": "v2",
"created": "Tue, 7 May 2024 20:49:46 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 12:01:18 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Doerrich",
"Sebastian",
""
],
[
"Di Salvo",
"Francesco",
""
],
[
"Brockmann",
"Julius",
""
],
[
"Ledig",
"Christian",
""
]
] | TITLE: Rethinking model prototyping through the MedMNIST+ dataset collection
ABSTRACT: The integration of deep learning based systems in clinical practice is often
impeded by challenges rooted in limited and heterogeneous medical datasets. In
addition, the field has increasingly prioritized marginal performance gains on
a few, narrowly scoped benchmarks over clinical applicability, slowing down
meaningful algorithmic progress. This trend often results in excessive
fine-tuning of existing methods on selected datasets rather than fostering
clinically relevant innovations. In response, this work introduces a
comprehensive benchmark for the MedMNIST+ dataset collection, designed to
diversify the evaluation landscape across several imaging modalities,
anatomical regions, classification tasks and sample sizes. We systematically
reassess commonly used Convolutional Neural Networks (CNNs) and Vision
Transformer (ViT) architectures across distinct medical datasets, training
methodologies, and input resolutions to validate and refine existing
assumptions about model effectiveness and development. Our findings suggest
that computationally efficient training schemes and modern foundation models
offer viable alternatives to costly end-to-end training. Additionally, we
observe that higher image resolutions do not consistently improve performance
beyond a certain threshold. This highlights the potential benefits of using
lower resolutions, particularly in prototyping stages, to reduce computational
demands without sacrificing accuracy. Notably, our analysis reaffirms the
competitiveness of CNNs compared to ViTs, emphasizing the importance of
comprehending the intrinsic capabilities of different architectures. Finally,
by establishing a standardized evaluation framework, we aim to enhance
transparency, reproducibility, and comparability within the MedMNIST+ dataset
collection. Code is available at
https://github.com/sdoerrich97/rethinking-model-prototyping-MedMNISTPlus .
|
2404.16367 | Kabir Ahuja | Kabir Ahuja, Vidhisha Balachandran, Madhur Panwar, Tianxing He, Noah
A. Smith, Navin Goyal, Yulia Tsvetkov | Learning Syntax Without Planting Trees: Understanding Hierarchical
Generalization in Transformers | Accepted in TACL Code now available:
https://github.com/kabirahuja2431/transformers-hg | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Transformers trained on natural language data have been shown to learn its
hierarchical structure and generalize to sentences with unseen syntactic
structures without explicitly encoding any structural bias. In this work, we
investigate sources of inductive bias in transformer models and their training
that could cause such generalization behavior to emerge. We extensively
experiment with transformer models trained on multiple synthetic datasets and
with different training objectives and show that while other objectives, e.g.,
sequence-to-sequence modeling and prefix language modeling, often failed to lead
to hierarchical generalization, models trained with the language modeling
objective consistently learned to generalize hierarchically. We then conduct
pruning experiments to study how transformers trained with the language
modeling objective encode hierarchical structure. Upon pruning, we find the joint
existence of subnetworks within the model with different generalization
behaviors (subnetworks corresponding to hierarchical structure and linear
order). Finally, we take a Bayesian perspective to further uncover
transformers' preference for hierarchical generalization: We establish a
correlation between whether transformers generalize hierarchically on a dataset
and whether the simplest explanation of that dataset is provided by a
hierarchical grammar compared to regular grammars exhibiting linear
generalization.
| [
{
"version": "v1",
"created": "Thu, 25 Apr 2024 07:10:29 GMT"
},
{
"version": "v2",
"created": "Fri, 31 May 2024 23:47:15 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 05:23:12 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ahuja",
"Kabir",
""
],
[
"Balachandran",
"Vidhisha",
""
],
[
"Panwar",
"Madhur",
""
],
[
"He",
"Tianxing",
""
],
[
"Smith",
"Noah A.",
""
],
[
"Goyal",
"Navin",
""
],
[
"Tsvetkov",
"Yulia",
""
]
] | TITLE: Learning Syntax Without Planting Trees: Understanding Hierarchical
Generalization in Transformers
ABSTRACT: Transformers trained on natural language data have been shown to learn its
hierarchical structure and generalize to sentences with unseen syntactic
structures without explicitly encoding any structural bias. In this work, we
investigate sources of inductive bias in transformer models and their training
that could cause such generalization behavior to emerge. We extensively
experiment with transformer models trained on multiple synthetic datasets and
with different training objectives and show that while other objectives, e.g.,
sequence-to-sequence modeling and prefix language modeling, often failed to lead
to hierarchical generalization, models trained with the language modeling
objective consistently learned to generalize hierarchically. We then conduct
pruning experiments to study how transformers trained with the language
modeling objective encode hierarchical structure. Upon pruning, we find the joint
existence of subnetworks within the model with different generalization
behaviors (subnetworks corresponding to hierarchical structure and linear
order). Finally, we take a Bayesian perspective to further uncover
transformers' preference for hierarchical generalization: We establish a
correlation between whether transformers generalize hierarchically on a dataset
and whether the simplest explanation of that dataset is provided by a
hierarchical grammar compared to regular grammars exhibiting linear
generalization.
|
2404.16820 | Chuhan Zhang | Olivia Wiles, Chuhan Zhang, Isabela Albuquerque, Ivana Kaji\'c, Su
Wang, Emanuele Bugliarello, Yasumasa Onoe, Pinelopi Papalampidi, Ira Ktena,
Chris Knutsen, Cyrus Rashtchian, Anant Nawalgaria, Jordi Pont-Tuset, Aida
Nematzadeh | Revisiting Text-to-Image Evaluation with Gecko: On Metrics, Prompts, and
Human Ratings | Accepted to ICLR 2025 (Spotlight) | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | While text-to-image (T2I) generative models have become ubiquitous, they do
not necessarily generate images that align with a given prompt. While previous
work has evaluated T2I alignment by proposing metrics, benchmarks, and
templates for collecting human judgements, the quality of these components is
not systematically measured. Human-rated prompt sets are generally small and
the reliability of the ratings -- and thereby the prompt set used to compare
models -- is not evaluated. We address this gap by performing an extensive
study evaluating auto-eval metrics and human templates. We provide three main
contributions: (1) We introduce a comprehensive skills-based benchmark that can
discriminate models across different human templates. This skills-based
benchmark categorises prompts into sub-skills, allowing a practitioner to
pinpoint not only which skills are challenging, but at what level of complexity
a skill becomes challenging. (2) We gather human ratings across four templates
and four T2I models for a total of >100K annotations. This allows us to
understand where differences arise due to inherent ambiguity in the prompt and
where they arise due to differences in metric and model quality. (3) Finally,
we introduce a new QA-based auto-eval metric that is better correlated with
human ratings than existing metrics for our new dataset, across different human
templates, and on TIFA160.
| [
{
"version": "v1",
"created": "Thu, 25 Apr 2024 17:58:43 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Feb 2025 21:18:48 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 22:41:18 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Mar 2025 15:53:14 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wiles",
"Olivia",
""
],
[
"Zhang",
"Chuhan",
""
],
[
"Albuquerque",
"Isabela",
""
],
[
"Kajić",
"Ivana",
""
],
[
"Wang",
"Su",
""
],
[
"Bugliarello",
"Emanuele",
""
],
[
"Onoe",
"Yasumasa",
""
],
[
"Papalampidi",
"Pinelopi",
""
],
[
"Ktena",
"Ira",
""
],
[
"Knutsen",
"Chris",
""
],
[
"Rashtchian",
"Cyrus",
""
],
[
"Nawalgaria",
"Anant",
""
],
[
"Pont-Tuset",
"Jordi",
""
],
[
"Nematzadeh",
"Aida",
""
]
] | TITLE: Revisiting Text-to-Image Evaluation with Gecko: On Metrics, Prompts, and
Human Ratings
ABSTRACT: While text-to-image (T2I) generative models have become ubiquitous, they do
not necessarily generate images that align with a given prompt. While previous
work has evaluated T2I alignment by proposing metrics, benchmarks, and
templates for collecting human judgements, the quality of these components is
not systematically measured. Human-rated prompt sets are generally small and
the reliability of the ratings -- and thereby the prompt set used to compare
models -- is not evaluated. We address this gap by performing an extensive
study evaluating auto-eval metrics and human templates. We provide three main
contributions: (1) We introduce a comprehensive skills-based benchmark that can
discriminate models across different human templates. This skills-based
benchmark categorises prompts into sub-skills, allowing a practitioner to
pinpoint not only which skills are challenging, but at what level of complexity
a skill becomes challenging. (2) We gather human ratings across four templates
and four T2I models for a total of >100K annotations. This allows us to
understand where differences arise due to inherent ambiguity in the prompt and
where they arise due to differences in metric and model quality. (3) Finally,
we introduce a new QA-based auto-eval metric that is better correlated with
human ratings than existing metrics for our new dataset, across different human
templates, and on TIFA160.
|
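The Gecko record above introduces a QA-based auto-eval metric. The general recipe of that family of metrics (decompose the prompt into questions, query a VQA model, aggregate the answers) can be sketched as below; the question list and VQA backend are stand-ins for components the paper builds with LLMs and are not its actual implementation:

```python
# Generic sketch of a QA-based text-to-image alignment score: average the
# probability that a VQA backend answers "yes" to prompt-derived questions.
# `vqa_yes_probability` is a hypothetical callable, not the paper's API.
from typing import Callable, Sequence

def qa_alignment_score(
    questions: Sequence[str],
    image,  # whatever image handle your VQA backend accepts
    vqa_yes_probability: Callable[[object, str], float],
) -> float:
    """Mean probability that the image satisfies each prompt-derived question."""
    if not questions:
        return 0.0
    return sum(vqa_yes_probability(image, q) for q in questions) / len(questions)

# Toy usage with a dummy backend that always answers 0.8:
score = qa_alignment_score(
    ["Is there a red cube?", "Is the cube on a table?"],
    image=None,
    vqa_yes_probability=lambda img, q: 0.8,
)
print(score)  # 0.8
```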
2404.17092 | Weiran Chen | Weiran Chen, Qi Xu | Robust and Efficient Adversarial Defense in SNNs via Image Purification
and Joint Detection | null | null | 10.1109/ICASSP49660.2025.10888581 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spiking Neural Networks (SNNs) aim to bridge the gap between neuroscience and
machine learning by emulating the structure of the human nervous system.
However, like convolutional neural networks, SNNs are vulnerable to adversarial
attacks. To tackle the challenge, we propose a biologically inspired
methodology to enhance the robustness of SNNs, drawing insights from the visual
masking effect and filtering theory. First, an end-to-end SNN-based image
purification model is proposed to defend against adversarial attacks, including
a noise extraction network and a non-blind denoising network. The former
network extracts noise features from noisy images, while the latter component
employs a residual U-Net structure to reconstruct high-quality noisy images and
generate clean images. Simultaneously, a multi-level firing SNN based on
Squeeze-and-Excitation Network is introduced to improve the robustness of the
classifier. Crucially, the proposed image purification network serves as a
pre-processing module, avoiding modifications to classifiers. Unlike
adversarial training, our method is highly flexible and can be seamlessly
integrated with other defense strategies. Experimental results on various
datasets demonstrate that the proposed methodology outperforms state-of-the-art
baselines in terms of defense effectiveness, training time, and resource
consumption.
| [
{
"version": "v1",
"created": "Fri, 26 Apr 2024 00:57:06 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 05:06:12 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chen",
"Weiran",
""
],
[
"Xu",
"Qi",
""
]
] | TITLE: Robust and Efficient Adversarial Defense in SNNs via Image Purification
and Joint Detection
ABSTRACT: Spiking Neural Networks (SNNs) aim to bridge the gap between neuroscience and
machine learning by emulating the structure of the human nervous system.
However, like convolutional neural networks, SNNs are vulnerable to adversarial
attacks. To tackle the challenge, we propose a biologically inspired
methodology to enhance the robustness of SNNs, drawing insights from the visual
masking effect and filtering theory. First, an end-to-end SNN-based image
purification model is proposed to defend against adversarial attacks, including
a noise extraction network and a non-blind denoising network. The former
network extracts noise features from noisy images, while the latter component
employs a residual U-Net structure to reconstruct high-quality noisy images and
generate clean images. Simultaneously, a multi-level firing SNN based on
Squeeze-and-Excitation Network is introduced to improve the robustness of the
classifier. Crucially, the proposed image purification network serves as a
pre-processing module, avoiding modifications to classifiers. Unlike
adversarial training, our method is highly flexible and can be seamlessly
integrated with other defense strategies. Experimental results on various
datasets demonstrate that the proposed methodology outperforms state-of-the-art
baselines in terms of defense effectiveness, training time, and resource
consumption.
|
2405.00604 | Theodor Westny Mr | Theodor Westny and Bj\"orn Olofsson and Erik Frisk | Toward Unified Practices in Trajectory Prediction Research on Drone
Datasets | https://github.com/westny/dronalize | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The availability of high-quality datasets is crucial for the development of
behavior prediction algorithms in autonomous vehicles. This paper highlights
the need to standardize the use of certain datasets for motion forecasting
research to simplify comparative analysis and proposes a set of tools and
practices to achieve this. Drawing on extensive experience and a comprehensive
review of current literature, we summarize our proposals for preprocessing,
visualization, and evaluation in the form of an open-sourced toolbox designed
for researchers working on trajectory prediction problems. The clear
specification of necessary preprocessing steps and evaluation metrics is
intended to alleviate development efforts and facilitate the comparison of
results across different studies. The toolbox is available at:
https://github.com/westny/dronalize.
| [
{
"version": "v1",
"created": "Wed, 1 May 2024 16:17:39 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Sep 2024 09:18:59 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Mar 2025 22:13:49 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Westny",
"Theodor",
""
],
[
"Olofsson",
"Björn",
""
],
[
"Frisk",
"Erik",
""
]
] | TITLE: Toward Unified Practices in Trajectory Prediction Research on Drone
Datasets
ABSTRACT: The availability of high-quality datasets is crucial for the development of
behavior prediction algorithms in autonomous vehicles. This paper highlights
the need to standardize the use of certain datasets for motion forecasting
research to simplify comparative analysis and proposes a set of tools and
practices to achieve this. Drawing on extensive experience and a comprehensive
review of current literature, we summarize our proposals for preprocessing,
visualization, and evaluation in the form of an open-sourced toolbox designed
for researchers working on trajectory prediction problems. The clear
specification of necessary preprocessing steps and evaluation metrics is
intended to alleviate development efforts and facilitate the comparison of
results across different studies. The toolbox is available at:
https://github.com/westny/dronalize.
|
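The record above argues for standardized preprocessing and evaluation in trajectory prediction. As a small illustration of the kind of metrics such a toolbox standardizes, here are the displacement errors that are conventional in this literature, written generically rather than against the dronalize API:

```python
# Conventional trajectory-prediction metrics (ADE/FDE), shown as a generic
# reference implementation; shapes and names are illustrative.
import numpy as np

def ade(pred: np.ndarray, gt: np.ndarray) -> float:
    """Average Displacement Error over all timesteps; arrays are (T, 2)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def fde(pred: np.ndarray, gt: np.ndarray) -> float:
    """Final Displacement Error at the last predicted timestep."""
    return float(np.linalg.norm(pred[-1] - gt[-1]))

pred = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.3]])
gt   = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(ade(pred, gt), fde(pred, gt))
```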
2405.01217 | Chenying Liu | Chenying Liu, Conrad Albrecht, Yi Wang, Xiao Xiang Zhu | CromSS: Cross-modal pre-training with noisy labels for remote sensing
image segmentation | The 1st short version was accepted as an oral presentation by ICLR
2024 ML4RS workshop. The 2nd extended version was accepted by IEEE TGRS | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore the potential of large-scale noisily labeled data to enhance
feature learning by pretraining semantic segmentation models within a
multi-modal framework for geospatial applications. We propose a novel
Cross-modal Sample Selection (CromSS) method, a weakly supervised pretraining
strategy designed to improve feature representations through cross-modal
consistency and noise mitigation techniques. Unlike conventional pretraining
approaches, CromSS exploits massive amounts of noisy and easy-to-come-by labels
for improved feature learning beneficial to semantic segmentation tasks. We
investigate middle and late fusion strategies to optimize the multi-modal
pretraining architecture design. We also introduce a cross-modal sample
selection module to mitigate the adverse effects of label noise, which employs
a cross-modal entangling strategy to refine the estimated confidence masks
within each modality to guide the sampling process. Additionally, we introduce
a spatial-temporal label smoothing technique to counteract overconfidence for
enhanced robustness against noisy labels. To validate our approach, we
assembled the multi-modal dataset, NoLDO-S12, which consists of a large-scale
noisy label subset from Google's Dynamic World (DW) dataset for pretraining and
two downstream subsets with high-quality labels from Google DW and
OpenStreetMap (OSM) for transfer learning. Experimental results on two
downstream tasks and the publicly available DFC2020 dataset demonstrate that
when effectively utilized, the low-cost noisy labels can significantly enhance
feature learning for segmentation tasks. All data, code, and pretrained weights
will be made publicly available.
| [
{
"version": "v1",
"created": "Thu, 2 May 2024 11:58:06 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 07:38:09 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 07:26:04 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Chenying",
""
],
[
"Albrecht",
"Conrad",
""
],
[
"Wang",
"Yi",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] | TITLE: CromSS: Cross-modal pre-training with noisy labels for remote sensing
image segmentation
ABSTRACT: We explore the potential of large-scale noisily labeled data to enhance
feature learning by pretraining semantic segmentation models within a
multi-modal framework for geospatial applications. We propose a novel
Cross-modal Sample Selection (CromSS) method, a weakly supervised pretraining
strategy designed to improve feature representations through cross-modal
consistency and noise mitigation techniques. Unlike conventional pretraining
approaches, CromSS exploits massive amounts of noisy and easy-to-come-by labels
for improved feature learning beneficial to semantic segmentation tasks. We
investigate middle and late fusion strategies to optimize the multi-modal
pretraining architecture design. We also introduce a cross-modal sample
selection module to mitigate the adverse effects of label noise, which employs
a cross-modal entangling strategy to refine the estimated confidence masks
within each modality to guide the sampling process. Additionally, we introduce
a spatial-temporal label smoothing technique to counteract overconfidence for
enhanced robustness against noisy labels. To validate our approach, we
assembled the multi-modal dataset, NoLDO-S12, which consists of a large-scale
noisy label subset from Google's Dynamic World (DW) dataset for pretraining and
two downstream subsets with high-quality labels from Google DW and
OpenStreetMap (OSM) for transfer learning. Experimental results on two
downstream tasks and the publicly available DFC2020 dataset demonstrate that
when effectively utilized, the low-cost noisy labels can significantly enhance
feature learning for segmentation tasks. All data, code, and pretrained weights
will be made publicly available.
|
2405.10948 | Guankun Wang | Guankun Wang, Long Bai, Wan Jun Nah, Jie Wang, Zhaoxi Zhang, Zhen
Chen, Jinlin Wu, Mobarakol Islam, Hongbin Liu, and Hongliang Ren | Surgical-LVLM: Learning to Adapt Large Vision-Language Model for
Grounded Visual Question Answering in Robotic Surgery | The manuscript is accepted by ICLR 2025 FM-Wild Workshop | null | null | null | cs.CV cs.AI cs.RO eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in Surgical Visual Question Answering (Surgical-VQA) and
related region grounding have shown great promise for robotic and medical
applications, addressing the critical need for automated methods in
personalized surgical mentorship. However, existing models primarily provide
simple structured answers and struggle with complex scenarios due to their
limited capability in recognizing long-range dependencies and aligning
multimodal information. In this paper, we introduce Surgical-LVLM, a novel
personalized large vision-language model tailored for complex surgical
scenarios. Leveraging the pre-trained large vision-language model and
specialized Visual Perception LoRA (VP-LoRA) blocks, our model excels in
understanding complex visual-language tasks within surgical contexts. In
addressing the visual grounding task, we propose the Token-Interaction (TIT)
module, which strengthens the interaction between the grounding module and the
language responses of the Large Visual Language Model (LVLM) after projecting
them into the latent space. We demonstrate the effectiveness of Surgical-LVLM
on several benchmarks, including EndoVis-17-VQLA, EndoVis-18-VQLA, and a newly
introduced EndoVis Conversations dataset, which sets new performance standards.
Our work contributes to advancing the field of automated surgical mentorship by
providing a context-aware solution.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2024 08:38:27 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 01:02:22 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 02:23:30 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Guankun",
""
],
[
"Bai",
"Long",
""
],
[
"Nah",
"Wan Jun",
""
],
[
"Wang",
"Jie",
""
],
[
"Zhang",
"Zhaoxi",
""
],
[
"Chen",
"Zhen",
""
],
[
"Wu",
"Jinlin",
""
],
[
"Islam",
"Mobarakol",
""
],
[
"Liu",
"Hongbin",
""
],
[
"Ren",
"Hongliang",
""
]
] | TITLE: Surgical-LVLM: Learning to Adapt Large Vision-Language Model for
Grounded Visual Question Answering in Robotic Surgery
ABSTRACT: Recent advancements in Surgical Visual Question Answering (Surgical-VQA) and
related region grounding have shown great promise for robotic and medical
applications, addressing the critical need for automated methods in
personalized surgical mentorship. However, existing models primarily provide
simple structured answers and struggle with complex scenarios due to their
limited capability in recognizing long-range dependencies and aligning
multimodal information. In this paper, we introduce Surgical-LVLM, a novel
personalized large vision-language model tailored for complex surgical
scenarios. Leveraging the pre-trained large vision-language model and
specialized Visual Perception LoRA (VP-LoRA) blocks, our model excels in
understanding complex visual-language tasks within surgical contexts. In
addressing the visual grounding task, we propose the Token-Interaction (TIT)
module, which strengthens the interaction between the grounding module and the
language responses of the Large Visual Language Model (LVLM) after projecting
them into the latent space. We demonstrate the effectiveness of Surgical-LVLM
on several benchmarks, including EndoVis-17-VQLA, EndoVis-18-VQLA, and a newly
introduced EndoVis Conversations dataset, which sets new performance standards.
Our work contributes to advancing the field of automated surgical mentorship by
providing a context-aware solution.
|
2405.16868 | Tianhang Wang | Tianhang Wang, Fan Lu, Zehan Zheng, Zhijun Li, Guang Chen, Changjun
Jiang | RCDN: Towards Robust Camera-Insensitivity Collaborative Perception via
Dynamic Feature-based 3D Neural Modeling | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Collaborative perception is dedicated to tackling the constraints of
single-agent perception, such as occlusions, based on the multiple agents'
multi-view sensor inputs. However, most existing works assume an ideal
condition that all agents' multi-view cameras are continuously available. In
reality, cameras may be highly noisy, obscured, or even fail during the
collaboration. In this work, we introduce a new robust camera-insensitivity
problem: how to overcome the issues caused by the failed camera perspectives,
while stabilizing high collaborative performance with low calibration cost? To
address the above problems, we propose RCDN, a Robust Camera-insensitivity
collaborative perception with a novel Dynamic feature-based 3D Neural modeling
mechanism. The key intuition of RCDN is to construct collaborative neural
rendering field representations to recover failed perceptual messages sent by
multiple agents. To better model the collaborative neural rendering field, RCDN
first establishes a geometry BEV feature-based time-invariant static field with
other agents via fast hash grid modeling. Based on the static background field,
the proposed time-varying dynamic field can model corresponding motion vectors
for foregrounds with appropriate positions. To validate RCDN, we create
OPV2V-N, a new large-scale dataset with manual labelling under different
camera-failure scenarios. Extensive experiments conducted on OPV2V-N show that
RCDN can
be ported to other baselines and improve their robustness in extreme
camera-insensitivity settings.
| [
{
"version": "v1",
"created": "Mon, 27 May 2024 06:35:55 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 06:27:08 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Tianhang",
""
],
[
"Lu",
"Fan",
""
],
[
"Zheng",
"Zehan",
""
],
[
"Li",
"Zhijun",
""
],
[
"Chen",
"Guang",
""
],
[
"Jiang",
"Changjun",
""
]
] | TITLE: RCDN: Towards Robust Camera-Insensitivity Collaborative Perception via
Dynamic Feature-based 3D Neural Modeling
ABSTRACT: Collaborative perception is dedicated to tackling the constraints of
single-agent perception, such as occlusions, based on the multiple agents'
multi-view sensor inputs. However, most existing works assume an ideal
condition that all agents' multi-view cameras are continuously available. In
reality, cameras may be highly noisy, obscured, or even fail during the
collaboration. In this work, we introduce a new robust camera-insensitivity
problem: how to overcome the issues caused by the failed camera perspectives,
while stabilizing high collaborative performance with low calibration cost? To
address the above problems, we propose RCDN, a Robust Camera-insensitivity
collaborative perception with a novel Dynamic feature-based 3D Neural modeling
mechanism. The key intuition of RCDN is to construct collaborative neural
rendering field representations to recover failed perceptual messages sent by
multiple agents. To better model the collaborative neural rendering field, RCDN
first establishes a geometry BEV feature-based time-invariant static field with
other agents via fast hash grid modeling. Based on the static background field,
the proposed time-varying dynamic field can model corresponding motion vectors
for foregrounds with appropriate positions. To validate RCDN, we create
OPV2V-N, a new large-scale dataset with manual labelling under different
camera-failure scenarios. Extensive experiments conducted on OPV2V-N show that
RCDN can
be ported to other baselines and improve their robustness in extreme
camera-insensitivity settings.
|
2405.17035 | Harshit Varma | Harshit Varma, Dheeraj Nagaraj, Karthikeyan Shanmugam | Glauber Generative Model: Discrete Diffusion Models via Binary
Classification | ICLR 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the Glauber Generative Model (GGM), a new class of discrete
diffusion models, to obtain new samples from a distribution given samples from
a discrete space. GGM deploys a discrete Markov chain called the heat bath
dynamics (or the Glauber dynamics) to denoise a sequence of noisy tokens to a
sample from a joint distribution of discrete tokens. Our novel conceptual
framework provides an exact reduction of the task of learning the denoising
Markov chain to solving a class of binary classification tasks. More
specifically, the model learns to classify a given token in a noisy sequence as
signal or noise. In contrast, prior works on discrete diffusion models either
solve regression problems to learn importance ratios, or minimize loss
functions given by variational approximations. We apply GGM to language
modeling and image generation, where images are discretized using image
tokenizers like VQGANs. We show that it outperforms existing discrete diffusion
models in language generation, and demonstrates strong performance for image
generation without using dataset-specific image tokenizers. We also show that
our model is capable of performing well in zero-shot control settings like text
and image infilling.
| [
{
"version": "v1",
"created": "Mon, 27 May 2024 10:42:13 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Jun 2024 05:09:57 GMT"
},
{
"version": "v3",
"created": "Tue, 27 Aug 2024 13:05:33 GMT"
},
{
"version": "v4",
"created": "Sun, 16 Mar 2025 09:13:20 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Varma",
"Harshit",
""
],
[
"Nagaraj",
"Dheeraj",
""
],
[
"Shanmugam",
"Karthikeyan",
""
]
] | TITLE: Glauber Generative Model: Discrete Diffusion Models via Binary
Classification
ABSTRACT: We introduce the Glauber Generative Model (GGM), a new class of discrete
diffusion models, to obtain new samples from a distribution given samples from
a discrete space. GGM deploys a discrete Markov chain called the heat bath
dynamics (or the Glauber dynamics) to denoise a sequence of noisy tokens to a
sample from a joint distribution of discrete tokens. Our novel conceptual
framework provides an exact reduction of the task of learning the denoising
Markov chain to solving a class of binary classification tasks. More
specifically, the model learns to classify a given token in a noisy sequence as
signal or noise. In contrast, prior works on discrete diffusion models either
solve regression problems to learn importance ratios, or minimize loss
functions given by variational approximations. We apply GGM to language
modeling and image generation, where images are discretized using image
tokenizers like VQGANs. We show that it outperforms existing discrete diffusion
models in language generation, and demonstrates strong performance for image
generation without using dataset-specific image tokenizers. We also show that
our model is capable of performing well in zero-shot control settings like text
and image infilling.
|
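The GGM record above reduces learning the denoising chain to binary classification: decide whether a token in a noisy sequence is signal or noise. A schematic training step for a simplified per-position variant of that task is sketched below; the tiny transformer, corruption rate, and loss wiring are placeholders rather than the paper's architecture:

```python
# Schematic training step for the binary-classification view: corrupt some
# positions of a token sequence with random noise tokens, then train a model to
# label each position as signal (kept) or noise (replaced).
# Architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

vocab_size, seq_len, batch = 256, 32, 8
model = nn.Sequential(
    nn.Embedding(vocab_size, 64),
    nn.TransformerEncoder(nn.TransformerEncoderLayer(64, 4, batch_first=True), 2),
    nn.Linear(64, 1),  # per-token logit: signal vs. noise
)

tokens = torch.randint(0, vocab_size, (batch, seq_len))
noise_mask = torch.rand(batch, seq_len) < 0.3            # positions to corrupt
noise = torch.randint(0, vocab_size, (batch, seq_len))
noisy = torch.where(noise_mask, noise, tokens)

logits = model(noisy).squeeze(-1)                        # (batch, seq_len)
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, noise_mask.float())                          # 1 = noise, 0 = signal
loss.backward()
```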
2405.18684 | Mohammadjavad Matinkia | Mohammadjavad Matinkia, Nilanjan Ray | Learning Diffeomorphism for Image Registration with Time-Continuous
Networks using Semigroup Regularization | 27 pages, 11 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Diffeomorphic image registration (DIR) is a fundamental task in 3D medical
image analysis that seeks topology-preserving deformations between image pairs.
To ensure diffeomorphism, a common approach is to model the deformation field
as the flow map solution of a differential equation, which is solved using
efficient schemes such as scaling and squaring along with multiple smoothness
regularization terms. In this paper, we propose a novel learning-based approach
for diffeomorphic 3D image registration that models diffeomorphisms in a
continuous-time framework using only a single regularization term, without
requiring additional integration. We exploit the semigroup property, a
fundamental characteristic of flow maps, as the sole form of regularization,
ensuring temporally continuous diffeomorphic flows between image pairs.
Leveraging this property, we prove that our formulation directly learns the
flow map solution of an ODE, ensuring continuous inverse and cycle
consistencies without explicit enforcement, while eliminating additional
integration schemes and regularization terms. To achieve time-continuous
diffeomorphisms, we employ time-embedded UNets, an architecture commonly used
in diffusion models. Our results demonstrate that modeling diffeomorphism
continuously in time improves registration performance. Experimental results on
four public datasets demonstrate the superiority of our model over
state-of-the-art diffeomorphic methods. Additionally, comparison to several
recent non-diffeomorphic deformable image registration methods shows that our
method achieves competitive Dice scores while significantly improving topology
preservation.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 01:25:43 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Nov 2024 04:13:08 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 21:22:43 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Matinkia",
"Mohammadjavad",
""
],
[
"Ray",
"Nilanjan",
""
]
] | TITLE: Learning Diffeomorphism for Image Registration with Time-Continuous
Networks using Semigroup Regularization
ABSTRACT: Diffeomorphic image registration (DIR) is a fundamental task in 3D medical
image analysis that seeks topology-preserving deformations between image pairs.
To ensure diffeomorphism, a common approach is to model the deformation field
as the flow map solution of a differential equation, which is solved using
efficient schemes such as scaling and squaring along with multiple smoothness
regularization terms. In this paper, we propose a novel learning-based approach
for diffeomorphic 3D image registration that models diffeomorphisms in a
continuous-time framework using only a single regularization term, without
requiring additional integration. We exploit the semigroup property, a
fundamental characteristic of flow maps, as the sole form of regularization,
ensuring temporally continuous diffeomorphic flows between image pairs.
Leveraging this property, we prove that our formulation directly learns the
flow map solution of an ODE, ensuring continuous inverse and cycle
consistencies without explicit enforcement, while eliminating additional
integration schemes and regularization terms. To achieve time-continuous
diffeomorphisms, we employ time-embedded UNets, an architecture commonly used
in diffusion models. Our results demonstrate that modeling diffeomorphism
continuously in time improves registration performance. Experimental results on
four public datasets demonstrate the superiority of our model over
state-of-the-art diffeomorphic methods. Additionally, comparison to several
recent non-diffeomorphic deformable image registration methods shows that our
method achieves competitive Dice scores while significantly improving topology
preservation.
|
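The registration record above uses the semigroup property of flow maps as its only regularizer. In illustrative notation (not necessarily the paper's exact loss), the property and a schematic penalty enforcing it on a learned flow phi_t read:

```latex
% Semigroup property of a flow map \phi_t, and a schematic penalty enforcing it.
% Notation is illustrative; the paper's exact formulation may differ.
\phi_{t+s} = \phi_t \circ \phi_s , \qquad \phi_0 = \mathrm{id}
\mathcal{L}_{\text{semigroup}}
  = \mathbb{E}_{t,s,\mathbf{x}}
    \left\lVert \phi_{t+s}(\mathbf{x}) - \phi_t\bigl(\phi_s(\mathbf{x})\bigr) \right\rVert_2^2
```

Enforcing this single identity over sampled times t and s is what, per the abstract, yields continuous inverse and cycle consistencies without additional integration schemes or regularization terms.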
2406.00430 | Jianxiang Feng | Zhi Zheng, Qian Feng, Hang Li, Alois Knoll, Jianxiang Feng | Evaluating Uncertainty-based Failure Detection for Closed-Loop LLM
Planners | Accepted at ICRA 2024 Workshop on Back to the Future: Robot Learning
Going Probabilistic. Website: https://sites.google.com/view/konwloop/home | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recently, Large Language Models (LLMs) have witnessed remarkable performance
as zero-shot task planners for robotic manipulation tasks. However, the
open-loop nature of previous works makes LLM-based planning error-prone and
fragile. On the other hand, failure detection approaches for closed-loop
planning are often limited by task-specific heuristics or follow an
unrealistic assumption that the prediction is trustworthy all the time. As a
general-purpose reasoning machine, LLMs or Multimodal Large Language Models
(MLLMs) are promising for detecting failures. However, the appropriateness of
the aforementioned assumption diminishes due to the notorious hallucination
problem. In this work, we attempt to mitigate these
issues by introducing a framework for closed-loop LLM-based planning called
KnowLoop, backed by an uncertainty-based MLLM failure detector, which is
agnostic to the MLLM or LLM used. Specifically, we evaluate three different
ways for quantifying the uncertainty of MLLMs, namely token probability,
entropy, and self-explained confidence as primary metrics based on three
carefully designed representative prompting strategies. With a self-collected
dataset including various manipulation tasks and an LLM-based robot system, our
experiments demonstrate that token probability and entropy are more reflective
than self-explained confidence. By setting an appropriate threshold to
filter out uncertain predictions and seek human help actively, the accuracy of
failure detection can be significantly enhanced. This improvement boosts the
effectiveness of closed-loop planning and the overall success rate of tasks.
| [
{
"version": "v1",
"created": "Sat, 1 Jun 2024 12:52:06 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 17:21:09 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zheng",
"Zhi",
""
],
[
"Feng",
"Qian",
""
],
[
"Li",
"Hang",
""
],
[
"Knoll",
"Alois",
""
],
[
"Feng",
"Jianxiang",
""
]
] | TITLE: Evaluating Uncertainty-based Failure Detection for Closed-Loop LLM
Planners
ABSTRACT: Recently, Large Language Models (LLMs) have witnessed remarkable performance
as zero-shot task planners for robotic manipulation tasks. However, the
open-loop nature of previous works makes LLM-based planning error-prone and
fragile. On the other hand, failure detection approaches for closed-loop
planning are often limited by task-specific heuristics or by the unrealistic
assumption that the prediction is trustworthy all the time. As general-purpose
reasoning machines, LLMs or Multimodal Large Language Models (MLLMs) are
promising for detecting failures. However, the appropriateness of the
aforementioned assumption diminishes due to the notorious hallucination
problem. In this work, we attempt to mitigate these issues by introducing a
framework for closed-loop LLM-based planning called KnowLoop, backed by an
uncertainty-based MLLM failure detector that is agnostic to the specific MLLM
or LLM used. Specifically, we evaluate three different
ways for quantifying the uncertainty of MLLMs, namely token probability,
entropy, and self-explained confidence as primary metrics based on three
carefully designed representative prompting strategies. With a self-collected
dataset including various manipulation tasks and an LLM-based robot system, our
experiments demonstrate that token probability and entropy are more reflective
of failures than self-explained confidence. By setting an appropriate threshold to
filter out uncertain predictions and seek human help actively, the accuracy of
failure detection can be significantly enhanced. This improvement boosts the
effectiveness of closed-loop planning and the overall success rate of tasks.
|
2406.02923 | Malyaban Bal | Malyaban Bal and Abhronil Sengupta | P-SpikeSSM: Harnessing Probabilistic Spiking State Space Models for
Long-Range Dependency Tasks | Accepted at ICLR 2025 | null | null | null | cs.NE | http://creativecommons.org/licenses/by/4.0/ | Spiking neural networks (SNNs) are posited as a computationally efficient and
biologically plausible alternative to conventional neural architectures, with
their core computational framework primarily using the leaky integrate-and-fire
(LIF) neuron model. However, the limited hidden state representation of LIF
neurons, characterized by a scalar membrane potential, and their sequential
spike generation process pose challenges for effectively developing scalable
spiking models to address long-range dependencies in sequence learning tasks.
In this study, we develop a scalable probabilistic spiking learning framework
for long-range dependency tasks leveraging the fundamentals of state space
models. Unlike LIF neurons that rely on the deterministic Heaviside function
for a sequential process of spike generation, we introduce a SpikeSampler layer
that samples spikes stochastically based on an SSM-based neuronal model while
allowing parallel computations. To address non-differentiability of the spiking
operation and enable effective training, we also propose a surrogate function
tailored for the stochastic nature of the SpikeSampler layer. To enhance
inter-neuron communication, we introduce the SpikeMixer block, which integrates
spikes from neuron populations in each layer. This is followed by a ClampFuse
layer, incorporating a residual connection to capture complex dependencies,
enabling scalability of the model. Our models attain state-of-the-art
performance among SNN models across diverse long-range dependency tasks,
encompassing the Long Range Arena benchmark, permuted sequential MNIST, and the
Speech Command dataset, and demonstrate sparse spiking patterns, highlighting
their computational efficiency.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2024 04:23:11 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Oct 2024 18:55:14 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Feb 2025 18:44:10 GMT"
},
{
"version": "v4",
"created": "Mon, 3 Mar 2025 06:02:04 GMT"
},
{
"version": "v5",
"created": "Mon, 17 Mar 2025 01:02:29 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Bal",
"Malyaban",
""
],
[
"Sengupta",
"Abhronil",
""
]
] | TITLE: P-SpikeSSM: Harnessing Probabilistic Spiking State Space Models for
Long-Range Dependency Tasks
ABSTRACT: Spiking neural networks (SNNs) are posited as a computationally efficient and
biologically plausible alternative to conventional neural architectures, with
their core computational framework primarily using the leaky integrate-and-fire
(LIF) neuron model. However, the limited hidden state representation of LIF
neurons, characterized by a scalar membrane potential, and their sequential
spike generation process pose challenges for effectively developing scalable
spiking models to address long-range dependencies in sequence learning tasks.
In this study, we develop a scalable probabilistic spiking learning framework
for long-range dependency tasks leveraging the fundamentals of state space
models. Unlike LIF neurons that rely on the deterministic Heaviside function
for a sequential process of spike generation, we introduce a SpikeSampler layer
that samples spikes stochastically based on an SSM-based neuronal model while
allowing parallel computations. To address non-differentiability of the spiking
operation and enable effective training, we also propose a surrogate function
tailored for the stochastic nature of the SpikeSampler layer. To enhance
inter-neuron communication, we introduce the SpikeMixer block, which integrates
spikes from neuron populations in each layer. This is followed by a ClampFuse
layer, incorporating a residual connection to capture complex dependencies,
enabling scalability of the model. Our models attain state-of-the-art
performance among SNN models across diverse long-range dependency tasks,
encompassing the Long Range Arena benchmark, permuted sequential MNIST, and the
Speech Command dataset, and demonstrate sparse spiking patterns, highlighting
their computational efficiency.
|
2406.04419 | Md Atik Ahamed | Md Atik Ahamed, Qiang Cheng | TSCMamba: Mamba Meets Multi-View Learning for Time Series Classification | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Multivariate time series classification (TSC) is critical for various
applications in fields such as healthcare and finance. While various approaches
for TSC have been explored, important properties of time series, such as shift
equivariance and inversion invariance, are largely underexplored by existing
works. To fill this gap, we propose a novel multi-view approach to capture
patterns with properties like shift equivariance. Our method integrates diverse
features, including spectral, temporal, local, and global features, to obtain
rich, complementary contexts for TSC. We use continuous wavelet transform to
capture time-frequency features that remain consistent even when the input is
shifted in time. These features are fused with temporal convolutional or
multilayer perceptron features to provide complex local and global contextual
information. We utilize the Mamba state space model for efficient and scalable
sequence modeling and capturing long-range dependencies in time series.
Moreover, we introduce a new scanning scheme for Mamba, called tango scanning,
to effectively model sequence relationships and leverage inversion invariance,
thereby enhancing our model's generalization and robustness. Experiments on two
sets of benchmark datasets (10+20 datasets) demonstrate our approach's
effectiveness, achieving average accuracy improvements of 4.01-6.45\% and
7.93\% respectively, over leading TSC models such as TimesNet and TSLANet.
| [
{
"version": "v1",
"created": "Thu, 6 Jun 2024 18:05:10 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 17:40:41 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ahamed",
"Md Atik",
""
],
[
"Cheng",
"Qiang",
""
]
] | TITLE: TSCMamba: Mamba Meets Multi-View Learning for Time Series Classification
ABSTRACT: Multivariate time series classification (TSC) is critical for various
applications in fields such as healthcare and finance. While various approaches
for TSC have been explored, important properties of time series, such as shift
equivariance and inversion invariance, are largely underexplored by existing
works. To fill this gap, we propose a novel multi-view approach to capture
patterns with properties like shift equivariance. Our method integrates diverse
features, including spectral, temporal, local, and global features, to obtain
rich, complementary contexts for TSC. We use continuous wavelet transform to
capture time-frequency features that remain consistent even when the input is
shifted in time. These features are fused with temporal convolutional or
multilayer perceptron features to provide complex local and global contextual
information. We utilize the Mamba state space model for efficient and scalable
sequence modeling and capturing long-range dependencies in time series.
Moreover, we introduce a new scanning scheme for Mamba, called tango scanning,
to effectively model sequence relationships and leverage inversion invariance,
thereby enhancing our model's generalization and robustness. Experiments on two
sets of benchmark datasets (10+20 datasets) demonstrate our approach's
effectiveness, achieving average accuracy improvements of 4.01-6.45\% and
7.93\% respectively, over leading TSC models such as TimesNet and TSLANet.
|
2406.04927 | Georgios Efstathiadis | Georgios Efstathiadis, Vijay Yadav, Anzar Abbas | LLM-based speaker diarization correction: A generalizable approach | null | Speech Communication, Volume 170, 2025, Page 103224 | 10.1016/j.specom.2025.103224 | null | eess.AS cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Speaker diarization is necessary for interpreting conversations transcribed
using automated speech recognition (ASR) tools. Despite significant
developments in diarization methods, diarization accuracy remains an issue.
Here, we investigate the use of large language models (LLMs) for diarization
correction as a post-processing step. LLMs were fine-tuned using the Fisher
corpus, a large dataset of transcribed conversations. The ability of the models
to improve diarization accuracy in a holdout dataset from the Fisher corpus as
well as an independent dataset was measured. We report that fine-tuned LLMs can
markedly improve diarization accuracy. However, model performance is
constrained to transcripts produced using the same ASR tool as the transcripts
used for fine-tuning, limiting generalizability. To address this constraint, an
ensemble model was developed by combining weights from three separate models,
each fine-tuned using transcripts from a different ASR tool. The ensemble model
demonstrated better overall performance than each of the ASR-specific models,
suggesting that a generalizable and ASR-agnostic approach may be achievable. We
have made the weights of these models publicly available on HuggingFace at
https://huggingface.co/bklynhlth.
| [
{
"version": "v1",
"created": "Fri, 7 Jun 2024 13:33:22 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Sep 2024 20:42:20 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 13:34:07 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Efstathiadis",
"Georgios",
""
],
[
"Yadav",
"Vijay",
""
],
[
"Abbas",
"Anzar",
""
]
] | TITLE: LLM-based speaker diarization correction: A generalizable approach
ABSTRACT: Speaker diarization is necessary for interpreting conversations transcribed
using automated speech recognition (ASR) tools. Despite significant
developments in diarization methods, diarization accuracy remains an issue.
Here, we investigate the use of large language models (LLMs) for diarization
correction as a post-processing step. LLMs were fine-tuned using the Fisher
corpus, a large dataset of transcribed conversations. The ability of the models
to improve diarization accuracy in a holdout dataset from the Fisher corpus as
well as an independent dataset was measured. We report that fine-tuned LLMs can
markedly improve diarization accuracy. However, model performance is
constrained to transcripts produced using the same ASR tool as the transcripts
used for fine-tuning, limiting generalizability. To address this constraint, an
ensemble model was developed by combining weights from three separate models,
each fine-tuned using transcripts from a different ASR tool. The ensemble model
demonstrated better overall performance than each of the ASR-specific models,
suggesting that a generalizable and ASR-agnostic approach may be achievable. We
have made the weights of these models publicly available on HuggingFace at
https://huggingface.co/bklynhlth.
|
2406.08920 | Swapnil Bhosale | Swapnil Bhosale, Haosen Yang, Diptesh Kanojia, Jiankang Deng, Xiatian
Zhu | AV-GS: Learning Material and Geometry Aware Priors for Novel View
Acoustic Synthesis | Accepted to NeurIPS 2024 | null | null | null | cs.SD cs.AI eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Novel view acoustic synthesis (NVAS) aims to render binaural audio at any
target viewpoint, given mono audio emitted by a sound source in a 3D scene.
Existing methods have proposed NeRF-based implicit models to exploit visual
cues as a condition for synthesizing binaural audio. However, in addition to
low efficiency originating from heavy NeRF rendering, these methods all have a
limited ability to characterize the entire scene environment, such as room
geometry, material properties, and the spatial relation between the listener
and sound source. To address these issues, we propose a novel Audio-Visual
Gaussian Splatting (AV-GS) model. To obtain a material-aware and geometry-aware
condition for audio synthesis, we learn an explicit point-based scene
representation with an audio-guidance parameter on locally initialized Gaussian
points, taking into account the spatial relation between the listener and sound
source. To make the visual scene model audio adaptive, we propose a point
densification and pruning strategy to optimally distribute the Gaussian points,
with the per-point contribution in sound propagation (e.g., more points needed
for texture-less wall surfaces as they affect sound path diversion). Extensive
experiments validate the superiority of our AV-GS over existing alternatives on
the real-world RWAS and simulation-based SoundSpaces datasets.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2024 08:34:12 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jun 2024 06:38:50 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 19:43:03 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Bhosale",
"Swapnil",
""
],
[
"Yang",
"Haosen",
""
],
[
"Kanojia",
"Diptesh",
""
],
[
"Deng",
"Jiankang",
""
],
[
"Zhu",
"Xiatian",
""
]
] | TITLE: AV-GS: Learning Material and Geometry Aware Priors for Novel View
Acoustic Synthesis
ABSTRACT: Novel view acoustic synthesis (NVAS) aims to render binaural audio at any
target viewpoint, given mono audio emitted by a sound source in a 3D scene.
Existing methods have proposed NeRF-based implicit models to exploit visual
cues as a condition for synthesizing binaural audio. However, in addition to
low efficiency originating from heavy NeRF rendering, these methods all have a
limited ability to characterize the entire scene environment, such as room
geometry, material properties, and the spatial relation between the listener
and sound source. To address these issues, we propose a novel Audio-Visual
Gaussian Splatting (AV-GS) model. To obtain a material-aware and geometry-aware
condition for audio synthesis, we learn an explicit point-based scene
representation with an audio-guidance parameter on locally initialized Gaussian
points, taking into account the spatial relation between the listener and sound
source. To make the visual scene model audio adaptive, we propose a point
densification and pruning strategy to optimally distribute the Gaussian points,
with the per-point contribution in sound propagation (e.g., more points needed
for texture-less wall surfaces as they affect sound path diversion). Extensive
experiments validate the superiority of our AV-GS over existing alternatives on
the real-world RWAS and simulation-based SoundSpaces datasets.
|
2406.11601 | Weronika Ormaniec | Weronika Ormaniec, Scott Sussex, Lars Lorch, Bernhard Sch\"olkopf,
Andreas Krause | Standardizing Structural Causal Models | Added additional benchmarks, including PC algorithm, GES, GOLEM.
Evaluated Var-sortability and R2-sortability of the heuristics for mitigating
variance accumulation | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthetic datasets generated by structural causal models (SCMs) are commonly
used for benchmarking causal structure learning algorithms. However, the
variances and pairwise correlations in SCM data tend to increase along the
causal ordering. Several popular algorithms exploit these artifacts, possibly
leading to conclusions that do not generalize to real-world settings. Existing
metrics like $\operatorname{Var}$-sortability and
$\operatorname{R^2}$-sortability quantify these patterns, but they do not
provide tools to remedy them. To address this, we propose
internally-standardized structural causal models (iSCMs), a modification of
SCMs that introduces a standardization operation at each variable during the
generative process. By construction, iSCMs are not
$\operatorname{Var}$-sortable. We also find empirical evidence that they are
mostly not $\operatorname{R^2}$-sortable for commonly-used graph families.
Moreover, contrary to the post-hoc standardization of data generated by
standard SCMs, we prove that linear iSCMs are less identifiable from prior
knowledge on the weights and do not collapse to deterministic relationships in
large systems, which may make iSCMs a useful model in causal inference beyond
the benchmarking problem studied here. Our code is publicly available at:
https://github.com/werkaaa/iscm.
| [
{
"version": "v1",
"created": "Mon, 17 Jun 2024 14:52:21 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Oct 2024 21:14:49 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 14:26:33 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ormaniec",
"Weronika",
""
],
[
"Sussex",
"Scott",
""
],
[
"Lorch",
"Lars",
""
],
[
"Schölkopf",
"Bernhard",
""
],
[
"Krause",
"Andreas",
""
]
] | TITLE: Standardizing Structural Causal Models
ABSTRACT: Synthetic datasets generated by structural causal models (SCMs) are commonly
used for benchmarking causal structure learning algorithms. However, the
variances and pairwise correlations in SCM data tend to increase along the
causal ordering. Several popular algorithms exploit these artifacts, possibly
leading to conclusions that do not generalize to real-world settings. Existing
metrics like $\operatorname{Var}$-sortability and
$\operatorname{R^2}$-sortability quantify these patterns, but they do not
provide tools to remedy them. To address this, we propose
internally-standardized structural causal models (iSCMs), a modification of
SCMs that introduces a standardization operation at each variable during the
generative process. By construction, iSCMs are not
$\operatorname{Var}$-sortable. We also find empirical evidence that they are
mostly not $\operatorname{R^2}$-sortable for commonly-used graph families.
Moreover, contrary to the post-hoc standardization of data generated by
standard SCMs, we prove that linear iSCMs are less identifiable from prior
knowledge on the weights and do not collapse to deterministic relationships in
large systems, which may make iSCMs a useful model in causal inference beyond
the benchmarking problem studied here. Our code is publicly available at:
https://github.com/werkaaa/iscm.
|
2406.13378 | Zidong Cao | Zidong Cao, Jinjing Zhu, Weiming Zhang, Hao Ai, Haotian Bai,
Hengshuang Zhao, Lin Wang | PanDA: Towards Panoramic Depth Anything with Unlabeled Panoramas and
Mobius Spatial Augmentation | 16 pages, 18 figures, accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Depth Anything Models (DAMs) - a type of depth foundation models -
have demonstrated impressive zero-shot capabilities across diverse perspective
images. Despite this success, it remains an open question how DAMs perform on
panorama images, which enjoy a large field-of-view (180x360) but suffer from
spherical distortions. To address this gap, we conduct an empirical
analysis to evaluate the performance of DAMs on panoramic images and identify
their limitations. For this, we undertake comprehensive experiments to assess
the performance of DAMs from three key factors: panoramic representations, 360
camera positions for capturing scenarios, and spherical spatial
transformations. This way, we reveal some key findings, e.g., DAMs are
sensitive to spatial transformations. We then propose a semi-supervised
learning (SSL) framework to learn a panoramic DAM, dubbed PanDA. Under the
umbrella of SSL, PanDA first learns a teacher model by fine-tuning DAM through
joint training on synthetic indoor and outdoor panoramic datasets. Then, a
student model is trained using large-scale unlabeled data, leveraging
pseudo-labels generated by the teacher model. To enhance PanDA's generalization
capability, M"obius transformation-based spatial augmentation (MTSA) is
proposed to impose consistency regularization between the predicted depth maps
from the original and spatially transformed ones. This subtly improves the
student model's robustness to various spatial transformations, even under
severe distortions. Extensive experiments demonstrate that PanDA exhibits
remarkable zero-shot capability across diverse scenes, and outperforms the
data-specific panoramic depth estimation methods on two popular real-world
benchmarks.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2024 09:19:06 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 09:07:43 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Cao",
"Zidong",
""
],
[
"Zhu",
"Jinjing",
""
],
[
"Zhang",
"Weiming",
""
],
[
"Ai",
"Hao",
""
],
[
"Bai",
"Haotian",
""
],
[
"Zhao",
"Hengshuang",
""
],
[
"Wang",
"Lin",
""
]
] | TITLE: PanDA: Towards Panoramic Depth Anything with Unlabeled Panoramas and
Mobius Spatial Augmentation
ABSTRACT: Recently, Depth Anything Models (DAMs) - a type of depth foundation models -
have demonstrated impressive zero-shot capabilities across diverse perspective
images. Despite this success, it remains an open question how DAMs perform on
panorama images, which enjoy a large field-of-view (180x360) but suffer from
spherical distortions. To address this gap, we conduct an empirical
analysis to evaluate the performance of DAMs on panoramic images and identify
their limitations. For this, we undertake comprehensive experiments to assess
the performance of DAMs from three key factors: panoramic representations, 360
camera positions for capturing scenarios, and spherical spatial
transformations. This way, we reveal some key findings, e.g., DAMs are
sensitive to spatial transformations. We then propose a semi-supervised
learning (SSL) framework to learn a panoramic DAM, dubbed PanDA. Under the
umbrella of SSL, PanDA first learns a teacher model by fine-tuning DAM through
joint training on synthetic indoor and outdoor panoramic datasets. Then, a
student model is trained using large-scale unlabeled data, leveraging
pseudo-labels generated by the teacher model. To enhance PanDA's generalization
capability, M"obius transformation-based spatial augmentation (MTSA) is
proposed to impose consistency regularization between the predicted depth maps
from the original and spatially transformed ones. This subtly improves the
student model's robustness to various spatial transformations, even under
severe distortions. Extensive experiments demonstrate that PanDA exhibits
remarkable zero-shot capability across diverse scenes, and outperforms the
data-specific panoramic depth estimation methods on two popular real-world
benchmarks.
|
2406.17503 | Fu Feng | Fu Feng, Yucheng Xie, Jing Wang, Xin Geng | WAVE: Weight Templates for Adaptive Initialization of Variable-sized
Models | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The growing complexity of model parameters underscores the significance of
pre-trained models. However, deployment constraints often necessitate models of
varying sizes, exposing limitations in the conventional pre-training and
fine-tuning paradigm, particularly when target model sizes are incompatible
with pre-trained ones. To address this challenge, we propose WAVE, a novel
approach that reformulates variable-sized model initialization from a
multi-task perspective, where initializing each model size is treated as a
distinct task. WAVE employs shared, size-agnostic weight templates alongside
size-specific weight scalers to achieve consistent initialization across
various model sizes. These weight templates, constructed within the Learngene
framework, integrate knowledge from pre-trained models through a distillation
process constrained by Kronecker-based rules. Target models are then
initialized by concatenating and weighting these templates, with adaptive
connection rules established by lightweight weight scalers, whose parameters
are learned from minimal training data. Extensive experiments demonstrate the
efficiency of WAVE, achieving state-of-the-art performance in initializing
models of various depth and width. The knowledge encapsulated in weight
templates is also task-agnostic, allowing for seamless transfer across diverse
downstream datasets. Code will be made available at
https://github.com/fu-feng/WAVE.
| [
{
"version": "v1",
"created": "Tue, 25 Jun 2024 12:43:33 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Jul 2024 06:41:13 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Mar 2025 17:21:38 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Feng",
"Fu",
""
],
[
"Xie",
"Yucheng",
""
],
[
"Wang",
"Jing",
""
],
[
"Geng",
"Xin",
""
]
] | TITLE: WAVE: Weight Templates for Adaptive Initialization of Variable-sized
Models
ABSTRACT: The growing complexity of model parameters underscores the significance of
pre-trained models. However, deployment constraints often necessitate models of
varying sizes, exposing limitations in the conventional pre-training and
fine-tuning paradigm, particularly when target model sizes are incompatible
with pre-trained ones. To address this challenge, we propose WAVE, a novel
approach that reformulates variable-sized model initialization from a
multi-task perspective, where initializing each model size is treated as a
distinct task. WAVE employs shared, size-agnostic weight templates alongside
size-specific weight scalers to achieve consistent initialization across
various model sizes. These weight templates, constructed within the Learngene
framework, integrate knowledge from pre-trained models through a distillation
process constrained by Kronecker-based rules. Target models are then
initialized by concatenating and weighting these templates, with adaptive
connection rules established by lightweight weight scalers, whose parameters
are learned from minimal training data. Extensive experiments demonstrate the
efficiency of WAVE, achieving state-of-the-art performance in initializing
models of various depth and width. The knowledge encapsulated in weight
templates is also task-agnostic, allowing for seamless transfer across diverse
downstream datasets. Code will be made available at
https://github.com/fu-feng/WAVE.
|
2406.18333 | Hossein Ranjbar | Hossein Ranjbar, Alireza Taheri | Continuous Sign Language Recognition Using Intra-inter Gloss Attention | null | null | 10.1007/s11042-025-20721-5 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Many continuous sign language recognition (CSLR) studies adopt
transformer-based architectures for sequence modeling due to their powerful
capacity for capturing global contexts. Nevertheless, vanilla self-attention,
which serves as the core module of the transformer, calculates a weighted
average over all time steps; therefore, the local temporal semantics of sign
videos may not be fully exploited. In this study, we introduce a novel module
in sign language recognition studies, called intra-inter gloss attention
module, to leverage the relationships among frames within glosses and the
semantic and grammatical dependencies between glosses in the video. In the
intra-gloss attention module, the video is divided into equally sized chunks
and a self-attention mechanism is applied within each chunk. This localized
self-attention significantly reduces complexity and eliminates the noise
introduced by attending to unrelated frames. In the inter-gloss attention module, we
first aggregate the chunk-level features within each gloss chunk by average
pooling along the temporal dimension. Subsequently, multi-head self-attention
is applied to all chunk-level features. Given the non-significance of the
signer-environment interaction, we utilize segmentation to remove the
background of the videos. This enables the proposed model to direct its focus
toward the signer. Experimental results on the PHOENIX-2014 benchmark dataset
demonstrate that our method can effectively extract sign language features in
an end-to-end manner without any prior knowledge, improve the accuracy of CSLR,
and achieve a word error rate (WER) of 20.4 on the test set, which is a
competitive result compared to state-of-the-art methods that use additional
supervision.
| [
{
"version": "v1",
"created": "Wed, 26 Jun 2024 13:21:08 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ranjbar",
"Hossein",
""
],
[
"Taheri",
"Alireza",
""
]
] | TITLE: Continuous Sign Language Recognition Using Intra-inter Gloss Attention
ABSTRACT: Many continuous sign language recognition (CSLR) studies adopt
transformer-based architectures for sequence modeling due to their powerful
capacity for capturing global contexts. Nevertheless, vanilla self-attention,
which serves as the core module of the transformer, calculates a weighted
average over all time steps; therefore, the local temporal semantics of sign
videos may not be fully exploited. In this study, we introduce a novel module
in sign language recognition studies, called intra-inter gloss attention
module, to leverage the relationships among frames within glosses and the
semantic and grammatical dependencies between glosses in the video. In the
intra-gloss attention module, the video is divided into equally sized chunks
and a self-attention mechanism is applied within each chunk. This localized
self-attention significantly reduces complexity and eliminates noise introduced
by considering non-relative frames. In the inter-gloss attention module, we
first aggregate the chunk-level features within each gloss chunk by average
pooling along the temporal dimension. Subsequently, multi-head self-attention
is applied to all chunk-level features. Since the signer-environment
interaction is not significant, we utilize segmentation to remove the
background of the videos. This enables the proposed model to direct its focus
toward the signer. Experimental results on the PHOENIX-2014 benchmark dataset
demonstrate that our method can effectively extract sign language features in
an end-to-end manner without any prior knowledge, improve the accuracy of CSLR,
and achieve a word error rate (WER) of 20.4 on the test set, which is a
competitive result compared to state-of-the-art methods that use additional
supervision.
|
2406.18345 | Yi Ding | Yi Ding, Chengxuan Tong, Shuailei Zhang, Muyun Jiang, Yong Li, Kevin
Lim Jun Liang, Cuntai Guan | EmT: A Novel Transformer for Generalized Cross-subject EEG Emotion
Recognition | 12 pages, 9 figures. This work has been accepted by IEEE TNNLS | null | 10.1109/TNNLS.2025.3552603 | null | cs.LG eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Integrating prior knowledge of neurophysiology into neural network
architecture enhances the performance of emotion decoding. While numerous
techniques emphasize learning spatial and short-term temporal patterns, there
has been limited emphasis on capturing the vital long-term contextual
information associated with emotional cognitive processes. In order to address
this discrepancy, we introduce a novel transformer model called emotion
transformer (EmT). EmT is designed to excel in both generalized cross-subject
EEG emotion classification and regression tasks. In EmT, EEG signals are
transformed into a temporal graph format, creating a sequence of EEG feature
graphs using a temporal graph construction module (TGC). A novel residual
multi-view pyramid GCN module (RMPG) is then proposed to learn dynamic graph
representations for each EEG feature graph within the series, and the learned
representations of each graph are fused into one token. Furthermore, we design
a temporal contextual transformer module (TCT) with two types of token mixers
to learn the temporal contextual information. Finally, the task-specific output
module (TSO) generates the desired outputs. Experiments on four publicly
available datasets show that EmT achieves higher results than the baseline
methods for both EEG emotion classification and regression tasks. The code is
available at https://github.com/yi-ding-cs/EmT.
| [
{
"version": "v1",
"created": "Wed, 26 Jun 2024 13:42:11 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 05:17:27 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 02:22:04 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ding",
"Yi",
""
],
[
"Tong",
"Chengxuan",
""
],
[
"Zhang",
"Shuailei",
""
],
[
"Jiang",
"Muyun",
""
],
[
"Li",
"Yong",
""
],
[
"Liang",
"Kevin Lim Jun",
""
],
[
"Guan",
"Cuntai",
""
]
] | TITLE: EmT: A Novel Transformer for Generalized Cross-subject EEG Emotion
Recognition
ABSTRACT: Integrating prior knowledge of neurophysiology into neural network
architecture enhances the performance of emotion decoding. While numerous
techniques emphasize learning spatial and short-term temporal patterns, there
has been limited emphasis on capturing the vital long-term contextual
information associated with emotional cognitive processes. In order to address
this discrepancy, we introduce a novel transformer model called emotion
transformer (EmT). EmT is designed to excel in both generalized cross-subject
EEG emotion classification and regression tasks. In EmT, EEG signals are
transformed into a temporal graph format, creating a sequence of EEG feature
graphs using a temporal graph construction module (TGC). A novel residual
multi-view pyramid GCN module (RMPG) is then proposed to learn dynamic graph
representations for each EEG feature graph within the series, and the learned
representations of each graph are fused into one token. Furthermore, we design
a temporal contextual transformer module (TCT) with two types of token mixers
to learn the temporal contextual information. Finally, the task-specific output
module (TSO) generates the desired outputs. Experiments on four publicly
available datasets show that EmT achieves better results than the baseline
methods for both EEG emotion classification and regression tasks. The code is
available at https://github.com/yi-ding-cs/EmT.
|
2406.18894 | Vasileios Kouliaridis | Vasileios Kouliaridis, Georgios Karopoulos, Georgios Kambourakis | Assessing the Effectiveness of LLMs in Android Application Vulnerability
Analysis | null | null | 10.1007/978-3-031-85593-1_9 | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | The increasing frequency of attacks on Android applications coupled with the
recent popularity of large language models (LLMs) necessitates a comprehensive
understanding of the capabilities of the latter in identifying potential
vulnerabilities, which is key to mitigating the overall risk. To this end, the
work at hand compares the ability of nine state-of-the-art LLMs to detect
Android code vulnerabilities listed in the latest Open Worldwide Application
Security Project (OWASP) Mobile Top 10. Each LLM was evaluated against an open
dataset of over 100 vulnerable code samples, including obfuscated ones,
assessing each model's ability to identify key vulnerabilities. Our analysis
reveals the strengths and weaknesses of each LLM, identifying important factors
that contribute to their performance. Additionally, we offer insights into
context augmentation with retrieval-augmented generation (RAG) for detecting
Android code vulnerabilities, which in turn may propel secure application
development. Finally, while the reported findings regarding code vulnerability
analysis show promise, they also reveal significant discrepancies among the
different LLMs.
| [
{
"version": "v1",
"created": "Thu, 27 Jun 2024 05:14:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Kouliaridis",
"Vasileios",
""
],
[
"Karopoulos",
"Georgios",
""
],
[
"Kambourakis",
"Georgios",
""
]
] | TITLE: Assessing the Effectiveness of LLMs in Android Application Vulnerability
Analysis
ABSTRACT: The increasing frequency of attacks on Android applications coupled with the
recent popularity of large language models (LLMs) necessitates a comprehensive
understanding of the capabilities of the latter in identifying potential
vulnerabilities, which is key to mitigating the overall risk. To this end, the
work at hand compares the ability of nine state-of-the-art LLMs to detect
Android code vulnerabilities listed in the latest Open Worldwide Application
Security Project (OWASP) Mobile Top 10. Each LLM was evaluated against an open
dataset of over 100 vulnerable code samples, including obfuscated ones,
assessing each model's ability to identify key vulnerabilities. Our analysis
reveals the strengths and weaknesses of each LLM, identifying important factors
that contribute to their performance. Additionally, we offer insights into
context augmentation with retrieval-augmented generation (RAG) for detecting
Android code vulnerabilities, which in turn may propel secure application
development. Finally, while the reported findings regarding code vulnerability
analysis show promise, they also reveal significant discrepancies among the
different LLMs.
|
2407.03605 | Xiaoxia Liu | Xiaoxia Liu, Shijie Yu, Jian Lu, Xiaojun Chen | Orthogonal Constrained Minimization with Tensor $\ell_{2,p}$
Regularization for HSI Denoising and Destriping | null | null | null | null | math.OC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hyperspectral images (HSIs) are often contaminated by a mixture of noises
such as Gaussian noise, dead lines, stripes, and so on. In this paper, we
propose a novel approach for HSI denoising and destriping, called NLTL2p, which
consists of an orthogonal constrained minimization model and an iterative
algorithm with convergence guarantees. The model of the proposed NLTL2p
approach is built based on a new sparsity-enhanced Nonlocal Low-rank Tensor
regularization and a tensor $\ell_{2,p}$ norm with $p\in(0,1)$. The low-rank
constraints for HSI denoising utilize the spatial nonlocal self-similarity and
spectral correlation of HSIs and are formulated based on independent
higher-order singular value decomposition with sparsity enhancement on its core
tensor to promote more low-rankness. The tensor $\ell_{2,p}$ norm for HSI
destriping is extended from the matrix $\ell_{2,p}$ norm. A proximal block
coordinate descent algorithm is proposed in the NLTL2p approach to solve the
resulting nonconvex nonsmooth minimization with orthogonal constraints. We show
any accumulation point of the sequence generated by the proposed algorithm
converges to a first-order stationary point, which is defined using three
equalities of substationarity, symmetry, and feasibility for orthogonal
constraints. In the numerical experiments, we compare the proposed method with
state-of-the-art methods including a deep learning based method, and test the
methods on both simulated and real HSI datasets. Our proposed NLTL2p method
demonstrates superior performance in terms of metrics such as mean peak
signal-to-noise ratio, as well as visual quality.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2024 03:33:19 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 03:13:43 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Xiaoxia",
""
],
[
"Yu",
"Shijie",
""
],
[
"Lu",
"Jian",
""
],
[
"Chen",
"Xiaojun",
""
]
] | TITLE: Orthogonal Constrained Minimization with Tensor $\ell_{2,p}$
Regularization for HSI Denoising and Destriping
ABSTRACT: Hyperspectral images (HSIs) are often contaminated by a mixture of noises
such as Gaussian noise, dead lines, stripes, and so on. In this paper, we
propose a novel approach for HSI denoising and destriping, called NLTL2p, which
consists of an orthogonal constrained minimization model and an iterative
algorithm with convergence guarantees. The model of the proposed NLTL2p
approach is built based on a new sparsity-enhanced Nonlocal Low-rank Tensor
regularization and a tensor $\ell_{2,p}$ norm with $p\in(0,1)$. The low-rank
constraints for HSI denoising utilize the spatial nonlocal self-similarity and
spectral correlation of HSIs and are formulated based on independent
higher-order singular value decomposition with sparsity enhancement on its core
tensor to promote more low-rankness. The tensor $\ell_{2,p}$ norm for HSI
destriping is extended from the matrix $\ell_{2,p}$ norm. A proximal block
coordinate descent algorithm is proposed in the NLTL2p approach to solve the
resulting nonconvex nonsmooth minimization with orthogonal constraints. We show
any accumulation point of the sequence generated by the proposed algorithm
converges to a first-order stationary point, which is defined using three
equalities of substationarity, symmetry, and feasibility for orthogonal
constraints. In the numerical experiments, we compare the proposed method with
state-of-the-art methods including a deep learning based method, and test the
methods on both simulated and real HSI datasets. Our proposed NLTL2p method
demonstrates superior performance in terms of metrics such as mean peak
signal-to-noise ratio, as well as visual quality.
|
2407.05649 | Tongzhou Liao | Tongzhou Liao, Barnab\'as P\'oczos | Greener GRASS: Enhancing GNNs with Encoding, Rewiring, and Attention | Published as a conference paper at ICLR 2025 | null | null | null | cs.LG cs.AI cs.NE | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) have become important tools for machine learning
on graph-structured data. In this paper, we explore the synergistic combination
of graph encoding, graph rewiring, and graph attention, by introducing Graph
Attention with Stochastic Structures (GRASS), a novel GNN architecture. GRASS
utilizes relative random walk probabilities (RRWP) encoding and a novel
decomposed variant (D-RRWP) to efficiently capture structural information. It
rewires the input graph by superimposing a random regular graph to enhance
long-range information propagation. It also employs a novel additive attention
mechanism tailored for graph-structured data. Our empirical evaluations
demonstrate that GRASS achieves state-of-the-art performance on multiple
benchmark datasets, including a 20.3% reduction in mean absolute error on the
ZINC dataset.
| [
{
"version": "v1",
"created": "Mon, 8 Jul 2024 06:21:56 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Jul 2024 07:30:43 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Oct 2024 16:32:11 GMT"
},
{
"version": "v4",
"created": "Sun, 2 Mar 2025 11:37:49 GMT"
},
{
"version": "v5",
"created": "Fri, 14 Mar 2025 23:47:53 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liao",
"Tongzhou",
""
],
[
"Póczos",
"Barnabás",
""
]
] | TITLE: Greener GRASS: Enhancing GNNs with Encoding, Rewiring, and Attention
ABSTRACT: Graph Neural Networks (GNNs) have become important tools for machine learning
on graph-structured data. In this paper, we explore the synergistic combination
of graph encoding, graph rewiring, and graph attention, by introducing Graph
Attention with Stochastic Structures (GRASS), a novel GNN architecture. GRASS
utilizes relative random walk probabilities (RRWP) encoding and a novel
decomposed variant (D-RRWP) to efficiently capture structural information. It
rewires the input graph by superimposing a random regular graph to enhance
long-range information propagation. It also employs a novel additive attention
mechanism tailored for graph-structured data. Our empirical evaluations
demonstrate that GRASS achieves state-of-the-art performance on multiple
benchmark datasets, including a 20.3% reduction in mean absolute error on the
ZINC dataset.
|
2407.05782 | Ioannis Tsiamas | Ioannis Tsiamas, Santiago Pascual, Chunghsin Yeh, Joan Serr\`a | Sequential Contrastive Audio-Visual Learning | ICASSP 2025. Version 1 contains more details | null | null | null | cs.SD cs.CV cs.LG cs.MM eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contrastive learning has emerged as a powerful technique in audio-visual
representation learning, leveraging the natural co-occurrence of audio and
visual modalities in webscale video datasets. However, conventional contrastive
audio-visual learning (CAV) methodologies often rely on aggregated
representations derived through temporal aggregation, neglecting the intrinsic
sequential nature of the data. This oversight raises concerns regarding the
ability of standard approaches to capture and utilize fine-grained information
within sequences. In response to this limitation, we propose sequential
contrastive audiovisual learning (SCAV), which contrasts examples based on
their non-aggregated representation space using multidimensional sequential
distances. Audio-visual retrieval experiments with the VGGSound and Music
datasets demonstrate the effectiveness of SCAV, with up to 3.5x relative
improvements in recall against traditional aggregation-based contrastive
learning and other previously proposed methods, which utilize more parameters
and data. We also show that models trained with SCAV exhibit a significant
degree of flexibility regarding the metric employed for retrieval, allowing us
to use a hybrid retrieval approach that is both effective and efficient.
| [
{
"version": "v1",
"created": "Mon, 8 Jul 2024 09:45:20 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 13:36:14 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Tsiamas",
"Ioannis",
""
],
[
"Pascual",
"Santiago",
""
],
[
"Yeh",
"Chunghsin",
""
],
[
"Serrà",
"Joan",
""
]
] | TITLE: Sequential Contrastive Audio-Visual Learning
ABSTRACT: Contrastive learning has emerged as a powerful technique in audio-visual
representation learning, leveraging the natural co-occurrence of audio and
visual modalities in webscale video datasets. However, conventional contrastive
audio-visual learning (CAV) methodologies often rely on aggregated
representations derived through temporal aggregation, neglecting the intrinsic
sequential nature of the data. This oversight raises concerns regarding the
ability of standard approaches to capture and utilize fine-grained information
within sequences. In response to this limitation, we propose sequential
contrastive audiovisual learning (SCAV), which contrasts examples based on
their non-aggregated representation space using multidimensional sequential
distances. Audio-visual retrieval experiments with the VGGSound and Music
datasets demonstrate the effectiveness of SCAV, with up to 3.5x relative
improvements in recall against traditional aggregation-based contrastive
learning and other previously proposed methods, which utilize more parameters
and data. We also show that models trained with SCAV exhibit a significant
degree of flexibility regarding the metric employed for retrieval, allowing us
to use a hybrid retrieval approach that is both effective and efficient.
|
2407.08227 | Catarina Moreira | Chihcheng Hsieh, Catarina Moreira, Isabel Blanco Nobre, Sandra Costa
Sousa, Chun Ouyang, Margot Brereton, Joaquim Jorge and Jacinto C. Nascimento | DALL-M: Context-Aware Clinical Data Augmentation with LLMs | null | null | null | null | cs.AI cs.IR cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | X-ray images are vital in medical diagnostics, but their effectiveness is
limited without clinical context. Radiologists often find chest X-rays
insufficient for diagnosing underlying diseases, necessitating the integration
of structured clinical features with radiology reports.
To address this, we introduce DALL-M, a novel framework that enhances
clinical datasets by generating contextual synthetic data. DALL-M augments
structured patient data, including vital signs (e.g., heart rate, oxygen
saturation), radiology findings (e.g., lesion presence), and demographic
factors. It integrates this tabular data with contextual knowledge extracted
from radiology reports and domain-specific resources (e.g., Radiopaedia,
Wikipedia), ensuring clinical consistency and reliability.
DALL-M follows a three-phase process: (i) clinical context storage, (ii)
expert query generation, and (iii) context-aware feature augmentation. Using
large language models (LLMs), it generates both contextual synthetic values for
existing clinical features and entirely new, clinically relevant features.
Applied to 799 cases from the MIMIC-IV dataset, DALL-M expanded the original
9 clinical features to 91. Empirical validation with machine learning models
(including Decision Trees, Random Forests, XGBoost, and TabNET) demonstrated a
16.5% improvement in F1 score and a 25% increase in Precision and Recall.
DALL-M bridges an important gap in clinical data augmentation by preserving
data integrity while enhancing predictive modeling in healthcare. Our results
show that integrating LLM-generated synthetic features significantly improves
model performance, making DALL-M a scalable and practical approach for
AI-driven medical diagnostics.
| [
{
"version": "v1",
"created": "Thu, 11 Jul 2024 07:01:50 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Oct 2024 09:51:46 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Mar 2025 06:25:38 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Hsieh",
"Chihcheng",
""
],
[
"Moreira",
"Catarina",
""
],
[
"Nobre",
"Isabel Blanco",
""
],
[
"Sousa",
"Sandra Costa",
""
],
[
"Ouyang",
"Chun",
""
],
[
"Brereton",
"Margot",
""
],
[
"Jorge",
"Joaquim",
""
],
[
"Nascimento",
"Jacinto C.",
""
]
] | TITLE: DALL-M: Context-Aware Clinical Data Augmentation with LLMs
ABSTRACT: X-ray images are vital in medical diagnostics, but their effectiveness is
limited without clinical context. Radiologists often find chest X-rays
insufficient for diagnosing underlying diseases, necessitating the integration
of structured clinical features with radiology reports.
To address this, we introduce DALL-M, a novel framework that enhances
clinical datasets by generating contextual synthetic data. DALL-M augments
structured patient data, including vital signs (e.g., heart rate, oxygen
saturation), radiology findings (e.g., lesion presence), and demographic
factors. It integrates this tabular data with contextual knowledge extracted
from radiology reports and domain-specific resources (e.g., Radiopaedia,
Wikipedia), ensuring clinical consistency and reliability.
DALL-M follows a three-phase process: (i) clinical context storage, (ii)
expert query generation, and (iii) context-aware feature augmentation. Using
large language models (LLMs), it generates both contextual synthetic values for
existing clinical features and entirely new, clinically relevant features.
Applied to 799 cases from the MIMIC-IV dataset, DALL-M expanded the original
9 clinical features to 91. Empirical validation with machine learning models
(including Decision Trees, Random Forests, XGBoost, and TabNET) demonstrated a
16.5% improvement in F1 score and a 25% increase in Precision and Recall.
DALL-M bridges an important gap in clinical data augmentation by preserving
data integrity while enhancing predictive modeling in healthcare. Our results
show that integrating LLM-generated synthetic features significantly improves
model performance, making DALL-M a scalable and practical approach for
AI-driven medical diagnostics.
|
2407.09295 | Yulong Yang | Yulong Yang, Xinshan Yang, Shuaidong Li, Chenhao Lin, Zhengyu Zhao,
Chao Shen, Tianwei Zhang | Systematic Categorization, Construction and Evaluation of New Attacks
against Multi-modal Mobile GUI Agents | Preprint. Work in progress | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The integration of Large Language Models (LLMs) and Multi-modal Large
Language Models (MLLMs) into mobile GUI agents has significantly enhanced user
efficiency and experience. However, this advancement also introduces potential
security vulnerabilities that have yet to be thoroughly explored. In this
paper, we present a systematic security investigation of multi-modal mobile GUI
agents, addressing this critical gap in the existing literature. Our
contributions are twofold: (1) we propose a novel threat modeling methodology,
leading to the discovery and feasibility analysis of 34 previously unreported
attacks, and (2) we design an attack framework to systematically construct and
evaluate these threats. Through a combination of real-world case studies and
extensive dataset-driven experiments, we validate the severity and practicality
of those attacks, highlighting the pressing need for robust security measures
in mobile GUI systems.
| [
{
"version": "v1",
"created": "Fri, 12 Jul 2024 14:30:05 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Jul 2024 13:36:56 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 07:13:53 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yang",
"Yulong",
""
],
[
"Yang",
"Xinshan",
""
],
[
"Li",
"Shuaidong",
""
],
[
"Lin",
"Chenhao",
""
],
[
"Zhao",
"Zhengyu",
""
],
[
"Shen",
"Chao",
""
],
[
"Zhang",
"Tianwei",
""
]
] | TITLE: Systematic Categorization, Construction and Evaluation of New Attacks
against Multi-modal Mobile GUI Agents
ABSTRACT: The integration of Large Language Models (LLMs) and Multi-modal Large
Language Models (MLLMs) into mobile GUI agents has significantly enhanced user
efficiency and experience. However, this advancement also introduces potential
security vulnerabilities that have yet to be thoroughly explored. In this
paper, we present a systematic security investigation of multi-modal mobile GUI
agents, addressing this critical gap in the existing literature. Our
contributions are twofold: (1) we propose a novel threat modeling methodology,
leading to the discovery and feasibility analysis of 34 previously unreported
attacks, and (2) we design an attack framework to systematically construct and
evaluate these threats. Through a combination of real-world case studies and
extensive dataset-driven experiments, we validate the severity and practicality
of those attacks, highlighting the pressing need for robust security measures
in mobile GUI systems.
|
2407.14500 | Rongkun Zheng | Rongkun Zheng, Lu Qi, Xi Chen, Yi Wang, Kun Wang, Yu Qiao, Hengshuang
Zhao | ViLLa: Video Reasoning Segmentation with Large Language Model | 15 pages,7 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent efforts in video reasoning segmentation (VRS) integrate large language
models (LLMs) with perception models to localize and track objects via textual
instructions, achieving barely satisfactory results in simple scenarios.
However, they struggled to discriminate and deduce the objects from user
queries in more real-world scenes featured by long durations, multiple objects,
rapid motion, and heavy occlusions. In this work, we analyze the underlying
causes of these limitations, and present ViLLa: Video reasoning segmentation
with Large Language Model. Remarkably, our ViLLa manages to tackle these
challenges through multiple core innovations: (1) a context synthesizer that
dynamically encodes the user intent with video contexts for accurate reasoning,
resolving ambiguities in complex queries, and (2) a hierarchical temporal
synchronizer that disentangles multi-object interactions across complex
temporal scenarios by modelling multi-object interactions at local and global
temporal scales. To enable efficient processing of long videos, ViLLa
incorporates (3) a key segment sampler that adaptively partitions long videos
into shorter but semantically dense segments for less redundancy. What's more,
to promote research in this unexplored area, we construct a VRS benchmark,
VideoReasonSeg, featuring different complex scenarios. Our model also exhibits
impressive state-of-the-art results on VideoReasonSeg, Ref-YouTube-VOS,
Ref-DAVIS17, MeViS, and ReVOS. Both quantitative and qualitative experiments
demonstrate that our method effectively enhances video reasoning segmentation
capabilities for multimodal LLMs. The code and dataset will be available at
https://github.com/rkzheng99/ViLLa.
| [
{
"version": "v1",
"created": "Thu, 18 Jul 2024 17:59:17 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Jul 2024 13:32:14 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 14:39:54 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zheng",
"Rongkun",
""
],
[
"Qi",
"Lu",
""
],
[
"Chen",
"Xi",
""
],
[
"Wang",
"Yi",
""
],
[
"Wang",
"Kun",
""
],
[
"Qiao",
"Yu",
""
],
[
"Zhao",
"Hengshuang",
""
]
] | TITLE: ViLLa: Video Reasoning Segmentation with Large Language Model
ABSTRACT: Recent efforts in video reasoning segmentation (VRS) integrate large language
models (LLMs) with perception models to localize and track objects via textual
instructions, achieving barely satisfactory results in simple scenarios.
However, they struggle to discriminate and deduce the objects from user
queries in more real-world scenes characterized by long durations, multiple objects,
rapid motion, and heavy occlusions. In this work, we analyze the underlying
causes of these limitations, and present ViLLa: Video reasoning segmentation
with Large Language Model. Remarkably, our ViLLa manages to tackle these
challenges through multiple core innovations: (1) a context synthesizer that
dynamically encodes the user intent with video contexts for accurate reasoning,
resolving ambiguities in complex queries, and (2) a hierarchical temporal
synchronizer that disentangles multi-object interactions across complex
temporal scenarios by modelling multi-object interactions at local and global
temporal scales. To enable efficient processing of long videos, ViLLa
incorporates (3) a key segment sampler that adaptively partitions long videos
into shorter but semantically dense segments for less redundancy. What's more,
to promote research in this unexplored area, we construct a VRS benchmark,
VideoReasonSeg, featuring different complex scenarios. Our model also exhibits
impressive state-of-the-art results on VideoReasonSeg, Ref-YouTube-VOS,
Ref-DAVIS17, MeViS, and ReVOS. Both quantitative and qualitative experiments
demonstrate that our method effectively enhances video reasoning segmentation
capabilities for multimodal LLMs. The code and dataset will be available at
https://github.com/rkzheng99/ViLLa.
|
2407.14850 | Nizhuan Wang | Yueyang Li, Weiming Zeng, Wenhao Dong, Di Han, Lei Chen, Hongyu Chen,
Zijian Kang, Shengyu Gong, Hongjie Yan, Wai Ting Siok, and Nizhuan Wang | A Tale of Single-channel Electroencephalogram: Devices, Datasets, Signal
Processing, Applications, and Future Directions | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single-channel electroencephalogram (EEG) is a cost-effective, comfortable,
and non-invasive method for monitoring brain activity, widely adopted by
researchers, consumers, and clinicians. The increasing number and proportion of
articles on single-channel EEG underscore its growing potential. This paper
provides a comprehensive review of single-channel EEG, focusing on development
trends, devices, datasets, signal processing methods, recent applications, and
future directions. Definitions of bipolar and unipolar configurations in
single-channel EEG are clarified to guide future advancements. Applications
mainly span sleep staging, emotion recognition, educational research, and
clinical diagnosis. Ongoing advances in AI-based EEG generation techniques
suggest that single-channel EEG may reach parity with, or even surpass,
multichannel EEG performance.
| [
{
"version": "v1",
"created": "Sat, 20 Jul 2024 11:36:17 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 02:13:30 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Li",
"Yueyang",
""
],
[
"Zeng",
"Weiming",
""
],
[
"Dong",
"Wenhao",
""
],
[
"Han",
"Di",
""
],
[
"Chen",
"Lei",
""
],
[
"Chen",
"Hongyu",
""
],
[
"Kang",
"Zijian",
""
],
[
"Gong",
"Shengyu",
""
],
[
"Yan",
"Hongjie",
""
],
[
"Siok",
"Wai Ting",
""
],
[
"Wang",
"Nizhuan",
""
]
] | TITLE: A Tale of Single-channel Electroencephalogram: Devices, Datasets, Signal
Processing, Applications, and Future Directions
ABSTRACT: Single-channel electroencephalogram (EEG) is a cost-effective, comfortable,
and non-invasive method for monitoring brain activity, widely adopted by
researchers, consumers, and clinicians. The increasing number and proportion of
articles on single-channel EEG underscore its growing potential. This paper
provides a comprehensive review of single-channel EEG, focusing on development
trends, devices, datasets, signal processing methods, recent applications, and
future directions. Definitions of bipolar and unipolar configurations in
single-channel EEG are clarified to guide future advancements. Applications
mainly span sleep staging, emotion recognition, educational research, and
clinical diagnosis. Ongoing advances in AI-based EEG generation techniques
suggest that single-channel EEG may reach parity with, or even surpass,
multichannel EEG performance.
|
2407.16008 | Jiaming Shen | Jiaming Shen, Ran Xu, Yennie Jun, Zhen Qin, Tianqi Liu, Carl Yang, Yi
Liang, Simon Baumgartner, Michael Bendersky | Boosting Reward Model with Preference-Conditional Multi-Aspect Synthetic
Data Generation | ICLR 2025 SSI-FM version | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Reward models (RMs) are crucial for aligning large language models (LLMs)
with human preferences. They are trained using preference datasets where each
example consists of one input prompt, two responses, and a preference label. As
curating a high-quality human labeled preference dataset is both time-consuming
and expensive, people often rely on existing powerful LLMs for preference label
generation. This can potentially introduce noise and impede RM training. In
this work, we present RMBoost, a novel synthetic preference data generation
paradigm to boost reward model quality. Unlike traditional methods, which
generate two responses before obtaining the preference label, RMBoost first
generates one response and selects a preference label, followed by generating
the second more (or less) preferred response conditioned on the pre-selected
preference label and the first response. This approach offers two main
advantages. First, RMBoost reduces labeling noise since preference pairs are
constructed intentionally. Second, RMBoost facilitates the creation of more
diverse responses by incorporating various quality aspects (e.g., helpfulness,
relevance, completeness) into the prompts. We conduct extensive experiments
across three diverse datasets and demonstrate that RMBoost outperforms other
synthetic preference data generation techniques and significantly boosts the
performance of four distinct reward models.
| [
{
"version": "v1",
"created": "Mon, 22 Jul 2024 19:21:55 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 20:08:08 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Shen",
"Jiaming",
""
],
[
"Xu",
"Ran",
""
],
[
"Jun",
"Yennie",
""
],
[
"Qin",
"Zhen",
""
],
[
"Liu",
"Tianqi",
""
],
[
"Yang",
"Carl",
""
],
[
"Liang",
"Yi",
""
],
[
"Baumgartner",
"Simon",
""
],
[
"Bendersky",
"Michael",
""
]
] | TITLE: Boosting Reward Model with Preference-Conditional Multi-Aspect Synthetic
Data Generation
ABSTRACT: Reward models (RMs) are crucial for aligning large language models (LLMs)
with human preferences. They are trained using preference datasets where each
example consists of one input prompt, two responses, and a preference label. As
curating a high-quality human labeled preference dataset is both time-consuming
and expensive, people often rely on existing powerful LLMs for preference label
generation. This can potentially introduce noise and impede RM training. In
this work, we present RMBoost, a novel synthetic preference data generation
paradigm to boost reward model quality. Unlike traditional methods, which
generate two responses before obtaining the preference label, RMBoost first
generates one response and selects a preference label, followed by generating
the second more (or less) preferred response conditioned on the pre-selected
preference label and the first response. This approach offers two main
advantages. First, RMBoost reduces labeling noise since preference pairs are
constructed intentionally. Second, RMBoost facilitates the creation of more
diverse responses by incorporating various quality aspects (e.g., helpfulness,
relevance, completeness) into the prompts. We conduct extensive experiments
across three diverse datasets and demonstrate that RMBoost outperforms other
synthetic preference data generation techniques and significantly boosts the
performance of four distinct reward models.
|
2407.17390 | Itamar Trainin | Itamar Trainin, Omri Abend | $T^5Score$: A Methodology for Automatically Assessing the Quality of LLM
Generated Multi-Document Topic Sets | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Using LLMs for Multi-Document Topic Extraction has recently gained popularity
due to their apparent high-quality outputs, expressiveness, and ease of use.
However, most existing evaluation practices are not designed for LLM-generated
topics and result in low inter-annotator agreement scores, hindering the
reliable use of LLMs for the task. To address this, we introduce $T^5Score$, an
evaluation methodology that decomposes the quality of a topic set into
quantifiable aspects, measurable through easy-to-perform annotation tasks. This
framing enables a convenient, manual or automatic, evaluation procedure
resulting in a strong inter-annotator agreement score. To substantiate our
methodology and claims, we perform extensive experimentation on multiple
datasets and report the results.
| [
{
"version": "v1",
"created": "Wed, 24 Jul 2024 16:14:15 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 08:21:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Trainin",
"Itamar",
""
],
[
"Abend",
"Omri",
""
]
] | TITLE: $T^5Score$: A Methodology for Automatically Assessing the Quality of LLM
Generated Multi-Document Topic Sets
ABSTRACT: Using LLMs for Multi-Document Topic Extraction has recently gained popularity
due to their apparent high-quality outputs, expressiveness, and ease of use.
However, most existing evaluation practices are not designed for LLM-generated
topics and result in low inter-annotator agreement scores, hindering the
reliable use of LLMs for the task. To address this, we introduce $T^5Score$, an
evaluation methodology that decomposes the quality of a topic set into
quantifiable aspects, measurable through easy-to-perform annotation tasks. This
framing enables a convenient, manual or automatic, evaluation procedure
resulting in a strong inter-annotator agreement score. To substantiate our
methodology and claims, we perform extensive experimentation on multiple
datasets and report the results.
|
2407.19675 | Wulian Yun | Wulian Yun, Mengshi Qi, Fei Peng, Huadong Ma | Semi-Supervised Teacher-Reference-Student Architecture for Action
Quality Assessment | To be published in ECCV2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing action quality assessment (AQA) methods often require a large number
of label annotations for fully supervised learning, which are laborious and
expensive. In practice, the labeled data are difficult to obtain because the
AQA annotation process requires domain-specific expertise. In this paper, we
propose a novel semi-supervised method, which can be utilized for better
assessment of the AQA task by exploiting a large amount of unlabeled data and a
small portion of labeled data. Differing from the traditional teacher-student
network, we propose a teacher-reference-student architecture to learn both
unlabeled and labeled data, where the teacher network and the reference network
are used to generate pseudo-labels for unlabeled data to supervise the student
network. Specifically, the teacher predicts pseudo-labels by capturing
high-level features of unlabeled data. The reference network provides adequate
supervision of the student network by referring to additional action
information. Moreover, we introduce confidence memory to improve the
reliability of pseudo-labels by storing the most accurate ever output of the
teacher network and reference network. To validate our method, we conduct
extensive experiments on three AQA benchmark datasets. Experimental results
show that our method achieves significant improvements and outperforms existing
semi-supervised AQA methods.
| [
{
"version": "v1",
"created": "Mon, 29 Jul 2024 03:36:39 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 08:12:47 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yun",
"Wulian",
""
],
[
"Qi",
"Mengshi",
""
],
[
"Peng",
"Fei",
""
],
[
"Ma",
"Huadong",
""
]
] | TITLE: Semi-Supervised Teacher-Reference-Student Architecture for Action
Quality Assessment
ABSTRACT: Existing action quality assessment (AQA) methods often require a large number
of label annotations for fully supervised learning, which are laborious and
expensive. In practice, the labeled data are difficult to obtain because the
AQA annotation process requires domain-specific expertise. In this paper, we
propose a novel semi-supervised method, which can be utilized for better
assessment of the AQA task by exploiting a large amount of unlabeled data and a
small portion of labeled data. Differing from the traditional teacher-student
network, we propose a teacher-reference-student architecture to learn both
unlabeled and labeled data, where the teacher network and the reference network
are used to generate pseudo-labels for unlabeled data to supervise the student
network. Specifically, the teacher predicts pseudo-labels by capturing
high-level features of unlabeled data. The reference network provides adequate
supervision of the student network by referring to additional action
information. Moreover, we introduce confidence memory to improve the
reliability of pseudo-labels by storing the most accurate ever output of the
teacher network and reference network. To validate our method, we conduct
extensive experiments on three AQA benchmark datasets. Experimental results
show that our method achieves significant improvements and outperforms existing
semi-supervised AQA methods.
|
2407.20361 | Aditya Kulkarni | Aditya Kulkarni, Vivek Balachandran, Dinil Mon Divakaran and Tamal Das | From ML to LLM: Evaluating the Robustness of Phishing Webpage Detection
Models against Adversarial Attacks | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Phishing attacks attempt to deceive users into revealing sensitive
information, posing a significant cybersecurity threat. Advances in machine
learning (ML) and deep learning (DL) have led to the development of numerous
phishing webpage detection solutions, but these models remain vulnerable to
adversarial attacks. Evaluating their robustness against adversarial phishing
webpages is essential. Existing tools contain datasets of pre-designed phishing
webpages for a limited number of brands, and lack diversity in phishing
features.
To address these challenges, we develop PhishOracle, a tool that generates
adversarial phishing webpages by embedding diverse phishing features into
legitimate webpages. We evaluate the robustness of three existing task-specific
models -- Stack model, VisualPhishNet, and Phishpedia -- against
PhishOracle-generated adversarial phishing webpages and observe a significant
drop in their detection rates. In contrast, a multimodal large language model
(MLLM)-based phishing detector demonstrates stronger robustness against these
adversarial attacks but still is prone to evasion. Our findings highlight the
vulnerability of phishing detection models to adversarial attacks, emphasizing
the need for more robust detection approaches. Furthermore, we conduct a user
study to evaluate whether PhishOracle-generated adversarial phishing webpages
can deceive users. The results show that many of these phishing webpages evade
not only existing detection models but also users. We also develop the
PhishOracle web app, allowing users to input a legitimate URL, select relevant
phishing features and generate a corresponding phishing webpage. All resources
will be made publicly available on GitHub.
| [
{
"version": "v1",
"created": "Mon, 29 Jul 2024 18:21:34 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Sep 2024 16:07:40 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Mar 2025 11:39:42 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Kulkarni",
"Aditya",
""
],
[
"Balachandran",
"Vivek",
""
],
[
"Divakaran",
"Dinil Mon",
""
],
[
"Das",
"Tamal",
""
]
] | TITLE: From ML to LLM: Evaluating the Robustness of Phishing Webpage Detection
Models against Adversarial Attacks
ABSTRACT: Phishing attacks attempt to deceive users into revealing sensitive
information, posing a significant cybersecurity threat. Advances in machine
learning (ML) and deep learning (DL) have led to the development of numerous
phishing webpage detection solutions, but these models remain vulnerable to
adversarial attacks. Evaluating their robustness against adversarial phishing
webpages is essential. Existing tools contain datasets of pre-designed phishing
webpages for a limited number of brands, and lack diversity in phishing
features.
To address these challenges, we develop PhishOracle, a tool that generates
adversarial phishing webpages by embedding diverse phishing features into
legitimate webpages. We evaluate the robustness of three existing task-specific
models -- Stack model, VisualPhishNet, and Phishpedia -- against
PhishOracle-generated adversarial phishing webpages and observe a significant
drop in their detection rates. In contrast, a multimodal large language model
(MLLM)-based phishing detector demonstrates stronger robustness against these
adversarial attacks but still is prone to evasion. Our findings highlight the
vulnerability of phishing detection models to adversarial attacks, emphasizing
the need for more robust detection approaches. Furthermore, we conduct a user
study to evaluate whether PhishOracle-generated adversarial phishing webpages
can deceive users. The results show that many of these phishing webpages evade
not only existing detection models but also users. We also develop the
PhishOracle web app, allowing users to input a legitimate URL, select relevant
phishing features and generate a corresponding phishing webpage. All resources
will be made publicly available on GitHub.
|
2407.20640 | Peng Ye | Bo Li, Wei Wang, Peng Ye | Improved Bounds for Pure Private Agnostic Learning: Item-Level and
User-Level Privacy | Fix some typos | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine Learning has made remarkable progress in a wide range of fields. In
many scenarios, learning is performed on datasets involving sensitive
information, in which privacy protection is essential for learning algorithms.
In this work, we study pure private learning in the agnostic model -- a
framework reflecting the learning process in practice. We examine the number of
users required under item-level (where each user contributes one example) and
user-level (where each user contributes multiple examples) privacy and derive
several improved upper bounds. For item-level privacy, our algorithm achieves a
near optimal bound for general concept classes. We extend this to the
user-level setting, rendering a tighter upper bound than the one proved by
Ghazi et al. (2023). Lastly, we consider the problem of learning thresholds
under user-level privacy and present an algorithm with a nearly tight user
complexity.
| [
{
"version": "v1",
"created": "Tue, 30 Jul 2024 08:35:26 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 13:19:11 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Li",
"Bo",
""
],
[
"Wang",
"Wei",
""
],
[
"Ye",
"Peng",
""
]
] | TITLE: Improved Bounds for Pure Private Agnostic Learning: Item-Level and
User-Level Privacy
ABSTRACT: Machine Learning has made remarkable progress in a wide range of fields. In
many scenarios, learning is performed on datasets involving sensitive
information, in which privacy protection is essential for learning algorithms.
In this work, we study pure private learning in the agnostic model -- a
framework reflecting the learning process in practice. We examine the number of
users required under item-level (where each user contributes one example) and
user-level (where each user contributes multiple examples) privacy and derive
several improved upper bounds. For item-level privacy, our algorithm achieves a
near optimal bound for general concept classes. We extend this to the
user-level setting, rendering a tighter upper bound than the one proved by
Ghazi et al. (2023). Lastly, we consider the problem of learning thresholds
under user-level privacy and present an algorithm with a nearly tight user
complexity.
|
2407.21368 | Danfeng Guo | Danfeng Guo and Demetri Terzopoulos | Prompting Medical Large Vision-Language Models to Diagnose Pathologies
by Visual Question Answering | Accepted for publication at the Journal of Machine Learning for
Biomedical Imaging (MELBA) https://melba-journal.org/2025:004 | Machine.Learning.for.Biomedical.Imaging. 3 (2025) | 10.59275/j.melba.2025-1a8b | null | cs.CV cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large Vision-Language Models (LVLMs) have achieved significant success in
recent years, and they have been extended to the medical domain. Although
demonstrating satisfactory performance on medical Visual Question Answering
(VQA) tasks, Medical LVLMs (MLVLMs) suffer from the hallucination problem,
which makes them fail to diagnose complex pathologies. Moreover, they readily
fail to learn minority pathologies due to imbalanced training data. We propose
two prompting strategies for MLVLMs that reduce hallucination and improve VQA
performance. In the first strategy, we provide a detailed explanation of the
queried pathology. In the second strategy, we fine-tune a cheap, weak learner
to achieve high performance on a specific metric, and textually provide its
judgment to the MLVLM. Tested on the MIMIC-CXR-JPG and Chexpert datasets, our
methods significantly improve the diagnostic F1 score, with the highest
increase being 0.27. We also demonstrate that our prompting strategies can be
extended to general LVLM domains. Based on POPE metrics, it effectively
suppresses the false negative predictions of existing LVLMs and improves Recall
by approximately 0.07.
| [
{
"version": "v1",
"created": "Wed, 31 Jul 2024 06:34:38 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 06:14:00 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 00:27:45 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Guo",
"Danfeng",
""
],
[
"Terzopoulos",
"Demetri",
""
]
] | TITLE: Prompting Medical Large Vision-Language Models to Diagnose Pathologies
by Visual Question Answering
ABSTRACT: Large Vision-Language Models (LVLMs) have achieved significant success in
recent years, and they have been extended to the medical domain. Although
demonstrating satisfactory performance on medical Visual Question Answering
(VQA) tasks, Medical LVLMs (MLVLMs) suffer from the hallucination problem,
which makes them fail to diagnose complex pathologies. Moreover, they readily
fail to learn minority pathologies due to imbalanced training data. We propose
two prompting strategies for MLVLMs that reduce hallucination and improve VQA
performance. In the first strategy, we provide a detailed explanation of the
queried pathology. In the second strategy, we fine-tune a cheap, weak learner
to achieve high performance on a specific metric, and textually provide its
judgment to the MLVLM. Tested on the MIMIC-CXR-JPG and Chexpert datasets, our
methods significantly improve the diagnostic F1 score, with the highest
increase being 0.27. We also demonstrate that our prompting strategies can be
extended to general LVLM domains. Based on POPE metrics, it effectively
suppresses the false negative predictions of existing LVLMs and improves Recall
by approximately 0.07.
|
2408.02032 | Fushuo Huo | Fushuo Huo, Wenchao Xu, Zhong Zhang, Haozhao Wang, Zhicheng Chen,
Peilin Zhao | Self-Introspective Decoding: Alleviating Hallucinations for Large
Vision-Language Models | ICLR2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | While Large Vision-Language Models (LVLMs) have rapidly advanced in recent
years, the prevalent issue known as the `hallucination' problem has emerged as
a significant bottleneck, hindering their real-world deployments. Existing
methods mitigate this issue mainly from two perspectives: One approach
leverages extra knowledge like robust instruction tuning LVLMs with curated
datasets or employing auxiliary analysis networks, which inevitable incur
additional costs. Another approach, known as contrastive decoding, induces
hallucinations by manually disturbing the vision or instruction raw inputs and
mitigates them by contrasting the outputs of the disturbed and original LVLMs.
However, these approaches rely on empirical holistic input disturbances and
double the inference cost. To avoid these issues, we propose a simple yet
effective method named Self-Introspective Decoding (SID). Our empirical
investigation reveals that pretrained LVLMs can introspectively assess the
importance of vision tokens based on preceding vision and text (both
instruction and generated) tokens. We develop the Context and Text-aware Token
Selection (CT2S) strategy, which preserves only unimportant vision tokens after
early layers of LVLMs to adaptively amplify text-informed hallucination during
the auto-regressive decoding. This approach ensures that multimodal knowledge
absorbed in the early layers induces multimodal contextual rather than aimless
hallucinations. Subsequently, the original token logits subtract the amplified
vision-and-text association hallucinations, guiding LVLMs decoding faithfully.
Extensive experiments illustrate SID generates less-hallucination and
higher-quality texts across various metrics, without extra knowledge and much
additional computation burdens.
| [
{
"version": "v1",
"created": "Sun, 4 Aug 2024 13:50:17 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Oct 2024 12:26:40 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 06:51:13 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Huo",
"Fushuo",
""
],
[
"Xu",
"Wenchao",
""
],
[
"Zhang",
"Zhong",
""
],
[
"Wang",
"Haozhao",
""
],
[
"Chen",
"Zhicheng",
""
],
[
"Zhao",
"Peilin",
""
]
] | TITLE: Self-Introspective Decoding: Alleviating Hallucinations for Large
Vision-Language Models
ABSTRACT: While Large Vision-Language Models (LVLMs) have rapidly advanced in recent
years, the prevalent issue known as the `hallucination' problem has emerged as
a significant bottleneck, hindering their real-world deployments. Existing
methods mitigate this issue mainly from two perspectives: One approach
leverages extra knowledge like robust instruction tuning LVLMs with curated
datasets or employing auxiliary analysis networks, which inevitably incur
additional costs. Another approach, known as contrastive decoding, induces
hallucinations by manually disturbing the vision or instruction raw inputs and
mitigates them by contrasting the outputs of the disturbed and original LVLMs.
However, these approaches rely on empirical holistic input disturbances and
double the inference cost. To avoid these issues, we propose a simple yet
effective method named Self-Introspective Decoding (SID). Our empirical
investigation reveals that pretrained LVLMs can introspectively assess the
importance of vision tokens based on preceding vision and text (both
instruction and generated) tokens. We develop the Context and Text-aware Token
Selection (CT2S) strategy, which preserves only unimportant vision tokens after
early layers of LVLMs to adaptively amplify text-informed hallucination during
the auto-regressive decoding. This approach ensures that multimodal knowledge
absorbed in the early layers induces multimodal contextual rather than aimless
hallucinations. Subsequently, the original token logits subtract the amplified
vision-and-text association hallucinations, guiding LVLMs decoding faithfully.
Extensive experiments illustrate SID generates less-hallucination and
higher-quality texts across various metrics, without extra knowledge and much
additional computation burdens.
|
2408.02833 | Costantino Carugno | Costantino Carugno, Maurizio Ferrari Dacrema, Paolo Cremonesi | Adaptive Learning for Quantum Linear Regression | null | null | 10.1109/QCE60285.2024.00186 | null | quant-ph cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The recent availability of quantum annealers as cloud-based services has
enabled new ways to handle machine learning problems, and several relevant
algorithms have been adapted to run on these devices. In a recent work, linear
regression was formulated as a quadratic binary optimization problem that can
be solved via quantum annealing. Although this approach promises a
computational time advantage for large datasets, the quality of the solution is
limited by the necessary use of a precision vector, used to approximate the
real-numbered regression coefficients in the quantum formulation. In this work,
we focus on the practical challenge of improving the precision vector encoding:
instead of setting an array of generic values equal for all coefficients, we
allow each one to be expressed by its specific precision, which is tuned with a
simple adaptive algorithm. This approach is evaluated on synthetic datasets of
increasing size, and linear regression is solved using the D-Wave Advantage
quantum annealer, as well as classical solvers. To the best of our knowledge,
this is the largest dataset ever evaluated for linear regression on a quantum
annealer. The results show that our formulation is able to deliver improved
solution quality in all instances, and could better exploit the potential of
current quantum devices.
| [
{
"version": "v1",
"created": "Mon, 5 Aug 2024 21:09:01 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Carugno",
"Costantino",
""
],
[
"Dacrema",
"Maurizio Ferrari",
""
],
[
"Cremonesi",
"Paolo",
""
]
] | TITLE: Adaptive Learning for Quantum Linear Regression
ABSTRACT: The recent availability of quantum annealers as cloud-based services has
enabled new ways to handle machine learning problems, and several relevant
algorithms have been adapted to run on these devices. In a recent work, linear
regression was formulated as a quadratic binary optimization problem that can
be solved via quantum annealing. Although this approach promises a
computational time advantage for large datasets, the quality of the solution is
limited by the necessary use of a precision vector, used to approximate the
real-numbered regression coefficients in the quantum formulation. In this work,
we focus on the practical challenge of improving the precision vector encoding:
instead of setting an array of generic values equal for all coefficients, we
allow each one to be expressed by its specific precision, which is tuned with a
simple adaptive algorithm. This approach is evaluated on synthetic datasets of
increasing size, and linear regression is solved using the D-Wave Advantage
quantum annealer, as well as classical solvers. To the best of our knowledge,
this is the largest dataset ever evaluated for linear regression on a quantum
annealer. The results show that our formulation is able to deliver improved
solution quality in all instances, and could better exploit the potential of
current quantum devices.
|
2408.04315 | Wei Huo | Wei Huo, Changxin Liu, Kemi Ding, Karl Henrik Johansson, Ling Shi | Federated Cubic Regularized Newton Learning with
Sparsification-amplified Differential Privacy | null | null | null | null | cs.LG cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | This paper investigates the use of the cubic-regularized Newton method within
a federated learning framework while addressing two major concerns that
commonly arise in federated learning: privacy leakage and communication
bottleneck. We introduce a federated learning algorithm called Differentially
Private Federated Cubic Regularized Newton (DP-FCRN). By leveraging
second-order techniques, our algorithm achieves lower iteration complexity
compared to first-order methods. We also incorporate noise perturbation during
local computations to ensure privacy. Furthermore, we employ sparsification in
uplink transmission, which not only reduces the communication costs but also
amplifies the privacy guarantee. Specifically, this approach reduces the
necessary noise intensity without compromising privacy protection. We analyze
the convergence properties of our algorithm and establish the privacy
guarantee. Finally, we validate the effectiveness of the proposed algorithm
through experiments on a benchmark dataset.
| [
{
"version": "v1",
"created": "Thu, 8 Aug 2024 08:48:54 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 08:45:25 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Huo",
"Wei",
""
],
[
"Liu",
"Changxin",
""
],
[
"Ding",
"Kemi",
""
],
[
"Johansson",
"Karl Henrik",
""
],
[
"Shi",
"Ling",
""
]
] | TITLE: Federated Cubic Regularized Newton Learning with
Sparsification-amplified Differential Privacy
ABSTRACT: This paper investigates the use of the cubic-regularized Newton method within
a federated learning framework while addressing two major concerns that
commonly arise in federated learning: privacy leakage and communication
bottleneck. We introduce a federated learning algorithm called Differentially
Private Federated Cubic Regularized Newton (DP-FCRN). By leveraging
second-order techniques, our algorithm achieves lower iteration complexity
compared to first-order methods. We also incorporate noise perturbation during
local computations to ensure privacy. Furthermore, we employ sparsification in
uplink transmission, which not only reduces the communication costs but also
amplifies the privacy guarantee. Specifically, this approach reduces the
necessary noise intensity without compromising privacy protection. We analyze
the convergence properties of our algorithm and establish the privacy
guarantee. Finally, we validate the effectiveness of the proposed algorithm
through experiments on a benchmark dataset.
|
2408.11470 | Panfeng Liu | Panfeng Liu and Guoliang Qiu and Biaoshuai Tao and Kuan Yang | A Thorough Comparison Between Independent Cascade and
Susceptible-Infected-Recovered Models | 30 pages, 6 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study cascades in social networks with the independent cascade (IC) model
and the Susceptible-Infected-recovered (SIR) model. The well-studied IC model
fails to capture the feature of node recovery, and the SIR model is a variant
of the IC model with the node recovery feature. In the SIR model, by computing
the probability that a node successfully infects another before its recovery
and viewing this probability as the corresponding IC parameter, the SIR model
becomes an "out-going-edge-correlated" version of the IC model: the events of
the infections along different out-going edges of a node become dependent in
the SIR model, whereas these events are independent in the IC model. In this
paper, we thoroughly compare the two models and examine the effect of this
extra dependency in the SIR model. By a carefully designed coupling argument,
we show that the seeds in the IC model have a stronger influence spread than
their counterparts in the SIR model, and sometimes it can be significantly
stronger. Specifically, we prove that, given the same network, the same seed
sets, and the parameters of the two models being set based on the
above-mentioned equivalence, the expected number of infected nodes at the end
of the cascade for the IC model is weakly larger than that for the SIR model,
and there are instances where this dominance is significant. We also study the
influence maximization problem with the SIR model. We show that the
above-mentioned difference in the two models yields different seed-selection
strategies, which motivates the design of influence maximization algorithms
specifically for the SIR model. We design efficient approximation algorithms
with theoretical guarantees by adapting the reverse-reachable-set-based
algorithms, commonly used for the IC model, to the SIR model. Finally, we
conduct experimental studies over real-world datasets.
| [
{
"version": "v1",
"created": "Wed, 21 Aug 2024 09:38:41 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 15:25:19 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Panfeng",
""
],
[
"Qiu",
"Guoliang",
""
],
[
"Tao",
"Biaoshuai",
""
],
[
"Yang",
"Kuan",
""
]
] | TITLE: A Thorough Comparison Between Independent Cascade and
Susceptible-Infected-Recovered Models
ABSTRACT: We study cascades in social networks with the independent cascade (IC) model
and the Susceptible-Infected-recovered (SIR) model. The well-studied IC model
fails to capture the feature of node recovery, and the SIR model is a variant
of the IC model with the node recovery feature. In the SIR model, by computing
the probability that a node successfully infects another before its recovery
and viewing this probability as the corresponding IC parameter, the SIR model
becomes an "out-going-edge-correlated" version of the IC model: the events of
the infections along different out-going edges of a node become dependent in
the SIR model, whereas these events are independent in the IC model. In this
paper, we thoroughly compare the two models and examine the effect of this
extra dependency in the SIR model. By a carefully designed coupling argument,
we show that the seeds in the IC model have a stronger influence spread than
their counterparts in the SIR model, and sometimes it can be significantly
stronger. Specifically, we prove that, given the same network, the same seed
sets, and the parameters of the two models being set based on the
above-mentioned equivalence, the expected number of infected nodes at the end
of the cascade for the IC model is weakly larger than that for the SIR model,
and there are instances where this dominance is significant. We also study the
influence maximization problem with the SIR model. We show that the
above-mentioned difference in the two models yields different seed-selection
strategies, which motivates the design of influence maximization algorithms
specifically for the SIR model. We design efficient approximation algorithms
with theoretical guarantees by adapting the reverse-reachable-set-based
algorithms, commonly used for the IC model, to the SIR model. Finally, we
conduct experimental studies over real-world datasets.
|
2408.12871 | Xiaochen Zhou | Zhou Xiaochen, Liang Xingzhou, Zou Hui, Lu Yi, Qu Jingjing | DeepDiveAI: Identifying AI Related Documents in Large Scale Literature
Data | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose a method to automatically classify AI-related
documents from large-scale literature databases, leading to the creation of an
AI-related literature dataset, named DeepDiveAI. The dataset construction
approach integrates expert knowledge with the capabilities of advanced models,
structured across two global stages. In the first stage, expert-curated
classification datasets are used to train an LSTM model, which classifies
coarse AI related records from large-scale datasets. In the second stage, we
use Qwen2.5 Plus to annotate a random 10% of the coarse AI-related records,
which are then used to train a BERT binary classifier. This step further
refines the coarse AI related record set to obtain the final DeepDiveAI
dataset. Evaluation results demonstrate that the entire workflow can
efficiently and accurately identify AI-related literature from large-scale
datasets.
| [
{
"version": "v1",
"created": "Fri, 23 Aug 2024 07:05:12 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Aug 2024 11:30:28 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Oct 2024 07:21:57 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Mar 2025 12:46:22 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Xiaochen",
"Zhou",
""
],
[
"Xingzhou",
"Liang",
""
],
[
"Hui",
"Zou",
""
],
[
"Yi",
"Lu",
""
],
[
"Jingjing",
"Qu",
""
]
] | TITLE: DeepDiveAI: Identifying AI Related Documents in Large Scale Literature
Data
ABSTRACT: In this paper, we propose a method to automatically classify AI-related
documents from large-scale literature databases, leading to the creation of an
AI-related literature dataset, named DeepDiveAI. The dataset construction
approach integrates expert knowledge with the capabilities of advanced models,
structured across two global stages. In the first stage, expert-curated
classification datasets are used to train an LSTM model, which classifies
coarse AI related records from large-scale datasets. In the second stage, we
use Qwen2.5 Plus to annotate a random 10% of the coarse AI-related records,
which are then used to train a BERT binary classifier. This step further
refines the coarse AI related record set to obtain the final DeepDiveAI
dataset. Evaluation results demonstrate that the entire workflow can
efficiently and accurately identify AI-related literature from large-scale
datasets.
|
2408.15185 | Ghazal Alinezhad Noghre | Ghazal Alinezhad Noghre, Armin Danesh Pazho, Hamed Tabkhi | Human-Centric Video Anomaly Detection Through Spatio-Temporal Pose
Tokenization and Transformer | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video Anomaly Detection (VAD) presents a significant challenge in computer
vision, particularly due to the unpredictable and infrequent nature of
anomalous events, coupled with the diverse and dynamic environments in which
they occur. Human-centric VAD, a specialized area within this domain, faces
additional complexities, including variations in human behavior, potential
biases in data, and substantial privacy concerns related to human subjects.
These issues complicate the development of models that are both robust and
generalizable. To address these challenges, recent advancements have focused on
pose-based VAD, which leverages human pose as a high-level feature to mitigate
privacy concerns, reduce appearance biases, and minimize background
interference. In this paper, we introduce SPARTA, a novel transformer-based
architecture designed specifically for human-centric pose-based VAD. SPARTA
introduces an innovative Spatio-Temporal Pose and Relative Pose (ST-PRP)
tokenization method that produces an enriched representation of human motion
over time. This approach ensures that the transformer's attention mechanism
captures both spatial and temporal patterns simultaneously, rather than
focusing on only one aspect. The addition of the relative pose further
emphasizes subtle deviations from normal human movements. The architecture's
core, a novel Unified Encoder Twin Decoders (UETD) transformer, significantly
improves the detection of anomalous behaviors in video data. Extensive
evaluations across multiple benchmark datasets demonstrate that SPARTA
consistently outperforms existing methods, establishing a new state-of-the-art
in pose-based VAD.
| [
{
"version": "v1",
"created": "Tue, 27 Aug 2024 16:40:14 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 14:05:49 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Noghre",
"Ghazal Alinezhad",
""
],
[
"Pazho",
"Armin Danesh",
""
],
[
"Tabkhi",
"Hamed",
""
]
] | TITLE: Human-Centric Video Anomaly Detection Through Spatio-Temporal Pose
Tokenization and Transformer
ABSTRACT: Video Anomaly Detection (VAD) presents a significant challenge in computer
vision, particularly due to the unpredictable and infrequent nature of
anomalous events, coupled with the diverse and dynamic environments in which
they occur. Human-centric VAD, a specialized area within this domain, faces
additional complexities, including variations in human behavior, potential
biases in data, and substantial privacy concerns related to human subjects.
These issues complicate the development of models that are both robust and
generalizable. To address these challenges, recent advancements have focused on
pose-based VAD, which leverages human pose as a high-level feature to mitigate
privacy concerns, reduce appearance biases, and minimize background
interference. In this paper, we introduce SPARTA, a novel transformer-based
architecture designed specifically for human-centric pose-based VAD. SPARTA
introduces an innovative Spatio-Temporal Pose and Relative Pose (ST-PRP)
tokenization method that produces an enriched representation of human motion
over time. This approach ensures that the transformer's attention mechanism
captures both spatial and temporal patterns simultaneously, rather than
focusing on only one aspect. The addition of the relative pose further
emphasizes subtle deviations from normal human movements. The architecture's
core, a novel Unified Encoder Twin Decoders (UETD) transformer, significantly
improves the detection of anomalous behaviors in video data. Extensive
evaluations across multiple benchmark datasets demonstrate that SPARTA
consistently outperforms existing methods, establishing a new state-of-the-art
in pose-based VAD.
|
2408.16444 | Leandro Car\'isio Fernandes | Leandro Car\'isio Fernandes, Gustavo Bartz Guedes, Thiago Soares
Laitz, Thales Sales Almeida, Rodrigo Nogueira, Roberto Lotufo, Jayr Pereira | SurveySum: A Dataset for Summarizing Multiple Scientific Articles into a
Survey Section | 15 pages, 6 figures, 1 table. Submitted to BRACIS 2024 | null | 10.1007/978-3-031-79032-4_30 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Document summarization is the task of shortening texts into concise and
informative summaries. This paper introduces a novel dataset designed for
summarizing multiple scientific articles into a section of a survey. Our
contributions are: (1) SurveySum, a new dataset addressing the gap in
domain-specific summarization tools; (2) two specific pipelines to summarize
scientific articles into a section of a survey; and (3) the evaluation of these
pipelines using multiple metrics to compare their performance. Our results
highlight the importance of high-quality retrieval stages and the impact of
different configurations on the quality of generated summaries.
| [
{
"version": "v1",
"created": "Thu, 29 Aug 2024 11:13:23 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Fernandes",
"Leandro Carísio",
""
],
[
"Guedes",
"Gustavo Bartz",
""
],
[
"Laitz",
"Thiago Soares",
""
],
[
"Almeida",
"Thales Sales",
""
],
[
"Nogueira",
"Rodrigo",
""
],
[
"Lotufo",
"Roberto",
""
],
[
"Pereira",
"Jayr",
""
]
] | TITLE: SurveySum: A Dataset for Summarizing Multiple Scientific Articles into a
Survey Section
ABSTRACT: Document summarization is the task of shortening texts into concise and
informative summaries. This paper introduces a novel dataset designed for
summarizing multiple scientific articles into a section of a survey. Our
contributions are: (1) SurveySum, a new dataset addressing the gap in
domain-specific summarization tools; (2) two specific pipelines to summarize
scientific articles into a section of a survey; and (3) the evaluation of these
pipelines using multiple metrics to compare their performance. Our results
highlight the importance of high-quality retrieval stages and the impact of
different configurations on the quality of generated summaries.
|
2409.07896 | Shun Zou | Shun Zou, Zhuo Zhang, Yi Zou, Guangwei Gao | MambaMIC: An Efficient Baseline for Microscopic Image Classification
with State Space Models | 7 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, CNN and Transformer-based methods have made significant
progress in Microscopic Image Classification (MIC). However, existing
approaches still face the dilemma between global modeling and efficient
computation. While the Selective State Space Model (SSM) can simulate
long-range dependencies with linear complexity, it still encounters challenges
in MIC, such as local pixel forgetting, channel redundancy, and lack of local
perception. To address these issues, we propose a simple yet efficient vision
backbone for MIC tasks, named MambaMIC. Specifically, we introduce a
Local-Global dual-branch aggregation module: the MambaMIC Block, designed to
effectively capture and fuse local connectivity and global dependencies. In the
local branch, we use local convolutions to capture pixel similarity, mitigating
local pixel forgetting and enhancing perception. In the global branch, SSM
extracts global dependencies, while Locally Aware Enhanced Filter reduces
channel redundancy and local pixel forgetting. Additionally, we design a
Feature Modulation Interaction Aggregation Module for deep feature interaction
and key feature re-localization. Extensive benchmarking shows that MambaMIC
achieves state-of-the-art performance across five datasets. Code is available
at https://zs1314.github.io/MambaMIC
| [
{
"version": "v1",
"created": "Thu, 12 Sep 2024 10:01:33 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 03:18:57 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zou",
"Shun",
""
],
[
"Zhang",
"Zhuo",
""
],
[
"Zou",
"Yi",
""
],
[
"Gao",
"Guangwei",
""
]
] | TITLE: MambaMIC: An Efficient Baseline for Microscopic Image Classification
with State Space Models
ABSTRACT: In recent years, CNN and Transformer-based methods have made significant
progress in Microscopic Image Classification (MIC). However, existing
approaches still face the dilemma between global modeling and efficient
computation. While the Selective State Space Model (SSM) can simulate
long-range dependencies with linear complexity, it still encounters challenges
in MIC, such as local pixel forgetting, channel redundancy, and lack of local
perception. To address these issues, we propose a simple yet efficient vision
backbone for MIC tasks, named MambaMIC. Specifically, we introduce a
Local-Global dual-branch aggregation module: the MambaMIC Block, designed to
effectively capture and fuse local connectivity and global dependencies. In the
local branch, we use local convolutions to capture pixel similarity, mitigating
local pixel forgetting and enhancing perception. In the global branch, SSM
extracts global dependencies, while Locally Aware Enhanced Filter reduces
channel redundancy and local pixel forgetting. Additionally, we design a
Feature Modulation Interaction Aggregation Module for deep feature interaction
and key feature re-localization. Extensive benchmarking shows that MambaMIC
achieves state-of-the-art performance across five datasets. Code is available
at https://zs1314.github.io/MambaMIC
|
2409.08481 | Zhuoyuan Li | Zhuoyuan Li, Junqi Liao, Chuanbo Tang, Haotian Zhang, Yuqi Li, Yifan
Bian, Xihua Sheng, Xinmin Feng, Yao Li, Changsheng Gao, Li Li, Dong Liu, Feng
Wu | USTC-TD: A Test Dataset and Benchmark for Image and Video Coding in
2020s | 16 pages. Project Page: https://esakak.github.io/USTC-TD.
Supplementary Material:
https://zhuoyuanli1997.github.io/files/USTC-TD/sup.pdf | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image/video coding has been a remarkable research area for both academia and
industry for many years. Testing datasets, especially high-quality image/video
datasets are desirable for the justified evaluation of coding-related research,
practical applications, and standardization activities. We put forward a test
dataset namely USTC-TD, which has been successfully adopted in the practical
end-to-end image/video coding challenge of the IEEE International Conference on
Visual Communications and Image Processing (VCIP) in 2022 and 2023. USTC-TD
contains 40 images at 4K spatial resolution and 10 video sequences at 1080p
spatial resolution, featuring various content due to the diverse environmental
factors (e.g. scene type, texture, motion, view) and the designed imaging
factors (e.g. illumination, lens, shadow). We quantitatively evaluate USTC-TD
on different image/video features (spatial, temporal, color, lightness), and
compare it with the previous image/video test datasets, which verifies its
excellent compensation for the shortcomings of existing datasets. We also
evaluate both classic standardized and recently learned image/video coding
schemes on USTC-TD using objective quality metrics (PSNR, MS-SSIM, VMAF) and
subjective quality metric (MOS), providing an extensive benchmark for these
evaluated schemes. Based on the characteristics and specific design of the
proposed test dataset, we analyze the benchmark performance and shed light on
the future research and development of image/video coding. All the data are
released online: https://esakak.github.io/USTC-TD.
| [
{
"version": "v1",
"created": "Fri, 13 Sep 2024 02:13:11 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Nov 2024 05:13:21 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 02:09:46 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Li",
"Zhuoyuan",
""
],
[
"Liao",
"Junqi",
""
],
[
"Tang",
"Chuanbo",
""
],
[
"Zhang",
"Haotian",
""
],
[
"Li",
"Yuqi",
""
],
[
"Bian",
"Yifan",
""
],
[
"Sheng",
"Xihua",
""
],
[
"Feng",
"Xinmin",
""
],
[
"Li",
"Yao",
""
],
[
"Gao",
"Changsheng",
""
],
[
"Li",
"Li",
""
],
[
"Liu",
"Dong",
""
],
[
"Wu",
"Feng",
""
]
] | TITLE: USTC-TD: A Test Dataset and Benchmark for Image and Video Coding in
2020s
ABSTRACT: Image/video coding has been a remarkable research area for both academia and
industry for many years. Testing datasets, especially high-quality image/video
datasets are desirable for the justified evaluation of coding-related research,
practical applications, and standardization activities. We put forward a test
dataset namely USTC-TD, which has been successfully adopted in the practical
end-to-end image/video coding challenge of the IEEE International Conference on
Visual Communications and Image Processing (VCIP) in 2022 and 2023. USTC-TD
contains 40 images at 4K spatial resolution and 10 video sequences at 1080p
spatial resolution, featuring various content due to the diverse environmental
factors (e.g. scene type, texture, motion, view) and the designed imaging
factors (e.g. illumination, lens, shadow). We quantitatively evaluate USTC-TD
on different image/video features (spatial, temporal, color, lightness), and
compare it with the previous image/video test datasets, which verifies its
excellent compensation for the shortcomings of existing datasets. We also
evaluate both classic standardized and recently learned image/video coding
schemes on USTC-TD using objective quality metrics (PSNR, MS-SSIM, VMAF) and
subjective quality metric (MOS), providing an extensive benchmark for these
evaluated schemes. Based on the characteristics and specific design of the
proposed test dataset, we analyze the benchmark performance and shed light on
the future research and development of image/video coding. All the data are
released online: https://esakak.github.io/USTC-TD.
|
2409.09021 | Soumitra Kundu | Soumitra Kundu and Gargi Panda and Saumik Bhattacharya and Aurobinda
Routray and Rajlakshmi Guha | INN-PAR: Invertible Neural Network for PPG to ABP Reconstruction | ICASSP 2025 | null | 10.1109/ICASSP49660.2025.10888915 | null | cs.LG cs.HC | http://creativecommons.org/licenses/by/4.0/ | Non-invasive and continuous blood pressure (BP) monitoring is essential for
the early prevention of many cardiovascular diseases. Estimating arterial blood
pressure (ABP) from photoplethysmography (PPG) has emerged as a promising
solution. However, existing deep learning approaches for PPG-to-ABP
reconstruction (PAR) encounter certain information loss, impacting the
precision of the reconstructed signal. To overcome this limitation, we
introduce an invertible neural network for PPG to ABP reconstruction (INN-PAR),
which employs a series of invertible blocks to jointly learn the mapping
between PPG and its gradient with the ABP signal and its gradient. INN-PAR
efficiently captures both forward and inverse mappings simultaneously, thereby
preventing information loss. By integrating signal gradients into the learning
process, INN-PAR enhances the network's ability to capture essential
high-frequency details, leading to more accurate signal reconstruction.
Moreover, we propose a multi-scale convolution module (MSCM) within the
invertible block, enabling the model to learn features across multiple scales
effectively. We have experimented on two benchmark datasets, which show that
INN-PAR significantly outperforms the state-of-the-art methods in both waveform
reconstruction and BP measurement accuracy. Codes can be found at:
https://github.com/soumitra1992/INNPAR-PPG2ABP.
| [
{
"version": "v1",
"created": "Fri, 13 Sep 2024 17:48:48 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 17:28:15 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Kundu",
"Soumitra",
""
],
[
"Panda",
"Gargi",
""
],
[
"Bhattacharya",
"Saumik",
""
],
[
"Routray",
"Aurobinda",
""
],
[
"Guha",
"Rajlakshmi",
""
]
] | TITLE: INN-PAR: Invertible Neural Network for PPG to ABP Reconstruction
ABSTRACT: Non-invasive and continuous blood pressure (BP) monitoring is essential for
the early prevention of many cardiovascular diseases. Estimating arterial blood
pressure (ABP) from photoplethysmography (PPG) has emerged as a promising
solution. However, existing deep learning approaches for PPG-to-ABP
reconstruction (PAR) encounter certain information loss, impacting the
precision of the reconstructed signal. To overcome this limitation, we
introduce an invertible neural network for PPG to ABP reconstruction (INN-PAR),
which employs a series of invertible blocks to jointly learn the mapping
between PPG and its gradient with the ABP signal and its gradient. INN-PAR
efficiently captures both forward and inverse mappings simultaneously, thereby
preventing information loss. By integrating signal gradients into the learning
process, INN-PAR enhances the network's ability to capture essential
high-frequency details, leading to more accurate signal reconstruction.
Moreover, we propose a multi-scale convolution module (MSCM) within the
invertible block, enabling the model to learn features across multiple scales
effectively. We have experimented on two benchmark datasets, which show that
INN-PAR significantly outperforms the state-of-the-art methods in both waveform
reconstruction and BP measurement accuracy. Codes can be found at:
https://github.com/soumitra1992/INNPAR-PPG2ABP.
|
2409.10687 | Ruchik Mishra | Ruchik Mishra, Andrew Frye, Madan Mohan Rayguru, Dan O. Popa | Personalized Speech Emotion Recognition in Human-Robot Interaction using
Vision Transformers | This work has been accepted for the IEEE Robotics and Automation
Letters (RA-L) | null | null | null | eess.AS cs.HC cs.RO cs.SD | http://creativecommons.org/licenses/by/4.0/ | Emotions are an essential element in verbal communication, so understanding
individuals' affect during a human-robot interaction (HRI) becomes imperative.
This paper investigates the application of vision transformer models, namely
ViT (Vision Transformers) and BEiT (BERT Pre-Training of Image Transformers)
pipelines, for Speech Emotion Recognition (SER) in HRI. The focus is to
generalize the SER models for individual speech characteristics by fine-tuning
these models on benchmark datasets and exploiting ensemble methods. For this
purpose, we collected audio data from different human subjects having
pseudo-naturalistic conversations with the NAO robot. We then fine-tuned our
ViT and BEiT-based models and tested these models on unseen speech samples from
the participants. In the results, we show that fine-tuning vision transformers
on benchmark datasets and then using either these already fine-tuned models
or ensembling ViT/BEiT models gets us the highest classification accuracies per
individual when it comes to identifying four primary emotions from their
speech: neutral, happy, sad, and angry, as compared to fine-tuning vanilla-ViTs
or BEiTs.
| [
{
"version": "v1",
"created": "Mon, 16 Sep 2024 19:34:34 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Nov 2024 23:26:24 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 14:58:30 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Mishra",
"Ruchik",
""
],
[
"Frye",
"Andrew",
""
],
[
"Rayguru",
"Madan Mohan",
""
],
[
"Popa",
"Dan O.",
""
]
] | TITLE: Personalized Speech Emotion Recognition in Human-Robot Interaction using
Vision Transformers
ABSTRACT: Emotions are an essential element in verbal communication, so understanding
individuals' affect during a human-robot interaction (HRI) becomes imperative.
This paper investigates the application of vision transformer models, namely
ViT (Vision Transformers) and BEiT (BERT Pre-Training of Image Transformers)
pipelines, for Speech Emotion Recognition (SER) in HRI. The focus is to
generalize the SER models for individual speech characteristics by fine-tuning
these models on benchmark datasets and exploiting ensemble methods. For this
purpose, we collected audio data from different human subjects having
pseudo-naturalistic conversations with the NAO robot. We then fine-tuned our
ViT and BEiT-based models and tested these models on unseen speech samples from
the participants. In the results, we show that fine-tuning vision transformers
on benchmark datasets and then using either these already fine-tuned models
or ensembling ViT/BEiT models gets us the highest classification accuracies per
individual when it comes to identifying four primary emotions from their
speech: neutral, happy, sad, and angry, as compared to fine-tuning vanilla-ViTs
or BEiTs.
|
2409.10831 | Phillip Long | Phillip Long, Zachary Novack, Taylor Berg-Kirkpatrick, Julian McAuley | PDMX: A Large-Scale Public Domain MusicXML Dataset for Symbolic Music
Processing | Accepted to 2025 IEEE International Conference on Acoustics, Speech
and Signal Processing (ICASSP) | null | 10.1109/ICASSP49660.2025.10890217 | null | cs.SD cs.AI cs.LG cs.MM eess.AS | http://creativecommons.org/licenses/by/4.0/ | The recent explosion of generative AI-Music systems has raised numerous
concerns over data copyright, licensing music from musicians, and the conflict
between open-source AI and large prestige companies. Such issues highlight the
need for publicly available, copyright-free musical data, in which there is a
large shortage, particularly for symbolic music data. To alleviate this issue,
we present PDMX: a large-scale open-source dataset of over 250K public domain
MusicXML scores collected from the score-sharing forum MuseScore, making it the
largest available copyright-free symbolic music dataset to our knowledge. PDMX
additionally includes a wealth of both tag and user interaction metadata,
allowing us to efficiently analyze the dataset and filter for high quality
user-generated scores. Given the additional metadata afforded by our data
collection process, we conduct multitrack music generation experiments
evaluating how different representative subsets of PDMX lead to different
behaviors in downstream models, and how user-rating statistics can be used as
an effective measure of data quality. Examples can be found at
https://pnlong.github.io/PDMX.demo/.
| [
{
"version": "v1",
"created": "Tue, 17 Sep 2024 01:48:42 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 03:08:29 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Long",
"Phillip",
""
],
[
"Novack",
"Zachary",
""
],
[
"Berg-Kirkpatrick",
"Taylor",
""
],
[
"McAuley",
"Julian",
""
]
] | TITLE: PDMX: A Large-Scale Public Domain MusicXML Dataset for Symbolic Music
Processing
ABSTRACT: The recent explosion of generative AI-Music systems has raised numerous
concerns over data copyright, licensing music from musicians, and the conflict
between open-source AI and large prestige companies. Such issues highlight the
need for publicly available, copyright-free musical data, in which there is a
large shortage, particularly for symbolic music data. To alleviate this issue,
we present PDMX: a large-scale open-source dataset of over 250K public domain
MusicXML scores collected from the score-sharing forum MuseScore, making it the
largest available copyright-free symbolic music dataset to our knowledge. PDMX
additionally includes a wealth of both tag and user interaction metadata,
allowing us to efficiently analyze the dataset and filter for high quality
user-generated scores. Given the additional metadata afforded by our data
collection process, we conduct multitrack music generation experiments
evaluating how different representative subsets of PDMX lead to different
behaviors in downstream models, and how user-rating statistics can be used as
an effective measure of data quality. Examples can be found at
https://pnlong.github.io/PDMX.demo/.
|
2409.13477 | Chinmay Surendra Rao | Chinmay Rao, Matthias van Osch, Nicola Pezzotti, Jeroen de Bresser,
Laurens Beljaards, Jakob Meineke, Elwin de Weerdt, Huangling Lu, Mariya
Doneva, and Marius Staring | A Plug-and-Play Method for Guided Multi-contrast MRI Reconstruction
based on Content/Style Modeling | This work has been submitted to the IEEE for possible publication | null | null | null | eess.IV cs.CV physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | Since multiple MRI contrasts of the same anatomy contain redundant
information, one contrast can be used as a prior for guiding the reconstruction
of an undersampled subsequent contrast. To this end, several learning-based
guided reconstruction methods have been proposed. However, a key challenge is
the requirement of large paired training datasets comprising raw data and
aligned reference images. We propose a modular two-stage approach for guided
reconstruction addressing this issue, which additionally provides an
explanatory framework for the multi-contrast problem in terms of the shared and
non-shared generative factors underlying two given contrasts. A content/style
model of two-contrast image data is learned from a largely unpaired
image-domain dataset and is subsequently applied as a plug-and-play operator in
iterative reconstruction. The disentanglement of content and style allows
explicit representation of contrast-independent and contrast-specific factors.
Based on this, incorporating prior information into the reconstruction reduces
to simply replacing the aliased content of the image estimate with high-quality
content derived from the reference scan. Combining this component with a data
consistency step and introducing a general corrective process for the content
yields an iterative scheme. We name this novel approach PnP-MUNIT. Various
aspects like interpretability and convergence are explored via simulations.
Furthermore, its practicality is demonstrated on the NYU fastMRI DICOM dataset
and two in-house multi-coil raw datasets, obtaining up to 32.6% more
acceleration over learning-based non-guided reconstruction for a given SSIM. In
a radiological task, PnP-MUNIT allowed 33.3% more acceleration over clinical
reconstruction at diagnostic quality.
| [
{
"version": "v1",
"created": "Fri, 20 Sep 2024 13:08:51 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 23:39:10 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Rao",
"Chinmay",
""
],
[
"van Osch",
"Matthias",
""
],
[
"Pezzotti",
"Nicola",
""
],
[
"de Bresser",
"Jeroen",
""
],
[
"Beljaards",
"Laurens",
""
],
[
"Meineke",
"Jakob",
""
],
[
"de Weerdt",
"Elwin",
""
],
[
"Lu",
"Huangling",
""
],
[
"Doneva",
"Mariya",
""
],
[
"Staring",
"Marius",
""
]
] | TITLE: A Plug-and-Play Method for Guided Multi-contrast MRI Reconstruction
based on Content/Style Modeling
ABSTRACT: Since multiple MRI contrasts of the same anatomy contain redundant
information, one contrast can be used as a prior for guiding the reconstruction
of an undersampled subsequent contrast. To this end, several learning-based
guided reconstruction methods have been proposed. However, a key challenge is
the requirement of large paired training datasets comprising raw data and
aligned reference images. We propose a modular two-stage approach for guided
reconstruction addressing this issue, which additionally provides an
explanatory framework for the multi-contrast problem in terms of the shared and
non-shared generative factors underlying two given contrasts. A content/style
model of two-contrast image data is learned from a largely unpaired
image-domain dataset and is subsequently applied as a plug-and-play operator in
iterative reconstruction. The disentanglement of content and style allows
explicit representation of contrast-independent and contrast-specific factors.
Based on this, incorporating prior information into the reconstruction reduces
to simply replacing the aliased content of the image estimate with high-quality
content derived from the reference scan. Combining this component with a data
consistency step and introducing a general corrective process for the content
yields an iterative scheme. We name this novel approach PnP-MUNIT. Various
aspects like interpretability and convergence are explored via simulations.
Furthermore, its practicality is demonstrated on the NYU fastMRI DICOM dataset
and two in-house multi-coil raw datasets, obtaining up to 32.6% more
acceleration over learning-based non-guided reconstruction for a given SSIM. In
a radiological task, PnP-MUNIT allowed 33.3% more acceleration over clinical
reconstruction at diagnostic quality.
|
2409.14876 | Shilong Yang | Shilong Yang, Chulong Zhang, Qi Zang, Juan Yu, Liang Zeng, Xiao Luo,
Yexuan Xing, Xin Pan, Qi Li, Xiaokun Liang, Yaoqin Xie | Mammo-Clustering: A Multi-views Tri-level Information Fusion Context
Clustering Framework for Localization and Classification in Mammography | 10 pages, 6 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Breast cancer is a significant global health issue, and the diagnosis of
breast cancer from imaging has always been challenging. Mammography images typically have
extremely high resolution, with lesions occupying only a very small area.
Down-sampling in neural networks can easily lead to the loss of
microcalcifications or subtle structures, making it difficult for traditional
neural network architectures to address these issues. To tackle these
challenges, we propose a Context Clustering Network with triple information
fusion. Firstly, compared to CNNs or transformers, we find that Context
clustering methods (1) are more computationally efficient and (2) can more
easily associate structural or pathological features, making them suitable for
the clinical tasks of mammography. Secondly, we propose a triple information
fusion mechanism that integrates global information, feature-based local
information, and patch-based local information. The proposed approach is
rigorously evaluated on two public datasets, Vindr-Mammo and CBIS-DDSM, using
five independent splits to ensure statistical robustness. Our method achieves
an AUC of 0.828 on Vindr-Mammo and 0.805 on CBIS-DDSM, outperforming the next
best method by 3.1% and 2.4%, respectively. These improvements are
statistically significant (p<0.05), underscoring the benefits of Context
Clustering Network with triple information fusion. Overall, our Context
Clustering framework demonstrates strong potential as a scalable and
cost-effective solution for large-scale mammography screening, enabling more
efficient and accurate breast cancer detection. Access to our method is
available at https://github.com/Sohyu1/Mammo_Clustering.
| [
{
"version": "v1",
"created": "Mon, 23 Sep 2024 10:17:13 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Feb 2025 16:00:00 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 17:27:04 GMT"
},
{
"version": "v4",
"created": "Sat, 15 Mar 2025 07:30:53 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yang",
"Shilong",
""
],
[
"Zhang",
"Chulong",
""
],
[
"Zang",
"Qi",
""
],
[
"Yu",
"Juan",
""
],
[
"Zeng",
"Liang",
""
],
[
"Luo",
"Xiao",
""
],
[
"Xing",
"Yexuan",
""
],
[
"Pan",
"Xin",
""
],
[
"Li",
"Qi",
""
],
[
"Liang",
"Xiaokun",
""
],
[
"Xie",
"Yaoqin",
""
]
] | TITLE: Mammo-Clustering: A Multi-views Tri-level Information Fusion Context
Clustering Framework for Localization and Classification in Mammography
ABSTRACT: Breast cancer is a significant global health issue, and the diagnosis of
breast cancer from imaging has always been challenging. Mammography images typically have
extremely high resolution, with lesions occupying only a very small area.
Down-sampling in neural networks can easily lead to the loss of
microcalcifications or subtle structures, making it difficult for traditional
neural network architectures to address these issues. To tackle these
challenges, we propose a Context Clustering Network with triple information
fusion. Firstly, compared to CNNs or transformers, we find that Context
clustering methods (1) are more computationally efficient and (2) can more
easily associate structural or pathological features, making them suitable for
the clinical tasks of mammography. Secondly, we propose a triple information
fusion mechanism that integrates global information, feature-based local
information, and patch-based local information. The proposed approach is
rigorously evaluated on two public datasets, Vindr-Mammo and CBIS-DDSM, using
five independent splits to ensure statistical robustness. Our method achieves
an AUC of 0.828 on Vindr-Mammo and 0.805 on CBIS-DDSM, outperforming the next
best method by 3.1% and 2.4%, respectively. These improvements are
statistically significant (p<0.05), underscoring the benefits of Context
Clustering Network with triple information fusion. Overall, our Context
Clustering framework demonstrates strong potential as a scalable and
cost-effective solution for large-scale mammography screening, enabling more
efficient and accurate breast cancer detection. Access to our method is
available at https://github.com/Sohyu1/Mammo_Clustering.
|
2409.18896 | Denys Iliash | Denys Iliash, Hanxiao Jiang, Yiming Zhang, Manolis Savva, Angel X.
Chang | S2O: Static to Openable Enhancement for Articulated 3D Objects | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Despite much progress in large 3D datasets there are currently few
interactive 3D object datasets, and their scale is limited due to the manual
effort required in their construction. We introduce the static to openable
(S2O) task which creates interactive articulated 3D objects from static
counterparts through openable part detection, motion prediction, and interior
geometry completion. We formulate a unified framework to tackle this task, and
curate a challenging dataset of openable 3D objects that serves as a test bed
for systematic evaluation. Our experiments benchmark methods from prior work,
extended and improved methods, and simple yet effective heuristics for the S2O
task. We find that turning static 3D objects into interactively openable
counterparts is possible but that all methods struggle to generalize to
realistic settings of the task, and we highlight promising future work
directions. Our work enables efficient creation of interactive 3D objects for
robotic manipulation and embodied AI tasks.
| [
{
"version": "v1",
"created": "Fri, 27 Sep 2024 16:34:13 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 19:13:28 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Iliash",
"Denys",
""
],
[
"Jiang",
"Hanxiao",
""
],
[
"Zhang",
"Yiming",
""
],
[
"Savva",
"Manolis",
""
],
[
"Chang",
"Angel X.",
""
]
] | TITLE: S2O: Static to Openable Enhancement for Articulated 3D Objects
ABSTRACT: Despite much progress in large 3D datasets there are currently few
interactive 3D object datasets, and their scale is limited due to the manual
effort required in their construction. We introduce the static to openable
(S2O) task which creates interactive articulated 3D objects from static
counterparts through openable part detection, motion prediction, and interior
geometry completion. We formulate a unified framework to tackle this task, and
curate a challenging dataset of openable 3D objects that serves as a test bed
for systematic evaluation. Our experiments benchmark methods from prior work,
extended and improved methods, and simple yet effective heuristics for the S2O
task. We find that turning static 3D objects into interactively openable
counterparts is possible but that all methods struggle to generalize to
realistic settings of the task, and we highlight promising future work
directions. Our work enables efficient creation of interactive 3D objects for
robotic manipulation and embodied AI tasks.
|
2409.19583 | Jun Liu | Jun Liu, Geng Yuan, Weihao Zeng, Hao Tang, Wenbin Zhang, Xue Lin,
XiaoLin Xu, Dong Huang, and Yanzhi Wang | Brain Tumor Classification on MRI in Light of Molecular Markers | ICAI'22 - The 24th International Conference on Artificial
Intelligence, The 2022 World Congress in Computer Science, Computer
Engineering, & Applied Computing (CSCE'22), Las Vegas, USA. The paper
acceptance rate 17% for regular papers. The publication of the CSCE 2022
conference proceedings has been delayed due to the pandemic | Springer Nature - Book Series: Transactions on Computational
Science & Computational Intelligence, 2022 | null | null | eess.IV cs.CV cs.LG q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | In research findings, co-deletion of the 1p/19q gene is associated with
clinical outcomes in low-grade gliomas. The ability to predict 1p19q status is
critical for treatment planning and patient follow-up. This study aims to
utilize a specially designed MRI-based convolutional neural network for brain
cancer detection. Although public networks such as ResNet and AlexNet can
effectively diagnose brain cancers using transfer learning, such models include
quite a few weights that have nothing to do with medical images. As a result,
the diagnostic results of the transfer learning model are unreliable. To deal
with the problem of trustworthiness, we create the model from the ground up,
rather than depending on a pre-trained model. To enable flexibility, we combine
convolution stacking with dropout and fully connected operations, which
improves performance by reducing overfitting. During model training, we also
augment the given dataset and inject Gaussian noise. We use three-fold
cross-validation to train and select the best model. Compared with InceptionV3,
VGG16, and MobileNetV2 fine-tuned from pre-trained models, our model produces
better results. On a validation set of 125 codeletion vs. 31 non-codeletion
images, the proposed network achieves a 96.37\% F1-score, 97.46\% precision,
and 96.34\% recall when classifying 1p/19q codeletion and non-codeletion
images.
| [
{
"version": "v1",
"created": "Sun, 29 Sep 2024 07:04:26 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 17:01:47 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Mar 2025 18:50:23 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Jun",
""
],
[
"Yuan",
"Geng",
""
],
[
"Zeng",
"Weihao",
""
],
[
"Tang",
"Hao",
""
],
[
"Zhang",
"Wenbin",
""
],
[
"Lin",
"Xue",
""
],
[
"Xu",
"XiaoLin",
""
],
[
"Huang",
"Dong",
""
],
[
"Wang",
"Yanzhi",
""
]
] | TITLE: Brain Tumor Classification on MRI in Light of Molecular Markers
ABSTRACT: In research findings, co-deletion of the 1p/19q gene is associated with
clinical outcomes in low-grade gliomas. The ability to predict 1p19q status is
critical for treatment planning and patient follow-up. This study aims to
utilize a specially designed MRI-based convolutional neural network for brain
cancer detection. Although public networks such as ResNet and AlexNet can
effectively diagnose brain cancers using transfer learning, such models include
quite a few weights that have nothing to do with medical images. As a result,
the diagnostic results of the transfer learning model are unreliable. To deal
with the problem of trustworthiness, we create the model from the ground up,
rather than depending on a pre-trained model. To enable flexibility, we combine
convolution stacking with dropout and fully connected operations, which
improves performance by reducing overfitting. During model training, we also
augment the given dataset and inject Gaussian noise. We use three-fold
cross-validation to train and select the best model. Compared with InceptionV3,
VGG16, and MobileNetV2 fine-tuned from pre-trained models, our model produces
better results. On a validation set of 125 codeletion vs. 31 non-codeletion
images, the proposed network achieves a 96.37\% F1-score, 97.46\% precision,
and 96.34\% recall when classifying 1p/19q codeletion and non-codeletion
images.
|
2409.19917 | Hongjie Fang | Jingjing Chen, Hongjie Fang, Hao-Shu Fang and Cewu Lu | Towards Effective Utilization of Mixed-Quality Demonstrations in Robotic
Manipulation via Segment-Level Selection and Optimization | ICRA 2025. Project website: https://tonyfang.net/s2i/ | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Data is crucial for robotic manipulation, as it underpins the development of
robotic systems for complex tasks. While high-quality, diverse datasets enhance
the performance and adaptability of robotic manipulation policies, collecting
extensive expert-level data is resource-intensive. Consequently, many current
datasets suffer from quality inconsistencies due to operator variability,
highlighting the need for methods to utilize mixed-quality data effectively. To
mitigate these issues, we propose "Select Segments to Imitate" (S2I), a
framework that selects and optimizes mixed-quality demonstration data at the
segment level, while ensuring plug-and-play compatibility with existing robotic
manipulation policies. The framework has three components: demonstration
segmentation dividing original data into meaningful segments, segment selection
using contrastive learning to find high-quality segments, and trajectory
optimization to refine suboptimal segments for better policy learning. We
evaluate S2I through comprehensive experiments in simulation and real-world
environments across six tasks, demonstrating that with only 3 expert
demonstrations for reference, S2I can improve the performance of various
downstream policies when trained with mixed-quality demonstrations. Project
website: https://tonyfang.net/s2i/.
| [
{
"version": "v1",
"created": "Mon, 30 Sep 2024 03:42:06 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 09:58:58 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chen",
"Jingjing",
""
],
[
"Fang",
"Hongjie",
""
],
[
"Fang",
"Hao-Shu",
""
],
[
"Lu",
"Cewu",
""
]
] | TITLE: Towards Effective Utilization of Mixed-Quality Demonstrations in Robotic
Manipulation via Segment-Level Selection and Optimization
ABSTRACT: Data is crucial for robotic manipulation, as it underpins the development of
robotic systems for complex tasks. While high-quality, diverse datasets enhance
the performance and adaptability of robotic manipulation policies, collecting
extensive expert-level data is resource-intensive. Consequently, many current
datasets suffer from quality inconsistencies due to operator variability,
highlighting the need for methods to utilize mixed-quality data effectively. To
mitigate these issues, we propose "Select Segments to Imitate" (S2I), a
framework that selects and optimizes mixed-quality demonstration data at the
segment level, while ensuring plug-and-play compatibility with existing robotic
manipulation policies. The framework has three components: demonstration
segmentation dividing original data into meaningful segments, segment selection
using contrastive learning to find high-quality segments, and trajectory
optimization to refine suboptimal segments for better policy learning. We
evaluate S2I through comprehensive experiments in simulation and real-world
environments across six tasks, demonstrating that with only 3 expert
demonstrations for reference, S2I can improve the performance of various
downstream policies when trained with mixed-quality demonstrations. Project
website: https://tonyfang.net/s2i/.
|
2410.00871 | Yunze Liu | Yunze Liu, Li Yi | MAP: Unleashing Hybrid Mamba-Transformer Vision Backbone's Potential
with Masked Autoregressive Pretraining | null | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition 2025 | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hybrid Mamba-Transformer networks have recently garnered broad attention.
These networks can leverage the scalability of Transformers while capitalizing
on Mamba's strengths in long-context modeling and computational efficiency.
However, the challenge of effectively pretraining such hybrid networks remains
an open question. Existing methods, such as Masked Autoencoders (MAE) or
autoregressive (AR) pretraining, primarily focus on single-type network
architectures. In contrast, pretraining strategies for hybrid architectures
must be effective for both Mamba and Transformer components. Based on this, we
propose Masked Autoregressive Pretraining (MAP) to pretrain a hybrid
Mamba-Transformer vision backbone network. This strategy combines the strengths
of both MAE and Autoregressive pretraining, improving the performance of Mamba
and Transformer modules within a unified paradigm. Experimental results show
that the hybrid Mamba-Transformer vision backbone network pretrained with MAP
significantly outperforms other pretraining strategies, achieving
state-of-the-art performance. We validate the method's effectiveness on both 2D
and 3D datasets and provide detailed ablation studies to support the design
choices for each component. The code and checkpoints are available at
https://github.com/yunzeliu/MAP
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2024 17:05:08 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 15:21:48 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Yunze",
""
],
[
"Yi",
"Li",
""
]
] | TITLE: MAP: Unleashing Hybrid Mamba-Transformer Vision Backbone's Potential
with Masked Autoregressive Pretraining
ABSTRACT: Hybrid Mamba-Transformer networks have recently garnered broad attention.
These networks can leverage the scalability of Transformers while capitalizing
on Mamba's strengths in long-context modeling and computational efficiency.
However, the challenge of effectively pretraining such hybrid networks remains
an open question. Existing methods, such as Masked Autoencoders (MAE) or
autoregressive (AR) pretraining, primarily focus on single-type network
architectures. In contrast, pretraining strategies for hybrid architectures
must be effective for both Mamba and Transformer components. Based on this, we
propose Masked Autoregressive Pretraining (MAP) to pretrain a hybrid
Mamba-Transformer vision backbone network. This strategy combines the strengths
of both MAE and Autoregressive pretraining, improving the performance of Mamba
and Transformer modules within a unified paradigm. Experimental results show
that the hybrid Mamba-Transformer vision backbone network pretrained with MAP
significantly outperforms other pretraining strategies, achieving
state-of-the-art performance. We validate the method's effectiveness on both 2D
and 3D datasets and provide detailed ablation studies to support the design
choices for each component. The code and checkpoints are available at
https://github.com/yunzeliu/MAP
|
2410.02683 | Yu Ying Chiu | Yu Ying Chiu, Liwei Jiang and Yejin Choi | DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of
Daily Life | Accepted into ICLR 2025 (spotlight) | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | As users increasingly seek guidance from LLMs for decision-making in daily
life, many of these decisions are not clear-cut and depend significantly on the
personal values and ethical standards of people. We present DailyDilemmas, a
dataset of 1,360 moral dilemmas encountered in everyday life. Each dilemma
presents two possible actions, along with affected parties and relevant human
values for each action. Based on these dilemmas, we gather a repository of
human values covering diverse everyday topics, such as interpersonal
relationships, workplace, and environmental issues. With DailyDilemmas, we
evaluate LLMs on these dilemmas to determine what action they will choose and
the values represented by these action choices. Then, we analyze values through
the lens of five theoretical frameworks inspired by sociology, psychology, and
philosophy, including the World Values Survey, Moral Foundations Theory,
Maslow's Hierarchy of Needs, Aristotle's Virtues, and Plutchik's Wheel of
Emotions. For instance, we find LLMs are most aligned with self-expression over
survival in World Values Survey and care over loyalty in Moral Foundations
Theory. Interestingly, we find substantial preference differences in models for
some core values. For example, for truthfulness, Mixtral-8x7B neglects it by
9.7% while GPT-4-turbo selects it by 9.4%. We also study the recent guidance
released by OpenAI (ModelSpec), and Anthropic (Constitutional AI) to understand
how their designated principles reflect their models' actual value
prioritization when facing nuanced moral reasoning in daily-life settings.
Finally, we find that end users cannot effectively steer such prioritization
using system prompts.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 17:08:52 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 07:20:54 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Mar 2025 03:54:40 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chiu",
"Yu Ying",
""
],
[
"Jiang",
"Liwei",
""
],
[
"Choi",
"Yejin",
""
]
] | TITLE: DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of
Daily Life
ABSTRACT: As users increasingly seek guidance from LLMs for decision-making in daily
life, many of these decisions are not clear-cut and depend significantly on the
personal values and ethical standards of people. We present DailyDilemmas, a
dataset of 1,360 moral dilemmas encountered in everyday life. Each dilemma
presents two possible actions, along with affected parties and relevant human
values for each action. Based on these dilemmas, we gather a repository of
human values covering diverse everyday topics, such as interpersonal
relationships, workplace, and environmental issues. With DailyDilemmas, we
evaluate LLMs on these dilemmas to determine what action they will choose and
the values represented by these action choices. Then, we analyze values through
the lens of five theoretical frameworks inspired by sociology, psychology, and
philosophy, including the World Values Survey, Moral Foundations Theory,
Maslow's Hierarchy of Needs, Aristotle's Virtues, and Plutchik's Wheel of
Emotions. For instance, we find LLMs are most aligned with self-expression over
survival in World Values Survey and care over loyalty in Moral Foundations
Theory. Interestingly, we find substantial preference differences in models for
some core values. For example, for truthfulness, Mixtral-8x7B neglects it by
9.7% while GPT-4-turbo selects it by 9.4%. We also study the recent guidance
released by OpenAI (ModelSpec), and Anthropic (Constitutional AI) to understand
how their designated principles reflect their models' actual value
prioritization when facing nuanced moral reasoning in daily-life settings.
Finally, we find that end users cannot effectively steer such prioritization
using system prompts.
|
2410.05270 | Mohammad Fahes | Mohammad Fahes, Tuan-Hung Vu, Andrei Bursuc, Patrick P\'erez, Raoul de
Charette | CLIP's Visual Embedding Projector is a Few-shot Cornucopia | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of adapting a contrastively pretrained
vision-language model like CLIP (Radford et al., 2021) for few-shot
classification. The literature addresses this problem by learning a linear
classifier of the frozen visual features, optimizing word embeddings, or
learning external feature adapters. We introduce an alternative way for
few-shot CLIP adaptation without adding ''external'' parameters to optimize. We
find that simply fine-tuning the embedding projection matrix of the vision
encoder leads to better performance than all baselines. Furthermore, we show
that regularizing training with the distance between the fine-tuned and
pretrained matrices adds reliability for adapting CLIP, making the results
stable across different learning rates in the ''validation-free'' setting. This
simple approach, coined ProLIP, yields state-of-the-art performance on 11
few-shot classification benchmarks, few-shot cross-dataset transfer, domain
generalization, and base-to-new class generalization. We also show that ProLIP
significantly outperforms prompt tuning when extended to another task of
test-time adaptation, while being one order of magnitude faster to train. Code
will be made available at: https://github.com/astra-vision/ProLIP .
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 17:59:59 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Dec 2024 16:07:47 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 17:52:55 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Fahes",
"Mohammad",
""
],
[
"Vu",
"Tuan-Hung",
""
],
[
"Bursuc",
"Andrei",
""
],
[
"Pérez",
"Patrick",
""
],
[
"de Charette",
"Raoul",
""
]
] | TITLE: CLIP's Visual Embedding Projector is a Few-shot Cornucopia
ABSTRACT: We consider the problem of adapting a contrastively pretrained
vision-language model like CLIP (Radford et al., 2021) for few-shot
classification. The literature addresses this problem by learning a linear
classifier of the frozen visual features, optimizing word embeddings, or
learning external feature adapters. We introduce an alternative way for
few-shot CLIP adaptation without adding ''external'' parameters to optimize. We
find that simply fine-tuning the embedding projection matrix of the vision
encoder leads to better performance than all baselines. Furthermore, we show
that regularizing training with the distance between the fine-tuned and
pretrained matrices adds reliability for adapting CLIP, making the results
stable across different learning rates in the ''validation-free'' setting. This
simple approach, coined ProLIP, yields state-of-the-art performance on 11
few-shot classification benchmarks, few-shot cross-dataset transfer, domain
generalization, and base-to-new class generalization. We also show that ProLIP
significantly outperforms prompt tuning when extended to another task of
test-time adaptation, while being one order of magnitude faster to train. Code
will be made available at: https://github.com/astra-vision/ProLIP .
|
2410.05894 | Yichen Song | Yichen Song, Jiaming Wang, Yunbo Wang, Xiaokang Yang | DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the realm of computational physics, an enduring topic is the numerical
solutions to partial differential equations (PDEs). Recently, the attention of
researchers has shifted towards Neural Operator methods, renowned for their
capability to approximate ``operators'' -- mappings from functions to
functions. Despite the universal approximation theorem within neural operators,
ensuring error bounds often requires employing numerous Fourier layers.
However, what about lightweight models? In response to this question, we
introduce DimOL (Dimension-aware Operator Learning), drawing insights from
dimensional analysis. To implement DimOL, we propose the ProdLayer, which can
be seamlessly integrated into FNO-based and Transformer-based PDE solvers,
enhancing their ability to handle sum-of-products structures inherent in many
physical systems. Empirically, DimOL models achieve up to 48% performance gain
within the PDE datasets. Furthermore, by analyzing Fourier components' weights,
we can symbolically discern the physical significance of each term. This sheds
light on the opaque nature of neural networks, unveiling underlying physical
principles.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 10:48:50 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Feb 2025 08:27:05 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 06:54:47 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Song",
"Yichen",
""
],
[
"Wang",
"Jiaming",
""
],
[
"Wang",
"Yunbo",
""
],
[
"Yang",
"Xiaokang",
""
]
] | TITLE: DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning
ABSTRACT: In the realm of computational physics, an enduring topic is the numerical
solutions to partial differential equations (PDEs). Recently, the attention of
researchers has shifted towards Neural Operator methods, renowned for their
capability to approximate ``operators'' -- mappings from functions to
functions. Despite the universal approximation theorem within neural operators,
ensuring error bounds often requires employing numerous Fourier layers.
However, what about lightweight models? In response to this question, we
introduce DimOL (Dimension-aware Operator Learning), drawing insights from
dimensional analysis. To implement DimOL, we propose the ProdLayer, which can
be seamlessly integrated into FNO-based and Transformer-based PDE solvers,
enhancing their ability to handle sum-of-products structures inherent in many
physical systems. Empirically, DimOL models achieve up to 48% performance gain
within the PDE datasets. Furthermore, by analyzing Fourier components' weights,
we can symbolically discern the physical significance of each term. This sheds
light on the opaque nature of neural networks, unveiling underlying physical
principles.
|
2410.06418 | Hossein Resani | Hossein Resani, Behrooz Nasihatkon | MIRACLE3D: Memory-efficient Integrated Robust Approach for Continual
Learning on Point Clouds via Shape Model Construction | Accepted to ICLR 2025, Singapore | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we introduce a novel framework for memory-efficient and
privacy-preserving continual learning in 3D object classification. Unlike
conventional memory-based approaches in continual learning that require storing
numerous exemplars, our method constructs a compact shape model for each class,
retaining only the mean shape along with a few key modes of variation. This
strategy not only enables the generation of diverse training samples while
drastically reducing memory usage but also enhances privacy by eliminating the
need to store original data. To further improve model robustness against input
variations, an issue common in 3D domains due to the absence of strong
backbones and limited training data, we incorporate Gradient Mode
Regularization. This technique enhances model stability and broadens
classification margins, resulting in accuracy improvements. We validate our
approach through extensive experiments on the ModelNet40, ShapeNet, and ScanNet
datasets, where we achieve state-of-the-art performance. Notably, our method
consumes only 15% of the memory required by competing methods on the ModelNet40
and ShapeNet, while achieving comparable performance on the challenging ScanNet
dataset with just 8.5% of the memory. These results underscore the scalability,
effectiveness, and privacy-preserving strengths of our framework for 3D object
classification.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 23:12:33 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 01:55:58 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Resani",
"Hossein",
""
],
[
"Nasihatkon",
"Behrooz",
""
]
] | TITLE: MIRACLE3D: Memory-efficient Integrated Robust Approach for Continual
Learning on Point Clouds via Shape Model Construction
ABSTRACT: In this paper, we introduce a novel framework for memory-efficient and
privacy-preserving continual learning in 3D object classification. Unlike
conventional memory-based approaches in continual learning that require storing
numerous exemplars, our method constructs a compact shape model for each class,
retaining only the mean shape along with a few key modes of variation. This
strategy not only enables the generation of diverse training samples while
drastically reducing memory usage but also enhances privacy by eliminating the
need to store original data. To further improve model robustness against input
variations, an issue common in 3D domains due to the absence of strong
backbones and limited training data, we incorporate Gradient Mode
Regularization. This technique enhances model stability and broadens
classification margins, resulting in accuracy improvements. We validate our
approach through extensive experiments on the ModelNet40, ShapeNet, and ScanNet
datasets, where we achieve state-of-the-art performance. Notably, our method
consumes only 15% of the memory required by competing methods on the ModelNet40
and ShapeNet, while achieving comparable performance on the challenging ScanNet
dataset with just 8.5% of the memory. These results underscore the scalability,
effectiveness, and privacy-preserving strengths of our framework for 3D object
classification.
|
2410.06757 | Peng Zhang | Peng Zhang, Qianqian Xue, Xingyu Liu, Guanglei Zhang, Wenjian Wang,
Jiye Liang | MDiff-FMT: Morphology-aware Diffusion Model for Fluorescence Molecular
Tomography with Small-scale Datasets | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fluorescence molecular tomography (FMT) is a sensitive optical imaging
technology widely used in biomedical research. However, the ill-posedness of
the inverse problem poses a huge challenge to FMT reconstruction. Although
end-to-end deep learning algorithms have been widely used to address this
critical issue, they still suffer from high data dependency and poor
morphological restoration. In this paper, we report for the first time a
morphology-aware diffusion model, MDiff-FMT, based on denoising diffusion
probabilistic model (DDPM) to achieve high-fidelity morphological
reconstruction for FMT. First, we use the noise addition of DDPM to simulate
the process of the gradual degradation of morphological features, and achieve
fine-grained reconstruction of morphological features through a stepwise
probabilistic sampling mechanism, avoiding problems such as loss of structure
details that may occur in end-to-end deep learning methods. Additionally, we
introduce the conditional fluorescence image as structural prior information to
sample a high-fidelity reconstructed image from the noisy images. Numerous
numerical and real phantom experimental results show that the proposed
MDiff-FMT achieves SOTA results in morphological reconstruction of FMT without
relying on large-scale datasets.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2024 10:41:31 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 04:47:18 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Peng",
""
],
[
"Xue",
"Qianqian",
""
],
[
"Liu",
"Xingyu",
""
],
[
"Zhang",
"Guanglei",
""
],
[
"Wang",
"Wenjian",
""
],
[
"Liang",
"Jiye",
""
]
] | TITLE: MDiff-FMT: Morphology-aware Diffusion Model for Fluorescence Molecular
Tomography with Small-scale Datasets
ABSTRACT: Fluorescence molecular tomography (FMT) is a sensitive optical imaging
technology widely used in biomedical research. However, the ill-posedness of
the inverse problem poses a huge challenge to FMT reconstruction. Although
end-to-end deep learning algorithms have been widely used to address this
critical issue, they still suffer from high data dependency and poor
morphological restoration. In this paper, we report for the first time a
morphology-aware diffusion model, MDiff-FMT, based on denoising diffusion
probabilistic model (DDPM) to achieve high-fidelity morphological
reconstruction for FMT. First, we use the noise addition of DDPM to simulate
the process of the gradual degradation of morphological features, and achieve
fine-grained reconstruction of morphological features through a stepwise
probabilistic sampling mechanism, avoiding problems such as loss of structure
details that may occur in end-to-end deep learning methods. Additionally, we
introduce the conditional fluorescence image as structural prior information to
sample a high-fidelity reconstructed image from the noisy images. Numerous
numerical and real phantom experimental results show that the proposed
MDiff-FMT achieves SOTA results in morphological reconstruction of FMT without
relying on large-scale datasets.
|
2410.08392 | Howon Lee | Howon Lee, Aanchal Save, Pranay Seshadri, Juergen Rauleder | Large Airfoil Models | null | null | null | null | physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | The development of a Large Airfoil Model (LAM), a transformative approach for
answering technical questions on airfoil aerodynamics, requires a vast dataset
and a model to leverage it. To build this foundation, a novel probabilistic
machine learning approach, A Deep Airfoil Prediction Tool (ADAPT), has been
developed. ADAPT makes uncertainty-aware predictions of airfoil pressure
coefficient ($C_p$) distributions by harnessing experimental data and
incorporating measurement uncertainties. By employing deep kernel learning,
performing Gaussian Process Regression in a ten-dimensional latent space
learned by a neural network, ADAPT effectively handles unstructured
experimental datasets. In tandem, Airfoil Surface Pressure Information
Repository of Experiments (ASPIRE), the first large-scale, open-source
repository of airfoil experimental data has been developed. ASPIRE integrates
century-old historical data with modern reports, forming an unparalleled
resource of real-world pressure measurements. This addresses a critical gap
left by prior repositories, which relied primarily on numerical simulations.
Demonstrative results for three airfoils show that ADAPT accurately predicts
$C_p$ distributions and aerodynamic coefficients across varied flow conditions,
achieving a mean absolute error in enclosed area ($\text{MAE}_\text{enclosed}$)
of 0.029. ASPIRE and ADAPT lay the foundation for an interactive airfoil
analysis tool driven by a large language model, enabling users to perform
design tasks based on natural language questions rather than explicit technical
input.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2024 21:59:29 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Nov 2024 16:18:31 GMT"
},
{
"version": "v3",
"created": "Thu, 12 Dec 2024 00:31:52 GMT"
},
{
"version": "v4",
"created": "Sun, 16 Mar 2025 19:37:44 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lee",
"Howon",
""
],
[
"Save",
"Aanchal",
""
],
[
"Seshadri",
"Pranay",
""
],
[
"Rauleder",
"Juergen",
""
]
] | TITLE: Large Airfoil Models
ABSTRACT: The development of a Large Airfoil Model (LAM), a transformative approach for
answering technical questions on airfoil aerodynamics, requires a vast dataset
and a model to leverage it. To build this foundation, a novel probabilistic
machine learning approach, A Deep Airfoil Prediction Tool (ADAPT), has been
developed. ADAPT makes uncertainty-aware predictions of airfoil pressure
coefficient ($C_p$) distributions by harnessing experimental data and
incorporating measurement uncertainties. By employing deep kernel learning,
performing Gaussian Process Regression in a ten-dimensional latent space
learned by a neural network, ADAPT effectively handles unstructured
experimental datasets. In tandem, Airfoil Surface Pressure Information
Repository of Experiments (ASPIRE), the first large-scale, open-source
repository of airfoil experimental data has been developed. ASPIRE integrates
century-old historical data with modern reports, forming an unparalleled
resource of real-world pressure measurements. This addresses a critical gap
left by prior repositories, which relied primarily on numerical simulations.
Demonstrative results for three airfoils show that ADAPT accurately predicts
$C_p$ distributions and aerodynamic coefficients across varied flow conditions,
achieving a mean absolute error in enclosed area ($\text{MAE}_\text{enclosed}$)
of 0.029. ASPIRE and ADAPT lay the foundation for an interactive airfoil
analysis tool driven by a large language model, enabling users to perform
design tasks based on natural language questions rather than explicit technical
input.
|
2410.09374 | Junkai Niu | Junkai Niu, Sheng Zhong, Xiuyuan Lu, Shaojie Shen, Guillermo Gallego,
Yi Zhou | ESVO2: Direct Visual-Inertial Odometry with Stereo Event Cameras | null | IEEE Transactions on Robotics, 2025 | 10.1109/TRO.2025.3548523 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event-based visual odometry is a specific branch of visual Simultaneous
Localization and Mapping (SLAM) techniques, which aims at solving tracking and
mapping subproblems (typically in parallel), by exploiting the special working
principles of neuromorphic (i.e., event-based) cameras. Due to the
motion-dependent nature of event data, explicit data association (i.e., feature
matching) under large-baseline view-point changes is difficult to establish,
making direct methods a more rational choice. However, state-of-the-art direct
methods are limited by the high computational complexity of the mapping
sub-problem and the degeneracy of camera pose tracking in certain degrees of
freedom (DoF) in rotation. In this paper, we tackle these issues by building an
event-based stereo visual-inertial odometry system on top of a direct pipeline.
Specifically, to speed up the mapping operation, we propose an efficient
strategy for sampling contour points according to the local dynamics of events.
The mapping performance is also improved in terms of structure completeness and
local smoothness by merging the temporal stereo and static stereo results. To
circumvent the degeneracy of camera pose tracking in recovering the pitch and
yaw components of general 6-DoF motion, we introduce IMU measurements as motion
priors via pre-integration. To this end, a compact back-end is proposed for
continuously updating the IMU bias and predicting the linear velocity, enabling
an accurate motion prediction for camera pose tracking. The resulting system
scales well with modern high-resolution event cameras and leads to better
global positioning accuracy in large-scale outdoor environments. Extensive
evaluations on five publicly available datasets featuring different resolutions
and scenarios justify the superior performance of the proposed system against
five state-of-the-art methods.
| [
{
"version": "v1",
"created": "Sat, 12 Oct 2024 05:35:27 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jan 2025 15:52:06 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 05:31:05 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Niu",
"Junkai",
""
],
[
"Zhong",
"Sheng",
""
],
[
"Lu",
"Xiuyuan",
""
],
[
"Shen",
"Shaojie",
""
],
[
"Gallego",
"Guillermo",
""
],
[
"Zhou",
"Yi",
""
]
] | TITLE: ESVO2: Direct Visual-Inertial Odometry with Stereo Event Cameras
ABSTRACT: Event-based visual odometry is a specific branch of visual Simultaneous
Localization and Mapping (SLAM) techniques, which aims at solving tracking and
mapping subproblems (typically in parallel), by exploiting the special working
principles of neuromorphic (i.e., event-based) cameras. Due to the
motion-dependent nature of event data, explicit data association (i.e., feature
matching) under large-baseline view-point changes is difficult to establish,
making direct methods a more rational choice. However, state-of-the-art direct
methods are limited by the high computational complexity of the mapping
sub-problem and the degeneracy of camera pose tracking in certain degrees of
freedom (DoF) in rotation. In this paper, we tackle these issues by building an
event-based stereo visual-inertial odometry system on top of a direct pipeline.
Specifically, to speed up the mapping operation, we propose an efficient
strategy for sampling contour points according to the local dynamics of events.
The mapping performance is also improved in terms of structure completeness and
local smoothness by merging the temporal stereo and static stereo results. To
circumvent the degeneracy of camera pose tracking in recovering the pitch and
yaw components of general 6-DoF motion, we introduce IMU measurements as motion
priors via pre-integration. To this end, a compact back-end is proposed for
continuously updating the IMU bias and predicting the linear velocity, enabling
an accurate motion prediction for camera pose tracking. The resulting system
scales well with modern high-resolution event cameras and leads to better
global positioning accuracy in large-scale outdoor environments. Extensive
evaluations on five publicly available datasets featuring different resolutions
and scenarios justify the superior performance of the proposed system against
five state-of-the-art methods.
|
2410.10624 | Zechen Li | Zechen Li, Shohreh Deldari, Linyao Chen, Hao Xue and Flora D. Salim | SensorLLM: Aligning Large Language Models with Motion Sensors for Human
Activity Recognition | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce SensorLLM, a two-stage framework that enables Large Language
Models (LLMs) to perform human activity recognition (HAR) from sensor data.
Despite their strong reasoning and generalization capabilities, LLMs remain
underutilized for motion sensor data due to the lack of semantic context in
time-series, computational constraints, and challenges in processing numerical
inputs. SensorLLM addresses these limitations through a Sensor-Language
Alignment stage, where we introduce special tokens for each sensor channel and
automatically generate textual trend descriptions. This alignment enables LLMs
to capture numerical variations, channel-specific features, and data of varying
duration--without requiring human annotations. In the subsequent Task-Aware
Tuning stage, we refine the model for HAR classification, achieving performance
that matches or surpasses state-of-the-art methods. Our results demonstrate
that SensorLLM evolves into an effective sensor learner, reasoner, and
classifier through Sensor-Language Alignment, generalizing across diverse HAR
datasets. We believe this work establishes a foundation for future research on
time-series and text alignment, paving the way for foundation models in sensor
data analysis.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 15:30:41 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 09:28:43 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Li",
"Zechen",
""
],
[
"Deldari",
"Shohreh",
""
],
[
"Chen",
"Linyao",
""
],
[
"Xue",
"Hao",
""
],
[
"Salim",
"Flora D.",
""
]
] | TITLE: SensorLLM: Aligning Large Language Models with Motion Sensors for Human
Activity Recognition
ABSTRACT: We introduce SensorLLM, a two-stage framework that enables Large Language
Models (LLMs) to perform human activity recognition (HAR) from sensor data.
Despite their strong reasoning and generalization capabilities, LLMs remain
underutilized for motion sensor data due to the lack of semantic context in
time-series, computational constraints, and challenges in processing numerical
inputs. SensorLLM addresses these limitations through a Sensor-Language
Alignment stage, where we introduce special tokens for each sensor channel and
automatically generate textual trend descriptions. This alignment enables LLMs
to capture numerical variations, channel-specific features, and data of varying
duration--without requiring human annotations. In the subsequent Task-Aware
Tuning stage, we refine the model for HAR classification, achieving performance
that matches or surpasses state-of-the-art methods. Our results demonstrate
that SensorLLM evolves into an effective sensor learner, reasoner, and
classifier through Sensor-Language Alignment, generalizing across diverse HAR
datasets. We believe this work establishes a foundation for future research on
time-series and text alignment, paving the way for foundation models in sensor
data analysis.
|
2410.10880 | Hengxiang Zhang | Hengxiang Zhang, Songxin Zhang, Bingyi Jing, Hongxin Wei | Fine-tuning can Help Detect Pretraining Data from Large Language Models | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the era of large language models (LLMs), detecting pretraining data has
been increasingly important due to concerns about fair evaluation and ethical
risks. Current methods differentiate members and non-members by designing
scoring functions, like Perplexity and Min-k%. However, the diversity and
complexity of training data magnify the difficulty of distinguishing, leading
to suboptimal performance in detecting pretraining data. In this paper, we
first explore the benefits of unseen data, which can be easily collected after
the release of the LLM. We find that the perplexities of LLMs shift differently
for members and non-members, after fine-tuning with a small amount of
previously unseen data. In light of this, we introduce a novel and effective
method termed Fine-tuned Score Deviation(FSD), which improves the performance
of current scoring functions for pretraining data detection. In particular, we
propose to measure the deviation distance of current scores after fine-tuning
on a small amount of unseen data within the same domain. In effect, using a few
unseen data can largely decrease the scores of all non-members, leading to a
larger deviation distance than members. Extensive experiments demonstrate the
effectiveness of our method, significantly improving the AUC score on common
benchmark datasets across various models.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2024 15:36:42 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 12:29:05 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Hengxiang",
""
],
[
"Zhang",
"Songxin",
""
],
[
"Jing",
"Bingyi",
""
],
[
"Wei",
"Hongxin",
""
]
] | TITLE: Fine-tuning can Help Detect Pretraining Data from Large Language Models
ABSTRACT: In the era of large language models (LLMs), detecting pretraining data has
been increasingly important due to concerns about fair evaluation and ethical
risks. Current methods differentiate members and non-members by designing
scoring functions, like Perplexity and Min-k%. However, the diversity and
complexity of training data magnify the difficulty of distinguishing, leading
to suboptimal performance in detecting pretraining data. In this paper, we
first explore the benefits of unseen data, which can be easily collected after
the release of the LLM. We find that the perplexities of LLMs shift differently
for members and non-members, after fine-tuning with a small amount of
previously unseen data. In light of this, we introduce a novel and effective
method termed Fine-tuned Score Deviation(FSD), which improves the performance
of current scoring functions for pretraining data detection. In particular, we
propose to measure the deviation distance of current scores after fine-tuning
on a small amount of unseen data within the same domain. In effect, using a few
unseen data can largely decrease the scores of all non-members, leading to a
larger deviation distance than members. Extensive experiments demonstrate the
effectiveness of our method, significantly improving the AUC score on common
benchmark datasets across various models.
|
2410.11506 | Hongyu An | Hongyu An, Xinfeng Zhang, Shijie Zhao, Li Zhang, Ruiqin Xiong | Spatio-Temporal Distortion Aware Omnidirectional Video Super-Resolution | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Omnidirectional video (ODV) provides an immersive visual experience and is
widely utilized in virtual reality and augmented reality. However, restricted
capturing devices and transmission bandwidth lead to low-resolution ODVs. Video
super-resolution (SR) is proposed to enhance resolution, but practical ODV
spatial projection distortions and temporal flickering are not well addressed
by directly applying existing methods. To achieve better ODV-SR reconstruction, we
propose a Spatio-Temporal Distortion Aware Network (STDAN) oriented to ODV
characteristics. Specifically, a spatially continuous distortion modulation
module is introduced to improve discrete projection distortions. Next, we
design an interlaced multi-frame reconstruction mechanism to refine temporal
consistency across frames. Furthermore, we incorporate latitude-saliency
adaptive weights during training to concentrate on regions with higher texture
complexity and human-watching interest. In general, we explore inference-free
and real-world viewing matched strategies to provide an application-friendly
method on a novel ODV-SR dataset with practical scenarios. Extensive
experimental results demonstrate the superior performance of the proposed STDAN
over state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 11:17:19 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 16:22:15 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"An",
"Hongyu",
""
],
[
"Zhang",
"Xinfeng",
""
],
[
"Zhao",
"Shijie",
""
],
[
"Zhang",
"Li",
""
],
[
"Xiong",
"Ruiqin",
""
]
] | TITLE: Spatio-Temporal Distortion Aware Omnidirectional Video Super-Resolution
ABSTRACT: Omnidirectional video (ODV) provides an immersive visual experience and is
widely utilized in virtual reality and augmented reality. However, restricted
capturing devices and transmission bandwidth lead to low-resolution ODVs. Video
super-resolution (SR) is proposed to enhance resolution, but practical ODV
spatial projection distortions and temporal flickering are not well addressed
by directly applying existing methods. To achieve better ODV-SR reconstruction, we
propose a Spatio-Temporal Distortion Aware Network (STDAN) oriented to ODV
characteristics. Specifically, a spatially continuous distortion modulation
module is introduced to improve discrete projection distortions. Next, we
design an interlaced multi-frame reconstruction mechanism to refine temporal
consistency across frames. Furthermore, we incorporate latitude-saliency
adaptive weights during training to concentrate on regions with higher texture
complexity and human-watching interest. In general, we explore inference-free
and real-world viewing matched strategies to provide an application-friendly
method on a novel ODV-SR dataset with practical scenarios. Extensive
experimental results demonstrate the superior performance of the proposed STDAN
over state-of-the-art methods.
|
2410.11698 | Travis Lloyd | Travis Lloyd, Jennah Gosciak, Tung Nguyen, Mor Naaman | AI Rules? Characterizing Reddit Community Policies Towards AI-Generated
Content | Forthcoming at ACM CHI 2025 | null | null | null | cs.CY cs.SI | http://creativecommons.org/licenses/by/4.0/ | How are Reddit communities responding to AI-generated content? We explored
this question through a large-scale analysis of subreddit community rules and
their change over time. We collected the metadata and community rules for over
$300,000$ public subreddits and measured the prevalence of rules governing AI.
We labeled subreddits and AI rules according to existing taxonomies from the
HCI literature and a new taxonomy we developed specific to AI rules. While
rules about AI are still relatively uncommon, the number of subreddits with
these rules more than doubled over the course of a year. AI rules are more
common in larger subreddits and communities focused on art or celebrity topics,
and less common in those focused on social support. These rules often focus on
AI images and evoke, as justification, concerns about quality and authenticity.
Overall, our findings illustrate the emergence of varied concerns about AI, in
different community contexts. Platform designers and HCI researchers should
heed these concerns if they hope to encourage community self-determination in
the age of generative AI. We make our datasets public to enable future
large-scale studies of community self-governance.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 15:31:41 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Dec 2024 19:57:34 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 19:30:03 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lloyd",
"Travis",
""
],
[
"Gosciak",
"Jennah",
""
],
[
"Nguyen",
"Tung",
""
],
[
"Naaman",
"Mor",
""
]
] | TITLE: AI Rules? Characterizing Reddit Community Policies Towards AI-Generated
Content
ABSTRACT: How are Reddit communities responding to AI-generated content? We explored
this question through a large-scale analysis of subreddit community rules and
their change over time. We collected the metadata and community rules for over
$300,000$ public subreddits and measured the prevalence of rules governing AI.
We labeled subreddits and AI rules according to existing taxonomies from the
HCI literature and a new taxonomy we developed specific to AI rules. While
rules about AI are still relatively uncommon, the number of subreddits with
these rules more than doubled over the course of a year. AI rules are more
common in larger subreddits and communities focused on art or celebrity topics,
and less common in those focused on social support. These rules often focus on
AI images and evoke, as justification, concerns about quality and authenticity.
Overall, our findings illustrate the emergence of varied concerns about AI, in
different community contexts. Platform designers and HCI researchers should
heed these concerns if they hope to encourage community self-determination in
the age of generative AI. We make our datasets public to enable future
large-scale studies of community self-governance.
|
2410.14675 | Yukun Huang | Yukun Huang, Sanxing Chen, Hongyi Cai, Bhuwan Dhingra | To Trust or Not to Trust? Enhancing Large Language Models' Situated
Faithfulness to External Contexts | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) are often augmented with external contexts, such
as those used in retrieval-augmented generation (RAG). However, these contexts
can be inaccurate or intentionally misleading, leading to conflicts with the
model's internal knowledge. We argue that robust LLMs should demonstrate
situated faithfulness, dynamically calibrating their trust in external
information based on their confidence in the internal knowledge and the
external context to resolve knowledge conflicts. To benchmark this capability,
we evaluate LLMs across several QA datasets, including a newly created dataset
featuring in-the-wild incorrect contexts sourced from Reddit posts. We show
that when provided with both correct and incorrect contexts, both open-source
and proprietary models tend to overly rely on external information, regardless
of its factual accuracy. To enhance situated faithfulness, we propose two
approaches: Self-Guided Confidence Reasoning (SCR) and Rule-Based Confidence
Reasoning (RCR). SCR enables models to self-assess the confidence of external
information relative to their own internal knowledge to produce the most
accurate answer. RCR, in contrast, extracts explicit confidence signals from
the LLM and determines the final answer using predefined rules. Our results
show that for LLMs with strong reasoning capabilities, such as GPT-4o and
GPT-4o mini, SCR outperforms RCR, achieving improvements of up to 24.2% over a
direct input augmentation baseline. Conversely, for a smaller model like
Llama-3-8B, RCR outperforms SCR. Fine-tuning SCR with our proposed Confidence
Reasoning Direct Preference Optimization (CR-DPO) method improves performance
on both seen and unseen datasets, yielding an average improvement of 8.9% on
Llama-3-8B. In addition to quantitative results, we offer insights into the
relative strengths of SCR and RCR.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2024 17:59:47 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 04:47:58 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Huang",
"Yukun",
""
],
[
"Chen",
"Sanxing",
""
],
[
"Cai",
"Hongyi",
""
],
[
"Dhingra",
"Bhuwan",
""
]
] | TITLE: To Trust or Not to Trust? Enhancing Large Language Models' Situated
Faithfulness to External Contexts
ABSTRACT: Large Language Models (LLMs) are often augmented with external contexts, such
as those used in retrieval-augmented generation (RAG). However, these contexts
can be inaccurate or intentionally misleading, leading to conflicts with the
model's internal knowledge. We argue that robust LLMs should demonstrate
situated faithfulness, dynamically calibrating their trust in external
information based on their confidence in the internal knowledge and the
external context to resolve knowledge conflicts. To benchmark this capability,
we evaluate LLMs across several QA datasets, including a newly created dataset
featuring in-the-wild incorrect contexts sourced from Reddit posts. We show
that when provided with both correct and incorrect contexts, both open-source
and proprietary models tend to overly rely on external information, regardless
of its factual accuracy. To enhance situated faithfulness, we propose two
approaches: Self-Guided Confidence Reasoning (SCR) and Rule-Based Confidence
Reasoning (RCR). SCR enables models to self-assess the confidence of external
information relative to their own internal knowledge to produce the most
accurate answer. RCR, in contrast, extracts explicit confidence signals from
the LLM and determines the final answer using predefined rules. Our results
show that for LLMs with strong reasoning capabilities, such as GPT-4o and
GPT-4o mini, SCR outperforms RCR, achieving improvements of up to 24.2% over a
direct input augmentation baseline. Conversely, for a smaller model like
Llama-3-8B, RCR outperforms SCR. Fine-tuning SCR with our proposed Confidence
Reasoning Direct Preference Optimization (CR-DPO) method improves performance
on both seen and unseen datasets, yielding an average improvement of 8.9% on
Llama-3-8B. In addition to quantitative results, we offer insights into the
relative strengths of SCR and RCR.
|
2410.14729 | Zixin Wang | Zixin Wang, Dong Gong, Sen Wang, Zi Huang, Yadan Luo | Is Less More? Exploring Token Condensation as Training-free Test-time
Adaptation | 16 pages, 8 figures | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Contrastive Language-Image Pretraining (CLIP) excels at learning
generalizable image representations but often falls short in zero-shot
inference on certain downstream datasets. Test-time adaptation (TTA) mitigates
this issue by adjusting components like normalization layers or context
prompts, yet it typically requires large batch sizes and extensive
augmentations, leading to high computational costs. This raises a key question:
Can VLMs' performance drop in specific test cases be mitigated through
efficient, training-free approaches? To explore the solution, we investigate
token condensation (TC) techniques, originally designed to enhance vision
transformer efficiency by refining token usage during inference. We observe
that informative tokens improve visual-text alignment in VLMs like CLIP on
unseen datasets. However, existing TC methods often fail to maintain
in-distribution performance when reducing tokens, prompting us to ask: How can
we transform TC into an effective ``free-lunch'' adaptation strategy for VLMs?
To address this, we propose Token Condensation as Adaptation (TCA), a
training-free adaptation method that takes a step beyond standard TC. Rather
than passively discarding tokens, TCA condenses token representation by
introducing reservoir-based domain anchor tokens for information-preserving
token reduction and logits correction. TCA achieves up to a 21.4% performance
improvement over the strongest baseline on cross-dataset benchmark and the
CIFAR-100-Corrupted dataset while reducing GFLOPs by 12.2% to 48.9%, with
minimal hyperparameter dependency on both CLIP and SigLIP series.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2024 07:13:35 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Nov 2024 12:17:29 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Mar 2025 09:01:31 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Zixin",
""
],
[
"Gong",
"Dong",
""
],
[
"Wang",
"Sen",
""
],
[
"Huang",
"Zi",
""
],
[
"Luo",
"Yadan",
""
]
] | TITLE: Is Less More? Exploring Token Condensation as Training-free Test-time
Adaptation
ABSTRACT: Contrastive Language-Image Pretraining (CLIP) excels at learning
generalizable image representations but often falls short in zero-shot
inference on certain downstream datasets. Test-time adaptation (TTA) mitigates
this issue by adjusting components like normalization layers or context
prompts, yet it typically requires large batch sizes and extensive
augmentations, leading to high computational costs. This raises a key question:
Can VLMs' performance drop in specific test cases be mitigated through
efficient, training-free approaches? To explore the solution, we investigate
token condensation (TC) techniques, originally designed to enhance vision
transformer efficiency by refining token usage during inference. We observe
that informative tokens improve visual-text alignment in VLMs like CLIP on
unseen datasets. However, existing TC methods often fail to maintain
in-distribution performance when reducing tokens, prompting us to ask: How can
we transform TC into an effective ``free-lunch'' adaptation strategy for VLMs?
To address this, we propose Token Condensation as Adaptation (TCA), a
training-free adaptation method that takes a step beyond standard TC. Rather
than passively discarding tokens, TCA condenses token representation by
introducing reservoir-based domain anchor tokens for information-preserving
token reduction and logits correction. TCA achieves up to a 21.4% performance
improvement over the strongest baseline on cross-dataset benchmark and the
CIFAR-100-Corrupted dataset while reducing GFLOPs by 12.2% to 48.9%, with
minimal hyperparameter dependency on both CLIP and SigLIP series.
|
2410.15143 | Minhyuk Seo | Minhyuk Seo, Hyunseo Koh, Jonghyun Choi | Budgeted Online Continual Learning by Adaptive Layer Freezing and
Frequency-based Sampling | ICLR 2025 Spotlight | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The majority of online continual learning (CL) advocates single-epoch
training and imposes restrictions on the size of replay memory. However,
single-epoch training would incur a different amount of computations per CL
algorithm, and the additional storage cost to store logit or model in addition
to replay memory is largely ignored in calculating the storage budget. Arguing
different computational and storage budgets hinder fair comparison among CL
algorithms in practice, we propose to use floating point operations (FLOPs) and
total memory size in Byte as a metric for computational and memory budgets,
respectively, to compare and develop CL algorithms in the same 'total resource
budget.' To improve a CL method in a limited total budget, we propose adaptive
layer freezing that does not update the layers for less informative batches to
reduce computational costs with a negligible loss of accuracy. In addition, we
propose a memory retrieval method that allows the model to learn the same
amount of knowledge as using random retrieval in fewer iterations. Empirical
validations on the CIFAR-10/100, CLEAR-10/100, and ImageNet-1K datasets
demonstrate that the proposed approach outperforms the state-of-the-art methods
within the same total budget.
| [
{
"version": "v1",
"created": "Sat, 19 Oct 2024 16:00:00 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 20:18:42 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Seo",
"Minhyuk",
""
],
[
"Koh",
"Hyunseo",
""
],
[
"Choi",
"Jonghyun",
""
]
] | TITLE: Budgeted Online Continual Learning by Adaptive Layer Freezing and
Frequency-based Sampling
ABSTRACT: The majority of online continual learning (CL) advocates single-epoch
training and imposes restrictions on the size of replay memory. However,
single-epoch training would incur a different amount of computations per CL
algorithm, and the additional storage cost to store logit or model in addition
to replay memory is largely ignored in calculating the storage budget. Arguing
different computational and storage budgets hinder fair comparison among CL
algorithms in practice, we propose to use floating point operations (FLOPs) and
total memory size in Byte as a metric for computational and memory budgets,
respectively, to compare and develop CL algorithms in the same 'total resource
budget.' To improve a CL method in a limited total budget, we propose adaptive
layer freezing that does not update the layers for less informative batches to
reduce computational costs with a negligible loss of accuracy. In addition, we
propose a memory retrieval method that allows the model to learn the same
amount of knowledge as using random retrieval in fewer iterations. Empirical
validations on the CIFAR-10/100, CLEAR-10/100, and ImageNet-1K datasets
demonstrate that the proposed approach outperforms the state-of-the-art methods
within the same total budget.
|
2410.15154 | Yin Li | Yin Li, Liangwei Wang, Shiyuan Piao, Boo-Ho Yang, Ziyue Li, Wei Zeng,
and Fugee Tsung | MCCoder: Streamlining Motion Control with LLM-Assisted Code Generation
and Rigorous Verification | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large Language Models (LLMs) have demonstrated significant potential in code
generation. However, in the factory automation sector, particularly motion
control, manual programming, alongside inefficient and unsafe debugging
practices, remains prevalent. This stems from the complex interplay of
mechanical and electrical systems and stringent safety requirements. Moreover,
most current AI-assisted motion control programming efforts focus on PLCs, with
little attention given to high-level languages and function libraries. To
address these challenges, we introduce MCCoder, an LLM-powered system tailored
for generating motion control code, integrated with a soft-motion controller.
MCCoder improves code generation through a structured workflow that combines
multitask decomposition, hybrid retrieval-augmented generation (RAG), and
iterative self-correction, utilizing a well-established motion library.
Additionally, it integrates a 3D simulator for intuitive motion validation and
logs of full motion trajectories for data verification, significantly enhancing
accuracy and safety. In the absence of benchmark datasets and metrics tailored
for evaluating motion control code generation, we propose MCEVAL, a dataset
spanning motion tasks of varying complexity. Experiments show that MCCoder
outperforms baseline models using Advanced RAG, achieving an overall
performance gain of 33.09% and a 131.77% improvement on complex tasks in the
MCEVAL dataset.
| [
{
"version": "v1",
"created": "Sat, 19 Oct 2024 16:46:21 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 06:03:20 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Li",
"Yin",
""
],
[
"Wang",
"Liangwei",
""
],
[
"Piao",
"Shiyuan",
""
],
[
"Yang",
"Boo-Ho",
""
],
[
"Li",
"Ziyue",
""
],
[
"Zeng",
"Wei",
""
],
[
"Tsung",
"Fugee",
""
]
] | TITLE: MCCoder: Streamlining Motion Control with LLM-Assisted Code Generation
and Rigorous Verification
ABSTRACT: Large Language Models (LLMs) have demonstrated significant potential in code
generation. However, in the factory automation sector, particularly motion
control, manual programming, alongside inefficient and unsafe debugging
practices, remains prevalent. This stems from the complex interplay of
mechanical and electrical systems and stringent safety requirements. Moreover,
most current AI-assisted motion control programming efforts focus on PLCs, with
little attention given to high-level languages and function libraries. To
address these challenges, we introduce MCCoder, an LLM-powered system tailored
for generating motion control code, integrated with a soft-motion controller.
MCCoder improves code generation through a structured workflow that combines
multitask decomposition, hybrid retrieval-augmented generation (RAG), and
iterative self-correction, utilizing a well-established motion library.
Additionally, it integrates a 3D simulator for intuitive motion validation and
logs of full motion trajectories for data verification, significantly enhancing
accuracy and safety. In the absence of benchmark datasets and metrics tailored
for evaluating motion control code generation, we propose MCEVAL, a dataset
spanning motion tasks of varying complexity. Experiments show that MCCoder
outperforms baseline models using Advanced RAG, achieving an overall
performance gain of 33.09% and a 131.77% improvement on complex tasks in the
MCEVAL dataset.
|
2410.18325 | Kim Sung-Bin | Kim Sung-Bin, Oh Hyun-Bin, JungMok Lee, Arda Senocak, Joon Son Chung,
Tae-Hyun Oh | AVHBench: A Cross-Modal Hallucination Benchmark for Audio-Visual Large
Language Models | ICLR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Following the success of Large Language Models (LLMs), expanding their
boundaries to new modalities represents a significant paradigm shift in
multimodal understanding. Human perception is inherently multimodal, relying
not only on text but also on auditory and visual cues for a complete
understanding of the world. In recognition of this fact, audio-visual LLMs have
recently emerged. Despite promising developments, the lack of dedicated
benchmarks poses challenges for understanding and evaluating models. In this
work, we show that audio-visual LLMs struggle to discern subtle relationships
between audio and visual signals, leading to hallucinations and highlighting
the need for reliable benchmarks. To address this, we introduce AVHBench, the
first comprehensive benchmark specifically designed to evaluate the perception
and comprehension capabilities of audio-visual LLMs. Our benchmark includes
tests for assessing hallucinations, as well as the cross-modal matching and
reasoning abilities of these models. Our results reveal that most existing
audio-visual LLMs struggle with hallucinations caused by cross-interactions
between modalities, due to their limited capacity to perceive complex
multimodal signals and their relationships. Additionally, we demonstrate that
simple training with our AVHBench improves robustness of audio-visual LLMs
against hallucinations. Dataset: https://github.com/kaist-ami/AVHBench
| [
{
"version": "v1",
"created": "Wed, 23 Oct 2024 23:36:06 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 08:14:35 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Sung-Bin",
"Kim",
""
],
[
"Hyun-Bin",
"Oh",
""
],
[
"Lee",
"JungMok",
""
],
[
"Senocak",
"Arda",
""
],
[
"Chung",
"Joon Son",
""
],
[
"Oh",
"Tae-Hyun",
""
]
] | TITLE: AVHBench: A Cross-Modal Hallucination Benchmark for Audio-Visual Large
Language Models
ABSTRACT: Following the success of Large Language Models (LLMs), expanding their
boundaries to new modalities represents a significant paradigm shift in
multimodal understanding. Human perception is inherently multimodal, relying
not only on text but also on auditory and visual cues for a complete
understanding of the world. In recognition of this fact, audio-visual LLMs have
recently emerged. Despite promising developments, the lack of dedicated
benchmarks poses challenges for understanding and evaluating models. In this
work, we show that audio-visual LLMs struggle to discern subtle relationships
between audio and visual signals, leading to hallucinations and highlighting
the need for reliable benchmarks. To address this, we introduce AVHBench, the
first comprehensive benchmark specifically designed to evaluate the perception
and comprehension capabilities of audio-visual LLMs. Our benchmark includes
tests for assessing hallucinations, as well as the cross-modal matching and
reasoning abilities of these models. Our results reveal that most existing
audio-visual LLMs struggle with hallucinations caused by cross-interactions
between modalities, due to their limited capacity to perceive complex
multimodal signals and their relationships. Additionally, we demonstrate that
simple training with our AVHBench improves robustness of audio-visual LLMs
against hallucinations. Dataset: https://github.com/kaist-ami/AVHBench
|
2410.18656 | Torbj{\o}rn Smith | Torbj{\o}rn Smith and Olav Egeland | Learning dissipative Hamiltonian dynamics with reproducing kernel
Hilbert spaces and random Fourier features | null | IFAC-PapersOnLine: The 4th Modeling, Estimation, and Control
Conference - 2024 | 10.1016/j.ifacol.2025.01.146 | null | cs.LG cs.RO | http://creativecommons.org/licenses/by/4.0/ | This paper presents a new method for learning dissipative Hamiltonian
dynamics from a limited and noisy dataset. The method uses the Helmholtz
decomposition to learn a vector field as the sum of a symplectic and a
dissipative vector field. The two vector fields are learned using two
reproducing kernel Hilbert spaces, defined by a symplectic and a curl-free
kernel, where the kernels are specialized to enforce odd symmetry. Random
Fourier features are used to approximate the kernels to reduce the dimension of
the optimization problem. The performance of the method is validated in
simulations for two dissipative Hamiltonian systems, and it is shown that the
method improves predictive accuracy significantly compared to a method where a
Gaussian separable kernel is used.
| [
{
"version": "v1",
"created": "Thu, 24 Oct 2024 11:35:39 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Smith",
"Torbjørn",
""
],
[
"Egeland",
"Olav",
""
]
] | TITLE: Learning dissipative Hamiltonian dynamics with reproducing kernel
Hilbert spaces and random Fourier features
ABSTRACT: This paper presents a new method for learning dissipative Hamiltonian
dynamics from a limited and noisy dataset. The method uses the Helmholtz
decomposition to learn a vector field as the sum of a symplectic and a
dissipative vector field. The two vector fields are learned using two
reproducing kernel Hilbert spaces, defined by a symplectic and a curl-free
kernel, where the kernels are specialized to enforce odd symmetry. Random
Fourier features are used to approximate the kernels to reduce the dimension of
the optimization problem. The performance of the method is validated in
simulations for two dissipative Hamiltonian systems, and it is shown that the
method improves predictive accuracy significantly compared to a method where a
Gaussian separable kernel is used.
|
2410.19371 | Talal Alrawajfeh | Talal Alrawajfeh, Joonas J\"alk\"o, Antti Honkela | Noise-Aware Differentially Private Variational Inference | null | null | null | null | stat.ML cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Differential privacy (DP) provides robust privacy guarantees for statistical
inference, but this can lead to unreliable results and biases in downstream
applications. While several noise-aware approaches have been proposed which
integrate DP perturbation into the inference, they are limited to specific
types of simple probabilistic models. In this work, we propose a novel method
for noise-aware approximate Bayesian inference based on stochastic gradient
variational inference which can also be applied to high-dimensional and
non-conjugate models. We also propose a more accurate evaluation method for
noise-aware posteriors. Empirically, our inference method has similar
performance to existing methods in the domain where they are applicable.
Outside this domain, we obtain accurate coverages on high-dimensional Bayesian
linear regression and well-calibrated predictive probabilities on Bayesian
logistic regression with the UCI Adult dataset.
| [
{
"version": "v1",
"created": "Fri, 25 Oct 2024 08:18:49 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 13:23:33 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Alrawajfeh",
"Talal",
""
],
[
"Jälkö",
"Joonas",
""
],
[
"Honkela",
"Antti",
""
]
] | TITLE: Noise-Aware Differentially Private Variational Inference
ABSTRACT: Differential privacy (DP) provides robust privacy guarantees for statistical
inference, but this can lead to unreliable results and biases in downstream
applications. While several noise-aware approaches have been proposed which
integrate DP perturbation into the inference, they are limited to specific
types of simple probabilistic models. In this work, we propose a novel method
for noise-aware approximate Bayesian inference based on stochastic gradient
variational inference which can also be applied to high-dimensional and
non-conjugate models. We also propose a more accurate evaluation method for
noise-aware posteriors. Empirically, our inference method has similar
performance to existing methods in the domain where they are applicable.
Outside this domain, we obtain accurate coverages on high-dimensional Bayesian
linear regression and well-calibrated predictive probabilities on Bayesian
logistic regression with the UCI Adult dataset.
|
2410.23642 | Ramin Nateghi | Ramin Nateghi, Ruoji Zhou, Madeline Saft, Marina Schnauss, Clayton
Neill, Ridwan Alam, Nicole Handa, Mitchell Huang, Eric V Li, Jeffery A
Goldstein, Edward M Schaeffer, Menatalla Nadim, Fattaneh Pourakpour, Bogdan
Isaila, Christopher Felicelli, Vikas Mehta, Behtash G Nezami, Ashley Ross,
Ximing Yang, Lee AD Cooper | Development and prospective validation of a prostate cancer detection,
grading, and workflow optimization system at an academic medical center | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Artificial intelligence may assist healthcare systems in meeting increasing
demand for pathology services while maintaining diagnostic quality and reducing
turnaround time and costs. We aimed to investigate the performance of an
institutionally developed system for prostate cancer detection, grading, and
workflow optimization and to contrast this with commercial alternatives. From
August 2021 to March 2023, we scanned 21,396 slides from 1,147 patients
receiving prostate biopsy. We developed models for cancer detection, grading,
and screening of equivocal cases for IHC ordering. We compared the performance
of task-specific prostate models with general-purpose foundation models in a
prospectively collected dataset that reflects our patient population. We also
evaluated the contributions of a bespoke model designed to improve sensitivity
to small cancer foci and perception of low-resolution patterns. We found high
concordance with pathologist ground-truth in detection (area under curve 98.5%,
sensitivity 95.0%, and specificity 97.8%), ISUP grading (Cohen's kappa 0.869),
grade group 3 or higher classification (area under curve 97.5%, sensitivity
94.9%, specificity 96.6%). Screening models could correctly classify 55% of
biopsy blocks where immunohistochemistry was ordered with a 1.4% error rate. No
statistically significant differences were observed between task-specific and
foundation models in cancer detection, although the task-specific model is
significantly smaller and faster. Institutions like academic medical centers
that have high scanning volumes and report abstraction capabilities can develop
highly accurate computational pathology models for internal use. These models
have the potential to aid in a quality control role and to improve resource
allocation and workflow in the pathology lab to help meet future challenges in
prostate cancer diagnosis.
| [
{
"version": "v1",
"created": "Thu, 31 Oct 2024 05:29:18 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 22:39:29 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Nateghi",
"Ramin",
""
],
[
"Zhou",
"Ruoji",
""
],
[
"Saft",
"Madeline",
""
],
[
"Schnauss",
"Marina",
""
],
[
"Neill",
"Clayton",
""
],
[
"Alam",
"Ridwan",
""
],
[
"Handa",
"Nicole",
""
],
[
"Huang",
"Mitchell",
""
],
[
"Li",
"Eric V",
""
],
[
"Goldstein",
"Jeffery A",
""
],
[
"Schaeffer",
"Edward M",
""
],
[
"Nadim",
"Menatalla",
""
],
[
"Pourakpour",
"Fattaneh",
""
],
[
"Isaila",
"Bogdan",
""
],
[
"Felicelli",
"Christopher",
""
],
[
"Mehta",
"Vikas",
""
],
[
"Nezami",
"Behtash G",
""
],
[
"Ross",
"Ashley",
""
],
[
"Yang",
"Ximing",
""
],
[
"Cooper",
"Lee AD",
""
]
] | TITLE: Development and prospective validation of a prostate cancer detection,
grading, and workflow optimization system at an academic medical center
ABSTRACT: Artificial intelligence may assist healthcare systems in meeting increasing
demand for pathology services while maintaining diagnostic quality and reducing
turnaround time and costs. We aimed to investigate the performance of an
institutionally developed system for prostate cancer detection, grading, and
workflow optimization and to contrast this with commercial alternatives. From
August 2021 to March 2023, we scanned 21,396 slides from 1,147 patients
receiving prostate biopsy. We developed models for cancer detection, grading,
and screening of equivocal cases for IHC ordering. We compared the performance
of task-specific prostate models with general-purpose foundation models in a
prospectively collected dataset that reflects our patient population. We also
evaluated the contributions of a bespoke model designed to improve sensitivity
to small cancer foci and perception of low-resolution patterns. We found high
concordance with pathologist ground-truth in detection (area under curve 98.5%,
sensitivity 95.0%, and specificity 97.8%), ISUP grading (Cohen's kappa 0.869),
grade group 3 or higher classification (area under curve 97.5%, sensitivity
94.9%, specificity 96.6%). Screening models could correctly classify 55% of
biopsy blocks where immunohistochemistry was ordered with a 1.4% error rate. No
statistically significant differences were observed between task-specific and
foundation models in cancer detection, although the task-specific model is
significantly smaller and faster. Institutions like academic medical centers
that have high scanning volumes and report abstraction capabilities can develop
highly accurate computational pathology models for internal use. These models
have the potential to aid in a quality control role and to improve resource
allocation and workflow in the pathology lab to help meet future challenges in
prostate cancer diagnosis.
|
2410.23996 | Chenyu Wang | Chenyu Wang, Sharut Gupta, Xinyi Zhang, Sana Tonekaboni, Stefanie
Jegelka, Tommi Jaakkola, Caroline Uhler | An Information Criterion for Controlled Disentanglement of Multimodal
Data | ICLR 2025 | null | null | null | cs.LG cs.AI cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | Multimodal representation learning seeks to relate and decompose information
inherent in multiple modalities. By disentangling modality-specific information
from information that is shared across modalities, we can improve
interpretability and robustness and enable downstream tasks such as the
generation of counterfactual outcomes. Separating the two types of information
is challenging since they are often deeply entangled in many real-world
applications. We propose Disentangled Self-Supervised Learning
(DisentangledSSL), a novel self-supervised approach for learning disentangled
representations. We present a comprehensive analysis of the optimality of each
disentangled representation, particularly focusing on the scenario not covered
in prior work where the so-called Minimum Necessary Information (MNI) point is
not attainable. We demonstrate that DisentangledSSL successfully learns shared
and modality-specific features on multiple synthetic and real-world datasets
and consistently outperforms baselines on various downstream tasks, including
prediction tasks for vision-language data, as well as molecule-phenotype
retrieval tasks for biological data. The code is available at
https://github.com/uhlerlab/DisentangledSSL.
| [
{
"version": "v1",
"created": "Thu, 31 Oct 2024 14:57:31 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 16:27:27 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Chenyu",
""
],
[
"Gupta",
"Sharut",
""
],
[
"Zhang",
"Xinyi",
""
],
[
"Tonekaboni",
"Sana",
""
],
[
"Jegelka",
"Stefanie",
""
],
[
"Jaakkola",
"Tommi",
""
],
[
"Uhler",
"Caroline",
""
]
] | TITLE: An Information Criterion for Controlled Disentanglement of Multimodal
Data
ABSTRACT: Multimodal representation learning seeks to relate and decompose information
inherent in multiple modalities. By disentangling modality-specific information
from information that is shared across modalities, we can improve
interpretability and robustness and enable downstream tasks such as the
generation of counterfactual outcomes. Separating the two types of information
is challenging since they are often deeply entangled in many real-world
applications. We propose Disentangled Self-Supervised Learning
(DisentangledSSL), a novel self-supervised approach for learning disentangled
representations. We present a comprehensive analysis of the optimality of each
disentangled representation, particularly focusing on the scenario not covered
in prior work where the so-called Minimum Necessary Information (MNI) point is
not attainable. We demonstrate that DisentangledSSL successfully learns shared
and modality-specific features on multiple synthetic and real-world datasets
and consistently outperforms baselines on various downstream tasks, including
prediction tasks for vision-language data, as well as molecule-phenotype
retrieval tasks for biological data. The code is available at
https://github.com/uhlerlab/DisentangledSSL.
|
2410.24160 | Fu Feng | Fu Feng, Yucheng Xie, Xu Yang, Jing Wang, Xin Geng | Redefining <Creative> in Dictionary: Towards an Enhanced Semantic
Understanding of Creative Generation | null | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ``Creative'' remains an inherently abstract concept for both humans and
diffusion models. While text-to-image (T2I) diffusion models can easily
generate out-of-distribution concepts like ``a blue banana'', they struggle
with generating combinatorial objects such as ``a creative mixture that
resembles a lettuce and a mantis'', due to difficulties in understanding the
semantic depth of ``creative''. Current methods rely heavily on synthesizing
reference prompts or images to achieve a creative effect, typically requiring
retraining for each unique creative output-a process that is computationally
intensive and limits practical applications. To address this, we introduce
CreTok, which brings meta-creativity to diffusion models by redefining
``creative'' as a new token, \texttt{<CreTok>}, thus enhancing models' semantic
understanding for combinatorial creativity. CreTok achieves such redefinition
by iteratively sampling diverse text pairs from our proposed CangJie dataset to
form adaptive prompts and restrictive prompts, and then optimizing the
similarity between their respective text embeddings. Extensive experiments
demonstrate that <CreTok> enables the universal and direct generation of
combinatorial creativity across diverse concepts without additional training,
achieving state-of-the-art performance with improved text-image alignment and
higher human preference ratings. Code will be made available at
https://github.com/fu-feng/CreTok.
| [
{
"version": "v1",
"created": "Thu, 31 Oct 2024 17:19:03 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Nov 2024 10:22:59 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 06:33:07 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Feng",
"Fu",
""
],
[
"Xie",
"Yucheng",
""
],
[
"Yang",
"Xu",
""
],
[
"Wang",
"Jing",
""
],
[
"Geng",
"Xin",
""
]
] | TITLE: Redefining <Creative> in Dictionary: Towards an Enhanced Semantic
Understanding of Creative Generation
ABSTRACT: ``Creative'' remains an inherently abstract concept for both humans and
diffusion models. While text-to-image (T2I) diffusion models can easily
generate out-of-distribution concepts like ``a blue banana'', they struggle
with generating combinatorial objects such as ``a creative mixture that
resembles a lettuce and a mantis'', due to difficulties in understanding the
semantic depth of ``creative''. Current methods rely heavily on synthesizing
reference prompts or images to achieve a creative effect, typically requiring
retraining for each unique creative output-a process that is computationally
intensive and limits practical applications. To address this, we introduce
CreTok, which brings meta-creativity to diffusion models by redefining
``creative'' as a new token, \texttt{<CreTok>}, thus enhancing models' semantic
understanding for combinatorial creativity. CreTok achieves such redefinition
by iteratively sampling diverse text pairs from our proposed CangJie dataset to
form adaptive prompts and restrictive prompts, and then optimizing the
similarity between their respective text embeddings. Extensive experiments
demonstrate that <CreTok> enables the universal and direct generation of
combinatorial creativity across diverse concepts without additional training,
achieving state-of-the-art performance with improved text-image alignment and
higher human preference ratings. Code will be made available at
https://github.com/fu-feng/CreTok.
|
2411.01841 | XiaoBei Niu | Shi Dong and Xiaobei Niu and Rui Zhong and Zhifeng Wang and Mingzhang
Zuo | Leveraging Label Semantics and Meta-Label Refinement for Multi-Label
Question Classification | null | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate annotation of educational resources is crucial for effective
personalized learning and resource recommendation in online education. However,
fine-grained knowledge labels often overlap or share similarities, making it
difficult for existing multi-label classification methods to differentiate
them. The label distribution imbalance due to sparsity of human annotations
further intensifies these challenges. To address these issues, this paper
introduces RR2QC, a novel Retrieval Reranking method for multi-label Question
Classification by leveraging label semantics and meta-label refinement. First,
RR2QC improves the pre-training strategy by utilizing semantic relationships
within and across label groups. Second, it introduces a class center learning
task to align questions with label semantics during downstream training.
Finally, this method decomposes labels into meta-labels and uses a meta-label
classifier to rerank the retrieved label sequences. In doing so, RR2QC enhances
the understanding and prediction capability of long-tail labels by learning
from meta-labels that frequently appear in other labels. Additionally, a
mathematical LLM is used to generate solutions for questions, extracting latent
information to further refine the model's insights. Experimental results show
that RR2QC outperforms existing methods in Precision@K and F1 scores across
multiple educational datasets, demonstrating its effectiveness for online
education applications. The code and datasets are available at
https://github.com/78Erii/RR2QC.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2024 06:27:14 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 17:31:43 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Dong",
"Shi",
""
],
[
"Niu",
"Xiaobei",
""
],
[
"Zhong",
"Rui",
""
],
[
"Wang",
"Zhifeng",
""
],
[
"Zuo",
"Mingzhang",
""
]
] | TITLE: Leveraging Label Semantics and Meta-Label Refinement for Multi-Label
Question Classification
ABSTRACT: Accurate annotation of educational resources is crucial for effective
personalized learning and resource recommendation in online education. However,
fine-grained knowledge labels often overlap or share similarities, making it
difficult for existing multi-label classification methods to differentiate
them. The label distribution imbalance due to sparsity of human annotations
further intensifies these challenges. To address these issues, this paper
introduces RR2QC, a novel Retrieval Reranking method for multi-label Question
Classification by leveraging label semantics and meta-label refinement. First,
RR2QC improves the pre-training strategy by utilizing semantic relationships
within and across label groups. Second, it introduces a class center learning
task to align questions with label semantics during downstream training.
Finally, this method decomposes labels into meta-labels and uses a meta-label
classifier to rerank the retrieved label sequences. In doing so, RR2QC enhances
the understanding and prediction capability of long-tail labels by learning
from meta-labels that frequently appear in other labels. Additionally, a
mathematical LLM is used to generate solutions for questions, extracting latent
information to further refine the model's insights. Experimental results show
that RR2QC outperforms existing methods in Precision@K and F1 scores across
multiple educational datasets, demonstrating its effectiveness for online
education applications. The code and datasets are available at
https://github.com/78Erii/RR2QC.
|
2411.02136 | Robert Fonod | Robert Fonod and Haechan Cho and Hwasoo Yeo and Nikolas Geroliminis | Advanced computer vision for extracting georeferenced vehicle
trajectories from drone imagery | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper presents a framework for extracting georeferenced vehicle
trajectories from high-altitude drone imagery, addressing key challenges in
urban traffic monitoring and the limitations of traditional ground-based
systems. Our approach integrates several novel contributions, including a
tailored object detector optimized for high-altitude bird's-eye view
perspectives, a unique track stabilization method that uses detected vehicle
bounding boxes as exclusion masks during image registration, and an orthophoto
and master frame-based georeferencing strategy that enhances consistent
alignment across multiple drone viewpoints. Additionally, our framework
features robust vehicle dimension estimation and detailed road segmentation,
enabling comprehensive traffic analysis. Conducted in the Songdo International
Business District, South Korea, the study utilized a multi-drone experiment
covering 20 intersections, capturing approximately 12TB of 4K video data over
four days. The framework produced two high-quality datasets: the Songdo Traffic
dataset, comprising approximately 700,000 unique vehicle trajectories, and the
Songdo Vision dataset, containing over 5,000 human-annotated images with about
300,000 vehicle instances in four classes. Comparisons with high-precision
sensor data from an instrumented probe vehicle highlight the accuracy and
consistency of our extraction pipeline in dense urban environments. The public
release of Songdo Traffic and Songdo Vision, and the complete source code for
the extraction pipeline, establish new benchmarks in data quality,
reproducibility, and scalability in traffic research. Results demonstrate the
potential of integrating drone technology with advanced computer vision for
precise and cost-effective urban traffic monitoring, providing valuable
resources for developing intelligent transportation systems and enhancing
traffic management strategies.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2024 14:49:01 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 09:25:50 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Fonod",
"Robert",
""
],
[
"Cho",
"Haechan",
""
],
[
"Yeo",
"Hwasoo",
""
],
[
"Geroliminis",
"Nikolas",
""
]
] | TITLE: Advanced computer vision for extracting georeferenced vehicle
trajectories from drone imagery
ABSTRACT: This paper presents a framework for extracting georeferenced vehicle
trajectories from high-altitude drone imagery, addressing key challenges in
urban traffic monitoring and the limitations of traditional ground-based
systems. Our approach integrates several novel contributions, including a
tailored object detector optimized for high-altitude bird's-eye view
perspectives, a unique track stabilization method that uses detected vehicle
bounding boxes as exclusion masks during image registration, and an orthophoto
and master frame-based georeferencing strategy that enhances consistent
alignment across multiple drone viewpoints. Additionally, our framework
features robust vehicle dimension estimation and detailed road segmentation,
enabling comprehensive traffic analysis. Conducted in the Songdo International
Business District, South Korea, the study utilized a multi-drone experiment
covering 20 intersections, capturing approximately 12TB of 4K video data over
four days. The framework produced two high-quality datasets: the Songdo Traffic
dataset, comprising approximately 700,000 unique vehicle trajectories, and the
Songdo Vision dataset, containing over 5,000 human-annotated images with about
300,000 vehicle instances in four classes. Comparisons with high-precision
sensor data from an instrumented probe vehicle highlight the accuracy and
consistency of our extraction pipeline in dense urban environments. The public
release of Songdo Traffic and Songdo Vision, and the complete source code for
the extraction pipeline, establish new benchmarks in data quality,
reproducibility, and scalability in traffic research. Results demonstrate the
potential of integrating drone technology with advanced computer vision for
precise and cost-effective urban traffic monitoring, providing valuable
resources for developing intelligent transportation systems and enhancing
traffic management strategies.
|
2411.06518 | Yuewen Sun | Yuewen Sun, Lingjing Kong, Guangyi Chen, Loka Li, Gongxu Luo, Zijian
Li, Yixuan Zhang, Yujia Zheng, Mengyue Yang, Petar Stojanov, Eran Segal, Eric
P. Xing, Kun Zhang | Causal Representation Learning from Multimodal Biomedical Observations | null | null | null | null | cs.LG q-bio.QM stat.ME | http://creativecommons.org/licenses/by/4.0/ | Prevalent in biomedical applications (e.g., human phenotype research),
multimodal datasets can provide valuable insights into the underlying
physiological mechanisms. However, current machine learning (ML) models
designed to analyze these datasets often lack interpretability and
identifiability guarantees, which are essential for biomedical research. Recent
advances in causal representation learning have shown promise in identifying
interpretable latent causal variables with formal theoretical guarantees.
Unfortunately, most current work on multimodal distributions either relies on
restrictive parametric assumptions or yields only coarse identification
results, limiting their applicability to biomedical research that favors a
detailed understanding of the mechanisms.
In this work, we aim to develop flexible identification conditions for
multimodal data and principled methods to facilitate the understanding of
biomedical datasets. Theoretically, we consider a nonparametric latent
distribution (c.f., parametric assumptions in previous work) that allows for
causal relationships across potentially different modalities. We establish
identifiability guarantees for each latent component, extending the subspace
identification results from previous work. Our key theoretical contribution is
the structural sparsity of causal connections between modalities, which, as we
will discuss, is natural for a large collection of biomedical systems.
Empirically, we present a practical framework to instantiate our theoretical
insights. We demonstrate the effectiveness of our approach through extensive
experiments on both numerical and synthetic datasets. Results on a real-world
human phenotype dataset are consistent with established biomedical research,
validating our theoretical and methodological framework.
| [
{
"version": "v1",
"created": "Sun, 10 Nov 2024 16:40:27 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 08:56:49 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 13:07:14 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Sun",
"Yuewen",
""
],
[
"Kong",
"Lingjing",
""
],
[
"Chen",
"Guangyi",
""
],
[
"Li",
"Loka",
""
],
[
"Luo",
"Gongxu",
""
],
[
"Li",
"Zijian",
""
],
[
"Zhang",
"Yixuan",
""
],
[
"Zheng",
"Yujia",
""
],
[
"Yang",
"Mengyue",
""
],
[
"Stojanov",
"Petar",
""
],
[
"Segal",
"Eran",
""
],
[
"Xing",
"Eric P.",
""
],
[
"Zhang",
"Kun",
""
]
] | TITLE: Causal Representation Learning from Multimodal Biomedical Observations
ABSTRACT: Prevalent in biomedical applications (e.g., human phenotype research),
multimodal datasets can provide valuable insights into the underlying
physiological mechanisms. However, current machine learning (ML) models
designed to analyze these datasets often lack interpretability and
identifiability guarantees, which are essential for biomedical research. Recent
advances in causal representation learning have shown promise in identifying
interpretable latent causal variables with formal theoretical guarantees.
Unfortunately, most current work on multimodal distributions either relies on
restrictive parametric assumptions or yields only coarse identification
results, limiting their applicability to biomedical research that favors a
detailed understanding of the mechanisms.
In this work, we aim to develop flexible identification conditions for
multimodal data and principled methods to facilitate the understanding of
biomedical datasets. Theoretically, we consider a nonparametric latent
distribution (cf. parametric assumptions in previous work) that allows for
causal relationships across potentially different modalities. We establish
identifiability guarantees for each latent component, extending the subspace
identification results from previous work. Our key theoretical contribution is
the structural sparsity of causal connections between modalities, which, as we
will discuss, is natural for a large collection of biomedical systems.
Empirically, we present a practical framework to instantiate our theoretical
insights. We demonstrate the effectiveness of our approach through extensive
experiments on both numerical and synthetic datasets. Results on a real-world
human phenotype dataset are consistent with established biomedical research,
validating our theoretical and methodological framework.
|
2411.06802 | Yuxiu Shao | Yuxiu Shao (1 and 2), David Dahmen (3), Stefano Recanatesi (4), Eric
Shea-Brown (5 and 6), Srdjan Ostojic (2) ((1) School of Systems Science,
Beijing Normal University, China, (2) Laboratoire de Neurosciences Cognitives
et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research
University, France, (3) Institute for Advanced Simulation (IAS-6)
Computational and Systems Neuroscience, J\"ulich Research Center, Germany,
(4) Technion, Israel Institute of Technology, Israel, (5) Department of
Applied Mathematics and Computational Neuroscience Center, University of
Washington, USA, (6) Allen Institute for Brain Science, USA) | Identifying the impact of local connectivity patterns on dynamics in
excitatory-inhibitory networks | 30 pages, 17 figures | null | null | null | q-bio.NC cond-mat.dis-nn cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Networks of excitatory and inhibitory (EI) neurons form a canonical circuit
in the brain. Seminal theoretical results on dynamics of such networks are
based on the assumption that synaptic strengths depend on the type of neurons
they connect, but are otherwise statistically independent. Recent synaptic
physiology datasets however highlight the prominence of specific connectivity
patterns that go well beyond what is expected from independent connections.
While decades of influential research have demonstrated the strong role of the
basic EI cell type structure, to what extent additional connectivity features
influence dynamics remains to be fully determined. Here we examine the effects
of pairwise connectivity motifs on the linear dynamics in EI networks using an
analytical framework that approximates the connectivity in terms of low-rank
structures. This low-rank approximation is based on a mathematical derivation
of the dominant eigenvalues of the connectivity matrix and predicts the impact
on responses to external inputs of connectivity motifs and their interactions
with cell-type structure. Our results reveal that a particular pattern of
connectivity, chain motifs, has a much stronger impact on dominant eigenmodes
than other pairwise motifs. An overrepresentation of chain motifs induces a
strong positive eigenvalue in inhibition-dominated networks and generates a
potential instability that requires revisiting the classical
excitation-inhibition balance criteria. Examining effects of external inputs,
we show that chain motifs can on their own induce paradoxical responses where
an increased input to inhibitory neurons leads to a decrease in their activity
due to the recurrent feedback. These findings have direct implications for the
interpretation of experiments in which responses to optogenetic perturbations
are measured and used to infer the dynamical regime of cortical circuits.
| [
{
"version": "v1",
"created": "Mon, 11 Nov 2024 08:57:44 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Nov 2024 13:27:46 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Mar 2025 13:59:15 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Shao",
"Yuxiu",
"",
"1 and 2"
],
[
"Dahmen",
"David",
"",
"5 and 6"
],
[
"Recanatesi",
"Stefano",
"",
"5 and 6"
],
[
"Shea-Brown",
"Eric",
"",
"5 and 6"
],
[
"Ostojic",
"Srdjan",
""
]
] | TITLE: Identifying the impact of local connectivity patterns on dynamics in
excitatory-inhibitory networks
ABSTRACT: Networks of excitatory and inhibitory (EI) neurons form a canonical circuit
in the brain. Seminal theoretical results on dynamics of such networks are
based on the assumption that synaptic strengths depend on the type of neurons
they connect, but are otherwise statistically independent. Recent synaptic
physiology datasets however highlight the prominence of specific connectivity
patterns that go well beyond what is expected from independent connections.
While decades of influential research have demonstrated the strong role of the
basic EI cell type structure, to what extent additional connectivity features
influence dynamics remains to be fully determined. Here we examine the effects
of pairwise connectivity motifs on the linear dynamics in EI networks using an
analytical framework that approximates the connectivity in terms of low-rank
structures. This low-rank approximation is based on a mathematical derivation
of the dominant eigenvalues of the connectivity matrix and predicts the impact
on responses to external inputs of connectivity motifs and their interactions
with cell-type structure. Our results reveal that a particular pattern of
connectivity, chain motifs, has a much stronger impact on dominant eigenmodes
than other pairwise motifs. An overrepresentation of chain motifs induces a
strong positive eigenvalue in inhibition-dominated networks and generates a
potential instability that requires revisiting the classical
excitation-inhibition balance criteria. Examining effects of external inputs,
we show that chain motifs can on their own induce paradoxical responses where
an increased input to inhibitory neurons leads to a decrease in their activity
due to the recurrent feedback. These findings have direct implications for the
interpretation of experiments in which responses to optogenetic perturbations
are measured and used to infer the dynamical regime of cortical circuits.
|
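The following NumPy sketch illustrates only the general low-rank intuition referenced in the abstract above: the outlier eigenvalue of a block-structured EI matrix is well predicted by its rank-one cell-type mean. It does not reproduce the paper's motif-resolved derivation, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, f = 1000, 0.8                        # neurons, fraction excitatory
NE = int(f * N); NI = N - NE
w, g, sigma = 10.0, 5.0, 1.0            # E weight scale, I/E ratio, fluctuation scale

# Mean weight depends only on the presynaptic cell type: a rank-one structure.
m = np.concatenate([np.full(NE, w / N), np.full(NI, -g * w / N)])
M = np.outer(np.ones(N), m)

# Full connectivity = cell-type mean + independent Gaussian fluctuations.
J = M + rng.normal(0.0, sigma / np.sqrt(N), size=(N, N))
eigs = np.linalg.eigvals(J)

predicted = m.sum()                     # the single nonzero eigenvalue of M
observed = eigs[np.argmin(np.abs(eigs - predicted))]
print(f"rank-one prediction : {predicted:.3f}")
print(f"closest eigenvalue  : {observed.real:.3f} (imag {observed.imag:.3f})")
print(f"bulk radius (approx): {sigma:.3f}")
```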
2411.06976 | He Huang | He Huang and Wenjie Huang and Qi Yang and Yiling Xu and Zhu li | A Hierarchical Compression Technique for 3D Gaussian Splatting
Compression | null | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D Gaussian Splatting (GS) demonstrates excellent rendering quality and
generation speed in novel view synthesis. However, substantial data size poses
challenges for storage and transmission, making 3D GS compression an essential
technology. Current 3D GS compression research primarily focuses on developing
more compact scene representations, such as converting explicit 3D GS data into
implicit forms. In contrast, compression of the GS data itself has hardly been
explored. To address this gap, we propose a Hierarchical GS Compression (HGSC)
technique. Initially, we prune unimportant Gaussians based on importance scores
derived from both global and local significance, effectively reducing
redundancy while maintaining visual quality. An Octree structure is used to
compress 3D positions. Based on the 3D GS Octree, we implement a hierarchical
attribute compression strategy by employing a KD-tree to partition the 3D GS
into multiple blocks. We apply farthest point sampling to select anchor
primitives within each block and others as non-anchor primitives with varying
Levels of Details (LoDs). Anchor primitives serve as reference points for
predicting non-anchor primitives across different LoDs to reduce spatial
redundancy. For anchor primitives, we use the region adaptive hierarchical
transform to achieve near-lossless compression of various attributes. For
non-anchor primitives, each is predicted based on the k-nearest anchor
primitives. To further minimize prediction errors, the reconstructed LoD and
anchor primitives are combined to form new anchor primitives to predict the
next LoD. Our method notably achieves superior compression quality and a
significant data size reduction of over 4.5 times compared to the
state-of-the-art compression method on small-scene datasets.
| [
{
"version": "v1",
"created": "Mon, 11 Nov 2024 13:34:24 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 12:12:03 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Huang",
"He",
""
],
[
"Huang",
"Wenjie",
""
],
[
"Yang",
"Qi",
""
],
[
"Xu",
"Yiling",
""
],
[
"li",
"Zhu",
""
]
] | TITLE: A Hierarchical Compression Technique for 3D Gaussian Splatting
Compression
ABSTRACT: 3D Gaussian Splatting (GS) demonstrates excellent rendering quality and
generation speed in novel view synthesis. However, substantial data size poses
challenges for storage and transmission, making 3D GS compression an essential
technology. Current 3D GS compression research primarily focuses on developing
more compact scene representations, such as converting explicit 3D GS data into
implicit forms. In contrast, compression of the GS data itself has hardly been
explored. To address this gap, we propose a Hierarchical GS Compression (HGSC)
technique. Initially, we prune unimportant Gaussians based on importance scores
derived from both global and local significance, effectively reducing
redundancy while maintaining visual quality. An Octree structure is used to
compress 3D positions. Based on the 3D GS Octree, we implement a hierarchical
attribute compression strategy by employing a KD-tree to partition the 3D GS
into multiple blocks. We apply farthest point sampling to select anchor
primitives within each block and others as non-anchor primitives with varying
Levels of Details (LoDs). Anchor primitives serve as reference points for
predicting non-anchor primitives across different LoDs to reduce spatial
redundancy. For anchor primitives, we use the region adaptive hierarchical
transform to achieve near-lossless compression of various attributes. For
non-anchor primitives, each is predicted based on the k-nearest anchor
primitives. To further minimize prediction errors, the reconstructed LoD and
anchor primitives are combined to form new anchor primitives to predict the
next LoD. Our method notably achieves superior compression quality and a
significant data size reduction of over 4.5 times compared to the
state-of-the-art compression method on small-scene datasets.
|
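The anchor-selection step in the record above relies on farthest point sampling within each KD-tree block. A plain NumPy sketch of that generic subroutine follows; the block size and anchor count are arbitrary, not the paper's settings.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Select k anchor indices from an (N, 3) array of Gaussian centers.

    Greedy farthest point sampling: each new anchor is the point farthest
    from all anchors chosen so far, giving good spatial coverage within a
    block before the remaining points are treated as non-anchors.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    selected = np.empty(k, dtype=np.int64)
    selected[0] = rng.integers(n)
    dist = np.linalg.norm(points - points[selected[0]], axis=1)
    for i in range(1, k):
        selected[i] = int(np.argmax(dist))
        dist = np.minimum(dist, np.linalg.norm(points - points[selected[i]], axis=1))
    return selected

# Example: pick 64 anchor primitives from a block of 4096 Gaussian centers.
block = np.random.rand(4096, 3)
anchors = farthest_point_sampling(block, k=64)
non_anchors = np.setdiff1d(np.arange(block.shape[0]), anchors)
```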
2411.07107 | Brian DuSell | Alexandra Butoi and Ghazal Khalighinejad and Anej Svete and Josef
Valvoda and Ryan Cotterell and Brian DuSell | Training Neural Networks as Recognizers of Formal Languages | 44 pages, 3 figures. ICLR 2025 | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Characterizing the computational power of neural network architectures in
terms of formal language theory remains a crucial line of research, as it
describes lower and upper bounds on the reasoning capabilities of modern AI.
However, when empirically testing these bounds, existing work often leaves a
discrepancy between experiments and the formal claims they are meant to
support. The problem is that formal language theory pertains specifically to
recognizers: machines that receive a string as input and classify whether it
belongs to a language. On the other hand, it is common instead to evaluate
language models on proxy tasks, e.g., language modeling or sequence-to-sequence
transduction, that are similar in only an informal sense to the underlying
theory. We correct this mismatch by training and evaluating neural networks
directly as binary classifiers of strings, using a general method that can be
applied to a wide variety of languages. As part of this, we extend an algorithm
recently proposed by Sn{\ae}bjarnarson et al. (2025) for efficient
length-controlled sampling of strings from regular languages. We provide
results on a variety of languages across the Chomsky hierarchy for three neural
architectures: a simple RNN, an LSTM, and a causally-masked transformer. We
find that the RNN and LSTM often outperform the transformer, and that auxiliary
training objectives such as language modeling can help, although no single
objective uniformly improves performance across languages and architectures.
Our contributions will facilitate theoretically sound empirical testing of
language recognition claims in future work. We have released our datasets as a
benchmark called FLaRe (Formal Language Recognition), along with our code.
| [
{
"version": "v1",
"created": "Mon, 11 Nov 2024 16:33:25 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 14:51:27 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Butoi",
"Alexandra",
""
],
[
"Khalighinejad",
"Ghazal",
""
],
[
"Svete",
"Anej",
""
],
[
"Valvoda",
"Josef",
""
],
[
"Cotterell",
"Ryan",
""
],
[
"DuSell",
"Brian",
""
]
] | TITLE: Training Neural Networks as Recognizers of Formal Languages
ABSTRACT: Characterizing the computational power of neural network architectures in
terms of formal language theory remains a crucial line of research, as it
describes lower and upper bounds on the reasoning capabilities of modern AI.
However, when empirically testing these bounds, existing work often leaves a
discrepancy between experiments and the formal claims they are meant to
support. The problem is that formal language theory pertains specifically to
recognizers: machines that receive a string as input and classify whether it
belongs to a language. On the other hand, it is common instead to evaluate
language models on proxy tasks, e.g., language modeling or sequence-to-sequence
transduction, that are similar in only an informal sense to the underlying
theory. We correct this mismatch by training and evaluating neural networks
directly as binary classifiers of strings, using a general method that can be
applied to a wide variety of languages. As part of this, we extend an algorithm
recently proposed by Sn{\ae}bjarnarson et al. (2025) for efficient
length-controlled sampling of strings from regular languages. We provide
results on a variety of languages across the Chomsky hierarchy for three neural
architectures: a simple RNN, an LSTM, and a causally-masked transformer. We
find that the RNN and LSTM often outperform the transformer, and that auxiliary
training objectives such as language modeling can help, although no single
objective uniformly improves performance across languages and architectures.
Our contributions will facilitate theoretically sound empirical testing of
language recognition claims in future work. We have released our datasets as a
benchmark called FLaRe (Formal Language Recognition), along with our code.
|
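A minimal sketch of the recognizer setup described above: a network trained directly as a binary classifier of string membership, here for a toy regular language (strings with an even number of `a`s) and a small PyTorch LSTM. The language, sampling scheme, and hyperparameters are illustrative and unrelated to the paper's FLaRe benchmark or length-controlled sampler.

```python
import torch
import torch.nn as nn

VOCAB = {"a": 0, "b": 1}

def sample_batch(batch_size=64, max_len=20):
    lengths = torch.randint(1, max_len + 1, (batch_size,))
    seqs = torch.zeros(batch_size, max_len, dtype=torch.long)
    labels = torch.zeros(batch_size)
    for i, L in enumerate(lengths):
        s = torch.randint(0, 2, (L,))
        seqs[i, :L] = s
        labels[i] = float((s == VOCAB["a"]).sum() % 2 == 0)  # membership label
    return seqs, lengths, labels

class LSTMRecognizer(nn.Module):
    def __init__(self, vocab=2, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, seqs, lengths):
        out, _ = self.lstm(self.emb(seqs))
        # Read the hidden state at the last real (non-padding) position.
        last = out[torch.arange(seqs.size(0)), lengths - 1]
        return self.head(last).squeeze(-1)  # logit for "string is in the language"

model = LSTMRecognizer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(1500):
    seqs, lengths, labels = sample_batch()
    loss = loss_fn(model(seqs, lengths), labels)
    opt.zero_grad(); loss.backward(); opt.step()

seqs, lengths, labels = sample_batch(512)
acc = ((model(seqs, lengths) > 0).float() == labels).float().mean()
print(f"held-out accuracy: {acc:.3f}")
```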
2411.07758 | Wen Dongcheng | Ran Lingyan, Wen Dongcheng, Zhuo Tao, Zhang Shizhou, Zhang Xiuwei and
Zhang Yanning | AdaSemiCD: An Adaptive Semi-Supervised Change Detection Method Based on
Pseudo-Label Evaluation | Accepted by IEEE Transactions on Geoscience and Remote Sensing(TGRS) | null | 10.1109/TGRS.2025.3551504 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Change Detection (CD) is an essential field in remote sensing, with a primary
focus on identifying areas of change in bi-temporal image pairs captured at
varying intervals of the same region by a satellite. The data annotation
process for the CD task is both time-consuming and labor-intensive. To make
better use of the scarce labeled data and abundant unlabeled data, we present
an adaptive dynamic semi-supervised learning method, AdaSemiCD, to improve the
use of pseudo-labels and optimize the training process. Initially, due to the
extreme class imbalance inherent in CD, the model is more inclined to focus on
the background class, and it is easy to confuse the boundary of the target
object. Considering these two points, we develop a measurable evaluation metric
for pseudo-labels that enhances the representation of information entropy by
class rebalancing and amplification of confusing areas to give a larger weight
to foreground change objects. Subsequently, to enhance the reliability of
sample-wise pseudo-labels, we introduce the AdaFusion module, which is capable
of dynamically identifying the most uncertain region and substituting it with
more trustworthy content. Lastly, to ensure better training stability, we
introduce the AdaEMA module, which updates the teacher model using only batches
of trusted samples. Experimental results from LEVIR-CD, WHU-CD, and CDD
datasets validate the efficacy and universality of our proposed adaptive
training framework.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 12:35:34 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 07:28:26 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lingyan",
"Ran",
""
],
[
"Dongcheng",
"Wen",
""
],
[
"Tao",
"Zhuo",
""
],
[
"Shizhou",
"Zhang",
""
],
[
"Xiuwei",
"Zhang",
""
],
[
"Yanning",
"Zhang",
""
]
] | TITLE: AdaSemiCD: An Adaptive Semi-Supervised Change Detection Method Based on
Pseudo-Label Evaluation
ABSTRACT: Change Detection (CD) is an essential field in remote sensing, with a primary
focus on identifying areas of change in bi-temporal image pairs captured at
varying intervals of the same region by a satellite. The data annotation
process for the CD task is both time-consuming and labor-intensive. To make
better use of the scarce labeled data and abundant unlabeled data, we present
an adaptive dynamic semi-supervised learning method, AdaSemiCD, to improve the
use of pseudo-labels and optimize the training process. Initially, due to the
extreme class imbalance inherent in CD, the model is more inclined to focus on
the background class, and it is easy to confuse the boundary of the target
object. Considering these two points, we develop a measurable evaluation metric
for pseudo-labels that enhances the representation of information entropy by
class rebalancing and amplification of confusing areas to give a larger weight
to foreground change objects. Subsequently, to enhance the reliability of
sample-wise pseudo-labels, we introduce the AdaFusion module, which is capable
of dynamically identifying the most uncertain region and substituting it with
more trustworthy content. Lastly, to ensure better training stability, we
introduce the AdaEMA module, which updates the teacher model using only batches
of trusted samples. Experimental results from LEVIR-CD, WHU-CD, and CDD
datasets validate the efficacy and universality of our proposed adaptive
training framework.
|
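One plausible instantiation of the pseudo-label evaluation idea in the record above, not the paper's exact metric: per-pixel entropy of the predicted change probability, reweighted so foreground (change) pixels dominate the score. The weighting factor and threshold are assumptions.

```python
import numpy as np

def pseudo_label_score(prob_change, fg_weight=10.0, eps=1e-8):
    """Score a change-detection pseudo-label map (lower = more trustworthy).

    prob_change: (H, W) predicted probability of the "change" class.
    Per-pixel binary entropy is averaged with a larger weight on pixels
    pseudo-labelled as change, so a small, uncertain foreground cannot be
    hidden by a large, confident background (class rebalancing).
    """
    p = np.clip(prob_change, eps, 1.0 - eps)
    entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    weights = np.where(p >= 0.5, fg_weight, 1.0)   # emphasize foreground pixels
    return float((weights * entropy).sum() / weights.sum())

# Example: keep the teacher output whose pseudo-label map scores lower.
confident = np.random.beta(0.2, 0.2, (256, 256))    # mostly near 0 or 1
uncertain = np.random.uniform(0.3, 0.7, (256, 256))  # near 0.5 everywhere
print(pseudo_label_score(confident), pseudo_label_score(uncertain))
```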
2411.09145 | ChengBo Yuan | Chengbo Yuan, Geng Chen, Li Yi, Yang Gao | Self-Supervised Monocular 4D Scene Reconstruction for Egocentric Videos | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | Egocentric videos provide valuable insights into human interactions with the
physical world, which has sparked growing interest in the computer vision and
robotics communities. A critical challenge in fully understanding the geometry
and dynamics of egocentric videos is dense scene reconstruction. However, the
lack of high-quality labeled datasets in this field has hindered the
effectiveness of current supervised learning methods. In this work, we aim to
address this issue by exploring a self-supervised dynamic scene reconstruction
approach. We introduce EgoMono4D, a novel model that unifies the estimation of
multiple variables necessary for Egocentric Monocular 4D reconstruction,
including camera intrinsics, camera poses, and video depth, all within a fast
feed-forward framework. Starting from a pretrained single-frame depth and
intrinsics estimation model, we extend it with camera pose estimation and align
multi-frame results on large-scale unlabeled egocentric videos. We evaluate
EgoMono4D in both in-domain and zero-shot generalization settings, achieving
superior performance in dense point cloud sequence reconstruction compared to
all baselines. EgoMono4D represents the first attempt to apply self-supervised
learning for point cloud sequence reconstruction to the label-scarce egocentric
field, enabling fast, dense, and generalizable reconstruction. The interactive
visualization, code, and trained models are released at
https://egomono4d.github.io/
| [
{
"version": "v1",
"created": "Thu, 14 Nov 2024 02:57:11 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Nov 2024 12:27:39 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 15:05:12 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yuan",
"Chengbo",
""
],
[
"Chen",
"Geng",
""
],
[
"Yi",
"Li",
""
],
[
"Gao",
"Yang",
""
]
] | TITLE: Self-Supervised Monocular 4D Scene Reconstruction for Egocentric Videos
ABSTRACT: Egocentric videos provide valuable insights into human interactions with the
physical world, which has sparked growing interest in the computer vision and
robotics communities. A critical challenge in fully understanding the geometry
and dynamics of egocentric videos is dense scene reconstruction. However, the
lack of high-quality labeled datasets in this field has hindered the
effectiveness of current supervised learning methods. In this work, we aim to
address this issue by exploring a self-supervised dynamic scene reconstruction
approach. We introduce EgoMono4D, a novel model that unifies the estimation of
multiple variables necessary for Egocentric Monocular 4D reconstruction,
including camera intrinsics, camera poses, and video depth, all within a fast
feed-forward framework. Starting from a pretrained single-frame depth and
intrinsics estimation model, we extend it with camera pose estimation and align
multi-frame results on large-scale unlabeled egocentric videos. We evaluate
EgoMono4D in both in-domain and zero-shot generalization settings, achieving
superior performance in dense point cloud sequence reconstruction compared to
all baselines. EgoMono4D represents the first attempt to apply self-supervised
learning for point cloud sequence reconstruction to the label-scarce egocentric
field, enabling fast, dense, and generalizable reconstruction. The interactive
visualization, code, and trained models are released at
https://egomono4d.github.io/
|
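The per-frame building block behind a dense point cloud sequence, as estimated by models like the one above, is unprojection of a depth map with the intrinsics followed by a pose transform. A generic NumPy sketch; the intrinsics, depth values, and pose below are dummies.

```python
import numpy as np

def depth_to_world_points(depth, K, cam_to_world):
    """Unproject an (H, W) depth map into a world-frame point cloud.

    depth: metric z-depth per pixel; K: 3x3 intrinsics; cam_to_world: 4x4 pose.
    Returns an (H*W, 3) array -- the per-frame ingredient of a dense point
    cloud sequence given depth, intrinsics, and camera poses.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T                 # camera-frame rays with z = 1
    pts_cam = rays * depth.reshape(-1, 1)           # scale rays by depth
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    return (pts_h @ cam_to_world.T)[:, :3]

# Example with a dummy frame (real inputs would come from the model's predictions).
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
depth = np.full((480, 640), 2.0)
pose = np.eye(4)
cloud = depth_to_world_points(depth, K, pose)
print(cloud.shape)  # (307200, 3)
```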
2411.10962 | Lei Yang | Lei Yang, Xinyu Zhang, Chen Wang, Jun Li, Jiaqi Ma, Zhiying Song, Tong
Zhao, Ziying Song, Li Wang, Mo Zhou, Yang Shen, Kai Wu, Chen Lv | V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative
Perception | 15 pages, 9 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern autonomous vehicle perception systems often struggle with occlusions
and limited perception range. Previous studies have demonstrated the
effectiveness of cooperative perception in extending the perception range and
overcoming occlusions, thereby enhancing the safety of autonomous driving. In
recent years, a series of cooperative perception datasets have emerged;
however, these datasets primarily focus on cameras and LiDAR, neglecting 4D
Radar, a sensor used in single-vehicle autonomous driving to provide robust
perception in adverse weather conditions. In this paper, to bridge the gap
created by the absence of 4D Radar datasets in cooperative perception, we
present V2X-Radar, the first large-scale, real-world multi-modal dataset
featuring 4D Radar. The V2X-Radar dataset is collected using a connected vehicle
platform and an intelligent roadside unit equipped with 4D Radar, LiDAR, and
multi-view cameras. The collected data encompasses sunny and rainy weather
conditions, spanning daytime, dusk, and nighttime, as well as various typical
challenging scenarios. The dataset consists of 20K LiDAR frames, 40K camera
images, and 20K 4D Radar data, including 350K annotated boxes across five
categories. To support various research domains, we have established
V2X-Radar-C for cooperative perception, V2X-Radar-I for roadside perception,
and V2X-Radar-V for single-vehicle perception. Furthermore, we provide
comprehensive benchmarks across these three sub-datasets. We will release all
datasets and benchmark codebase at http://openmpd.com/column/V2X-Radar and
https://github.com/yanglei18/V2X-Radar.
| [
{
"version": "v1",
"created": "Sun, 17 Nov 2024 04:59:00 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 02:06:12 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yang",
"Lei",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Wang",
"Chen",
""
],
[
"Li",
"Jun",
""
],
[
"Ma",
"Jiaqi",
""
],
[
"Song",
"Zhiying",
""
],
[
"Zhao",
"Tong",
""
],
[
"Song",
"Ziying",
""
],
[
"Wang",
"Li",
""
],
[
"Zhou",
"Mo",
""
],
[
"Shen",
"Yang",
""
],
[
"Wu",
"Kai",
""
],
[
"Lv",
"Chen",
""
]
] | TITLE: V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative
Perception
ABSTRACT: Modern autonomous vehicle perception systems often struggle with occlusions
and limited perception range. Previous studies have demonstrated the
effectiveness of cooperative perception in extending the perception range and
overcoming occlusions, thereby enhancing the safety of autonomous driving. In
recent years, a series of cooperative perception datasets have emerged;
however, these datasets primarily focus on cameras and LiDAR, neglecting 4D
Radar, a sensor used in single-vehicle autonomous driving to provide robust
perception in adverse weather conditions. In this paper, to bridge the gap
created by the absence of 4D Radar datasets in cooperative perception, we
present V2X-Radar, the first large-scale, real-world multi-modal dataset
featuring 4D Radar. The V2X-Radar dataset is collected using a connected vehicle
platform and an intelligent roadside unit equipped with 4D Radar, LiDAR, and
multi-view cameras. The collected data encompasses sunny and rainy weather
conditions, spanning daytime, dusk, and nighttime, as well as various typical
challenging scenarios. The dataset consists of 20K LiDAR frames, 40K camera
images, and 20K 4D Radar data, including 350K annotated boxes across five
categories. To support various research domains, we have established
V2X-Radar-C for cooperative perception, V2X-Radar-I for roadside perception,
and V2X-Radar-V for single-vehicle perception. Furthermore, we provide
comprehensive benchmarks across these three sub-datasets. We will release all
datasets and benchmark codebase at http://openmpd.com/column/V2X-Radar and
https://github.com/yanglei18/V2X-Radar.
|
2411.11886 | Lu Wang-Nöth | Lu Wang-N\"oth, Philipp Heiler, Hai Huang, Daniel Lichtenstern,
Alexandra Reichenbach, Luis Flacke, Linus Maisch, Helmut Mayer | How Much Data is Enough? Optimization of Data Collection for Artifact
Detection in EEG Recordings | Several changes of wording. Caption of figure 10 corrected | null | 10.1088/1741-2552/adbebe | null | eess.SP cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective. Electroencephalography (EEG) is a widely used neuroimaging
technique known for its cost-effectiveness and user-friendliness. However,
various artifacts, particularly biological artifacts like Electromyography
(EMG) signals, lead to a poor signal-to-noise ratio, limiting the precision of
analyses and applications. The currently reported EEG data cleaning performance
largely depends on the data used for validation, and in the case of machine
learning approaches, also on the data used for training. The data are typically
gathered either by recruiting subjects to perform specific artifact tasks or by
integrating existing datasets. Prevailing approaches, however, tend to rely on
intuitive, concept-oriented data collection with minimal justification for the
selection of artifacts and their quantities. Given the substantial costs
associated with biological data collection and the pressing need for effective
data utilization, we propose an optimization procedure for data-oriented data
collection design using deep learning-based artifact detection. Approach. We
apply a binary classification between artifact epochs (time intervals
containing artifacts) and non-artifact epochs (time intervals containing no
artifact) using three different neural architectures. Our aim is to minimize
data collection efforts while preserving the cleaning efficiency. Main results.
We were able to reduce the number of artifact tasks from twelve to three and
decrease repetitions of isometric contraction tasks from ten to three or
sometimes even just one. Significance. Our work addresses the need for
effective data utilization in biological data collection, offering a systematic
and dynamic quantitative approach. By providing clear justifications for the
choices of artifacts and their quantity, we aim to guide future studies toward
more effective and economical data collection in EEG and EMG research.
| [
{
"version": "v1",
"created": "Tue, 5 Nov 2024 11:47:59 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Nov 2024 10:38:55 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang-Nöth",
"Lu",
""
],
[
"Heiler",
"Philipp",
""
],
[
"Huang",
"Hai",
""
],
[
"Lichtenstern",
"Daniel",
""
],
[
"Reichenbach",
"Alexandra",
""
],
[
"Flacke",
"Luis",
""
],
[
"Maisch",
"Linus",
""
],
[
"Mayer",
"Helmut",
""
]
] | TITLE: How Much Data is Enough? Optimization of Data Collection for Artifact
Detection in EEG Recordings
ABSTRACT: Objective. Electroencephalography (EEG) is a widely used neuroimaging
technique known for its cost-effectiveness and user-friendliness. However,
various artifacts, particularly biological artifacts like Electromyography
(EMG) signals, lead to a poor signal-to-noise ratio, limiting the precision of
analyses and applications. The currently reported EEG data cleaning performance
largely depends on the data used for validation, and in the case of machine
learning approaches, also on the data used for training. The data are typically
gathered either by recruiting subjects to perform specific artifact tasks or by
integrating existing datasets. Prevailing approaches, however, tend to rely on
intuitive, concept-oriented data collection with minimal justification for the
selection of artifacts and their quantities. Given the substantial costs
associated with biological data collection and the pressing need for effective
data utilization, we propose an optimization procedure for data-oriented data
collection design using deep learning-based artifact detection. Approach. We
apply a binary classification between artifact epochs (time intervals
containing artifacts) and non-artifact epochs (time intervals containing no
artifact) using three different neural architectures. Our aim is to minimize
data collection efforts while preserving the cleaning efficiency. Main results.
We were able to reduce the number of artifact tasks from twelve to three and
decrease repetitions of isometric contraction tasks from ten to three or
sometimes even just one. Significance. Our work addresses the need for
effective data utilization in biological data collection, offering a systematic
and dynamic quantitative approach. By providing clear justifications for the
choices of artifacts and their quantity, we aim to guide future studies toward
more effective and economical data collection in EEG and EMG research.
|
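A minimal sketch of the binary artifact-vs-clean epoch classification setup described above, using a small PyTorch 1D CNN. The channel count, epoch length, and architecture are illustrative assumptions, not the three networks evaluated in the paper.

```python
import torch
import torch.nn as nn

class EpochClassifier(nn.Module):
    """Binary artifact vs. non-artifact classifier for EEG epochs.

    Input: (batch, channels, samples), e.g. 32 channels x 2 s at 250 Hz.
    """
    def __init__(self, n_channels=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1)).squeeze(-1)  # logits

model = EpochClassifier()
x = torch.randn(8, 32, 500)             # 8 epochs of 2 s at 250 Hz
y = torch.randint(0, 2, (8,)).float()   # 1 = artifact epoch, 0 = clean epoch
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()
```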
2411.13376 | Ricardo Monta\~nana G\'omez | Ricardo Monta\~nana and Jos\'e A. G\'amez and Jos\'e M. Puerta | ODTE -- An ensemble of multi-class SVM-based oblique decision trees | Accepted version | Ricardo Monta\~nana, Jos\'e A. G\'amez, Jos\'e M. Puerta(2025).
ODTE-An ensemble of multi-class SVM-based oblique decision trees. Expert
Systems with Applications 273:126833 | 10.1016/j.eswa.2025.126833 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose ODTE, a new ensemble that uses oblique decision trees as base
classifiers. Additionally, we introduce STree, the base algorithm for growing
oblique decision trees, which leverages support vector machines to define
hyperplanes within the decision nodes. We embed a multiclass strategy --
one-vs-one or one-vs-rest -- at the decision nodes, allowing the model to
directly handle non-binary classification tasks without the need to cluster
instances into two groups, as is common in other approaches from the
literature. In each decision node, only the best-performing SVM model -- the
one that minimizes an impurity measure for the n-ary classification -- is
retained, even if the learned SVM addresses a binary classification subtask. An
extensive experimental study involving 49 datasets and various state-of-the-art
algorithms for oblique decision tree ensembles has been conducted. Our results
show that ODTE ranks consistently above its competitors, achieving significant
performance gains when hyperparameters are carefully tuned. Moreover, the
oblique decision trees learned through STree are more compact than those
produced by other algorithms evaluated in our experiments.
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2024 14:58:32 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 11:34:32 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Montañana",
"Ricardo",
""
],
[
"Gámez",
"José A.",
""
],
[
"Puerta",
"José M.",
""
]
] | TITLE: ODTE -- An ensemble of multi-class SVM-based oblique decision trees
ABSTRACT: We propose ODTE, a new ensemble that uses oblique decision trees as base
classifiers. Additionally, we introduce STree, the base algorithm for growing
oblique decision trees, which leverages support vector machines to define
hyperplanes within the decision nodes. We embed a multiclass strategy --
one-vs-one or one-vs-rest -- at the decision nodes, allowing the model to
directly handle non-binary classification tasks without the need to cluster
instances into two groups, as is common in other approaches from the
literature. In each decision node, only the best-performing SVM model -- the
one that minimizes an impurity measure for the n-ary classification -- is
retained, even if the learned SVM addresses a binary classification subtask. An
extensive experimental study involving 49 datasets and various state-of-the-art
algorithms for oblique decision tree ensembles has been conducted. Our results
show that ODTE ranks consistently above its competitors, achieving significant
performance gains when hyperparameters are carefully tuned. Moreover, the
oblique decision trees learned through STree are more compact than those
produced by other algorithms evaluated in our experiments.
|
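A simplified sketch of the oblique-split idea in the record above: candidate hyperplanes come from linear SVMs (here a plain one-vs-rest scan with scikit-learn's LinearSVC) and the node keeps the hyperplane with the lowest weighted Gini impurity. This is a rough analogue of the node procedure, not STree's implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_oblique_split(X, y, C=1.0):
    """Pick the oblique (hyperplane) split with the lowest weighted Gini impurity."""
    best = None
    for cls in np.unique(y):
        # One linear SVM per class defines a candidate hyperplane.
        svm = LinearSVC(C=C, dual=False).fit(X, (y == cls).astype(int))
        side = svm.decision_function(X) >= 0.0
        if side.all() or (~side).all():
            continue                      # degenerate split, skip
        score = (side.sum() * gini(y[side]) + (~side).sum() * gini(y[~side])) / len(y)
        if best is None or score < best[0]:
            best = (score, cls, svm)
    return best                           # (impurity, target class, fitted SVM)

X = np.random.randn(300, 5)
y = np.random.randint(0, 3, 300)
impurity, cls, svm = best_oblique_split(X, y)
print(f"best split targets class {cls} with weighted Gini {impurity:.3f}")
```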
2411.16106 | Xingyu Liu | Xingyu Liu, Gu Wang, Ruida Zhang, Chenyangguang Zhang, Federico
Tombari, Xiangyang Ji | UNOPose: Unseen Object Pose Estimation with an Unposed RGB-D Reference
Image | Accepted by CVPR'25 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unseen object pose estimation methods often rely on CAD models or multiple
reference views, making the onboarding stage costly. To simplify reference
acquisition, we aim to estimate the unseen object's pose through a single
unposed RGB-D reference image. While previous works leverage reference images
as pose anchors to limit the range of relative pose, our scenario presents
significant challenges since the relative transformation could vary across the
entire SE(3) space. Moreover, factors like occlusion, sensor noise, and extreme
geometry could result in low viewpoint overlap. To address these challenges, we
present a novel approach and benchmark, termed UNOPose, for unseen
one-reference-based object pose estimation. Building upon a coarse-to-fine
paradigm, UNOPose constructs an SE(3)-invariant reference frame to standardize
object representation despite pose and size variations. To alleviate small
overlap across viewpoints, we recalibrate the weight of each correspondence
based on its predicted likelihood of being within the overlapping region.
Evaluated on our proposed benchmark based on the BOP Challenge, UNOPose
demonstrates superior performance, significantly outperforming traditional and
learning-based methods in the one-reference setting and remaining competitive
with CAD-model-based methods. The code and dataset are available at
https://github.com/shanice-l/UNOPose.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 05:36:00 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 07:54:18 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Xingyu",
""
],
[
"Wang",
"Gu",
""
],
[
"Zhang",
"Ruida",
""
],
[
"Zhang",
"Chenyangguang",
""
],
[
"Tombari",
"Federico",
""
],
[
"Ji",
"Xiangyang",
""
]
] | TITLE: UNOPose: Unseen Object Pose Estimation with an Unposed RGB-D Reference
Image
ABSTRACT: Unseen object pose estimation methods often rely on CAD models or multiple
reference views, making the onboarding stage costly. To simplify reference
acquisition, we aim to estimate the unseen object's pose through a single
unposed RGB-D reference image. While previous works leverage reference images
as pose anchors to limit the range of relative pose, our scenario presents
significant challenges since the relative transformation could vary across the
entire SE(3) space. Moreover, factors like occlusion, sensor noise, and extreme
geometry could result in low viewpoint overlap. To address these challenges, we
present a novel approach and benchmark, termed UNOPose, for unseen
one-reference-based object pose estimation. Building upon a coarse-to-fine
paradigm, UNOPose constructs an SE(3)-invariant reference frame to standardize
object representation despite pose and size variations. To alleviate small
overlap across viewpoints, we recalibrate the weight of each correspondence
based on its predicted likelihood of being within the overlapping region.
Evaluated on our proposed benchmark based on the BOP Challenge, UNOPose
demonstrates superior performance, significantly outperforming traditional and
learning-based methods in the one-reference setting and remaining competitive
with CAD-model-based methods. The code and dataset are available at
https://github.com/shanice-l/UNOPose.
|
2411.16446 | Changjian Li | Jiawei Wang, Zhiming Cui, Changjian Li | VQ-SGen: A Vector Quantized Stroke Representation for Creative Sketch
Generation | null | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents VQ-SGen, a novel algorithm for high-quality creative
sketch generation. Recent approaches have framed the task as pixel-based
generation either as a whole or part-by-part, neglecting the intrinsic and
contextual relationships among individual strokes, such as the shape and
spatial positioning of both proximal and distant strokes. To overcome these
limitations, we propose treating each stroke within a sketch as an entity and
introducing a vector-quantized (VQ) stroke representation for fine-grained
sketch generation. Our method follows a two-stage framework - in stage one, we
decouple each stroke's shape and location information to ensure the VQ
representation prioritizes stroke shape learning. In stage two, we feed the
precise and compact representation into an auto-decoding Transformer to
incorporate stroke semantics, positions, and shapes into the generation
process. By utilizing tokenized stroke representation, our approach generates
strokes with high fidelity and facilitates novel applications, such as text or
class label conditioned generation and sketch completion. Comprehensive
experiments demonstrate our method surpasses existing state-of-the-art
techniques on the CreativeSketch dataset, underscoring its effectiveness.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 14:51:22 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 09:19:45 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Jiawei",
""
],
[
"Cui",
"Zhiming",
""
],
[
"Li",
"Changjian",
""
]
] | TITLE: VQ-SGen: A Vector Quantized Stroke Representation for Creative Sketch
Generation
ABSTRACT: This paper presents VQ-SGen, a novel algorithm for high-quality creative
sketch generation. Recent approaches have framed the task as pixel-based
generation either as a whole or part-by-part, neglecting the intrinsic and
contextual relationships among individual strokes, such as the shape and
spatial positioning of both proximal and distant strokes. To overcome these
limitations, we propose treating each stroke within a sketch as an entity and
introducing a vector-quantized (VQ) stroke representation for fine-grained
sketch generation. Our method follows a two-stage framework - in stage one, we
decouple each stroke's shape and location information to ensure the VQ
representation prioritizes stroke shape learning. In stage two, we feed the
precise and compact representation into an auto-decoding Transformer to
incorporate stroke semantics, positions, and shapes into the generation
process. By utilizing tokenized stroke representation, our approach generates
strokes with high fidelity and facilitates novel applications, such as text or
class label conditioned generation and sketch completion. Comprehensive
experiments demonstrate our method surpasses existing state-of-the-art
techniques on the CreativeSketch dataset, underscoring its effectiveness.
|
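A bare-bones sketch of the vector-quantization step described above: each stroke's continuous shape embedding is snapped to its nearest codebook entry, yielding the discrete tokens consumed by the Transformer stage. The codebook size, dimensionality, and the encoder producing the embeddings are assumptions.

```python
import numpy as np

def vector_quantize(stroke_embeddings, codebook):
    """Map continuous stroke-shape embeddings to discrete codebook tokens.

    stroke_embeddings: (num_strokes, D) shape features (location handled
    separately, mirroring the shape/location decoupling described above).
    codebook: (K, D) code vectors. Returns token ids and quantized vectors.
    """
    # Squared Euclidean distance between every stroke embedding and every code.
    d2 = ((stroke_embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    tokens = d2.argmin(axis=1)
    return tokens, codebook[tokens]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))    # K = 512 codes, D = 64 dims
strokes = rng.normal(size=(17, 64))      # a sketch with 17 strokes
tokens, quantized = vector_quantize(strokes, codebook)
print(tokens[:5], quantized.shape)       # discrete ids fed to the Transformer
```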
2411.17386 | Bastian Wittmann | Bastian Wittmann, Yannick Wattenberg, Tamaz Amiranashvili, Suprosanna
Shit, Bjoern Menze | vesselFM: A Foundation Model for Universal 3D Blood Vessel Segmentation | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Segmenting 3D blood vessels is a critical yet challenging task in medical
image analysis. This is due to significant imaging modality-specific variations
in artifacts, vascular patterns and scales, signal-to-noise ratios, and
background tissues. These variations, along with domain gaps arising from
varying imaging protocols, limit the generalization of existing supervised
learning-based methods, requiring tedious voxel-level annotations for each
dataset separately. While foundation models promise to alleviate this
limitation, they typically fail to generalize to the task of blood vessel
segmentation, posing a unique, complex problem. In this work, we present
vesselFM, a foundation model designed specifically for the broad task of 3D
blood vessel segmentation. Unlike previous models, vesselFM can effortlessly
generalize to unseen domains. To achieve zero-shot generalization, we train
vesselFM on three heterogeneous data sources: a large, curated annotated
dataset, data generated by a domain randomization scheme, and data sampled from
a flow matching-based generative model. Extensive evaluations show that
vesselFM outperforms state-of-the-art medical image segmentation foundation
models across four (pre-)clinically relevant imaging modalities in zero-, one-,
and few-shot scenarios, therefore providing a universal solution for 3D blood
vessel segmentation.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 12:44:42 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 18:56:29 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wittmann",
"Bastian",
""
],
[
"Wattenberg",
"Yannick",
""
],
[
"Amiranashvili",
"Tamaz",
""
],
[
"Shit",
"Suprosanna",
""
],
[
"Menze",
"Bjoern",
""
]
] | TITLE: vesselFM: A Foundation Model for Universal 3D Blood Vessel Segmentation
ABSTRACT: Segmenting 3D blood vessels is a critical yet challenging task in medical
image analysis. This is due to significant imaging modality-specific variations
in artifacts, vascular patterns and scales, signal-to-noise ratios, and
background tissues. These variations, along with domain gaps arising from
varying imaging protocols, limit the generalization of existing supervised
learning-based methods, requiring tedious voxel-level annotations for each
dataset separately. While foundation models promise to alleviate this
limitation, they typically fail to generalize to the task of blood vessel
segmentation, posing a unique, complex problem. In this work, we present
vesselFM, a foundation model designed specifically for the broad task of 3D
blood vessel segmentation. Unlike previous models, vesselFM can effortlessly
generalize to unseen domains. To achieve zero-shot generalization, we train
vesselFM on three heterogeneous data sources: a large, curated annotated
dataset, data generated by a domain randomization scheme, and data sampled from
a flow matching-based generative model. Extensive evaluations show that
vesselFM outperforms state-of-the-art medical image segmentation foundation
models across four (pre-)clinically relevant imaging modalities in zero-, one-,
and few-shot scenarios, therefore providing a universal solution for 3D blood
vessel segmentation.
|
2411.17388 | Haoyu Huang | Haoyu Huang, Chong Chen, Conghui He, Yang Li, Jiawei Jiang, Wentao
Zhang | Can LLMs be Good Graph Judger for Knowledge Graph Construction? | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | In real-world scenarios, most of the data obtained from information retrieval
(IR) systems is unstructured. Converting natural language sentences into
structured Knowledge Graphs (KGs) remains a critical challenge. The quality of
constructed KGs may also impact the performance of some KG-dependent domains
like GraphRAG systems and recommendation systems. Recently, Large Language
Models (LLMs) have demonstrated impressive capabilities in addressing a wide
range of natural language processing tasks. However, there are still challenges
when utilizing LLMs to address the task of generating structured KGs. And we
have identified three limitations with respect to existing KG construction
methods. (1)There is a large amount of information and excessive noise in
real-world documents, which could result in extracting messy information.
(2)Native LLMs struggle to effectively extract accuracy knowledge from some
domain-specific documents. (3)Hallucinations phenomenon cannot be overlooked
when utilizing LLMs directly as an unsupervised method for constructing KGs.
In this paper, we propose GraphJudger, a knowledge graph construction
framework to address the aforementioned challenges. We introduce three
innovative modules in our method, which are entity-centric iterative text
denoising, knowledge aware instruction tuning and graph judgement,
respectively. We seek to utilize the capacity of LLMs to function as a graph
judger, a capability superior to their role only as a predictor for KG
construction problems. Experiments conducted on two general text-graph pair
datasets and one domain-specific text-graph pair dataset show superior
performances compared to baseline methods. The code of our proposed method is
available at https://github.com/hhy-huang/GraphJudger.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 12:46:57 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 11:49:45 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Huang",
"Haoyu",
""
],
[
"Chen",
"Chong",
""
],
[
"He",
"Conghui",
""
],
[
"Li",
"Yang",
""
],
[
"Jiang",
"Jiawei",
""
],
[
"Zhang",
"Wentao",
""
]
] | TITLE: Can LLMs be Good Graph Judger for Knowledge Graph Construction?
ABSTRACT: In real-world scenarios, most of the data obtained from information retrieval
(IR) systems is unstructured. Converting natural language sentences into
structured Knowledge Graphs (KGs) remains a critical challenge. The quality of
constructed KGs may also impact the performance of some KG-dependent domains
like GraphRAG systems and recommendation systems. Recently, Large Language
Models (LLMs) have demonstrated impressive capabilities in addressing a wide
range of natural language processing tasks. However, there are still challenges
when utilizing LLMs to address the task of generating structured KGs. And we
have identified three limitations with respect to existing KG construction
methods. (1)There is a large amount of information and excessive noise in
real-world documents, which could result in extracting messy information.
(2)Native LLMs struggle to effectively extract accuracy knowledge from some
domain-specific documents. (3)Hallucinations phenomenon cannot be overlooked
when utilizing LLMs directly as an unsupervised method for constructing KGs.
In this paper, we propose GraphJudger, a knowledge graph construction
framework to address the aforementioned challenges. We introduce three
innovative modules in our method, which are entity-centric iterative text
denoising, knowledge aware instruction tuning and graph judgement,
respectively. We seek to utilize the capacity of LLMs to function as a graph
judger, a capability superior to their role only as a predictor for KG
construction problems. Experiments conducted on two general text-graph pair
datasets and one domain-specific text-graph pair dataset show superior
performances compared to baseline methods. The code of our proposed method is
available at https://github.com/hhy-huang/GraphJudger.
|
2411.17698 | Ziyang Chen | Ziyang Chen, Prem Seetharaman, Bryan Russell, Oriol Nieto, David
Bourgin, Andrew Owens, Justin Salamon | Video-Guided Foley Sound Generation with Multimodal Controls | Accepted at CVPR 2025. Project site:
https://ificl.github.io/MultiFoley/ | null | null | null | cs.CV cs.MM cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating sound effects for videos often requires creating artistic sound
effects that diverge significantly from real-life sources and flexible control
in the sound design. To address this problem, we introduce MultiFoley, a model
designed for video-guided sound generation that supports multimodal
conditioning through text, audio, and video. Given a silent video and a text
prompt, MultiFoley allows users to create clean sounds (e.g., skateboard wheels
spinning without wind noise) or more whimsical sounds (e.g., making a lion's
roar sound like a cat's meow). MultiFoley also allows users to choose reference
audio from sound effects (SFX) libraries or partial videos for conditioning. A
key novelty of our model lies in its joint training on both internet video
datasets with low-quality audio and professional SFX recordings, enabling
high-quality, full-bandwidth (48kHz) audio generation. Through automated
evaluations and human studies, we demonstrate that MultiFoley successfully
generates synchronized high-quality sounds across varied conditional inputs and
outperforms existing methods. Please see our project page for video results:
https://ificl.github.io/MultiFoley/
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 18:59:58 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Nov 2024 13:25:04 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Jan 2025 20:03:04 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Mar 2025 17:44:37 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chen",
"Ziyang",
""
],
[
"Seetharaman",
"Prem",
""
],
[
"Russell",
"Bryan",
""
],
[
"Nieto",
"Oriol",
""
],
[
"Bourgin",
"David",
""
],
[
"Owens",
"Andrew",
""
],
[
"Salamon",
"Justin",
""
]
] | TITLE: Video-Guided Foley Sound Generation with Multimodal Controls
ABSTRACT: Generating sound effects for videos often requires creating artistic sound
effects that diverge significantly from real-life sources and flexible control
in the sound design. To address this problem, we introduce MultiFoley, a model
designed for video-guided sound generation that supports multimodal
conditioning through text, audio, and video. Given a silent video and a text
prompt, MultiFoley allows users to create clean sounds (e.g., skateboard wheels
spinning without wind noise) or more whimsical sounds (e.g., making a lion's
roar sound like a cat's meow). MultiFoley also allows users to choose reference
audio from sound effects (SFX) libraries or partial videos for conditioning. A
key novelty of our model lies in its joint training on both internet video
datasets with low-quality audio and professional SFX recordings, enabling
high-quality, full-bandwidth (48kHz) audio generation. Through automated
evaluations and human studies, we demonstrate that MultiFoley successfully
generates synchronized high-quality sounds across varied conditional inputs and
outperforms existing methods. Please see our project page for video results:
https://ificl.github.io/MultiFoley/
|
2411.18412 | David Serrano-Lozano | David Serrano-Lozano, Luis Herranz, Shaolin Su and Javier
Vazquez-Corral | Adaptive Blind All-in-One Image Restoration | 17 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Blind all-in-one image restoration models aim to recover a high-quality image
from an input degraded with unknown distortions. However, these models require
all the possible degradation types to be defined during the training stage
while showing limited generalization to unseen degradations, which limits their
practical application in complex cases. In this paper, we introduce ABAIR, a
simple yet effective adaptive blind all-in-one restoration model that not only
handles multiple degradations and generalizes well to unseen distortions but
also efficiently integrates new degradations by training only a small subset of
parameters. We first train our baseline model on a large dataset of natural
images with multiple synthetic degradations. To enhance its ability to
recognize distortions, we incorporate a segmentation head that estimates
per-pixel degradation types. Second, we adapt our initial model to varying
image restoration tasks using independent low-rank adapters. Third, we learn to
adaptively combine adapters for diverse input images via a flexible and lightweight
degradation estimator. This specialize-then-merge approach is both powerful in
addressing specific distortions and flexible in adapting to complex tasks.
Moreover, our model not only surpasses state-of-the-art performance on five-
and three-task IR setups but also demonstrates superior generalization to
unseen degradations and composite distortions.
| [
{
"version": "v1",
"created": "Wed, 27 Nov 2024 14:58:08 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 08:04:17 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Serrano-Lozano",
"David",
""
],
[
"Herranz",
"Luis",
""
],
[
"Su",
"Shaolin",
""
],
[
"Vazquez-Corral",
"Javier",
""
]
] | TITLE: Adaptive Blind All-in-One Image Restoration
ABSTRACT: Blind all-in-one image restoration models aim to recover a high-quality image
from an input degraded with unknown distortions. However, these models require
all the possible degradation types to be defined during the training stage
while showing limited generalization to unseen degradations, which limits their
practical application in complex cases. In this paper, we introduce ABAIR, a
simple yet effective adaptive blind all-in-one restoration model that not only
handles multiple degradations and generalizes well to unseen distortions but
also efficiently integrates new degradations by training only a small subset of
parameters. We first train our baseline model on a large dataset of natural
images with multiple synthetic degradations. To enhance its ability to
recognize distortions, we incorporate a segmentation head that estimates
per-pixel degradation types. Second, we adapt our initial model to varying
image restoration tasks using independent low-rank adapters. Third, we learn to
adaptively combine these adapters for diverse input images via a flexible and lightweight
degradation estimator. This specialize-then-merge approach is both powerful in
addressing specific distortions and flexible in adapting to complex tasks.
Moreover, our model not only surpasses state-of-the-art performance on five-
and three-task IR setups but also demonstrates superior generalization to
unseen degradations and composite distortions.
|
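For the ABAIR entry above, the specialize-then-merge idea can be pictured as a set of per-degradation low-rank adapters whose outputs are blended with weights produced by a lightweight degradation estimator. The PyTorch sketch below is a minimal reading of that idea, not the authors' code: the adapter rank, the linear estimator, the number of degradation types, and the stand-in base layer are all assumptions.

import torch
import torch.nn as nn

class LoRAAdapter(nn.Module):
    """A single low-rank adapter specialized for one degradation type."""
    def __init__(self, dim: int, rank: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # zero-init: adapter adds nothing at start

    def forward(self, x):
        return self.up(self.down(x))

class AdaptiveBlend(nn.Module):
    """Blend per-degradation adapters with weights from a degradation estimator.

    `base` stands in for a frozen layer of the pretrained restoration backbone;
    the estimator here is a toy linear head over mean-pooled features.
    """
    def __init__(self, dim: int, num_degradations: int = 5, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.adapters = nn.ModuleList(
            [LoRAAdapter(dim, rank) for _ in range(num_degradations)]
        )
        self.estimator = nn.Linear(dim, num_degradations)

    def forward(self, x):  # x: (batch, tokens, dim)
        weights = self.estimator(x.mean(dim=1)).softmax(dim=-1)  # (batch, K)
        out = self.base(x)
        for k, adapter in enumerate(self.adapters):
            out = out + weights[:, k, None, None] * adapter(x)
        return out

if __name__ == "__main__":
    layer = AdaptiveBlend(dim=64)
    print(layer(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])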
2411.19921 | Wenjia Wang | Wenjia Wang, Liang Pan, Zhiyang Dou, Jidong Mei, Zhouyingcheng Liao,
Yuke Lou, Yifan Wu, Lei Yang, Jingbo Wang, Taku Komura | SIMS: Simulating Stylized Human-Scene Interactions with
Retrieval-Augmented Script Generation | null | null | null | null | cs.CV cs.AI cs.CL cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simulating stylized human-scene interactions (HSI) in physical environments
is a challenging yet fascinating task. Prior works emphasize long-term
execution but fall short in achieving both diverse style and physical
plausibility. To tackle this challenge, we introduce a novel hierarchical
framework named SIMS that seamlessly bridges high-level script-driven intent
with a low-level control policy, enabling more expressive and diverse
human-scene interactions. Specifically, we employ Large Language Models with
Retrieval-Augmented Generation (RAG) to generate coherent and diverse long-form
scripts, providing a rich foundation for motion planning. A versatile
multi-condition physics-based control policy is also developed, which leverages
text embeddings from the generated scripts to encode stylistic cues,
simultaneously perceiving environmental geometries and accomplishing task
goals. By integrating the retrieval-augmented script generation with the
multi-condition controller, our approach provides a unified solution for
generating stylized HSI motions. We further introduce a comprehensive planning
dataset produced by RAG and a stylized motion dataset featuring diverse
locomotions and interactions. Extensive experiments demonstrate SIMS's
effectiveness in executing various tasks and generalizing across different
scenarios, significantly outperforming previous methods.
| [
{
"version": "v1",
"created": "Fri, 29 Nov 2024 18:36:15 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 04:09:27 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Wenjia",
""
],
[
"Pan",
"Liang",
""
],
[
"Dou",
"Zhiyang",
""
],
[
"Mei",
"Jidong",
""
],
[
"Liao",
"Zhouyingcheng",
""
],
[
"Lou",
"Yuke",
""
],
[
"Wu",
"Yifan",
""
],
[
"Yang",
"Lei",
""
],
[
"Wang",
"Jingbo",
""
],
[
"Komura",
"Taku",
""
]
] | TITLE: SIMS: Simulating Stylized Human-Scene Interactions with
Retrieval-Augmented Script Generation
ABSTRACT: Simulating stylized human-scene interactions (HSI) in physical environments
is a challenging yet fascinating task. Prior works emphasize long-term
execution but fall short in achieving both diverse style and physical
plausibility. To tackle this challenge, we introduce a novel hierarchical
framework named SIMS that seamlessly bridges high-level script-driven intent
with a low-level control policy, enabling more expressive and diverse
human-scene interactions. Specifically, we employ Large Language Models with
Retrieval-Augmented Generation (RAG) to generate coherent and diverse long-form
scripts, providing a rich foundation for motion planning. A versatile
multi-condition physics-based control policy is also developed, which leverages
text embeddings from the generated scripts to encode stylistic cues,
simultaneously perceiving environmental geometries and accomplishing task
goals. By integrating the retrieval-augmented script generation with the
multi-condition controller, our approach provides a unified solution for
generating stylized HSI motions. We further introduce a comprehensive planning
dataset produced by RAG and a stylized motion dataset featuring diverse
locomotions and interactions. Extensive experiments demonstrate SIMS's
effectiveness in executing various tasks and generalizing across different
scenarios, significantly outperforming previous methods.
|
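The SIMS entry above describes a hierarchy in which retrieval-augmented script generation feeds text embeddings to a multi-condition physics-based controller. The Python sketch below mocks that pipeline end to end with placeholder functions so the control flow is visible; the knowledge base, script format, toy text encoder, and controller stub are illustrative assumptions, and the real system would call an LLM, a vector store, and a trained policy.

from dataclasses import dataclass

# Toy "knowledge base" standing in for the retrieval corpus of style
# descriptions; the real system would query a vector store.
KNOWLEDGE_BASE = {
    "zombie": "drag the feet, arms forward, slow shuffling gait",
    "gorilla": "knuckle-walk, heavy torso sway, occasional chest beat",
}

@dataclass
class ScriptStep:
    text: str           # natural-language description of the step
    target_object: str  # scene object the step interacts with

def retrieve(style: str) -> str:
    """Fetch style snippets to ground the script generation (RAG step)."""
    return KNOWLEDGE_BASE.get(style, "neutral walking")

def generate_script(style: str, scene_objects: list) -> list:
    """Stand-in for the LLM call: compose a long-form script from the
    retrieved style snippet and the available scene objects."""
    hint = retrieve(style)
    return [ScriptStep(f"approach the {obj} while moving like this: {hint}", obj)
            for obj in scene_objects]

def encode_text(text: str) -> list:
    """Placeholder text encoder; the real policy consumes learned embeddings."""
    return [float(ord(c) % 7) / 7.0 for c in text[:8]]

def control_policy(embedding: list, target: str) -> str:
    """Placeholder for the multi-condition physics-based controller."""
    return f"low-level action conditioned on {len(embedding)}-d style code -> {target}"

if __name__ == "__main__":
    for step in generate_script("zombie", ["sofa", "door"]):
        print(control_policy(encode_text(step.text), step.target_object))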
2412.00622 | Heitor Medeiros Mr. | Heitor R. Medeiros, Atif Belal, Srikanth Muralidharan, Eric Granger
and Marco Pedersoli | Visual Modality Prompt for Adapting Vision-Language Object Detectors | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The zero-shot performance of object detectors degrades when tested on
different modalities, such as infrared and depth. While recent work has
explored image translation techniques to adapt detectors to new modalities,
these methods are limited to a single modality and apply only to traditional
detectors. Recently, vision-language detectors, such as YOLO-World and
Grounding DINO, have shown promising zero-shot capabilities; however, they have
not yet been adapted for other visual modalities. Traditional fine-tuning
approaches compromise the zero-shot capabilities of the detectors. The visual
prompt strategies commonly used for classification with vision-language models
apply the same linear prompt translation to each image, making them less
effective. To address these limitations, we propose ModPrompt, a visual prompt
strategy to adapt vision-language detectors to new modalities without degrading
zero-shot performance. In particular, an encoder-decoder visual prompt strategy
is proposed, further enhanced by the integration of an inference-friendly modality
prompt decoupled residual, facilitating a more robust adaptation. Empirical
benchmarking results show the effectiveness of our method for modality adaptation on two
vision-language detectors, YOLO-World and Grounding DINO, and on challenging
infrared (LLVIP, FLIR) and depth (NYUv2) datasets, achieving performance
comparable to full fine-tuning while preserving the model's zero-shot
capability. Code available at: https://github.com/heitorrapela/ModPrompt.
| [
{
"version": "v1",
"created": "Sun, 1 Dec 2024 00:19:59 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 20:32:12 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Medeiros",
"Heitor R.",
""
],
[
"Belal",
"Atif",
""
],
[
"Muralidharan",
"Srikanth",
""
],
[
"Granger",
"Eric",
""
],
[
"Pedersoli",
"Marco",
""
]
] | TITLE: Visual Modality Prompt for Adapting Vision-Language Object Detectors
ABSTRACT: The zero-shot performance of object detectors degrades when tested on
different modalities, such as infrared and depth. While recent work has
explored image translation techniques to adapt detectors to new modalities,
these methods are limited to a single modality and apply only to traditional
detectors. Recently, vision-language detectors, such as YOLO-World and
Grounding DINO, have shown promising zero-shot capabilities; however, they have
not yet been adapted for other visual modalities. Traditional fine-tuning
approaches compromise the zero-shot capabilities of the detectors. The visual
prompt strategies commonly used for classification with vision-language models
apply the same linear prompt translation to each image, making them less
effective. To address these limitations, we propose ModPrompt, a visual prompt
strategy to adapt vision-language detectors to new modalities without degrading
zero-shot performance. In particular, an encoder-decoder visual prompt strategy
is proposed, further enhanced by the integration of an inference-friendly modality
prompt decoupled residual, facilitating a more robust adaptation. Empirical
benchmarking results show the effectiveness of our method for modality adaptation on two
vision-language detectors, YOLO-World and Grounding DINO, and on challenging
infrared (LLVIP, FLIR) and depth (NYUv2) datasets, achieving performance
comparable to full fine-tuning while preserving the model's zero-shot
capability. Code available at: https://github.com/heitorrapela/ModPrompt.
|
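For the ModPrompt entry above, one way to picture an encoder-decoder visual prompt with a residual connection is a small convolutional network whose output is added to the new-modality image before it reaches a frozen vision-language detector. The PyTorch sketch below follows that reading; the layer sizes, the identity stand-in for the detector, and the plain additive residual are assumptions rather than the published architecture.

import torch
import torch.nn as nn

class VisualPromptEncoderDecoder(nn.Module):
    """Image-conditioned visual prompt: a small encoder-decoder predicts a
    per-pixel prompt that is added to the new-modality image (e.g. infrared)
    before it is fed to a frozen vision-language detector."""
    def __init__(self, channels: int = 3, hidden: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        prompt = self.decoder(self.encoder(x))
        return x + prompt  # residual keeps the original image content intact

if __name__ == "__main__":
    frozen_detector = nn.Identity()       # stand-in for YOLO-World / Grounding DINO
    prompter = VisualPromptEncoderDecoder()
    ir_image = torch.randn(1, 3, 64, 64)  # infrared frame replicated to 3 channels
    out = frozen_detector(prompter(ir_image))
    print(out.shape)                      # torch.Size([1, 3, 64, 64])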
2412.00678 | Mahdi S. Hosseini Dr. | Jingwei Zhang and Anh Tien Nguyen and Xi Han and Vincent Quoc-Huy
Trinh and Hong Qin and Dimitris Samaras and Mahdi S. Hosseini | 2DMamba: Efficient State Space Model for Image Representation with
Applications on Giga-Pixel Whole Slide Image Classification | Accepted in CVPR 2025 Main Conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficiently modeling large 2D contexts is essential for various fields
including Giga-Pixel Whole Slide Imaging (WSI) and remote sensing.
Transformer-based models offer high parallelism but face challenges due to
their quadratic complexity for handling long sequences. Recently, Mamba
introduced a selective State Space Model (SSM) with linear complexity and high
parallelism, enabling effective and efficient modeling of wide context in 1D
sequences. However, extending Mamba to vision tasks, which inherently involve
2D structures, results in spatial discrepancies due to the limitations of 1D
sequence processing. On the other hand, current 2D SSMs inherently model 2D
structures but they suffer from prohibitively slow computation due to the lack
of efficient parallel algorithms. In this work, we propose 2DMamba, a novel 2D
selective SSM framework that incorporates the 2D spatial structure of images
into Mamba, with a highly optimized hardware-aware operator, achieving both
spatial continuity and computational efficiency. We validate the versatility of
our approach on both WSIs and natural images. Extensive experiments on 10
public datasets for WSI classification and survival analysis show that 2DMamba
improves AUC by up to 2.48%, F1 score by 3.11%, accuracy by 2.47%, and C-index by
5.52%. Additionally, integrating our method with VMamba for natural imaging
yields 0.5 to 0.7 point improvements in mIoU on the ADE20k semantic segmentation
dataset, and a 0.2% accuracy improvement on the ImageNet-1K classification dataset.
Our code is available at https://github.com/AtlasAnalyticsLab/2DMamba.
| [
{
"version": "v1",
"created": "Sun, 1 Dec 2024 05:42:58 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 22:54:10 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Jingwei",
""
],
[
"Nguyen",
"Anh Tien",
""
],
[
"Han",
"Xi",
""
],
[
"Trinh",
"Vincent Quoc-Huy",
""
],
[
"Qin",
"Hong",
""
],
[
"Samaras",
"Dimitris",
""
],
[
"Hosseini",
"Mahdi S.",
""
]
] | TITLE: 2DMamba: Efficient State Space Model for Image Representation with
Applications on Giga-Pixel Whole Slide Image Classification
ABSTRACT: Efficiently modeling large 2D contexts is essential for various fields
including Giga-Pixel Whole Slide Imaging (WSI) and remote sensing.
Transformer-based models offer high parallelism but face challenges due to
their quadratic complexity for handling long sequences. Recently, Mamba
introduced a selective State Space Model (SSM) with linear complexity and high
parallelism, enabling effective and efficient modeling of wide context in 1D
sequences. However, extending Mamba to vision tasks, which inherently involve
2D structures, results in spatial discrepancies due to the limitations of 1D
sequence processing. On the other hand, current 2D SSMs inherently model 2D
structures but they suffer from prohibitively slow computation due to the lack
of efficient parallel algorithms. In this work, we propose 2DMamba, a novel 2D
selective SSM framework that incorporates the 2D spatial structure of images
into Mamba, with a highly optimized hardware-aware operator, achieving both
spatial continuity and computational efficiency. We validate the versatility of
our approach on both WSIs and natural images. Extensive experiments on 10
public datasets for WSI classification and survival analysis show that 2DMamba
improves AUC by up to 2.48%, F1 score by 3.11%, accuracy by 2.47%, and C-index by
5.52%. Additionally, integrating our method with VMamba for natural imaging
yields 0.5 to 0.7 point improvements in mIoU on the ADE20k semantic segmentation
dataset, and a 0.2% accuracy improvement on the ImageNet-1K classification dataset.
Our code is available at https://github.com/AtlasAnalyticsLab/2DMamba.
|
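The 2DMamba entry above centers on keeping 2D spatial structure inside a state-space scan. The NumPy sketch below spells out a sequential reference recurrence in which each hidden state depends on its upper and left neighbors; it uses fixed scalar parameters for readability, whereas the actual model uses selective (input-dependent) parameters and a hardware-aware parallel operator, so this is a conceptual illustration only.

import numpy as np

def naive_2d_scan(x, a_row, a_col, b):
    """Reference 2D state-space recurrence (sequential, for illustration only):

        h[i, j] = a_row * h[i-1, j] + a_col * h[i, j-1] + b * x[i, j]

    Each hidden state aggregates context from both the row above and the column
    to the left, so 2D spatial structure is preserved instead of flattening the
    image into a 1D sequence.
    """
    H, W = x.shape
    h = np.zeros((H, W), dtype=np.float64)
    for i in range(H):
        for j in range(W):
            up = h[i - 1, j] if i > 0 else 0.0
            left = h[i, j - 1] if j > 0 else 0.0
            h[i, j] = a_row * up + a_col * left + b * x[i, j]
    return h

if __name__ == "__main__":
    img = np.arange(16, dtype=np.float64).reshape(4, 4)
    print(naive_2d_scan(img, a_row=0.5, a_col=0.5, b=1.0))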
2412.02857 | Youssef Mansour | Youssef Mansour and Reinhard Heckel | Measuring Bias of Web-filtered Text Datasets and Bias Propagation
Through Training | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We investigate biases in pretraining datasets for large language models
(LLMs) through dataset classification experiments. Building on prior work
demonstrating the existence of biases in popular computer vision datasets, we
analyze popular open-source pretraining datasets for LLMs derived from
CommonCrawl including C4, RefinedWeb, DolmaCC, RedPajama-V2, FineWeb, and
DCLM-Baseline. Despite those datasets being obtained with similar curation
steps, neural networks can classify surprisingly well which dataset a single
text sequence belongs to, significantly better than a human can. This indicates
that small differences in filtering and processing pipelines induce
fingerprints evident in formatting, vocabulary, and content distributions.
Those biases remain even when the text is rewritten with LLMs. Moreover, these
biases propagate through training: Random sequences generated by models trained
on those datasets can be classified well by a classifier trained on the
original datasets. This can be leveraged to estimate the pretraining mixture
proportions of the data sources.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2024 21:43:58 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 23:07:45 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Mansour",
"Youssef",
""
],
[
"Heckel",
"Reinhard",
""
]
] | TITLE: Measuring Bias of Web-filtered Text Datasets and Bias Propagation
Through Training
ABSTRACT: We investigate biases in pretraining datasets for large language models
(LLMs) through dataset classification experiments. Building on prior work
demonstrating the existence of biases in popular computer vision datasets, we
analyze popular open-source pretraining datasets for LLMs derived from
CommonCrawl including C4, RefinedWeb, DolmaCC, RedPajama-V2, FineWeb, and
DCLM-Baseline. Despite those datasets being obtained with similar curation
steps, neural networks can classify surprisingly well which dataset a single
text sequence belongs to, significantly better than a human can. This indicates
that small differences in filtering and processing pipelines induce
fingerprints evident in formatting, vocabulary, and content distributions.
Those biases remain even when the text is rewritten with LLMs. Moreover, these
biases propagate through training: Random sequences generated by models trained
on those datasets can be classified well by a classifier trained on the
original datasets. This can be leveraged to estimate the pretraining mixture
proportions of the data sources.
|
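The dataset-classification experiment described above can be reproduced in miniature: train a classifier to predict which corpus a text sequence came from, then average its predicted probabilities over a model's generations to estimate mixture proportions. The scikit-learn sketch below only shows the shape of such an experiment on toy data; the two-corpus setup, TF-IDF features, and logistic regression are simplifications, as the paper works with large samples and stronger classifiers.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for sequences drawn from two web-filtered corpora; the paper
# samples from C4, RefinedWeb, DolmaCC, RedPajama-V2, FineWeb, DCLM-Baseline.
texts = [
    "breaking news sports scores updated hourly",    # pretend "corpus A" style
    "sign up now for the best deals newsletter",
    "theorem proof follows from lemma two directly",  # pretend "corpus B" style
    "the experimental results are shown in table 3",
]
labels = ["A", "A", "B", "B"]

# Dataset classification experiment: if the classifier beats chance on held-out
# sequences, the curation pipelines leave detectable fingerprints.
vec = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(texts), labels)

# Mixture-proportion estimate for a model's generations: average the predicted
# source probabilities over sampled outputs (here just two dummy strings).
generated = ["newsletter deals updated hourly", "proof of the theorem in table 3"]
probs = clf.predict_proba(vec.transform(generated))
print(dict(zip(clf.classes_, probs.mean(axis=0).round(3))))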
2412.04606 | Chenyu Wang | Chenyu Wang, Weichao Zhou, Shantanu Ghosh, Kayhan Batmanghelich,
Wenchao Li | Semantic Consistency-Based Uncertainty Quantification for Factuality in
Radiology Report Generation | null | null | null | null | cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Radiology report generation (RRG) has shown great potential in assisting
radiologists by automating the labor-intensive task of report writing. While
recent advancements have improved the quality and coherence of generated
reports, ensuring their factual correctness remains a critical challenge.
Although generative medical Vision Large Language Models (VLLMs) have been
proposed to address this issue, these models are prone to hallucinations and
can produce inaccurate diagnostic information. To address these concerns, we
introduce a novel Semantic Consistency-Based Uncertainty Quantification
framework that provides both report-level and sentence-level uncertainties.
Unlike existing approaches, our method does not require modifications to the
underlying model or access to its inner state, such as output token logits,
thus serving as a plug-and-play module that can be seamlessly integrated with
state-of-the-art models. Extensive experiments demonstrate the efficacy of our
method in detecting hallucinations and enhancing the factual accuracy of
automatically generated radiology reports. By abstaining from high-uncertainty
reports, our approach improves factuality scores by $10$\%, achieved by
rejecting $20$\% of reports using the \texttt{Radialog} model on the MIMIC-CXR
dataset. Furthermore, sentence-level uncertainty flags the lowest-precision
sentence in each report with an $82.9$\% success rate. Our implementation is
open-source and available at https://github.com/BU-DEPEND-Lab/SCUQ-RRG.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 20:43:39 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 19:19:05 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Chenyu",
""
],
[
"Zhou",
"Weichao",
""
],
[
"Ghosh",
"Shantanu",
""
],
[
"Batmanghelich",
"Kayhan",
""
],
[
"Li",
"Wenchao",
""
]
] | TITLE: Semantic Consistency-Based Uncertainty Quantification for Factuality in
Radiology Report Generation
ABSTRACT: Radiology report generation (RRG) has shown great potential in assisting
radiologists by automating the labor-intensive task of report writing. While
recent advancements have improved the quality and coherence of generated
reports, ensuring their factual correctness remains a critical challenge.
Although generative medical Vision Large Language Models (VLLMs) have been
proposed to address this issue, these models are prone to hallucinations and
can produce inaccurate diagnostic information. To address these concerns, we
introduce a novel Semantic Consistency-Based Uncertainty Quantification
framework that provides both report-level and sentence-level uncertainties.
Unlike existing approaches, our method does not require modifications to the
underlying model or access to its inner state, such as output token logits,
thus serving as a plug-and-play module that can be seamlessly integrated with
state-of-the-art models. Extensive experiments demonstrate the efficacy of our
method in detecting hallucinations and enhancing the factual accuracy of
automatically generated radiology reports. By abstaining from high-uncertainty
reports, our approach improves factuality scores by $10$\%, achieved by
rejecting $20$\% of reports using the \texttt{Radialog} model on the MIMIC-CXR
dataset. Furthermore, sentence-level uncertainty flags the lowest-precision
sentence in each report with an $82.9$\% success rate. Our implementation is
open-source and available at https://github.com/BU-DEPEND-Lab/SCUQ-RRG.
|
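The semantic-consistency idea in the entry above needs only generated texts, not model internals: sample several candidate reports, measure how much they agree semantically, and treat low agreement as high uncertainty. The Python sketch below uses token-overlap (Jaccard) similarity as a crude stand-in for a real semantic similarity or entailment model; the 0.5 abstention threshold and the example reports are invented for illustration.

from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Toy semantic-similarity stand-in (token overlap); a real system would
    use sentence embeddings or an entailment model."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def report_uncertainty(samples: list) -> float:
    """Report-level uncertainty from semantic consistency across N sampled
    reports: low mutual agreement -> high uncertainty. Model-agnostic, since
    it needs only the generated texts, not logits or internal states."""
    sims = [jaccard(x, y) for x, y in combinations(samples, 2)]
    return 1.0 - sum(sims) / len(sims)

if __name__ == "__main__":
    # Hypothetical candidate reports sampled from a black-box RRG model.
    samples = [
        "no acute cardiopulmonary abnormality",
        "no acute cardiopulmonary process identified",
        "large right pleural effusion with compressive atelectasis",
    ]
    u = report_uncertainty(samples)
    print(f"uncertainty = {u:.2f}, abstain = {u > 0.5}")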