Dataset schema (one entry per column, with the type and observed value range reported by the dataset viewer):

    id              string        length 9 to 16
    submitter       string        length 3 to 64
    authors         string        length 5 to 6.63k
    title           string        length 7 to 245
    comments        string        length 1 to 482
    journal-ref     string        length 4 to 382
    doi             string        length 9 to 151
    report-no       string        984 distinct values
    categories      string        length 5 to 108
    license         string        9 distinct values
    abstract        string        length 83 to 3.41k
    versions        list          length 1 to 20
    update_date     timestamp[s]  2007-05-23 00:00:00 to 2025-04-11 00:00:00
    authors_parsed  list          length 1 to 427
    prompt          string        length 166 to 3.49k
    label           string        2 classes (new_dataset / no_new_dataset)
    prob            float64       0.5 to 0.98

Each record below lists its field values in this column order, with null marking an empty field.
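A minimal sketch of loading rows with this schema using the Hugging Face `datasets` library and filtering on the `label` and `prob` columns. It assumes the records are stored as JSON lines; the `data.jsonl` path is a placeholder, not the dataset's actual location.

```python
from datasets import load_dataset

# Placeholder path: substitute the real repository id or local file.
ds = load_dataset("json", data_files="data.jsonl", split="train")

row = ds[0]
print(row["id"], row["title"])    # arXiv id and paper title
print(row["label"], row["prob"])  # predicted class and its confidence

# Keep only high-confidence "new_dataset" rows.
high_conf_new = ds.filter(lambda r: r["label"] == "new_dataset" and r["prob"] >= 0.9)
print(len(high_conf_new))
```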
2405.03420
Marcin Denkowski
Emil Benedykciuk and Marcin Denkowski and Grzegorz W\'ojcik
Implantable Adaptive Cells: A Novel Enhancement for Pre-Trained U-Nets in Medical Image Segmentation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper introduces a novel approach to enhancing the performance of pre-trained neural networks in medical image segmentation using gradient-based Neural Architecture Search (NAS) methods. We present the concept of the Implantable Adaptive Cell (IAC): small modules, identified through a Partially-Connected DARTS-based approach, designed to be injected into the skip connections of an existing, already-trained U-shaped model. Unlike traditional NAS methods, our approach refines existing architectures without full retraining. Experiments on four medical datasets with MRI and CT images show consistent accuracy improvements across various U-Net configurations, with segmentation accuracy gains of approximately 5 percentage points across all validation datasets and improvements reaching up to 11 percentage points in the best-performing cases. The findings of this study not only offer a cost-effective alternative to the complete overhaul of complex models for performance upgrades but also indicate the potential applicability of our method to other architectures and problem domains.
[ { "version": "v1", "created": "Mon, 6 May 2024 12:40:15 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 12:52:29 GMT" } ]
2025-03-07T00:00:00
[ [ "Benedykciuk", "Emil", "" ], [ "Denkowski", "Marcin", "" ], [ "Wójcik", "Grzegorz", "" ] ]
TITLE: Implantable Adaptive Cells: A Novel Enhancement for Pre-Trained U-Nets in Medical Image Segmentation ABSTRACT: This paper introduces a novel approach to enhancing the performance of pre-trained neural networks in medical image segmentation using gradient-based Neural Architecture Search (NAS) methods. We present the concept of the Implantable Adaptive Cell (IAC): small modules, identified through a Partially-Connected DARTS-based approach, designed to be injected into the skip connections of an existing, already-trained U-shaped model. Unlike traditional NAS methods, our approach refines existing architectures without full retraining. Experiments on four medical datasets with MRI and CT images show consistent accuracy improvements across various U-Net configurations, with segmentation accuracy gains of approximately 5 percentage points across all validation datasets and improvements reaching up to 11 percentage points in the best-performing cases. The findings of this study not only offer a cost-effective alternative to the complete overhaul of complex models for performance upgrades but also indicate the potential applicability of our method to other architectures and problem domains.
no_new_dataset
0.945751
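As the first record shows, the `prompt` field is simply the `title` and `abstract` columns concatenated under "TITLE:" and "ABSTRACT:" markers. A hedged sketch of that construction follows; the exact whitespace handling of the original pipeline is an assumption.

```python
def build_prompt(title: str, abstract: str) -> str:
    # Mirrors the "TITLE: ... ABSTRACT: ..." pattern visible in the records.
    return f"TITLE: {title} ABSTRACT: {abstract.strip()}"

prompt = build_prompt(
    "Implantable Adaptive Cells: A Novel Enhancement for Pre-Trained U-Nets in Medical Image Segmentation",
    "This paper introduces a novel approach to enhancing the performance of pre-trained neural networks...",
)
print(prompt[:80])
```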
2405.04902
Zhihan Ju
Zhihan Ju, Wanting Zhou, Longteng Kong, Yu Chen, Yi Li, Zhenan Sun, Caifeng Shan
HAGAN: Hybrid Augmented Generative Adversarial Network for Medical Image Synthesis
null
Machine Intelligence Research 2024
10.1007/s11633-024-1528-y
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Medical Image Synthesis (MIS) plays an important role in the intelligent medical field and greatly reduces the economic and time costs of medical diagnosis. However, due to the complexity of medical images and the similar characteristics of different tissue cells, existing methods face great challenges in maintaining biological consistency. To this end, we propose the Hybrid Augmented Generative Adversarial Network (HAGAN) to maintain the authenticity of structural texture and tissue cells. HAGAN comprises an Attention Mixed (AttnMix) Generator, a Hierarchical Discriminator, and a Reverse Skip Connection between the Discriminator and Generator. The AttnMix consistency differentiable regularization encourages the perception of structural and textural variations between real and fake images, which improves the pathological integrity of synthetic images and the accuracy of features in local areas. The Hierarchical Discriminator introduces pixel-by-pixel discriminant feedback to the generator, enhancing the saliency and discriminability of global and local details simultaneously. The Reverse Skip Connection further improves the accuracy of fine details by fusing real and synthetic distribution features. Our experimental evaluations on three datasets of different scales, i.e., COVID-CT, ACDC and BraTS2018, demonstrate that HAGAN outperforms existing methods and achieves state-of-the-art performance in both high-resolution and low-resolution settings.
[ { "version": "v1", "created": "Wed, 8 May 2024 09:13:42 GMT" } ]
2025-03-07T00:00:00
[ [ "Ju", "Zhihan", "" ], [ "Zhou", "Wanting", "" ], [ "Kong", "Longteng", "" ], [ "Chen", "Yu", "" ], [ "Li", "Yi", "" ], [ "Sun", "Zhenan", "" ], [ "Shan", "Caifeng", "" ] ]
TITLE: HAGAN: Hybrid Augmented Generative Adversarial Network for Medical Image Synthesis ABSTRACT: Medical Image Synthesis (MIS) plays an important role in the intelligent medical field and greatly reduces the economic and time costs of medical diagnosis. However, due to the complexity of medical images and the similar characteristics of different tissue cells, existing methods face great challenges in maintaining biological consistency. To this end, we propose the Hybrid Augmented Generative Adversarial Network (HAGAN) to maintain the authenticity of structural texture and tissue cells. HAGAN comprises an Attention Mixed (AttnMix) Generator, a Hierarchical Discriminator, and a Reverse Skip Connection between the Discriminator and Generator. The AttnMix consistency differentiable regularization encourages the perception of structural and textural variations between real and fake images, which improves the pathological integrity of synthetic images and the accuracy of features in local areas. The Hierarchical Discriminator introduces pixel-by-pixel discriminant feedback to the generator, enhancing the saliency and discriminability of global and local details simultaneously. The Reverse Skip Connection further improves the accuracy of fine details by fusing real and synthetic distribution features. Our experimental evaluations on three datasets of different scales, i.e., COVID-CT, ACDC and BraTS2018, demonstrate that HAGAN outperforms existing methods and achieves state-of-the-art performance in both high-resolution and low-resolution settings.
no_new_dataset
0.953966
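The `authors_parsed` field stores one `[last_name, first_name, suffix]` triple per author, as in the record above. Below is a small illustrative helper for rebuilding a display string; the join format is an assumption, not part of the dataset.

```python
authors_parsed = [["Ju", "Zhihan", ""], ["Zhou", "Wanting", ""], ["Shan", "Caifeng", ""]]

def format_authors(parsed):
    # Each entry is [last_name, first_name, suffix]; suffix is often empty.
    names = []
    for last, first, suffix in parsed:
        name = f"{first} {last}".strip()
        if suffix:
            name = f"{name} {suffix}"
        names.append(name)
    return ", ".join(names)

print(format_authors(authors_parsed))  # Zhihan Ju, Wanting Zhou, Caifeng Shan
```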
2405.07155
Hu Wang
Hu Wang, Salma Hassan, Yuyuan Liu, Congbo Ma, Yuanhong Chen, Yutong Xie, Mostafa Salem, Yu Tian, Jodie Avery, Louise Hull, Ian Reid, Mohammad Yaqub, Gustavo Carneiro
Meta-Learned Modality-Weighted Knowledge Distillation for Robust Multi-Modal Learning with Missing Data
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
In multi-modal learning, some modalities are more influential than others, and their absence can have a significant impact on classification/segmentation accuracy. Addressing this challenge, we propose a novel approach called Meta-learned Modality-weighted Knowledge Distillation (MetaKD), which enables multi-modal models to maintain high accuracy even when key modalities are missing. MetaKD adaptively estimates the importance weight of each modality through a meta-learning process. These learned importance weights guide a pairwise modality-weighted knowledge distillation process, allowing high-importance modalities to transfer knowledge to lower-importance ones, resulting in robust performance despite missing inputs. Unlike previous methods in the field, which are often task-specific and require significant modifications, our approach is designed to work in multiple tasks (e.g., segmentation and classification) with minimal adaptation. Experimental results on five prevalent datasets, including three Brain Tumor Segmentation datasets (BraTS2018, BraTS2019 and BraTS2020), the Alzheimer's Disease Neuroimaging Initiative (ADNI) classification dataset and the Audiovision-MNIST classification dataset, demonstrate that the proposed model outperforms the compared models by a large margin. The code is available at https://github.com/billhhh/MetaKD.
[ { "version": "v1", "created": "Sun, 12 May 2024 04:18:10 GMT" }, { "version": "v2", "created": "Tue, 12 Nov 2024 16:39:29 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 08:51:28 GMT" } ]
2025-03-07T00:00:00
[ [ "Wang", "Hu", "" ], [ "Hassan", "Salma", "" ], [ "Liu", "Yuyuan", "" ], [ "Ma", "Congbo", "" ], [ "Chen", "Yuanhong", "" ], [ "Xie", "Yutong", "" ], [ "Salem", "Mostafa", "" ], [ "Tian", "Yu", "" ], [ "Avery", "Jodie", "" ], [ "Hull", "Louise", "" ], [ "Reid", "Ian", "" ], [ "Yaqub", "Mohammad", "" ], [ "Carneiro", "Gustavo", "" ] ]
TITLE: Meta-Learned Modality-Weighted Knowledge Distillation for Robust Multi-Modal Learning with Missing Data ABSTRACT: In multi-modal learning, some modalities are more influential than others, and their absence can have a significant impact on classification/segmentation accuracy. Addressing this challenge, we propose a novel approach called Meta-learned Modality-weighted Knowledge Distillation (MetaKD), which enables multi-modal models to maintain high accuracy even when key modalities are missing. MetaKD adaptively estimates the importance weight of each modality through a meta-learning process. These learned importance weights guide a pairwise modality-weighted knowledge distillation process, allowing high-importance modalities to transfer knowledge to lower-importance ones, resulting in robust performance despite missing inputs. Unlike previous methods in the field, which are often task-specific and require significant modifications, our approach is designed to work in multiple tasks (e.g., segmentation and classification) with minimal adaptation. Experimental results on five prevalent datasets, including three Brain Tumor Segmentation datasets (BraTS2018, BraTS2019 and BraTS2020), the Alzheimer's Disease Neuroimaging Initiative (ADNI) classification dataset and the Audiovision-MNIST classification dataset, demonstrate that the proposed model outperforms the compared models by a large margin. The code is available at https://github.com/billhhh/MetaKD.
no_new_dataset
0.955026
2405.12833
Ruizhe Li
Xinyi Wang, Grazziela Figueredo, Ruizhe Li, Wei Emma Zhang, Weitong Chen, Xin Chen
A Survey of Deep Learning-based Radiology Report Generation Using Multimodal Data
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic radiology report generation can alleviate the workload for physicians and minimize regional disparities in medical resources, therefore becoming an important topic in the medical image analysis field. It is a challenging task, as the computational model needs to mimic physicians to obtain information from multi-modal input data (e.g., medical images, clinical information, and medical knowledge), and produce comprehensive and accurate reports. Recently, numerous works have emerged to address this issue using deep-learning-based methods, such as transformers, contrastive learning, and knowledge-base construction. This survey summarizes the key techniques developed in the most recent works and proposes a general workflow for deep-learning-based report generation with five main components, including multi-modality data acquisition, data preparation, feature learning, feature fusion and interaction, and report generation. The state-of-the-art methods for each of these components are highlighted. Additionally, we summarize the latest developments in large model-based methods and model explainability, along with public datasets, evaluation methods, current challenges, and future directions in this field. We have also conducted a quantitative comparison between different methods in the same experimental setting. This is the most up-to-date survey that focuses on multi-modality inputs and data fusion for radiology report generation. The aim is to provide comprehensive and rich information for researchers interested in automatic clinical report generation and medical image analysis, especially when using multimodal inputs, and to assist them in developing new algorithms to advance the field.
[ { "version": "v1", "created": "Tue, 21 May 2024 14:37:35 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 17:18:49 GMT" } ]
2025-03-07T00:00:00
[ [ "Wang", "Xinyi", "" ], [ "Figueredo", "Grazziela", "" ], [ "Li", "Ruizhe", "" ], [ "Zhang", "Wei Emma", "" ], [ "Chen", "Weitong", "" ], [ "Chen", "Xin", "" ] ]
TITLE: A Survey of Deep Learning-based Radiology Report Generation Using Multimodal Data ABSTRACT: Automatic radiology report generation can alleviate the workload for physicians and minimize regional disparities in medical resources, therefore becoming an important topic in the medical image analysis field. It is a challenging task, as the computational model needs to mimic physicians to obtain information from multi-modal input data (e.g., medical images, clinical information, and medical knowledge), and produce comprehensive and accurate reports. Recently, numerous works have emerged to address this issue using deep-learning-based methods, such as transformers, contrastive learning, and knowledge-base construction. This survey summarizes the key techniques developed in the most recent works and proposes a general workflow for deep-learning-based report generation with five main components, including multi-modality data acquisition, data preparation, feature learning, feature fusion and interaction, and report generation. The state-of-the-art methods for each of these components are highlighted. Additionally, we summarize the latest developments in large model-based methods and model explainability, along with public datasets, evaluation methods, current challenges, and future directions in this field. We have also conducted a quantitative comparison between different methods in the same experimental setting. This is the most up-to-date survey that focuses on multi-modality inputs and data fusion for radiology report generation. The aim is to provide comprehensive and rich information for researchers interested in automatic clinical report generation and medical image analysis, especially when using multimodal inputs, and to assist them in developing new algorithms to advance the field.
no_new_dataset
0.947381
2405.14736
Xinyi Shang
Xinyi Shang, Peng Sun, Tao Lin
GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost
https://github.com/LINs-lab/GIFT
ICLR 2025
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advancements in dataset distillation have demonstrated the significant benefits of employing soft labels generated by pre-trained teacher models. In this paper, we introduce a novel perspective by emphasizing the full utilization of labels. We first conduct a comprehensive comparison of various loss functions for soft label utilization in dataset distillation, revealing that the model trained on the synthetic dataset exhibits high sensitivity to the choice of loss function for soft label utilization. This finding highlights the necessity of a universal loss function for training models on synthetic datasets. Building on these insights, we introduce an extremely simple yet surprisingly effective plug-and-play approach, GIFT, which encompasses soft label refinement and a cosine similarity-based loss function to efficiently leverage full label information. Extensive experiments indicate that GIFT consistently enhances state-of-the-art dataset distillation methods across various dataset scales, without incurring additional computational costs. Importantly, GIFT significantly enhances cross-optimizer generalization, an area previously overlooked. For instance, on ImageNet-1K with IPC = 10, GIFT enhances the state-of-the-art method RDED by 30.8% in cross-optimizer generalization. Our code is available at https://github.com/LINs-lab/GIFT.
[ { "version": "v1", "created": "Thu, 23 May 2024 16:02:30 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 09:52:43 GMT" } ]
2025-03-07T00:00:00
[ [ "Shang", "Xinyi", "" ], [ "Sun", "Peng", "" ], [ "Lin", "Tao", "" ] ]
TITLE: GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost ABSTRACT: Recent advancements in dataset distillation have demonstrated the significant benefits of employing soft labels generated by pre-trained teacher models. In this paper, we introduce a novel perspective by emphasizing the full utilization of labels. We first conduct a comprehensive comparison of various loss functions for soft label utilization in dataset distillation, revealing that the model trained on the synthetic dataset exhibits high sensitivity to the choice of loss function for soft label utilization. This finding highlights the necessity of a universal loss function for training models on synthetic datasets. Building on these insights, we introduce an extremely simple yet surprisingly effective plug-and-play approach, GIFT, which encompasses soft label refinement and a cosine similarity-based loss function to efficiently leverage full label information. Extensive experiments indicate that GIFT consistently enhances state-of-the-art dataset distillation methods across various dataset scales, without incurring additional computational costs. Importantly, GIFT significantly enhances cross-optimizer generalization, an area previously overlooked. For instance, on ImageNet-1K with IPC = 10, GIFT enhances the state-of-the-art method RDED by 30.8% in cross-optimizer generalization. Our code is available at https://github.com/LINs-lab/GIFT.
no_new_dataset
0.941169
2406.00799
Sahar Abdelnabi
Sahar Abdelnabi, Aideen Fay, Giovanni Cherubin, Ahmed Salem, Mario Fritz, Andrew Paverd
Get my drift? Catching LLM Task Drift with Activation Deltas
SaTML 2025
null
null
null
cs.CR cs.CL cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
LLMs are commonly used in retrieval-augmented applications to execute user instructions based on data from external sources. For example, modern search engines use LLMs to answer queries based on relevant search results; email plugins summarize emails by processing their content through an LLM. However, the potentially untrusted provenance of these data sources can lead to prompt injection attacks, where the LLM is manipulated by natural language instructions embedded in the external data, causing it to deviate from the user's original instruction(s). We define this deviation as task drift. Task drift is a significant concern as it allows attackers to exfiltrate data or influence the LLM's output for other users. We study LLM activations as a solution to detect task drift, showing that activation deltas - the difference in activations before and after processing external data - are strongly correlated with this phenomenon. Through two probing methods, we demonstrate that a simple linear classifier can detect drift with near-perfect ROC AUC on an out-of-distribution test set. We evaluate these methods by making minimal assumptions about how users' tasks, system prompts, and attacks can be phrased. We observe that this approach generalizes surprisingly well to unseen task domains, such as prompt injections, jailbreaks, and malicious instructions, without being trained on any of these attacks. Interestingly, the fact that this solution does not require any modifications to the LLM (e.g., fine-tuning), as well as its compatibility with existing meta-prompting solutions, makes it cost-efficient and easy to deploy. To encourage further research on activation-based task inspection, decoding, and interpretability, we release our large-scale TaskTracker toolkit, featuring a dataset of over 500K instances, representations from six SoTA language models, and a suite of inspection tools.
[ { "version": "v1", "created": "Sun, 2 Jun 2024 16:53:21 GMT" }, { "version": "v2", "created": "Mon, 10 Jun 2024 15:39:56 GMT" }, { "version": "v3", "created": "Thu, 20 Jun 2024 13:33:08 GMT" }, { "version": "v4", "created": "Fri, 19 Jul 2024 13:07:25 GMT" }, { "version": "v5", "created": "Sun, 3 Nov 2024 14:52:35 GMT" }, { "version": "v6", "created": "Thu, 6 Mar 2025 17:43:10 GMT" } ]
2025-03-07T00:00:00
[ [ "Abdelnabi", "Sahar", "" ], [ "Fay", "Aideen", "" ], [ "Cherubin", "Giovanni", "" ], [ "Salem", "Ahmed", "" ], [ "Fritz", "Mario", "" ], [ "Paverd", "Andrew", "" ] ]
TITLE: Get my drift? Catching LLM Task Drift with Activation Deltas ABSTRACT: LLMs are commonly used in retrieval-augmented applications to execute user instructions based on data from external sources. For example, modern search engines use LLMs to answer queries based on relevant search results; email plugins summarize emails by processing their content through an LLM. However, the potentially untrusted provenance of these data sources can lead to prompt injection attacks, where the LLM is manipulated by natural language instructions embedded in the external data, causing it to deviate from the user's original instruction(s). We define this deviation as task drift. Task drift is a significant concern as it allows attackers to exfiltrate data or influence the LLM's output for other users. We study LLM activations as a solution to detect task drift, showing that activation deltas - the difference in activations before and after processing external data - are strongly correlated with this phenomenon. Through two probing methods, we demonstrate that a simple linear classifier can detect drift with near-perfect ROC AUC on an out-of-distribution test set. We evaluate these methods by making minimal assumptions about how users' tasks, system prompts, and attacks can be phrased. We observe that this approach generalizes surprisingly well to unseen task domains, such as prompt injections, jailbreaks, and malicious instructions, without being trained on any of these attacks. Interestingly, the fact that this solution does not require any modifications to the LLM (e.g., fine-tuning), as well as its compatibility with existing meta-prompting solutions, makes it cost-efficient and easy to deploy. To encourage further research on activation-based task inspection, decoding, and interpretability, we release our large-scale TaskTracker toolkit, featuring a dataset of over 500K instances, representations from six SoTA language models, and a suite of inspection tools.
no_new_dataset
0.924891
2406.01145
Guangyi Liu
Guangyi Liu, Yongqi Zhang, Yong Li, Quanming Yao
Dual Reasoning: A GNN-LLM Collaborative Framework for Knowledge Graph Question Answering
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) excel at intuitive, implicit reasoning. Guiding LLMs to construct thought chains can enhance their deliberate reasoning abilities, but it also faces challenges such as hallucination. Knowledge Graphs (KGs) can provide explicit structured knowledge for LLMs to alleviate these issues. However, existing KG-enhanced methods often overlook explicit graph learning, making it challenging to efficiently provide precise reasoning chains for LLMs. Following dual-process theory, we propose Dual-Reasoning (DualR), a novel framework that integrates an external system based on a Graph Neural Network (GNN) for explicit reasoning on KGs, complementing the implicit reasoning of LLMs through externalized reasoning chains. DualR designs an LLM-empowered GNN module for explicit learning on KGs, efficiently extracting high-quality reasoning chains. These reasoning chains are then refined into a knowledge-enhanced multiple-choice prompt, guiding a frozen LLM to reason thoughtfully for final answer determination. Extensive experiments on three benchmark KGQA datasets demonstrate that DualR achieves state-of-the-art performance while maintaining high efficiency and interpretability.
[ { "version": "v1", "created": "Mon, 3 Jun 2024 09:38:28 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 06:49:04 GMT" } ]
2025-03-07T00:00:00
[ [ "Liu", "Guangyi", "" ], [ "Zhang", "Yongqi", "" ], [ "Li", "Yong", "" ], [ "Yao", "Quanming", "" ] ]
TITLE: Dual Reasoning: A GNN-LLM Collaborative Framework for Knowledge Graph Question Answering ABSTRACT: Large Language Models (LLMs) excel at intuitive, implicit reasoning. Guiding LLMs to construct thought chains can enhance their deliberate reasoning abilities, but it also faces challenges such as hallucination. Knowledge Graphs (KGs) can provide explicit structured knowledge for LLMs to alleviate these issues. However, existing KG-enhanced methods often overlook explicit graph learning, making it challenging to efficiently provide precise reasoning chains for LLMs. Following dual-process theory, we propose Dual-Reasoning (DualR), a novel framework that integrates an external system based on a Graph Neural Network (GNN) for explicit reasoning on KGs, complementing the implicit reasoning of LLMs through externalized reasoning chains. DualR designs an LLM-empowered GNN module for explicit learning on KGs, efficiently extracting high-quality reasoning chains. These reasoning chains are then refined into a knowledge-enhanced multiple-choice prompt, guiding a frozen LLM to reason thoughtfully for final answer determination. Extensive experiments on three benchmark KGQA datasets demonstrate that DualR achieves state-of-the-art performance while maintaining high efficiency and interpretability.
no_new_dataset
0.946547
2406.10292
Chufan Gao
Chufan Gao, Jathurshan Pradeepkumar, Trisha Das, Shivashankar Thati, Jimeng Sun
Automatically Labeling Clinical Trial Outcomes: A Large-Scale Benchmark for Drug Development
null
null
null
null
cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: The cost of drug discovery and development is substantial, with clinical trial outcomes playing a critical role in regulatory approval and patient care. However, access to large-scale, high-quality clinical trial outcome data remains limited, hindering advancements in predictive modeling and evidence-based decision-making. Methods: We present the Clinical Trial Outcome (CTO) benchmark, a fully reproducible, large-scale repository encompassing approximately 125,000 drug and biologics trials. CTO integrates large language model (LLM) interpretations of publications, trial phase progression tracking, sentiment analysis from news sources, stock price movements of trial sponsors, and additional trial-related metrics. Furthermore, we manually annotated a dataset of clinical trials conducted between 2020 and 2024 to enhance the quality and reliability of outcome labels. Results: The trial outcome labels in the CTO benchmark agree strongly with expert annotations, achieving an F1 score of 94 for Phase 3 trials and 91 across all phases. Additionally, benchmarking standard machine learning models on our manually annotated dataset revealed distribution shifts in recent trials, underscoring the necessity of continuously updated labeling approaches. Conclusions: By analyzing CTO's performance on recent clinical trials, we demonstrate the ongoing need for high-quality, up-to-date trial outcome labels. We publicly release the CTO knowledge base and annotated labels at https://chufangao.github.io/CTOD, with regular updates to support research on clinical trial outcomes and inform data-driven improvements in drug development.
[ { "version": "v1", "created": "Thu, 13 Jun 2024 04:23:35 GMT" }, { "version": "v2", "created": "Mon, 3 Feb 2025 20:16:56 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 02:41:55 GMT" } ]
2025-03-07T00:00:00
[ [ "Gao", "Chufan", "" ], [ "Pradeepkumar", "Jathurshan", "" ], [ "Das", "Trisha", "" ], [ "Thati", "Shivashankar", "" ], [ "Sun", "Jimeng", "" ] ]
TITLE: Automatically Labeling Clinical Trial Outcomes: A Large-Scale Benchmark for Drug Development ABSTRACT: Background: The cost of drug discovery and development is substantial, with clinical trial outcomes playing a critical role in regulatory approval and patient care. However, access to large-scale, high-quality clinical trial outcome data remains limited, hindering advancements in predictive modeling and evidence-based decision-making. Methods: We present the Clinical Trial Outcome (CTO) benchmark, a fully reproducible, large-scale repository encompassing approximately 125,000 drug and biologics trials. CTO integrates large language model (LLM) interpretations of publications, trial phase progression tracking, sentiment analysis from news sources, stock price movements of trial sponsors, and additional trial-related metrics. Furthermore, we manually annotated a dataset of clinical trials conducted between 2020 and 2024 to enhance the quality and reliability of outcome labels. Results: The trial outcome labels in the CTO benchmark agree strongly with expert annotations, achieving an F1 score of 94 for Phase 3 trials and 91 across all phases. Additionally, benchmarking standard machine learning models on our manually annotated dataset revealed distribution shifts in recent trials, underscoring the necessity of continuously updated labeling approaches. Conclusions: By analyzing CTO's performance on recent clinical trials, we demonstrate the ongoing need for high-quality, up-to-date trial outcome labels. We publicly release the CTO knowledge base and annotated labels at https://chufangao.github.io/CTOD, with regular updates to support research on clinical trial outcomes and inform data-driven improvements in drug development.
no_new_dataset
0.935228
2406.12723
Scott Lowe
Zahra Gharaee, Scott C. Lowe, ZeMing Gong, Pablo Millan Arias, Nicholas Pellegrino, Austin T. Wang, Joakim Bruslund Haurum, Iuliia Zarubiieva, Lila Kari, Dirk Steinke, Graham W. Taylor, Paul Fieguth, Angel X. Chang
BIOSCAN-5M: A Multimodal Dataset for Insect Biodiversity
null
NeurIPS 2024
null
null
cs.LG cs.AI cs.CV q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
As part of an ongoing worldwide effort to comprehend and monitor insect biodiversity, this paper presents the BIOSCAN-5M Insect dataset to the machine learning community and establishes several benchmark tasks. BIOSCAN-5M is a comprehensive dataset containing multi-modal information for over 5 million insect specimens, and it significantly expands existing image-based biological datasets by including taxonomic labels, raw nucleotide barcode sequences, assigned barcode index numbers, and geographical and size information. We propose three benchmark experiments to demonstrate the impact of the multi-modal data types on classification and clustering accuracy. First, we pretrain a masked language model on the DNA barcode sequences of the BIOSCAN-5M dataset, and demonstrate the impact of using this large reference library on species- and genus-level classification performance. Second, we propose a zero-shot transfer learning task applied to images and DNA barcodes to cluster feature embeddings obtained from self-supervised learning, to investigate whether meaningful clusters can be derived from these representation embeddings. Third, we benchmark multi-modality by performing contrastive learning on DNA barcodes, image data, and taxonomic information. This yields a general shared embedding space enabling taxonomic classification using multiple types of information and modalities. The code repository of the BIOSCAN-5M Insect dataset is available at https://github.com/bioscan-ml/BIOSCAN-5M.
[ { "version": "v1", "created": "Tue, 18 Jun 2024 15:45:21 GMT" }, { "version": "v2", "created": "Sat, 22 Jun 2024 04:13:11 GMT" }, { "version": "v3", "created": "Tue, 25 Jun 2024 02:00:48 GMT" }, { "version": "v4", "created": "Wed, 13 Nov 2024 01:45:11 GMT" }, { "version": "v5", "created": "Fri, 24 Jan 2025 23:38:09 GMT" }, { "version": "v6", "created": "Sat, 1 Mar 2025 00:03:47 GMT" } ]
2025-03-07T00:00:00
[ [ "Gharaee", "Zahra", "" ], [ "Lowe", "Scott C.", "" ], [ "Gong", "ZeMing", "" ], [ "Arias", "Pablo Millan", "" ], [ "Pellegrino", "Nicholas", "" ], [ "Wang", "Austin T.", "" ], [ "Haurum", "Joakim Bruslund", "" ], [ "Zarubiieva", "Iuliia", "" ], [ "Kari", "Lila", "" ], [ "Steinke", "Dirk", "" ], [ "Taylor", "Graham W.", "" ], [ "Fieguth", "Paul", "" ], [ "Chang", "Angel X.", "" ] ]
TITLE: BIOSCAN-5M: A Multimodal Dataset for Insect Biodiversity ABSTRACT: As part of an ongoing worldwide effort to comprehend and monitor insect biodiversity, this paper presents the BIOSCAN-5M Insect dataset to the machine learning community and establishes several benchmark tasks. BIOSCAN-5M is a comprehensive dataset containing multi-modal information for over 5 million insect specimens, and it significantly expands existing image-based biological datasets by including taxonomic labels, raw nucleotide barcode sequences, assigned barcode index numbers, and geographical and size information. We propose three benchmark experiments to demonstrate the impact of the multi-modal data types on classification and clustering accuracy. First, we pretrain a masked language model on the DNA barcode sequences of the BIOSCAN-5M dataset, and demonstrate the impact of using this large reference library on species- and genus-level classification performance. Second, we propose a zero-shot transfer learning task applied to images and DNA barcodes to cluster feature embeddings obtained from self-supervised learning, to investigate whether meaningful clusters can be derived from these representation embeddings. Third, we benchmark multi-modality by performing contrastive learning on DNA barcodes, image data, and taxonomic information. This yields a general shared embedding space enabling taxonomic classification using multiple types of information and modalities. The code repository of the BIOSCAN-5M Insect dataset is available at https://github.com/bioscan-ml/BIOSCAN-5M.
new_dataset
0.960137
2406.12753
Zhen Huang
Zhen Huang, Zengzhi Wang, Shijie Xia, Xuefeng Li, Haoyang Zou, Ruijie Xu, Run-Ze Fan, Lyumanshan Ye, Ethan Chern, Yixin Ye, Yikai Zhang, Yuqing Yang, Ting Wu, Binjie Wang, Shichao Sun, Yang Xiao, Yiyuan Li, Fan Zhou, Steffi Chern, Yiwei Qin, Yan Ma, Jiadi Su, Yixiu Liu, Yuxiang Zheng, Shaoting Zhang, Dahua Lin, Yu Qiao, Pengfei Liu
OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI
Accepted by NeurIPS 2024
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolution of Artificial Intelligence (AI) has been significantly accelerated by advancements in Large Language Models (LLMs) and Large Multimodal Models (LMMs), gradually showcasing potential cognitive reasoning abilities in problem-solving and scientific discovery (i.e., AI4Science) once exclusive to human intellect. To comprehensively evaluate current models' performance in cognitive reasoning abilities, we introduce OlympicArena, which includes 11,163 bilingual problems across both text-only and interleaved text-image modalities. These challenges encompass a wide range of disciplines spanning seven fields and 62 international Olympic competitions, rigorously examined for data leakage. We argue that the challenges in Olympic competition problems are ideal for evaluating AI's cognitive reasoning due to their complexity and interdisciplinary nature, which are essential for tackling complex scientific challenges and facilitating discoveries. Beyond evaluating performance across various disciplines using answer-only criteria, we conduct detailed experiments and analyses from multiple perspectives. We delve into the models' cognitive reasoning abilities, their performance across different modalities, and their outcomes in process-level evaluations, which are vital for tasks requiring complex reasoning with lengthy solutions. Our extensive evaluations reveal that even advanced models like GPT-4o only achieve a 39.97% overall accuracy, illustrating current AI limitations in complex reasoning and multimodal integration. Through the OlympicArena, we aim to advance AI towards superintelligence, equipping it to address more complex challenges in science and beyond. We also provide a comprehensive set of resources to support AI research, including a benchmark dataset, an open-source annotation platform, a detailed evaluation tool, and a leaderboard with automatic submission features.
[ { "version": "v1", "created": "Tue, 18 Jun 2024 16:20:53 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 12:55:25 GMT" } ]
2025-03-07T00:00:00
[ [ "Huang", "Zhen", "" ], [ "Wang", "Zengzhi", "" ], [ "Xia", "Shijie", "" ], [ "Li", "Xuefeng", "" ], [ "Zou", "Haoyang", "" ], [ "Xu", "Ruijie", "" ], [ "Fan", "Run-Ze", "" ], [ "Ye", "Lyumanshan", "" ], [ "Chern", "Ethan", "" ], [ "Ye", "Yixin", "" ], [ "Zhang", "Yikai", "" ], [ "Yang", "Yuqing", "" ], [ "Wu", "Ting", "" ], [ "Wang", "Binjie", "" ], [ "Sun", "Shichao", "" ], [ "Xiao", "Yang", "" ], [ "Li", "Yiyuan", "" ], [ "Zhou", "Fan", "" ], [ "Chern", "Steffi", "" ], [ "Qin", "Yiwei", "" ], [ "Ma", "Yan", "" ], [ "Su", "Jiadi", "" ], [ "Liu", "Yixiu", "" ], [ "Zheng", "Yuxiang", "" ], [ "Zhang", "Shaoting", "" ], [ "Lin", "Dahua", "" ], [ "Qiao", "Yu", "" ], [ "Liu", "Pengfei", "" ] ]
TITLE: OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ABSTRACT: The evolution of Artificial Intelligence (AI) has been significantly accelerated by advancements in Large Language Models (LLMs) and Large Multimodal Models (LMMs), gradually showcasing potential cognitive reasoning abilities in problem-solving and scientific discovery (i.e., AI4Science) once exclusive to human intellect. To comprehensively evaluate current models' performance in cognitive reasoning abilities, we introduce OlympicArena, which includes 11,163 bilingual problems across both text-only and interleaved text-image modalities. These challenges encompass a wide range of disciplines spanning seven fields and 62 international Olympic competitions, rigorously examined for data leakage. We argue that the challenges in Olympic competition problems are ideal for evaluating AI's cognitive reasoning due to their complexity and interdisciplinary nature, which are essential for tackling complex scientific challenges and facilitating discoveries. Beyond evaluating performance across various disciplines using answer-only criteria, we conduct detailed experiments and analyses from multiple perspectives. We delve into the models' cognitive reasoning abilities, their performance across different modalities, and their outcomes in process-level evaluations, which are vital for tasks requiring complex reasoning with lengthy solutions. Our extensive evaluations reveal that even advanced models like GPT-4o only achieve a 39.97% overall accuracy, illustrating current AI limitations in complex reasoning and multimodal integration. Through the OlympicArena, we aim to advance AI towards superintelligence, equipping it to address more complex challenges in science and beyond. We also provide a comprehensive set of resources to support AI research, including a benchmark dataset, an open-source annotation platform, a detailed evaluation tool, and a leaderboard with automatic submission features.
no_new_dataset
0.61708
2406.18380
Roman Bresson
Roman Bresson and Giannis Nikolentzos and George Panagopoulos and Michail Chatzianastasis and Jun Pang and Michalis Vazirgiannis
KAGNNs: Kolmogorov-Arnold Networks meet Graph Learning
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
In recent years, Graph Neural Networks (GNNs) have become the de facto tool for learning node and graph representations. Most GNNs typically consist of a sequence of neighborhood aggregation (a.k.a., message-passing) layers, within which the representation of each node is updated based on those of its neighbors. The most expressive message-passing GNNs can be obtained through the use of the sum aggregator and of MLPs for feature transformation, thanks to their universal approximation capabilities. However, the limitations of MLPs recently motivated the introduction of another family of universal approximators, called Kolmogorov-Arnold Networks (KANs), which rely on a different representation theorem. In this work, we compare the performance of KANs against that of MLPs on graph learning tasks. We implement three new KAN-based GNN layers, inspired respectively by the GCN, GAT and GIN layers. We evaluate two different implementations of KANs using two distinct base families of functions, namely B-splines and radial basis functions. We perform extensive experiments on node classification, link prediction, graph classification and graph regression datasets. Our results indicate that KANs are on par with or better than MLPs on all tasks studied in this paper. We also show that the size and training speed of RBF-based KANs are only marginally higher than for MLPs, making them viable alternatives. Code available at https://github.com/RomanBresson/KAGNN.
[ { "version": "v1", "created": "Wed, 26 Jun 2024 14:21:21 GMT" }, { "version": "v2", "created": "Mon, 1 Jul 2024 07:13:08 GMT" }, { "version": "v3", "created": "Fri, 13 Dec 2024 09:34:22 GMT" }, { "version": "v4", "created": "Thu, 6 Mar 2025 10:25:17 GMT" } ]
2025-03-07T00:00:00
[ [ "Bresson", "Roman", "" ], [ "Nikolentzos", "Giannis", "" ], [ "Panagopoulos", "George", "" ], [ "Chatzianastasis", "Michail", "" ], [ "Pang", "Jun", "" ], [ "Vazirgiannis", "Michalis", "" ] ]
TITLE: KAGNNs: Kolmogorov-Arnold Networks meet Graph Learning ABSTRACT: In recent years, Graph Neural Networks (GNNs) have become the de facto tool for learning node and graph representations. Most GNNs typically consist of a sequence of neighborhood aggregation (a.k.a., message-passing) layers, within which the representation of each node is updated based on those of its neighbors. The most expressive message-passing GNNs can be obtained through the use of the sum aggregator and of MLPs for feature transformation, thanks to their universal approximation capabilities. However, the limitations of MLPs recently motivated the introduction of another family of universal approximators, called Kolmogorov-Arnold Networks (KANs), which rely on a different representation theorem. In this work, we compare the performance of KANs against that of MLPs on graph learning tasks. We implement three new KAN-based GNN layers, inspired respectively by the GCN, GAT and GIN layers. We evaluate two different implementations of KANs using two distinct base families of functions, namely B-splines and radial basis functions. We perform extensive experiments on node classification, link prediction, graph classification and graph regression datasets. Our results indicate that KANs are on par with or better than MLPs on all tasks studied in this paper. We also show that the size and training speed of RBF-based KANs are only marginally higher than for MLPs, making them viable alternatives. Code available at https://github.com/RomanBresson/KAGNN.
no_new_dataset
0.948585
2407.04287
Alex Ergasti
Alex Ergasti, Tomaso Fontanini, Claudio Ferrari, Massimo Bertozzi, Andrea Prati
MARS: Paying more attention to visual attributes for text-based person search
null
null
10.1145/3721482
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Text-based person search (TBPS) is a problem that has gained significant interest within the research community. The task is that of retrieving one or more images of a specific individual based on a textual description. The multi-modal nature of the task requires learning representations that bridge text and image data within a shared latent space. Existing TBPS systems face two major challenges. The first is inter-identity noise, which stems from the inherent vagueness and imprecision of text descriptions and reflects how descriptions of visual attributes can be associated with many different people; the second is intra-identity variation, i.e., all those nuisances, such as pose and illumination, that can alter the visual appearance of the same textual attributes for a given subject. To address these issues, this paper presents a novel TBPS architecture named MARS (Mae-Attribute-Relation-Sensitive), which enhances current state-of-the-art models by introducing two key components: a Visual Reconstruction Loss and an Attribute Loss. The former employs a Masked AutoEncoder trained to reconstruct randomly masked image patches with the aid of the textual description. In doing so, the model is encouraged to learn more expressive representations and textual-visual relations in the latent space. The Attribute Loss, instead, balances the contribution of different types of attributes, defined as adjective-noun chunks of text. This loss ensures that every attribute is taken into consideration in the person retrieval process. Extensive experiments on three commonly used datasets, namely CUHK-PEDES, ICFG-PEDES, and RSTPReid, report performance improvements, with significant gains in the mean Average Precision (mAP) metric w.r.t. the current state of the art.
[ { "version": "v1", "created": "Fri, 5 Jul 2024 06:44:43 GMT" } ]
2025-03-07T00:00:00
[ [ "Ergasti", "Alex", "" ], [ "Fontanini", "Tomaso", "" ], [ "Ferrari", "Claudio", "" ], [ "Bertozzi", "Massimo", "" ], [ "Prati", "Andrea", "" ] ]
TITLE: MARS: Paying more attention to visual attributes for text-based person search ABSTRACT: Text-based person search (TBPS) is a problem that has gained significant interest within the research community. The task is that of retrieving one or more images of a specific individual based on a textual description. The multi-modal nature of the task requires learning representations that bridge text and image data within a shared latent space. Existing TBPS systems face two major challenges. The first is inter-identity noise, which stems from the inherent vagueness and imprecision of text descriptions and reflects how descriptions of visual attributes can be associated with many different people; the second is intra-identity variation, i.e., all those nuisances, such as pose and illumination, that can alter the visual appearance of the same textual attributes for a given subject. To address these issues, this paper presents a novel TBPS architecture named MARS (Mae-Attribute-Relation-Sensitive), which enhances current state-of-the-art models by introducing two key components: a Visual Reconstruction Loss and an Attribute Loss. The former employs a Masked AutoEncoder trained to reconstruct randomly masked image patches with the aid of the textual description. In doing so, the model is encouraged to learn more expressive representations and textual-visual relations in the latent space. The Attribute Loss, instead, balances the contribution of different types of attributes, defined as adjective-noun chunks of text. This loss ensures that every attribute is taken into consideration in the person retrieval process. Extensive experiments on three commonly used datasets, namely CUHK-PEDES, ICFG-PEDES, and RSTPReid, report performance improvements, with significant gains in the mean Average Precision (mAP) metric w.r.t. the current state of the art.
no_new_dataset
0.949809
2407.06639
Zihao Zhou
Zihao Zhou, Antti Aitio, David Howey
Learning Li-ion battery health and degradation modes from data with aging-aware circuit models
11 pages, 10 figures
null
null
null
eess.SY cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-invasive estimation of Li-ion battery state-of-health from operational data is valuable for battery applications, but remains challenging. Pure model-based methods may suffer from inaccuracy and long-term instability of parameter estimates, whereas pure data-driven methods rely heavily on training data quality and quantity, causing a lack of generality when extrapolating to unseen cases. We apply an aging-aware equivalent circuit model for health estimation, combining the flexibility of data-driven techniques within a model-based approach. A simplified electrical model with a voltage source and resistor incorporates Gaussian process regression to learn capacity fade over time and also the dependence of resistance on operating conditions and time. The approach was validated against two datasets and shown to give accurate performance with less than 1% relative root mean square error (RMSE) in capacity and less than 2% mean absolute percentage error (MAPE). Critically, we show that the open circuit voltage versus state-of-charge function must be accurately known, and any inaccuracies or changes in this over time strongly influence the inferred resistance. However, this feature (or bug) may also be used to estimate in operando differential voltage curves from operational data.
[ { "version": "v1", "created": "Tue, 9 Jul 2024 08:07:39 GMT" }, { "version": "v2", "created": "Wed, 10 Jul 2024 01:45:08 GMT" }, { "version": "v3", "created": "Sun, 14 Jul 2024 03:24:18 GMT" }, { "version": "v4", "created": "Thu, 6 Mar 2025 14:54:01 GMT" } ]
2025-03-07T00:00:00
[ [ "Zhou", "Zihao", "" ], [ "Aitio", "Antti", "" ], [ "Howey", "David", "" ] ]
TITLE: Learning Li-ion battery health and degradation modes from data with aging-aware circuit models ABSTRACT: Non-invasive estimation of Li-ion battery state-of-health from operational data is valuable for battery applications, but remains challenging. Pure model-based methods may suffer from inaccuracy and long-term instability of parameter estimates, whereas pure data-driven methods rely heavily on training data quality and quantity, causing a lack of generality when extrapolating to unseen cases. We apply an aging-aware equivalent circuit model for health estimation, combining the flexibility of data-driven techniques within a model-based approach. A simplified electrical model with a voltage source and resistor incorporates Gaussian process regression to learn capacity fade over time and also the dependence of resistance on operating conditions and time. The approach was validated against two datasets and shown to give accurate performance with less than 1% relative root mean square error (RMSE) in capacity and less than 2% mean absolute percentage error (MAPE). Critically, we show that the open circuit voltage versus state-of-charge function must be accurately known, and any inaccuracies or changes in this over time strongly influence the inferred resistance. However, this feature (or bug) may also be used to estimate in operando differential voltage curves from operational data.
no_new_dataset
0.947527
2407.07918
Adnane Ez-Zizi
Oladipo A. Madamidola, Felix Ngobigha and Adnane Ez-zizi
Detecting new obfuscated malware variants: A lightweight and interpretable machine learning approach
30 pages (excluding Appendix), 5 figures and 5 tables. Now published in Intelligent Systems with Applications (https://doi.org/10.1016/j.iswa.2024.200472)
Intelligent Systems with Applications, 25 (2025)
10.1016/j.iswa.2024.200472
null
cs.CR cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning has been successfully applied in developing malware detection systems, with a primary focus on accuracy, and increasing attention to reducing computational overhead and improving model interpretability. However, an important question remains underexplored: How well can machine learning-based models detect entirely new forms of malware not present in the training data? In this study, we present a machine learning-based system for detecting obfuscated malware that is not only highly accurate, lightweight and interpretable, but also capable of successfully adapting to new types of malware attacks. Our system is capable of detecting 15 malware subtypes despite being exclusively trained on one malware subtype, namely the Transponder from the Spyware family. This system was built after training 15 distinct random forest-based models, each on a different malware subtype from the CIC-MalMem-2022 dataset. These models were evaluated against the entire range of malware subtypes, including all unseen malware subtypes. To maintain the system's streamlined nature, training was confined to the top five most important features, which also enhanced interpretability. The Transponder-focused model exhibited high accuracy, exceeding 99.8%, with an average processing speed of 5.7 microseconds per file. We also illustrate how the Shapley additive explanations technique can facilitate the interpretation of the model predictions. Our research contributes to advancing malware detection methodologies, pioneering the feasibility of detecting obfuscated malware by exclusively training a model on a single or a few carefully selected malware subtypes and applying it to detect unseen subtypes.
[ { "version": "v1", "created": "Sun, 7 Jul 2024 12:41:40 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 12:41:21 GMT" } ]
2025-03-07T00:00:00
[ [ "Madamidola", "Oladipo A.", "" ], [ "Ngobigha", "Felix", "" ], [ "Ez-zizi", "Adnane", "" ] ]
TITLE: Detecting new obfuscated malware variants: A lightweight and interpretable machine learning approach ABSTRACT: Machine learning has been successfully applied in developing malware detection systems, with a primary focus on accuracy, and increasing attention to reducing computational overhead and improving model interpretability. However, an important question remains underexplored: How well can machine learning-based models detect entirely new forms of malware not present in the training data? In this study, we present a machine learning-based system for detecting obfuscated malware that is not only highly accurate, lightweight and interpretable, but also capable of successfully adapting to new types of malware attacks. Our system is capable of detecting 15 malware subtypes despite being exclusively trained on one malware subtype, namely the Transponder from the Spyware family. This system was built after training 15 distinct random forest-based models, each on a different malware subtype from the CIC-MalMem-2022 dataset. These models were evaluated against the entire range of malware subtypes, including all unseen malware subtypes. To maintain the system's streamlined nature, training was confined to the top five most important features, which also enhanced interpretability. The Transponder-focused model exhibited high accuracy, exceeding 99.8%, with an average processing speed of 5.7 microseconds per file. We also illustrate how the Shapley additive explanations technique can facilitate the interpretation of the model predictions. Our research contributes to advancing malware detection methodologies, pioneering the feasibility of detecting obfuscated malware by exclusively training a model on a single or a few carefully selected malware subtypes and applying it to detect unseen subtypes.
no_new_dataset
0.943243
2407.08974
Xue Gong
Joshua Zhi En Tan, JunJie Wee, Xue Gong, Kelin Xia
Topology-enhanced machine learning model (Top-ML) for anticancer peptide prediction
null
null
null
null
q-bio.QM cs.LG math.GN q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Recently, therapeutic peptides have demonstrated great promise for cancer treatment. To discover powerful anticancer peptides, artificial intelligence (AI)-based approaches have been developed to systematically screen potential candidates. However, the lack of efficient featurization of peptides has become a bottleneck for these machine-learning models. In this paper, we propose a topology-enhanced machine learning model (Top-ML) for anticancer peptide prediction. Our Top-ML employs peptide topological features derived from sequence "connection" information, characterized by vector and spectral descriptors. Our Top-ML model, employing an Extra-Trees classifier, has been validated on the AntiCP 2.0 and mACPpred 2.0 benchmark datasets, achieving state-of-the-art performance or results comparable to existing deep learning models, while providing greater interpretability. Our results highlight the potential of leveraging novel topology-based featurization to accelerate the identification of anticancer peptides.
[ { "version": "v1", "created": "Fri, 12 Jul 2024 04:04:54 GMT" }, { "version": "v2", "created": "Wed, 8 Jan 2025 06:42:39 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 03:33:25 GMT" } ]
2025-03-07T00:00:00
[ [ "Tan", "Joshua Zhi En", "" ], [ "Wee", "JunJie", "" ], [ "Gong", "Xue", "" ], [ "Xia", "Kelin", "" ] ]
TITLE: Topology-enhanced machine learning model (Top-ML) for anticancer peptide prediction ABSTRACT: Recently, therapeutic peptides have demonstrated great promise for cancer treatment. To explore powerful anticancer peptides, artificial intelligence (AI)-based approaches have been developed to systematically screen potential candidates. However, the lack of efficient featurization of peptides has become a bottleneck for these machine-learning models. In this paper, we propose a topology-enhanced machine learning model (Top-ML) for anticancer peptide prediction. Our Top-ML employs peptide topological features derived from its sequence "connection" information characterized by vector and spectral descriptors. Our Top-ML model, employing an Extra-Trees classifier, has been validated on the AntiCP 2.0 and mACPpred 2.0 benchmark datasets, achieving state-of-the-art performance or results comparable to existing deep learning models, while providing greater interpretability. Our results highlight the potential of leveraging novel topology-based featurization to accelerate the identification of anticancer peptides.
no_new_dataset
0.949623
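The abstract's "spectral descriptors" of sequence connection information suggest a Laplacian-spectrum featurization. The sketch below is one plausible reading, not the paper's construction: the peptide chain is treated as a path graph with hydropathy-weighted edges, and the leading eigenvalues form the feature vector that a classifier such as Extra-Trees would consume.

```python
# Sketch: spectral descriptor of a peptide read as a residue path graph.
# The hydropathy-based edge weighting and truncation to k eigenvalues
# are assumptions for illustration only.
import numpy as np

HYDRO = {"A": 1.8, "C": 2.5, "D": -3.5, "E": -3.5, "F": 2.8, "G": -0.4,
         "H": -3.2, "I": 4.5, "K": -3.9, "L": 3.8, "M": 1.9, "N": -3.5,
         "P": -1.6, "Q": -3.5, "R": -4.5, "S": -0.8, "T": -0.7, "V": 4.2,
         "W": -0.9, "Y": -1.3}  # Kyte-Doolittle hydropathy scale

def spectral_features(seq: str, k: int = 8) -> np.ndarray:
    n = len(seq)
    w = np.zeros((n, n))
    for i in range(n - 1):  # consecutive residues are "connected"
        w[i, i + 1] = w[i + 1, i] = 1.0 + abs(HYDRO[seq[i]] - HYDRO[seq[i + 1]])
    lap = np.diag(w.sum(axis=1)) - w          # weighted graph Laplacian
    eig = np.sort(np.linalg.eigvalsh(lap))    # ascending spectrum
    return eig[:k] if n >= k else np.pad(eig, (0, k - n))

print(spectral_features("FLPAIGKVLSGIL"))
```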
2407.13304
Federico Magistri
Federico Magistri, Thomas L\"abe, Elias Marks, Sumanth Nagulavancha, Yue Pan, Claus Smitt, Lasse Klingbeil, Michael Halstead, Heiner Kuhlmann, Chris McCool, Jens Behley, Cyrill Stachniss
A Dataset and Benchmark for Shape Completion of Fruits for Agricultural Robotics
null
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the world population is expected to reach 10 billion by 2050, our agricultural production system needs to double its productivity despite a declining human workforce in the agricultural sector. Autonomous robotic systems are one promising pathway to increase productivity by taking over labor-intensive manual tasks like fruit picking. To be effective, such systems need to monitor and interact with plants and fruits precisely, which is challenging due to the cluttered nature of agricultural environments causing, for example, strong occlusions. Thus, being able to estimate the complete 3D shapes of objects in the presence of occlusions is crucial for automating operations such as fruit harvesting. In this paper, we propose the first publicly available 3D shape completion dataset for agricultural vision systems. We provide an RGB-D dataset for estimating the 3D shape of fruits. Specifically, our dataset contains RGB-D frames of single sweet peppers both in lab conditions and in a commercial greenhouse. For each fruit, we additionally collected high-precision point clouds that we use as ground truth. For acquiring the ground truth shape, we developed a measuring process that allows us to record data of real sweet pepper plants, both in the lab and in the greenhouse with high precision, and determine the shape of the sensed fruits. We release our dataset, consisting of almost 7,000 RGB-D frames belonging to more than 100 different fruits. We provide segmented RGB-D frames, with camera intrinsics to easily obtain colored point clouds, together with the corresponding high-precision, occlusion-free point clouds obtained with a high-precision laser scanner. We additionally enable evaluation of shape completion approaches on a hidden test set through a public challenge on a benchmark server.
[ { "version": "v1", "created": "Thu, 18 Jul 2024 09:07:23 GMT" }, { "version": "v2", "created": "Tue, 17 Sep 2024 08:16:57 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 14:06:01 GMT" } ]
2025-03-07T00:00:00
[ [ "Magistri", "Federico", "" ], [ "Läbe", "Thomas", "" ], [ "Marks", "Elias", "" ], [ "Nagulavancha", "Sumanth", "" ], [ "Pan", "Yue", "" ], [ "Smitt", "Claus", "" ], [ "Klingbeil", "Lasse", "" ], [ "Halstead", "Michael", "" ], [ "Kuhlmann", "Heiner", "" ], [ "McCool", "Chris", "" ], [ "Behley", "Jens", "" ], [ "Stachniss", "Cyrill", "" ] ]
TITLE: A Dataset and Benchmark for Shape Completion of Fruits for Agricultural Robotics ABSTRACT: As the world population is expected to reach 10 billion by 2050, our agricultural production system needs to double its productivity despite a declining human workforce in the agricultural sector. Autonomous robotic systems are one promising pathway to increase productivity by taking over labor-intensive manual tasks like fruit picking. To be effective, such systems need to monitor and interact with plants and fruits precisely, which is challenging due to the cluttered nature of agricultural environments causing, for example, strong occlusions. Thus, being able to estimate the complete 3D shapes of objects in the presence of occlusions is crucial for automating operations such as fruit harvesting. In this paper, we propose the first publicly available 3D shape completion dataset for agricultural vision systems. We provide an RGB-D dataset for estimating the 3D shape of fruits. Specifically, our dataset contains RGB-D frames of single sweet peppers both in lab conditions and in a commercial greenhouse. For each fruit, we additionally collected high-precision point clouds that we use as ground truth. For acquiring the ground truth shape, we developed a measuring process that allows us to record data of real sweet pepper plants, both in the lab and in the greenhouse with high precision, and determine the shape of the sensed fruits. We release our dataset, consisting of almost 7,000 RGB-D frames belonging to more than 100 different fruits. We provide segmented RGB-D frames, with camera intrinsics to easily obtain colored point clouds, together with the corresponding high-precision, occlusion-free point clouds obtained with a high-precision laser scanner. We additionally enable evaluation of shape completion approaches on a hidden test set through a public challenge on a benchmark server.
new_dataset
0.96395
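Because the release pairs segmented RGB-D frames with camera intrinsics, recovering a colored point cloud is standard pinhole back-projection. A minimal sketch, with made-up intrinsics and random arrays standing in for a real frame:

```python
# Sketch: back-project a segmented RGB-D frame into a colored point cloud.
# Intrinsics and image data here are placeholders, not the dataset's values.
import numpy as np

def backproject(depth, rgb, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    mask = z > 0  # keep only pixels with valid depth (the segmented fruit)
    pts = np.stack([x, y, z], axis=-1)[mask]
    return pts, rgb[mask]

depth = np.random.rand(480, 640).astype(np.float32)
rgb = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
pts, colors = backproject(depth, rgb, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(pts.shape, colors.shape)
```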
2407.18125
Vito Paolo Pastore
Roberto Di Via, Francesca Odone, Vito Paolo Pastore
Self-supervised pre-training with diffusion model for few-shot landmark detection in x-ray images
Accepted at WACV 2025
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Deep neural networks have been extensively applied in the medical domain for various tasks, including image classification, segmentation, and landmark detection. However, their application is often hindered by data scarcity, both in terms of available annotations and images. This study introduces a novel application of denoising diffusion probabilistic models (DDPMs) to the landmark detection task, specifically addressing the challenge of limited annotated data in x-ray imaging. Our key innovation lies in leveraging DDPMs for self-supervised pre-training in landmark detection, a previously unexplored approach in this domain. This method enables accurate landmark detection with minimal annotated training data (as few as 50 images), surpassing both ImageNet supervised pre-training and traditional self-supervised techniques across three popular x-ray benchmark datasets. To our knowledge, this work represents the first application of diffusion models for self-supervised learning in landmark detection, which may offer a valuable pre-training approach in few-shot regimes, for mitigating data scarcity.
[ { "version": "v1", "created": "Thu, 25 Jul 2024 15:32:59 GMT" }, { "version": "v2", "created": "Tue, 29 Oct 2024 16:10:10 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 17:03:35 GMT" } ]
2025-03-07T00:00:00
[ [ "Di Via", "Roberto", "" ], [ "Odone", "Francesca", "" ], [ "Pastore", "Vito Paolo", "" ] ]
TITLE: Self-supervised pre-training with diffusion model for few-shot landmark detection in x-ray images ABSTRACT: Deep neural networks have been extensively applied in the medical domain for various tasks, including image classification, segmentation, and landmark detection. However, their application is often hindered by data scarcity, both in terms of available annotations and images. This study introduces a novel application of denoising diffusion probabilistic models (DDPMs) to the landmark detection task, specifically addressing the challenge of limited annotated data in x-ray imaging. Our key innovation lies in leveraging DDPMs for self-supervised pre-training in landmark detection, a previously unexplored approach in this domain. This method enables accurate landmark detection with minimal annotated training data (as few as 50 images), surpassing both ImageNet supervised pre-training and traditional self-supervised techniques across three popular x-ray benchmark datasets. To our knowledge, this work represents the first application of diffusion models for self-supervised learning in landmark detection, which may offer a valuable pre-training approach in few-shot regimes, for mitigating data scarcity.
no_new_dataset
0.953362
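The self-supervised stage rests on the standard DDPM objective: corrupt an unlabeled image at a random timestep and train the network to predict the added noise. A stripped-down sketch follows, with a toy convolution standing in for the UNet-style denoiser and typical (assumed) schedule constants:

```python
# Sketch: the DDPM noise-prediction loss used for self-supervised
# pre-training; the pretrained backbone would later be fine-tuned on
# the few labeled landmark images.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # common linear schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

class TinyEps(nn.Module):
    """Toy noise predictor; the paper would use a UNet-style backbone."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, 3, padding=1)
    def forward(self, x, t):                      # t is unused in this stand-in
        return self.conv(x)

def ddpm_loss(model, x0):
    t = torch.randint(0, T, (x0.shape[0],))
    ab = alpha_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * noise  # forward diffusion step
    return F.mse_loss(model(xt, t), noise)         # predict the added noise

print(ddpm_loss(TinyEps(), torch.randn(4, 1, 32, 32)))
```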
2407.18691
Mengjie Zhao
Mengjie Zhao, Cees Taal, Stephan Baggerohr, Olga Fink
Graph Neural Networks for Virtual Sensing in Complex Systems: Addressing Heterogeneous Temporal Dynamics
This paper extends our previous conference paper (Best Paper at European Conference of the PHM Society 2024, https://doi.org/10.36001/phme.2024.v8i1.3998). Accepted by Mechanical Systems and Signal Processing (MSSP)
null
null
null
cs.LG cs.AI cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-time condition monitoring is crucial for the reliable and efficient operation of complex systems. However, relying solely on physical sensors can be limited due to their cost, placement constraints, or inability to directly measure certain critical parameters. Virtual sensing addresses these limitations by leveraging readily available sensor data and system knowledge to estimate inaccessible parameters or infer system states. The increasing complexity of industrial systems necessitates deployments of sensors with diverse modalities to provide a comprehensive understanding of system states. These sensors capture data at varying frequencies to monitor both rapid and slowly varying system dynamics, as well as local and global state evolutions of the systems. This leads to heterogeneous temporal dynamics, which, particularly under varying operational and environmental conditions, pose a significant challenge for accurate virtual sensing. To address this, we propose a Heterogeneous Temporal Graph Neural Network (HTGNN) framework. HTGNN explicitly models signals from diverse sensors and integrates operating conditions into the model architecture. We evaluate HTGNN using two newly released datasets: a bearing dataset with diverse load conditions for bearing load prediction and a year-long simulated dataset for predicting bridge live loads. Our results demonstrate that HTGNN significantly outperforms established baseline methods in both tasks, particularly under highly varying operating conditions. These results highlight HTGNN's potential as a robust and accurate virtual sensing approach for complex systems, paving the way for improved monitoring, predictive maintenance, and enhanced system performance. Our code and data are available under https://github.com/EPFL-IMOS/htgnn.
[ { "version": "v1", "created": "Fri, 26 Jul 2024 12:16:53 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 15:47:01 GMT" } ]
2025-03-07T00:00:00
[ [ "Zhao", "Mengjie", "" ], [ "Taal", "Cees", "" ], [ "Baggerohr", "Stephan", "" ], [ "Fink", "Olga", "" ] ]
TITLE: Graph Neural Networks for Virtual Sensing in Complex Systems: Addressing Heterogeneous Temporal Dynamics ABSTRACT: Real-time condition monitoring is crucial for the reliable and efficient operation of complex systems. However, relying solely on physical sensors can be limited due to their cost, placement constraints, or inability to directly measure certain critical parameters. Virtual sensing addresses these limitations by leveraging readily available sensor data and system knowledge to estimate inaccessible parameters or infer system states. The increasing complexity of industrial systems necessitates deployments of sensors with diverse modalities to provide a comprehensive understanding of system states. These sensors capture data at varying frequencies to monitor both rapid and slowly varying system dynamics, as well as local and global state evolutions of the systems. This leads to heterogeneous temporal dynamics, which, particularly under varying operational and environmental conditions, pose a significant challenge for accurate virtual sensing. To address this, we propose a Heterogeneous Temporal Graph Neural Network (HTGNN) framework. HTGNN explicitly models signals from diverse sensors and integrates operating conditions into the model architecture. We evaluate HTGNN using two newly released datasets: a bearing dataset with diverse load conditions for bearing load prediction and a year-long simulated dataset for predicting bridge live loads. Our results demonstrate that HTGNN significantly outperforms established baseline methods in both tasks, particularly under highly varying operating conditions. These results highlight HTGNN's potential as a robust and accurate virtual sensing approach for complex systems, paving the way for improved monitoring, predictive maintenance, and enhanced system performance. Our code and data are available under https://github.com/EPFL-IMOS/htgnn.
new_dataset
0.964921
2408.06707
JunYong Choi
JunYong Choi, SeokYeong Lee, Haesol Park, Seung-Won Jung, Ig-Jae Kim, Junghyun Cho
MAIR++: Improving Multi-view Attention Inverse Rendering with Implicit Lighting Representation
Published in TPAMI. project page : https://bring728.github.io/mairplusplus.project/. arXiv admin note: text overlap with arXiv:2303.12368
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose a scene-level inverse rendering framework that uses multi-view images to decompose the scene into geometry, SVBRDF, and 3D spatially-varying lighting. While multi-view images have been widely used for object-level inverse rendering, scene-level inverse rendering has primarily been studied using single-view images due to the lack of a dataset containing high dynamic range multi-view images with ground-truth geometry, material, and spatially-varying lighting. To improve the quality of scene-level inverse rendering, a novel framework called Multi-view Attention Inverse Rendering (MAIR) was recently introduced. MAIR performs scene-level multi-view inverse rendering by expanding the OpenRooms dataset, designing efficient pipelines to handle multi-view images, and splitting spatially-varying lighting. Although MAIR showed impressive results, its lighting representation is fixed to spherical Gaussians, which limits its ability to render images realistically. Consequently, MAIR cannot be directly used in applications such as material editing. Moreover, its multi-view aggregation networks have difficulties extracting rich features because they only focus on the mean and variance between multi-view features. In this paper, we propose its extended version, called MAIR++. MAIR++ addresses the aforementioned limitations by introducing an implicit lighting representation that accurately captures the lighting conditions of an image while facilitating realistic rendering. Furthermore, we design a directional attention-based multi-view aggregation network to infer more intricate relationships between views. Experimental results show that MAIR++ not only achieves better performance than MAIR and single-view-based methods, but also displays robust performance on unseen real-world scenes.
[ { "version": "v1", "created": "Tue, 13 Aug 2024 08:04:23 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 02:03:44 GMT" } ]
2025-03-07T00:00:00
[ [ "Choi", "JunYong", "" ], [ "Lee", "SeokYeong", "" ], [ "Park", "Haesol", "" ], [ "Jung", "Seung-Won", "" ], [ "Kim", "Ig-Jae", "" ], [ "Cho", "Junghyun", "" ] ]
TITLE: MAIR++: Improving Multi-view Attention Inverse Rendering with Implicit Lighting Representation ABSTRACT: In this paper, we propose a scene-level inverse rendering framework that uses multi-view images to decompose the scene into geometry, SVBRDF, and 3D spatially-varying lighting. While multi-view images have been widely used for object-level inverse rendering, scene-level inverse rendering has primarily been studied using single-view images due to the lack of a dataset containing high dynamic range multi-view images with ground-truth geometry, material, and spatially-varying lighting. To improve the quality of scene-level inverse rendering, a novel framework called Multi-view Attention Inverse Rendering (MAIR) was recently introduced. MAIR performs scene-level multi-view inverse rendering by expanding the OpenRooms dataset, designing efficient pipelines to handle multi-view images, and splitting spatially-varying lighting. Although MAIR showed impressive results, its lighting representation is fixed to spherical Gaussians, which limits its ability to render images realistically. Consequently, MAIR cannot be directly used in applications such as material editing. Moreover, its multi-view aggregation networks have difficulties extracting rich features because they only focus on the mean and variance between multi-view features. In this paper, we propose its extended version, called MAIR++. MAIR++ addresses the aforementioned limitations by introducing an implicit lighting representation that accurately captures the lighting conditions of an image while facilitating realistic rendering. Furthermore, we design a directional attention-based multi-view aggregation network to infer more intricate relationships between views. Experimental results show that MAIR++ not only achieves better performance than MAIR and single-view-based methods, but also displays robust performance on unseen real-world scenes.
no_new_dataset
0.951639
2408.09110
Jiancheng Pan
Jiancheng Pan, Yanxing Liu, Yuqian Fu, Muyuan Ma, Jiahao Li, Danda Pani Paudel, Luc Van Gool and Xiaomeng Huang
Locate Anything on Earth: Advancing Open-Vocabulary Object Detection for Remote Sensing Community
15 pages, 11 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object detection, particularly open-vocabulary object detection, plays a crucial role in Earth sciences, such as environmental monitoring, natural disaster assessment, and land-use planning. However, existing open-vocabulary detectors, primarily trained on natural-world images, struggle to generalize to remote sensing images due to a significant data domain gap. Thus, this paper aims to advance the development of open-vocabulary object detection in the remote sensing community. To achieve this, we first reformulate the task as Locate Anything on Earth (LAE) with the goal of detecting any novel concepts on Earth. We then developed the LAE-Label Engine, which collects, auto-annotates, and unifies up to 10 remote sensing datasets, creating the LAE-1M - the first large-scale remote sensing object detection dataset with broad category coverage. Using the LAE-1M, we further propose and train the novel LAE-DINO Model, the first open-vocabulary foundation object detector for the LAE task, featuring Dynamic Vocabulary Construction (DVC) and Visual-Guided Text Prompt Learning (VisGT) modules. DVC dynamically constructs vocabulary for each training batch, while VisGT maps visual features to semantic space, enhancing text features. We comprehensively conduct experiments on the established remote sensing benchmarks DIOR and DOTAv2.0, as well as our newly introduced 80-class LAE-80C benchmark. Results demonstrate the advantages of the LAE-1M dataset and the effectiveness of the LAE-DINO method.
[ { "version": "v1", "created": "Sat, 17 Aug 2024 06:24:43 GMT" }, { "version": "v2", "created": "Thu, 13 Feb 2025 18:01:16 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 09:26:00 GMT" } ]
2025-03-07T00:00:00
[ [ "Pan", "Jiancheng", "" ], [ "Liu", "Yanxing", "" ], [ "Fu", "Yuqian", "" ], [ "Ma", "Muyuan", "" ], [ "Li", "Jiahao", "" ], [ "Paudel", "Danda Pani", "" ], [ "Van Gool", "Luc", "" ], [ "Huang", "Xiaomeng", "" ] ]
TITLE: Locate Anything on Earth: Advancing Open-Vocabulary Object Detection for Remote Sensing Community ABSTRACT: Object detection, particularly open-vocabulary object detection, plays a crucial role in Earth sciences, such as environmental monitoring, natural disaster assessment, and land-use planning. However, existing open-vocabulary detectors, primarily trained on natural-world images, struggle to generalize to remote sensing images due to a significant data domain gap. Thus, this paper aims to advance the development of open-vocabulary object detection in the remote sensing community. To achieve this, we first reformulate the task as Locate Anything on Earth (LAE) with the goal of detecting any novel concepts on Earth. We then developed the LAE-Label Engine, which collects, auto-annotates, and unifies up to 10 remote sensing datasets, creating the LAE-1M - the first large-scale remote sensing object detection dataset with broad category coverage. Using the LAE-1M, we further propose and train the novel LAE-DINO Model, the first open-vocabulary foundation object detector for the LAE task, featuring Dynamic Vocabulary Construction (DVC) and Visual-Guided Text Prompt Learning (VisGT) modules. DVC dynamically constructs vocabulary for each training batch, while VisGT maps visual features to semantic space, enhancing text features. We comprehensively conduct experiments on the established remote sensing benchmarks DIOR and DOTAv2.0, as well as our newly introduced 80-class LAE-80C benchmark. Results demonstrate the advantages of the LAE-1M dataset and the effectiveness of the LAE-DINO method.
new_dataset
0.859133
2408.14507
Longyu Feng
Longyu Feng and Huahang Li and Chen Jason Zhang
Prompt-Matcher: Leveraging Large Models to Reduce Uncertainty in Schema Matching Results
null
null
null
null
cs.DB cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Schema matching is the process of identifying correspondences between the elements of two given schemata, essential for database management systems, data integration, and data warehousing. The optimal schema matching algorithm differs across datasets and scenarios, and even for a single algorithm, hyperparameter tuning causes multiple results. All results, assigned equal probabilities, are stored in probabilistic databases to facilitate uncertainty management. The substantial degree of uncertainty diminishes the efficiency and reliability of data processing, thereby precluding the provision of more accurate information for decision-makers. To address this problem, we introduce a new approach based on fine-grained correspondence verification with specific prompts for a Large Language Model. Our approach is an iterative loop that consists of three main components: (1) the correspondence selection algorithm, (2) correspondence verification, and (3) the update of probability distribution. The core idea is that correspondences intersect across multiple results, thereby linking the verification of correspondences to the reduction of uncertainty in candidate results. The task of selecting an optimal correspondence set to maximize the anticipated uncertainty reduction within a fixed budgetary framework is established as an NP-hard problem. We propose a novel $(1-1/e)$-approximation algorithm that significantly outperforms the brute-force algorithm in terms of computational efficiency. To enhance correspondence verification, we have developed two prompt templates that enable GPT-4 to achieve state-of-the-art performance across two established benchmark datasets. Our comprehensive experimental evaluation demonstrates the superior effectiveness and robustness of the proposed approach.
[ { "version": "v1", "created": "Sat, 24 Aug 2024 16:54:08 GMT" }, { "version": "v2", "created": "Wed, 5 Mar 2025 11:15:58 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 10:26:32 GMT" } ]
2025-03-07T00:00:00
[ [ "Feng", "Longyu", "" ], [ "Li", "Huahang", "" ], [ "Zhang", "Chen Jason", "" ] ]
TITLE: Prompt-Matcher: Leveraging Large Models to Reduce Uncertainty in Schema Matching Results ABSTRACT: Schema matching is the process of identifying correspondences between the elements of two given schemata, essential for database management systems, data integration, and data warehousing. The optimal schema matching algorithm differs across datasets and scenarios, and even for a single algorithm, hyperparameter tuning causes multiple results. All results, assigned equal probabilities, are stored in probabilistic databases to facilitate uncertainty management. The substantial degree of uncertainty diminishes the efficiency and reliability of data processing, thereby precluding the provision of more accurate information for decision-makers. To address this problem, we introduce a new approach based on fine-grained correspondence verification with specific prompts for a Large Language Model. Our approach is an iterative loop that consists of three main components: (1) the correspondence selection algorithm, (2) correspondence verification, and (3) the update of probability distribution. The core idea is that correspondences intersect across multiple results, thereby linking the verification of correspondences to the reduction of uncertainty in candidate results. The task of selecting an optimal correspondence set to maximize the anticipated uncertainty reduction within a fixed budgetary framework is established as an NP-hard problem. We propose a novel $(1-1/e)$-approximation algorithm that significantly outperforms the brute-force algorithm in terms of computational efficiency. To enhance correspondence verification, we have developed two prompt templates that enable GPT-4 to achieve state-of-the-art performance across two established benchmark datasets. Our comprehensive experimental evaluation demonstrates the superior effectiveness and robustness of the proposed approach.
no_new_dataset
0.943815
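The $(1-1/e)$ guarantee is the signature of greedy maximization of a monotone submodular objective under a budget. The sketch below shows that greedy loop with a weighted-coverage stand-in for the paper's expected uncertainty reduction; the correspondences and result probabilities are invented:

```python
# Sketch: greedy budgeted selection of correspondences to verify.
# Coverage functions like this one are monotone submodular, which is
# what makes the (1 - 1/e) greedy guarantee hold.
def greedy_select(candidates, weights, budget):
    """candidates: {correspondence: set of matching results it appears in}
    weights: {result: probability mass of that result}
    budget: number of correspondences we can afford to verify."""
    selected, covered = [], set()
    for _ in range(budget):
        best, best_gain = None, 0.0
        for c, results in candidates.items():
            if c in selected:
                continue
            gain = sum(weights[r] for r in results - covered)  # marginal gain
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:
            break
        selected.append(best)
        covered |= candidates[best]
    return selected

cands = {"a.id<->b.key": {1, 2}, "a.nm<->b.name": {2, 3}, "a.dt<->b.date": {3}}
probs = {1: 0.5, 2: 0.3, 3: 0.2}
print(greedy_select(cands, probs, budget=2))
```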
2409.01329
Lucas Lange
Lucas Lange and Maurice-Maximilian Heykeroth and Erhard Rahm
Assessing the Impact of Image Dataset Features on Privacy-Preserving Machine Learning
Accepted at 21st Conference on Database Systems for Business, Technology and Web (BTW 2025)
21st Conference on Database Systems for Business, Technology and Web (BTW 2025)
10.18420/BTW2025-27
null
cs.LG cs.CR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine Learning (ML) is crucial in many sectors, including computer vision. However, ML models trained on sensitive data face security challenges, as they can be attacked and leak information. Privacy-Preserving Machine Learning (PPML) addresses this by using Differential Privacy (DP) to balance utility and privacy. This study identifies image dataset characteristics that affect the utility and vulnerability of private and non-private Convolutional Neural Network (CNN) models. Through analyzing multiple datasets and privacy budgets, we find that imbalanced datasets increase vulnerability in minority classes, but DP mitigates this issue. Datasets with fewer classes improve both model utility and privacy, while high entropy or low Fisher Discriminant Ratio (FDR) datasets deteriorate the utility-privacy trade-off. These insights offer valuable guidance for practitioners and researchers in estimating and optimizing the utility-privacy trade-off in image datasets, helping to inform data and privacy modifications for better outcomes based on dataset characteristics.
[ { "version": "v1", "created": "Mon, 2 Sep 2024 15:30:27 GMT" }, { "version": "v2", "created": "Wed, 11 Dec 2024 14:15:21 GMT" } ]
2025-03-07T00:00:00
[ [ "Lange", "Lucas", "" ], [ "Heykeroth", "Maurice-Maximilian", "" ], [ "Rahm", "Erhard", "" ] ]
TITLE: Assessing the Impact of Image Dataset Features on Privacy-Preserving Machine Learning ABSTRACT: Machine Learning (ML) is crucial in many sectors, including computer vision. However, ML models trained on sensitive data face security challenges, as they can be attacked and leak information. Privacy-Preserving Machine Learning (PPML) addresses this by using Differential Privacy (DP) to balance utility and privacy. This study identifies image dataset characteristics that affect the utility and vulnerability of private and non-private Convolutional Neural Network (CNN) models. Through analyzing multiple datasets and privacy budgets, we find that imbalanced datasets increase vulnerability in minority classes, but DP mitigates this issue. Datasets with fewer classes improve both model utility and privacy, while high entropy or low Fisher Discriminant Ratio (FDR) datasets deteriorate the utility-privacy trade-off. These insights offer valuable guidance for practitioners and researchers in estimating and optimizing the utility-privacy trade-off in image datasets, helping to inform data and privacy modifications for better outcomes based on dataset characteristics.
no_new_dataset
0.951863
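Both dataset statistics cited above are straightforward to compute. This sketch uses Shannon entropy over labels and a simplified per-feature, two-class Fisher Discriminant Ratio; the study's exact definitions may differ:

```python
# Sketch: label entropy and a per-feature two-class FDR on synthetic data.
import numpy as np

def label_entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def fisher_discriminant_ratio(x, labels):
    """Mean per-feature FDR between two classes: (mu1 - mu2)^2 / (var1 + var2)."""
    a, b = (x[labels == c] for c in np.unique(labels)[:2])
    num = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    den = a.var(axis=0) + b.var(axis=0) + 1e-12
    return float((num / den).mean())

x = np.random.default_rng(0).normal(size=(200, 16))
y = np.array([0] * 120 + [1] * 80)   # an imbalanced labeling, as in the study
x[y == 1] += 0.5                     # separate the classes a little
print(label_entropy(y), fisher_discriminant_ratio(x, y))
```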
2409.02421
Jiatao Chen
Jiatao Chen, Tianming Xie, Xing Tang, Jing Wang, Wenjing Dong, Bing Shi
MusicMamba: A Dual-Feature Modeling Approach for Generating Chinese Traditional Music with Modal Precision
Accepted by ICASSP 2025
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, deep learning has significantly advanced the MIDI domain, solidifying music generation as a key application of artificial intelligence. However, existing research primarily focuses on Western music and encounters challenges in generating melodies for Chinese traditional music, especially in capturing modal characteristics and emotional expression. To address these issues, we propose a new architecture, the Dual-Feature Modeling Module, which integrates the long-range dependency modeling of the Mamba Block with the global structure capturing capabilities of the Transformer Block. Additionally, we introduce the Bidirectional Mamba Fusion Layer, which integrates local details and global structures through bidirectional scanning, enhancing the modeling of complex sequences. Building on this architecture, we propose the REMI-M representation, which more accurately captures and generates modal information in melodies. To support this research, we developed FolkDB, a high-quality Chinese traditional music dataset encompassing various styles and totaling over 11 hours of music. Experimental results demonstrate that the proposed architecture excels in generating melodies with Chinese traditional music characteristics, offering a new and effective solution for music generation.
[ { "version": "v1", "created": "Wed, 4 Sep 2024 04:00:22 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 02:11:09 GMT" } ]
2025-03-07T00:00:00
[ [ "Chen", "Jiatao", "" ], [ "Xie", "Tianming", "" ], [ "Tang", "Xing", "" ], [ "Wang", "Jing", "" ], [ "Dong", "Wenjing", "" ], [ "Shi", "Bing", "" ] ]
TITLE: MusicMamba: A Dual-Feature Modeling Approach for Generating Chinese Traditional Music with Modal Precision ABSTRACT: In recent years, deep learning has significantly advanced the MIDI domain, solidifying music generation as a key application of artificial intelligence. However, existing research primarily focuses on Western music and encounters challenges in generating melodies for Chinese traditional music, especially in capturing modal characteristics and emotional expression. To address these issues, we propose a new architecture, the Dual-Feature Modeling Module, which integrates the long-range dependency modeling of the Mamba Block with the global structure capturing capabilities of the Transformer Block. Additionally, we introduce the Bidirectional Mamba Fusion Layer, which integrates local details and global structures through bidirectional scanning, enhancing the modeling of complex sequences. Building on this architecture, we propose the REMI-M representation, which more accurately captures and generates modal information in melodies. To support this research, we developed FolkDB, a high-quality Chinese traditional music dataset encompassing various styles and totaling over 11 hours of music. Experimental results demonstrate that the proposed architecture excels in generating melodies with Chinese traditional music characteristics, offering a new and effective solution for music generation.
new_dataset
0.957078
2409.07055
Shuyuan Zheng
Junkai Liu, Yujie Tong, Hui Huang, Bowen Zheng, Yiran Hu, Peicheng Wu, Chuan Xiao, Makoto Onizuka, Muyun Yang, Shuyuan Zheng
Legal Fact Prediction: The Missing Piece in Legal Judgment Prediction
null
null
null
null
cs.CL cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Legal judgment prediction (LJP), which enables litigants and their lawyers to forecast judgment outcomes and refine litigation strategies, has emerged as a crucial legal NLP task. Existing studies typically utilize legal facts, i.e., facts that have been established by evidence and determined by the judge, to predict the judgment. However, legal facts are often difficult to obtain in the early stages of litigation, significantly limiting the practical applicability of fact-based LJP. To address this limitation, we propose a novel legal NLP task: \textit{legal fact prediction} (LFP), which takes the evidence submitted by litigants for trial as input to predict legal facts, thereby empowering fact-based LJP technologies to perform prediction in the absence of ground-truth legal facts. We also propose the first benchmark dataset, LFPBench, for evaluating the LFP task. Our extensive experiments on LFPBench demonstrate the effectiveness of LFP-empowered LJP and highlight promising research directions for LFP. Our code and data are available at https://github.com/HPRCEST/LFPBench.
[ { "version": "v1", "created": "Wed, 11 Sep 2024 07:01:08 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 05:48:54 GMT" } ]
2025-03-07T00:00:00
[ [ "Liu", "Junkai", "" ], [ "Tong", "Yujie", "" ], [ "Huang", "Hui", "" ], [ "Zheng", "Bowen", "" ], [ "Hu", "Yiran", "" ], [ "Wu", "Peicheng", "" ], [ "Xiao", "Chuan", "" ], [ "Onizuka", "Makoto", "" ], [ "Yang", "Muyun", "" ], [ "Zheng", "Shuyuan", "" ] ]
TITLE: Legal Fact Prediction: The Missing Piece in Legal Judgment Prediction ABSTRACT: Legal judgment prediction (LJP), which enables litigants and their lawyers to forecast judgment outcomes and refine litigation strategies, has emerged as a crucial legal NLP task. Existing studies typically utilize legal facts, i.e., facts that have been established by evidence and determined by the judge, to predict the judgment. However, legal facts are often difficult to obtain in the early stages of litigation, significantly limiting the practical applicability of fact-based LJP. To address this limitation, we propose a novel legal NLP task: \textit{legal fact prediction} (LFP), which takes the evidence submitted by litigants for trial as input to predict legal facts, thereby empowering fact-based LJP technologies to perform prediction in the absence of ground-truth legal facts. We also propose the first benchmark dataset, LFPBench, for evaluating the LFP task. Our extensive experiments on LFPBench demonstrate the effectiveness of LFP-empowered LJP and highlight promising research directions for LFP. Our code and data are available at https://github.com/HPRCEST/LFPBench.
new_dataset
0.96682
2409.07723
Bojian Li
Bojian Li, Bo Liu, Xinning Yao, Jinghua Yue, Fugen Zhou
Advancing Depth Anything Model for Unsupervised Monocular Depth Estimation in Endoscopy
8 pages, 7 figures
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Depth estimation is a cornerstone of 3D reconstruction and plays a vital role in minimally invasive endoscopic surgeries. However, most current depth estimation networks rely on traditional convolutional neural networks, which are limited in their ability to capture global information. Foundation models offer a promising approach to enhance depth estimation, but those models currently available are primarily trained on natural images, leading to suboptimal performance when applied to endoscopic images. In this work, we introduce a novel fine-tuning strategy for the Depth Anything Model and integrate it with an intrinsic-based unsupervised monocular depth estimation framework. Our approach includes a low-rank adaptation technique based on random vectors, which improves the model's adaptability to different scales. Additionally, we propose a residual block built on depthwise separable convolution to compensate for the transformer's limited ability to capture local features. Our experimental results on the SCARED dataset and Hamlyn dataset show that our method achieves state-of-the-art performance while minimizing the number of trainable parameters. Applying this method in minimally invasive endoscopic surgery can enhance surgeons' spatial awareness, thereby improving the precision and safety of the procedures.
[ { "version": "v1", "created": "Thu, 12 Sep 2024 03:04:43 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 01:40:10 GMT" } ]
2025-03-07T00:00:00
[ [ "Li", "Bojian", "" ], [ "Liu", "Bo", "" ], [ "Yao", "Xinning", "" ], [ "Yue", "Jinghua", "" ], [ "Zhou", "Fugen", "" ] ]
TITLE: Advancing Depth Anything Model for Unsupervised Monocular Depth Estimation in Endoscopy ABSTRACT: Depth estimation is a cornerstone of 3D reconstruction and plays a vital role in minimally invasive endoscopic surgeries. However, most current depth estimation networks rely on traditional convolutional neural networks, which are limited in their ability to capture global information. Foundation models offer a promising approach to enhance depth estimation, but those models currently available are primarily trained on natural images, leading to suboptimal performance when applied to endoscopic images. In this work, we introduce a novel fine-tuning strategy for the Depth Anything Model and integrate it with an intrinsic-based unsupervised monocular depth estimation framework. Our approach includes a low-rank adaptation technique based on random vectors, which improves the model's adaptability to different scales. Additionally, we propose a residual block built on depthwise separable convolution to compensate for the transformer's limited ability to capture local features. Our experimental results on the SCARED dataset and Hamlyn dataset show that our method achieves state-of-the-art performance while minimizing the number of trainable parameters. Applying this method in minimally invasive endoscopic surgery can enhance surgeons' spatial awareness, thereby improving the precision and safety of the procedures.
no_new_dataset
0.94625
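"A low-rank adaptation technique based on random vectors" reads like a VeRA-style scheme, where random low-rank projections stay frozen and only small scaling vectors train. The sketch below is that reading under stated assumptions, not the paper's exact module:

```python
# Sketch: frozen random low-rank projections with trainable scaling vectors.
import torch
import torch.nn as nn

class RandomVectorLoRA(nn.Module):
    """Only the two scaling vectors train; everything else stays frozen."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base.requires_grad_(False)        # frozen pretrained layer
        out_f, in_f = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, in_f) / in_f**0.5,
                              requires_grad=False)    # frozen random projection
        self.B = nn.Parameter(torch.randn(out_f, rank) / rank**0.5,
                              requires_grad=False)    # frozen random projection
        self.d = nn.Parameter(torch.ones(rank) * 0.1)  # trainable scaling
        self.b = nn.Parameter(torch.zeros(out_f))      # init 0: no-op at start

    def forward(self, x):
        h = (x @ self.A.t()) * self.d                  # rank-sized bottleneck
        return self.base(x) + (h @ self.B.t()) * self.b

layer = RandomVectorLoRA(nn.Linear(64, 64))
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```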
2409.08388
Hossein Resani
Hossein Resani, Behrooz Nasihatkon, Mohammadreza Alimoradi Jazi
Continual Learning in 3D Point Clouds: Employing Spectral Techniques for Exemplar Selection
Accepted to WACV 2025, Tucson, Arizona, USA
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We introduce a novel framework for Continual Learning in 3D object classification. Our approach, CL3D, is based on the selection of prototypes from each class using spectral clustering. For non-Euclidean data such as point clouds, spectral clustering can be employed as long as one can define a distance measure between pairs of samples. Choosing the appropriate distance measure enables us to leverage 3D geometric characteristics to identify representative prototypes for each class. We explore the effectiveness of clustering in the input space (3D points), local feature space (1024-dimensional points), and global feature space. We conduct experiments on the ModelNet40, ShapeNet, and ScanNet datasets, achieving state-of-the-art accuracy exclusively through the use of input space features. By leveraging the combined input, local, and global features, we have improved the state-of-the-art on ModelNet and ShapeNet, utilizing nearly half the memory used by competing approaches. For the challenging ScanNet dataset, our method enhances accuracy by 4.1% while consuming just 28% of the memory used by our competitors, demonstrating the scalability of our approach.
[ { "version": "v1", "created": "Thu, 12 Sep 2024 20:34:34 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 13:19:58 GMT" } ]
2025-03-07T00:00:00
[ [ "Resani", "Hossein", "" ], [ "Nasihatkon", "Behrooz", "" ], [ "Jazi", "Mohammadreza Alimoradi", "" ] ]
TITLE: Continual Learning in 3D Point Clouds: Employing Spectral Techniques for Exemplar Selection ABSTRACT: We introduce a novel framework for Continual Learning in 3D object classification. Our approach, CL3D, is based on the selection of prototypes from each class using spectral clustering. For non-Euclidean data such as point clouds, spectral clustering can be employed as long as one can define a distance measure between pairs of samples. Choosing the appropriate distance measure enables us to leverage 3D geometric characteristics to identify representative prototypes for each class. We explore the effectiveness of clustering in the input space (3D points), local feature space (1024-dimensional points), and global feature space. We conduct experiments on the ModelNet40, ShapeNet, and ScanNet datasets, achieving state-of-the-art accuracy exclusively through the use of input space features. By leveraging the combined input, local, and global features, we have improved the state-of-the-art on ModelNet and ShapeNet, utilizing nearly half the memory used by competing approaches. For the challenging ScanNet dataset, our method enhances accuracy by 4.1% while consuming just 28% of the memory used by our competitors, demonstrating the scalability of our approach.
no_new_dataset
0.949576
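A minimal sketch of the prototype-selection step, assuming Chamfer distance between clouds (one of several plausible measures the framework admits) and taking each spectral cluster's medoid as the stored exemplar:

```python
# Sketch: spectral clustering on a precomputed point-cloud distance matrix,
# keeping one medoid per cluster as the class prototype.
import numpy as np
from sklearn.cluster import SpectralClustering

def chamfer(a, b):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def select_prototypes(clouds, n_prototypes):
    n = len(clouds)
    dist = np.array([[chamfer(clouds[i], clouds[j]) for j in range(n)]
                     for i in range(n)])
    affinity = np.exp(-dist / (dist.mean() + 1e-12))   # distances -> similarities
    labels = SpectralClustering(n_clusters=n_prototypes, affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    protos = []
    for c in range(n_prototypes):
        idx = np.flatnonzero(labels == c)
        medoid = idx[np.argmin(dist[np.ix_(idx, idx)].sum(axis=1))]
        protos.append(medoid)
    return protos

clouds = [np.random.default_rng(i).normal(size=(64, 3)) for i in range(12)]
print(select_prototypes(clouds, n_prototypes=3))
```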
2409.08824
Kaijie Yin
Kaijie Yin and Tian Gao and Hui Kong
Pathfinder for Low-altitude Aircraft with Binary Neural Network
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A prior global topological map (e.g., the OpenStreetMap, OSM) can boost the performance of autonomous mapping by a ground mobile robot. However, the prior map is usually incomplete because some paths lack labels. To solve this problem, this paper proposes an OSM maker using airborne sensors carried by low-altitude aircraft, where the core of the OSM maker is a novel efficient pathfinder approach based on LiDAR and camera data, i.e., a binary dual-stream road segmentation model. Specifically, a multi-scale feature extraction based on the UNet architecture is implemented for images and point clouds. To reduce the effect caused by the sparsity of point clouds, an attention-guided gated block is designed to integrate image and point-cloud features. To optimize the model for edge deployment that significantly reduces storage footprint and computational demands, we propose a binarization pipeline for each model component, including a variant of vision transformer (ViT) architecture as the encoder of the image branch, and new focal and perception losses to optimize the model training. The experimental results on two datasets demonstrate that our pathfinder method achieves SOTA accuracy with high efficiency in finding paths from the low-altitude airborne sensors, and we can create complete OSM prior maps based on the segmented road skeletons. Code and data are available at: \href{https://github.com/IMRL/Pathfinder}{https://github.com/IMRL/Pathfinder}.
[ { "version": "v1", "created": "Fri, 13 Sep 2024 13:37:33 GMT" }, { "version": "v2", "created": "Mon, 23 Sep 2024 01:29:58 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 12:26:08 GMT" } ]
2025-03-07T00:00:00
[ [ "Yin", "Kaijie", "" ], [ "Gao", "Tian", "" ], [ "Kong", "Hui", "" ] ]
TITLE: Pathfinder for Low-altitude Aircraft with Binary Neural Network ABSTRACT: A prior global topological map (e.g., the OpenStreetMap, OSM) can boost the performance of autonomous mapping by a ground mobile robot. However, the prior map is usually incomplete because some paths lack labels. To solve this problem, this paper proposes an OSM maker using airborne sensors carried by low-altitude aircraft, where the core of the OSM maker is a novel efficient pathfinder approach based on LiDAR and camera data, i.e., a binary dual-stream road segmentation model. Specifically, a multi-scale feature extraction based on the UNet architecture is implemented for images and point clouds. To reduce the effect caused by the sparsity of point clouds, an attention-guided gated block is designed to integrate image and point-cloud features. To optimize the model for edge deployment that significantly reduces storage footprint and computational demands, we propose a binarization pipeline for each model component, including a variant of vision transformer (ViT) architecture as the encoder of the image branch, and new focal and perception losses to optimize the model training. The experimental results on two datasets demonstrate that our pathfinder method achieves SOTA accuracy with high efficiency in finding paths from the low-altitude airborne sensors, and we can create complete OSM prior maps based on the segmented road skeletons. Code and data are available at: \href{https://github.com/IMRL/Pathfinder}{https://github.com/IMRL/Pathfinder}.
no_new_dataset
0.950686
2409.10329
{\L}ukasz Struski
{\L}ukasz Struski, Dawid Rymarczyk, Jacek Tabor
InfoDisent: Explainability of Image Classification Models by Information Disentanglement
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
In this work, we introduce InfoDisent, a hybrid approach to explainability based on the information bottleneck principle. InfoDisent enables the disentanglement of information in the final layer of any pretrained model into atomic concepts, which can be interpreted as prototypical parts. This approach merges the flexibility of post-hoc methods with the concept-level modeling capabilities of self-explainable neural networks, such as ProtoPNets. We demonstrate the effectiveness of InfoDisent through computational experiments and user studies across various datasets using modern backbones such as ViTs and convolutional networks. Notably, InfoDisent generalizes the prototypical parts approach to novel domains (ImageNet).
[ { "version": "v1", "created": "Mon, 16 Sep 2024 14:39:15 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 12:16:09 GMT" } ]
2025-03-07T00:00:00
[ [ "Struski", "Łukasz", "" ], [ "Rymarczyk", "Dawid", "" ], [ "Tabor", "Jacek", "" ] ]
TITLE: InfoDisent: Explainability of Image Classification Models by Information Disentanglement ABSTRACT: In this work, we introduce InfoDisent, a hybrid approach to explainability based on the information bottleneck principle. InfoDisent enables the disentanglement of information in the final layer of any pretrained model into atomic concepts, which can be interpreted as prototypical parts. This approach merges the flexibility of post-hoc methods with the concept-level modeling capabilities of self-explainable neural networks, such as ProtoPNets. We demonstrate the effectiveness of InfoDisent through computational experiments and user studies across various datasets using modern backbones such as ViTs and convolutional networks. Notably, InfoDisent generalizes the prototypical parts approach to novel domains (ImageNet).
no_new_dataset
0.944791
2409.11699
Hubert Pham
Liam Hebert, Marialena Kyriakidi, Hubert Pham, Krishna Sayana, James Pine, Sukhdeep Sodhi, Ambarish Jash
FLARE: Fusing Language Models and Collaborative Architectures for Recommender Enhancement
null
null
null
null
cs.IR cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent proposals in recommender systems represent items with their textual description, using a large language model. They show better results on standard benchmarks compared to an item ID-only model, such as Bert4Rec. In this work, we revisit the often-used Bert4Rec baseline and show that with further tuning, Bert4Rec significantly outperforms previously reported numbers, and in some datasets, is competitive with state-of-the-art models. With revised baselines for item ID-only models, this paper also establishes new competitive results for architectures that combine IDs and textual descriptions. We demonstrate this with Flare (Fusing Language models and collaborative Architectures for Recommender Enhancement). Flare is a novel hybrid sequence recommender that integrates a language model with a collaborative filtering model using a Perceiver network. Prior studies focus their evaluation on datasets with limited corpus size, but many commercially-applicable recommender systems common on the web must handle larger corpora. We evaluate Flare on a more realistic dataset with a significantly larger item vocabulary, introducing new baselines for this setting. This paper also showcases Flare's inherent ability to support critiquing, enabling users to provide feedback and refine recommendations. We leverage critiquing as an evaluation method to assess the model's language understanding and its transferability to the recommendation task.
[ { "version": "v1", "created": "Wed, 18 Sep 2024 04:43:41 GMT" }, { "version": "v2", "created": "Wed, 5 Mar 2025 23:46:26 GMT" } ]
2025-03-07T00:00:00
[ [ "Hebert", "Liam", "" ], [ "Kyriakidi", "Marialena", "" ], [ "Pham", "Hubert", "" ], [ "Sayana", "Krishna", "" ], [ "Pine", "James", "" ], [ "Sodhi", "Sukhdeep", "" ], [ "Jash", "Ambarish", "" ] ]
TITLE: FLARE: Fusing Language Models and Collaborative Architectures for Recommender Enhancement ABSTRACT: Recent proposals in recommender systems represent items with their textual description, using a large language model. They show better results on standard benchmarks compared to an item ID-only model, such as Bert4Rec. In this work, we revisit the often-used Bert4Rec baseline and show that with further tuning, Bert4Rec significantly outperforms previously reported numbers, and in some datasets, is competitive with state-of-the-art models. With revised baselines for item ID-only models, this paper also establishes new competitive results for architectures that combine IDs and textual descriptions. We demonstrate this with Flare (Fusing Language models and collaborative Architectures for Recommender Enhancement). Flare is a novel hybrid sequence recommender that integrates a language model with a collaborative filtering model using a Perceiver network. Prior studies focus their evaluation on datasets with limited corpus size, but many commercially-applicable recommender systems common on the web must handle larger corpora. We evaluate Flare on a more realistic dataset with a significantly larger item vocabulary, introducing new baselines for this setting. This paper also showcases Flare's inherent ability to support critiquing, enabling users to provide feedback and refine recommendations. We leverage critiquing as an evaluation method to assess the model's language understanding and its transferability to the recommendation task.
no_new_dataset
0.949342
2409.11919
Amaia Cardiel
Amaia Cardiel, Eloi Zablocki, Elias Ramzi, Oriane Sim\'eoni, Matthieu Cord
LLM-wrapper: Black-Box Semantic-Aware Adaptation of Vision-Language Models for Referring Expression Comprehension
LLM-wrapper (v3) is published as a conference paper at ICLR 2025. (v1 was presented at EVAL-FoMo workshop, ECCV 2024.)
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Vision Language Models (VLMs) have demonstrated remarkable capabilities in various open-vocabulary tasks, yet their zero-shot performance lags behind task-specific fine-tuned models, particularly in complex tasks like Referring Expression Comprehension (REC). Fine-tuning usually requires 'white-box' access to the model's architecture and weights, which is not always feasible due to proprietary or privacy concerns. In this work, we propose LLM-wrapper, a method for 'black-box' adaptation of VLMs for the REC task using Large Language Models (LLMs). LLM-wrapper capitalizes on the reasoning abilities of LLMs, improved with a light fine-tuning, to select the most relevant bounding box matching the referring expression, from candidates generated by a zero-shot black-box VLM. Our approach offers several advantages: it enables the adaptation of closed-source models without needing access to their internal workings, it is versatile as it works with any VLM, it transfers to new VLMs and datasets, and it allows for the adaptation of an ensemble of VLMs. We evaluate LLM-wrapper on multiple datasets using different VLMs and LLMs, demonstrating significant performance improvements and highlighting the versatility of our method. While LLM-wrapper is not meant to directly compete with standard white-box fine-tuning, it offers a practical and effective alternative for black-box VLM adaptation. Code and checkpoints are available at https://github.com/valeoai/LLM_wrapper .
[ { "version": "v1", "created": "Wed, 18 Sep 2024 12:32:25 GMT" }, { "version": "v2", "created": "Tue, 15 Oct 2024 14:52:55 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 17:12:48 GMT" } ]
2025-03-07T00:00:00
[ [ "Cardiel", "Amaia", "" ], [ "Zablocki", "Eloi", "" ], [ "Ramzi", "Elias", "" ], [ "Siméoni", "Oriane", "" ], [ "Cord", "Matthieu", "" ] ]
TITLE: LLM-wrapper: Black-Box Semantic-Aware Adaptation of Vision-Language Models for Referring Expression Comprehension ABSTRACT: Vision Language Models (VLMs) have demonstrated remarkable capabilities in various open-vocabulary tasks, yet their zero-shot performance lags behind task-specific fine-tuned models, particularly in complex tasks like Referring Expression Comprehension (REC). Fine-tuning usually requires 'white-box' access to the model's architecture and weights, which is not always feasible due to proprietary or privacy concerns. In this work, we propose LLM-wrapper, a method for 'black-box' adaptation of VLMs for the REC task using Large Language Models (LLMs). LLM-wrapper capitalizes on the reasoning abilities of LLMs, improved with a light fine-tuning, to select the most relevant bounding box matching the referring expression, from candidates generated by a zero-shot black-box VLM. Our approach offers several advantages: it enables the adaptation of closed-source models without needing access to their internal workings, it is versatile as it works with any VLM, it transfers to new VLMs and datasets, and it allows for the adaptation of an ensemble of VLMs. We evaluate LLM-wrapper on multiple datasets using different VLMs and LLMs, demonstrating significant performance improvements and highlighting the versatility of our method. While LLM-wrapper is not meant to directly compete with standard white-box fine-tuning, it offers a practical and effective alternative for black-box VLM adaptation. Code and checkpoints are available at https://github.com/valeoai/LLM_wrapper .
no_new_dataset
0.951006
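One way to picture the black-box loop: candidate boxes from the zero-shot VLM are serialized into text, and the fine-tuned LLM answers with the index that best matches the referring expression. The template below is invented for illustration, not the paper's prompt:

```python
# Sketch: building a box-selection prompt from VLM candidates.
def build_selection_prompt(expression, boxes):
    lines = [f"Referring expression: {expression!r}", "Candidate boxes:"]
    for i, (x1, y1, x2, y2, label, score) in enumerate(boxes):
        lines.append(f"  [{i}] {label} (score {score:.2f}) at ({x1},{y1})-({x2},{y2})")
    lines.append("Answer with the index of the single best-matching box.")
    return "\n".join(lines)

print(build_selection_prompt("the dog left of the bench",
                             [(10, 40, 90, 120, "dog", 0.91),
                              (200, 60, 300, 150, "bench", 0.88)]))
```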
2410.00299
Zhangshuo Qi
Zhangshuo Qi, Junyi Ma, Jingyi Xu, Zijie Zhou, Luqi Cheng, and Guangming Xiong
GSPR: Multimodal Place Recognition Using 3D Gaussian Splatting for Autonomous Driving
8 pages, 6 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Place recognition is a crucial component that enables autonomous vehicles to obtain localization results in GPS-denied environments. In recent years, multimodal place recognition methods have gained increasing attention. They overcome the weaknesses of unimodal sensor systems by leveraging complementary information from different modalities. However, most existing methods explore cross-modality correlations through feature-level or descriptor-level fusion, suffering from a lack of interpretability. Conversely, the recently proposed 3D Gaussian Splatting provides a new perspective on multimodal fusion by harmonizing different modalities into an explicit scene representation. In this paper, we propose a 3D Gaussian Splatting-based multimodal place recognition network dubbed GSPR. It explicitly combines multi-view RGB images and LiDAR point clouds into a spatio-temporally unified scene representation with the proposed Multimodal Gaussian Splatting. A network composed of 3D graph convolution and transformer is designed to extract spatio-temporal features and global descriptors from the Gaussian scenes for place recognition. Extensive evaluations on three datasets demonstrate that our method can effectively leverage complementary strengths of both multi-view cameras and LiDAR, achieving SOTA place recognition performance while maintaining solid generalization ability. Our open-source code will be released at https://github.com/QiZS-BIT/GSPR.
[ { "version": "v1", "created": "Tue, 1 Oct 2024 00:43:45 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 15:32:33 GMT" } ]
2025-03-07T00:00:00
[ [ "Qi", "Zhangshuo", "" ], [ "Ma", "Junyi", "" ], [ "Xu", "Jingyi", "" ], [ "Zhou", "Zijie", "" ], [ "Cheng", "Luqi", "" ], [ "Xiong", "Guangming", "" ] ]
TITLE: GSPR: Multimodal Place Recognition Using 3D Gaussian Splatting for Autonomous Driving ABSTRACT: Place recognition is a crucial component that enables autonomous vehicles to obtain localization results in GPS-denied environments. In recent years, multimodal place recognition methods have gained increasing attention. They overcome the weaknesses of unimodal sensor systems by leveraging complementary information from different modalities. However, most existing methods explore cross-modality correlations through feature-level or descriptor-level fusion, suffering from a lack of interpretability. Conversely, the recently proposed 3D Gaussian Splatting provides a new perspective on multimodal fusion by harmonizing different modalities into an explicit scene representation. In this paper, we propose a 3D Gaussian Splatting-based multimodal place recognition network dubbed GSPR. It explicitly combines multi-view RGB images and LiDAR point clouds into a spatio-temporally unified scene representation with the proposed Multimodal Gaussian Splatting. A network composed of 3D graph convolution and transformer is designed to extract spatio-temporal features and global descriptors from the Gaussian scenes for place recognition. Extensive evaluations on three datasets demonstrate that our method can effectively leverage complementary strengths of both multi-view cameras and LiDAR, achieving SOTA place recognition performance while maintaining solid generalization ability. Our open-source code will be released at https://github.com/QiZS-BIT/GSPR.
no_new_dataset
0.947039
2410.00862
Lorenzo Cazzaro
Stefano Calzavara, Lorenzo Cazzaro, Massimo Vettori
Timber! Poisoning Decision Trees
This work has been accepted for publication in the 3rd IEEE Conference on Secure and Trustworthy Machine Learning (IEEE SaTML 2025). The final version will be available on IEEE Xplore. 17 pages, 7 figures, 5 tables
null
null
null
cs.LG cs.CR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present Timber, the first white-box poisoning attack targeting decision trees. Timber is based on a greedy attack strategy that leverages sub-tree retraining to efficiently estimate the damage caused by poisoning a given training instance. The attack relies on a tree annotation procedure, which enables the sorting of training instances so that they are processed in increasing order of the computational cost of sub-tree retraining. This sorting yields a variant of Timber that supports an early stopping criterion, designed to make poisoning attacks more efficient and feasible on larger datasets. We also discuss an extension of Timber to traditional random forest models, which is valuable since decision trees are typically combined into ensembles to improve their predictive power. Our experimental evaluation on public datasets demonstrates that our attacks outperform existing baselines in terms of effectiveness, efficiency, or both. Moreover, we show that two representative defenses can mitigate the effect of our attacks, but fail to effectively thwart them.
[ { "version": "v1", "created": "Tue, 1 Oct 2024 16:58:54 GMT" }, { "version": "v2", "created": "Wed, 5 Mar 2025 22:12:27 GMT" } ]
2025-03-07T00:00:00
[ [ "Calzavara", "Stefano", "" ], [ "Cazzaro", "Lorenzo", "" ], [ "Vettori", "Massimo", "" ] ]
TITLE: Timber! Poisoning Decision Trees ABSTRACT: We present Timber, the first white-box poisoning attack targeting decision trees. Timber is based on a greedy attack strategy that leverages sub-tree retraining to efficiently estimate the damage caused by poisoning a given training instance. The attack relies on a tree annotation procedure, which enables the sorting of training instances so that they are processed in increasing order of the computational cost of sub-tree retraining. This sorting yields a variant of Timber that supports an early stopping criterion, designed to make poisoning attacks more efficient and feasible on larger datasets. We also discuss an extension of Timber to traditional random forest models, which is valuable since decision trees are typically combined into ensembles to improve their predictive power. Our experimental evaluation on public datasets demonstrates that our attacks outperform existing baselines in terms of effectiveness, efficiency, or both. Moreover, we show that two representative defenses can mitigate the effect of our attacks, but fail to effectively thwart them.
no_new_dataset
0.945801
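As a rough illustration of the greedy strategy described in the abstract above, the following Python sketch poisons a decision tree by flipping training labels one at a time, committing whichever flip hurts validation accuracy the most. It retrains the full tree to score each candidate, so it omits Timber's key efficiency tricks (sub-tree retraining and cost-sorted candidates); the binary-label assumption and function names are illustrative, not the authors' implementation.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def greedy_label_flip_poison(X_tr, y_tr, X_val, y_val, budget):
    # Greedily flip the binary training label whose change most degrades
    # validation accuracy; full retraining stands in for sub-tree retraining.
    y_poison = y_tr.copy()
    flipped = set()
    for _ in range(budget):
        best_i, best_acc = None, np.inf
        for i in range(len(y_poison)):
            if i in flipped:
                continue
            y_try = y_poison.copy()
            y_try[i] = 1 - y_try[i]
            tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_try)
            acc = tree.score(X_val, y_val)
            if acc < best_acc:
                best_i, best_acc = i, acc
        y_poison[best_i] = 1 - y_poison[best_i]
        flipped.add(best_i)
    return y_poison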
2410.01166
Zhijian Li
Zhijian Li, Stefan Larson, Kevin Leach
Document Classification using File Names
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Rapid document classification is critical in several time-sensitive applications like digital forensics and large-scale media classification. Traditional approaches that rely on heavy-duty deep learning models fall short due to high inference times over vast input datasets and the computational resources associated with analyzing whole documents. In this paper, we present a method using lightweight supervised learning models, combined with a TF-IDF feature extraction-based tokenization method, to accurately and efficiently classify documents based solely on file names, which substantially reduces inference time. Our results indicate that file name classifiers can process more than 90% of in-scope documents with 99.63% and 96.57% accuracy when tested on two datasets, while being 442x faster than more complex models such as DiT. Our method offers a crucial solution for efficiently processing vast document datasets in critical scenarios, enabling fast and more reliable document classification.
[ { "version": "v1", "created": "Wed, 2 Oct 2024 01:42:19 GMT" }, { "version": "v2", "created": "Wed, 5 Mar 2025 19:11:50 GMT" } ]
2025-03-07T00:00:00
[ [ "Li", "Zhijian", "" ], [ "Larson", "Stefan", "" ], [ "Leach", "Kevin", "" ] ]
TITLE: Document Classification using File Names ABSTRACT: Rapid document classification is critical in several time-sensitive applications like digital forensics and large-scale media classification. Traditional approaches that rely on heavy-duty deep learning models fall short due to high inference times over vast input datasets and the computational resources associated with analyzing whole documents. In this paper, we present a method using lightweight supervised learning models, combined with a TF-IDF feature extraction-based tokenization method, to accurately and efficiently classify documents based solely on file names, which substantially reduces inference time. Our results indicate that file name classifiers can process more than 90% of in-scope documents with 99.63% and 96.57% accuracy when tested on two datasets, while being 442x faster than more complex models such as DiT. Our method offers a crucial solution for efficiently processing vast document datasets in critical scenarios, enabling fast and more reliable document classification.
no_new_dataset
0.949949
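The pipeline described above is easy to approximate with off-the-shelf tools. Below is a minimal Python sketch of file-name classification with TF-IDF features and a lightweight linear model; the character n-gram settings, toy file names, and labels are assumptions for illustration, not the paper's exact configuration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy file names and labels (hypothetical examples).
filenames = ["invoice_2023_q1.pdf", "vacation_photo_001.jpg",
             "quarterly_report_final.docx", "IMG_4521.png"]
labels = ["document", "media", "document", "media"]

# Character n-grams are robust to underscores, digits, and extensions,
# and the linear model keeps inference far cheaper than a full-document model.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
clf.fit(filenames, labels)
print(clf.predict(["budget_summary_2024.xlsx"]))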
2410.01257
Zhilin Wang
Zhilin Wang, Alexander Bukharin, Olivier Delalleau, Daniel Egert, Gerald Shen, Jiaqi Zeng, Oleksii Kuchaiev, Yi Dong
HelpSteer2-Preference: Complementing Ratings with Preferences
Accepted to ICLR 2025; 28 pages, 3 figures
null
null
null
cs.LG cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reward models are critical for aligning models to follow instructions, and are typically trained following one of two popular paradigms: Bradley-Terry style or Regression style. However, there is a lack of evidence that either approach is better than the other, when adequately matched for data. This is primarily because these approaches require data collected in different (but incompatible) formats, meaning that adequately matched data is not available in existing public datasets. To tackle this problem, we release preference annotations (designed for Bradley-Terry training) to complement existing ratings (designed for Regression style training) in the HelpSteer2 dataset. To improve data interpretability, preference annotations are accompanied with human-written justifications. Using this data, we conduct the first head-to-head comparison of Bradley-Terry and Regression models when adequately matched for data. Based on insights derived from such a comparison, we propose a novel approach to combine Bradley-Terry and Regression reward modeling. A Llama-3.1-70B-Instruct model tuned with this approach scores 94.1 on RewardBench, emerging top of more than 140 reward models as of 1 Oct 2024. This reward model can then be used with REINFORCE algorithm (RLHF) to align an Instruct model to reach 85.0 on Arena Hard, which is No. 1 as of 1 Oct 2024. We open-source this dataset (CC-BY-4.0 license) at https://huggingface.co/datasets/nvidia/HelpSteer2#preferences-new -- 1-oct-2024 and openly release the trained Reward and Instruct models at https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Reward and https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct
[ { "version": "v1", "created": "Wed, 2 Oct 2024 06:05:52 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 12:13:14 GMT" } ]
2025-03-07T00:00:00
[ [ "Wang", "Zhilin", "" ], [ "Bukharin", "Alexander", "" ], [ "Delalleau", "Olivier", "" ], [ "Egert", "Daniel", "" ], [ "Shen", "Gerald", "" ], [ "Zeng", "Jiaqi", "" ], [ "Kuchaiev", "Oleksii", "" ], [ "Dong", "Yi", "" ] ]
TITLE: HelpSteer2-Preference: Complementing Ratings with Preferences ABSTRACT: Reward models are critical for aligning models to follow instructions, and are typically trained following one of two popular paradigms: Bradley-Terry style or Regression style. However, there is a lack of evidence that either approach is better than the other, when adequately matched for data. This is primarily because these approaches require data collected in different (but incompatible) formats, meaning that adequately matched data is not available in existing public datasets. To tackle this problem, we release preference annotations (designed for Bradley-Terry training) to complement existing ratings (designed for Regression style training) in the HelpSteer2 dataset. To improve data interpretability, preference annotations are accompanied with human-written justifications. Using this data, we conduct the first head-to-head comparison of Bradley-Terry and Regression models when adequately matched for data. Based on insights derived from such a comparison, we propose a novel approach to combine Bradley-Terry and Regression reward modeling. A Llama-3.1-70B-Instruct model tuned with this approach scores 94.1 on RewardBench, emerging top of more than 140 reward models as of 1 Oct 2024. This reward model can then be used with REINFORCE algorithm (RLHF) to align an Instruct model to reach 85.0 on Arena Hard, which is No. 1 as of 1 Oct 2024. We open-source this dataset (CC-BY-4.0 license) at https://huggingface.co/datasets/nvidia/HelpSteer2#preferences-new -- 1-oct-2024 and openly release the trained Reward and Instruct models at https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Reward and https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct
no_new_dataset
0.942823
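For readers unfamiliar with the two reward-modeling paradigms compared above, the following PyTorch sketch shows the core losses: a Bradley-Terry pairwise loss over (chosen, rejected) rewards and a regression loss against a scalar rating. Tensor names and toy values are illustrative, and the paper's combined approach is more involved than either loss alone.

import torch
import torch.nn.functional as F

def bradley_terry_loss(r_chosen, r_rejected):
    # P(chosen preferred) = sigmoid(r_chosen - r_rejected); minimize its -log.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

def regression_loss(r_pred, rating):
    # Regression style: fit the scalar reward directly to a human rating.
    return F.mse_loss(r_pred, rating)

r_c, r_r = torch.tensor([1.2, 0.3]), torch.tensor([0.4, 0.9])
print(bradley_terry_loss(r_c, r_r))
print(regression_loss(torch.tensor([3.1]), torch.tensor([4.0])))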
2410.01469
Kai Li
Mohan Xu, Kai Li, Guo Chen, Xiaolin Hu
TIGER: Time-frequency Interleaved Gain Extraction and Reconstruction for Efficient Speech Separation
Accepted by ICLR 2025, demo page: https://cslikai.cn/TIGER/
null
null
null
cs.SD cs.AI eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, much speech separation research has focused primarily on improving model performance. However, for low-latency speech processing systems, high efficiency is equally important. Therefore, we propose a speech separation model with significantly reduced parameters and computational costs: Time-frequency Interleaved Gain Extraction and Reconstruction network (TIGER). TIGER leverages prior knowledge to divide frequency bands and compresses frequency information. We employ a multi-scale selective attention module to extract contextual features while introducing a full-frequency-frame attention module to capture both temporal and frequency contextual information. Additionally, to more realistically evaluate the performance of speech separation models in complex acoustic environments, we introduce a dataset called EchoSet. This dataset includes noise and more realistic reverberation (e.g., considering object occlusions and material properties), with speech from two speakers overlapping at random proportions. Experimental results showed that models trained on EchoSet generalized better to data collected in the physical world than those trained on other datasets, which validates the practical value of EchoSet. On EchoSet and real-world data, TIGER significantly reduces the number of parameters by 94.3% and the MACs by 95.3% while achieving performance surpassing the state-of-the-art (SOTA) model TF-GridNet.
[ { "version": "v1", "created": "Wed, 2 Oct 2024 12:21:06 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 04:03:53 GMT" } ]
2025-03-07T00:00:00
[ [ "Xu", "Mohan", "" ], [ "Li", "Kai", "" ], [ "Chen", "Guo", "" ], [ "Hu", "Xiaolin", "" ] ]
TITLE: TIGER: Time-frequency Interleaved Gain Extraction and Reconstruction for Efficient Speech Separation ABSTRACT: In recent years, much speech separation research has focused primarily on improving model performance. However, for low-latency speech processing systems, high efficiency is equally important. Therefore, we propose a speech separation model with significantly reduced parameters and computational costs: Time-frequency Interleaved Gain Extraction and Reconstruction network (TIGER). TIGER leverages prior knowledge to divide frequency bands and compresses frequency information. We employ a multi-scale selective attention module to extract contextual features while introducing a full-frequency-frame attention module to capture both temporal and frequency contextual information. Additionally, to more realistically evaluate the performance of speech separation models in complex acoustic environments, we introduce a dataset called EchoSet. This dataset includes noise and more realistic reverberation (e.g., considering object occlusions and material properties), with speech from two speakers overlapping at random proportions. Experimental results showed that models trained on EchoSet generalized better to data collected in the physical world than those trained on other datasets, which validates the practical value of EchoSet. On EchoSet and real-world data, TIGER significantly reduces the number of parameters by 94.3% and the MACs by 95.3% while achieving performance surpassing the state-of-the-art (SOTA) model TF-GridNet.
new_dataset
0.96502
2410.01481
Kai Li
Kai Li, Wendi Sang, Chang Zeng, Runxuan Yang, Guo Chen, Xiaolin Hu
SonicSim: A customizable simulation platform for speech processing in moving sound source scenarios
Accepted by ICLR 2025
null
null
null
cs.SD cs.AI eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Systematic evaluation of speech separation and enhancement models under moving sound source conditions requires extensive and diverse data. However, real-world datasets often lack sufficient data for training and evaluation, and synthetic datasets, while larger, lack acoustic realism. Consequently, neither effectively meets practical needs. To address this issue, we introduce SonicSim, a synthetic toolkit based on the embodied AI simulation platform Habitat-sim, designed to generate highly customizable data for moving sound sources. SonicSim supports multi-level adjustments, including scene-level, microphone-level, and source-level adjustments, enabling the creation of more diverse synthetic data. Leveraging SonicSim, we constructed a benchmark dataset called SonicSet, utilizing LibriSpeech, Freesound Dataset 50k (FSD50K), Free Music Archive (FMA), and 90 scenes from Matterport3D to evaluate speech separation and enhancement models. Additionally, to investigate the differences between synthetic and real-world data, we selected 5 hours of raw, non-reverberant data from the SonicSet validation set and recorded a real-world speech separation dataset, providing a reference for comparing SonicSet with other synthetic datasets. For speech enhancement, we utilized the real-world dataset RealMAN to validate the acoustic gap between SonicSet and existing synthetic datasets. The results indicate that models trained on SonicSet generalize better to real-world scenarios compared to other synthetic datasets. The code is publicly available at https://cslikai.cn/SonicSim/.
[ { "version": "v1", "created": "Wed, 2 Oct 2024 12:33:59 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 04:11:26 GMT" } ]
2025-03-07T00:00:00
[ [ "Li", "Kai", "" ], [ "Sang", "Wendi", "" ], [ "Zeng", "Chang", "" ], [ "Yang", "Runxuan", "" ], [ "Chen", "Guo", "" ], [ "Hu", "Xiaolin", "" ] ]
TITLE: SonicSim: A customizable simulation platform for speech processing in moving sound source scenarios ABSTRACT: Systematic evaluation of speech separation and enhancement models under moving sound source conditions requires extensive and diverse data. However, real-world datasets often lack sufficient data for training and evaluation, and synthetic datasets, while larger, lack acoustic realism. Consequently, neither effectively meets practical needs. To address this issue, we introduce SonicSim, a synthetic toolkit based on the embodied AI simulation platform Habitat-sim, designed to generate highly customizable data for moving sound sources. SonicSim supports multi-level adjustments, including scene-level, microphone-level, and source-level adjustments, enabling the creation of more diverse synthetic data. Leveraging SonicSim, we constructed a benchmark dataset called SonicSet, utilizing LibriSpeech, Freesound Dataset 50k (FSD50K), Free Music Archive (FMA), and 90 scenes from Matterport3D to evaluate speech separation and enhancement models. Additionally, to investigate the differences between synthetic and real-world data, we selected 5 hours of raw, non-reverberant data from the SonicSet validation set and recorded a real-world speech separation dataset, providing a reference for comparing SonicSet with other synthetic datasets. For speech enhancement, we utilized the real-world dataset RealMAN to validate the acoustic gap between SonicSet and existing synthetic datasets. The results indicate that models trained on SonicSet generalize better to real-world scenarios compared to other synthetic datasets. The code is publicly available at https://cslikai.cn/SonicSim/.
new_dataset
0.964954
2410.06209
Adarsh Kumarappan
Adarsh Kumarappan, Mo Tiwari, Peiyang Song, Robert Joseph George, Chaowei Xiao, Anima Anandkumar
LeanAgent: Lifelong Learning for Formal Theorem Proving
null
null
null
null
cs.LG cs.AI cs.LO
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) have been successful in mathematical reasoning tasks such as formal theorem proving when integrated with interactive proof assistants like Lean. Existing approaches involve training or fine-tuning an LLM on a specific dataset to perform well on particular domains, such as undergraduate-level mathematics. These methods struggle with generalizability to advanced mathematics. A fundamental limitation is that these approaches operate on static domains, failing to capture how mathematicians often work across multiple domains and projects simultaneously or cyclically. We present LeanAgent, a novel lifelong learning framework for formal theorem proving that continuously generalizes to and improves on ever-expanding mathematical knowledge without forgetting previously learned knowledge. LeanAgent introduces several key innovations, including a curriculum learning strategy that optimizes the learning trajectory in terms of mathematical difficulty, a dynamic database for efficient management of evolving mathematical knowledge, and progressive training to balance stability and plasticity. LeanAgent successfully generates formal proofs for 155 theorems across 23 diverse Lean repositories where formal proofs were previously missing, many from advanced mathematics. It performs significantly better than the static LLM baseline, proving challenging theorems in domains like abstract algebra and algebraic topology while showcasing a clear progression of learning from basic concepts to advanced topics. In addition, we analyze LeanAgent's superior performance on key lifelong learning metrics. LeanAgent achieves exceptional scores in stability and backward transfer, where learning new tasks improves performance on previously learned tasks. This emphasizes LeanAgent's continuous generalizability and improvement, explaining its superior theorem-proving performance.
[ { "version": "v1", "created": "Tue, 8 Oct 2024 17:11:24 GMT" }, { "version": "v2", "created": "Sat, 12 Oct 2024 05:20:07 GMT" }, { "version": "v3", "created": "Wed, 16 Oct 2024 19:26:56 GMT" }, { "version": "v4", "created": "Fri, 18 Oct 2024 03:59:57 GMT" }, { "version": "v5", "created": "Wed, 30 Oct 2024 05:20:25 GMT" }, { "version": "v6", "created": "Wed, 6 Nov 2024 02:35:30 GMT" }, { "version": "v7", "created": "Mon, 25 Nov 2024 02:39:03 GMT" }, { "version": "v8", "created": "Thu, 6 Mar 2025 00:20:32 GMT" } ]
2025-03-07T00:00:00
[ [ "Kumarappan", "Adarsh", "" ], [ "Tiwari", "Mo", "" ], [ "Song", "Peiyang", "" ], [ "George", "Robert Joseph", "" ], [ "Xiao", "Chaowei", "" ], [ "Anandkumar", "Anima", "" ] ]
TITLE: LeanAgent: Lifelong Learning for Formal Theorem Proving ABSTRACT: Large Language Models (LLMs) have been successful in mathematical reasoning tasks such as formal theorem proving when integrated with interactive proof assistants like Lean. Existing approaches involve training or fine-tuning an LLM on a specific dataset to perform well on particular domains, such as undergraduate-level mathematics. These methods struggle with generalizability to advanced mathematics. A fundamental limitation is that these approaches operate on static domains, failing to capture how mathematicians often work across multiple domains and projects simultaneously or cyclically. We present LeanAgent, a novel lifelong learning framework for formal theorem proving that continuously generalizes to and improves on ever-expanding mathematical knowledge without forgetting previously learned knowledge. LeanAgent introduces several key innovations, including a curriculum learning strategy that optimizes the learning trajectory in terms of mathematical difficulty, a dynamic database for efficient management of evolving mathematical knowledge, and progressive training to balance stability and plasticity. LeanAgent successfully generates formal proofs for 155 theorems across 23 diverse Lean repositories where formal proofs were previously missing, many from advanced mathematics. It performs significantly better than the static LLM baseline, proving challenging theorems in domains like abstract algebra and algebraic topology while showcasing a clear progression of learning from basic concepts to advanced topics. In addition, we analyze LeanAgent's superior performance on key lifelong learning metrics. LeanAgent achieves exceptional scores in stability and backward transfer, where learning new tasks improves performance on previously learned tasks. This emphasizes LeanAgent's continuous generalizability and improvement, explaining its superior theorem-proving performance.
no_new_dataset
0.947817
2410.10877
Jinlong Pang
Jinlong Pang, Jiaheng Wei, Ankit Parag Shah, Zhaowei Zhu, Yaxuan Wang, Chen Qian, Yang Liu, Yujia Bao, Wei Wei
Improving Data Efficiency via Curating LLM-Driven Rating Systems
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Instruction tuning is critical for adapting large language models (LLMs) to downstream tasks, and recent studies have demonstrated that small amounts of human-curated data can outperform larger datasets, challenging traditional data scaling laws. While LLM-based data quality rating systems offer a cost-effective alternative to human annotation, they often suffer from inaccuracies and biases, even in powerful models like GPT-4. In this work, we introduce DS2, a Diversity-aware Score curation method for Data Selection. By systematically modeling error patterns through a score transition matrix, DS2 corrects LLM-based scores and promotes diversity in the selected data samples. Our approach shows that a curated subset (just 3.3% of the original dataset) outperforms full-scale datasets (300k samples) across various machine-alignment benchmarks, and matches or surpasses human-aligned datasets such as LIMA with the same sample size (1k samples). These findings challenge conventional data scaling assumptions, highlighting that redundant, low-quality samples can degrade performance and reaffirming that "more can be less."
[ { "version": "v1", "created": "Wed, 9 Oct 2024 10:07:55 GMT" }, { "version": "v2", "created": "Wed, 5 Mar 2025 23:56:10 GMT" } ]
2025-03-07T00:00:00
[ [ "Pang", "Jinlong", "" ], [ "Wei", "Jiaheng", "" ], [ "Shah", "Ankit Parag", "" ], [ "Zhu", "Zhaowei", "" ], [ "Wang", "Yaxuan", "" ], [ "Qian", "Chen", "" ], [ "Liu", "Yang", "" ], [ "Bao", "Yujia", "" ], [ "Wei", "Wei", "" ] ]
TITLE: Improving Data Efficiency via Curating LLM-Driven Rating Systems ABSTRACT: Instruction tuning is critical for adapting large language models (LLMs) to downstream tasks, and recent studies have demonstrated that small amounts of human-curated data can outperform larger datasets, challenging traditional data scaling laws. While LLM-based data quality rating systems offer a cost-effective alternative to human annotation, they often suffer from inaccuracies and biases, even in powerful models like GPT-4. In this work, we introduce DS2, a Diversity-aware Score curation method for Data Selection. By systematically modeling error patterns through a score transition matrix, DS2 corrects LLM-based scores and promotes diversity in the selected data samples. Our approach shows that a curated subset (just 3.3% of the original dataset) outperforms full-scale datasets (300k samples) across various machine-alignment benchmarks, and matches or surpasses human-aligned datasets such as LIMA with the same sample size (1k samples). These findings challenge conventional data scaling assumptions, highlighting that redundant, low-quality samples can degrade performance and reaffirming that "more can be less."
no_new_dataset
0.947039
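A toy version of the score-correction idea: given an estimated score transition matrix T, with T[i, j] approximating P(LLM rates j | true quality i), noisy ratings can be corrected via Bayes' rule, as in the Python sketch below. The matrix values and prior are made up, and DS2's estimation of T and its diversity-aware selection step are omitted.

import numpy as np

def correct_scores(observed, T, prior):
    # Posterior over true scores: post[i, j] proportional to prior[i] * T[i, j].
    post = prior[:, None] * T
    post = post / post.sum(axis=0, keepdims=True)
    # For each observed rating j, pick the most likely true score.
    return post[:, observed].argmax(axis=0)

T = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7]])
prior = np.array([0.3, 0.5, 0.2])
print(correct_scores(np.array([0, 1, 2, 1]), T, prior))  # [0 1 2 1]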
2410.12261
Xingjian Wu
Xingjian Wu, Xiangfei Qiu, Zhengyu Li, Yihang Wang, Jilin Hu, Chenjuan Guo, Hui Xiong, Bin Yang
CATCH: Channel-Aware multivariate Time Series Anomaly Detection via Frequency Patching
Accepted by ICLR 2025
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anomaly detection in multivariate time series is challenging as heterogeneous subsequence anomalies may occur. Reconstruction-based methods, which focus on learning normal patterns in the frequency domain to detect diverse abnormal subsequences, achieve promising results, while still falling short on capturing fine-grained frequency characteristics and channel correlations. To contend with the limitations, we introduce CATCH, a framework based on frequency patching. We propose to patchify the frequency domain into frequency bands, which enhances its ability to capture fine-grained frequency characteristics. To perceive appropriate channel correlations, we propose a Channel Fusion Module (CFM), which features a patch-wise mask generator and a masked-attention mechanism. Driven by a bi-level multi-objective optimization algorithm, the CFM is encouraged to iteratively discover appropriate patch-wise channel correlations, and to cluster relevant channels while isolating adverse effects from irrelevant channels. Extensive experiments on 10 real-world datasets and 12 synthetic datasets demonstrate that CATCH achieves state-of-the-art performance. We make our code and datasets available at https://github.com/decisionintelligence/CATCH.
[ { "version": "v1", "created": "Wed, 16 Oct 2024 05:58:55 GMT" }, { "version": "v2", "created": "Fri, 14 Feb 2025 06:59:46 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 13:39:32 GMT" } ]
2025-03-07T00:00:00
[ [ "Wu", "Xingjian", "" ], [ "Qiu", "Xiangfei", "" ], [ "Li", "Zhengyu", "" ], [ "Wang", "Yihang", "" ], [ "Hu", "Jilin", "" ], [ "Guo", "Chenjuan", "" ], [ "Xiong", "Hui", "" ], [ "Yang", "Bin", "" ] ]
TITLE: CATCH: Channel-Aware multivariate Time Series Anomaly Detection via Frequency Patching ABSTRACT: Anomaly detection in multivariate time series is challenging as heterogeneous subsequence anomalies may occur. Reconstruction-based methods, which focus on learning normal patterns in the frequency domain to detect diverse abnormal subsequences, achieve promising results, while still falling short on capturing fine-grained frequency characteristics and channel correlations. To contend with the limitations, we introduce CATCH, a framework based on frequency patching. We propose to patchify the frequency domain into frequency bands, which enhances its ability to capture fine-grained frequency characteristics. To perceive appropriate channel correlations, we propose a Channel Fusion Module (CFM), which features a patch-wise mask generator and a masked-attention mechanism. Driven by a bi-level multi-objective optimization algorithm, the CFM is encouraged to iteratively discover appropriate patch-wise channel correlations, and to cluster relevant channels while isolating adverse effects from irrelevant channels. Extensive experiments on 10 real-world datasets and 12 synthetic datasets demonstrate that CATCH achieves state-of-the-art performance. We make our code and datasets available at https://github.com/decisionintelligence/CATCH.
no_new_dataset
0.94743
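To make "frequency patching" concrete, here is a minimal PyTorch sketch that transforms a multivariate series into the frequency domain and splits the spectrum into equal-width bands. The band count, rFFT convention, and real/imaginary stacking are illustrative assumptions, not CATCH's exact setup.

import torch

def frequency_patches(x: torch.Tensor, n_bands: int) -> torch.Tensor:
    # x: (batch, channels, time) -> rFFT over time, then split into bands.
    spec = torch.fft.rfft(x, dim=-1)                     # (B, C, F) complex
    feats = torch.stack([spec.real, spec.imag], dim=-1)  # (B, C, F, 2)
    n_freq = feats.shape[2]
    band = n_freq // n_bands
    # Drop any remainder bins so all bands have equal width.
    feats = feats[:, :, : band * n_bands]
    return feats.reshape(feats.shape[0], feats.shape[1], n_bands, band, 2)

x = torch.randn(8, 7, 96)                     # 8 samples, 7 channels, 96 steps
print(frequency_patches(x, n_bands=7).shape)  # torch.Size([8, 7, 7, 7, 2])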
2410.14595
Gao Yu Lee Mr.
Gao Yu Lee, Tanmoy Dam, Md Meftahul Ferdaus, Daniel Puiu Poenar, Vu Duong
DRACO-DehazeNet: An Efficient Image Dehazing Network Combining Detail Recovery and a Novel Contrastive Learning Paradigm
Once the paper is accepted and published, the copyright will be transferred to the corresponding journal
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image dehazing is crucial for clarifying images obscured by haze or fog, but current learning-based approaches are dependent on large volumes of training data and hence consume significant computational power. Additionally, their performance is often inadequate under non-uniform or heavy haze. To address these challenges, we developed the Detail Recovery And Contrastive DehazeNet, which facilitates efficient and effective dehazing via a dense dilated inverted residual block and an attention-based detail recovery network that tailors enhancements to specific dehazed scene contexts. A major innovation is its ability to train effectively with limited data, achieved through a novel quadruplet loss-based contrastive dehazing paradigm. This approach distinctly separates hazy and clear image features while also distinguishing lower-quality and higher-quality dehazed images obtained from each sub-module of our network, thereby refining the dehazing process to a larger extent. Extensive tests on a variety of benchmarked haze datasets demonstrated the superiority of our approach. The code repository for this work is available at https://github.com/GreedYLearner1146/DRACO-DehazeNet.
[ { "version": "v1", "created": "Fri, 18 Oct 2024 16:48:31 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 07:06:50 GMT" } ]
2025-03-07T00:00:00
[ [ "Lee", "Gao Yu", "" ], [ "Dam", "Tanmoy", "" ], [ "Ferdaus", "Md Meftahul", "" ], [ "Poenar", "Daniel Puiu", "" ], [ "Duong", "Vu", "" ] ]
TITLE: DRACO-DehazeNet: An Efficient Image Dehazing Network Combining Detail Recovery and a Novel Contrastive Learning Paradigm ABSTRACT: Image dehazing is crucial for clarifying images obscured by haze or fog, but current learning-based approaches are dependent on large volumes of training data and hence consume significant computational power. Additionally, their performance is often inadequate under non-uniform or heavy haze. To address these challenges, we developed the Detail Recovery And Contrastive DehazeNet, which facilitates efficient and effective dehazing via a dense dilated inverted residual block and an attention-based detail recovery network that tailors enhancements to specific dehazed scene contexts. A major innovation is its ability to train effectively with limited data, achieved through a novel quadruplet loss-based contrastive dehazing paradigm. This approach distinctly separates hazy and clear image features while also distinguishing lower-quality and higher-quality dehazed images obtained from each sub-module of our network, thereby refining the dehazing process to a larger extent. Extensive tests on a variety of benchmarked haze datasets demonstrated the superiority of our approach. The code repository for this work is available at https://github.com/GreedYLearner1146/DRACO-DehazeNet.
no_new_dataset
0.943712
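To sketch what a quadruplet contrastive objective can look like in this setting, the following PyTorch snippet ranks four feature sets: clear images, high-quality dehazed outputs, low-quality dehazed outputs, and hazy inputs. The margins, distance metric, and two-term structure are assumptions for illustration, not DRACO-DehazeNet's exact loss.

import torch
import torch.nn.functional as F

def quadruplet_loss(clear, dehazed_hi, dehazed_lo, hazy, m1=0.5, m2=0.25):
    d = lambda a, b: F.pairwise_distance(a, b)
    # Pull high-quality dehazed features toward clear, push hazy away ...
    l1 = F.relu(d(clear, dehazed_hi) - d(clear, hazy) + m1)
    # ... and rank low-quality dehazed outputs between the two.
    l2 = F.relu(d(clear, dehazed_hi) - d(clear, dehazed_lo) + m2)
    return (l1 + l2).mean()

feats = [torch.randn(4, 128) for _ in range(4)]  # toy feature batches
print(quadruplet_loss(*feats))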
2410.17494
Jun-En Ding
Jun-En Ding, Chien-Chin Hsu, Chi-Hsiang Chu, Shuqiang Wang, and Feng Liu
Enhancing Multimodal Medical Image Classification using Cross-Graph Modal Contrastive Learning
null
null
null
null
eess.IV cs.CV
http://creativecommons.org/licenses/by/4.0/
The classification of medical images is a pivotal aspect of disease diagnosis, often enhanced by deep learning techniques. However, traditional approaches typically focus on unimodal medical image data, neglecting the integration of diverse non-image patient data. This paper proposes a novel Cross-Graph Modal Contrastive Learning (CGMCL) framework for multimodal structured data from different data domains to improve medical image classification. The model effectively integrates both image and non-image data by constructing cross-modality graphs and leveraging contrastive learning to align multimodal features in a shared latent space. An inter-modality feature scaling module further optimizes the representation learning process by reducing the gap between heterogeneous modalities. The proposed approach is evaluated on two datasets: a Parkinson's disease (PD) dataset and a public melanoma dataset. Results demonstrate that CGMCL outperforms conventional unimodal methods in accuracy, interpretability, and early disease prediction. Additionally, the method shows superior performance in multi-class melanoma classification. The CGMCL framework provides valuable insights into medical image classification while offering improved disease interpretability and predictive capabilities.
[ { "version": "v1", "created": "Wed, 23 Oct 2024 01:25:25 GMT" }, { "version": "v2", "created": "Mon, 25 Nov 2024 18:35:14 GMT" }, { "version": "v3", "created": "Fri, 7 Feb 2025 05:16:45 GMT" }, { "version": "v4", "created": "Thu, 6 Mar 2025 16:43:10 GMT" } ]
2025-03-07T00:00:00
[ [ "Ding", "Jun-En", "" ], [ "Hsu", "Chien-Chin", "" ], [ "Chu", "Chi-Hsiang", "" ], [ "Wang", "Shuqiang", "" ], [ "Liu", "Feng", "" ] ]
TITLE: Enhancing Multimodal Medical Image Classification using Cross-Graph Modal Contrastive Learning ABSTRACT: The classification of medical images is a pivotal aspect of disease diagnosis, often enhanced by deep learning techniques. However, traditional approaches typically focus on unimodal medical image data, neglecting the integration of diverse non-image patient data. This paper proposes a novel Cross-Graph Modal Contrastive Learning (CGMCL) framework for multimodal structured data from different data domains to improve medical image classification. The model effectively integrates both image and non-image data by constructing cross-modality graphs and leveraging contrastive learning to align multimodal features in a shared latent space. An inter-modality feature scaling module further optimizes the representation learning process by reducing the gap between heterogeneous modalities. The proposed approach is evaluated on two datasets: a Parkinson's disease (PD) dataset and a public melanoma dataset. Results demonstrate that CGMCL outperforms conventional unimodal methods in accuracy, interpretability, and early disease prediction. Additionally, the method shows superior performance in multi-class melanoma classification. The CGMCL framework provides valuable insights into medical image classification while offering improved disease interpretability and predictive capabilities.
no_new_dataset
0.943815
2410.17635
Wen Yang
Wen Yang, Minpeng Liao, Kai Fan
Markov Chain of Thought for Efficient Mathematical Reasoning
Camera ready version for NAACL 2025 Main
null
null
null
cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Multi-step Chain of Thought (CoT) benefits from the logical structure of the reasoning steps and task-specific actions, significantly enhancing the mathematical reasoning capabilities of large language models. As long CoT becomes prevalent, the number of reasoning steps exceeds manageable token limits and leads to higher computational demands. Inspired by the fundamental logic of human cognition, "derive, then reduce", we conceptualize the standard multi-step CoT as a novel Markov Chain of Thought (MCoT). In this study, we consider the mathematical reasoning task, defining each reasoning step as text accompanied by a Python code snippet. To facilitate a longer reasoning path, self-correction is enabled through interactions with the code interpreter. Our MCoT aims to compress previous reasoning steps into a simplified question, enabling efficient next-step inference without relying on a lengthy KV cache. In our experiments, we curate the $\texttt{MCoTInstruct}$ dataset, and the empirical results indicate that MCoT not only significantly enhances efficiency but also maintains comparable accuracy. While much remains to be explored, this work paves the way for exploring the long CoT reasoning abilities of LLMs. The code is available at https://github.com/james-yw/Markov-Chain-of-Thought
[ { "version": "v1", "created": "Wed, 23 Oct 2024 07:53:29 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 06:39:56 GMT" } ]
2025-03-07T00:00:00
[ [ "Yang", "Wen", "" ], [ "Liao", "Minpeng", "" ], [ "Fan", "Kai", "" ] ]
TITLE: Markov Chain of Thought for Efficient Mathematical Reasoning ABSTRACT: Multi-step Chain of Thought (CoT) benefits from the logical structure of the reasoning steps and task-specific actions, significantly enhancing the mathematical reasoning capabilities of large language models. As long CoT becomes prevalent, the number of reasoning steps exceeds manageable token limits and leads to higher computational demands. Inspired by the fundamental logic of human cognition, "derive, then reduce", we conceptualize the standard multi-step CoT as a novel Markov Chain of Thought (MCoT). In this study, we consider the mathematical reasoning task, defining each reasoning step as text accompanied by a Python code snippet. To facilitate a longer reasoning path, self-correction is enabled through interactions with the code interpreter. Our MCoT aims to compress previous reasoning steps into a simplified question, enabling efficient next-step inference without relying on a lengthy KV cache. In our experiments, we curate the $\texttt{MCoTInstruct}$ dataset, and the empirical results indicate that MCoT not only significantly enhances efficiency but also maintains comparable accuracy. While much remains to be explored, this work paves the way for exploring the long CoT reasoning abilities of LLMs. The code is available at https://github.com/james-yw/Markov-Chain-of-Thought
new_dataset
0.957118
2410.18653
Esteban Garces Arias
Esteban Garces Arias and Hannah Blocher and Julian Rodemann and Meimingwei Li and Christian Heumann and Matthias A{\ss}enmacher
Towards Better Open-Ended Text Generation: A Multicriteria Evaluation Framework
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open-ended text generation has become a prominent task in natural language processing due to the rise of powerful (large) language models. However, evaluating the quality of these models and the employed decoding strategies remains challenging because of trade-offs among widely used metrics such as coherence, diversity, and perplexity. Decoding methods often excel in some metrics while underperforming in others, complicating the establishment of a clear ranking. In this paper, we present novel ranking strategies within this multicriteria framework. Specifically, we employ benchmarking approaches based on partial orderings and present a new summary metric designed to balance existing automatic indicators, providing a more holistic evaluation of text generation quality. Our experiments demonstrate that the proposed methods offer a robust way to compare decoding strategies, and serve as valuable tools in guiding model selection for open-ended text generation tasks. Finally, we suggest future directions for improving evaluation methodologies in text generation. Our codebase, datasets, and models are publicly available.
[ { "version": "v1", "created": "Thu, 24 Oct 2024 11:32:01 GMT" }, { "version": "v2", "created": "Wed, 5 Mar 2025 21:24:29 GMT" } ]
2025-03-07T00:00:00
[ [ "Arias", "Esteban Garces", "" ], [ "Blocher", "Hannah", "" ], [ "Rodemann", "Julian", "" ], [ "Li", "Meimingwei", "" ], [ "Heumann", "Christian", "" ], [ "Aßenmacher", "Matthias", "" ] ]
TITLE: Towards Better Open-Ended Text Generation: A Multicriteria Evaluation Framework ABSTRACT: Open-ended text generation has become a prominent task in natural language processing due to the rise of powerful (large) language models. However, evaluating the quality of these models and the employed decoding strategies remains challenging because of trade-offs among widely used metrics such as coherence, diversity, and perplexity. Decoding methods often excel in some metrics while underperforming in others, complicating the establishment of a clear ranking. In this paper, we present novel ranking strategies within this multicriteria framework. Specifically, we employ benchmarking approaches based on partial orderings and present a new summary metric designed to balance existing automatic indicators, providing a more holistic evaluation of text generation quality. Our experiments demonstrate that the proposed methods offer a robust way to compare decoding strategies, and serve as valuable tools in guiding model selection for open-ended text generation tasks. Finally, we suggest future directions for improving evaluation methodologies in text generation. Our codebase, datasets, and models are publicly available.
no_new_dataset
0.945801
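One simple instance of benchmarking via partial orderings is a Pareto-dominance check over per-strategy metric vectors, sketched below in Python. The decoding strategies, metrics, and scores are hypothetical, and the paper's actual ranking strategies and summary metric are richer than this.

def dominates(a, b):
    # a dominates b if it is at least as good everywhere, strictly better somewhere.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# (coherence, diversity) per decoding strategy, higher is better (toy values).
scores = {"nucleus": (0.71, 0.62), "beam": (0.75, 0.40), "greedy": (0.70, 0.35)}
front = [name for name, v in scores.items()
         if not any(dominates(w, v) for w in scores.values() if w is not v)]
print(front)  # greedy is dominated by beam; nucleus and beam are incomparable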
2410.24185
Zhenyu Jiang
Zhenyu Jiang, Yuqi Xie, Kevin Lin, Zhenjia Xu, Weikang Wan, Ajay Mandlekar, Linxi Fan, Yuke Zhu
DexMimicGen: Automated Data Generation for Bimanual Dexterous Manipulation via Imitation Learning
ICRA 2025. Project website: https://dexmimicgen.github.io/
null
null
null
cs.RO cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Imitation learning from human demonstrations is an effective means to teach robots manipulation skills. But data acquisition is a major bottleneck in applying this paradigm more broadly, due to the amount of cost and human effort involved. There has been significant interest in imitation learning for bimanual dexterous robots, like humanoids. Unfortunately, data collection is even more challenging here due to the difficulty of simultaneously controlling multiple arms and multi-fingered hands. Automated data generation in simulation is a compelling, scalable alternative to fuel this need for data. To this end, we introduce DexMimicGen, a large-scale automated data generation system that synthesizes trajectories from a handful of human demonstrations for humanoid robots with dexterous hands. We present a collection of simulation environments in the setting of bimanual dexterous manipulation, spanning a range of manipulation behaviors and different requirements for coordination among the two arms. We generate 21K demos across these tasks from just 60 source human demos and study the effect of several data generation and policy learning decisions on agent performance. Finally, we present a real-to-sim-to-real pipeline and deploy it on a real-world humanoid can sorting task. Generated datasets, simulation environments and additional results are at https://dexmimicgen.github.io/
[ { "version": "v1", "created": "Thu, 31 Oct 2024 17:48:45 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 05:34:17 GMT" } ]
2025-03-07T00:00:00
[ [ "Jiang", "Zhenyu", "" ], [ "Xie", "Yuqi", "" ], [ "Lin", "Kevin", "" ], [ "Xu", "Zhenjia", "" ], [ "Wan", "Weikang", "" ], [ "Mandlekar", "Ajay", "" ], [ "Fan", "Linxi", "" ], [ "Zhu", "Yuke", "" ] ]
TITLE: DexMimicGen: Automated Data Generation for Bimanual Dexterous Manipulation via Imitation Learning ABSTRACT: Imitation learning from human demonstrations is an effective means to teach robots manipulation skills. But data acquisition is a major bottleneck in applying this paradigm more broadly, due to the amount of cost and human effort involved. There has been significant interest in imitation learning for bimanual dexterous robots, like humanoids. Unfortunately, data collection is even more challenging here due to the difficulty of simultaneously controlling multiple arms and multi-fingered hands. Automated data generation in simulation is a compelling, scalable alternative to fuel this need for data. To this end, we introduce DexMimicGen, a large-scale automated data generation system that synthesizes trajectories from a handful of human demonstrations for humanoid robots with dexterous hands. We present a collection of simulation environments in the setting of bimanual dexterous manipulation, spanning a range of manipulation behaviors and different requirements for coordination among the two arms. We generate 21K demos across these tasks from just 60 source human demos and study the effect of several data generation and policy learning decisions on agent performance. Finally, we present a real-to-sim-to-real pipeline and deploy it on a real-world humanoid can sorting task. Generated datasets, simulation environments and additional results are at https://dexmimicgen.github.io/
no_new_dataset
0.746786
2411.07076
Yichen He
Yichen He, Yuan Lin, Jianchao Wu, Hanchong Zhang, Yuchen Zhang, Ruicheng Le
StoryTeller: Improving Long Video Description through Global Audio-Visual Character Identification
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Existing large vision-language models (LVLMs) are largely limited to processing short, seconds-long videos and struggle with generating coherent descriptions for extended videos spanning minutes or more. Long video description introduces new challenges, such as consistent character identification and plot-level descriptions incorporating both visual and audio information. To address these, we identify audio-visual character identification, i.e., matching character names to each line of dialogue, as a key factor. We propose StoryTeller, a system for generating dense descriptions of long videos, incorporating both low-level visual concepts and high-level plot information. StoryTeller uses a multimodal large language model that integrates visual, audio, and text modalities to perform audio-visual character identification on minute-long video clips. The results are then fed into an LVLM to enhance the consistency of video descriptions. We validate our approach on movie description tasks and introduce MovieStory101, a dataset with dense descriptions for three-minute movie clips. To evaluate long video descriptions, we create StoryQA, a large set of multiple-choice questions for the MovieStory101 test set. We assess descriptions by inputting them into GPT-4 to answer these questions, using accuracy as an automatic evaluation metric. Experiments show that StoryTeller outperforms all open and closed-source baselines on StoryQA, achieving 9.5% higher accuracy than the strongest baseline, Gemini-1.5-pro, and demonstrating a +15.56% advantage in human side-by-side evaluations. Additionally, incorporating audio-visual character identification from StoryTeller improves the performance of all video description models, with Gemini-1.5-pro and GPT-4o showing relative improvements of 5.5% and 13.0%, respectively, in accuracy on StoryQA.
[ { "version": "v1", "created": "Mon, 11 Nov 2024 15:51:48 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 09:13:28 GMT" } ]
2025-03-07T00:00:00
[ [ "He", "Yichen", "" ], [ "Lin", "Yuan", "" ], [ "Wu", "Jianchao", "" ], [ "Zhang", "Hanchong", "" ], [ "Zhang", "Yuchen", "" ], [ "Le", "Ruicheng", "" ] ]
TITLE: StoryTeller: Improving Long Video Description through Global Audio-Visual Character Identification ABSTRACT: Existing large vision-language models (LVLMs) are largely limited to processing short, seconds-long videos and struggle with generating coherent descriptions for extended videos spanning minutes or more. Long video description introduces new challenges, such as consistent character identification and plot-level descriptions incorporating both visual and audio information. To address these, we identify audio-visual character identification, i.e., matching character names to each line of dialogue, as a key factor. We propose StoryTeller, a system for generating dense descriptions of long videos, incorporating both low-level visual concepts and high-level plot information. StoryTeller uses a multimodal large language model that integrates visual, audio, and text modalities to perform audio-visual character identification on minute-long video clips. The results are then fed into an LVLM to enhance the consistency of video descriptions. We validate our approach on movie description tasks and introduce MovieStory101, a dataset with dense descriptions for three-minute movie clips. To evaluate long video descriptions, we create StoryQA, a large set of multiple-choice questions for the MovieStory101 test set. We assess descriptions by inputting them into GPT-4 to answer these questions, using accuracy as an automatic evaluation metric. Experiments show that StoryTeller outperforms all open and closed-source baselines on StoryQA, achieving 9.5% higher accuracy than the strongest baseline, Gemini-1.5-pro, and demonstrating a +15.56% advantage in human side-by-side evaluations. Additionally, incorporating audio-visual character identification from StoryTeller improves the performance of all video description models, with Gemini-1.5-pro and GPT-4o showing relative improvements of 5.5% and 13.0%, respectively, in accuracy on StoryQA.
new_dataset
0.932269
2411.08410
Yangyang Guo
Yangyang Guo and Fangkai Jiao and Liqiang Nie and Mohan Kankanhalli
The VLLM Safety Paradox: Dual Ease in Jailbreak Attack and Defense
Logic smoothing and language polishing
null
null
null
cs.CR cs.CV
http://creativecommons.org/licenses/by/4.0/
The vulnerability of Vision Large Language Models (VLLMs) to jailbreak attacks appears as no surprise. However, recent defense mechanisms against these attacks have reached near-saturation performance on benchmark evaluations, often with minimal effort. This \emph{dual high performance} in both attack and defense raises a fundamental and perplexing paradox. To gain a deep understanding of this issue and thus further help strengthen the trustworthiness of VLLMs, this paper makes three key contributions: i) One tentative explanation for VLLMs being prone to jailbreak attacks--\textbf{inclusion of vision inputs}, as well as its in-depth analysis. ii) The recognition of a largely ignored problem in existing defense mechanisms--\textbf{over-prudence}. The problem causes these defense methods to exhibit unintended abstention, even in the presence of benign inputs, thereby undermining their reliability in faithfully defending against attacks. iii) A simple safety-aware method--\textbf{LLM-Pipeline}. Our method repurposes the more advanced guardrails of LLMs on the shelf, serving as an effective alternative detector prior to VLLM response. Last but not least, we find that the two representative evaluation methods for jailbreak often exhibit chance agreement. This limitation makes it potentially misleading when evaluating attack strategies or defense mechanisms. We believe the findings from this paper offer useful insights to rethink the foundational development of VLLM safety with respect to benchmark datasets, defense strategies, and evaluation methods.
[ { "version": "v1", "created": "Wed, 13 Nov 2024 07:57:19 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 01:45:26 GMT" } ]
2025-03-07T00:00:00
[ [ "Guo", "Yangyang", "" ], [ "Jiao", "Fangkai", "" ], [ "Nie", "Liqiang", "" ], [ "Kankanhalli", "Mohan", "" ] ]
TITLE: The VLLM Safety Paradox: Dual Ease in Jailbreak Attack and Defense ABSTRACT: The vulnerability of Vision Large Language Models (VLLMs) to jailbreak attacks appears as no surprise. However, recent defense mechanisms against these attacks have reached near-saturation performance on benchmark evaluations, often with minimal effort. This \emph{dual high performance} in both attack and defense raises a fundamental and perplexing paradox. To gain a deep understanding of this issue and thus further help strengthen the trustworthiness of VLLMs, this paper makes three key contributions: i) One tentative explanation for VLLMs being prone to jailbreak attacks--\textbf{inclusion of vision inputs}, as well as its in-depth analysis. ii) The recognition of a largely ignored problem in existing defense mechanisms--\textbf{over-prudence}. The problem causes these defense methods to exhibit unintended abstention, even in the presence of benign inputs, thereby undermining their reliability in faithfully defending against attacks. iii) A simple safety-aware method--\textbf{LLM-Pipeline}. Our method repurposes the more advanced guardrails of LLMs on the shelf, serving as an effective alternative detector prior to VLLM response. Last but not least, we find that the two representative evaluation methods for jailbreak often exhibit chance agreement. This limitation makes it potentially misleading when evaluating attack strategies or defense mechanisms. We believe the findings from this paper offer useful insights to rethink the foundational development of VLLM safety with respect to benchmark datasets, defense strategies, and evaluation methods.
no_new_dataset
0.945901
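The "chance agreement" concern about jailbreak evaluators can be quantified with an agreement statistic such as Cohen's kappa, as in this small Python sketch; the verdict vectors are invented for illustration.

from sklearn.metrics import cohen_kappa_score

# 1 = attack judged successful, 0 = judged blocked (toy verdicts).
judge_a = [1, 1, 0, 1, 0, 0, 1, 0]
judge_b = [1, 0, 0, 1, 1, 0, 0, 0]
# Kappa near 0 means the two evaluators agree little beyond chance.
print(cohen_kappa_score(judge_a, judge_b))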
2411.09263
Hu Wang
Hu Wang, Congbo Ma, Ibrahim Almakky, Ian Reid, Gustavo Carneiro, Mohammad Yaqub
Rethinking Weight-Averaged Model-merging
null
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Model-merging has emerged as a powerful approach in deep learning, capable of enhancing model performance without any training. However, the underlying mechanisms that explain its effectiveness remain largely unexplored. In this paper, we investigate this technique from three novel perspectives to empirically provide deeper insights into why and how weight-averaged model-merging works: (1) we examine the intrinsic patterns captured by the learning of the model weights, through visualizations of their patterns on several datasets, showing that these weights often encode structured and interpretable patterns, which is essential to why model-merging can work; (2) we mathematically and empirically investigate model ensemble merging strategies based on averaging weights versus averaging features, providing detailed analyses across diverse architectures and datasets; and (3) we explore the impact of changing parameter magnitude on the stability of model-merging predictions, revealing how weight averaging works as regularization by showing robustness across different parameter scales. Our findings shed light on the "black box" of weight-averaged model-merging, offering valuable insights and practical recommendations that advance the model-merging process. The code is available at https://github.com/billhhh/Rethink-Merge.
[ { "version": "v1", "created": "Thu, 14 Nov 2024 08:02:14 GMT" }, { "version": "v2", "created": "Thu, 21 Nov 2024 10:46:18 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 09:10:07 GMT" } ]
2025-03-07T00:00:00
[ [ "Wang", "Hu", "" ], [ "Ma", "Congbo", "" ], [ "Almakky", "Ibrahim", "" ], [ "Reid", "Ian", "" ], [ "Carneiro", "Gustavo", "" ], [ "Yaqub", "Mohammad", "" ] ]
TITLE: Rethinking Weight-Averaged Model-merging ABSTRACT: Model-merging has emerged as a powerful approach in deep learning, capable of enhancing model performance without any training. However, the underlying mechanisms that explain its effectiveness remain largely unexplored. In this paper, we investigate this technique from three novel perspectives to empirically provide deeper insights into why and how weight-averaged model-merging works: (1) we examine the intrinsic patterns captured by the learning of the model weights, through visualizations of their patterns on several datasets, showing that these weights often encode structured and interpretable patterns, which is essential to why model-merging can work; (2) we mathematically and empirically investigate model ensemble merging strategies based on averaging weights versus averaging features, providing detailed analyses across diverse architectures and datasets; and (3) we explore the impact of changing parameter magnitude on the stability of model-merging predictions, revealing how weight averaging works as regularization by showing robustness across different parameter scales. Our findings shed light on the "black box" of weight-averaged model-merging, offering valuable insights and practical recommendations that advance the model-merging process. The code is available at https://github.com/billhhh/Rethink-Merge.
no_new_dataset
0.941223
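The basic operation the paper dissects, uniform weight-space averaging of identically shaped models, fits in a few lines of PyTorch; this sketch assumes float-compatible parameters and matching state-dict keys.

import torch

def average_state_dicts(state_dicts):
    # Element-wise mean of each parameter tensor across the input models.
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
            for k in state_dicts[0]}

# Usage (hypothetical models sharing one architecture):
# merged = average_state_dicts([model_a.state_dict(), model_b.state_dict()])
# model_a.load_state_dict(merged)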
2411.11006
Haiyang Yu
Haiyang Yu, Tian Xie, Jiaping Gui, Pengyang Wang, Ping Yi, Yue Wu
BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation
null
null
10.1145/3690624.3709385
null
cs.CR cs.AI
http://creativecommons.org/licenses/by/4.0/
Over the past few years, the emergence of backdoor attacks has presented significant challenges to deep learning systems, allowing attackers to insert backdoors into neural networks. When data with a trigger is processed by a backdoor model, it can lead to mispredictions targeted by attackers, whereas normal data yields regular results. The scope of backdoor attacks is expanding beyond computer vision and encroaching into areas such as natural language processing and speech recognition. Nevertheless, existing backdoor defense methods are typically tailored to specific data modalities, restricting their application in multimodal contexts. While multimodal learning proves highly applicable in facial recognition, sentiment analysis, action recognition, and visual question answering, the security of these models remains a crucial concern. Specifically, there are no existing backdoor benchmarks targeting multimodal applications or related tasks. To facilitate research on multimodal backdoor learning, we introduce BackdoorMBTI, the first backdoor learning toolkit and benchmark designed for multimodal evaluation across three representative modalities from eleven commonly used datasets. BackdoorMBTI provides a systematic backdoor learning pipeline, encompassing data processing, data poisoning, backdoor training, and evaluation. The generated poison datasets and backdoor models enable detailed evaluation of backdoor defenses. Given the diversity of modalities, BackdoorMBTI facilitates systematic evaluation across different data types. Furthermore, BackdoorMBTI offers a standardized approach to handling practical factors in backdoor learning, such as issues related to data quality and erroneous labels. We anticipate that BackdoorMBTI will expedite future research in backdoor defense methods within a multimodal context. Code is available at https://github.com/SJTUHaiyangYu/BackdoorMBTI.
[ { "version": "v1", "created": "Sun, 17 Nov 2024 09:01:55 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 07:50:21 GMT" } ]
2025-03-07T00:00:00
[ [ "Yu", "Haiyang", "" ], [ "Xie", "Tian", "" ], [ "Gui", "Jiaping", "" ], [ "Wang", "Pengyang", "" ], [ "Yi", "Ping", "" ], [ "Wu", "Yue", "" ] ]
TITLE: BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation ABSTRACT: Over the past few years, the emergence of backdoor attacks has presented significant challenges to deep learning systems, allowing attackers to insert backdoors into neural networks. When data with a trigger is processed by a backdoor model, it can lead to mispredictions targeted by attackers, whereas normal data yields regular results. The scope of backdoor attacks is expanding beyond computer vision and encroaching into areas such as natural language processing and speech recognition. Nevertheless, existing backdoor defense methods are typically tailored to specific data modalities, restricting their application in multimodal contexts. While multimodal learning proves highly applicable in facial recognition, sentiment analysis, action recognition, and visual question answering, the security of these models remains a crucial concern. Specifically, there are no existing backdoor benchmarks targeting multimodal applications or related tasks. In order to facilitate research on multimodal backdoor learning, we introduce BackdoorMBTI, the first backdoor learning toolkit and benchmark designed for multimodal evaluation across three representative modalities from eleven commonly used datasets. BackdoorMBTI provides a systematic backdoor learning pipeline, encompassing data processing, data poisoning, backdoor training, and evaluation. The generated poison datasets and backdoor models enable detailed evaluation of backdoor defenses. Given the diversity of modalities, BackdoorMBTI facilitates systematic evaluation across different data types. Furthermore, BackdoorMBTI offers a standardized approach to handling practical factors in backdoor learning, such as issues related to data quality and erroneous labels. We anticipate that BackdoorMBTI will expedite future research in backdoor defense methods within a multimodal context. Code is available at https://github.com/SJTUHaiyangYu/BackdoorMBTI.
no_new_dataset
0.937268
2411.11505
Zhaoqing Wang
Zhaoqing Wang, Xiaobo Xia, Runnan Chen, Dongdong Yu, Changhu Wang, Mingming Gong, Tongliang Liu
LaVin-DiT: Large Vision Diffusion Transformer
37 pages, 30 figures, 4 tables. Accepted by CVPR 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper presents the Large Vision Diffusion Transformer (LaVin-DiT), a scalable and unified foundation model designed to tackle over 20 computer vision tasks in a generative framework. Unlike existing large vision models directly adapted from natural language processing architectures, which rely on less efficient autoregressive techniques and disrupt spatial relationships essential for vision data, LaVin-DiT introduces key innovations to optimize generative performance for vision tasks. First, to address the high dimensionality of visual data, we incorporate a spatial-temporal variational autoencoder that encodes data into a continuous latent space. Second, for generative modeling, we develop a joint diffusion transformer that progressively produces vision outputs. Third, for unified multi-task training, in-context learning is implemented. Input-target pairs serve as task context, which guides the diffusion transformer to align outputs with specific tasks within the latent space. During inference, a task-specific context set and test data as queries allow LaVin-DiT to generalize across tasks without fine-tuning. Trained on extensive vision datasets, the model is scaled from 0.1B to 3.4B parameters, demonstrating substantial scalability and state-of-the-art performance across diverse vision tasks. This work introduces a novel pathway for large vision foundation models, underscoring the promising potential of diffusion transformers. The code and models are available.
[ { "version": "v1", "created": "Mon, 18 Nov 2024 12:05:27 GMT" }, { "version": "v2", "created": "Sat, 23 Nov 2024 21:10:24 GMT" }, { "version": "v3", "created": "Tue, 26 Nov 2024 06:48:45 GMT" }, { "version": "v4", "created": "Thu, 6 Mar 2025 07:26:35 GMT" } ]
2025-03-07T00:00:00
[ [ "Wang", "Zhaoqing", "" ], [ "Xia", "Xiaobo", "" ], [ "Chen", "Runnan", "" ], [ "Yu", "Dongdong", "" ], [ "Wang", "Changhu", "" ], [ "Gong", "Mingming", "" ], [ "Liu", "Tongliang", "" ] ]
TITLE: LaVin-DiT: Large Vision Diffusion Transformer ABSTRACT: This paper presents the Large Vision Diffusion Transformer (LaVin-DiT), a scalable and unified foundation model designed to tackle over 20 computer vision tasks in a generative framework. Unlike existing large vision models directly adapted from natural language processing architectures, which rely on less efficient autoregressive techniques and disrupt spatial relationships essential for vision data, LaVin-DiT introduces key innovations to optimize generative performance for vision tasks. First, to address the high dimensionality of visual data, we incorporate a spatial-temporal variational autoencoder that encodes data into a continuous latent space. Second, for generative modeling, we develop a joint diffusion transformer that progressively produces vision outputs. Third, for unified multi-task training, in-context learning is implemented. Input-target pairs serve as task context, which guides the diffusion transformer to align outputs with specific tasks within the latent space. During inference, a task-specific context set and test data as queries allow LaVin-DiT to generalize across tasks without fine-tuning. Trained on extensive vision datasets, the model is scaled from 0.1B to 3.4B parameters, demonstrating substantial scalability and state-of-the-art performance across diverse vision tasks. This work introduces a novel pathway for large vision foundation models, underscoring the promising potential of diffusion transformers. The code and models are available.
no_new_dataset
0.946349
2411.13056
Quanhao Lu
Bing Cao, Quanhao Lu, Jiekang Feng, Qilong Wang, Qinghua Hu, Pengfei Zhu
Efficient Masked AutoEncoder for Video Object Counting and A Large-Scale Benchmark
ICLR25
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamic imbalance of the fore-background is a major challenge in video object counting, which is usually caused by the sparsity of target objects. This remains understudied in existing works and often leads to severe under-/over-prediction errors. To tackle this issue in video object counting, we propose a density-embedded Efficient Masked Autoencoder Counting (E-MAC) framework in this paper. To empower the model's representation ability on density regression, we develop a new $\mathtt{D}$ensity-$\mathtt{E}$mbedded $\mathtt{M}$asked m$\mathtt{O}$deling ($\mathtt{DEMO}$) method, which first takes the density map as an auxiliary modality to perform multimodal self-representation learning for image and density map. Although $\mathtt{DEMO}$ contributes to effective cross-modal regression guidance, it also brings in redundant background information, making it difficult to focus on the foreground regions. To handle this dilemma, we propose an efficient spatial adaptive masking derived from density maps to boost efficiency. Meanwhile, we employ an optical flow-based temporal collaborative fusion strategy to effectively capture the dynamic variations across frames, aligning features to derive multi-frame density residuals. The counting accuracy of the current frame is boosted by harnessing the information from adjacent frames. In addition, considering that most existing datasets are limited to human-centric scenarios, we first propose a large video bird counting dataset, DroneBird, in natural scenarios for migratory bird protection. Extensive experiments on three crowd datasets and our \textit{DroneBird} validate the superiority of our method over its counterparts. The code and dataset are available.
[ { "version": "v1", "created": "Wed, 20 Nov 2024 06:08:21 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 08:28:09 GMT" } ]
2025-03-07T00:00:00
[ [ "Cao", "Bing", "" ], [ "Lu", "Quanhao", "" ], [ "Feng", "Jiekang", "" ], [ "Wang", "Qilong", "" ], [ "Hu", "Qinghua", "" ], [ "Zhu", "Pengfei", "" ] ]
TITLE: Efficient Masked AutoEncoder for Video Object Counting and A Large-Scale Benchmark ABSTRACT: The dynamic imbalance of the fore-background is a major challenge in video object counting, which is usually caused by the sparsity of target objects. This remains understudied in existing works and often leads to severe under-/over-prediction errors. To tackle this issue in video object counting, we propose a density-embedded Efficient Masked Autoencoder Counting (E-MAC) framework in this paper. To empower the model's representation ability on density regression, we develop a new $\mathtt{D}$ensity-$\mathtt{E}$mbedded $\mathtt{M}$asked m$\mathtt{O}$deling ($\mathtt{DEMO}$) method, which first takes the density map as an auxiliary modality to perform multimodal self-representation learning for image and density map. Although $\mathtt{DEMO}$ contributes to effective cross-modal regression guidance, it also brings in redundant background information, making it difficult to focus on the foreground regions. To handle this dilemma, we propose an efficient spatial adaptive masking derived from density maps to boost efficiency. Meanwhile, we employ an optical flow-based temporal collaborative fusion strategy to effectively capture the dynamic variations across frames, aligning features to derive multi-frame density residuals. The counting accuracy of the current frame is boosted by harnessing the information from adjacent frames. In addition, considering that most existing datasets are limited to human-centric scenarios, we first propose a large video bird counting dataset, DroneBird, in natural scenarios for migratory bird protection. Extensive experiments on three crowd datasets and our \textit{DroneBird} validate the superiority of our method over its counterparts. The code and dataset are available.
new_dataset
0.864024
2411.16157
Chenjie Cao
Chenjie Cao, Chaohui Yu, Shang Liu, Fan Wang, Xiangyang Xue, Yanwei Fu
MVGenMaster: Scaling Multi-View Generation from Any Image via 3D Priors Enhanced Diffusion Model
Accepted by CVPR2025. Models and codes will be released at https://github.com/ewrfcas/MVGenMaster/. The project page is at https://ewrfcas.github.io/MVGenMaster/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We introduce MVGenMaster, a multi-view diffusion model enhanced with 3D priors to address versatile Novel View Synthesis (NVS) tasks. MVGenMaster leverages 3D priors that are warped using metric depth and camera poses, significantly enhancing both generalization and 3D consistency in NVS. Our model features a simple yet effective pipeline that can generate up to 100 novel views conditioned on variable reference views and camera poses with a single forward process. Additionally, we have developed a comprehensive large-scale multi-view image dataset called MvD-1M, comprising up to 1.6 million scenes, equipped with well-aligned metric depth to train MVGenMaster. Moreover, we present several training and model modifications to strengthen the model with scaled-up datasets. Extensive evaluations across in- and out-of-domain benchmarks demonstrate the effectiveness of our proposed method and data formulation. Models and codes will be released at https://github.com/ewrfcas/MVGenMaster/.
[ { "version": "v1", "created": "Mon, 25 Nov 2024 07:34:23 GMT" }, { "version": "v2", "created": "Tue, 26 Nov 2024 06:33:58 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 02:45:21 GMT" } ]
2025-03-07T00:00:00
[ [ "Cao", "Chenjie", "" ], [ "Yu", "Chaohui", "" ], [ "Liu", "Shang", "" ], [ "Wang", "Fan", "" ], [ "Xue", "Xiangyang", "" ], [ "Fu", "Yanwei", "" ] ]
TITLE: MVGenMaster: Scaling Multi-View Generation from Any Image via 3D Priors Enhanced Diffusion Model ABSTRACT: We introduce MVGenMaster, a multi-view diffusion model enhanced with 3D priors to address versatile Novel View Synthesis (NVS) tasks. MVGenMaster leverages 3D priors that are warped using metric depth and camera poses, significantly enhancing both generalization and 3D consistency in NVS. Our model features a simple yet effective pipeline that can generate up to 100 novel views conditioned on variable reference views and camera poses with a single forward process. Additionally, we have developed a comprehensive large-scale multi-view image dataset called MvD-1M, comprising up to 1.6 million scenes, equipped with well-aligned metric depth to train MVGenMaster. Moreover, we present several training and model modifications to strengthen the model with scaled-up datasets. Extensive evaluations across in- and out-of-domain benchmarks demonstrate the effectiveness of our proposed method and data formulation. Models and codes will be released at https://github.com/ewrfcas/MVGenMaster/.
new_dataset
0.953837
2412.00411
Mohammad Hasan Rahmani
Mohammad Hasan Rahmani, Rafael Berkvens and Maarten Weyn
Seismocardiography for Emotion Recognition: A Study on EmoWear with Insights from DEAP
16 pages, 9 figures
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Emotions have a profound impact on our daily lives, influencing not only our thoughts, behaviors, and interactions, but also our physiological reactions. Recent advances in wearable technology have facilitated studying emotions through cardio-respiratory signals. Accelerometers offer a non-invasive, convenient, and cost-effective method for capturing heart- and pulmonary-induced vibrations on the chest wall, specifically Seismocardiography (SCG) and Accelerometry-Derived Respiration (ADR). Their affordability, wide availability, and ability to provide rich contextual data make accelerometers ideal for everyday use. While accelerometers have been used as part of broader modality fusions for Emotion Recognition (ER), their stand-alone potential via SCG and ADR remains unexplored. Bridging this gap could significantly help the embedding of ER into real-world applications. To address this gap, we introduce SCG as a novel modality for ER and evaluate its performance using the EmoWear dataset. First, we replicate the single-trial emotion classification pipeline from the DEAP dataset study, achieving similar results. Then we use our validated pipeline to train models that predict affective valence-arousal states using SCG and compare them against established cardiac signals, Electrocardiography (ECG) and Blood Volume Pulse (BVP). Results show that SCG is a viable modality for ER, achieving similar performance to ECG and BVP. By combining ADR with SCG, we achieved a working ER framework that only requires a single chest-worn accelerometer. These findings pave the way for integrating ER into real-world applications, enabling seamless affective computing in everyday life.
[ { "version": "v1", "created": "Sat, 30 Nov 2024 09:39:30 GMT" }, { "version": "v2", "created": "Wed, 5 Mar 2025 22:03:48 GMT" } ]
2025-03-07T00:00:00
[ [ "Rahmani", "Mohammad Hasan", "" ], [ "Berkvens", "Rafael", "" ], [ "Weyn", "Maarten", "" ] ]
TITLE: Seismocardiography for Emotion Recognition: A Study on EmoWear with Insights from DEAP ABSTRACT: Emotions have a profound impact on our daily lives, influencing not only our thoughts, behaviors, and interactions, but also our physiological reactions. Recent advances in wearable technology have facilitated studying emotions through cardio-respiratory signals. Accelerometers offer a non-invasive, convenient, and cost-effective method for capturing heart- and pulmonary-induced vibrations on the chest wall, specifically Seismocardiography (SCG) and Accelerometry-Derived Respiration (ADR). Their affordability, wide availability, and ability to provide rich contextual data make accelerometers ideal for everyday use. While accelerometers have been used as part of broader modality fusions for Emotion Recognition (ER), their stand-alone potential via SCG and ADR remains unexplored. Bridging this gap could significantly help the embedding of ER into real-world applications. To address this gap, we introduce SCG as a novel modality for ER and evaluate its performance using the EmoWear dataset. First, we replicate the single-trial emotion classification pipeline from the DEAP dataset study, achieving similar results. Then we use our validated pipeline to train models that predict affective valence-arousal states using SCG and compare them against established cardiac signals, Electrocardiography (ECG) and Blood Volume Pulse (BVP). Results show that SCG is a viable modality for ER, achieving similar performance to ECG and BVP. By combining ADR with SCG, we achieved a working ER framework that only requires a single chest-worn accelerometer. These findings pave the way for integrating ER into real-world applications, enabling seamless affective computing in everyday life.
no_new_dataset
0.943971
2412.07093
Joel Daniel Andersson
Joel Daniel Andersson, Rasmus Pagh
Streaming Private Continual Counting via Binning
Accepted to SaTML 2025. Final version to appear on IEEE eXplore
null
null
null
cs.LG cs.CR cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In differential privacy, $\textit{continual observation}$ refers to problems in which we wish to continuously release a function of a dataset that is revealed one element at a time. The challenge is to maintain a good approximation while keeping the combined output over all time steps differentially private. In the special case of $\textit{continual counting}$ we seek to approximate a sum of binary input elements. This problem has received considerable attention lately, in part due to its relevance in implementations of differentially private stochastic gradient descent. $\textit{Factorization mechanisms}$ are the leading approach to continual counting, but the best such mechanisms do not work well in $\textit{streaming}$ settings since they require space proportional to the size of the input. In this paper, we present a simple approach to approximating factorization mechanisms in low space via $\textit{binning}$, where adjacent matrix entries with similar values are changed to be identical in such a way that a matrix-vector product can be maintained in sublinear space. Our approach has provable sublinear space guarantees for a class of lower triangular matrices whose entries are monotonically decreasing away from the diagonal. We show empirically that even with very low space usage we are able to closely match, and sometimes surpass, the performance of asymptotically optimal factorization mechanisms. Recently, and independently of our work, Dvijotham et al. have also suggested an approach to implementing factorization mechanisms in a streaming setting. Their work differs from ours in several respects: It only addresses factorization into $\textit{Toeplitz}$ matrices, only considers $\textit{maximum}$ error, and uses a different technique based on rational function approximation that seems less versatile than our binning approach.
[ { "version": "v1", "created": "Tue, 10 Dec 2024 01:21:56 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 16:14:01 GMT" } ]
2025-03-07T00:00:00
[ [ "Andersson", "Joel Daniel", "" ], [ "Pagh", "Rasmus", "" ] ]
TITLE: Streaming Private Continual Counting via Binning ABSTRACT: In differential privacy, $\textit{continual observation}$ refers to problems in which we wish to continuously release a function of a dataset that is revealed one element at a time. The challenge is to maintain a good approximation while keeping the combined output over all time steps differentially private. In the special case of $\textit{continual counting}$ we seek to approximate a sum of binary input elements. This problem has received considerable attention lately, in part due to its relevance in implementations of differentially private stochastic gradient descent. $\textit{Factorization mechanisms}$ are the leading approach to continual counting, but the best such mechanisms do not work well in $\textit{streaming}$ settings since they require space proportional to the size of the input. In this paper, we present a simple approach to approximating factorization mechanisms in low space via $\textit{binning}$, where adjacent matrix entries with similar values are changed to be identical in such a way that a matrix-vector product can be maintained in sublinear space. Our approach has provable sublinear space guarantees for a class of lower triangular matrices whose entries are monotonically decreasing away from the diagonal. We show empirically that even with very low space usage we are able to closely match, and sometimes surpass, the performance of asymptotically optimal factorization mechanisms. Recently, and independently of our work, Dvijotham et al. have also suggested an approach to implementing factorization mechanisms in a streaming setting. Their work differs from ours in several respects: It only addresses factorization into $\textit{Toeplitz}$ matrices, only considers $\textit{maximum}$ error, and uses a different technique based on rational function approximation that seems less versatile than our binning approach.
no_new_dataset
0.939137
2412.15527
Weizhi Xian
Weizhi Xian, Mingliang Zhou, Leong Hou U, Lang Shujun, Bin Fang, Tao Xiang, Zhaowei Shang, Weijia Jia
PIGUIQA: A Physical Imaging Guided Perceptual Framework for Underwater Image Quality Assessment
null
null
null
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a Physical Imaging Guided perceptual framework for Underwater Image Quality Assessment (UIQA), termed PIGUIQA. First, we formulate UIQA as a comprehensive problem that considers the combined effects of direct transmission attenuation and backward scattering on image perception. By leveraging underwater radiative transfer theory, we systematically integrate physics-based imaging estimations to establish quantitative metrics for these distortions. Second, recognizing spatial variations in image content significance and human perceptual sensitivity to distortions, we design a module built upon a neighborhood attention mechanism for local perception of images. This module effectively captures subtle features in images, thereby enhancing the adaptive perception of distortions on the basis of local information. Third, by employing a global perceptual aggregator that further integrates the holistic image scene with underwater distortion information, the proposed model accurately predicts image quality scores. Extensive experiments across multiple benchmarks demonstrate that PIGUIQA achieves state-of-the-art performance while maintaining robust cross-dataset generalizability. The implementation is publicly available at https://anonymous.4open.science/r/PIGUIQA-A465/
[ { "version": "v1", "created": "Fri, 20 Dec 2024 03:31:45 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 03:19:13 GMT" } ]
2025-03-07T00:00:00
[ [ "Xian", "Weizhi", "" ], [ "Zhou", "Mingliang", "" ], [ "U", "Leong Hou", "" ], [ "Shujun", "Lang", "" ], [ "Fang", "Bin", "" ], [ "Xiang", "Tao", "" ], [ "Shang", "Zhaowei", "" ], [ "Jia", "Weijia", "" ] ]
TITLE: PIGUIQA: A Physical Imaging Guided Perceptual Framework for Underwater Image Quality Assessment ABSTRACT: In this paper, we propose a Physical Imaging Guided perceptual framework for Underwater Image Quality Assessment (UIQA), termed PIGUIQA. First, we formulate UIQA as a comprehensive problem that considers the combined effects of direct transmission attenuation and backward scattering on image perception. By leveraging underwater radiative transfer theory, we systematically integrate physics-based imaging estimations to establish quantitative metrics for these distortions. Second, recognizing spatial variations in image content significance and human perceptual sensitivity to distortions, we design a module built upon a neighborhood attention mechanism for local perception of images. This module effectively captures subtle features in images, thereby enhancing the adaptive perception of distortions on the basis of local information. Third, by employing a global perceptual aggregator that further integrates the holistic image scene with underwater distortion information, the proposed model accurately predicts image quality scores. Extensive experiments across multiple benchmarks demonstrate that PIGUIQA achieves state-of-the-art performance while maintaining robust cross-dataset generalizability. The implementation is publicly available at https://anonymous.4open.science/r/PIGUIQA-A465/
no_new_dataset
0.947866
2412.16490
Jiayi Chen
Jiayi Chen, Yubin Ke, He Wang
BODex: Scalable and Efficient Robotic Dexterous Grasp Synthesis Using Bilevel Optimization
ICRA 2025
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Robotic dexterous grasping is important for interacting with the environment. To unleash the potential of data-driven models for dexterous grasping, a large-scale, high-quality dataset is essential. While gradient-based optimization offers a promising way for constructing such datasets, previous works suffer from limitations, such as inefficiency, strong assumptions in the grasp quality energy, or limited object sets for experiments. Moreover, the lack of a standard benchmark for comparing different methods and datasets hinders progress in this field. To address these challenges, we develop a highly efficient synthesis system and a comprehensive benchmark with MuJoCo for dexterous grasping. We formulate grasp synthesis as a bilevel optimization problem, combining a novel lower-level quadratic programming (QP) with an upper-level gradient descent process. By leveraging recent advances in CUDA-accelerated robotic libraries and GPU-based QP solvers, our system can parallelize thousands of grasps and synthesize over 49 grasps per second on a single 3090 GPU. Our synthesized grasps for Shadow, Allegro, and Leap hands all achieve a success rate above 75% in simulation, with a penetration depth under 1 mm, outperforming existing baselines on nearly all metrics. Compared to the previous large-scale dataset, DexGraspNet, our dataset significantly improves the performance of learning models, raising the success rate from around 40% to 80% in simulation. Real-world testing of the trained model on the Shadow Hand achieves an 81% success rate across 20 diverse objects. The codes and datasets are released on our project page: https://pku-epic.github.io/BODex.
[ { "version": "v1", "created": "Sat, 21 Dec 2024 05:22:53 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 03:12:16 GMT" } ]
2025-03-07T00:00:00
[ [ "Chen", "Jiayi", "" ], [ "Ke", "Yubin", "" ], [ "Wang", "He", "" ] ]
TITLE: BODex: Scalable and Efficient Robotic Dexterous Grasp Synthesis Using Bilevel Optimization ABSTRACT: Robotic dexterous grasping is important for interacting with the environment. To unleash the potential of data-driven models for dexterous grasping, a large-scale, high-quality dataset is essential. While gradient-based optimization offers a promising way for constructing such datasets, previous works suffer from limitations, such as inefficiency, strong assumptions in the grasp quality energy, or limited object sets for experiments. Moreover, the lack of a standard benchmark for comparing different methods and datasets hinders progress in this field. To address these challenges, we develop a highly efficient synthesis system and a comprehensive benchmark with MuJoCo for dexterous grasping. We formulate grasp synthesis as a bilevel optimization problem, combining a novel lower-level quadratic programming (QP) with an upper-level gradient descent process. By leveraging recent advances in CUDA-accelerated robotic libraries and GPU-based QP solvers, our system can parallelize thousands of grasps and synthesize over 49 grasps per second on a single 3090 GPU. Our synthesized grasps for Shadow, Allegro, and Leap hands all achieve a success rate above 75% in simulation, with a penetration depth under 1 mm, outperforming existing baselines on nearly all metrics. Compared to the previous large-scale dataset, DexGraspNet, our dataset significantly improves the performance of learning models, raising the success rate from around 40% to 80% in simulation. Real-world testing of the trained model on the Shadow Hand achieves an 81% success rate across 20 diverse objects. The codes and datasets are released on our project page: https://pku-epic.github.io/BODex.
no_new_dataset
0.915015
2412.16880
Shenghai Yuan
Shenghai Yuan, Boyang Lou, Thien-Minh Nguyen, Pengyu Yin, Muqing Cao, Xinghang Xu, Jianping Li, Jie Xu, Siyu Chen, Lihua Xie
Large-Scale UWB Anchor Calibration and One-Shot Localization Using Gaussian Process
This work has been accepted to IEEE International Conference on Robotics and Automation (ICRA) @ 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/redistribution, creating new works, or reuse of any copyrighted components of this work in other media
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Ultra-wideband (UWB) is gaining popularity with devices like AirTags for precise home item localization but faces significant challenges when scaled to large environments like seaports. The main challenges are calibration and localization in obstructed conditions, which are common in logistics environments. Traditional calibration methods, dependent on line-of-sight (LoS), are slow, costly, and unreliable in seaports and warehouses, making large-scale localization a significant pain point in the industry. To overcome these challenges, we propose a UWB-LiDAR fusion-based calibration and one-shot localization framework. Our method uses Gaussian Processes to estimate anchor positions from continuous-time LiDAR Inertial Odometry with sampled UWB ranges. This approach ensures accurate and reliable calibration with just one round of sampling in large-scale areas, i.e., 600x450 square meters. Given LoS issues, UWB-only localization can be problematic, even when anchor positions are known. We demonstrate that by applying a UWB-range filter, the search range for LiDAR loop closure descriptors is significantly reduced, improving both accuracy and speed. This concept can be applied to other loop closure detection methods, enabling cost-effective localization in large-scale warehouses and seaports. It significantly improves precision in challenging environments where UWB-only and LiDAR-Inertial methods fall short, as shown in the video (https://youtu.be/oY8jQKdM7lU). We will open-source our datasets and calibration codes for community use.
[ { "version": "v1", "created": "Sun, 22 Dec 2024 06:20:59 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 07:11:20 GMT" } ]
2025-03-07T00:00:00
[ [ "Yuan", "Shenghai", "" ], [ "Lou", "Boyang", "" ], [ "Nguyen", "Thien-Minh", "" ], [ "Yin", "Pengyu", "" ], [ "Cao", "Muqing", "" ], [ "Xu", "Xinghang", "" ], [ "Li", "Jianping", "" ], [ "Xu", "Jie", "" ], [ "Chen", "Siyu", "" ], [ "Xie", "Lihua", "" ] ]
TITLE: Large-Scale UWB Anchor Calibration and One-Shot Localization Using Gaussian Process ABSTRACT: Ultra-wideband (UWB) is gaining popularity with devices like AirTags for precise home item localization but faces significant challenges when scaled to large environments like seaports. The main challenges are calibration and localization in obstructed conditions, which are common in logistics environments. Traditional calibration methods, dependent on line-of-sight (LoS), are slow, costly, and unreliable in seaports and warehouses, making large-scale localization a significant pain point in the industry. To overcome these challenges, we propose a UWB-LiDAR fusion-based calibration and one-shot localization framework. Our method uses Gaussian Processes to estimate anchor positions from continuous-time LiDAR Inertial Odometry with sampled UWB ranges. This approach ensures accurate and reliable calibration with just one round of sampling in large-scale areas, i.e., 600x450 square meters. Given LoS issues, UWB-only localization can be problematic, even when anchor positions are known. We demonstrate that by applying a UWB-range filter, the search range for LiDAR loop closure descriptors is significantly reduced, improving both accuracy and speed. This concept can be applied to other loop closure detection methods, enabling cost-effective localization in large-scale warehouses and seaports. It significantly improves precision in challenging environments where UWB-only and LiDAR-Inertial methods fall short, as shown in the video (https://youtu.be/oY8jQKdM7lU). We will open-source our datasets and calibration codes for community use.
no_new_dataset
0.953665
2501.04873
Alexander Gabriel Valverde Guillen
Alexander Valverde and Luis Solano
Back Home: A Machine Learning Approach to Seashell Classification and Ecosystem Restoration
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
In Costa Rica, an average of 5 tons of seashells are extracted from ecosystems annually. Confiscated seashells cannot be returned to their ecosystems due to the lack of origin recognition. To address this issue, we developed a convolutional neural network (CNN) specifically for seashell identification. We built a dataset from scratch, consisting of approximately 19,000 images from the Pacific and Caribbean coasts. Using this dataset, the model achieved a classification accuracy exceeding 85%. The model has been integrated into a user-friendly application, which has classified over 36,000 seashells to date, delivering real-time results within 3 seconds per image. To further enhance the system's accuracy, an anomaly detection mechanism was incorporated to filter out irrelevant or anomalous inputs, ensuring only valid seashell images are processed.
[ { "version": "v1", "created": "Wed, 8 Jan 2025 23:07:10 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 17:35:19 GMT" } ]
2025-03-07T00:00:00
[ [ "Valverde", "Alexander", "" ], [ "Solano", "Luis", "" ] ]
TITLE: Back Home: A Machine Learning Approach to Seashell Classification and Ecosystem Restoration ABSTRACT: In Costa Rica, an average of 5 tons of seashells are extracted from ecosystems annually. Confiscated seashells cannot be returned to their ecosystems due to the lack of origin recognition. To address this issue, we developed a convolutional neural network (CNN) specifically for seashell identification. We built a dataset from scratch, consisting of approximately 19,000 images from the Pacific and Caribbean coasts. Using this dataset, the model achieved a classification accuracy exceeding 85%. The model has been integrated into a user-friendly application, which has classified over 36,000 seashells to date, delivering real-time results within 3 seconds per image. To further enhance the system's accuracy, an anomaly detection mechanism was incorporated to filter out irrelevant or anomalous inputs, ensuring only valid seashell images are processed.
new_dataset
0.96378
2501.05880
Daniel Rossi
Daniel Rossi, Guido Borghi, Roberto Vezzani
TakuNet: an Energy-Efficient CNN for Real-Time Inference on Embedded UAV systems in Emergency Response Scenarios
10 pages, 2 figures, 6 tables, Accepted at WACVW 2025, Tucson (Arizona), United States
In Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 376-385
null
null
cs.CV cs.PF
http://creativecommons.org/licenses/by-nc-sa/4.0/
Designing efficient neural networks for embedded devices is a critical challenge, particularly in applications requiring real-time performance, such as aerial imaging with drones and UAVs for emergency responses. In this work, we introduce TakuNet, a novel light-weight architecture which employs techniques such as depth-wise convolutions and an early downsampling stem to reduce computational complexity while maintaining high accuracy. It leverages dense connections for fast convergence during training and uses 16-bit floating-point precision for optimization on embedded hardware accelerators. Experimental evaluation on two public datasets shows that TakuNet achieves near-state-of-the-art accuracy in classifying aerial images of emergency situations, despite its minimal parameter count. Real-world tests on embedded devices, namely Jetson Orin Nano and Raspberry Pi, confirm TakuNet's efficiency, achieving more than 650 fps on the 15W Jetson board, making it suitable for real-time AI processing on resource-constrained platforms and advancing the applicability of drones in emergency scenarios. The code and implementation details are publicly released.
[ { "version": "v1", "created": "Fri, 10 Jan 2025 11:32:56 GMT" }, { "version": "v2", "created": "Thu, 16 Jan 2025 20:35:28 GMT" }, { "version": "v3", "created": "Wed, 5 Mar 2025 21:00:34 GMT" } ]
2025-03-07T00:00:00
[ [ "Rossi", "Daniel", "" ], [ "Borghi", "Guido", "" ], [ "Vezzani", "Roberto", "" ] ]
TITLE: TakuNet: an Energy-Efficient CNN for Real-Time Inference on Embedded UAV systems in Emergency Response Scenarios ABSTRACT: Designing efficient neural networks for embedded devices is a critical challenge, particularly in applications requiring real-time performance, such as aerial imaging with drones and UAVs for emergency responses. In this work, we introduce TakuNet, a novel light-weight architecture which employs techniques such as depth-wise convolutions and an early downsampling stem to reduce computational complexity while maintaining high accuracy. It leverages dense connections for fast convergence during training and uses 16-bit floating-point precision for optimization on embedded hardware accelerators. Experimental evaluation on two public datasets shows that TakuNet achieves near-state-of-the-art accuracy in classifying aerial images of emergency situations, despite its minimal parameter count. Real-world tests on embedded devices, namely Jetson Orin Nano and Raspberry Pi, confirm TakuNet's efficiency, achieving more than 650 fps on the 15W Jetson board, making it suitable for real-time AI processing on resource-constrained platforms and advancing the applicability of drones in emergency scenarios. The code and implementation details are publicly released.
no_new_dataset
0.948346
2501.13430
Rui Xu
Rui Xu, Chao Chen, Yue Sun, Parvathinathan Venkitasubramaniam, Sihong Xie
Wasserstein-regularized Conformal Prediction under General Distribution Shift
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Conformal prediction yields a prediction set with guaranteed $1-\alpha$ coverage of the true target under the i.i.d. assumption, which may not hold and lead to a gap between $1-\alpha$ and the actual coverage. Prior studies bound the gap using total variation distance, which cannot capture how the gap changes under distribution shift at a given $\alpha$. Besides, existing methods are mostly limited to covariate shift, while general joint distribution shifts are more common in practice but less researched. In response, we first propose a Wasserstein distance-based upper bound of the coverage gap and analyze the bound using probability measure pushforwards between the shifted joint data and conformal score distributions, enabling a separation of the effect of covariate and concept shifts over the coverage gap. We exploit the separation to design an algorithm based on importance weighting and regularized representation learning (WR-CP) to reduce the Wasserstein bound with a finite-sample error bound. WR-CP achieves a controllable balance between conformal prediction accuracy and efficiency. Experiments on six datasets prove that WR-CP can reduce coverage gaps to $3.2\%$ across different confidence levels and output prediction sets 37$\%$ smaller than the worst-case approach on average.
[ { "version": "v1", "created": "Thu, 23 Jan 2025 07:29:44 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 09:22:38 GMT" } ]
2025-03-07T00:00:00
[ [ "Xu", "Rui", "" ], [ "Chen", "Chao", "" ], [ "Sun", "Yue", "" ], [ "Venkitasubramaniam", "Parvathinathan", "" ], [ "Xie", "Sihong", "" ] ]
TITLE: Wasserstein-regularized Conformal Prediction under General Distribution Shift ABSTRACT: Conformal prediction yields a prediction set with guaranteed $1-\alpha$ coverage of the true target under the i.i.d. assumption, which may not hold and lead to a gap between $1-\alpha$ and the actual coverage. Prior studies bound the gap using total variation distance, which cannot capture how the gap changes under distribution shift at a given $\alpha$. Besides, existing methods are mostly limited to covariate shift, while general joint distribution shifts are more common in practice but less researched. In response, we first propose a Wasserstein distance-based upper bound of the coverage gap and analyze the bound using probability measure pushforwards between the shifted joint data and conformal score distributions, enabling a separation of the effect of covariate and concept shifts over the coverage gap. We exploit the separation to design an algorithm based on importance weighting and regularized representation learning (WR-CP) to reduce the Wasserstein bound with a finite-sample error bound. WR-CP achieves a controllable balance between conformal prediction accuracy and efficiency. Experiments on six datasets prove that WR-CP can reduce coverage gaps to $3.2\%$ across different confidence levels and output prediction sets 37$\%$ smaller than the worst-case approach on average.
no_new_dataset
0.946597
2501.15361
Sajjad Ghiasvand
Sajjad Ghiasvand, Mahnoosh Alizadeh, Ramtin Pedarsani
Decentralized Low-Rank Fine-Tuning of Large Language Models
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
While parameter-efficient fine-tuning (PEFT) techniques like Low-Rank Adaptation (LoRA) offer computationally efficient adaptations of Large Language Models (LLMs), their practical deployment often assumes centralized data and training environments. However, real-world scenarios frequently involve distributed, privacy-sensitive datasets that require decentralized solutions. Federated learning (FL) addresses data privacy by coordinating model updates across clients, but it is typically based on centralized aggregation through a parameter server, which can introduce bottlenecks and communication constraints. Decentralized learning, in contrast, eliminates this dependency by enabling direct collaboration between clients, improving scalability and efficiency in distributed environments. Despite its advantages, decentralized LLM fine-tuning remains underexplored. In this work, we propose Dec-LoRA, a decentralized fine-tuning algorithm for LLMs based on LoRA. Through extensive experiments on BERT and LLaMA-2 models, we demonstrate that Dec-LoRA achieves performance comparable to centralized LoRA under various conditions, including data heterogeneity and quantization constraints. Additionally, we provide a rigorous theoretical guarantee proving the convergence of our algorithm to a stationary point for non-convex and smooth loss functions. These findings highlight the potential of Dec-LoRA for scalable LLM fine-tuning in decentralized environments.
[ { "version": "v1", "created": "Sun, 26 Jan 2025 01:56:25 GMT" }, { "version": "v2", "created": "Sun, 9 Feb 2025 00:22:42 GMT" }, { "version": "v3", "created": "Wed, 5 Mar 2025 22:09:09 GMT" } ]
2025-03-07T00:00:00
[ [ "Ghiasvand", "Sajjad", "" ], [ "Alizadeh", "Mahnoosh", "" ], [ "Pedarsani", "Ramtin", "" ] ]
TITLE: Decentralized Low-Rank Fine-Tuning of Large Language Models ABSTRACT: While parameter-efficient fine-tuning (PEFT) techniques like Low-Rank Adaptation (LoRA) offer computationally efficient adaptations of Large Language Models (LLMs), their practical deployment often assumes centralized data and training environments. However, real-world scenarios frequently involve distributed, privacy-sensitive datasets that require decentralized solutions. Federated learning (FL) addresses data privacy by coordinating model updates across clients, but it is typically based on centralized aggregation through a parameter server, which can introduce bottlenecks and communication constraints. Decentralized learning, in contrast, eliminates this dependency by enabling direct collaboration between clients, improving scalability and efficiency in distributed environments. Despite its advantages, decentralized LLM fine-tuning remains underexplored. In this work, we propose Dec-LoRA, a decentralized fine-tuning algorithm for LLMs based on LoRA. Through extensive experiments on BERT and LLaMA-2 models, we demonstrate that Dec-LoRA achieves performance comparable to centralized LoRA under various conditions, including data heterogeneity and quantization constraints. Additionally, we provide a rigorous theoretical guarantee proving the convergence of our algorithm to a stationary point for non-convex and smooth loss functions. These findings highlight the potential of Dec-LoRA for scalable LLM fine-tuning in decentralized environments.
no_new_dataset
0.947235
2501.17634
Lucas Lange
Lucas Lange and Ole Borchardt and Erhard Rahm
Federated Learning With Individualized Privacy Through Client Sampling
Accepted at 10th International Conference on Machine Learning Technologies (ICMLT 2025)
null
null
null
cs.LG cs.CR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With growing concerns about user data collection, individualized privacy has emerged as a promising solution to balance protection and utility by accounting for diverse user privacy preferences. Instead of enforcing a uniform level of anonymization for all users, this approach allows individuals to choose privacy settings that align with their comfort levels. Building on this idea, we propose an adapted method for enabling Individualized Differential Privacy (IDP) in Federated Learning (FL) by handling clients according to their personal privacy preferences. By extending the SAMPLE algorithm from centralized settings to FL, we calculate client-specific sampling rates based on their heterogeneous privacy budgets and integrate them into a modified IDP-FedAvg algorithm. We test this method under realistic privacy distributions and on multiple datasets. The experimental results demonstrate that our approach achieves clear improvements over uniform DP baselines, reducing the trade-off between privacy and utility. Compared to the alternative SCALE method in related work, which assigns differing noise scales to clients, our method performs notably better. However, challenges remain for complex tasks with non-i.i.d. data, primarily stemming from the constraints of the decentralized setting.
[ { "version": "v1", "created": "Wed, 29 Jan 2025 13:11:21 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 11:17:31 GMT" } ]
2025-03-07T00:00:00
[ [ "Lange", "Lucas", "" ], [ "Borchardt", "Ole", "" ], [ "Rahm", "Erhard", "" ] ]
TITLE: Federated Learning With Individualized Privacy Through Client Sampling ABSTRACT: With growing concerns about user data collection, individualized privacy has emerged as a promising solution to balance protection and utility by accounting for diverse user privacy preferences. Instead of enforcing a uniform level of anonymization for all users, this approach allows individuals to choose privacy settings that align with their comfort levels. Building on this idea, we propose an adapted method for enabling Individualized Differential Privacy (IDP) in Federated Learning (FL) by handling clients according to their personal privacy preferences. By extending the SAMPLE algorithm from centralized settings to FL, we calculate client-specific sampling rates based on their heterogeneous privacy budgets and integrate them into a modified IDP-FedAvg algorithm. We test this method under realistic privacy distributions and on multiple datasets. The experimental results demonstrate that our approach achieves clear improvements over uniform DP baselines, reducing the trade-off between privacy and utility. Compared to the alternative SCALE method in related work, which assigns differing noise scales to clients, our method performs notably better. However, challenges remain for complex tasks with non-i.i.d. data, primarily stemming from the constraints of the decentralized setting.
no_new_dataset
0.947284
2502.01262
EunSol Park
Eun-Sol Park, MiSo Park, Seung Park, Yong-Goo Shin
FSPGD: Rethinking Black-box Attacks on Semantic Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transferability, the ability of adversarial examples crafted for one model to deceive other models, is crucial for black-box attacks. Despite advancements in attack methods for semantic segmentation, transferability remains limited, reducing their effectiveness in real-world applications. To address this, we introduce the Feature Similarity Projected Gradient Descent (FSPGD) attack, a novel black-box approach that enhances both attack performance and transferability. Unlike conventional segmentation attacks that rely on output predictions for gradient calculation, FSPGD computes gradients from intermediate layer features. Specifically, our method introduces a loss function that targets local information by comparing features between clean images and adversarial examples, while also disrupting contextual information by accounting for spatial relationships between objects. Experiments on Pascal VOC 2012 and Cityscapes datasets demonstrate that FSPGD achieves superior transferability and attack performance, establishing a new state-of-the-art benchmark. Code is available at https://github.com/KU-AIVS/FSPGD.
[ { "version": "v1", "created": "Mon, 3 Feb 2025 11:36:01 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 14:50:58 GMT" } ]
2025-03-07T00:00:00
[ [ "Park", "Eun-Sol", "" ], [ "Park", "MiSo", "" ], [ "Park", "Seung", "" ], [ "Shin", "Yong-Goo", "" ] ]
TITLE: FSPGD: Rethinking Black-box Attacks on Semantic Segmentation ABSTRACT: Transferability, the ability of adversarial examples crafted for one model to deceive other models, is crucial for black-box attacks. Despite advancements in attack methods for semantic segmentation, transferability remains limited, reducing their effectiveness in real-world applications. To address this, we introduce the Feature Similarity Projected Gradient Descent (FSPGD) attack, a novel black-box approach that enhances both attack performance and transferability. Unlike conventional segmentation attacks that rely on output predictions for gradient calculation, FSPGD computes gradients from intermediate layer features. Specifically, our method introduces a loss function that targets local information by comparing features between clean images and adversarial examples, while also disrupting contextual information by accounting for spatial relationships between objects. Experiments on Pascal VOC 2012 and Cityscapes datasets demonstrate that FSPGD achieves superior transferability and attack performance, establishing a new state-of-the-art benchmark. Code is available at https://github.com/KU-AIVS/FSPGD.
no_new_dataset
0.943971
2502.02954
Ryotaro Kawata
Ryotaro Kawata, Kazusato Oko, Atsushi Nitanda, Taiji Suzuki
Direct Distributional Optimization for Provable Alignment of Diffusion Models
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
We introduce a novel alignment method for diffusion models from distribution optimization perspectives while providing rigorous convergence guarantees. We first formulate the problem as a generic regularized loss minimization over probability distributions and directly optimize the distribution using the Dual Averaging method. Next, we enable sampling from the learned distribution by approximating its score function via Doob's $h$-transform technique. The proposed framework is supported by rigorous convergence guarantees and an end-to-end bound on the sampling error, which imply that when the original distribution's score is known accurately, the complexity of sampling from shifted distributions is independent of isoperimetric conditions. This framework is broadly applicable to general distribution optimization problems, including alignment tasks in Reinforcement Learning with Human Feedback (RLHF), Direct Preference Optimization (DPO), and Kahneman-Tversky Optimization (KTO). We empirically validate its performance on synthetic and image datasets using the DPO objective.
[ { "version": "v1", "created": "Wed, 5 Feb 2025 07:35:15 GMT" }, { "version": "v2", "created": "Mon, 3 Mar 2025 03:56:38 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 01:25:25 GMT" } ]
2025-03-07T00:00:00
[ [ "Kawata", "Ryotaro", "" ], [ "Oko", "Kazusato", "" ], [ "Nitanda", "Atsushi", "" ], [ "Suzuki", "Taiji", "" ] ]
TITLE: Direct Distributional Optimization for Provable Alignment of Diffusion Models ABSTRACT: We introduce a novel alignment method for diffusion models from distribution optimization perspectives while providing rigorous convergence guarantees. We first formulate the problem as a generic regularized loss minimization over probability distributions and directly optimize the distribution using the Dual Averaging method. Next, we enable sampling from the learned distribution by approximating its score function via Doob's $h$-transform technique. The proposed framework is supported by rigorous convergence guarantees and an end-to-end bound on the sampling error, which imply that when the original distribution's score is known accurately, the complexity of sampling from shifted distributions is independent of isoperimetric conditions. This framework is broadly applicable to general distribution optimization problems, including alignment tasks in Reinforcement Learning with Human Feedback (RLHF), Direct Preference Optimization (DPO), and Kahneman-Tversky Optimization (KTO). We empirically validate its performance on synthetic and image datasets using the DPO objective.
no_new_dataset
0.9455
2502.08177
Jacob Goldberg
Aaron Fanous and Jacob Goldberg (1), Ank A. Agarwal (1), Joanna Lin (1), Anson Zhou (1), Roxana Daneshjou (1), Sanmi Koyejo (1) ((1) Stanford University)
SycEval: Evaluating LLM Sycophancy
10 pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Large language models (LLMs) are increasingly applied in educational, clinical, and professional settings, but their tendency for sycophancy -- prioritizing user agreement over independent reasoning -- poses risks to reliability. This study introduces a framework to evaluate sycophantic behavior in ChatGPT-4o, Claude-Sonnet, and Gemini-1.5-Pro across AMPS (mathematics) and MedQuad (medical advice) datasets. Sycophantic behavior was observed in 58.19% of cases, with Gemini exhibiting the highest rate (62.47%) and ChatGPT the lowest (56.71%). Progressive sycophancy, leading to correct answers, occurred in 43.52% of cases, while regressive sycophancy, leading to incorrect answers, was observed in 14.66%. Preemptive rebuttals demonstrated significantly higher sycophancy rates than in-context rebuttals (61.75% vs. 56.52%, $Z=5.87$, $p<0.001$), particularly in computational tasks, where regressive sycophancy increased significantly (preemptive: 8.13%, in-context: 3.54%, $p<0.001$). Simple rebuttals maximized progressive sycophancy ($Z=6.59$, $p<0.001$), while citation-based rebuttals exhibited the highest regressive rates ($Z=6.59$, $p<0.001$). Sycophantic behavior showed high persistence (78.5%, 95% CI: [77.2%, 79.8%]) regardless of context or model. These findings emphasize the risks and opportunities of deploying LLMs in structured and dynamic domains, offering insights into prompt programming and model optimization for safer AI applications.
[ { "version": "v1", "created": "Wed, 12 Feb 2025 07:32:42 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 00:41:10 GMT" } ]
2025-03-07T00:00:00
[ [ "Fanous", "Aaron", "" ], [ "Goldberg", "Jacob", "" ], [ "Agarwal", "Ank A.", "" ], [ "Lin", "Joanna", "" ], [ "Zhou", "Anson", "" ], [ "Daneshjou", "Roxana", "" ], [ "Koyejo", "Sanmi", "" ] ]
TITLE: SycEval: Evaluating LLM Sycophancy ABSTRACT: Large language models (LLMs) are increasingly applied in educational, clinical, and professional settings, but their tendency for sycophancy -- prioritizing user agreement over independent reasoning -- poses risks to reliability. This study introduces a framework to evaluate sycophantic behavior in ChatGPT-4o, Claude-Sonnet, and Gemini-1.5-Pro across AMPS (mathematics) and MedQuad (medical advice) datasets. Sycophantic behavior was observed in 58.19% of cases, with Gemini exhibiting the highest rate (62.47%) and ChatGPT the lowest (56.71%). Progressive sycophancy, leading to correct answers, occurred in 43.52% of cases, while regressive sycophancy, leading to incorrect answers, was observed in 14.66%. Preemptive rebuttals demonstrated significantly higher sycophancy rates than in-context rebuttals (61.75% vs. 56.52%, $Z=5.87$, $p<0.001$), particularly in computational tasks, where regressive sycophancy increased significantly (preemptive: 8.13%, in-context: 3.54%, $p<0.001$). Simple rebuttals maximized progressive sycophancy ($Z=6.59$, $p<0.001$), while citation-based rebuttals exhibited the highest regressive rates ($Z=6.59$, $p<0.001$). Sycophantic behavior showed high persistence (78.5%, 95% CI: [77.2%, 79.8%]) regardless of context or model. These findings emphasize the risks and opportunities of deploying LLMs in structured and dynamic domains, offering insights into prompt programming and model optimization for safer AI applications.
no_new_dataset
0.950365
2502.12360
Sujan Sai Gannamaneni
Sujan Sai Gannamaneni, Rohil Prakash Rao, Michael Mock, Maram Akila, Stefan Wrobel
Detecting Systematic Weaknesses in Vision Models along Predefined Human-Understandable Dimensions
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Slice discovery methods (SDMs) are prominent algorithms for finding systematic weaknesses in DNNs. They identify top-k semantically coherent slices/subsets of data where a DNN-under-test has low performance. For being directly useful, slices should be aligned with human-understandable and relevant dimensions, which, for example, are defined by safety and domain experts as part of the operational design domain (ODD). While SDMs can be applied effectively on structured data, their application on image data is complicated by the lack of semantic metadata. To address these issues, we present an algorithm that combines foundation models for zero-shot image classification to generate semantic metadata with methods for combinatorial search to find systematic weaknesses in images. In contrast to existing approaches, ours identifies weak slices that are in line with pre-defined human-understandable dimensions. As the algorithm includes foundation models, its intermediate and final results may not always be exact. Therefore, we include an approach to address the impact of noisy metadata. We validate our algorithm on both synthetic and real-world datasets, demonstrating its ability to recover human-understandable systematic weaknesses. Furthermore, using our approach, we identify systematic weaknesses of multiple pre-trained and publicly available state-of-the-art computer vision DNNs.
[ { "version": "v1", "created": "Mon, 17 Feb 2025 22:50:45 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 18:07:00 GMT" } ]
2025-03-07T00:00:00
[ [ "Gannamaneni", "Sujan Sai", "" ], [ "Rao", "Rohil Prakash", "" ], [ "Mock", "Michael", "" ], [ "Akila", "Maram", "" ], [ "Wrobel", "Stefan", "" ] ]
TITLE: Detecting Systematic Weaknesses in Vision Models along Predefined Human-Understandable Dimensions ABSTRACT: Slice discovery methods (SDMs) are prominent algorithms for finding systematic weaknesses in DNNs. They identify top-k semantically coherent slices/subsets of data where a DNN-under-test has low performance. For being directly useful, slices should be aligned with human-understandable and relevant dimensions, which, for example, are defined by safety and domain experts as part of the operational design domain (ODD). While SDMs can be applied effectively on structured data, their application on image data is complicated by the lack of semantic metadata. To address these issues, we present an algorithm that combines foundation models for zero-shot image classification to generate semantic metadata with methods for combinatorial search to find systematic weaknesses in images. In contrast to existing approaches, ours identifies weak slices that are in line with pre-defined human-understandable dimensions. As the algorithm includes foundation models, its intermediate and final results may not always be exact. Therefore, we include an approach to address the impact of noisy metadata. We validate our algorithm on both synthetic and real-world datasets, demonstrating its ability to recover human-understandable systematic weaknesses. Furthermore, using our approach, we identify systematic weaknesses of multiple pre-trained and publicly available state-of-the-art computer vision DNNs.
no_new_dataset
0.948251
2502.13524
Wei Dai
Wei Dai, Jun Liu
MobileViM: A Light-weight and Dimension-independent Vision Mamba for 3D Medical Image Analysis
The corresponding author disagrees with the manuscript submitted to arXiv
null
null
null
cs.CV cs.AI cs.LG cs.NI
http://creativecommons.org/licenses/by-sa/4.0/
Efficient evaluation of three-dimensional (3D) medical images is crucial for diagnostic and therapeutic practices in healthcare. Recent years have seen a substantial uptake in applying deep learning and computer vision to analyse and interpret medical images. Traditional approaches, such as convolutional neural networks (CNNs) and vision transformers (ViTs), face significant computational challenges, prompting the need for architectural advancements. Recent efforts have led to the introduction of novel architectures like the ``Mamba'' model as alternative solutions to traditional CNNs or ViTs. The Mamba model excels in the linear processing of one-dimensional data with low computational demands. However, Mamba's potential for 3D medical image analysis remains underexplored and could face significant computational challenges as the dimension increases. This manuscript presents MobileViM, a streamlined architecture for efficient segmentation of 3D medical images. In the MobileViM network, we invent a new dimension-independent mechanism and a dual-direction traversing approach to incorporate into a vision-Mamba-based framework. MobileViM also features a cross-scale bridging technique to improve efficiency and accuracy across various medical imaging modalities. With these enhancements, MobileViM achieves segmentation speeds exceeding 90 frames per second (FPS) on a single graphics processing unit (i.e., NVIDIA RTX 4090). This performance is over 24 FPS faster than the state-of-the-art deep learning models for processing 3D images with the same computational resources. In addition, experimental evaluations demonstrate that MobileViM delivers superior performance, with Dice similarity scores reaching 92.72%, 86.69%, 80.46%, and 77.43% for PENGWIN, BraTS2024, ATLAS, and Toothfairy2 datasets, respectively, which significantly surpasses existing models.
[ { "version": "v1", "created": "Wed, 19 Feb 2025 08:21:59 GMT" }, { "version": "v2", "created": "Sat, 1 Mar 2025 14:42:44 GMT" }, { "version": "v3", "created": "Wed, 5 Mar 2025 01:21:38 GMT" }, { "version": "v4", "created": "Thu, 6 Mar 2025 14:27:12 GMT" } ]
2025-03-07T00:00:00
[ [ "Dai", "Wei", "" ], [ "Liu", "Jun", "" ] ]
TITLE: MobileViM: A Light-weight and Dimension-independent Vision Mamba for 3D Medical Image Analysis ABSTRACT: Efficient evaluation of three-dimensional (3D) medical images is crucial for diagnostic and therapeutic practices in healthcare. Recent years have seen a substantial uptake in applying deep learning and computer vision to analyse and interpret medical images. Traditional approaches, such as convolutional neural networks (CNNs) and vision transformers (ViTs), face significant computational challenges, prompting the need for architectural advancements. Recent efforts have led to the introduction of novel architectures like the ``Mamba'' model as alternative solutions to traditional CNNs or ViTs. The Mamba model excels in the linear processing of one-dimensional data with low computational demands. However, Mamba's potential for 3D medical image analysis remains underexplored and could face significant computational challenges as the dimension increases. This manuscript presents MobileViM, a streamlined architecture for efficient segmentation of 3D medical images. In the MobileViM network, we invent a new dimension-independent mechanism and a dual-direction traversing approach to incorporate into a vision-Mamba-based framework. MobileViM also features a cross-scale bridging technique to improve efficiency and accuracy across various medical imaging modalities. With these enhancements, MobileViM achieves segmentation speeds exceeding 90 frames per second (FPS) on a single graphics processing unit (i.e., NVIDIA RTX 4090). This performance is over 24 FPS faster than the state-of-the-art deep learning models for processing 3D images with the same computational resources. In addition, experimental evaluations demonstrate that MobileViM delivers superior performance, with Dice similarity scores reaching 92.72%, 86.69%, 80.46%, and 77.43% for PENGWIN, BraTS2024, ATLAS, and Toothfairy2 datasets, respectively, which significantly surpasses existing models.
no_new_dataset
0.950686
2502.13728
Mert Cihangiroglu
Marco Arazzi, Mert Cihangiroglu, Serena Nicolazzo, Antonino Nocera
Secure Federated Data Distillation
null
null
null
null
cs.CR cs.AI
http://creativecommons.org/licenses/by/4.0/
Dataset Distillation (DD) is a powerful technique for reducing large datasets into compact, representative synthetic datasets, accelerating Machine Learning training. However, traditional DD methods operate in a centralized manner, which poses significant privacy threats and reduces its applicability. To mitigate these risks, we propose a Secure Federated Data Distillation (SFDD) framework to decentralize the distillation process while preserving privacy. Unlike existing Federated Distillation techniques that focus on training global models with distilled knowledge, our approach aims to produce a distilled dataset without exposing local contributions. We leverage the gradient-matching-based distillation method, adapting it for a distributed setting where clients contribute to the distillation process without sharing raw data. The central aggregator iteratively refines a synthetic dataset by integrating client-side updates while ensuring data confidentiality. To make our approach resilient to inference attacks perpetrated by the server that could exploit gradient updates to reconstruct private data, we create an optimized Local Differential Privacy approach, called LDPO-RLD. Furthermore, we assess the framework's resilience against malicious clients executing backdoor attacks (such as Doorping) and demonstrate robustness under the assumption of a sufficient number of participating clients. Our experimental results demonstrate the effectiveness of SFDD and that the proposed defense concretely mitigates the identified vulnerabilities, with minimal impact on the performance of the distilled dataset. By addressing the interplay between privacy and federation in dataset distillation, this work advances the field of privacy-preserving Machine Learning, making our SFDD framework a viable solution for sensitive data-sharing applications.
[ { "version": "v1", "created": "Wed, 19 Feb 2025 13:54:44 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 14:07:57 GMT" } ]
2025-03-07T00:00:00
[ [ "Arazzi", "Marco", "" ], [ "Cihangiroglu", "Mert", "" ], [ "Nicolazzo", "Serena", "" ], [ "Nocera", "Antonino", "" ] ]
TITLE: Secure Federated Data Distillation ABSTRACT: Dataset Distillation (DD) is a powerful technique for reducing large datasets into compact, representative synthetic datasets, accelerating Machine Learning training. However, traditional DD methods operate in a centralized manner, which poses significant privacy threats and reduces its applicability. To mitigate these risks, we propose a Secure Federated Data Distillation (SFDD) framework to decentralize the distillation process while preserving privacy. Unlike existing Federated Distillation techniques that focus on training global models with distilled knowledge, our approach aims to produce a distilled dataset without exposing local contributions. We leverage the gradient-matching-based distillation method, adapting it for a distributed setting where clients contribute to the distillation process without sharing raw data. The central aggregator iteratively refines a synthetic dataset by integrating client-side updates while ensuring data confidentiality. To make our approach resilient to inference attacks perpetrated by the server that could exploit gradient updates to reconstruct private data, we create an optimized Local Differential Privacy approach, called LDPO-RLD. Furthermore, we assess the framework's resilience against malicious clients executing backdoor attacks (such as Doorping) and demonstrate robustness under the assumption of a sufficient number of participating clients. Our experimental results demonstrate the effectiveness of SFDD and that the proposed defense concretely mitigates the identified vulnerabilities, with minimal impact on the performance of the distilled dataset. By addressing the interplay between privacy and federation in dataset distillation, this work advances the field of privacy-preserving Machine Learning, making our SFDD framework a viable solution for sensitive data-sharing applications.
no_new_dataset
0.941654
2502.14909
Hieu Xuan Nguyen
Cuong V. Nguyen, Hieu X. Nguyen, Dung D. Pham Minh and Cuong D. Do
Comparing Deep Neural Network for Multi-Label ECG Diagnosis From Scanned ECG
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Automated ECG diagnosis has seen significant advancements with deep learning techniques, but real-world applications still face challenges when dealing with scanned paper ECGs. In this study, we explore multi-label classification of ECGs extracted from scanned images, moving beyond traditional binary classification (normal/abnormal). We evaluate the performance of multiple deep neural network architectures, including AlexNet, VGG, ResNet, and Vision Transformer, on scanned ECG datasets. Our comparative analysis examines model accuracy, robustness to image artifacts, and generalizability across different ECG conditions. Additionally, we investigate whether ECG signals extracted from scanned images retain sufficient diagnostic information for reliable automated classification. The findings highlight the strengths and limitations of each architecture, providing insights into the feasibility of image-based ECG diagnosis and its potential integration into clinical workflows.
[ { "version": "v1", "created": "Wed, 19 Feb 2025 02:56:27 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 05:18:12 GMT" } ]
2025-03-07T00:00:00
[ [ "Nguyen", "Cuong V.", "" ], [ "Nguyen", "Hieu X.", "" ], [ "Minh", "Dung D. Pham", "" ], [ "Do", "Cuong D.", "" ] ]
TITLE: Comparing Deep Neural Network for Multi-Label ECG Diagnosis From Scanned ECG ABSTRACT: Automated ECG diagnosis has seen significant advancements with deep learning techniques, but real-world applications still face challenges when dealing with scanned paper ECGs. In this study, we explore multi-label classification of ECGs extracted from scanned images, moving beyond traditional binary classification (normal/abnormal). We evaluate the performance of multiple deep neural network architectures, including AlexNet, VGG, ResNet, and Vision Transformer, on scanned ECG datasets. Our comparative analysis examines model accuracy, robustness to image artifacts, and generalizability across different ECG conditions. Additionally, we investigate whether ECG signals extracted from scanned images retain sufficient diagnostic information for reliable automated classification. The findings highlight the strengths and limitations of each architecture, providing insights into the feasibility of image-based ECG diagnosis and its potential integration into clinical workflows.
no_new_dataset
0.948442
2502.16600
Guangliang Liu
Guangliang Liu, Lei Jiang, Xitong Zhang, Kristen Marie Johnson
Diagnosing Moral Reasoning Acquisition in Language Models: Pragmatics and Generalization
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ensuring that Large Language Models (LLMs) return just responses which adhere to societal values is crucial for their broader application. Prior research has shown that LLMs often fail to perform satisfactorily on tasks requiring moral cognizance, such as ethics-based judgments. While current approaches have focused on fine-tuning LLMs with curated datasets to improve their capabilities on such tasks, choosing the optimal learning paradigm to enhance the ethical responses of LLMs remains an open research debate. In this work, we aim to address this fundamental question: can current learning paradigms enable LLMs to acquire sufficient moral reasoning capabilities? Drawing from distributional semantics theory and the pragmatic nature of moral discourse, our analysis indicates that performance improvements follow a mechanism similar to that of semantic-level tasks, and therefore remain affected by the pragmatic nature of morals latent in discourse, a phenomenon we name the pragmatic dilemma. We conclude that this pragmatic dilemma imposes significant limitations on the generalization ability of current learning paradigms, making it the primary bottleneck for moral reasoning acquisition in LLMs.
[ { "version": "v1", "created": "Sun, 23 Feb 2025 15:00:53 GMT" }, { "version": "v2", "created": "Tue, 25 Feb 2025 04:13:25 GMT" }, { "version": "v3", "created": "Tue, 4 Mar 2025 17:23:23 GMT" }, { "version": "v4", "created": "Thu, 6 Mar 2025 17:56:40 GMT" } ]
2025-03-07T00:00:00
[ [ "Liu", "Guangliang", "" ], [ "Jiang", "Lei", "" ], [ "Zhang", "Xitong", "" ], [ "Johnson", "Kristen Marie", "" ] ]
TITLE: Diagnosing Moral Reasoning Acquisition in Language Models: Pragmatics and Generalization ABSTRACT: Ensuring that Large Language Models (LLMs) return just responses which adhere to societal values is crucial for their broader application. Prior research has shown that LLMs often fail to perform satisfactorily on tasks requiring moral cognizance, such as ethics-based judgments. While current approaches have focused on fine-tuning LLMs with curated datasets to improve their capabilities on such tasks, choosing the optimal learning paradigm to enhance the ethical responses of LLMs remains an open research debate. In this work, we aim to address this fundamental question: can current learning paradigms enable LLMs to acquire sufficient moral reasoning capabilities? Drawing from distributional semantics theory and the pragmatic nature of moral discourse, our analysis indicates that performance improvements follow a mechanism similar to that of semantic-level tasks, and therefore remain affected by the pragmatic nature of morals latent in discourse, a phenomenon we name the pragmatic dilemma. We conclude that this pragmatic dilemma imposes significant limitations on the generalization ability of current learning paradigms, making it the primary bottleneck for moral reasoning acquisition in LLMs.
no_new_dataset
0.945801
2502.17504
Yijia Xiao
Yijia Xiao, Wanjia Zhao, Junkai Zhang, Yiqiao Jin, Han Zhang, Zhicheng Ren, Renliang Sun, Haixin Wang, Guancheng Wan, Pan Lu, Xiao Luo, Yu Zhang, James Zou, Yizhou Sun, Wei Wang
Protein Large Language Models: A Comprehensive Survey
24 pages, 4 figures, 5 tables
null
null
null
q-bio.BM cs.AI cs.CE cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Protein-specific large language models (Protein LLMs) are revolutionizing protein science by enabling more efficient protein structure prediction, function annotation, and design. While existing surveys focus on specific aspects or applications, this work provides the first comprehensive overview of Protein LLMs, covering their architectures, training datasets, evaluation metrics, and diverse applications. Through a systematic analysis of over 100 articles, we propose a structured taxonomy of state-of-the-art Protein LLMs, analyze how they leverage large-scale protein sequence data for improved accuracy, and explore their potential in advancing protein engineering and biomedical research. Additionally, we discuss key challenges and future directions, positioning Protein LLMs as essential tools for scientific discovery in protein science. Resources are maintained at https://github.com/Yijia-Xiao/Protein-LLM-Survey.
[ { "version": "v1", "created": "Fri, 21 Feb 2025 19:22:10 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 16:14:45 GMT" } ]
2025-03-07T00:00:00
[ [ "Xiao", "Yijia", "" ], [ "Zhao", "Wanjia", "" ], [ "Zhang", "Junkai", "" ], [ "Jin", "Yiqiao", "" ], [ "Zhang", "Han", "" ], [ "Ren", "Zhicheng", "" ], [ "Sun", "Renliang", "" ], [ "Wang", "Haixin", "" ], [ "Wan", "Guancheng", "" ], [ "Lu", "Pan", "" ], [ "Luo", "Xiao", "" ], [ "Zhang", "Yu", "" ], [ "Zou", "James", "" ], [ "Sun", "Yizhou", "" ], [ "Wang", "Wei", "" ] ]
TITLE: Protein Large Language Models: A Comprehensive Survey ABSTRACT: Protein-specific large language models (Protein LLMs) are revolutionizing protein science by enabling more efficient protein structure prediction, function annotation, and design. While existing surveys focus on specific aspects or applications, this work provides the first comprehensive overview of Protein LLMs, covering their architectures, training datasets, evaluation metrics, and diverse applications. Through a systematic analysis of over 100 articles, we propose a structured taxonomy of state-of-the-art Protein LLMs, analyze how they leverage large-scale protein sequence data for improved accuracy, and explore their potential in advancing protein engineering and biomedical research. Additionally, we discuss key challenges and future directions, positioning Protein LLMs as essential tools for scientific discovery in protein science. Resources are maintained at https://github.com/Yijia-Xiao/Protein-LLM-Survey.
no_new_dataset
0.946051
2502.18049
Shirong Xu
Hengzhi He, Shirong Xu, Guang Cheng
Golden Ratio Weighting Prevents Model Collapse
null
null
null
null
stat.ML cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent studies identified an intriguing phenomenon in recursive generative model training known as model collapse, where models trained on data generated by previous models exhibit severe performance degradation. Addressing this issue and developing more effective training strategies have become central challenges in generative model research. In this paper, we investigate this phenomenon theoretically within a novel framework, where generative models are iteratively trained on a combination of newly collected real data and synthetic data from the previous training step. To develop an optimal training strategy for integrating real and synthetic data, we evaluate the performance of a weighted training scheme in various scenarios, including Gaussian distribution estimation and linear regression. We theoretically characterize the impact of the mixing proportion and weighting scheme of synthetic data on the final model's performance. Our key finding is that, across different settings, the optimal weighting scheme under different proportions of synthetic data asymptotically follows a unified expression, revealing a fundamental trade-off between leveraging synthetic data and generative model performance. Notably, in some cases, the optimal weight assigned to real data corresponds to the reciprocal of the golden ratio. Finally, we validate our theoretical results on extensive simulated datasets and a real tabular dataset.
[ { "version": "v1", "created": "Tue, 25 Feb 2025 10:15:16 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 16:03:59 GMT" } ]
2025-03-07T00:00:00
[ [ "He", "Hengzhi", "" ], [ "Xu", "Shirong", "" ], [ "Cheng", "Guang", "" ] ]
TITLE: Golden Ratio Weighting Prevents Model Collapse ABSTRACT: Recent studies identified an intriguing phenomenon in recursive generative model training known as model collapse, where models trained on data generated by previous models exhibit severe performance degradation. Addressing this issue and developing more effective training strategies have become central challenges in generative model research. In this paper, we investigate this phenomenon theoretically within a novel framework, where generative models are iteratively trained on a combination of newly collected real data and synthetic data from the previous training step. To develop an optimal training strategy for integrating real and synthetic data, we evaluate the performance of a weighted training scheme in various scenarios, including Gaussian distribution estimation and linear regression. We theoretically characterize the impact of the mixing proportion and weighting scheme of synthetic data on the final model's performance. Our key finding is that, across different settings, the optimal weighting scheme under different proportions of synthetic data asymptotically follows a unified expression, revealing a fundamental trade-off between leveraging synthetic data and generative model performance. Notably, in some cases, the optimal weight assigned to real data corresponds to the reciprocal of the golden ratio. Finally, we validate our theoretical results on extensive simulated datasets and a real tabular dataset.
no_new_dataset
0.949902
2502.18232
Jun Zeng
Jun Zeng, Debesh Jha, Ertugrul Aktas, Elif Keles, Alpay Medetalibeyoglu, Matthew Antalek, Robert Lewandowski, Daniela Ladner, Amir A. Borhani, Gorkem Durak, Ulas Bagci
A Reverse Mamba Attention Network for Pathological Liver Segmentation
8 pages, 3 figures
null
null
null
eess.IV cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
We present RMA-Mamba, a novel architecture that advances the capabilities of vision state space models through a specialized reverse mamba attention module (RMA). The key innovation lies in RMA-Mamba's ability to capture long-range dependencies while maintaining precise local feature representation through its hierarchical processing pipeline. By integrating Vision Mamba (VMamba)'s efficient sequence modeling with RMA's targeted feature refinement, our architecture achieves superior feature learning across multiple scales. This dual-mechanism approach enables robust handling of complex morphological patterns while maintaining computational efficiency. We demonstrate RMA-Mamba's effectiveness in the challenging domain of pathological liver segmentation (from both CT and MRI), where traditional segmentation approaches often fail due to tissue variations. When evaluated on a newly introduced cirrhotic liver dataset (CirrMRI600+) of T2-weighted MRI scans, RMA-Mamba achieves state-of-the-art performance with a Dice coefficient of 92.08%, mean IoU of 87.36%, and recall of 92.96%. The architecture's generalizability is further validated on cancerous liver segmentation from CT scans (LiTS: Liver Tumor Segmentation dataset), yielding a Dice score of 92.9% and mIoU of 88.99%. Our code is publicly available at https://github.com/JunZengz/RMAMamba.
[ { "version": "v1", "created": "Sun, 23 Feb 2025 20:41:25 GMT" }, { "version": "v2", "created": "Wed, 5 Mar 2025 19:18:00 GMT" } ]
2025-03-07T00:00:00
[ [ "Zeng", "Jun", "" ], [ "Jha", "Debesh", "" ], [ "Aktas", "Ertugrul", "" ], [ "Keles", "Elif", "" ], [ "Medetalibeyoglu", "Alpay", "" ], [ "Antalek", "Matthew", "" ], [ "Lewandowski", "Robert", "" ], [ "Ladner", "Daniela", "" ], [ "Borhani", "Amir A.", "" ], [ "Durak", "Gorkem", "" ], [ "Bagci", "Ulas", "" ] ]
TITLE: A Reverse Mamba Attention Network for Pathological Liver Segmentation ABSTRACT: We present RMA-Mamba, a novel architecture that advances the capabilities of vision state space models through a specialized reverse mamba attention module (RMA). The key innovation lies in RMA-Mamba's ability to capture long-range dependencies while maintaining precise local feature representation through its hierarchical processing pipeline. By integrating Vision Mamba (VMamba)'s efficient sequence modeling with RMA's targeted feature refinement, our architecture achieves superior feature learning across multiple scales. This dual-mechanism approach enables robust handling of complex morphological patterns while maintaining computational efficiency. We demonstrate RMA-Mamba's effectiveness in the challenging domain of pathological liver segmentation (from both CT and MRI), where traditional segmentation approaches often fail due to tissue variations. When evaluated on a newly introduced cirrhotic liver dataset (CirrMRI600+) of T2-weighted MRI scans, RMA-Mamba achieves state-of-the-art performance with a Dice coefficient of 92.08%, mean IoU of 87.36%, and recall of 92.96%. The architecture's generalizability is further validated on cancerous liver segmentation from CT scans (LiTS: Liver Tumor Segmentation dataset), yielding a Dice score of 92.9% and mIoU of 88.99%. Our code is publicly available at https://github.com/JunZengz/RMAMamba.
new_dataset
0.946448
2502.19660
Zikuan Li
Zikuan Li, Qiaoyun Wu, Jialin Zhang, Kaijun Zhang, Jun Wang
Noise-Injected Spiking Graph Convolution for Energy-Efficient 3D Point Cloud Denoising
Accepted by AAAI 2025
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spiking neural networks (SNNs), inspired by the spiking computation paradigm of biological neural systems, have exhibited superior energy efficiency in 2D classification tasks over traditional artificial neural networks (ANNs). However, the regression potential of SNNs has not been well explored, especially in 3D point cloud processing. In this paper, we propose noise-injected spiking graph convolutional networks to leverage the full regression potential of SNNs in 3D point cloud denoising. Specifically, we first emulate the noise-injected neuronal dynamics to build noise-injected spiking neurons. On this basis, we design noise-injected spiking graph convolution for promoting disturbance-aware spiking representation learning on 3D points. Starting from the spiking graph convolution, we build two SNN-based denoising networks. One is a purely spiking graph convolutional network, which achieves low accuracy loss compared with some ANN-based alternatives, while resulting in significantly reduced energy consumption on two benchmark datasets, PU-Net and PC-Net. The other is a hybrid architecture that combines ANN-based learning, achieving a strong performance-efficiency trade-off in just a few time steps. Our work highlights SNNs' potential for 3D point cloud denoising, opening new perspectives on deployment on neuromorphic chips while paving the way for developing energy-efficient 3D data acquisition devices.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 01:04:23 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 03:14:41 GMT" } ]
2025-03-07T00:00:00
[ [ "Li", "Zikuan", "" ], [ "Wu", "Qiaoyun", "" ], [ "Zhang", "Jialin", "" ], [ "Zhang", "Kaijun", "" ], [ "Wang", "Jun", "" ] ]
TITLE: Noise-Injected Spiking Graph Convolution for Energy-Efficient 3D Point Cloud Denoising ABSTRACT: Spiking neural networks (SNNs), inspired by the spiking computation paradigm of biological neural systems, have exhibited superior energy efficiency in 2D classification tasks over traditional artificial neural networks (ANNs). However, the regression potential of SNNs has not been well explored, especially in 3D point cloud processing. In this paper, we propose noise-injected spiking graph convolutional networks to leverage the full regression potential of SNNs in 3D point cloud denoising. Specifically, we first emulate the noise-injected neuronal dynamics to build noise-injected spiking neurons. On this basis, we design noise-injected spiking graph convolution for promoting disturbance-aware spiking representation learning on 3D points. Starting from the spiking graph convolution, we build two SNN-based denoising networks. One is a purely spiking graph convolutional network, which achieves low accuracy loss compared with some ANN-based alternatives, while resulting in significantly reduced energy consumption on two benchmark datasets, PU-Net and PC-Net. The other is a hybrid architecture that combines ANN-based learning, achieving a strong performance-efficiency trade-off in just a few time steps. Our work highlights SNNs' potential for 3D point cloud denoising, opening new perspectives on deployment on neuromorphic chips while paving the way for developing energy-efficient 3D data acquisition devices.
no_new_dataset
0.94801
2502.20490
Yicheng Fu
MohammadHossein Rezaei, Yicheng Fu, Phil Cuvin, Caleb Ziems, Yanzhe Zhang, Hao Zhu, Diyi Yang
EgoNormia: Benchmarking Physical Social Norm Understanding
null
null
null
null
cs.CV cs.AI cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Human activity is moderated by norms. However, machines are often trained without explicit supervision on norm understanding and reasoning, especially when the norms are grounded in a physical and social context. To improve and evaluate the normative reasoning capability of vision-language models (VLMs), we present EgoNormia $\|\epsilon\|$, consisting of 1,853 ego-centric videos of human interactions, each of which has two related questions evaluating both the prediction and justification of normative actions. The normative actions encompass seven categories: safety, privacy, proxemics, politeness, cooperation, coordination/proactivity, and communication/legibility. To compile this dataset at scale, we propose a novel pipeline leveraging video sampling, automatic answer generation, filtering, and human validation. Our work demonstrates that current state-of-the-art vision-language models lack robust norm understanding, scoring a maximum of 45% on EgoNormia (versus a human benchmark of 92%). Our analysis of performance in each dimension highlights the significant risks of safety, privacy, and the lack of collaboration and communication capability when applied to real-world agents. We additionally show that through a retrieval-based generation method, it is possible to use EgoNormia to enhance normative reasoning in VLMs.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 19:54:16 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 00:59:40 GMT" } ]
2025-03-07T00:00:00
[ [ "Rezaei", "MohammadHossein", "" ], [ "Fu", "Yicheng", "" ], [ "Cuvin", "Phil", "" ], [ "Ziems", "Caleb", "" ], [ "Zhang", "Yanzhe", "" ], [ "Zhu", "Hao", "" ], [ "Yang", "Diyi", "" ] ]
TITLE: EgoNormia: Benchmarking Physical Social Norm Understanding ABSTRACT: Human activity is moderated by norms. However, machines are often trained without explicit supervision on norm understanding and reasoning, especially when the norms are grounded in a physical and social context. To improve and evaluate the normative reasoning capability of vision-language models (VLMs), we present EgoNormia $\|\epsilon\|$, consisting of 1,853 ego-centric videos of human interactions, each of which has two related questions evaluating both the prediction and justification of normative actions. The normative actions encompass seven categories: safety, privacy, proxemics, politeness, cooperation, coordination/proactivity, and communication/legibility. To compile this dataset at scale, we propose a novel pipeline leveraging video sampling, automatic answer generation, filtering, and human validation. Our work demonstrates that current state-of-the-art vision-language models lack robust norm understanding, scoring a maximum of 45% on EgoNormia (versus a human benchmark of 92%). Our analysis of performance in each dimension highlights the significant risks of safety, privacy, and the lack of collaboration and communication capability when applied to real-world agents. We additionally show that through a retrieval-based generation method, it is possible to use EgoNormia to enhance normative reasoning in VLMs.
new_dataset
0.956309
2502.20637
Chen Yuqian
Yuqian Chen, Leo Zekelman, Yui Lo, Suheyla Cetin-Karayumak, Tengfei Xue, Yogesh Rathi, Nikos Makris, Fan Zhang, Weidong Cai, Lauren J. O'Donnell
TractCloud-FOV: Deep Learning-based Robust Tractography Parcellation in Diffusion MRI with Incomplete Field of View
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tractography parcellation classifies streamlines reconstructed from diffusion MRI into anatomically defined fiber tracts for clinical and research applications. However, clinical scans often have incomplete fields of view (FOV) where brain regions are partially imaged, leading to partial or truncated fiber tracts. To address this challenge, we introduce TractCloud-FOV, a deep learning framework that robustly parcellates tractography under conditions of incomplete FOV. We propose a novel training strategy, FOV-Cut Augmentation (FOV-CA), in which we synthetically cut tractograms to simulate a spectrum of real-world inferior FOV cutoff scenarios. This data augmentation approach enriches the training set with realistic truncated streamlines, enabling the model to achieve superior generalization. We evaluate the proposed TractCloud-FOV on both synthetically cut tractography and two real-life datasets with incomplete FOV. TractCloud-FOV significantly outperforms several state-of-the-art methods on all testing datasets in terms of streamline classification accuracy, generalization ability, tract anatomical depiction, and computational efficiency. Overall, TractCloud-FOV achieves efficient and consistent tractography parcellation in diffusion MRI with incomplete FOV.
[ { "version": "v1", "created": "Fri, 28 Feb 2025 01:36:38 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 04:31:21 GMT" } ]
2025-03-07T00:00:00
[ [ "Chen", "Yuqian", "" ], [ "Zekelman", "Leo", "" ], [ "Lo", "Yui", "" ], [ "Cetin-Karayumak", "Suheyla", "" ], [ "Xue", "Tengfei", "" ], [ "Rathi", "Yogesh", "" ], [ "Makris", "Nikos", "" ], [ "Zhang", "Fan", "" ], [ "Cai", "Weidong", "" ], [ "O'Donnell", "Lauren J.", "" ] ]
TITLE: TractCloud-FOV: Deep Learning-based Robust Tractography Parcellation in Diffusion MRI with Incomplete Field of View ABSTRACT: Tractography parcellation classifies streamlines reconstructed from diffusion MRI into anatomically defined fiber tracts for clinical and research applications. However, clinical scans often have incomplete fields of view (FOV) where brain regions are partially imaged, leading to partial or truncated fiber tracts. To address this challenge, we introduce TractCloud-FOV, a deep learning framework that robustly parcellates tractography under conditions of incomplete FOV. We propose a novel training strategy, FOV-Cut Augmentation (FOV-CA), in which we synthetically cut tractograms to simulate a spectrum of real-world inferior FOV cutoff scenarios. This data augmentation approach enriches the training set with realistic truncated streamlines, enabling the model to achieve superior generalization. We evaluate the proposed TractCloud-FOV on both synthetically cut tractography and two real-life datasets with incomplete FOV. TractCloud-FOV significantly outperforms several state-of-the-art methods on all testing datasets in terms of streamline classification accuracy, generalization ability, tract anatomical depiction, and computational efficiency. Overall, TractCloud-FOV achieves efficient and consistent tractography parcellation in diffusion MRI with incomplete FOV.
no_new_dataset
0.953751
2503.00168
Benedikt Blumenstiel
Benedikt Blumenstiel, Nassim Ait Ali Braham, Conrad M Albrecht, Stefano Maurogiovanni, Paolo Fraccaro
SSL4EO-S12 v1.1: A Multimodal, Multiseasonal Dataset for Pretraining, Updated
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
This technical report presents SSL4EO-S12 v1.1, a multimodal, multitemporal Earth Observation dataset designed for pretraining large-scale foundation models. Building on the success of SSL4EO-S12 v1.0, the new version addresses the previous challenges of data misalignment and a limited data structure for low-barrier, analysis-ready EO processing. SSL4EO-S12 v1.1 covers the world's 10,000 largest cities and their surroundings within a 50 km radius across four seasons, resulting in a diverse collection of nearly one million patches. SSL4EO-S12 v1.1 packages the data in Zarr file format for cloud-efficient loading and representation of meta-information such as cloud masks and geolocation. Released under the CC-BY-4.0 license, SSL4EO-S12 v1.1 facilitates open research and provides a robust foundation for future advancements in self-supervised learning and geospatial analysis. The dataset is available online through https://datapub.fz-juelich.de/ssl4eo-s12, and we provide additional resources at https://github.com/DLR-MF-DAS/SSL4EO-S12-v1.1.
[ { "version": "v1", "created": "Fri, 28 Feb 2025 20:30:56 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 09:23:35 GMT" } ]
2025-03-07T00:00:00
[ [ "Blumenstiel", "Benedikt", "" ], [ "Braham", "Nassim Ait Ali", "" ], [ "Albrecht", "Conrad M", "" ], [ "Maurogiovanni", "Stefano", "" ], [ "Fraccaro", "Paolo", "" ] ]
TITLE: SSL4EO-S12 v1.1: A Multimodal, Multiseasonal Dataset for Pretraining, Updated ABSTRACT: This technical report presents SSL4EO-S12 v1.1, a multimodal, multitemporal Earth Observation dataset designed for pretraining large-scale foundation models. Building on the success of SSL4EO-S12 v1.0, the new version addresses the previous challenges of data misalignment and a limited data structure for low-barrier, analysis-ready EO processing. SSL4EO-S12 v1.1 covers the world's 10,000 largest cities and their surroundings within a 50 km radius across four seasons, resulting in a diverse collection of nearly one million patches. SSL4EO-S12 v1.1 packages the data in Zarr file format for cloud-efficient loading and representation of meta-information such as cloud masks and geolocation. Released under the CC-BY-4.0 license, SSL4EO-S12 v1.1 facilitates open research and provides a robust foundation for future advancements in self-supervised learning and geospatial analysis. The dataset is available online through https://datapub.fz-juelich.de/ssl4eo-s12, and we provide additional resources at https://github.com/DLR-MF-DAS/SSL4EO-S12-v1.1.
new_dataset
0.964018
2503.00675
Wenke E
Wenke E, Chao Yuan, Li Li, Yixin Sun, Yona Falinie A. Gaus, Amir Atapour-Abarghouei, Toby P. Breckon
Dur360BEV: A Real-world 360-degree Single Camera Dataset and Benchmark for Bird-Eye View Mapping in Autonomous Driving
null
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
We present Dur360BEV, a novel spherical camera autonomous driving dataset equipped with a high-resolution 128-channel 3D LiDAR and an RTK-refined GNSS/INS system, along with a benchmark architecture designed to generate Bird-Eye-View (BEV) maps using only a single spherical camera. This dataset and benchmark address the challenges of BEV generation in autonomous driving, particularly by reducing hardware complexity through the use of a single 360-degree camera instead of multiple perspective cameras. Within our benchmark architecture, we propose a novel spherical-image-to-BEV module that leverages spherical imagery and a refined sampling strategy to project features from 2D to 3D. Our approach also includes an innovative application of focal loss, specifically adapted to address the extreme class imbalance often encountered in BEV segmentation tasks, which demonstrates improved segmentation performance on the Dur360BEV dataset. The results show that our benchmark not only simplifies the sensor setup but also achieves competitive performance.
[ { "version": "v1", "created": "Sun, 2 Mar 2025 00:40:50 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 05:59:08 GMT" } ]
2025-03-07T00:00:00
[ [ "E", "Wenke", "" ], [ "Yuan", "Chao", "" ], [ "Li", "Li", "" ], [ "Sun", "Yixin", "" ], [ "Gaus", "Yona Falinie A.", "" ], [ "Atapour-Abarghouei", "Amir", "" ], [ "Breckon", "Toby P.", "" ] ]
TITLE: Dur360BEV: A Real-world 360-degree Single Camera Dataset and Benchmark for Bird-Eye View Mapping in Autonomous Driving ABSTRACT: We present Dur360BEV, a novel spherical camera autonomous driving dataset equipped with a high-resolution 128-channel 3D LiDAR and an RTK-refined GNSS/INS system, along with a benchmark architecture designed to generate Bird-Eye-View (BEV) maps using only a single spherical camera. This dataset and benchmark address the challenges of BEV generation in autonomous driving, particularly by reducing hardware complexity through the use of a single 360-degree camera instead of multiple perspective cameras. Within our benchmark architecture, we propose a novel spherical-image-to-BEV module that leverages spherical imagery and a refined sampling strategy to project features from 2D to 3D. Our approach also includes an innovative application of focal loss, specifically adapted to address the extreme class imbalance often encountered in BEV segmentation tasks, which demonstrates improved segmentation performance on the Dur360BEV dataset. The results show that our benchmark not only simplifies the sensor setup but also achieves competitive performance.
new_dataset
0.956309
2503.00736
WenHui Lei
Wenhui Lei, Anqi Li, Yusheng Tan, Hanyu Chen, Xiaofan Zhang
Shazam: Unifying Multiple Foundation Models for Advanced Computational Pathology
9 pages, 2 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Foundation Models (FMs) in computational pathology (CPath) have significantly advanced the extraction of meaningful features from histopathology image datasets, achieving strong performance across various clinical tasks. Despite their impressive performance, these models often exhibit variability when applied to different tasks, prompting the need for a unified framework capable of consistently excelling across various applications. In this work, we propose Shazam, a novel framework designed to efficiently combine multiple CPath models. Unlike previous approaches that train a fixed-parameter FM, Shazam dynamically extracts and refines information from diverse FMs for each specific task. To ensure that each FM contributes effectively without dominance, a novel distillation strategy is applied, guiding the student model with features from all teacher models, which enhances its generalization ability. Experimental results on two pathology patch classification datasets demonstrate that Shazam outperforms existing CPath models and other fusion methods. Its lightweight, flexible design makes it a promising solution for improving CPath analysis in real-world settings. Code will be available at https://github.com/Tuner12/Shazam.
[ { "version": "v1", "created": "Sun, 2 Mar 2025 05:20:41 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 03:35:09 GMT" } ]
2025-03-07T00:00:00
[ [ "Lei", "Wenhui", "" ], [ "Li", "Anqi", "" ], [ "Tan", "Yusheng", "" ], [ "Chen", "Hanyu", "" ], [ "Zhang", "Xiaofan", "" ] ]
TITLE: Shazam: Unifying Multiple Foundation Models for Advanced Computational Pathology ABSTRACT: Foundation Models (FMs) in computational pathology (CPath) have significantly advanced the extraction of meaningful features from histopathology image datasets, achieving strong performance across various clinical tasks. Despite their impressive performance, these models often exhibit variability when applied to different tasks, prompting the need for a unified framework capable of consistently excelling across various applications. In this work, we propose Shazam, a novel framework designed to efficiently combine multiple CPath models. Unlike previous approaches that train a fixed-parameter FM, Shazam dynamically extracts and refines information from diverse FMs for each specific task. To ensure that each FM contributes effectively without dominance, a novel distillation strategy is applied, guiding the student model with features from all teacher models, which enhances its generalization ability. Experimental results on two pathology patch classification datasets demonstrate that Shazam outperforms existing CPath models and other fusion methods. Its lightweight, flexible design makes it a promising solution for improving CPath analysis in real-world settings. Code will be available at https://github.com/Tuner12/Shazam.
no_new_dataset
0.947137
2503.01234
Sijin Sun
Sijin Sun, Ming Deng, Xingrui Yu, Xinyu Xi, Liangbin Zhao
Self-Adaptive Gamma Context-Aware SSM-based Model for Metal Defect Detection
19 pages, 9 figures, under review
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metal defect detection is critical in industrial quality assurance, yet existing methods struggle with grayscale variations and complex defect states, limiting their robustness. To address these challenges, this paper proposes a Self-Adaptive Gamma Context-Aware SSM-based model (GCM-DET). This advanced detection framework integrates a Dynamic Gamma Correction (GC) module to enhance grayscale representation and optimize feature extraction for precise defect reconstruction. A State-Space Search Management (SSM) architecture captures robust multi-scale features, effectively handling defects of varying shapes and scales. Focal Loss is employed to mitigate class imbalance and refine detection accuracy. Additionally, the CD5-DET dataset is introduced, specifically designed for port container maintenance, featuring significant grayscale variations and intricate defect patterns. Experimental results demonstrate that the proposed model achieves substantial improvements, with mAP@0.5 gains of 27.6\%, 6.6\%, and 2.6\% on the CD5-DET, NEU-DET, and GC10-DET datasets.
[ { "version": "v1", "created": "Mon, 3 Mar 2025 06:57:54 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 07:11:32 GMT" } ]
2025-03-07T00:00:00
[ [ "Sun", "Sijin", "" ], [ "Deng", "Ming", "" ], [ "Yu", "Xingrui", "" ], [ "Xi", "Xinyu", "" ], [ "Zhao", "Liangbin", "" ] ]
TITLE: Self-Adaptive Gamma Context-Aware SSM-based Model for Metal Defect Detection ABSTRACT: Metal defect detection is critical in industrial quality assurance, yet existing methods struggle with grayscale variations and complex defect states, limiting their robustness. To address these challenges, this paper proposes a Self-Adaptive Gamma Context-Aware SSM-based model (GCM-DET). This advanced detection framework integrates a Dynamic Gamma Correction (GC) module to enhance grayscale representation and optimize feature extraction for precise defect reconstruction. A State-Space Search Management (SSM) architecture captures robust multi-scale features, effectively handling defects of varying shapes and scales. Focal Loss is employed to mitigate class imbalance and refine detection accuracy. Additionally, the CD5-DET dataset is introduced, specifically designed for port container maintenance, featuring significant grayscale variations and intricate defect patterns. Experimental results demonstrate that the proposed model achieves substantial improvements, with mAP@0.5 gains of 27.6\%, 6.6\%, and 2.6\% on the CD5-DET, NEU-DET, and GC10-DET datasets.
new_dataset
0.957873
2503.02118
C\'edric Solenthaler
C\'edric Solenthaler, Joshua Smailes, Martin Strohmeier
OrbID: Identifying Orbcomm Satellite RF Fingerprints
null
null
10.14722/spacesec.2025.23031
null
eess.SP cs.CR
http://creativecommons.org/licenses/by/4.0/
An increase in the availability of Software Defined Radios (SDRs) has caused a dramatic shift in the threat landscape of legacy satellite systems, opening them up to easy spoofing attacks by low-budget adversaries. Physical-layer authentication methods can help improve the security of these systems by providing additional validation without modifying the space segment. This paper extends previous research on Radio Frequency Fingerprinting (RFF) of satellite communication to the Orbcomm satellite formation. The GPS and Iridium constellations are already well covered in prior research, but the feasibility of transferring techniques to other formations has not yet been examined, and raises previously undiscussed challenges. In this paper, we collect a novel dataset containing 8,992,474 packets from the Orbcomm satellite constellation using different SDRs and locations. We use this dataset to train RFF systems based on convolutional neural networks. We achieve an ROC AUC score of 0.53 when distinguishing different satellites within the constellation, and 0.98 when distinguishing legitimate satellites from SDRs in a spoofing scenario. We also demonstrate the possibility of mixing datasets using different SDRs in different physical locations.
[ { "version": "v1", "created": "Mon, 3 Mar 2025 23:00:32 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 14:33:31 GMT" } ]
2025-03-07T00:00:00
[ [ "Solenthaler", "Cédric", "" ], [ "Smailes", "Joshua", "" ], [ "Strohmeier", "Martin", "" ] ]
TITLE: OrbID: Identifying Orbcomm Satellite RF Fingerprints ABSTRACT: An increase in the availability of Software Defined Radios (SDRs) has caused a dramatic shift in the threat landscape of legacy satellite systems, opening them up to easy spoofing attacks by low-budget adversaries. Physical-layer authentication methods can help improve the security of these systems by providing additional validation without modifying the space segment. This paper extends previous research on Radio Frequency Fingerprinting (RFF) of satellite communication to the Orbcomm satellite formation. The GPS and Iridium constellations are already well covered in prior research, but the feasibility of transferring techniques to other formations has not yet been examined, and raises previously undiscussed challenges. In this paper, we collect a novel dataset containing 8,992,474 packets from the Orbcomm satellite constellation using different SDRs and locations. We use this dataset to train RFF systems based on convolutional neural networks. We achieve an ROC AUC score of 0.53 when distinguishing different satellites within the constellation, and 0.98 when distinguishing legitimate satellites from SDRs in a spoofing scenario. We also demonstrate the possibility of mixing datasets using different SDRs in different physical locations.
new_dataset
0.96051
2503.02356
Hongtao Xu
Xiulong Yuan, Hongtao Xu, Wenting Shen, Ang Wang, Xiafei Qiu, Jie Zhang, Yuqiong Liu, Bowen Yu, Junyang Lin, Mingzhen Li, Weile Jia, Yong Li, Wei Lin
Efficient Long Context Fine-tuning with Chunk Flow
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long context fine-tuning of large language models(LLMs) involves training on datasets that are predominantly composed of short sequences and a small proportion of longer sequences. However, existing approaches overlook this long-tail distribution and employ training strategies designed specifically for long sequences. Moreover, these approaches also fail to address the challenges posed by variable sequence lengths during distributed training, such as load imbalance in data parallelism and severe pipeline bubbles in pipeline parallelism. These issues lead to suboptimal training performance and poor GPU resource utilization. To tackle these problems, we propose a chunk-centric training method named ChunkFlow. ChunkFlow reorganizes input sequences into uniformly sized chunks by consolidating short sequences and splitting longer ones. This approach achieves optimal computational efficiency and balance among training inputs. Additionally, ChunkFlow incorporates a state-aware chunk scheduling mechanism to ensure that the peak memory usage during training is primarily determined by the chunk size rather than the maximum sequence length in the dataset. Integrating this scheduling mechanism with existing pipeline scheduling algorithms further enhances the performance of distributed training. Experimental results demonstrate that, compared with Megatron-LM, ChunkFlow can be up to 4.53x faster in the long context fine-tuning of LLMs. Furthermore, we believe that ChunkFlow serves as an effective solution for a broader range of scenarios, such as long context continual pre-training, where datasets contain variable-length sequences.
[ { "version": "v1", "created": "Tue, 4 Mar 2025 07:27:41 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 02:44:16 GMT" } ]
2025-03-07T00:00:00
[ [ "Yuan", "Xiulong", "" ], [ "Xu", "Hongtao", "" ], [ "Shen", "Wenting", "" ], [ "Wang", "Ang", "" ], [ "Qiu", "Xiafei", "" ], [ "Zhang", "Jie", "" ], [ "Liu", "Yuqiong", "" ], [ "Yu", "Bowen", "" ], [ "Lin", "Junyang", "" ], [ "Li", "Mingzhen", "" ], [ "Jia", "Weile", "" ], [ "Li", "Yong", "" ], [ "Lin", "Wei", "" ] ]
TITLE: Efficient Long Context Fine-tuning with Chunk Flow ABSTRACT: Long context fine-tuning of large language models(LLMs) involves training on datasets that are predominantly composed of short sequences and a small proportion of longer sequences. However, existing approaches overlook this long-tail distribution and employ training strategies designed specifically for long sequences. Moreover, these approaches also fail to address the challenges posed by variable sequence lengths during distributed training, such as load imbalance in data parallelism and severe pipeline bubbles in pipeline parallelism. These issues lead to suboptimal training performance and poor GPU resource utilization. To tackle these problems, we propose a chunk-centric training method named ChunkFlow. ChunkFlow reorganizes input sequences into uniformly sized chunks by consolidating short sequences and splitting longer ones. This approach achieves optimal computational efficiency and balance among training inputs. Additionally, ChunkFlow incorporates a state-aware chunk scheduling mechanism to ensure that the peak memory usage during training is primarily determined by the chunk size rather than the maximum sequence length in the dataset. Integrating this scheduling mechanism with existing pipeline scheduling algorithms further enhances the performance of distributed training. Experimental results demonstrate that, compared with Megatron-LM, ChunkFlow can be up to 4.53x faster in the long context fine-tuning of LLMs. Furthermore, we believe that ChunkFlow serves as an effective solution for a broader range of scenarios, such as long context continual pre-training, where datasets contain variable-length sequences.
no_new_dataset
0.950869
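The ChunkFlow record above describes a packing step concrete enough to illustrate. Below is a minimal sketch, not the authors' implementation: long token sequences are split into chunk-sized pieces and short ones (plus leftovers) are consolidated, padding only the final chunk. The chunk size, greedy packing order, and pad token are assumptions; a real system would also track sequence boundaries for attention masking.

```python
# Hypothetical sketch of chunk-centric packing; not ChunkFlow's actual code.
def chunkify(sequences, chunk_size):
    """Pack token-id sequences into chunks of exactly `chunk_size` tokens."""
    chunks, current = [], []
    for seq in sequences:
        # Split sequences longer than a chunk into chunk-sized pieces.
        for start in range(0, len(seq), chunk_size):
            piece = seq[start:start + chunk_size]
            if len(piece) == chunk_size:
                chunks.append(piece)
            else:
                # Leftover: consolidate with other short pieces. In practice
                # a boundary/attention mask would keep sequences separate.
                current.extend(piece)
                while len(current) >= chunk_size:
                    chunks.append(current[:chunk_size])
                    current = current[chunk_size:]
    if current:  # pad the final partial chunk with an assumed pad id 0
        current += [0] * (chunk_size - len(current))
        chunks.append(current)
    return chunks

print(len(chunkify([[1] * 10, [2] * 3, [3] * 70], chunk_size=32)))  # -> 3
```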
2503.02365
Dana Moukheiber
Lama Moukheiber, Mira Moukheiber, Dana Moukheiber, Jae-Woo Ju, Hyung-Chul Lee
EchoQA: A Large Collection of Instruction Tuning Data for Echocardiogram Reports
NeurIPS SafeGenAI 2024
null
null
null
cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
We introduce a novel question-answering (QA) dataset using echocardiogram reports sourced from the Medical Information Mart for Intensive Care database. This dataset is specifically designed to enhance QA systems in cardiology, consisting of 771,244 QA pairs addressing a wide array of cardiac abnormalities and their severity. We compare large language models (LLMs), including open-source and biomedical-specific models for zero-shot evaluation, and closed-source models for zero-shot and three-shot evaluation. Our results show that fine-tuning LLMs improves performance across various QA metrics, validating the value of our dataset. Clinicians also qualitatively evaluate the best-performing model to assess the LLM responses for correctness. Further, we conduct fine-grained fairness audits to assess the bias-performance trade-off of LLMs across various social determinants of health. Our objective is to propel the field forward by establishing a benchmark for LLM AI agents aimed at supporting clinicians with cardiac differential diagnoses, thereby reducing the documentation burden that contributes to clinician burnout and enabling healthcare professionals to focus more on patient care.
[ { "version": "v1", "created": "Tue, 4 Mar 2025 07:45:45 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 03:29:31 GMT" } ]
2025-03-07T00:00:00
[ [ "Moukheiber", "Lama", "" ], [ "Moukheiber", "Mira", "" ], [ "Moukheiber", "Dana", "" ], [ "Ju", "Jae-Woo", "" ], [ "Lee", "Hyung-Chul", "" ] ]
TITLE: EchoQA: A Large Collection of Instruction Tuning Data for Echocardiogram Reports ABSTRACT: We introduce a novel question-answering (QA) dataset using echocardiogram reports sourced from the Medical Information Mart for Intensive Care database. This dataset is specifically designed to enhance QA systems in cardiology, consisting of 771,244 QA pairs addressing a wide array of cardiac abnormalities and their severity. We compare large language models (LLMs), including open-source and biomedical-specific models for zero-shot evaluation, and closed-source models for zero-shot and three-shot evaluation. Our results show that fine-tuning LLMs improves performance across various QA metrics, validating the value of our dataset. Clinicians also qualitatively evaluate the best-performing model to assess the LLM responses for correctness. Further, we conduct fine-grained fairness audits to assess the bias-performance trade-off of LLMs across various social determinants of health. Our objective is to propel the field forward by establishing a benchmark for LLM AI agents aimed at supporting clinicians with cardiac differential diagnoses, thereby reducing the documentation burden that contributes to clinician burnout and enabling healthcare professionals to focus more on patient care.
new_dataset
0.959383
2503.02448
Qiyi Wang
Qiyi Wang, Yinning Shao, Yunlong Ma, Min Liu
NodeNAS: Node-Specific Graph Neural Architecture Search for Out-of-Distribution Generalization
Accepted by DASFAA2025
null
null
null
cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph neural architecture search (GraphNAS) has demonstrated advantages in mitigating performance degradation of graph neural networks (GNNs) due to distribution shifts. Recent approaches introduce weight sharing across tailored architectures, generating unique GNN architectures for each graph end-to-end. However, existing GraphNAS methods do not account for distribution patterns across different graphs and heavily rely on extensive training data. With sparse or single training graphs, these methods struggle to discover optimal mappings between graphs and architectures, failing to generalize to out-of-distribution (OOD) data. In this paper, we propose node-specific graph neural architecture search (NodeNAS), which aims to tailor distinct aggregation methods for different nodes through disentangling node topology and graph distribution with limited datasets. We further propose an adaptive aggregation attention based Multi-dim NodeNAS method (MNNAS), which learns a node-specific architecture customizer with good generalizability. Specifically, we extend the vertical depth of the search space, supporting simultaneous node-specific architecture customization across multiple dimensions. Moreover, we model the power-law distribution of node degrees under varying assortativity, encoding structure-invariant information to guide architecture customization across each dimension. Extensive experiments across supervised and unsupervised tasks demonstrate that MNNAS surpasses state-of-the-art algorithms and achieves excellent OOD generalization.
[ { "version": "v1", "created": "Tue, 4 Mar 2025 09:45:27 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 02:31:06 GMT" } ]
2025-03-07T00:00:00
[ [ "Wang", "Qiyi", "" ], [ "Shao", "Yinning", "" ], [ "Ma", "Yunlong", "" ], [ "Liu", "Min", "" ] ]
TITLE: NodeNAS: Node-Specific Graph Neural Architecture Search for Out-of-Distribution Generalization ABSTRACT: Graph neural architecture search (GraphNAS) has demonstrated advantages in mitigating performance degradation of graph neural networks (GNNs) due to distribution shifts. Recent approaches introduce weight sharing across tailored architectures, generating unique GNN architectures for each graph end-to-end. However, existing GraphNAS methods do not account for distribution patterns across different graphs and heavily rely on extensive training data. With sparse or single training graphs, these methods struggle to discover optimal mappings between graphs and architectures, failing to generalize to out-of-distribution (OOD) data. In this paper, we propose node-specific graph neural architecture search (NodeNAS), which aims to tailor distinct aggregation methods for different nodes through disentangling node topology and graph distribution with limited datasets. We further propose an adaptive aggregation attention based Multi-dim NodeNAS method (MNNAS), which learns a node-specific architecture customizer with good generalizability. Specifically, we extend the vertical depth of the search space, supporting simultaneous node-specific architecture customization across multiple dimensions. Moreover, we model the power-law distribution of node degrees under varying assortativity, encoding structure-invariant information to guide architecture customization across each dimension. Extensive experiments across supervised and unsupervised tasks demonstrate that MNNAS surpasses state-of-the-art algorithms and achieves excellent OOD generalization.
no_new_dataset
0.951639
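As a loose illustration of the node-specific aggregation idea in the NodeNAS record above, the sketch below lets each node softly select among three candidate aggregators (mean, sum, max over neighbors) via a learned gate. The candidate set, dense adjacency, and gating layer are assumptions for illustration, not the paper's actual search space.

```python
import torch
import torch.nn as nn

def node_specific_aggregate(x, adj, gate):
    """x: (N, D) node feats; adj: (N, N) dense 0/1 adjacency; gate: nn.Linear(D, 3)."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    mean_agg = adj @ x / deg
    sum_agg = adj @ x
    # max over neighbors; assumes every node has at least one neighbor
    max_agg = torch.where(adj.bool()[:, :, None], x[None, :, :],
                          torch.tensor(float("-inf"))).max(dim=1).values
    cands = torch.stack([mean_agg, sum_agg, max_agg], dim=1)  # (N, 3, D)
    w = torch.softmax(gate(x), dim=-1)                        # per-node weights
    return (w[:, :, None] * cands).sum(dim=1)

# Toy usage with self-loops so no node is isolated.
gate = nn.Linear(16, 3)
x = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
adj.fill_diagonal_(1.0)
out = node_specific_aggregate(x, adj, gate)  # (5, 16)
```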
2503.02660
Ivan Sipiran
Isaac Aguirre, Ivan Sipiran, Gabriel Monta\~nana
A dataset-free approach for self-supervised learning of 3D reflectional symmetries
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we explore a self-supervised model that learns to detect the symmetry of a single object without requiring a dataset, relying solely on the input object itself. We hypothesize that the symmetry of an object can be determined by its intrinsic features, eliminating the need for large datasets during training. Additionally, we design a self-supervised learning strategy that removes the necessity of ground truth labels. These two key elements make our approach both effective and efficient, addressing the prohibitive costs associated with constructing large, labeled datasets for this task. The novelty of our method lies in computing features for each point on the object based on the idea that symmetric points should exhibit similar visual appearances. To achieve this, we leverage features extracted from a foundational image model to compute a visual descriptor for the points. This approach equips the point cloud with visual features that facilitate the optimization of our self-supervised model. Experimental results demonstrate that our method surpasses the state-of-the-art models trained on large datasets. Furthermore, our model is more efficient, effective, and operates with minimal computational and data resources.
[ { "version": "v1", "created": "Tue, 4 Mar 2025 14:22:08 GMT" }, { "version": "v2", "created": "Wed, 5 Mar 2025 19:36:48 GMT" } ]
2025-03-07T00:00:00
[ [ "Aguirre", "Isaac", "" ], [ "Sipiran", "Ivan", "" ], [ "Montañana", "Gabriel", "" ] ]
TITLE: A dataset-free approach for self-supervised learning of 3D reflectional symmetries ABSTRACT: In this paper, we explore a self-supervised model that learns to detect the symmetry of a single object without requiring a dataset, relying solely on the input object itself. We hypothesize that the symmetry of an object can be determined by its intrinsic features, eliminating the need for large datasets during training. Additionally, we design a self-supervised learning strategy that removes the necessity of ground truth labels. These two key elements make our approach both effective and efficient, addressing the prohibitive costs associated with constructing large, labeled datasets for this task. The novelty of our method lies in computing features for each point on the object based on the idea that symmetric points should exhibit similar visual appearances. To achieve this, we leverage features extracted from a foundational image model to compute a visual descriptor for the points. This approach equips the point cloud with visual features that facilitate the optimization of our self-supervised model. Experimental results demonstrate that our method surpasses the state-of-the-art models trained on large datasets. Furthermore, our model is more efficient, effective, and operates with minimal computational and data resources.
no_new_dataset
0.94545
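The record above rests on one testable idea: symmetric points should carry similar visual descriptors. A toy version of that scoring, under assumptions of our own (plane through the origin parameterized by its normal, L2-normalized descriptors, brute-force nearest neighbors), might look like this:

```python
import numpy as np

def reflect(points, normal, offset=0.0):
    """Mirror (N, 3) points across the plane with unit normal and offset."""
    normal = normal / np.linalg.norm(normal)
    dist = points @ normal - offset
    return points - 2.0 * dist[:, None] * normal[None, :]

def symmetry_score(points, feats, normal):
    """Higher when mirrored points land near originals with similar descriptors."""
    mirrored = reflect(points, normal)
    d = np.linalg.norm(mirrored[:, None, :] - points[None, :, :], axis=-1)
    nn = d.argmin(axis=1)                  # nearest original point per mirror
    sim = (feats * feats[nn]).sum(axis=1)  # cosine agreement (unit descriptors)
    return sim.mean()
```

A candidate reflection plane would then be optimized or searched to maximize this score; the self-supervised objective in the paper is more elaborate.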
2503.02844
Paul Janson
Vaibhav Singh, Paul Janson, Paria Mehrbod, Adam Ibrahim, Irina Rish, Eugene Belilovsky, Benjamin Th\'erien
Beyond Cosine Decay: On the effectiveness of Infinite Learning Rate Schedule for Continual Pre-training
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
The ever-growing availability of unlabeled data presents both opportunities and challenges for training artificial intelligence systems. While self-supervised learning (SSL) has emerged as a powerful paradigm for extracting meaningful representations from vast amounts of unlabeled data, existing methods still struggle to adapt to the non-stationary, non-IID nature of real-world data streams without forgetting previously learned knowledge. Recent works have adopted a repeated cosine annealing schedule for large-scale continual pre-training; however, these schedules (1) inherently cause forgetting during the re-warming phase and (2) have not been systematically compared to existing continual SSL methods. In this work, we systematically compare the widely used cosine schedule with the recently proposed infinite learning rate schedule and empirically find the latter to be a more effective alternative. Our extensive empirical evaluation across diverse image and language datasets demonstrates that the infinite learning rate schedule consistently enhances continual pre-training performance compared to a repeated cosine decay without being restricted to a fixed iteration budget. For instance, in a small-scale MAE pre-training setup, it outperforms several strong baselines from the literature. We then scale up our experiments to larger MAE pre-training and autoregressive language model pre-training. Our results show that the infinite learning rate schedule remains effective at scale, surpassing repeated cosine decay for both MAE pre-training and zero-shot LM benchmarks.
[ { "version": "v1", "created": "Tue, 4 Mar 2025 18:15:57 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 00:17:08 GMT" } ]
2025-03-07T00:00:00
[ [ "Singh", "Vaibhav", "" ], [ "Janson", "Paul", "" ], [ "Mehrbod", "Paria", "" ], [ "Ibrahim", "Adam", "" ], [ "Rish", "Irina", "" ], [ "Belilovsky", "Eugene", "" ], [ "Thérien", "Benjamin", "" ] ]
TITLE: Beyond Cosine Decay: On the effectiveness of Infinite Learning Rate Schedule for Continual Pre-training ABSTRACT: The ever-growing availability of unlabeled data presents both opportunities and challenges for training artificial intelligence systems. While self-supervised learning (SSL) has emerged as a powerful paradigm for extracting meaningful representations from vast amounts of unlabeled data, existing methods still struggle to adapt to the non-stationary, non-IID nature of real-world data streams without forgetting previously learned knowledge. Recent works have adopted a repeated cosine annealing schedule for large-scale continual pre-training; however, these schedules (1) inherently cause forgetting during the re-warming phase and (2) have not been systematically compared to existing continual SSL methods. In this work, we systematically compare the widely used cosine schedule with the recently proposed infinite learning rate schedule and empirically find the latter to be a more effective alternative. Our extensive empirical evaluation across diverse image and language datasets demonstrates that the infinite learning rate schedule consistently enhances continual pre-training performance compared to a repeated cosine decay without being restricted to a fixed iteration budget. For instance, in a small-scale MAE pre-training setup, it outperforms several strong baselines from the literature. We then scale up our experiments to larger MAE pre-training and autoregressive language model pre-training. Our results show that the infinite learning rate schedule remains effective at scale, surpassing repeated cosine decay for both MAE pre-training and zero-shot LM benchmarks.
no_new_dataset
0.950365
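The two learning-rate schedules compared in the record above are easy to state in code. Below is a hedged sketch: a standard warmup-plus-cosine decay versus an "infinite" schedule (warmup, brief cooldown, then an indefinitely long constant phase, with final annealing applied only when a checkpoint is finalized). Phase lengths and rates are illustrative assumptions, not the paper's values.

```python
import math

def cosine_lr(step, total_steps, peak=3e-4, floor=3e-5, warmup=100):
    """Warmup then cosine decay; tied to a fixed iteration budget."""
    if step < warmup:
        return peak * step / warmup
    t = (step - warmup) / max(1, total_steps - warmup)
    return floor + 0.5 * (peak - floor) * (1 + math.cos(math.pi * t))

def infinite_lr(step, peak=3e-4, constant=1e-4, warmup=100, cooldown=400):
    """Warmup, cooldown to a constant rate, then stay there indefinitely."""
    if step < warmup:
        return peak * step / warmup
    if step < warmup + cooldown:  # linear decay from peak to the constant rate
        t = (step - warmup) / cooldown
        return constant + (peak - constant) * (1 - t)
    return constant  # continual pre-training can resume here without re-warming
```

The design point the paper stresses: the constant phase avoids the re-warming spike that causes forgetting when a cosine schedule is restarted for a new data stream.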
2503.02910
Wenqi Guo
Wenqi Guo, Yiyang Du, Shan Du
LangGas: Introducing Language in Selective Zero-Shot Background Subtraction for Semi-Transparent Gas Leak Detection with a New Dataset
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gas leakage poses a significant hazard that requires prevention. Traditionally, human inspection has been used for detection, a slow and labour-intensive process. Recent research has applied machine learning techniques to this problem, yet there remains a shortage of high-quality, publicly available datasets. This paper introduces a synthetic dataset featuring diverse backgrounds, interfering foreground objects, diverse leak locations, and precise segmentation ground truth. We propose a zero-shot method that combines background subtraction, zero-shot object detection, filtering, and segmentation to leverage this dataset. Experimental results indicate that our approach significantly outperforms baseline methods based solely on background subtraction and zero-shot object detection with segmentation, reaching an IoU of 69\% overall. We also present an analysis of various prompt configurations and threshold settings to provide deeper insights into the performance of our method. The code and dataset will be released after publication.
[ { "version": "v1", "created": "Tue, 4 Mar 2025 06:17:17 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 05:19:44 GMT" } ]
2025-03-07T00:00:00
[ [ "Guo", "Wenqi", "" ], [ "Du", "Yiyang", "" ], [ "Du", "Shan", "" ] ]
TITLE: LangGas: Introducing Language in Selective Zero-Shot Background Subtraction for Semi-Transparent Gas Leak Detection with a New Dataset ABSTRACT: Gas leakage poses a significant hazard that requires prevention. Traditionally, human inspection has been used for detection, a slow and labour-intensive process. Recent research has applied machine learning techniques to this problem, yet there remains a shortage of high-quality, publicly available datasets. This paper introduces a synthetic dataset featuring diverse backgrounds, interfering foreground objects, diverse leak locations, and precise segmentation ground truth. We propose a zero-shot method that combines background subtraction, zero-shot object detection, filtering, and segmentation to leverage this dataset. Experimental results indicate that our approach significantly outperforms baseline methods based solely on background subtraction and zero-shot object detection with segmentation, reaching an IoU of 69\% overall. We also present an analysis of various prompt configurations and threshold settings to provide deeper insights into the performance of our method. The code and dataset will be released after publication.
new_dataset
0.956917
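The LangGas record above names a concrete pipeline: background subtraction, zero-shot detection, filtering, and segmentation. A minimal sketch of the classical front end is below, assuming OpenCV; all thresholds are illustrative, and the text-prompted zero-shot detection/segmentation stages (which would consume these candidate boxes) are out of scope here.

```python
import cv2

# MOG2 background model; history/varThreshold are assumed tuning values.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

def candidate_regions(frame, min_area=150):
    """Return bounding boxes of moving regions as leak candidates."""
    mask = subtractor.apply(frame)                               # foreground mask
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    mask = cv2.medianBlur(mask, 5)                               # suppress speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```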
2503.03125
Ziying Song
Ziying Song, Caiyan Jia, Lin Liu, Hongyu Pan, Yongchang Zhang, Junming Wang, Xingyu Zhang, Shaoqing Xu, Lei Yang, Yadan Luo
Don't Shake the Wheel: Momentum-Aware Planning in End-to-End Autonomous Driving
16 pages, 8 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
End-to-end autonomous driving frameworks enable seamless integration of perception and planning but often rely on one-shot trajectory prediction, which may lead to unstable control and vulnerability to occlusions in single-frame perception. To address this, we propose the Momentum-Aware Driving (MomAD) framework, which introduces trajectory momentum and perception momentum to stabilize and refine trajectory predictions. MomAD comprises two core components: (1) Topological Trajectory Matching (TTM) employs Hausdorff Distance to select the optimal planning query that aligns with prior paths to ensure coherence; (2) Momentum Planning Interactor (MPI) cross-attends the selected planning query with historical queries to expand static and dynamic perception files. This enriched query, in turn, helps regenerate a long-horizon trajectory and reduce collision risks. To mitigate noise arising from dynamic environments and detection errors, we introduce robust instance denoising during training, enabling the planning model to focus on critical signals and improve its robustness. We also propose a novel Trajectory Prediction Consistency (TPC) metric to quantitatively assess planning stability. Experiments on the nuScenes dataset demonstrate that MomAD achieves superior long-term consistency (>=3s) compared to SOTA methods. Moreover, evaluations on the curated Turning-nuScenes dataset show that MomAD reduces the collision rate by 26% and improves TPC by 0.97m (33.45%) over a 6s prediction horizon, while closed-loop evaluation on Bench2Drive demonstrates an up to 16.3% improvement in success rate.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 02:43:52 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 09:53:10 GMT" } ]
2025-03-07T00:00:00
[ [ "Song", "Ziying", "" ], [ "Jia", "Caiyan", "" ], [ "Liu", "Lin", "" ], [ "Pan", "Hongyu", "" ], [ "Zhang", "Yongchang", "" ], [ "Wang", "Junming", "" ], [ "Zhang", "Xingyu", "" ], [ "Xu", "Shaoqing", "" ], [ "Yang", "Lei", "" ], [ "Luo", "Yadan", "" ] ]
TITLE: Don't Shake the Wheel: Momentum-Aware Planning in End-to-End Autonomous Driving ABSTRACT: End-to-end autonomous driving frameworks enable seamless integration of perception and planning but often rely on one-shot trajectory prediction, which may lead to unstable control and vulnerability to occlusions in single-frame perception. To address this, we propose the Momentum-Aware Driving (MomAD) framework, which introduces trajectory momentum and perception momentum to stabilize and refine trajectory predictions. MomAD comprises two core components: (1) Topological Trajectory Matching (TTM) employs Hausdorff Distance to select the optimal planning query that aligns with prior paths to ensure coherence; (2) Momentum Planning Interactor (MPI) cross-attends the selected planning query with historical queries to expand static and dynamic perception files. This enriched query, in turn, helps regenerate a long-horizon trajectory and reduce collision risks. To mitigate noise arising from dynamic environments and detection errors, we introduce robust instance denoising during training, enabling the planning model to focus on critical signals and improve its robustness. We also propose a novel Trajectory Prediction Consistency (TPC) metric to quantitatively assess planning stability. Experiments on the nuScenes dataset demonstrate that MomAD achieves superior long-term consistency (>=3s) compared to SOTA methods. Moreover, evaluations on the curated Turning-nuScenes dataset show that MomAD reduces the collision rate by 26% and improves TPC by 0.97m (33.45%) over a 6s prediction horizon, while closed-loop evaluation on Bench2Drive demonstrates an up to 16.3% improvement in success rate.
no_new_dataset
0.949529
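The TTM component in the record above reduces to a well-defined computation: pick the candidate plan with the smallest Hausdorff distance to the prior path. A minimal sketch, assuming 2D waypoint arrays and brute-force distances:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a: (N, 2), b: (M, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def select_consistent_plan(candidates, prior_path):
    """Choose the candidate trajectory most consistent with the prior path."""
    return min(candidates, key=lambda traj: hausdorff(traj, prior_path))

# Toy usage: three candidate 10-waypoint plans versus the previous plan.
prior = np.cumsum(np.ones((10, 2)) * 0.5, axis=0)
cands = [prior + np.random.randn(10, 2) * s for s in (0.05, 0.5, 2.0)]
best = select_consistent_plan(cands, prior)  # almost surely the s=0.05 one
```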
2503.03190
Jingzhou Luo
Jingzhou Luo, Yang Liu, Weixing Chen, Zhen Li, Yaowei Wang, Guanbin Li, Liang Lin
DSPNet: Dual-vision Scene Perception for Robust 3D Question Answering
Accepted by CVPR 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
3D Question Answering (3D QA) requires the model to comprehensively understand its situated 3D scene described by the text, then reason about its surrounding environment and answer a question under that situation. However, existing methods usually rely on global scene perception from pure 3D point clouds and overlook the importance of rich local texture details from multi-view images. Moreover, due to the inherent noise in camera poses and complex occlusions, there exist significant feature degradation and reduced feature robustness problems when aligning 3D point clouds with multi-view images. In this paper, we propose a Dual-vision Scene Perception Network (DSPNet) to comprehensively integrate multi-view and point cloud features to improve robustness in 3D QA. Our Text-guided Multi-view Fusion (TGMF) module prioritizes image views that closely match the semantic content of the text. To adaptively fuse back-projected multi-view images with point cloud features, we design the Adaptive Dual-vision Perception (ADVP) module, enhancing 3D scene comprehension. Additionally, our Multimodal Context-guided Reasoning (MCGR) module facilitates robust reasoning by integrating contextual information across visual and linguistic modalities. Experimental results on SQA3D and ScanQA datasets demonstrate the superiority of our DSPNet. Codes will be available at https://github.com/LZ-CH/DSPNet.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 05:13:53 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 03:32:56 GMT" } ]
2025-03-07T00:00:00
[ [ "Luo", "Jingzhou", "" ], [ "Liu", "Yang", "" ], [ "Chen", "Weixing", "" ], [ "Li", "Zhen", "" ], [ "Wang", "Yaowei", "" ], [ "Li", "Guanbin", "" ], [ "Lin", "Liang", "" ] ]
TITLE: DSPNet: Dual-vision Scene Perception for Robust 3D Question Answering ABSTRACT: 3D Question Answering (3D QA) requires the model to comprehensively understand its situated 3D scene described by the text, then reason about its surrounding environment and answer a question under that situation. However, existing methods usually rely on global scene perception from pure 3D point clouds and overlook the importance of rich local texture details from multi-view images. Moreover, due to the inherent noise in camera poses and complex occlusions, there exist significant feature degradation and reduced feature robustness problems when aligning 3D point clouds with multi-view images. In this paper, we propose a Dual-vision Scene Perception Network (DSPNet) to comprehensively integrate multi-view and point cloud features to improve robustness in 3D QA. Our Text-guided Multi-view Fusion (TGMF) module prioritizes image views that closely match the semantic content of the text. To adaptively fuse back-projected multi-view images with point cloud features, we design the Adaptive Dual-vision Perception (ADVP) module, enhancing 3D scene comprehension. Additionally, our Multimodal Context-guided Reasoning (MCGR) module facilitates robust reasoning by integrating contextual information across visual and linguistic modalities. Experimental results on SQA3D and ScanQA datasets demonstrate the superiority of our DSPNet. Codes will be available at https://github.com/LZ-CH/DSPNet.
no_new_dataset
0.945147
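A toy rendition of the text-guided view prioritization described in the DSPNet record above: weight each view's feature by its similarity to the question embedding before pooling. Embedding sources, dimensions, and the softmax gating are assumptions of this sketch, not the paper's TGMF module.

```python
import torch
import torch.nn.functional as F

def text_guided_fusion(view_feats, text_feat):
    """view_feats: (V, D) per-view features; text_feat: (D,) question feature."""
    scores = view_feats @ text_feat / text_feat.norm()  # text-view affinity
    weights = F.softmax(scores, dim=0)                  # prioritize matching views
    return (weights[:, None] * view_feats).sum(dim=0)   # fused (D,) feature

fused = text_guided_fusion(torch.randn(8, 256), torch.randn(256))
```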
2503.03272
Li Lun
Li Lun, Kunyu Feng, Qinglong Ni, Ling Liang, Yuan Wang, Ying Li, Dunshan Yu, Xiaoxin Cui
Towards Effective and Sparse Adversarial Attack on Spiking Neural Networks via Breaking Invisible Surrogate Gradients
Accepted by CVPR 2025
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spiking neural networks (SNNs) have shown their competence in handling spatial-temporal event-based data with low energy consumption. Similar to conventional artificial neural networks (ANNs), SNNs are also vulnerable to gradient-based adversarial attacks, wherein gradients are calculated by spatial-temporal back-propagation (STBP) and surrogate gradients (SGs). However, the SGs may be invisible for an inference-only model as they do not influence the inference results, and current gradient-based attacks are ineffective for binary dynamic images captured by the dynamic vision sensor (DVS). While some approaches addressed the issue of invisible SGs through universal SGs, their SGs lack a correlation with the victim model, resulting in sub-optimal performance. Moreover, the imperceptibility of existing SNN-based binary attacks is still insufficient. In this paper, we introduce an innovative potential-dependent surrogate gradient (PDSG) method to establish a robust connection between the SG and the model, thereby enhancing the adaptability of adversarial attacks across various models with invisible SGs. Additionally, we propose the sparse dynamic attack (SDA) to effectively attack binary dynamic images. Utilizing a generation-reduction paradigm, SDA can fully optimize the sparsity of adversarial perturbations. Experimental results demonstrate that our PDSG and SDA outperform state-of-the-art SNN-based attacks across various models and datasets. Specifically, our PDSG achieves 100% attack success rate on ImageNet, and our SDA obtains 82% attack success rate by modifying only 0.24% of the pixels on CIFAR10DVS. The code is available at https://github.com/ryime/PDSG-SDA .
[ { "version": "v1", "created": "Wed, 5 Mar 2025 08:52:55 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 13:49:46 GMT" } ]
2025-03-07T00:00:00
[ [ "Lun", "Li", "" ], [ "Feng", "Kunyu", "" ], [ "Ni", "Qinglong", "" ], [ "Liang", "Ling", "" ], [ "Wang", "Yuan", "" ], [ "Li", "Ying", "" ], [ "Yu", "Dunshan", "" ], [ "Cui", "Xiaoxin", "" ] ]
TITLE: Towards Effective and Sparse Adversarial Attack on Spiking Neural Networks via Breaking Invisible Surrogate Gradients ABSTRACT: Spiking neural networks (SNNs) have shown their competence in handling spatial-temporal event-based data with low energy consumption. Similar to conventional artificial neural networks (ANNs), SNNs are also vulnerable to gradient-based adversarial attacks, wherein gradients are calculated by spatial-temporal back-propagation (STBP) and surrogate gradients (SGs). However, the SGs may be invisible for an inference-only model as they do not influence the inference results, and current gradient-based attacks are ineffective for binary dynamic images captured by the dynamic vision sensor (DVS). While some approaches addressed the issue of invisible SGs through universal SGs, their SGs lack a correlation with the victim model, resulting in sub-optimal performance. Moreover, the imperceptibility of existing SNN-based binary attacks is still insufficient. In this paper, we introduce an innovative potential-dependent surrogate gradient (PDSG) method to establish a robust connection between the SG and the model, thereby enhancing the adaptability of adversarial attacks across various models with invisible SGs. Additionally, we propose the sparse dynamic attack (SDA) to effectively attack binary dynamic images. Utilizing a generation-reduction paradigm, SDA can fully optimize the sparsity of adversarial perturbations. Experimental results demonstrate that our PDSG and SDA outperform state-of-the-art SNN-based attacks across various models and datasets. Specifically, our PDSG achieves 100% attack success rate on ImageNet, and our SDA obtains 82% attack success rate by modifying only 0.24% of the pixels on CIFAR10DVS. The code is available at https://github.com/ryime/PDSG-SDA .
no_new_dataset
0.943138
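The record above builds on STBP-style surrogate gradients, which are compact to write down. Below is an illustrative PyTorch spike function with a rectangular surrogate; the specific potential-dependent shaping of PDSG is not reproduced here, so the fixed window width `alpha` is an assumption standing in for the paper's model-dependent choice.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, surrogate gradient in the backward."""
    @staticmethod
    def forward(ctx, membrane_potential, threshold=1.0, alpha=0.5):
        ctx.save_for_backward(membrane_potential)
        ctx.threshold, ctx.alpha = threshold, alpha
        return (membrane_potential >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Rectangular surrogate: gradient flows only near the firing threshold.
        window = (torch.abs(v - ctx.threshold) < ctx.alpha).float()
        return grad_output * window / (2 * ctx.alpha), None, None

v = torch.randn(16, requires_grad=True)
spikes = SpikeFn.apply(v)      # binary spikes, yet differentiable for attacks
spikes.sum().backward()        # gradients usable by a gradient-based attacker
```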
2503.03285
Thuan Duong
Khoi Anh Nguyen, Linh Yen Vu, Thang Dinh Duong, Thuan Nguyen Duong, Huy Thanh Nguyen, Vinh Quang Dinh
Enhancing Vietnamese VQA through Curriculum Learning on Raw and Augmented Text Representations
10 pages, 3 figures, AAAI-25 Workshop on Document Understanding and Intelligence
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual Question Answering (VQA) is a multimodal task requiring reasoning across textual and visual inputs, which becomes particularly challenging in low-resource languages like Vietnamese due to linguistic variability and the lack of high-quality datasets. Traditional methods often rely heavily on extensive annotated datasets, computationally expensive pipelines, and large pre-trained models, specifically in the domain of Vietnamese VQA, limiting their applicability in such scenarios. To address these limitations, we propose a training framework that combines a paraphrase-based feature augmentation module with a dynamic curriculum learning strategy. Explicitly, augmented samples are considered "easy" while raw samples are regarded as "hard". The framework then utilizes a mechanism that dynamically adjusts the ratio of easy to hard samples during training, progressively modifying the same dataset to increase its difficulty level. By enabling gradual adaptation to task complexity, this approach helps the Vietnamese VQA model generalize well, thus improving overall performance. Experimental results show consistent improvements on the OpenViVQA dataset and mixed outcomes on the ViVQA dataset, highlighting both the potential and challenges of our approach in advancing VQA for the Vietnamese language.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 09:12:16 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 12:42:37 GMT" } ]
2025-03-07T00:00:00
[ [ "Nguyen", "Khoi Anh", "" ], [ "Vu", "Linh Yen", "" ], [ "Duong", "Thang Dinh", "" ], [ "Duong", "Thuan Nguyen", "" ], [ "Nguyen", "Huy Thanh", "" ], [ "Dinh", "Vinh Quang", "" ] ]
TITLE: Enhancing Vietnamese VQA through Curriculum Learning on Raw and Augmented Text Representations ABSTRACT: Visual Question Answering (VQA) is a multimodal task requiring reasoning across textual and visual inputs, which becomes particularly challenging in low-resource languages like Vietnamese due to linguistic variability and the lack of high-quality datasets. Traditional methods often rely heavily on extensive annotated datasets, computationally expensive pipelines, and large pre-trained models, specifically in the domain of Vietnamese VQA, limiting their applicability in such scenarios. To address these limitations, we propose a training framework that combines a paraphrase-based feature augmentation module with a dynamic curriculum learning strategy. Explicitly, augmented samples are considered "easy" while raw samples are regarded as "hard". The framework then utilizes a mechanism that dynamically adjusts the ratio of easy to hard samples during training, progressively modifying the same dataset to increase its difficulty level. By enabling gradual adaptation to task complexity, this approach helps the Vietnamese VQA model generalize well, thus improving overall performance. Experimental results show consistent improvements on the OpenViVQA dataset and mixed outcomes on the ViVQA dataset, highlighting both the potential and challenges of our approach in advancing VQA for the Vietnamese language.
no_new_dataset
0.946597
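The dynamic easy-to-hard ratio in the record above is the mechanically interesting part. A minimal sketch, assuming a linear pacing function and pools large enough to sample from (both assumptions of this illustration, not the paper's schedule):

```python
import random

def sample_batch(easy_pool, hard_pool, epoch, total_epochs, batch_size=32):
    """Mix augmented ('easy') and raw ('hard') examples with a shifting ratio."""
    hard_ratio = min(1.0, epoch / max(1, total_epochs - 1))  # 0 -> 1 over training
    n_hard = int(round(batch_size * hard_ratio))
    batch = random.sample(hard_pool, n_hard) \
          + random.sample(easy_pool, batch_size - n_hard)
    random.shuffle(batch)
    return batch

# Epoch 0 draws only easy samples; the final epoch draws only hard ones.
easy, hard = list(range(1000)), list(range(1000, 2000))
first = sample_batch(easy, hard, epoch=0, total_epochs=10)
last = sample_batch(easy, hard, epoch=9, total_epochs=10)
```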
2503.03370
Nimra Dilawar
Nimra Dilawar, Sara Nadeem, Javed Iqbal, Waqas Sultani, Mohsen Ali
MIAdapt: Source-free Few-shot Domain Adaptive Object Detection for Microscopic Images
6 pages, 5 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Existing generic unsupervised domain adaptation approaches require access to both a large labeled source dataset and a sufficient unlabeled target dataset during adaptation. However, collecting a large dataset, even if unlabeled, is a challenging and expensive endeavor, especially in medical imaging. In addition, constraints such as privacy issues can result in cases where source data is unavailable. Taking in consideration these challenges, we propose MIAdapt, an adaptive approach for Microscopic Imagery Adaptation as a solution for Source-free Few-shot Domain Adaptive Object detection (SF-FSDA). We also define two competitive baselines (1) Faster-FreeShot and (2) MT-FreeShot. Extensive experiments on the challenging M5-Malaria and Raabin-WBC datasets validate the effectiveness of MIAdapt. Without using any image from the source domain MIAdapt surpasses state-of-the-art source-free UDA (SF-UDA) methods by +21.3% mAP and few-shot domain adaptation (FSDA) approaches by +4.7% mAP on Raabin-WBC. Our code and models will be publicly available.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 10:46:03 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 10:41:28 GMT" } ]
2025-03-07T00:00:00
[ [ "Dilawar", "Nimra", "" ], [ "Nadeem", "Sara", "" ], [ "Iqbal", "Javed", "" ], [ "Sultani", "Waqas", "" ], [ "Ali", "Mohsen", "" ] ]
TITLE: MIAdapt: Source-free Few-shot Domain Adaptive Object Detection for Microscopic Images ABSTRACT: Existing generic unsupervised domain adaptation approaches require access to both a large labeled source dataset and a sufficient unlabeled target dataset during adaptation. However, collecting a large dataset, even if unlabeled, is a challenging and expensive endeavor, especially in medical imaging. In addition, constraints such as privacy issues can result in cases where source data is unavailable. Taking in consideration these challenges, we propose MIAdapt, an adaptive approach for Microscopic Imagery Adaptation as a solution for Source-free Few-shot Domain Adaptive Object detection (SF-FSDA). We also define two competitive baselines (1) Faster-FreeShot and (2) MT-FreeShot. Extensive experiments on the challenging M5-Malaria and Raabin-WBC datasets validate the effectiveness of MIAdapt. Without using any image from the source domain MIAdapt surpasses state-of-the-art source-free UDA (SF-UDA) methods by +21.3% mAP and few-shot domain adaptation (FSDA) approaches by +4.7% mAP on Raabin-WBC. Our code and models will be publicly available.
no_new_dataset
0.948822
2503.03454
Chih-Hsun Lin
Ting-Wei Liao, Chih-Hsun Lin, Yu-Lin Tsai, Takao Murakami, Chia-Mu Yu, Jun Sakuma, Chun-Ying Huang, Hiroaki Kikuchi
Data Poisoning Attacks to Locally Differentially Private Range Query Protocols
null
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Local Differential Privacy (LDP) has been widely adopted to protect user privacy in decentralized data collection. However, recent studies have revealed that LDP protocols are vulnerable to data poisoning attacks, where malicious users manipulate their reported data to distort aggregated results. In this work, we present the first study on data poisoning attacks targeting LDP range query protocols, focusing on both tree-based and grid-based approaches. We identify three key challenges in executing such attacks, including crafting consistent and effective fake data, maintaining data consistency across levels or grids, and preventing server detection. To address the first two challenges, we propose novel attack methods that are provably optimal, including a tree-based attack and a grid-based attack, designed to manipulate range query results with high effectiveness. \textbf{Our key finding is that the common post-processing procedure, Norm-Sub, in LDP range query protocols can help the attacker massively amplify their attack effectiveness.} In addition, we study a potential countermeasure, but also propose an adaptive attack capable of evading this defense to address the third challenge. We evaluate our methods through theoretical analysis and extensive experiments on synthetic and real-world datasets. Our results show that the proposed attacks can significantly amplify estimations for arbitrary range queries by manipulating a small fraction of users, exerting 5-10x more influence on the estimation than a normal user.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 12:40:34 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 14:25:03 GMT" } ]
2025-03-07T00:00:00
[ [ "Liao", "Ting-Wei", "" ], [ "Lin", "Chih-Hsun", "" ], [ "Tsai", "Yu-Lin", "" ], [ "Murakami", "Takao", "" ], [ "Yu", "Chia-Mu", "" ], [ "Sakuma", "Jun", "" ], [ "Huang", "Chun-Ying", "" ], [ "Kikuchi", "Hiroaki", "" ] ]
TITLE: Data Poisoning Attacks to Locally Differentially Private Range Query Protocols ABSTRACT: Local Differential Privacy (LDP) has been widely adopted to protect user privacy in decentralized data collection. However, recent studies have revealed that LDP protocols are vulnerable to data poisoning attacks, where malicious users manipulate their reported data to distort aggregated results. In this work, we present the first study on data poisoning attacks targeting LDP range query protocols, focusing on both tree-based and grid-based approaches. We identify three key challenges in executing such attacks, including crafting consistent and effective fake data, maintaining data consistency across levels or grids, and preventing server detection. To address the first two challenges, we propose novel attack methods that are provably optimal, including a tree-based attack and a grid-based attack, designed to manipulate range query results with high effectiveness. \textbf{Our key finding is that the common post-processing procedure, Norm-Sub, in LDP range query protocols can help the attacker massively amplify their attack effectiveness.} In addition, we study a potential countermeasure, but also propose an adaptive attack capable of evading this defense to address the third challenge. We evaluate our methods through theoretical analysis and extensive experiments on synthetic and real-world datasets. Our results show that the proposed attacks can significantly amplify estimations for arbitrary range queries by manipulating a small fraction of users, exerting 5-10x more influence on the estimation than a normal user.
no_new_dataset
0.945751
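The Norm-Sub post-processing named in the bolded finding above is a short, standard procedure: subtract a common delta from the positive frequency estimates so they sum to 1, zero out negatives, and repeat until consistent. The sketch below is the generic procedure from the LDP literature, shown for intuition about how poisoned mass gets redistributed; it is not the authors' code.

```python
import numpy as np

def norm_sub(est, iters=100):
    """Project noisy frequency estimates onto a valid probability distribution."""
    est = np.asarray(est, dtype=float).copy()
    for _ in range(iters):
        pos = est > 0
        if not pos.any():
            break
        delta = (est[pos].sum() - 1.0) / pos.sum()  # common shift over positives
        est[pos] -= delta
        est[est < 0] = 0.0                          # clip, then re-check the sum
        if abs(est.sum() - 1.0) < 1e-12:
            break
    return est

print(norm_sub([0.5, 0.4, 0.3, -0.1]))  # a valid distribution summing to 1
```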
2503.03465
Fei Zhu
ChenTong Wang, Jincheng Gao, Fei Zhu, Abderrahim Halimi, C\'edric Richard
DTU-Net: A Multi-Scale Dilated Transformer Network for Nonlinear Hyperspectral Unmixing
null
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Transformers have shown significant success in hyperspectral unmixing (HU). However, challenges remain. While multi-scale and long-range spatial correlations are essential in unmixing tasks, current Transformer-based unmixing networks, built on Vision Transformer (ViT) or Swin-Transformer, struggle to capture them effectively. Additionally, current Transformer-based unmixing networks rely on the linear mixing model, which lacks the flexibility to accommodate scenarios where nonlinear effects are significant. To address these limitations, we propose a multi-scale Dilated Transformer-based unmixing network for nonlinear HU (DTU-Net). The encoder employs two branches. The first one performs multi-scale spatial feature extraction using Multi-Scale Dilated Attention (MSDA) in the Dilated Transformer, which varies dilation rates across attention heads to capture long-range and multi-scale spatial correlations. The second one performs spectral feature extraction utilizing 3D-CNNs with channel attention. The outputs from both branches are then fused to integrate multi-scale spatial and spectral information, which is subsequently transformed to estimate the abundances. The decoder is designed to accommodate both linear and nonlinear mixing scenarios. Its interpretability is enhanced by explicitly modeling the relationships between endmembers, abundances, and nonlinear coefficients in accordance with the polynomial post-nonlinear mixing model (PPNMM). Experiments on synthetic and real datasets validate the effectiveness of the proposed DTU-Net compared to PPNMM-derived methods and several advanced unmixing networks.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 12:56:33 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 02:55:33 GMT" } ]
2025-03-07T00:00:00
[ [ "Wang", "ChenTong", "" ], [ "Gao", "Jincheng", "" ], [ "Zhu", "Fei", "" ], [ "Halimi", "Abderrahim", "" ], [ "Richard", "Cédric", "" ] ]
TITLE: DTU-Net: A Multi-Scale Dilated Transformer Network for Nonlinear Hyperspectral Unmixing ABSTRACT: Transformers have shown significant success in hyperspectral unmixing (HU). However, challenges remain. While multi-scale and long-range spatial correlations are essential in unmixing tasks, current Transformer-based unmixing networks, built on Vision Transformer (ViT) or Swin-Transformer, struggle to capture them effectively. Additionally, current Transformer-based unmixing networks rely on the linear mixing model, which lacks the flexibility to accommodate scenarios where nonlinear effects are significant. To address these limitations, we propose a multi-scale Dilated Transformer-based unmixing network for nonlinear HU (DTU-Net). The encoder employs two branches. The first one performs multi-scale spatial feature extraction using Multi-Scale Dilated Attention (MSDA) in the Dilated Transformer, which varies dilation rates across attention heads to capture long-range and multi-scale spatial correlations. The second one performs spectral feature extraction utilizing 3D-CNNs with channel attention. The outputs from both branches are then fused to integrate multi-scale spatial and spectral information, which is subsequently transformed to estimate the abundances. The decoder is designed to accommodate both linear and nonlinear mixing scenarios. Its interpretability is enhanced by explicitly modeling the relationships between endmembers, abundances, and nonlinear coefficients in accordance with the polynomial post-nonlinear mixing model (PPNMM). Experiments on synthetic and real datasets validate the effectiveness of the proposed DTU-Net compared to PPNMM-derived methods and several advanced unmixing networks.
no_new_dataset
0.949389
2503.03784
Kannan Ashwin Viswanathan
Ashwin Viswanathan Kannan, Madhumitha Ganesan
Neural Models of Task Adaptation: A Tutorial on Spiking Networks for Executive Control
6 pages
null
null
null
q-bio.NC cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Understanding cognitive flexibility and task-switching mechanisms in neural systems requires biologically plausible computational models. This tutorial presents a step-by-step approach to constructing a spiking neural network (SNN) that simulates task-switching dynamics within the cognitive control network. The model incorporates biologically realistic features, including lateral inhibition, adaptive synaptic weights through unsupervised Spike Timing-Dependent Plasticity (STDP), and precise neuronal parameterization within physiologically relevant ranges. The SNN is implemented using Leaky Integrate-and-Fire (LIF) neurons, which represent excitatory (glutamatergic) and inhibitory (GABAergic) populations. We utilize two real-world datasets as tasks, demonstrating how the network learns and dynamically switches between them. Experimental design follows cognitive psychology paradigms to analyze neural adaptation, synaptic weight modifications, and emergent behaviors such as Long-Term Potentiation (LTP), Long-Term Depression (LTD), and Task-Set Reconfiguration (TSR). Through a series of structured experiments, this tutorial illustrates how variations in task-switching intervals affect performance and multitasking efficiency. The results align with empirically observed neuronal responses, offering insights into the computational underpinnings of executive function. By following this tutorial, researchers can develop and extend biologically inspired SNN models for studying cognitive processes and neural adaptation.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 00:44:34 GMT" } ]
2025-03-07T00:00:00
[ [ "Kannan", "Ashwin Viswanathan", "" ], [ "Ganesan", "Madhumitha", "" ] ]
TITLE: Neural Models of Task Adaptation: A Tutorial on Spiking Networks for Executive Control ABSTRACT: Understanding cognitive flexibility and task-switching mechanisms in neural systems requires biologically plausible computational models. This tutorial presents a step-by-step approach to constructing a spiking neural network (SNN) that simulates task-switching dynamics within the cognitive control network. The model incorporates biologically realistic features, including lateral inhibition, adaptive synaptic weights through unsupervised Spike Timing-Dependent Plasticity (STDP), and precise neuronal parameterization within physiologically relevant ranges. The SNN is implemented using Leaky Integrate-and-Fire (LIF) neurons, which represent excitatory (glutamatergic) and inhibitory (GABAergic) populations. We utilize two real-world datasets as tasks, demonstrating how the network learns and dynamically switches between them. Experimental design follows cognitive psychology paradigms to analyze neural adaptation, synaptic weight modifications, and emergent behaviors such as Long-Term Potentiation (LTP), Long-Term Depression (LTD), and Task-Set Reconfiguration (TSR). Through a series of structured experiments, this tutorial illustrates how variations in task-switching intervals affect performance and multitasking efficiency. The results align with empirically observed neuronal responses, offering insights into the computational underpinnings of executive function. By following this tutorial, researchers can develop and extend biologically inspired SNN models for studying cognitive processes and neural adaptation.
no_new_dataset
0.943971
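The tutorial record above centers on Leaky Integrate-and-Fire dynamics, which fit in a few lines. A compact vectorized LIF update follows; the parameter values are generic textbook choices (millivolt scale), not necessarily the tutorial's, and the STDP weight updates are omitted.

```python
import numpy as np

def lif_step(v, input_current, dt=1.0, tau=20.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0):
    """One Euler step of LIF dynamics: leak toward rest, integrate, spike, reset."""
    v = v + dt / tau * (-(v - v_rest) + input_current)
    spiked = v >= v_thresh
    v = np.where(spiked, v_reset, v)   # hard reset after a spike
    return v, spiked

# Simulate 10 neurons for 100 ms under random excitatory drive.
v = np.full(10, -65.0)
spike_counts = np.zeros(10)
for _ in range(100):
    v, spikes = lif_step(v, input_current=np.random.uniform(0, 30, 10))
    spike_counts += spikes
```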
2503.03787
Gibson Nkhata
Gibson Nkhata, Shi Yin Hong and Susan Gauch
Sarcasm Detection as a Catalyst: Improving Stance Detection with Cross-Target Capabilities
2 pages, 5 figures, published in International Journal On Advances in Intelligent Systems, volume 17, numbers 3 and 4. arXiv admin note: text overlap with arXiv:2503.03172
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Stance Detection (SD) has become a critical area of interest due to its applications in various contexts leading to increased research within NLP. Yet the subtlety and complexity of texts sourced from online platforms, often containing sarcastic language, pose significant challenges for SD algorithms in accurately determining the author's stance. This paper addresses this by employing sarcasm for SD. It also tackles the issue of insufficient annotated data for training SD models on new targets by conducting Cross-Target SD (CTSD). The proposed approach involves fine-tuning BERT and RoBERTa models followed by concatenating additional deep learning layers. The approach is assessed against various State-Of-The-Art baselines for SD, demonstrating superior performance using publicly available datasets. Notably, our model outperforms the best SOTA models on both in-domain SD and CTSD tasks even before the incorporation of sarcasm-detection pre-training. The integration of sarcasm knowledge into the model significantly reduces misclassifications of sarcastic text elements in SD, allowing our model to accurately predict 85% of texts that were previously misclassified without sarcasm-detection pre-training on in-domain SD. This enhancement contributes to an increase in the model's average macro F1-score. The CTSD task achieves performance comparable to that of the in-domain task despite using zero-shot fine-tuning. We also reveal that the success of the transfer-learning framework relies on the correlation between the lexical attributes of sarcasm detection and SD. This study represents the first exploration of sarcasm detection as an intermediate transfer-learning task within the context of SD while also leveraging the concatenation of BERT or RoBERTa with other deep-learning techniques. The proposed approach establishes a foundational baseline for future research in this domain.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 05:27:16 GMT" } ]
2025-03-07T00:00:00
[ [ "Nkhata", "Gibson", "" ], [ "Hong", "Shi Yin", "" ], [ "Gauch", "Susan", "" ] ]
TITLE: Sarcasm Detection as a Catalyst: Improving Stance Detection with Cross-Target Capabilities ABSTRACT: Stance Detection (SD) has become a critical area of interest due to its applications in various contexts leading to increased research within NLP. Yet the subtlety and complexity of texts sourced from online platforms, often containing sarcastic language, pose significant challenges for SD algorithms in accurately determining the author's stance. This paper addresses this by employing sarcasm for SD. It also tackles the issue of insufficient annotated data for training SD models on new targets by conducting Cross-Target SD (CTSD). The proposed approach involves fine-tuning BERT and RoBERTa models followed by concatenating additional deep learning layers. The approach is assessed against various State-Of-The-Art baselines for SD, demonstrating superior performance using publicly available datasets. Notably, our model outperforms the best SOTA models on both in-domain SD and CTSD tasks even before the incorporation of sarcasm-detection pre-training. The integration of sarcasm knowledge into the model significantly reduces misclassifications of sarcastic text elements in SD, allowing our model to accurately predict 85% of texts that were previously misclassified without sarcasm-detection pre-training on in-domain SD. This enhancement contributes to an increase in the model's average macro F1-score. The CTSD task achieves performance comparable to that of the in-domain task despite using zero-shot fine-tuning. We also reveal that the success of the transfer-learning framework relies on the correlation between the lexical attributes of sarcasm detection and SD. This study represents the first exploration of sarcasm detection as an intermediate transfer-learning task within the context of SD while also leveraging the concatenation of BERT or RoBERTa with other deep-learning techniques. The proposed approach establishes a foundational baseline for future research in this domain.
no_new_dataset
0.947088
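The architecture named in the record above (a BERT/RoBERTa encoder with additional layers concatenated on top) is simple to sketch. The head sizes, dropout, and [CLS] pooling below are assumptions of this illustration, not the paper's exact configuration.

```python
import torch.nn as nn
from transformers import AutoModel

class StanceClassifier(nn.Module):
    """Pre-trained encoder plus an assumed small feed-forward classification head."""
    def __init__(self, backbone="bert-base-uncased", n_classes=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(
            nn.Linear(hidden, 256), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(256, n_classes),
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        return self.head(cls)
```

In the transfer-learning setup described, the same encoder would first be fine-tuned on a sarcasm-detection objective, then on stance labels.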
2503.03789
Hiroshi Takahashi
Hiroshi Takahashi, Tomoharu Iwata, Atsutoshi Kumagai, Yuuki Yamanaka, Tomoya Yamashita
Positive-Unlabeled Diffusion Models for Preventing Sensitive Data Generation
Accepted at ICLR2025. Code is available at https://github.com/takahashihiroshi/pudm
null
null
null
cs.LG cs.AI stat.ML
http://creativecommons.org/licenses/by/4.0/
Diffusion models are powerful generative models but often generate sensitive data that are unwanted by users, mainly because the unlabeled training data frequently contain such sensitive data. Since labeling all sensitive data in the large-scale unlabeled training data is impractical, we address this problem by using a small amount of labeled sensitive data. In this paper, we propose positive-unlabeled diffusion models, which prevent the generation of sensitive data using unlabeled and sensitive data. Our approach can approximate the evidence lower bound (ELBO) for normal (negative) data using only unlabeled and sensitive (positive) data. Therefore, even without labeled normal data, we can maximize the ELBO for normal data and minimize it for labeled sensitive data, ensuring the generation of only normal data. Through experiments across various datasets and settings, we demonstrated that our approach can prevent the generation of sensitive images without compromising image quality.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 07:17:48 GMT" } ]
2025-03-07T00:00:00
[ [ "Takahashi", "Hiroshi", "" ], [ "Iwata", "Tomoharu", "" ], [ "Kumagai", "Atsutoshi", "" ], [ "Yamanaka", "Yuuki", "" ], [ "Yamashita", "Tomoya", "" ] ]
TITLE: Positive-Unlabeled Diffusion Models for Preventing Sensitive Data Generation ABSTRACT: Diffusion models are powerful generative models but often generate sensitive data that are unwanted by users, mainly because the unlabeled training data frequently contain such sensitive data. Since labeling all sensitive data in the large-scale unlabeled training data is impractical, we address this problem by using a small amount of labeled sensitive data. In this paper, we propose positive-unlabeled diffusion models, which prevent the generation of sensitive data using unlabeled and sensitive data. Our approach can approximate the evidence lower bound (ELBO) for normal (negative) data using only unlabeled and sensitive (positive) data. Therefore, even without labeled normal data, we can maximize the ELBO for normal data and minimize it for labeled sensitive data, ensuring the generation of only normal data. Through experiments across various datasets and settings, we demonstrated that our approach can prevent the generation of sensitive images without compromising image quality.
no_new_dataset
0.953622
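The record above hinges on the standard positive-unlabeled decomposition: unlabeled data is a mixture of normal (negative) and sensitive (positive) data, so the normal-data loss can be estimated from unlabeled and positive losses alone. A hedged sketch of that estimator, with the sensitive-class prior `pi` as an assumed constant:

```python
import torch

def pu_normal_loss(loss_unlabeled, loss_positive, pi=0.1):
    """Estimate E_normal[loss] as (E_unlabeled[loss] - pi * E_positive[loss]) / (1 - pi).

    loss_unlabeled / loss_positive: per-sample losses (e.g., diffusion ELBO terms)
    on unlabeled and labeled-sensitive batches respectively.
    """
    est = (loss_unlabeled.mean() - pi * loss_positive.mean()) / (1.0 - pi)
    # Non-negative correction, as is common in PU learning, to curb variance.
    return torch.clamp(est, min=0.0)
```

Training would then minimize this estimated normal-data loss while maximizing the loss on labeled sensitive data, matching the objective the abstract describes.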
2503.03794
Tianyi Huang
Tianyi Huang
Synthetic Data Augmentation for Enhancing Harmful Algal Bloom Detection with Machine Learning
Accepted Paper at the 2025 IEEE Conference on Technologies for Sustainability (SusTech)
null
null
null
cs.LG cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Harmful Algal Blooms (HABs) pose severe threats to aquatic ecosystems and public health, resulting in substantial economic losses globally. Early detection is crucial but often hindered by the scarcity of high-quality datasets necessary for training reliable machine learning (ML) models. This study investigates the use of synthetic data augmentation using Gaussian Copulas to enhance ML-based HAB detection systems. Synthetic datasets of varying sizes (100-1,000 samples) were generated using relevant environmental features (water temperature, salinity, and UVB radiation), with corrected Chlorophyll-a concentration as the target variable. Experimental results demonstrate that moderate synthetic augmentation significantly improves model performance (RMSE reduced from 0.4706 to 0.1850; $p < 0.001$). However, excessive synthetic data introduces noise and reduces predictive accuracy, emphasizing the need for a balanced approach to data augmentation. These findings highlight the potential of synthetic data to enhance HAB monitoring systems, offering a scalable and cost-effective method for early detection and mitigation of ecological and public health risks.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 11:50:04 GMT" } ]
2025-03-07T00:00:00
[ [ "Huang", "Tianyi", "" ] ]
TITLE: Synthetic Data Augmentation for Enhancing Harmful Algal Bloom Detection with Machine Learning ABSTRACT: Harmful Algal Blooms (HABs) pose severe threats to aquatic ecosystems and public health, resulting in substantial economic losses globally. Early detection is crucial but often hindered by the scarcity of high-quality datasets necessary for training reliable machine learning (ML) models. This study investigates the use of synthetic data augmentation using Gaussian Copulas to enhance ML-based HAB detection systems. Synthetic datasets of varying sizes (100-1,000 samples) were generated using relevant environmental features (water temperature, salinity, and UVB radiation), with corrected Chlorophyll-a concentration as the target variable. Experimental results demonstrate that moderate synthetic augmentation significantly improves model performance (RMSE reduced from 0.4706 to 0.1850; $p < 0.001$). However, excessive synthetic data introduces noise and reduces predictive accuracy, emphasizing the need for a balanced approach to data augmentation. These findings highlight the potential of synthetic data to enhance HAB monitoring systems, offering a scalable and cost-effective method for early detection and mitigation of ecological and public health risks.
no_new_dataset
0.950686
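A Gaussian copula, as used in the record above, can be sampled with a few lines of NumPy/SciPy: capture the features' dependence through rank correlations, draw correlated Gaussians, and map back through each feature's empirical quantiles. The fitting details and column semantics (temperature, salinity, UVB, Chl-a) below are assumptions; libraries such as SDV wrap this procedure more robustly.

```python
import numpy as np
from scipy import stats

def fit_sample_copula(data, n_samples, rng=np.random.default_rng(0)):
    """data: (n, d) real rows; returns (n_samples, d) synthetic rows."""
    n, d = data.shape
    ranks = np.argsort(np.argsort(data, axis=0), axis=0)  # 0..n-1 per column
    z = stats.norm.ppf((ranks + 0.5) / n)                 # normal scores
    corr = np.corrcoef(z, rowvar=False)                   # copula correlation
    sims = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    u = stats.norm.cdf(sims)                              # back to (0, 1)
    # inverse empirical CDF per feature via quantile lookup
    return np.column_stack([np.quantile(data[:, j], u[:, j]) for j in range(d)])

real = np.random.default_rng(1).normal(size=(200, 4))
synthetic = fit_sample_copula(real, n_samples=500)        # (500, 4)
```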
2503.03797
Enkhtogtokh Togootogtokh
Enkhtogtokh Togootogtokh, Christian Klasen
VoiceGRPO: Modern MoE Transformers with Group Relative Policy Optimization GRPO for AI Voice Health Care Applications on Voice Pathology Detection
null
null
null
null
cs.SD cs.AI eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
This research introduces novel AI techniques, Mixture-of-Experts Transformers with Group Relative Policy Optimization (GRPO), for voice health care applications in voice pathology detection. Alongside the architectural innovations, we adopt advanced training paradigms inspired by reinforcement learning, namely Proximal Policy Optimization (PPO) and Group Relative Policy Optimization (GRPO), to enhance model stability and performance. Experiments conducted on a synthetically generated voice pathology dataset demonstrate that our proposed models significantly improve diagnostic accuracy, F1 score, and ROC-AUC compared to conventional approaches. These findings underscore the potential of integrating transformer architectures with novel training strategies to advance automated voice pathology detection and ultimately contribute to more effective healthcare delivery. The code we used to train and evaluate our models is available at https://github.com/enkhtogtokh/voicegrpo
[ { "version": "v1", "created": "Wed, 5 Mar 2025 14:52:57 GMT" } ]
2025-03-07T00:00:00
[ [ "Togootogtokh", "Enkhtogtokh", "" ], [ "Klasen", "Christian", "" ] ]
TITLE: VoiceGRPO: Modern MoE Transformers with Group Relative Policy Optimization GRPO for AI Voice Health Care Applications on Voice Pathology Detection ABSTRACT: This research introduces novel AI techniques, Mixture-of-Experts Transformers with Group Relative Policy Optimization (GRPO), for voice healthcare applications in voice pathology detection. Alongside these architectural innovations, we adopt advanced training paradigms inspired by reinforcement learning, namely Proximal Policy Optimization (PPO) and Group Relative Policy Optimization (GRPO), to enhance model stability and performance. Experiments conducted on a synthetically generated voice pathology dataset demonstrate that our proposed models significantly improve diagnostic accuracy, F1 score, and ROC-AUC compared to conventional approaches. These findings underscore the potential of integrating transformer architectures with novel training strategies to advance automated voice pathology detection and ultimately contribute to more effective healthcare delivery. The code we used to train and evaluate our models is available at https://github.com/enkhtogtokh/voicegrpo
new_dataset
0.959535
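For readers unfamiliar with GRPO as cited in the record above, its core idea (replacing PPO's learned value baseline with reward standardization within a group of sampled responses) fits in a short, self-contained sketch. This is a generic illustration under my own assumptions, not the VoiceGRPO authors' training code; a full GRPO objective also includes a KL penalty against a reference policy, which is omitted here.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """rewards: (num_prompts, group_size), one reward per sampled response.
    Each response's advantage is its reward standardized within its own
    group, so no learned critic/value network is required."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

def grpo_policy_loss(logp_new: torch.Tensor,
                     logp_old: torch.Tensor,
                     advantages: torch.Tensor,
                     clip_eps: float = 0.2) -> torch.Tensor:
    """PPO-style clipped surrogate evaluated with group-relative advantages."""
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()

# Toy usage: 2 prompts, 4 sampled responses each, with made-up rewards
rewards = torch.tensor([[0.1, 0.9, 0.4, 0.4],
                        [0.7, 0.2, 0.2, 0.5]])
adv = grpo_advantages(rewards)
logp_old = torch.zeros_like(rewards)
logp_new = torch.full_like(rewards, 0.05)
print(grpo_policy_loss(logp_new, logp_old, adv))
```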
2503.03803
Ziwei Liu
Jingkang Yang, Shuai Liu, Hongming Guo, Yuhao Dong, Xiamengwei Zhang, Sicheng Zhang, Pengyun Wang, Zitang Zhou, Binzhu Xie, Ziyue Wang, Bei Ouyang, Zhengyu Lin, Marco Cominelli, Zhongang Cai, Yuanhan Zhang, Peiyuan Zhang, Fangzhou Hong, Joerg Widmer, Francesco Gringoli, Lei Yang, Bo Li, Ziwei Liu
EgoLife: Towards Egocentric Life Assistant
Accepted to CVPR 2025. Project Page: https://egolife-ai.github.io/. Code: https://github.com/EvolvingLMMs-Lab/EgoLife
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce EgoLife, a project to develop an egocentric life assistant that accompanies users and enhances their personal efficiency through AI-powered wearable glasses. To lay the foundation for this assistant, we conducted a comprehensive data collection study where six participants lived together for one week, continuously recording their daily activities - including discussions, shopping, cooking, socializing, and entertainment - using AI glasses for multimodal egocentric video capture, along with synchronized third-person-view video references. This effort resulted in the EgoLife Dataset, a comprehensive 300-hour egocentric, interpersonal, multiview, and multimodal daily life dataset with intensive annotation. Leveraging this dataset, we introduce EgoLifeQA, a suite of long-context, life-oriented question-answering tasks designed to provide meaningful assistance in daily life by addressing practical questions such as recalling past relevant events, monitoring health habits, and offering personalized recommendations. To address the key technical challenges of (1) developing robust visual-audio models for egocentric data, (2) enabling identity recognition, and (3) facilitating long-context question answering over extensive temporal information, we introduce EgoButler, an integrated system comprising EgoGPT and EgoRAG. EgoGPT is an omni-modal model trained on egocentric datasets, achieving state-of-the-art performance on egocentric video understanding. EgoRAG is a retrieval-based component that supports answering ultra-long-context questions. Our experimental studies verify their working mechanisms and reveal critical factors and bottlenecks, guiding future improvements. By releasing our datasets, models, and benchmarks, we aim to stimulate further research in egocentric AI assistants.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 18:54:16 GMT" } ]
2025-03-07T00:00:00
[ [ "Yang", "Jingkang", "" ], [ "Liu", "Shuai", "" ], [ "Guo", "Hongming", "" ], [ "Dong", "Yuhao", "" ], [ "Zhang", "Xiamengwei", "" ], [ "Zhang", "Sicheng", "" ], [ "Wang", "Pengyun", "" ], [ "Zhou", "Zitang", "" ], [ "Xie", "Binzhu", "" ], [ "Wang", "Ziyue", "" ], [ "Ouyang", "Bei", "" ], [ "Lin", "Zhengyu", "" ], [ "Cominelli", "Marco", "" ], [ "Cai", "Zhongang", "" ], [ "Zhang", "Yuanhan", "" ], [ "Zhang", "Peiyuan", "" ], [ "Hong", "Fangzhou", "" ], [ "Widmer", "Joerg", "" ], [ "Gringoli", "Francesco", "" ], [ "Yang", "Lei", "" ], [ "Li", "Bo", "" ], [ "Liu", "Ziwei", "" ] ]
TITLE: EgoLife: Towards Egocentric Life Assistant ABSTRACT: We introduce EgoLife, a project to develop an egocentric life assistant that accompanies users and enhances their personal efficiency through AI-powered wearable glasses. To lay the foundation for this assistant, we conducted a comprehensive data collection study where six participants lived together for one week, continuously recording their daily activities - including discussions, shopping, cooking, socializing, and entertainment - using AI glasses for multimodal egocentric video capture, along with synchronized third-person-view video references. This effort resulted in the EgoLife Dataset, a comprehensive 300-hour egocentric, interpersonal, multiview, and multimodal daily life dataset with intensive annotation. Leveraging this dataset, we introduce EgoLifeQA, a suite of long-context, life-oriented question-answering tasks designed to provide meaningful assistance in daily life by addressing practical questions such as recalling past relevant events, monitoring health habits, and offering personalized recommendations. To address the key technical challenges of (1) developing robust visual-audio models for egocentric data, (2) enabling identity recognition, and (3) facilitating long-context question answering over extensive temporal information, we introduce EgoButler, an integrated system comprising EgoGPT and EgoRAG. EgoGPT is an omni-modal model trained on egocentric datasets, achieving state-of-the-art performance on egocentric video understanding. EgoRAG is a retrieval-based component that supports answering ultra-long-context questions. Our experimental studies verify their working mechanisms and reveal critical factors and bottlenecks, guiding future improvements. By releasing our datasets, models, and benchmarks, we aim to stimulate further research in egocentric AI assistants.
new_dataset
0.569194
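The EgoRAG component above is described only as "a retrieval-based component that supports answering ultra-long-context questions." A minimal sketch of that general pattern, retrieving time-stamped event captions for a question before any answer generation, might look like the following. The captions, timestamps, and TF-IDF scoring are my own stand-ins, not the paper's implementation, which operates over a week of multimodal video rather than a handful of strings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical time-stamped captions distilled from long egocentric video
log = [
    ("day1 09:10", "participant brews coffee and discusses the shopping list"),
    ("day2 18:30", "group cooks pasta together; one person chops vegetables"),
    ("day5 21:00", "board game night in the living room after dinner"),
]

def retrieve(question: str, log, k: int = 2):
    """Return the top-k log entries most similar to the question."""
    texts = [caption for _, caption in log]
    vec = TfidfVectorizer().fit(texts + [question])
    scores = cosine_similarity(vec.transform([question]),
                               vec.transform(texts))[0]
    top = scores.argsort()[::-1][:k]
    return [(log[i][0], log[i][1], float(scores[i])) for i in top]

# The retrieved snippets would then be handed to an answering model
print(retrieve("When did we cook dinner together?", log))
```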
2503.03826
Jesse Cresswell
Raunaq Suri, Ilan Gofman, Guangwei Yu, Jesse C. Cresswell
Zero-Execution Retrieval-Augmented Configuration Tuning of Spark Applications
Code and datasets available at https://github.com/layer6ai-labs/spark-retrieval-tuning
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale data processing is increasingly done using distributed computing frameworks like Apache Spark, which have a considerable number of configurable parameters that affect runtime performance. For optimal performance, these parameters must be tuned to the specific job being run. Tuning commonly requires multiple executions to collect runtime information for updating parameters. This is infeasible for ad hoc queries that are run once or infrequently. Zero-execution tuning, where parameters are automatically set before a job's first run, can provide significant savings for all types of applications, but is more challenging since runtime information is not available. In this work, we propose a novel method for zero-execution tuning of Spark configurations based on retrieval. Our method achieves 93.3% of the runtime improvement of state-of-the-art one-execution optimization, entirely avoiding the slow initial execution using default settings. The shift to zero-execution tuning results in a lower cumulative runtime over the first 140 runs, and provides the largest benefit for ad hoc and analytical queries which only need to be executed once. We release the largest and most comprehensive suite of Spark query datasets, optimal configurations, and runtime information, which will promote future development of zero-execution tuning methods.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 19:00:05 GMT" } ]
2025-03-07T00:00:00
[ [ "Suri", "Raunaq", "" ], [ "Gofman", "Ilan", "" ], [ "Yu", "Guangwei", "" ], [ "Cresswell", "Jesse C.", "" ] ]
TITLE: Zero-Execution Retrieval-Augmented Configuration Tuning of Spark Applications ABSTRACT: Large-scale data processing is increasingly done using distributed computing frameworks like Apache Spark, which have a considerable number of configurable parameters that affect runtime performance. For optimal performance, these parameters must be tuned to the specific job being run. Tuning commonly requires multiple executions to collect runtime information for updating parameters. This is infeasible for ad hoc queries that are run once or infrequently. Zero-execution tuning, where parameters are automatically set before a job's first run, can provide significant savings for all types of applications, but is more challenging since runtime information is not available. In this work, we propose a novel method for zero-execution tuning of Spark configurations based on retrieval. Our method achieves 93.3% of the runtime improvement of state-of-the-art one-execution optimization, entirely avoiding the slow initial execution using default settings. The shift to zero-execution tuning results in a lower cumulative runtime over the first 140 runs, and provides the largest benefit for ad hoc and analytical queries which only need to be executed once. We release the largest and most comprehensive suite of Spark query datasets, optimal configurations, and runtime information, which will promote future development of zero-execution tuning methods.
no_new_dataset
0.843895
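The Spark-tuning record characterizes its method only as "based on retrieval." As a hedged sketch of what zero-execution retrieval tuning could look like, the toy version below matches a new query's text against previously tuned queries and reuses the nearest neighbor's configuration before the job ever runs. The corpus entries and the similarity features are illustrative assumptions; the parameter names themselves (spark.sql.shuffle.partitions, spark.executor.memory) are real Spark settings, and a production system would likely retrieve over richer signals such as query plans and input sizes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: previously tuned queries and their best-known configs
corpus = [
    ("SELECT region, SUM(sales) FROM orders GROUP BY region",
     {"spark.sql.shuffle.partitions": "64", "spark.executor.memory": "8g"}),
    ("SELECT * FROM events JOIN users ON events.uid = users.id",
     {"spark.sql.shuffle.partitions": "200", "spark.executor.memory": "16g"}),
]

def zero_execution_config(new_query: str) -> dict:
    """Pick a Spark config for a never-run query via nearest-neighbor lookup."""
    texts = [q for q, _ in corpus]
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
    vec.fit(texts + [new_query])
    sims = cosine_similarity(vec.transform([new_query]),
                             vec.transform(texts))[0]
    return corpus[int(sims.argmax())][1]  # reuse the closest query's config

print(zero_execution_config(
    "SELECT country, SUM(amount) FROM orders GROUP BY country"))
```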