Dataset schema (column / dtype / observed range):

    id              string        9-16 chars
    submitter       string        3-64 chars
    authors         string        5-6.63k chars
    title           string        7-245 chars
    comments        string        1-482 chars
    journal-ref     string        4-382 chars
    doi             string        9-151 chars
    report-no       string        984 distinct values
    categories      string        5-108 chars
    license         string        9 distinct values
    abstract        string        83-3.41k chars
    versions        list          1-20 items
    update_date     timestamp[s]  2007-05-23 to 2025-04-11
    authors_parsed  list          1-427 items
    prompt          string        166-3.49k chars
    label           string        2 distinct values
    prob            float64       0.5-0.98

Each record below lists its fields in this column order; null marks a missing value.
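The schema matches what the Hugging Face datasets viewer reports for a hub dataset. A minimal sketch of loading and inspecting records with this schema; the hub id "user/arxiv-paper-labels" is a placeholder, not the dataset's real repository path:

```python
from datasets import load_dataset

# Placeholder hub id; substitute the actual repository path.
ds = load_dataset("user/arxiv-paper-labels", split="train")

# Fields follow the schema above.
row = ds[0]
print(row["id"], row["title"])
print(row["label"], row["prob"])        # e.g. "no_new_dataset", 0.95
print(len(row["versions"]), "versions")
```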
2406.07266
Ross Irwin
Ross Irwin, Alessandro Tibo, Jon Paul Janet and Simon Olsson
SemlaFlow -- Efficient 3D Molecular Generation with Latent Attention and Equivariant Flow Matching
AISTATS 2025
null
null
null
cs.LG cs.AI cs.NE
http://creativecommons.org/licenses/by-sa/4.0/
Methods for jointly generating molecular graphs along with their 3D conformations have gained prominence recently due to their potential impact on structure-based drug design. Current approaches, however, often suffer from very slow sampling times or generate molecules with poor chemical validity. Addressing these limitations, we propose Semla, a scalable E(3)-equivariant message passing architecture. We further introduce an unconditional 3D molecular generation model, SemlaFlow, which is trained using equivariant flow matching to generate a joint distribution over atom types, coordinates, bond types and formal charges. Our model produces state-of-the-art results on benchmark datasets with as few as 20 sampling steps, corresponding to a two-order-of-magnitude speedup over the state of the art. Furthermore, we highlight limitations of current evaluation methods for 3D generation and propose new benchmark metrics for unconditional molecular generators. Finally, using these new metrics, we compare our model's ability to generate high quality samples against current approaches and further demonstrate SemlaFlow's strong performance.
[ { "version": "v1", "created": "Tue, 11 Jun 2024 13:51:51 GMT" }, { "version": "v2", "created": "Tue, 25 Jun 2024 11:42:09 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 16:56:08 GMT" } ]
2025-03-03T00:00:00
[ [ "Irwin", "Ross", "" ], [ "Tibo", "Alessandro", "" ], [ "Janet", "Jon Paul", "" ], [ "Olsson", "Simon", "" ] ]
TITLE: SemlaFlow -- Efficient 3D Molecular Generation with Latent Attention and Equivariant Flow Matching ABSTRACT: Methods for jointly generating molecular graphs along with their 3D conformations have gained prominence recently due to their potential impact on structure-based drug design. Current approaches, however, often suffer from very slow sampling times or generate molecules with poor chemical validity. Addressing these limitations, we propose Semla, a scalable E(3)-equivariant message passing architecture. We further introduce an unconditional 3D molecular generation model, SemlaFlow, which is trained using equivariant flow matching to generate a joint distribution over atom types, coordinates, bond types and formal charges. Our model produces state-of-the-art results on benchmark datasets with as few as 20 sampling steps, corresponding to a two-order-of-magnitude speedup over the state of the art. Furthermore, we highlight limitations of current evaluation methods for 3D generation and propose new benchmark metrics for unconditional molecular generators. Finally, using these new metrics, we compare our model's ability to generate high quality samples against current approaches and further demonstrate SemlaFlow's strong performance.
no_new_dataset
0.95297
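SemlaFlow's abstract ties its speedup to sampling in as few as 20 steps. A generic Euler integrator for a flow-matching model illustrates why step count maps directly to sampling cost; velocity_model is a stand-in for the trained equivariant network, not the authors' code:

```python
import torch

@torch.no_grad()
def sample_flow(velocity_model, x0, num_steps=20):
    """Integrate dx/dt = v(x, t) from t=0 (noise) to t=1 (data)."""
    x, dt = x0, 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        x = x + velocity_model(x, t) * dt  # one Euler step per network call
    return x
```

Each step costs one network evaluation, so 20 steps versus a few thousand diffusion steps is roughly the two-order-of-magnitude gap the abstract reports.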
2406.10288
Francisco Eiras
Francisco Eiras, Aleksandar Petrov, Philip H.S. Torr, M. Pawan Kumar, Adel Bibi
Do as I do (Safely): Mitigating Task-Specific Fine-tuning Risks in Large Language Models
Accepted to ICLR'25
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent research shows that fine-tuning on benign instruction-following data can inadvertently undo the safety alignment process and increase a model's propensity to comply with harmful queries. While instruction-following fine-tuning is important, task-specific fine-tuning - where models are trained on datasets with clear ground truth answers (e.g., multiple choice questions) - can enhance model performance on specialized downstream tasks. Understanding and mitigating safety risks in the task-specific setting remains distinct from the instruction-following context due to structural differences in the data. Our work demonstrates how malicious actors can subtly manipulate the structure of almost any task-specific dataset to foster significantly more dangerous model behaviors, while maintaining an appearance of innocuity and reasonable downstream task performance. To address this issue, we propose a novel mitigation strategy that mixes in safety data which mimics the task format and prompting style of the user data, showing this is significantly more effective and efficient than existing baselines at re-establishing safety alignment while maintaining similar task performance.
[ { "version": "v1", "created": "Wed, 12 Jun 2024 18:33:11 GMT" }, { "version": "v2", "created": "Mon, 1 Jul 2024 10:17:58 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 11:36:06 GMT" } ]
2025-03-03T00:00:00
[ [ "Eiras", "Francisco", "" ], [ "Petrov", "Aleksandar", "" ], [ "Torr", "Philip H. S.", "" ], [ "Kumar", "M. Pawan", "" ], [ "Bibi", "Adel", "" ] ]
TITLE: Do as I do (Safely): Mitigating Task-Specific Fine-tuning Risks in Large Language Models ABSTRACT: Recent research shows that fine-tuning on benign instruction-following data can inadvertently undo the safety alignment process and increase a model's propensity to comply with harmful queries. While instruction-following fine-tuning is important, task-specific fine-tuning - where models are trained on datasets with clear ground truth answers (e.g., multiple choice questions) - can enhance model performance on specialized downstream tasks. Understanding and mitigating safety risks in the task-specific setting remains distinct from the instruction-following context due to structural differences in the data. Our work demonstrates how malicious actors can subtly manipulate the structure of almost any task-specific dataset to foster significantly more dangerous model behaviors, while maintaining an appearance of innocuity and reasonable downstream task performance. To address this issue, we propose a novel mitigation strategy that mixes in safety data which mimics the task format and prompting style of the user data, showing this is significantly more effective and efficient than existing baselines at re-establishing safety alignment while maintaining similar task performance.
no_new_dataset
0.946101
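The proposed mitigation mixes in safety data that mimics the task's format and prompting style. A toy sketch of that idea for a multiple-choice task; the template and mixing ratio are illustrative assumptions, not the paper's exact recipe:

```python
import random

MCQ_TEMPLATE = "Question: {q}\nA) {a}\nB) {b}\nAnswer:"

def format_as_task(safety_pairs):
    """Recast (harmful_prompt, refusal) pairs in the task's MCQ style."""
    out = []
    for prompt, refusal in safety_pairs:
        out.append({
            "input": MCQ_TEMPLATE.format(q=prompt, a=refusal,
                                         b="[complies with the request]"),
            "target": "A",  # the refusing option
        })
    return out

def mix(task_data, safety_data, safety_frac=0.1, seed=0):
    """Blend task data with format-matched safety examples for fine-tuning."""
    random.seed(seed)
    n = int(len(task_data) * safety_frac)
    mixed = task_data + random.sample(safety_data, min(n, len(safety_data)))
    random.shuffle(mixed)
    return mixed
```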
2406.11451
Jiawei Chen
Yue Jiang, Jiawei Chen, Dingkang Yang, Mingcheng Li, Shunli Wang, Tong Wu, Ke Li, Lihua Zhang
CoMT: Chain-of-Medical-Thought Reduces Hallucination in Medical Report Generation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic medical report generation (MRG), which possesses significant research value as it can aid radiologists in clinical diagnosis and report composition, has garnered increasing attention. Despite recent progress, generating accurate reports remains arduous due to the requirement for precise clinical comprehension and disease diagnosis inference. Furthermore, owing to the limited accessibility of medical data and the imbalanced distribution of diseases, the underrepresentation of rare diseases in training data makes large-scale medical visual language models (LVLMs) prone to hallucinations, such as omissions or fabrications, severely undermining diagnostic performance and further intensifying the challenges for MRG in practice. In this study, to effectively mitigate hallucinations in medical report generation, we propose a chain-of-medical-thought approach (CoMT), which intends to imitate the cognitive process of human doctors by decomposing diagnostic procedures. The radiological features with different importance are structured into fine-grained medical thought chains to enhance the inferential ability during diagnosis, thereby alleviating hallucination problems and enhancing the diagnostic accuracy of MRG. The code and dataset have been released at https://github.com/FRENKIE-CHIANG/CoMT.
[ { "version": "v1", "created": "Mon, 17 Jun 2024 12:03:32 GMT" }, { "version": "v2", "created": "Tue, 18 Jun 2024 14:20:46 GMT" }, { "version": "v3", "created": "Wed, 18 Sep 2024 06:53:40 GMT" }, { "version": "v4", "created": "Fri, 28 Feb 2025 03:36:50 GMT" } ]
2025-03-03T00:00:00
[ [ "Jiang", "Yue", "" ], [ "Chen", "Jiawei", "" ], [ "Yang", "Dingkang", "" ], [ "Li", "Mingcheng", "" ], [ "Wang", "Shunli", "" ], [ "Wu", "Tong", "" ], [ "Li", "Ke", "" ], [ "Zhang", "Lihua", "" ] ]
TITLE: CoMT: Chain-of-Medical-Thought Reduces Hallucination in Medical Report Generation ABSTRACT: Automatic medical report generation (MRG), which possesses significant research value as it can aid radiologists in clinical diagnosis and report composition, has garnered increasing attention. Despite recent progress, generating accurate reports remains arduous due to the requirement for precise clinical comprehension and disease diagnosis inference. Furthermore, owing to the limited accessibility of medical data and the imbalanced distribution of diseases, the underrepresentation of rare diseases in training data makes large-scale medical visual language models (LVLMs) prone to hallucinations, such as omissions or fabrications, severely undermining diagnostic performance and further intensifying the challenges for MRG in practice. In this study, to effectively mitigate hallucinations in medical report generation, we propose a chain-of-medical-thought approach (CoMT), which intends to imitate the cognitive process of human doctors by decomposing diagnostic procedures. The radiological features with different importance are structured into fine-grained medical thought chains to enhance the inferential ability during diagnosis, thereby alleviating hallucination problems and enhancing the diagnostic accuracy of MRG. The code and dataset have been released at https://github.com/FRENKIE-CHIANG/CoMT.
new_dataset
0.944638
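CoMT decomposes report generation into ordered medical reasoning steps. A hedged sketch of how such a staged prompt could be assembled; the step wording is illustrative, not the released prompt:

```python
def build_comt_prompt(findings_hint: str) -> list[str]:
    """Stage the diagnosis as ordered sub-questions, coarse to fine."""
    return [
        "Step 1: Describe the overall image quality and view.",
        "Step 2: List the major anatomical structures visible.",
        f"Step 3: Identify salient radiological features ({findings_hint}).",
        "Step 4: Rank the features by diagnostic importance.",
        "Step 5: Draft the report, citing only features found above.",
    ]
```

Each stage conditions the next, so the final report is anchored to observed features rather than free-form generation, which is the mechanism the abstract credits with reducing hallucination.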
2406.14045
Yu-Neng Chuang
Yu-Neng Chuang, Songchen Li, Jiayi Yuan, Guanchu Wang, Kwei-Herng Lai, Songyuan Sui, Leisheng Yu, Sirui Ding, Chia-Yuan Chang, Qiaoyu Tan, Daochen Zha, Xia Hu
LTSM-Bundle: A Toolbox and Benchmark on Large Language Models for Time Series Forecasting
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time Series Forecasting (TSF) has long been a challenge in time series analysis. Inspired by the success of Large Language Models (LLMs), researchers are now developing Large Time Series Models (LTSMs) -- universal transformer-based models that use autoregressive prediction -- to improve TSF. However, training LTSMs on heterogeneous time series data poses unique challenges, including diverse frequencies, dimensions, and patterns across datasets. Recent endeavors have studied and evaluated various design choices aimed at enhancing LTSM training and generalization capabilities. However, these design choices are typically studied and evaluated in isolation and are not benchmarked collectively. In this work, we introduce LTSM-Bundle, a comprehensive toolbox and benchmark for training LTSMs, spanning pre-processing techniques, model configurations, and dataset configurations. It modularizes and benchmarks LTSMs along multiple dimensions, encompassing prompting strategies, tokenization approaches, training paradigms, base model selection, data quantity, and dataset diversity. Furthermore, we combine the most effective design choices identified in our study. Empirical results demonstrate that this combination achieves superior zero-shot and few-shot performances compared to state-of-the-art LTSMs and traditional TSF methods on benchmark datasets.
[ { "version": "v1", "created": "Thu, 20 Jun 2024 07:09:19 GMT" }, { "version": "v2", "created": "Thu, 27 Feb 2025 23:12:38 GMT" } ]
2025-03-03T00:00:00
[ [ "Chuang", "Yu-Neng", "" ], [ "Li", "Songchen", "" ], [ "Yuan", "Jiayi", "" ], [ "Wang", "Guanchu", "" ], [ "Lai", "Kwei-Herng", "" ], [ "Sui", "Songyuan", "" ], [ "Yu", "Leisheng", "" ], [ "Ding", "Sirui", "" ], [ "Chang", "Chia-Yuan", "" ], [ "Tan", "Qiaoyu", "" ], [ "Zha", "Daochen", "" ], [ "Hu", "Xia", "" ] ]
TITLE: LTSM-Bundle: A Toolbox and Benchmark on Large Language Models for Time Series Forecasting ABSTRACT: Time Series Forecasting (TSF) has long been a challenge in time series analysis. Inspired by the success of Large Language Models (LLMs), researchers are now developing Large Time Series Models (LTSMs) -- universal transformer-based models that use autoregressive prediction -- to improve TSF. However, training LTSMs on heterogeneous time series data poses unique challenges, including diverse frequencies, dimensions, and patterns across datasets. Recent endeavors have studied and evaluated various design choices aimed at enhancing LTSM training and generalization capabilities. However, these design choices are typically studied and evaluated in isolation and are not benchmarked collectively. In this work, we introduce LTSM-Bundle, a comprehensive toolbox and benchmark for training LTSMs, spanning pre-processing techniques, model configurations, and dataset configurations. It modularizes and benchmarks LTSMs along multiple dimensions, encompassing prompting strategies, tokenization approaches, training paradigms, base model selection, data quantity, and dataset diversity. Furthermore, we combine the most effective design choices identified in our study. Empirical results demonstrate that this combination achieves superior zero-shot and few-shot performances compared to state-of-the-art LTSMs and traditional TSF methods on benchmark datasets.
no_new_dataset
0.943191
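LTSM-Bundle benchmarks design choices along several axes rather than in isolation. A sketch of enumerating such a configuration grid; the option names are placeholders for the toolbox's actual choices:

```python
from itertools import product

DESIGN_AXES = {
    "prompting":    ["none", "text_prompt", "stat_prompt"],
    "tokenization": ["patch", "pointwise"],
    "training":     ["from_scratch", "finetune_llm"],
    "base_model":   ["gpt2", "llama"],
}

def configurations():
    """Yield every combination of design choices to benchmark."""
    keys = list(DESIGN_AXES)
    for values in product(*(DESIGN_AXES[k] for k in keys)):
        yield dict(zip(keys, values))

print(sum(1 for _ in configurations()), "configurations to evaluate")
```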
2406.18450
Alizée Pace
Alizée Pace, Bernhard Schölkopf, Gunnar Rätsch, Giorgia Ramponi
Preference Elicitation for Offline Reinforcement Learning
ICLR 2025
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Applying reinforcement learning (RL) to real-world problems is often made challenging by the inability to interact with the environment and the difficulty of designing reward functions. Offline RL addresses the first challenge by considering access to an offline dataset of environment interactions labeled by the reward function. In contrast, Preference-based RL does not assume access to the reward function and learns it from preferences, but typically requires an online interaction with the environment. We bridge the gap between these frameworks by exploring efficient methods for acquiring preference feedback in a fully offline setup. We propose Sim-OPRL, an offline preference-based reinforcement learning algorithm, which leverages a learned environment model to elicit preference feedback on simulated rollouts. Drawing on insights from both the offline RL and the preference-based RL literature, our algorithm employs a pessimistic approach for out-of-distribution data, and an optimistic approach for acquiring informative preferences about the optimal policy. We provide theoretical guarantees regarding the sample complexity of our approach, dependent on how well the offline data covers the optimal policy. Finally, we demonstrate the empirical performance of Sim-OPRL in various environments.
[ { "version": "v1", "created": "Wed, 26 Jun 2024 15:59:13 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 08:36:00 GMT" } ]
2025-03-03T00:00:00
[ [ "Pace", "Alizée", "" ], [ "Schölkopf", "Bernhard", "" ], [ "Rätsch", "Gunnar", "" ], [ "Ramponi", "Giorgia", "" ] ]
TITLE: Preference Elicitation for Offline Reinforcement Learning ABSTRACT: Applying reinforcement learning (RL) to real-world problems is often made challenging by the inability to interact with the environment and the difficulty of designing reward functions. Offline RL addresses the first challenge by considering access to an offline dataset of environment interactions labeled by the reward function. In contrast, Preference-based RL does not assume access to the reward function and learns it from preferences, but typically requires an online interaction with the environment. We bridge the gap between these frameworks by exploring efficient methods for acquiring preference feedback in a fully offline setup. We propose Sim-OPRL, an offline preference-based reinforcement learning algorithm, which leverages a learned environment model to elicit preference feedback on simulated rollouts. Drawing on insights from both the offline RL and the preference-based RL literature, our algorithm employs a pessimistic approach for out-of-distribution data, and an optimistic approach for acquiring informative preferences about the optimal policy. We provide theoretical guarantees regarding the sample complexity of our approach, dependent on how well the offline data covers the optimal policy. Finally, we demonstrate the empirical performance of Sim-OPRL in various environments.
no_new_dataset
0.945701
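Sim-OPRL elicits preferences over rollouts simulated in a learned model, pessimistic on out-of-distribution data and optimistic about informative queries. A toy sketch of that loop; world_model, reward_model, the uncertainty-based acquisition rule, and the stub oracle are simplified stand-ins for the paper's method:

```python
def annotate(tau_a, tau_b):
    """Stub preference oracle over rollouts; replace with real feedback."""
    return 0 if sum(tau_a) >= sum(tau_b) else 1

def elicit_preferences(world_model, reward_model, policies,
                       n_queries=50, horizon=30):
    """Collect preference labels on model-simulated rollout pairs."""
    data = []
    for _ in range(n_queries):
        # Optimism: query the two policies whose value the reward model is
        # most uncertain about, so each label is informative.
        pi_a, pi_b = sorted(policies, key=reward_model.uncertainty)[-2:]
        # Pessimism is assumed inside the model: rollouts are truncated or
        # penalized on out-of-distribution transitions.
        tau_a = world_model.rollout(pi_a, horizon)
        tau_b = world_model.rollout(pi_b, horizon)
        data.append((tau_a, tau_b, annotate(tau_a, tau_b)))
        reward_model.update(data)
    return data
```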
2407.02447
Anshul Nasery
Anshul Nasery, Jonathan Hayase, Pang Wei Koh, Sewoong Oh
PLeaS -- Merging Models with Permutations and Least Squares
Accepted to CVPR 2025
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
The democratization of machine learning systems has made the process of fine-tuning accessible to practitioners, leading to a wide range of open-source models fine-tuned on specialized tasks and datasets. Recent work has proposed to merge such models to combine their functionalities. However, prior approaches are usually restricted to models that are fine-tuned from the same base model. Furthermore, the final merged model is typically required to be of the same size as the original models. In this work, we propose a new two-step algorithm to merge models -- termed PLeaS -- which relaxes these constraints. First, leveraging the Permutation symmetries inherent in the two models, PLeaS partially matches nodes in each layer by maximizing alignment. Next, PLeaS computes the weights of the merged model as a layer-wise Least Squares solution to minimize the approximation error between the features of the merged model and the permuted features of the original models. PLeaS allows a practitioner to merge two models sharing the same architecture into a single performant model of a desired size, even when the two original models are fine-tuned from different base models. We also demonstrate how our method can be extended to address a challenging scenario where no data is available from the fine-tuning domains. We demonstrate our method to merge ResNet and ViT models trained with shared and different label spaces, and show improvement over the state-of-the-art merging methods of up to 15 percentage points for the same target compute while merging models trained on DomainNet and fine-grained classification tasks. Our code is open-sourced at https://github.com/SewoongLab/PLeaS-Merging .
[ { "version": "v1", "created": "Tue, 2 Jul 2024 17:24:04 GMT" }, { "version": "v2", "created": "Thu, 27 Feb 2025 22:26:01 GMT" } ]
2025-03-03T00:00:00
[ [ "Nasery", "Anshul", "" ], [ "Hayase", "Jonathan", "" ], [ "Koh", "Pang Wei", "" ], [ "Oh", "Sewoong", "" ] ]
TITLE: PLeaS -- Merging Models with Permutations and Least Squares ABSTRACT: The democratization of machine learning systems has made the process of fine-tuning accessible to practitioners, leading to a wide range of open-source models fine-tuned on specialized tasks and datasets. Recent work has proposed to merge such models to combine their functionalities. However, prior approaches are usually restricted to models that are fine-tuned from the same base model. Furthermore, the final merged model is typically required to be of the same size as the original models. In this work, we propose a new two-step algorithm to merge models -- termed PLeaS -- which relaxes these constraints. First, leveraging the Permutation symmetries inherent in the two models, PLeaS partially matches nodes in each layer by maximizing alignment. Next, PLeaS computes the weights of the merged model as a layer-wise Least Squares solution to minimize the approximation error between the features of the merged model and the permuted features of the original models. PLeaS allows a practitioner to merge two models sharing the same architecture into a single performant model of a desired size, even when the two original models are fine-tuned from different base models. We also demonstrate how our method can be extended to address a challenging scenario where no data is available from the fine-tuning domains. We demonstrate our method to merge ResNet and ViT models trained with shared and different label spaces, and show improvement over the state-of-the-art merging methods of up to 15 percentage points for the same target compute while merging models trained on DomainNet and fine-grained classification tasks. Our code is open-sourced at https://github.com/SewoongLab/PLeaS-Merging .
no_new_dataset
0.948058
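PLeaS's second step computes merged weights as a layer-wise least-squares solution so the merged model's features approximate the permuted originals. A sketch of that step for one linear layer; the permutation-matching first step is assumed to have already been applied to the feature matrices:

```python
import torch

def merge_layer_lstsq(feats_in, feats_target):
    """Solve for W such that feats_in @ W.T ~= feats_target.

    feats_in:     (n, d_in)  activations entering the merged layer
    feats_target: (n, d_out) permuted target features from the originals
    """
    # torch.linalg.lstsq solves argmin_X ||A X - B||; transpose to get W.
    sol = torch.linalg.lstsq(feats_in, feats_target).solution  # (d_in, d_out)
    return sol.T                                               # (d_out, d_in)

W = merge_layer_lstsq(torch.randn(256, 64), torch.randn(256, 32))
print(W.shape)  # torch.Size([32, 64])
```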
2407.08351
Xiang Lisa Li
Xiang Lisa Li, Farzaan Kaiyom, Evan Zheran Liu, Yifan Mai, Percy Liang, Tatsunori Hashimoto
AutoBencher: Towards Declarative Benchmark Construction
Accepted for publication at ICLR 2025
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
We present AutoBencher, a declarative framework for automatic benchmark construction, and use it to scalably discover novel insights and vulnerabilities of existing language models. Concretely, given a few desiderata of benchmarks (e.g., question difficulty, topic salience), we operationalize each desideratum and cast benchmark creation as an optimization problem. Specifically, we experiment with two settings with different optimization objectives: (i) for capability evaluation, we declare the goal of finding a salient, difficult dataset that induces novel performance patterns; (ii) for safety evaluation, we declare the goal of finding a dataset of unsafe prompts that existing LMs fail to decline. To tackle this optimization problem, we use a language model to iteratively propose and refine dataset descriptions, which are then used to generate topic-specific questions and answers. These descriptions are optimized to improve the declared desiderata. We use AutoBencher (powered by GPT-4) to create datasets for math, multilinguality, knowledge, and safety. The scalability of AutoBencher allows it to test fine-grained categories and tail knowledge, creating datasets that elicit 22% more model errors (i.e., difficulty) than existing benchmarks. On the novelty end, AutoBencher also helps identify specific gaps not captured by existing benchmarks: e.g., Gemini-Pro has knowledge gaps on Permian Extinction and Fordism while GPT-4o fails to decline harmful requests about cryptocurrency scams.
[ { "version": "v1", "created": "Thu, 11 Jul 2024 10:03:47 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 08:14:49 GMT" } ]
2025-03-03T00:00:00
[ [ "Li", "Xiang Lisa", "" ], [ "Kaiyom", "Farzaan", "" ], [ "Liu", "Evan Zheran", "" ], [ "Mai", "Yifan", "" ], [ "Liang", "Percy", "" ], [ "Hashimoto", "Tatsunori", "" ] ]
TITLE: AutoBencher: Towards Declarative Benchmark Construction ABSTRACT: We present AutoBencher, a declarative framework for automatic benchmark construction, and use it to scalably discover novel insights and vulnerabilities of existing language models. Concretely, given a few desiderata of benchmarks (e.g., question difficulty, topic salience), we operationalize each desideratum and cast benchmark creation as an optimization problem. Specifically, we experiment with two settings with different optimization objectives: (i) for capability evaluation, we declare the goal of finding a salient, difficult dataset that induces novel performance patterns; (ii) for safety evaluation, we declare the goal of finding a dataset of unsafe prompts that existing LMs fail to decline. To tackle this optimization problem, we use a language model to iteratively propose and refine dataset descriptions, which are then used to generate topic-specific questions and answers. These descriptions are optimized to improve the declared desiderata. We use AutoBencher (powered by GPT-4) to create datasets for math, multilinguality, knowledge, and safety. The scalability of AutoBencher allows it to test fine-grained categories and tail knowledge, creating datasets that elicit 22% more model errors (i.e., difficulty) than existing benchmarks. On the novelty end, AutoBencher also helps identify specific gaps not captured by existing benchmarks: e.g., Gemini-Pro has knowledge gaps on Permian Extinction and Fordism while GPT-4o fails to decline harmful requests about cryptocurrency scams.
no_new_dataset
0.773088
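AutoBencher casts benchmark construction as optimization: a language model proposes dataset descriptions, questions are generated from them, scored against the desiderata, and the descriptions are refined. A minimal sketch of that loop; lm_propose, lm_generate_qa, and score are assumed interfaces, not the released API:

```python
def autobench(lm_propose, lm_generate_qa, score, n_rounds=5, k=4):
    """Iteratively propose and refine dataset descriptions."""
    best_desc, best_score, history = None, float("-inf"), []
    for _ in range(n_rounds):
        # Propose new descriptions conditioned on what scored well so far.
        for desc in lm_propose(history, k=k):
            qa_pairs = lm_generate_qa(desc)  # topic-specific Q&A
            s = score(qa_pairs)              # e.g. difficulty + salience
            history.append((desc, s))
            if s > best_score:
                best_desc, best_score = desc, s
    return best_desc, best_score
```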
2407.14543
Michał Kozielski
Michał Kozielski, Marek Sikora, Łukasz Wawrowski
Towards consistency of rule-based explainer and black box model -- fusion of rule induction and XAI-based feature importance
null
null
10.1016/j.knosys.2025.113092
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rule-based models offer a human-understandable representation, i.e. they are interpretable. For this reason, they are used to explain the decisions of non-interpretable complex models, referred to as black box models. The generation of such explanations involves the approximation of a black box model by a rule-based model. To date, however, it has not been investigated whether the rule-based model makes decisions in the same way as the black box model it approximates. Decision making in the same way is understood in this work as the consistency of decisions and the consistency of the most important attributes used for decision making. This study proposes a novel approach ensuring that the rule-based surrogate model mimics the performance of the black box model. The proposed solution performs an explanation fusion involving rule generation and taking into account the feature importance determined by the selected XAI methods for the black box model being explained. The result of the method can be both global and local rule-based explanations. The quality of the proposed solution was verified by extensive analysis on 30 tabular benchmark datasets representing classification problems. Evaluation included comparison with the reference method and an illustrative case study. In addition, the paper discusses the possible pathways for the application of the rule-based approach in XAI and how rule-based explanations, including the proposed method, meet the user perspective and requirements for both content and presentation. The software created and a detailed report containing the full experimental results are available on the GitHub repository (https://github.com/ruleminer/FI-rules4XAI ).
[ { "version": "v1", "created": "Tue, 16 Jul 2024 07:56:29 GMT" } ]
2025-03-03T00:00:00
[ [ "Kozielski", "Michał", "" ], [ "Sikora", "Marek", "" ], [ "Wawrowski", "Łukasz", "" ] ]
TITLE: Towards consistency of rule-based explainer and black box model -- fusion of rule induction and XAI-based feature importance ABSTRACT: Rule-based models offer a human-understandable representation, i.e. they are interpretable. For this reason, they are used to explain the decisions of non-interpretable complex models, referred to as black box models. The generation of such explanations involves the approximation of a black box model by a rule-based model. To date, however, it has not been investigated whether the rule-based model makes decisions in the same way as the black box model it approximates. Decision making in the same way is understood in this work as the consistency of decisions and the consistency of the most important attributes used for decision making. This study proposes a novel approach ensuring that the rule-based surrogate model mimics the performance of the black box model. The proposed solution performs an explanation fusion involving rule generation and taking into account the feature importance determined by the selected XAI methods for the black box model being explained. The result of the method can be both global and local rule-based explanations. The quality of the proposed solution was verified by extensive analysis on 30 tabular benchmark datasets representing classification problems. Evaluation included comparison with the reference method and an illustrative case study. In addition, the paper discusses the possible pathways for the application of the rule-based approach in XAI and how rule-based explanations, including the proposed method, meet the user perspective and requirements for both content and presentation. The software created and a detailed report containing the full experimental results are available on the GitHub repository (https://github.com/ruleminer/FI-rules4XAI ).
no_new_dataset
0.950915
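The method fuses rule induction with XAI-derived feature importance so the surrogate mimics the black box. A simplified scikit-learn sketch of the general idea: compute permutation importance for the black box, then induce rules (here a shallow decision tree as a stand-in for the paper's rule-induction algorithm) restricted to the most important features and labeled by the black box's own decisions:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Feature importance of the black box (the XAI signal).
imp = permutation_importance(black_box, X, y, n_repeats=5, random_state=0)
top = imp.importances_mean.argsort()[-5:]  # keep the 5 most important features

# Induce rules on those features, labeled by the black box's predictions,
# so the surrogate mimics the model rather than the raw targets.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X[:, top], black_box.predict(X))
print(export_text(surrogate))
```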
2407.17396
Irtaza Khalid
Irtaza Khalid, Steven Schockaert
Systematic Relational Reasoning With Epistemic Graph Neural Networks
10+29 pages, 5+13 figures, 4+10 tables. Comments welcome!
ICLR 2025 main
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Developing models that can learn to reason is a notoriously challenging problem. We focus on reasoning in relational domains, where the use of Graph Neural Networks (GNNs) seems like a natural choice. However, previous work has shown that regular GNNs lack the ability to systematically generalize from training examples on test graphs requiring longer inference chains, which fundamentally limits their reasoning abilities. A common solution relies on neuro-symbolic methods that systematically reason by learning rules, but their scalability is often limited and they tend to make unrealistically strong assumptions, e.g. that the answer can always be inferred from a single relational path. We propose the Epistemic GNN (EpiGNN), a novel parameter-efficient and scalable GNN architecture with an epistemic inductive bias for systematic reasoning. Node embeddings in EpiGNNs are treated as epistemic states, and message passing is implemented accordingly. We show that EpiGNNs achieve state-of-the-art results on link prediction tasks that require systematic reasoning. Furthermore, for inductive knowledge graph completion, EpiGNNs rival the performance of state-of-the-art specialized approaches. Finally, we introduce two new benchmarks that go beyond standard relational reasoning by requiring the aggregation of information from multiple paths. Here, existing neuro-symbolic approaches fail, yet EpiGNNs learn to reason accurately. Code and datasets are available at https://github.com/erg0dic/gnn-sg.
[ { "version": "v1", "created": "Wed, 24 Jul 2024 16:17:15 GMT" }, { "version": "v2", "created": "Thu, 27 Feb 2025 22:50:41 GMT" } ]
2025-03-03T00:00:00
[ [ "Khalid", "Irtaza", "" ], [ "Schockaert", "Steven", "" ] ]
TITLE: Systematic Relational Reasoning With Epistemic Graph Neural Networks ABSTRACT: Developing models that can learn to reason is a notoriously challenging problem. We focus on reasoning in relational domains, where the use of Graph Neural Networks (GNNs) seems like a natural choice. However, previous work has shown that regular GNNs lack the ability to systematically generalize from training examples on test graphs requiring longer inference chains, which fundamentally limits their reasoning abilities. A common solution relies on neuro-symbolic methods that systematically reason by learning rules, but their scalability is often limited and they tend to make unrealistically strong assumptions, e.g. that the answer can always be inferred from a single relational path. We propose the Epistemic GNN (EpiGNN), a novel parameter-efficient and scalable GNN architecture with an epistemic inductive bias for systematic reasoning. Node embeddings in EpiGNNs are treated as epistemic states, and message passing is implemented accordingly. We show that EpiGNNs achieve state-of-the-art results on link prediction tasks that require systematic reasoning. Furthermore, for inductive knowledge graph completion, EpiGNNs rival the performance of state-of-the-art specialized approaches. Finally, we introduce two new benchmarks that go beyond standard relational reasoning by requiring the aggregation of information from multiple paths. Here, existing neuro-symbolic approaches fail, yet EpiGNNs learn to reason accurately. Code and datasets are available at https://github.com/erg0dic/gnn-sg.
no_new_dataset
0.938632
2407.17470
Yiming Xie
Yiming Xie, Chun-Han Yao, Vikram Voleti, Huaizu Jiang, Varun Jampani
SV4D: Dynamic 3D Content Generation with Multi-Frame and Multi-View Consistency
Project page: https://sv4d.github.io/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present Stable Video 4D (SV4D), a latent video diffusion model for multi-frame and multi-view consistent dynamic 3D content generation. Unlike previous methods that rely on separately trained generative models for video generation and novel view synthesis, we design a unified diffusion model to generate novel view videos of dynamic 3D objects. Specifically, given a monocular reference video, SV4D generates novel views for each video frame that are temporally consistent. We then use the generated novel view videos to optimize an implicit 4D representation (dynamic NeRF) efficiently, without the need for cumbersome SDS-based optimization used in most prior works. To train our unified novel view video generation model, we curate a dynamic 3D object dataset from the existing Objaverse dataset. Extensive experimental results on multiple datasets and user studies demonstrate SV4D's state-of-the-art performance on novel-view video synthesis as well as 4D generation compared to prior works.
[ { "version": "v1", "created": "Wed, 24 Jul 2024 17:59:43 GMT" }, { "version": "v2", "created": "Thu, 27 Feb 2025 21:52:39 GMT" } ]
2025-03-03T00:00:00
[ [ "Xie", "Yiming", "" ], [ "Yao", "Chun-Han", "" ], [ "Voleti", "Vikram", "" ], [ "Jiang", "Huaizu", "" ], [ "Jampani", "Varun", "" ] ]
TITLE: SV4D: Dynamic 3D Content Generation with Multi-Frame and Multi-View Consistency ABSTRACT: We present Stable Video 4D (SV4D), a latent video diffusion model for multi-frame and multi-view consistent dynamic 3D content generation. Unlike previous methods that rely on separately trained generative models for video generation and novel view synthesis, we design a unified diffusion model to generate novel view videos of dynamic 3D objects. Specifically, given a monocular reference video, SV4D generates novel views for each video frame that are temporally consistent. We then use the generated novel view videos to optimize an implicit 4D representation (dynamic NeRF) efficiently, without the need for cumbersome SDS-based optimization used in most prior works. To train our unified novel view video generation model, we curate a dynamic 3D object dataset from the existing Objaverse dataset. Extensive experimental results on multiple datasets and user studies demonstrate SV4D's state-of-the-art performance on novel-view video synthesis as well as 4D generation compared to prior works.
no_new_dataset
0.945045
2407.20595
Francis Kulumba
Francis Kulumba, Wissam Antoun, Guillaume Vimont, Laurent Romary
Harvesting Textual and Structured Data from the HAL Publication Repository
Under review
null
null
null
cs.DL cs.CL
http://creativecommons.org/licenses/by/4.0/
HAL (Hyper Articles en Ligne) is the French national publication repository, used by most higher education and research organizations for their open science policy. Although it is a rich repository of academic documents, its potential for advanced research has not been fully explored. We present HALvest, a unique dataset that bridges the gap between citation networks and the full text of HAL-submitted articles to help with authorship attribution and verification. This first iteration consists of approximately 700,000 documents, spanning 56 languages across 13 identified domains. We transform articles' metadata into a citation network, producing a heterogeneous graph. This graph includes uniquely identified authors on HAL, as well as all open-access documents and their references. Finally, we mine 14.5 million high-quality sequence pairs from HALvest for contrastive learning purposes. By providing different views of HAL, suited for modern machine learning, we aim to assist practitioners in better analyzing and interpreting research dynamics.
[ { "version": "v1", "created": "Tue, 30 Jul 2024 07:14:04 GMT" }, { "version": "v2", "created": "Thu, 27 Feb 2025 19:33:23 GMT" } ]
2025-03-03T00:00:00
[ [ "Kulumba", "Francis", "" ], [ "Antoun", "Wissam", "" ], [ "Vimont", "Guillaume", "" ], [ "Romary", "Laurent", "" ] ]
TITLE: Harvesting Textual and Structured Data from the HAL Publication Repository ABSTRACT: HAL (Hyper Articles en Ligne) is the French national publication repository, used by most higher education and research organizations for their open science policy. Although it is a rich repository of academic documents, its potential for advanced research has not been fully explored. We present HALvest, a unique dataset that bridges the gap between citation networks and the full text of HAL-submitted articles to help with authorship attribution and verification. This first iteration consists of approximately 700,000 documents, spanning 56 languages across 13 identified domains. We transform articles' metadata into a citation network, producing a heterogeneous graph. This graph includes uniquely identified authors on HAL, as well as all open-access documents and their references. Finally, we mine 14.5 million high-quality sequence pairs from HALvest for contrastive learning purposes. By providing different views of HAL, suited for modern machine learning, we aim to assist practitioners in better analyzing and interpreting research dynamics.
new_dataset
0.959116
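HALvest turns article metadata into a heterogeneous citation graph with author and document nodes. A small networkx sketch showing one way to assemble such a graph; the field names are assumptions about the record layout, not HALvest's actual schema:

```python
import networkx as nx

def build_citation_graph(records):
    """records: iterable of dicts with 'halid', 'authors', 'references'."""
    g = nx.DiGraph()
    for rec in records:
        g.add_node(rec["halid"], kind="document")
        for author in rec["authors"]:
            g.add_node(author, kind="author")
            g.add_edge(author, rec["halid"], kind="wrote")
        for ref in rec.get("references", []):
            g.add_edge(rec["halid"], ref, kind="cites")
    return g

g = build_citation_graph([
    {"halid": "hal-001", "authors": ["a1"], "references": ["hal-002"]},
    {"halid": "hal-002", "authors": ["a2"], "references": []},
])
print(g.number_of_nodes(), g.number_of_edges())
```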
2408.08545
Kaushal Kumar Maurya
Kaushal Kumar Maurya, KV Aditya Srivatsa, Ekaterina Kochmar
SelectLLM: Query-Aware Efficient Selection Algorithm for Large Language Models
8 pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Large language models (LLMs) have been widely adopted due to their remarkable performance across various applications, driving the accelerated development of a large number of diverse models. However, these individual LLMs show limitations in generalization and performance on complex tasks due to inherent training biases, model size constraints, and the quality or diversity of pre-training datasets. A promising direction is to efficiently harness the diverse capabilities of LLMs to overcome these individual limitations. To this end, we introduce a novel LLM selection algorithm called SelectLLM, which efficiently directs input queries to the most suitable subset of LLMs from a large pool, ensuring that the selected models collectively provide accurate responses. SelectLLM employs a multi-label classifier and a policy based on the classifier's predictions and confidence scores to select an optimal, query-aware, and lightweight subset of LLMs. Our findings indicate that the proposed model outperforms existing ensemble-based baselines and achieves competitive performance with similarly sized top-performing LLMs while maintaining efficiency. Specifically, it achieves a substantial reduction in inference latency on two challenging reasoning benchmarks: 13% on GSM8K and 70% on MMLU, compared to the top-performing baseline. Also, we establish a theoretical upper bound by an Oracle with LLMs and perform an in-depth linguistic analysis to understand the performance gap between the Oracle and SelectLLM.
[ { "version": "v1", "created": "Fri, 16 Aug 2024 06:11:21 GMT" }, { "version": "v2", "created": "Mon, 30 Dec 2024 05:01:44 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 13:23:56 GMT" } ]
2025-03-03T00:00:00
[ [ "Maurya", "Kaushal Kumar", "" ], [ "Srivatsa", "KV Aditya", "" ], [ "Kochmar", "Ekaterina", "" ] ]
TITLE: SelectLLM: Query-Aware Efficient Selection Algorithm for Large Language Models ABSTRACT: Large language models (LLMs) have been widely adopted due to their remarkable performance across various applications, driving the accelerated development of a large number of diverse models. However, these individual LLMs show limitations in generalization and performance on complex tasks due to inherent training biases, model size constraints, and the quality or diversity of pre-training datasets. A promising direction is to efficiently harness the diverse capabilities of LLMs to overcome these individual limitations. To this end, we introduce a novel LLM selection algorithm called SelectLLM, which efficiently directs input queries to the most suitable subset of LLMs from a large pool, ensuring that the selected models collectively provide accurate responses. SelectLLM employs a multi-label classifier and a policy based on the classifier's predictions and confidence scores to select an optimal, query-aware, and lightweight subset of LLMs. Our findings indicate that the proposed model outperforms existing ensemble-based baselines and achieves competitive performance with similarly sized top-performing LLMs while maintaining efficiency. Specifically, it achieves a substantial reduction in inference latency on two challenging reasoning benchmarks: 13% on GSM8K and 70% on MMLU, compared to the top-performing baseline. Also, we establish a theoretical upper bound by an Oracle with LLMs and perform an in-depth linguistic analysis to understand the performance gap between the Oracle and SelectLLM.
no_new_dataset
0.943712
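SelectLLM routes each query to a small subset of LLMs via a multi-label classifier plus a confidence-based policy. A toy sketch of the selection step; the classifier interface (per-model probabilities in one array) and the threshold policy are simplified assumptions, not the paper's exact formulation:

```python
import numpy as np

def select_llms(query_features, classifier, llm_names,
                threshold=0.5, max_models=3):
    """Pick a lightweight, query-aware subset of LLMs."""
    # Multi-label probabilities: P(model m answers this query correctly),
    # assumed returned as one array of shape (1, n_models).
    probs = classifier.predict_proba(query_features.reshape(1, -1))[0]
    ranked = np.argsort(probs)[::-1]
    chosen = [i for i in ranked[:max_models] if probs[i] >= threshold]
    # Fall back to the single most confident model if none pass.
    return [llm_names[i] for i in (chosen or [ranked[0]])]
```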
2408.14578
Ligao Ruan
Ligao Ruan, Giles Hamilton-Fletcher, Mahya Beheshti, Todd E Hudson, Maurizio Porfiri, JR Rizzo
Multi-faceted Sensory Substitution for Curb Alerting: A Pilot Investigation in Persons with Blindness and Low Vision
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Curbs -- the edge of a raised sidewalk at the point where it meets a street -- are crucial in urban environments, where they help delineate safe pedestrian zones from dangerous vehicular lanes. However, curbs themselves are significant navigation hazards, particularly for people who are blind or have low vision (pBLV). The challenges faced by pBLV in detecting and properly orientating themselves for these abrupt elevation changes can lead to falls and serious injuries. Despite recent advancements in assistive technologies, the detection and early warning of curbs remains a largely unsolved challenge. This paper aims to tackle this gap by introducing a novel, multi-faceted sensory substitution approach hosted on a smart wearable; the platform leverages an RGB camera and an embedded system to capture and segment curbs in real time and provide early warning and orientation information. The system utilizes the YOLO (You Only Look Once) v8 segmentation model, trained on our custom curb dataset for the camera input. The output of the system consists of adaptive auditory beeps, abstract sonification, and speech, conveying information about the relative distance and orientation of curbs. Through human-subjects experimentation, we demonstrate the effectiveness of the system as compared to the white cane. Results show that our system can provide advance warning over a larger safety window than the cane, while offering nearly identical curb orientation information.
[ { "version": "v1", "created": "Mon, 26 Aug 2024 18:52:45 GMT" }, { "version": "v2", "created": "Wed, 28 Aug 2024 14:22:22 GMT" } ]
2025-03-03T00:00:00
[ [ "Ruan", "Ligao", "" ], [ "Hamilton-Fletcher", "Giles", "" ], [ "Beheshti", "Mahya", "" ], [ "Hudson", "Todd E", "" ], [ "Porfiri", "Maurizio", "" ], [ "Rizzo", "JR", "" ] ]
TITLE: Multi-faceted Sensory Substitution for Curb Alerting: A Pilot Investigation in Persons with Blindness and Low Vision ABSTRACT: Curbs -- the edge of a raised sidewalk at the point where it meets a street -- are crucial in urban environments, where they help delineate safe pedestrian zones from dangerous vehicular lanes. However, curbs themselves are significant navigation hazards, particularly for people who are blind or have low vision (pBLV). The challenges faced by pBLV in detecting and properly orientating themselves for these abrupt elevation changes can lead to falls and serious injuries. Despite recent advancements in assistive technologies, the detection and early warning of curbs remains a largely unsolved challenge. This paper aims to tackle this gap by introducing a novel, multi-faceted sensory substitution approach hosted on a smart wearable; the platform leverages an RGB camera and an embedded system to capture and segment curbs in real time and provide early warning and orientation information. The system utilizes the YOLO (You Only Look Once) v8 segmentation model, trained on our custom curb dataset for the camera input. The output of the system consists of adaptive auditory beeps, abstract sonification, and speech, conveying information about the relative distance and orientation of curbs. Through human-subjects experimentation, we demonstrate the effectiveness of the system as compared to the white cane. Results show that our system can provide advance warning over a larger safety window than the cane, while offering nearly identical curb orientation information.
new_dataset
0.964355
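The wearable runs YOLOv8 segmentation on RGB frames and maps curb proximity to adaptive beeps. A sketch using the ultralytics package; "curb-seg.pt" is a placeholder for the custom-trained weights, and the distance-to-beep mapping is a simplified illustration rather than the system's actual sonification:

```python
from ultralytics import YOLO

model = YOLO("curb-seg.pt")  # placeholder for the custom-trained weights

def beep_interval_s(distance_m, near=0.5, far=3.0):
    """Closer curb -> faster beeps; beyond `far`, stay silent."""
    if distance_m > far:
        return None
    d = max(distance_m, near)
    return 0.1 + 0.9 * (d - near) / (far - near)  # 0.1 s .. 1.0 s

results = model("frame.jpg")      # run segmentation on one camera frame
for r in results:
    if r.masks is not None:       # curb pixels found
        # Distance estimation from the mask is simplified away here.
        print("curb detected; beep every", beep_interval_s(1.2), "s")
```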
2409.01790
Shiwen Ni
Shiwen Ni, Xiangtao Kong, Chengming Li, Xiping Hu, Ruifeng Xu, Jia Zhu, Min Yang
Training on the Benchmark Is Not All You Need
null
AAAI 2025
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The success of Large Language Models (LLMs) relies heavily on the huge amount of pre-training data learned in the pre-training phase. The opacity of the pre-training process and the training data causes the results of many benchmark tests to become unreliable. If any model has been trained on a benchmark test set, it can seriously hinder the health of the field. In order to automate and efficiently test the capabilities of large language models, numerous mainstream benchmarks adopt a multiple-choice format. As the swapping of the contents of multiple-choice options does not affect the meaning of the question itself, we propose a simple and effective data leakage detection method based on this property. Specifically, we shuffle the contents of the options in the data to generate the corresponding derived data sets, and then detect data leakage based on the model's log probability distribution over the derived data sets. If the maximum of the set of log probabilities is an outlier, it indicates that the data is leaked. Our method is able to work under gray-box conditions without access to model training data or weights, effectively identifying data leakage from benchmark test sets in model pre-training data, including both normal scenarios and complex scenarios where options may have been shuffled intentionally or unintentionally. Through experiments based on two LLMs and benchmark designs, we demonstrate the effectiveness of our method. In addition, we evaluate the degree of data leakage of 35 mainstream open-source LLMs on four benchmark datasets and give a ranking of the leaked LLMs for each benchmark, and we find that the Qwen family of LLMs has the highest degree of data leakage.
[ { "version": "v1", "created": "Tue, 3 Sep 2024 11:09:44 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 02:40:58 GMT" } ]
2025-03-03T00:00:00
[ [ "Ni", "Shiwen", "" ], [ "Kong", "Xiangtao", "" ], [ "Li", "Chengming", "" ], [ "Hu", "Xiping", "" ], [ "Xu", "Ruifeng", "" ], [ "Zhu", "Jia", "" ], [ "Yang", "Min", "" ] ]
TITLE: Training on the Benchmark Is Not All You Need ABSTRACT: The success of Large Language Models (LLMs) relies heavily on the huge amount of pre-training data learned in the pre-training phase. The opacity of the pre-training process and the training data causes the results of many benchmark tests to become unreliable. If any model has been trained on a benchmark test set, it can seriously hinder the health of the field. In order to automate and efficiently test the capabilities of large language models, numerous mainstream benchmarks adopt a multiple-choice format. As the swapping of the contents of multiple-choice options does not affect the meaning of the question itself, we propose a simple and effective data leakage detection method based on this property. Specifically, we shuffle the contents of the options in the data to generate the corresponding derived data sets, and then detect data leakage based on the model's log probability distribution over the derived data sets. If the maximum of the set of log probabilities is an outlier, it indicates that the data is leaked. Our method is able to work under gray-box conditions without access to model training data or weights, effectively identifying data leakage from benchmark test sets in model pre-training data, including both normal scenarios and complex scenarios where options may have been shuffled intentionally or unintentionally. Through experiments based on two LLMs and benchmark designs, we demonstrate the effectiveness of our method. In addition, we evaluate the degree of data leakage of 35 mainstream open-source LLMs on four benchmark datasets and give a ranking of the leaked LLMs for each benchmark, and we find that the Qwen family of LLMs has the highest degree of data leakage.
no_new_dataset
0.94545
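The detection method shuffles the options of each multiple-choice question, scores every permutation's log-probability under the model, and flags leakage when the original ordering is an outlier maximum. A compact sketch; logprob is an assumed scoring function over a formatted question, and the z-score outlier test is one simple instantiation of the abstract's criterion:

```python
import itertools
import statistics

def is_leaked(question, options, logprob, z_thresh=2.0, max_perms=24):
    """Flag leakage if the original option order is an outlier maximum."""
    perms = list(itertools.permutations(options))[:max_perms]
    scores = [logprob(question, list(p)) for p in perms]
    original = scores[0]  # permutations() yields the identity ordering first
    mu, sd = statistics.mean(scores), statistics.pstdev(scores)
    return (original == max(scores)
            and sd > 0
            and (original - mu) / sd > z_thresh)
```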
2409.02392
Wei Xiong
Wei Xiong, Chengshuai Shi, Jiaming Shen, Aviv Rosenberg, Zhen Qin, Daniele Calandriello, Misha Khalman, Rishabh Joshi, Bilal Piot, Mohammad Saleh, Chi Jin, Tong Zhang, Tianqi Liu
Building Math Agents with Multi-Turn Iterative Preference Learning
A multi-turn direct preference learning framework for tool-integrated reasoning tasks
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies have shown that large language models' (LLMs) mathematical problem-solving capabilities can be enhanced by integrating external tools, such as code interpreters, and employing multi-turn Chain-of-Thought (CoT) reasoning. While current methods focus on synthetic data generation and Supervised Fine-Tuning (SFT), this paper studies the complementary direct preference learning approach to further improve model performance. However, existing direct preference learning algorithms are originally designed for the single-turn chat task, and do not fully address the complexities of multi-turn reasoning and external tool integration required for tool-integrated mathematical reasoning tasks. To fill in this gap, we introduce a multi-turn direct preference learning framework, tailored for this context, that leverages feedback from code interpreters and optimizes trajectory-level preferences. This framework includes multi-turn DPO and multi-turn KTO as specific implementations. The effectiveness of our framework is validated through training of various language models using an augmented prompt set from the GSM8K and MATH datasets. Our results demonstrate substantial improvements: a supervised fine-tuned Gemma-1.1-it-7B model's performance increased from 77.5% to 83.9% on GSM8K and from 46.1% to 51.2% on MATH. Similarly, a Gemma-2-it-9B model improved from 84.1% to 86.3% on GSM8K and from 51.0% to 54.5% on MATH.
[ { "version": "v1", "created": "Wed, 4 Sep 2024 02:41:04 GMT" }, { "version": "v2", "created": "Thu, 27 Feb 2025 22:10:16 GMT" } ]
2025-03-03T00:00:00
[ [ "Xiong", "Wei", "" ], [ "Shi", "Chengshuai", "" ], [ "Shen", "Jiaming", "" ], [ "Rosenberg", "Aviv", "" ], [ "Qin", "Zhen", "" ], [ "Calandriello", "Daniele", "" ], [ "Khalman", "Misha", "" ], [ "Joshi", "Rishabh", "" ], [ "Piot", "Bilal", "" ], [ "Saleh", "Mohammad", "" ], [ "Jin", "Chi", "" ], [ "Zhang", "Tong", "" ], [ "Liu", "Tianqi", "" ] ]
TITLE: Building Math Agents with Multi-Turn Iterative Preference Learning ABSTRACT: Recent studies have shown that large language models' (LLMs) mathematical problem-solving capabilities can be enhanced by integrating external tools, such as code interpreters, and employing multi-turn Chain-of-Thought (CoT) reasoning. While current methods focus on synthetic data generation and Supervised Fine-Tuning (SFT), this paper studies the complementary direct preference learning approach to further improve model performance. However, existing direct preference learning algorithms are originally designed for the single-turn chat task, and do not fully address the complexities of multi-turn reasoning and external tool integration required for tool-integrated mathematical reasoning tasks. To fill in this gap, we introduce a multi-turn direct preference learning framework, tailored for this context, that leverages feedback from code interpreters and optimizes trajectory-level preferences. This framework includes multi-turn DPO and multi-turn KTO as specific implementations. The effectiveness of our framework is validated through training of various language models using an augmented prompt set from the GSM8K and MATH datasets. Our results demonstrate substantial improvements: a supervised fine-tuned Gemma-1.1-it-7B model's performance increased from 77.5% to 83.9% on GSM8K and from 46.1% to 51.2% on MATH. Similarly, a Gemma-2-it-9B model improved from 84.1% to 86.3% on GSM8K and from 51.0% to 54.5% on MATH.
no_new_dataset
0.942454
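Multi-turn DPO extends the pairwise DPO objective to trajectory-level preferences. A sketch of the loss on per-trajectory summed log-probabilities; masking of code-interpreter outputs before summation is noted in a comment but simplified away, and this is a generic rendering rather than the paper's exact implementation:

```python
import torch
import torch.nn.functional as F

def multi_turn_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss where each logp is summed over a whole multi-turn
    trajectory's model-generated tokens (external tool outputs are
    masked out before summation, which is simplified away here)."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin).mean()

# Toy usage with per-trajectory summed log-probs (chosen vs. rejected).
loss = multi_turn_dpo_loss(torch.tensor([-42.0]), torch.tensor([-55.0]),
                           torch.tensor([-44.0]), torch.tensor([-54.0]))
print(float(loss))
```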
2409.03550
Qianlong Xiang
Qianlong Xiang, Miao Zhang, Yuzhang Shang, Jianlong Wu, Yan Yan, Liqiang Nie
DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion models (DMs) have demonstrated exceptional generative capabilities across various domains, including image, video, and so on. A key factor contributing to their effectiveness is the high quantity and quality of data used during training. However, mainstream DMs now consume increasingly large amounts of data. For example, training a Stable Diffusion model requires billions of image-text pairs. This enormous data requirement poses significant challenges for training large DMs due to high data acquisition costs and storage expenses. To alleviate this data burden, we propose a novel scenario: using existing DMs as data sources to train new DMs with any architecture. We refer to this scenario as Data-Free Knowledge Distillation for Diffusion Models (DKDM), where the generative ability of DMs is transferred to new ones in a data-free manner. To tackle this challenge, we make two main contributions. First, we introduce a DKDM objective that enables the training of new DMs via distillation, without requiring access to the data. Second, we develop a dynamic iterative distillation method that efficiently extracts time-domain knowledge from existing DMs, enabling direct retrieval of training data without the need for a prolonged generative process. To the best of our knowledge, we are the first to explore this scenario. Experimental results demonstrate that our data-free approach not only achieves competitive generative performance but also, in some instances, outperforms models trained with the entire dataset.
[ { "version": "v1", "created": "Thu, 5 Sep 2024 14:12:22 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 15:26:03 GMT" } ]
2025-03-03T00:00:00
[ [ "Xiang", "Qianlong", "" ], [ "Zhang", "Miao", "" ], [ "Shang", "Yuzhang", "" ], [ "Wu", "Jianlong", "" ], [ "Yan", "Yan", "" ], [ "Nie", "Liqiang", "" ] ]
TITLE: DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture ABSTRACT: Diffusion models (DMs) have demonstrated exceptional generative capabilities across various domains, including image, video, and so on. A key factor contributing to their effectiveness is the high quantity and quality of data used during training. However, mainstream DMs now consume increasingly large amounts of data. For example, training a Stable Diffusion model requires billions of image-text pairs. This enormous data requirement poses significant challenges for training large DMs due to high data acquisition costs and storage expenses. To alleviate this data burden, we propose a novel scenario: using existing DMs as data sources to train new DMs with any architecture. We refer to this scenario as Data-Free Knowledge Distillation for Diffusion Models (DKDM), where the generative ability of DMs is transferred to new ones in a data-free manner. To tackle this challenge, we make two main contributions. First, we introduce a DKDM objective that enables the training of new DMs via distillation, without requiring access to the data. Second, we develop a dynamic iterative distillation method that efficiently extracts time-domain knowledge from existing DMs, enabling direct retrieval of training data without the need for a prolonged generative process. To the best of our knowledge, we are the first to explore this scenario. Experimental results demonstrate that our data-free approach not only achieves competitive generative performance but also, in some instances, outperforms models trained with the entire dataset.
no_new_dataset
0.945751
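Editor's note on the DKDM record above: a minimal, hypothetical sketch of the core data-free idea — training a student on the teacher's denoising predictions over sampled noisy latents instead of real data. `teacher` and `student` are assumed epsilon-prediction networks (DDPM-style); the paper's actual objective and its dynamic iterative distillation are more involved than this.

```python
# Hypothetical data-free distillation step for diffusion models (assumes
# teacher/student are callables mapping (x_t, t) -> predicted noise).
import torch

def distill_step(teacher, student, optimizer, batch_size, timesteps, shape):
    # With no dataset available, random latents at random timesteps stand
    # in for noised training samples from the forward process.
    x_t = torch.randn(batch_size, *shape)
    t = torch.randint(0, timesteps, (batch_size,))
    with torch.no_grad():
        target = teacher(x_t, t)   # teacher's noise prediction = training signal
    pred = student(x_t, t)         # student mimics the teacher
    loss = torch.nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```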
2409.10653
Raika Karimi
Raika Karimi, Faezeh Faez, Yingxue Zhang, Xing Li, Lei Chen, Mingxuan Yuan, Mahdi Biparva
Logic Synthesis Optimization with Predictive Self-Supervision via Causal Transformers
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Contemporary hardware design benefits from the abstraction provided by high-level logic gates, streamlining the implementation of logic circuits. Logic Synthesis Optimization (LSO) operates at one level of abstraction within the Electronic Design Automation (EDA) workflow, targeting improvements in logic circuits with respect to performance metrics such as size and speed in the final layout. Recent trends in the field show a growing interest in leveraging Machine Learning (ML) for EDA, notably through ML-guided logic synthesis utilizing policy-based Reinforcement Learning (RL) methods. Despite these advancements, existing models face challenges such as overfitting and limited generalization, attributed to constrained public circuits and the expressiveness limitations of graph encoders. To address these hurdles and tackle data scarcity issues, we introduce LSOformer, a novel approach harnessing autoregressive transformer models and predictive self-supervised learning (SSL) to predict the trajectory of Quality of Results (QoR). LSOformer integrates cross-attention modules to merge insights from circuit graphs and optimization sequences, thereby enhancing prediction accuracy for QoR metrics. Experimental studies validate the effectiveness of LSOformer, showcasing its superior performance over baseline architectures in QoR prediction tasks, where it achieves improvements of 5.74%, 4.35%, and 17.06% on the EPFL, OABCD, and proprietary circuits datasets, respectively, in the inductive setup.
[ { "version": "v1", "created": "Mon, 16 Sep 2024 18:45:07 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 16:04:54 GMT" } ]
2025-03-03T00:00:00
[ [ "Karimi", "Raika", "" ], [ "Faez", "Faezeh", "" ], [ "Zhang", "Yingxue", "" ], [ "Li", "Xing", "" ], [ "Chen", "Lei", "" ], [ "Yuan", "Mingxuan", "" ], [ "Biparva", "Mahdi", "" ] ]
TITLE: Logic Synthesis Optimization with Predictive Self-Supervision via Causal Transformers ABSTRACT: Contemporary hardware design benefits from the abstraction provided by high-level logic gates, streamlining the implementation of logic circuits. Logic Synthesis Optimization (LSO) operates at one level of abstraction within the Electronic Design Automation (EDA) workflow, targeting improvements in logic circuits with respect to performance metrics such as size and speed in the final layout. Recent trends in the field show a growing interest in leveraging Machine Learning (ML) for EDA, notably through ML-guided logic synthesis utilizing policy-based Reinforcement Learning (RL) methods. Despite these advancements, existing models face challenges such as overfitting and limited generalization, attributed to constrained public circuits and the expressiveness limitations of graph encoders. To address these hurdles and tackle data scarcity issues, we introduce LSOformer, a novel approach harnessing autoregressive transformer models and predictive self-supervised learning (SSL) to predict the trajectory of Quality of Results (QoR). LSOformer integrates cross-attention modules to merge insights from circuit graphs and optimization sequences, thereby enhancing prediction accuracy for QoR metrics. Experimental studies validate the effectiveness of LSOformer, showcasing its superior performance over baseline architectures in QoR prediction tasks, where it achieves improvements of 5.74%, 4.35%, and 17.06% on the EPFL, OABCD, and proprietary circuits datasets, respectively, in the inductive setup.
no_new_dataset
0.943243
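Editor's note on the LSOformer record above: a hedged sketch of the cross-attention fusion the abstract describes — optimization-sequence embeddings attending over circuit-graph node embeddings to predict a per-step QoR trajectory. Module names, dimensions, and the single-head design are illustrative assumptions, not LSOformer's actual architecture.

```python
# Illustrative cross-attention fusion for QoR trajectory prediction.
import torch
import torch.nn as nn

class QoRCrossAttention(nn.Module):
    def __init__(self, d_model=128, n_heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)  # one QoR value per sequence step

    def forward(self, seq_emb, graph_emb):
        # seq_emb:   (B, L, d) embeddings of the optimization sequence
        # graph_emb: (B, N, d) node embeddings from a circuit-graph encoder
        fused, _ = self.cross_attn(query=seq_emb, key=graph_emb, value=graph_emb)
        return self.head(fused).squeeze(-1)  # (B, L) predicted QoR trajectory
```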
2409.16238
Dominic Phillips
Jonathan Feldstein, Dominic Phillips, Efthymia Tsamoura
Efficiently Learning Probabilistic Logical Models by Cheaply Ranking Mined Rules
21 pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Probabilistic logical models are a core component of neurosymbolic AI and are important in their own right for tasks that require high explainability. Unlike neural networks, logical theories that underlie the model are often handcrafted using domain expertise, making their development costly and prone to errors. While there are algorithms that learn logical theories from data, they are generally prohibitively expensive, limiting their applicability in real-world settings. Here, we introduce precision and recall for logical rules and define their composition as rule utility -- a cost-effective measure of the predictive power of logical theories. We also introduce SPECTRUM, a scalable framework for learning logical theories from relational data. Its scalability derives from a linear-time algorithm that mines recurrent subgraphs in the data graph along with a second algorithm that, using the cheap utility measure, efficiently ranks rules derived from these subgraphs. Finally, we prove theoretical guarantees on the utility of the learnt logical theory. As a result, we demonstrate across various tasks that SPECTRUM scales to larger datasets, often learning more accurate logical theories on CPUs in < 1% of the runtime of SOTA neural network approaches on GPUs.
[ { "version": "v1", "created": "Tue, 24 Sep 2024 16:54:12 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 16:29:51 GMT" } ]
2025-03-03T00:00:00
[ [ "Feldstein", "Jonathan", "" ], [ "Phillips", "Dominic", "" ], [ "Tsamoura", "Efthymia", "" ] ]
TITLE: Efficiently Learning Probabilistic Logical Models by Cheaply Ranking Mined Rules ABSTRACT: Probabilistic logical models are a core component of neurosymbolic AI and are important in their own right for tasks that require high explainability. Unlike neural networks, logical theories that underlie the model are often handcrafted using domain expertise, making their development costly and prone to errors. While there are algorithms that learn logical theories from data, they are generally prohibitively expensive, limiting their applicability in real-world settings. Here, we introduce precision and recall for logical rules and define their composition as rule utility -- a cost-effective measure of the predictive power of logical theories. We also introduce SPECTRUM, a scalable framework for learning logical theories from relational data. Its scalability derives from a linear-time algorithm that mines recurrent subgraphs in the data graph along with a second algorithm that, using the cheap utility measure, efficiently ranks rules derived from these subgraphs. Finally, we prove theoretical guarantees on the utility of the learnt logical theory. As a result, we demonstrate across various tasks that SPECTRUM scales to larger datasets, often learning more accurate logical theories on CPUs in < 1% of the runtime of SOTA neural network approaches on GPUs.
no_new_dataset
0.948202
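Editor's note on the SPECTRUM record above: a toy computation of precision and recall for a logical rule (facts it derives vs. facts that hold in the data), composed into a single utility score. The exact composition used by the paper is defined there; the harmonic mean below is only a stand-in to show the shape of the computation.

```python
# Toy rule-utility score: precision/recall of derived facts, composed
# here (as an assumption) via their harmonic mean.
def rule_utility(predicted_facts, true_facts):
    predicted_facts, true_facts = set(predicted_facts), set(true_facts)
    if not predicted_facts or not true_facts:
        return 0.0
    tp = len(predicted_facts & true_facts)
    precision = tp / len(predicted_facts)
    recall = tp / len(true_facts)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: a rule deriving 3 facts, 2 of which hold in the data.
print(rule_utility({"p(a)", "p(b)", "p(c)"}, {"p(a)", "p(b)", "q(d)"}))
```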
2410.00477
Pranav Gupta
Pranav Gupta, Advith Krishnan, Naman Nanda, Ananth Eswar, Deeksha Agarwal, Pratham Gohil, Pratyush Goel
ViDAS: Vision-based Danger Assessment and Scoring
Preprint
null
10.1145/3702250.3702279
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present a novel dataset aimed at advancing danger analysis and assessment by addressing the challenge of quantifying danger in video content and assessing how human-like a Large Language Model (LLM) evaluator is at the same task. This is achieved by compiling a collection of 100 YouTube videos featuring various events. Each video is annotated by human participants who provided danger ratings on a scale from 0 (no danger to humans) to 10 (life-threatening), with precise timestamps indicating moments of heightened danger. Additionally, we leverage LLMs to independently assess the danger levels in these videos using video summaries. We introduce Mean Squared Error (MSE) scores for multimodal meta-evaluation of the alignment between human and LLM danger assessments. Our dataset not only contributes a new resource for danger assessment in video content but also demonstrates the potential of LLMs in achieving human-like evaluations.
[ { "version": "v1", "created": "Tue, 1 Oct 2024 08:06:46 GMT" } ]
2025-03-03T00:00:00
[ [ "Gupta", "Pranav", "" ], [ "Krishnan", "Advith", "" ], [ "Nanda", "Naman", "" ], [ "Eswar", "Ananth", "" ], [ "Agarwal", "Deeksha", "" ], [ "Gohil", "Pratham", "" ], [ "Goel", "Pratyush", "" ] ]
TITLE: ViDAS: Vision-based Danger Assessment and Scoring ABSTRACT: We present a novel dataset aimed at advancing danger analysis and assessment by addressing the challenge of quantifying danger in video content and assessing how human-like a Large Language Model (LLM) evaluator is at the same task. This is achieved by compiling a collection of 100 YouTube videos featuring various events. Each video is annotated by human participants who provided danger ratings on a scale from 0 (no danger to humans) to 10 (life-threatening), with precise timestamps indicating moments of heightened danger. Additionally, we leverage LLMs to independently assess the danger levels in these videos using video summaries. We introduce Mean Squared Error (MSE) scores for multimodal meta-evaluation of the alignment between human and LLM danger assessments. Our dataset not only contributes a new resource for danger assessment in video content but also demonstrates the potential of LLMs in achieving human-like evaluations.
new_dataset
0.955402
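Editor's note on the ViDAS record above: the MSE-based meta-evaluation the abstract names is simple enough to show directly; the scores below are made-up examples on the paper's 0-10 danger scale.

```python
# MSE between human and LLM danger ratings (0-10 scale); toy inputs.
def danger_mse(human_scores, llm_scores):
    assert len(human_scores) == len(llm_scores)
    return sum((h - l) ** 2 for h, l in zip(human_scores, llm_scores)) / len(human_scores)

print(danger_mse([7.5, 2.0, 9.0], [6.0, 3.0, 8.5]))  # ~1.167
```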
2410.01628
Aron Distelzweig
Aron Distelzweig, Andreas Look, Eitan Kosman, Faris Janjo\v{s}, J\"org Wagner, Abhinav Valada
Stochasticity in Motion: An Information-Theoretic Approach to Trajectory Prediction
8 pages, 5 figures, submitted to International Conference on Intelligent Robots and Systems (IROS 2025)
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In autonomous driving, accurate motion prediction is crucial for safe and efficient motion planning. To ensure safety, planners require reliable uncertainty estimates of the predicted behavior of surrounding agents, yet this aspect has received limited attention. In particular, decomposing uncertainty into its aleatoric and epistemic components is essential for distinguishing between inherent environmental randomness and model uncertainty, thereby enabling more robust and informed decision-making. This paper addresses the challenge of uncertainty modeling in trajectory prediction with a holistic approach that emphasizes uncertainty quantification, decomposition, and the impact of model composition. Our method, grounded in information theory, provides a theoretically principled way to measure uncertainty and decompose it into aleatoric and epistemic components. Unlike prior work, our approach is compatible with state-of-the-art motion predictors, allowing for broader applicability. We demonstrate its utility by conducting extensive experiments on the nuScenes dataset, which shows how different architectures and configurations influence uncertainty quantification and model robustness.
[ { "version": "v1", "created": "Wed, 2 Oct 2024 15:02:32 GMT" }, { "version": "v2", "created": "Mon, 7 Oct 2024 11:57:37 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 16:28:50 GMT" } ]
2025-03-03T00:00:00
[ [ "Distelzweig", "Aron", "" ], [ "Look", "Andreas", "" ], [ "Kosman", "Eitan", "" ], [ "Janjoš", "Faris", "" ], [ "Wagner", "Jörg", "" ], [ "Valada", "Abhinav", "" ] ]
TITLE: Stochasticity in Motion: An Information-Theoretic Approach to Trajectory Prediction ABSTRACT: In autonomous driving, accurate motion prediction is crucial for safe and efficient motion planning. To ensure safety, planners require reliable uncertainty estimates of the predicted behavior of surrounding agents, yet this aspect has received limited attention. In particular, decomposing uncertainty into its aleatoric and epistemic components is essential for distinguishing between inherent environmental randomness and model uncertainty, thereby enabling more robust and informed decision-making. This paper addresses the challenge of uncertainty modeling in trajectory prediction with a holistic approach that emphasizes uncertainty quantification, decomposition, and the impact of model composition. Our method, grounded in information theory, provides a theoretically principled way to measure uncertainty and decompose it into aleatoric and epistemic components. Unlike prior work, our approach is compatible with state-of-the-art motion predictors, allowing for broader applicability. We demonstrate its utility by conducting extensive experiments on the nuScenes dataset, which shows how different architectures and configurations influence uncertainty quantification and model robustness.
no_new_dataset
0.940188
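Editor's note on the trajectory-prediction record above: the aleatoric/epistemic split it discusses is commonly estimated with the standard entropy decomposition over an ensemble (total predictive entropy = expected member entropy + mutual information); a minimal sketch follows, with the caveat that the paper's exact estimator may differ.

```python
# Information-theoretic uncertainty decomposition for a model ensemble.
import numpy as np

def decompose_uncertainty(probs):
    # probs: (M, K) -- M ensemble members, K-way categorical prediction
    eps = 1e-12
    mean_p = probs.mean(axis=0)
    total = -(mean_p * np.log(mean_p + eps)).sum()                  # predictive entropy
    aleatoric = -(probs * np.log(probs + eps)).sum(axis=1).mean()   # expected entropy
    epistemic = total - aleatoric                                   # mutual information
    return total, aleatoric, epistemic

probs = np.array([[0.7, 0.2, 0.1], [0.2, 0.7, 0.1]])  # disagreeing members
print(decompose_uncertainty(probs))  # disagreement shows up as epistemic
```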
2410.01671
Yanming Liu
Yanming Liu, Xinyue Peng, Jiannan Cao, Shi Bo, Yanxin Shen, Tianyu Du, Sheng Cheng, Xun Wang, Jianwei Yin, Xuhong Zhang
Bridging Context Gaps: Leveraging Coreference Resolution for Long Contextual Understanding
ICLR 2025 camera ready version, with updated metadata
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Large language models (LLMs) have shown remarkable capabilities in natural language processing; however, they still face difficulties when tasked with understanding lengthy contexts and executing effective question answering. These challenges often arise due to the complexity and ambiguity present in longer texts. To enhance the performance of LLMs in such scenarios, we introduce the Long Question Coreference Adaptation (LQCA) method. This innovative framework focuses on coreference resolution tailored to long contexts, allowing the model to identify and manage references effectively. The LQCA method encompasses four key steps: resolving coreferences within sub-documents, computing the distances between mentions, defining a representative mention for coreference, and answering questions through mention replacement. By processing information systematically, the framework provides easier-to-handle partitions for LLMs, promoting better understanding. Experimental evaluations on a range of LLMs and datasets have yielded positive results, with notable improvements on the OpenAI-o1-mini and GPT-4o models, highlighting the effectiveness of leveraging coreference resolution to bridge context gaps in question answering. Our code is public at https://github.com/OceannTwT/LQCA.
[ { "version": "v1", "created": "Wed, 2 Oct 2024 15:39:55 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 07:09:00 GMT" } ]
2025-03-03T00:00:00
[ [ "Liu", "Yanming", "" ], [ "Peng", "Xinyue", "" ], [ "Cao", "Jiannan", "" ], [ "Bo", "Shi", "" ], [ "Shen", "Yanxin", "" ], [ "Du", "Tianyu", "" ], [ "Cheng", "Sheng", "" ], [ "Wang", "Xun", "" ], [ "Yin", "Jianwei", "" ], [ "Zhang", "Xuhong", "" ] ]
TITLE: Bridging Context Gaps: Leveraging Coreference Resolution for Long Contextual Understanding ABSTRACT: Large language models (LLMs) have shown remarkable capabilities in natural language processing; however, they still face difficulties when tasked with understanding lengthy contexts and executing effective question answering. These challenges often arise due to the complexity and ambiguity present in longer texts. To enhance the performance of LLMs in such scenarios, we introduce the Long Question Coreference Adaptation (LQCA) method. This innovative framework focuses on coreference resolution tailored to long contexts, allowing the model to identify and manage references effectively. The LQCA method encompasses four key steps: resolving coreferences within sub-documents, computing the distances between mentions, defining a representative mention for coreference, and answering questions through mention replacement. By processing information systematically, the framework provides easier-to-handle partitions for LLMs, promoting better understanding. Experimental evaluations on a range of LLMs and datasets have yielded positive results, with notable improvements on the OpenAI-o1-mini and GPT-4o models, highlighting the effectiveness of leveraging coreference resolution to bridge context gaps in question answering. Our code is public at https://github.com/OceannTwT/LQCA.
no_new_dataset
0.942507
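Editor's note on the LQCA record above: a toy illustration of the final step (answering through mention replacement) — every mention in a coreference cluster is rewritten to a representative mention before the question is posed. Cluster construction and the distance computation from the actual method are omitted, and the naive string replacement below is for illustration only.

```python
# Naive mention replacement: rewrite all cluster mentions to one
# representative mention (illustrative only; real systems operate on spans).
def replace_mentions(text, cluster, representative):
    for mention in cluster:
        text = text.replace(mention, representative)
    return text

doc = "The CEO met the board. She outlined the plan."
print(replace_mentions(doc, ["She"], "The CEO"))
```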
2410.01767
Carlos Miguel Pati\~no
Santiago Cortes-Gomez, Carlos Pati\~no, Yewon Byun, Steven Wu, Eric Horvitz, Bryan Wilder
Utility-Directed Conformal Prediction: A Decision-Aware Framework for Actionable Uncertainty Quantification
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Interest has been growing in decision-focused machine learning methods which train models to account for how their predictions are used in downstream optimization problems. Doing so can often improve performance on subsequent decision problems. However, current methods for uncertainty quantification do not incorporate any information about downstream decisions. We develop a methodology based on conformal prediction to identify prediction sets that account for a downstream cost function, making them more appropriate to inform high-stakes decision-making. Our approach harnesses the strengths of conformal methods -- modularity, model-agnosticism, and statistical coverage guarantees -- while incorporating downstream decisions and user-specified utility functions. We prove that our methods retain standard coverage guarantees. Empirical evaluation across a range of datasets and utility metrics demonstrates that our methods achieve significantly lower costs than standard conformal methods. We present a real-world use case in healthcare diagnosis, where our method effectively incorporates the hierarchical structure of dermatological diseases. The method successfully generates sets with coherent diagnostic meaning, potentially aiding triage for dermatology diagnosis and illustrating how our method can ground high-stakes decision-making employing domain knowledge.
[ { "version": "v1", "created": "Wed, 2 Oct 2024 17:22:09 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 09:26:15 GMT" } ]
2025-03-03T00:00:00
[ [ "Cortes-Gomez", "Santiago", "" ], [ "Patiño", "Carlos", "" ], [ "Byun", "Yewon", "" ], [ "Wu", "Steven", "" ], [ "Horvitz", "Eric", "" ], [ "Wilder", "Bryan", "" ] ]
TITLE: Utility-Directed Conformal Prediction: A Decision-Aware Framework for Actionable Uncertainty Quantification ABSTRACT: Interest has been growing in decision-focused machine learning methods which train models to account for how their predictions are used in downstream optimization problems. Doing so can often improve performance on subsequent decision problems. However, current methods for uncertainty quantification do not incorporate any information about downstream decisions. We develop a methodology based on conformal prediction to identify prediction sets that account for a downstream cost function, making them more appropriate to inform high-stakes decision-making. Our approach harnesses the strengths of conformal methods -- modularity, model-agnosticism, and statistical coverage guarantees -- while incorporating downstream decisions and user-specified utility functions. We prove that our methods retain standard coverage guarantees. Empirical evaluation across a range of datasets and utility metrics demonstrates that our methods achieve significantly lower costs than standard conformal methods. We present a real-world use case in healthcare diagnosis, where our method effectively incorporates the hierarchical structure of dermatological diseases. The method successfully generates sets with coherent diagnostic meaning, potentially aiding triage for dermatology diagnosis and illustrating how our method can ground high-stakes decision-making employing domain knowledge.
no_new_dataset
0.945901
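Editor's note on the conformal-prediction record above: a hedged sketch of how a user-specified cost can enter split conformal prediction through the nonconformity score. Applying the same score on calibration and test data is what preserves the standard marginal coverage guarantee; the cost weighting below is an illustrative assumption, not the paper's exact construction.

```python
# Split conformal prediction with a cost-aware nonconformity score.
import numpy as np

def utility_score(probs, y, cost):
    # Mixes model confidence with a per-label cost (assumed given).
    return (1.0 - probs[y]) * cost[y]

def conformal_sets(cal_probs, cal_labels, test_probs, cost, alpha=0.1):
    n = len(cal_labels)
    cal_scores = np.array([utility_score(p, y, cost)
                           for p, y in zip(cal_probs, cal_labels)])
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(cal_scores, level)
    K = test_probs.shape[1]
    return [[k for k in range(K) if utility_score(p, k, cost) <= q]
            for p in test_probs]
```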
2410.03074
Yuehan Qin
Yuehan Qin, Yichi Zhang, Yi Nian, Xueying Ding, Yue Zhao
MetaOOD: Automatic Selection of OOD Detection Models
Best paper at 2024 KDD Workshop on Resource-Efficient Learning. Extended version at ICLR 2025
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How can we automatically select an out-of-distribution (OOD) detection model for various underlying tasks? This is crucial for maintaining the reliability of open-world applications by identifying data distribution shifts, particularly in critical domains such as online transactions, autonomous driving, and real-time patient diagnosis. Despite the availability of numerous OOD detection methods, the challenge of selecting an optimal model for diverse tasks remains largely underexplored, especially in scenarios lacking ground truth labels. In this work, we introduce MetaOOD, the first zero-shot, unsupervised framework that utilizes meta-learning to select an OOD detection model automatically. As a meta-learning approach, MetaOOD leverages historical performance data of existing methods across various benchmark OOD detection datasets, enabling the effective selection of a suitable model for new datasets without the need for labeled data at test time. To quantify task similarities more accurately, we introduce language model-based embeddings that capture the distinctive OOD characteristics of both datasets and detection models. Through extensive experimentation with 24 unique test dataset pairs to choose from among 11 OOD detection models, we demonstrate that MetaOOD significantly outperforms existing methods and only brings marginal time overhead. Our results, validated by Wilcoxon statistical tests, show that MetaOOD surpasses a diverse group of 11 baselines, including established OOD detectors and advanced unsupervised selection methods.
[ { "version": "v1", "created": "Fri, 4 Oct 2024 01:36:19 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 05:14:32 GMT" } ]
2025-03-03T00:00:00
[ [ "Qin", "Yuehan", "" ], [ "Zhang", "Yichi", "" ], [ "Nian", "Yi", "" ], [ "Ding", "Xueying", "" ], [ "Zhao", "Yue", "" ] ]
TITLE: MetaOOD: Automatic Selection of OOD Detection Models ABSTRACT: How can we automatically select an out-of-distribution (OOD) detection model for various underlying tasks? This is crucial for maintaining the reliability of open-world applications by identifying data distribution shifts, particularly in critical domains such as online transactions, autonomous driving, and real-time patient diagnosis. Despite the availability of numerous OOD detection methods, the challenge of selecting an optimal model for diverse tasks remains largely underexplored, especially in scenarios lacking ground truth labels. In this work, we introduce MetaOOD, the first zero-shot, unsupervised framework that utilizes meta-learning to select an OOD detection model automatically. As a meta-learning approach, MetaOOD leverages historical performance data of existing methods across various benchmark OOD detection datasets, enabling the effective selection of a suitable model for new datasets without the need for labeled data at test time. To quantify task similarities more accurately, we introduce language model-based embeddings that capture the distinctive OOD characteristics of both datasets and detection models. Through extensive experimentation with 24 unique test dataset pairs to choose from among 11 OOD detection models, we demonstrate that MetaOOD significantly outperforms existing methods and only brings marginal time overhead. Our results, validated by Wilcoxon statistical tests, show that MetaOOD surpasses a diverse group of 11 baselines, including established OOD detectors and advanced unsupervised selection methods.
no_new_dataset
0.942929
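Editor's note on the MetaOOD record above: a minimal sketch of meta-learning-based model selection — embed the new dataset, find the most similar historical dataset, and reuse the detector that performed best there. The embedding source (MetaOOD uses language-model embeddings) and cosine similarity are simplifying assumptions.

```python
# Nearest-neighbor model selection over dataset embeddings (toy data).
import numpy as np

def select_model(new_emb, hist_embs, hist_best_models):
    sims = hist_embs @ new_emb / (
        np.linalg.norm(hist_embs, axis=1) * np.linalg.norm(new_emb) + 1e-12)
    return hist_best_models[int(np.argmax(sims))]

hist_embs = np.random.randn(24, 64)                     # 24 historical datasets
hist_best = [f"detector_{i % 11}" for i in range(24)]   # 11 candidate detectors
print(select_model(np.random.randn(64), hist_embs, hist_best))
```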
2410.05602
Byoungwoo Park
Byoungwoo Park, Hyungi Lee, Juho Lee
Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
null
null
null
null
stat.ML cs.LG
http://creativecommons.org/licenses/by/4.0/
Many real-world datasets, such as healthcare, climate, and economics, are often collected as irregular time series, which poses challenges for accurate modeling. In this paper, we propose the Amortized Control of continuous State Space Model (ACSSM) for continuous dynamical modeling of time series for irregular and discrete observations. We first present a multi-marginal Doob's $h$-transform to construct a continuous dynamical system conditioned on these irregular observations. Following this, we introduce a variational inference algorithm with a tight evidence lower bound (ELBO), leveraging stochastic optimal control (SOC) theory to approximate the intractable Doob's $h$-transform and simulate the conditioned dynamics. To improve efficiency and scalability during both training and inference, ACSSM leverages auxiliary variables to flexibly parameterize the latent dynamics and amortized control. Additionally, it incorporates a simulation-free latent dynamics framework and a transformer-based data assimilation scheme, facilitating parallel inference of the latent states and ELBO computation. Through empirical evaluations across a variety of real-world datasets, ACSSM demonstrates superior performance in tasks such as classification, regression, interpolation, and extrapolation, while maintaining computational efficiency.
[ { "version": "v1", "created": "Tue, 8 Oct 2024 01:27:46 GMT" }, { "version": "v2", "created": "Tue, 25 Feb 2025 00:18:24 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 03:30:08 GMT" } ]
2025-03-03T00:00:00
[ [ "Park", "Byoungwoo", "" ], [ "Lee", "Hyungi", "" ], [ "Lee", "Juho", "" ] ]
TITLE: Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series ABSTRACT: Many real-world datasets, such as healthcare, climate, and economics, are often collected as irregular time series, which poses challenges for accurate modeling. In this paper, we propose the Amortized Control of continuous State Space Model (ACSSM) for continuous dynamical modeling of time series for irregular and discrete observations. We first present a multi-marginal Doob's $h$-transform to construct a continuous dynamical system conditioned on these irregular observations. Following this, we introduce a variational inference algorithm with a tight evidence lower bound (ELBO), leveraging stochastic optimal control (SOC) theory to approximate the intractable Doob's $h$-transform and simulate the conditioned dynamics. To improve efficiency and scalability during both training and inference, ACSSM leverages auxiliary variables to flexibly parameterize the latent dynamics and amortized control. Additionally, it incorporates a simulation-free latent dynamics framework and a transformer-based data assimilation scheme, facilitating parallel inference of the latent states and ELBO computation. Through empirical evaluations across a variety of real-world datasets, ACSSM demonstrates superior performance in tasks such as classification, regression, interpolation, and extrapolation, while maintaining computational efficiency.
no_new_dataset
0.948394
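Editor's note on the ACSSM record above: for readers unfamiliar with the conditioning device the abstract names, a schematic single-trajectory form of Doob's $h$-transform for a diffusion $dX_t = b(X_t,t)\,dt + \sigma\,dW_t$ observed at times $t_i$ is shown below; the paper's multi-marginal construction and its SOC approximation refine this textbook form.

```latex
% Schematic h-transform for conditioning on observations y_{t_i}:
h(x,t) = \mathbb{E}\Big[\textstyle\prod_{t_i > t} p\big(y_{t_i} \mid X_{t_i}\big) \;\Big|\; X_t = x\Big],
\qquad
b^{*}(x,t) = b(x,t) + \sigma\sigma^{\top}\,\nabla_x \log h(x,t).
```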
2410.08014
Kai Zhang
Kai Zhang, Congchao Wang, Liqian Peng, Alec Go, Xiaozhong Liu
Privacy-preserved LLM Cascade via CoT-enhanced Policy Learning
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) have gained significant attention in on-device applications due to their remarkable performance across real-world tasks. However, on-device LLMs often suffer from suboptimal performance due to hardware limitations. A promising solution to this challenge is cascading a weaker local (on-device) LLM with a more powerful server LLM. While existing research on LLM cascade primarily optimizes the performance-cost trade-off, real-world applications impose additional requirements, such as privacy preservation, which remain largely unaddressed. In this work, we move beyond existing confidence- and logit-based LLM cascade methods and propose $\mathbf{P^{3}Defer}$, a novel Chain-of-Thought (CoT)-enhanced \textbf{p}olicy learning framework for \textbf{p}rivacy-\textbf{p}reserved \textbf{defer}ral decision-making. Our approach effectively improves cascade efficiency while mitigating privacy risks. Extensive experiments on three benchmark datasets demonstrate the effectiveness and superiority of $\mathbf{P^{3}Defer}$ over existing methods.
[ { "version": "v1", "created": "Thu, 10 Oct 2024 15:09:52 GMT" }, { "version": "v2", "created": "Thu, 27 Feb 2025 17:56:08 GMT" } ]
2025-03-03T00:00:00
[ [ "Zhang", "Kai", "" ], [ "Wang", "Congchao", "" ], [ "Peng", "Liqian", "" ], [ "Go", "Alec", "" ], [ "Liu", "Xiaozhong", "" ] ]
TITLE: Privacy-preserved LLM Cascade via CoT-enhanced Policy Learning ABSTRACT: Large Language Models (LLMs) have gained significant attention in on-device applications due to their remarkable performance across real-world tasks. However, on-device LLMs often suffer from suboptimal performance due to hardware limitations. A promising solution to this challenge is cascading a weaker local (on-device) LLM with a more powerful server LLM. While existing research on LLM cascade primarily optimizes the performance-cost trade-off, real-world applications impose additional requirements, such as privacy preservation, which remain largely unaddressed. In this work, we move beyond existing confidence- and logit-based LLM cascade methods and propose $\mathbf{P^{3}Defer}$, a novel Chain-of-Thought (CoT)-enhanced \textbf{p}olicy learning framework for \textbf{p}rivacy-\textbf{p}reserved \textbf{defer}ral decision-making. Our approach effectively improves cascade efficiency while mitigating privacy risks. Extensive experiments on three benchmark datasets demonstrate the effectiveness and superiority of $\mathbf{P^{3}Defer}$ over existing methods.
no_new_dataset
0.947769
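Editor's note on the LLM-cascade record above: a toy confidence-thresholded cascade, shown only as the baseline decision structure the paper improves on — P3Defer replaces this fixed threshold with a learned, CoT-enhanced, privacy-aware deferral policy. The `local_llm`/`server_llm` interfaces are assumptions.

```python
# Baseline cascade: answer locally when confident, else defer to server.
def cascade_answer(query, local_llm, server_llm, threshold=0.8):
    answer, confidence = local_llm(query)   # assumed to return (text, score)
    if confidence >= threshold:
        return answer                       # cheap, private on-device path
    return server_llm(query)                # costly deferral path
```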
2410.08388
Maximus Powers
Maximus Powers, Shaina Raza, Alex Chang, Umang Mavani, Harshitha Reddy Jonala, Ansh Tiwari, Hua Wei
The GUS Framework: Benchmarking Social Bias Classification with Discriminative (Encoder-Only) and Generative (Decoder-Only) Language Models
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
The detection of social bias in text is a critical challenge, particularly due to the limitations of binary classification methods. These methods often oversimplify nuanced biases, leading to high emotional impact when content is misclassified as either "biased" or "fair." To address these shortcomings, we propose a more nuanced framework that focuses on three key linguistic components underlying social bias: Generalizations, Unfairness, and Stereotypes (the GUS framework). The GUS framework employs a semi-automated approach to create a comprehensive synthetic dataset, which is then verified by humans to maintain ethical standards. This dataset enables robust multi-label token classification. Our methodology, which combines discriminative (encoder-only) models and generative (auto-regressive large language models), identifies biased entities in text. Through extensive experiments, we demonstrate that encoder-only models are effective for this complex task, often outperforming state-of-the-art methods, both in terms of macro and entity-wise F1-score and Hamming loss. These findings can guide the choice of model for different use cases, highlighting the GUS framework's effectiveness in capturing explicit and implicit biases across diverse contexts, and offering a pathway for future research and applications in various fields.
[ { "version": "v1", "created": "Thu, 10 Oct 2024 21:51:22 GMT" }, { "version": "v2", "created": "Thu, 17 Oct 2024 20:33:28 GMT" }, { "version": "v3", "created": "Sun, 23 Feb 2025 17:08:56 GMT" }, { "version": "v4", "created": "Fri, 28 Feb 2025 18:55:08 GMT" } ]
2025-03-03T00:00:00
[ [ "Powers", "Maximus", "" ], [ "Raza", "Shaina", "" ], [ "Chang", "Alex", "" ], [ "Mavani", "Umang", "" ], [ "Jonala", "Harshitha Reddy", "" ], [ "Tiwari", "Ansh", "" ], [ "Wei", "Hua", "" ] ]
TITLE: The GUS Framework: Benchmarking Social Bias Classification with Discriminative (Encoder-Only) and Generative (Decoder-Only) Language Models ABSTRACT: The detection of social bias in text is a critical challenge, particularly due to the limitations of binary classification methods. These methods often oversimplify nuanced biases, leading to high emotional impact when content is misclassified as either "biased" or "fair." To address these shortcomings, we propose a more nuanced framework that focuses on three key linguistic components underlying social bias: Generalizations, Unfairness, and Stereotypes (the GUS framework). The GUS framework employs a semi-automated approach to create a comprehensive synthetic dataset, which is then verified by humans to maintain ethical standards. This dataset enables robust multi-label token classification. Our methodology, which combines discriminative (encoder-only) models and generative (auto-regressive large language models), identifies biased entities in text. Through extensive experiments, we demonstrate that encoder-only models are effective for this complex task, often outperforming state-of-the-art methods, both in terms of macro and entity-wise F1-score and Hamming loss. These findings can guide the choice of model for different use cases, highlighting the GUS framework's effectiveness in capturing explicit and implicit biases across diverse contexts, and offering a pathway for future research and applications in various fields.
new_dataset
0.955817
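Editor's note on the GUS record above: a hedged sketch of a multi-label token-classification head for the three GUS components (generalizations, unfairness, stereotypes) — each token gets an independent sigmoid per label, so a span can carry several bias types at once. The encoder, hidden size, and label inventory are placeholders, not the paper's configuration.

```python
# Multi-label token classification head (one sigmoid per GUS label).
import torch
import torch.nn as nn

class GUSTokenHead(nn.Module):
    def __init__(self, hidden_size=768, n_labels=3):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, n_labels)
        self.loss_fn = nn.BCEWithLogitsLoss()

    def forward(self, token_states, labels=None):
        logits = self.classifier(token_states)   # (B, T, n_labels)
        if labels is not None:                   # labels: (B, T, n_labels) in {0,1}
            return logits, self.loss_fn(logits, labels.float())
        return torch.sigmoid(logits)             # per-token label probabilities
```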
2410.09542
Jiachun Li
Jiachun Li, Pengfei Cao, Zhuoran Jin, Yubo Chen, Kang Liu, Jun Zhao
MIRAGE: Evaluating and Explaining Inductive Reasoning Process in Language Models
Accepted as ICLR 2025 conference paper (26 pages, 16 tables, 9 figures)
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inductive reasoning is an essential capability for large language models (LLMs) to achieve higher intelligence, which requires the model to generalize rules from observed facts and then apply them to unseen examples. We present MIRAGE, a synthetic dataset that addresses the limitations of previous work, specifically the lack of comprehensive evaluation and flexible test data. In it, we evaluate LLMs' capabilities in both the inductive and deductive stages, allowing for flexible variation in input distribution, task scenario, and task difficulty to analyze the factors influencing LLMs' inductive reasoning. Based on these multi-faceted evaluations, we demonstrate that LLMs are poor rule-based reasoners. In many cases, when conducting inductive reasoning, they do not rely on a correct rule to answer the unseen case. From the perspectives of different prompting methods, observation numbers, and task forms, models tend to consistently conduct correct deduction without correct inductive rules. Besides, we find that LLMs are good neighbor-based reasoners. In the inductive reasoning process, the model tends to focus on observed facts that are close to the current test example in feature space. By leveraging these similar examples, the model maintains strong inductive capabilities within a localized region, significantly improving its deductive performance.
[ { "version": "v1", "created": "Sat, 12 Oct 2024 14:12:36 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 08:01:32 GMT" } ]
2025-03-03T00:00:00
[ [ "Li", "Jiachun", "" ], [ "Cao", "Pengfei", "" ], [ "Jin", "Zhuoran", "" ], [ "Chen", "Yubo", "" ], [ "Liu", "Kang", "" ], [ "Zhao", "Jun", "" ] ]
TITLE: MIRAGE: Evaluating and Explaining Inductive Reasoning Process in Language Models ABSTRACT: Inductive reasoning is an essential capability for large language models (LLMs) to achieve higher intelligence, which requires the model to generalize rules from observed facts and then apply them to unseen examples. We present MIRAGE, a synthetic dataset that addresses the limitations of previous work, specifically the lack of comprehensive evaluation and flexible test data. In it, we evaluate LLMs' capabilities in both the inductive and deductive stages, allowing for flexible variation in input distribution, task scenario, and task difficulty to analyze the factors influencing LLMs' inductive reasoning. Based on these multi-faceted evaluations, we demonstrate that LLMs are poor rule-based reasoners. In many cases, when conducting inductive reasoning, they do not rely on a correct rule to answer the unseen case. From the perspectives of different prompting methods, observation numbers, and task forms, models tend to consistently conduct correct deduction without correct inductive rules. Besides, we find that LLMs are good neighbor-based reasoners. In the inductive reasoning process, the model tends to focus on observed facts that are close to the current test example in feature space. By leveraging these similar examples, the model maintains strong inductive capabilities within a localized region, significantly improving its deductive performance.
new_dataset
0.956675
2410.09570
Dingyi Zhuang
Dingyi Zhuang, Chonghe Jiang, Yunhan Zheng, Shenhao Wang, Jinhua Zhao
GETS: Ensemble Temperature Scaling for Calibration in Graph Neural Networks
ICLR 2025 Spotlight
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Graph Neural Networks deliver strong classification results but often suffer from poor calibration performance, leading to overconfidence or underconfidence. This is particularly problematic in high stakes applications where accurate uncertainty estimates are essential. Existing post hoc methods, such as temperature scaling, fail to effectively utilize graph structures, while current GNN calibration methods often overlook the potential of leveraging diverse input information and model ensembles jointly. In the paper, we propose Graph Ensemble Temperature Scaling (GETS), a novel calibration framework that combines input and model ensemble strategies within a Graph Mixture of Experts architecture. GETS outperforms SOTA calibration techniques, reducing expected calibration error by 25 percent across 10 GNN benchmark datasets. Additionally, GETS is computationally efficient, scalable, and capable of selecting effective input combinations for improved calibration performance. The implementation is available via Github.
[ { "version": "v1", "created": "Sat, 12 Oct 2024 15:34:41 GMT" }, { "version": "v2", "created": "Thu, 27 Feb 2025 23:10:46 GMT" } ]
2025-03-03T00:00:00
[ [ "Zhuang", "Dingyi", "" ], [ "Jiang", "Chonghe", "" ], [ "Zheng", "Yunhan", "" ], [ "Wang", "Shenhao", "" ], [ "Zhao", "Jinhua", "" ] ]
TITLE: GETS: Ensemble Temperature Scaling for Calibration in Graph Neural Networks ABSTRACT: Graph Neural Networks deliver strong classification results but often suffer from poor calibration performance, leading to overconfidence or underconfidence. This is particularly problematic in high stakes applications where accurate uncertainty estimates are essential. Existing post hoc methods, such as temperature scaling, fail to effectively utilize graph structures, while current GNN calibration methods often overlook the potential of leveraging diverse input information and model ensembles jointly. In the paper, we propose Graph Ensemble Temperature Scaling (GETS), a novel calibration framework that combines input and model ensemble strategies within a Graph Mixture of Experts architecture. GETS outperforms SOTA calibration techniques, reducing expected calibration error by 25 percent across 10 GNN benchmark datasets. Additionally, GETS is computationally efficient, scalable, and capable of selecting effective input combinations for improved calibration performance. The implementation is available via Github.
no_new_dataset
0.947769
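Editor's note on the GETS record above: the baselines it builds on — plain temperature scaling and expected calibration error (ECE) — are compact enough to show; the ensemble/Mixture-of-Experts machinery of the paper is not reproduced here.

```python
# Temperature scaling (grid-search fit on held-out logits) and ECE.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ece(probs, labels, n_bins=10):
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)
    bins = np.linspace(0, 1, n_bins + 1)
    total = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        m = (conf > lo) & (conf <= hi)
        if m.any():  # |accuracy - confidence| weighted by bin mass
            total += m.mean() * abs((pred[m] == labels[m]).mean() - conf[m].mean())
    return total

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    nll = [-np.log(softmax(logits, T)[np.arange(len(labels)), labels] + 1e-12).mean()
           for T in grid]
    return grid[int(np.argmin(nll))]
```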
2410.09870
Yein Park
Yein Park, Chanwoong Yoon, Jungwoo Park, Donghyeon Lee, Minbyul Jeong, Jaewoo Kang
ChroKnowledge: Unveiling Chronological Knowledge of Language Models in Multiple Domains
ICLR 2025, 40 pages, 17 figures
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) have brought significant changes to many aspects of our lives. However, assessing and ensuring their chronological knowledge remains challenging. Existing approaches fall short in addressing the temporal adaptability of knowledge, often relying on a fixed time-point view. To overcome this, we introduce ChroKnowBench, a benchmark dataset designed to evaluate chronologically accumulated knowledge across three key aspects: multiple domains, time dependency, and temporal state. Our benchmark distinguishes between knowledge that evolves (e.g., personal history, scientific discoveries, amended laws) and knowledge that remains constant (e.g., mathematical truths, commonsense facts). Building on this benchmark, we present ChroKnowledge (Chronological Categorization of Knowledge), a novel sampling-based framework for evaluating LLMs' non-parametric chronological knowledge. Our evaluation led to the following observations: (1) The ability to elicit temporal knowledge varies depending on the data format the model was trained on. (2) LLMs partially recall knowledge or show a cut-off at temporal boundaries rather than recalling all aspects of knowledge correctly. Thus, we apply our ChroKnowPrompt, an in-depth prompting method to elicit chronological knowledge by traversing step-by-step through the surrounding time spans. We observe that it successfully recalls objects across both open-source and proprietary LLMs, demonstrating versatility, though it faces challenges with dynamic datasets and unstructured formats.
[ { "version": "v1", "created": "Sun, 13 Oct 2024 15:08:49 GMT" }, { "version": "v2", "created": "Wed, 27 Nov 2024 11:11:00 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 08:02:31 GMT" } ]
2025-03-03T00:00:00
[ [ "Park", "Yein", "" ], [ "Yoon", "Chanwoong", "" ], [ "Park", "Jungwoo", "" ], [ "Lee", "Donghyeon", "" ], [ "Jeong", "Minbyul", "" ], [ "Kang", "Jaewoo", "" ] ]
TITLE: ChroKnowledge: Unveiling Chronological Knowledge of Language Models in Multiple Domains ABSTRACT: Large language models (LLMs) have brought significant changes to many aspects of our lives. However, assessing and ensuring their chronological knowledge remains challenging. Existing approaches fall short in addressing the temporal adaptability of knowledge, often relying on a fixed time-point view. To overcome this, we introduce ChroKnowBench, a benchmark dataset designed to evaluate chronologically accumulated knowledge across three key aspects: multiple domains, time dependency, and temporal state. Our benchmark distinguishes between knowledge that evolves (e.g., personal history, scientific discoveries, amended laws) and knowledge that remains constant (e.g., mathematical truths, commonsense facts). Building on this benchmark, we present ChroKnowledge (Chronological Categorization of Knowledge), a novel sampling-based framework for evaluating LLMs' non-parametric chronological knowledge. Our evaluation led to the following observations: (1) The ability to elicit temporal knowledge varies depending on the data format the model was trained on. (2) LLMs partially recall knowledge or show a cut-off at temporal boundaries rather than recalling all aspects of knowledge correctly. Thus, we apply our ChroKnowPrompt, an in-depth prompting method to elicit chronological knowledge by traversing step-by-step through the surrounding time spans. We observe that it successfully recalls objects across both open-source and proprietary LLMs, demonstrating versatility, though it faces challenges with dynamic datasets and unstructured formats.
new_dataset
0.958924
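Editor's note on the ChroKnowledge record above: a toy rendering of the step-by-step traversal over surrounding time spans that the abstract attributes to ChroKnowPrompt; the actual prompt template and span logic come from the paper and are not reproduced here.

```python
# Build one probe per neighboring year around a target year (toy template).
def chrono_prompts(subject, relation, year, span=2):
    return [f"In {y}, {subject} {relation} ___ ?"
            for y in range(year - span, year + span + 1)]

for p in chrono_prompts("the national team", "was coached by", 2019):
    print(p)
```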
2410.10105
Qian Yu
Qian Yu, Peng-Tao Jiang, Hao Zhang, Jinwei Chen, Bo Li, Lihe Zhang, Huchuan Lu
High-Precision Dichotomous Image Segmentation via Probing Diffusion Capacity
Published as a conference paper at ICLR 2025
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the realm of high-resolution (HR), fine-grained image segmentation, the primary challenge is balancing broad contextual awareness with the precision required for detailed object delineation, capturing intricate details and the finest edges of objects. Diffusion models, trained on vast datasets comprising billions of image-text pairs, such as SD V2.1, have revolutionized text-to-image synthesis by delivering exceptional quality, fine detail resolution, and strong contextual awareness, making them an attractive solution for high-resolution image segmentation. To this end, we propose DiffDIS, a diffusion-driven segmentation model that taps into the potential of the pre-trained U-Net within diffusion models, specifically designed for high-resolution, fine-grained object segmentation. By leveraging the robust generalization capabilities and rich, versatile image representation prior of the SD models, coupled with a task-specific stable one-step denoising approach, we significantly reduce the inference time while preserving high-fidelity, detailed generation. Additionally, we introduce an auxiliary edge generation task to not only enhance the preservation of fine details of the object boundaries, but reconcile the probabilistic nature of diffusion with the deterministic demands of segmentation. With these refined strategies in place, DiffDIS serves as a rapid object mask generation model, specifically optimized for generating detailed binary maps at high resolutions, while demonstrating impressive accuracy and swift processing. Experiments on the DIS5K dataset demonstrate the superiority of DiffDIS, achieving state-of-the-art results through a streamlined inference process. The source code will be publicly available at https://github.com/qianyu-dlut/DiffDIS.
[ { "version": "v1", "created": "Mon, 14 Oct 2024 02:49:23 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 09:44:00 GMT" } ]
2025-03-03T00:00:00
[ [ "Yu", "Qian", "" ], [ "Jiang", "Peng-Tao", "" ], [ "Zhang", "Hao", "" ], [ "Chen", "Jinwei", "" ], [ "Li", "Bo", "" ], [ "Zhang", "Lihe", "" ], [ "Lu", "Huchuan", "" ] ]
TITLE: High-Precision Dichotomous Image Segmentation via Probing Diffusion Capacity ABSTRACT: In the realm of high-resolution (HR), fine-grained image segmentation, the primary challenge is balancing broad contextual awareness with the precision required for detailed object delineation, capturing intricate details and the finest edges of objects. Diffusion models, trained on vast datasets comprising billions of image-text pairs, such as SD V2.1, have revolutionized text-to-image synthesis by delivering exceptional quality, fine detail resolution, and strong contextual awareness, making them an attractive solution for high-resolution image segmentation. To this end, we propose DiffDIS, a diffusion-driven segmentation model that taps into the potential of the pre-trained U-Net within diffusion models, specifically designed for high-resolution, fine-grained object segmentation. By leveraging the robust generalization capabilities and rich, versatile image representation prior of the SD models, coupled with a task-specific stable one-step denoising approach, we significantly reduce the inference time while preserving high-fidelity, detailed generation. Additionally, we introduce an auxiliary edge generation task to not only enhance the preservation of fine details of the object boundaries, but reconcile the probabilistic nature of diffusion with the deterministic demands of segmentation. With these refined strategies in place, DiffDIS serves as a rapid object mask generation model, specifically optimized for generating detailed binary maps at high resolutions, while demonstrating impressive accuracy and swift processing. Experiments on the DIS5K dataset demonstrate the superiority of DiffDIS, achieving state-of-the-art results through a streamlined inference process. The source code will be publicly available at https://github.com/qianyu-dlut/DiffDIS.
no_new_dataset
0.949389
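Editor's note on the DiffDIS record above: the generic mechanism behind "one-step denoising" is the standard DDPM-style x0 estimate from a single epsilon prediction, shown below; DiffDIS's task-specific stable variant differs in detail, and `alpha_bar_t` (the cumulative noise schedule) is assumed given.

```python
# One-step x0 estimate from an epsilon prediction (standard DDPM identity).
import torch

def one_step_x0(x_t, eps_pred, alpha_bar_t):
    return (x_t - torch.sqrt(1 - alpha_bar_t) * eps_pred) / torch.sqrt(alpha_bar_t)
```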
2410.11540
Yaxin Du
Yaxin Du and Rui Ye and Fengting Yuchi and Wanru Zhao and Jingjing Qu and Yanfeng Wang and Siheng Chen
Data Quality Control in Federated Instruction-tuning of Large Language Models
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated Learning (FL) enables privacy-preserving collaborative instruction tuning of large language models (LLMs) by leveraging massively distributed data. However, the decentralized nature of FL exacerbates data quality challenges, as local clients lack global visibility to filter noisy or low-quality samples before training. To resolve this issue, we propose FedDQC, a novel federated instruction tuning framework with dynamic data quality control. Our approach introduces two key innovations. First, we propose instruction-response alignment (IRA), an efficient client-side metric for quality evaluation requiring only low-cost inference. We validate that higher-IRA data corresponds to more relevant and easier-to-learn question-answer pairs. Second, mirroring the human easy-to-hard knowledge acquisition process, we design a quality-aware hierarchical FL training framework, where the LLM is progressively fine-tuned from high- to low-IRA data in a collaborative manner. The framework also supports adaptive data quality assessment at each hierarchy, enabling dynamic adjustments throughout the training process. Extensive experiments on synthetic and real-world datasets show that our method significantly improves LLM performance on mixed-quality data in FL.
[ { "version": "v1", "created": "Tue, 15 Oct 2024 12:14:57 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 14:35:58 GMT" } ]
2025-03-03T00:00:00
[ [ "Du", "Yaxin", "" ], [ "Ye", "Rui", "" ], [ "Yuchi", "Fengting", "" ], [ "Zhao", "Wanru", "" ], [ "Qu", "Jingjing", "" ], [ "Wang", "Yanfeng", "" ], [ "Chen", "Siheng", "" ] ]
TITLE: Data Quality Control in Federated Instruction-tuning of Large Language Models ABSTRACT: Federated Learning (FL) enables privacy-preserving collaborative instruction tuning of large language models (LLMs) by leveraging massively distributed data. However, the decentralized nature of FL exacerbates data quality challenges, as local clients lack global visibility to filter noisy or low-quality samples before training. To resolve this issue, we propose FedDQC, a novel federated instruction tuning framework with dynamic data quality control. Our approach introduces two key innovations. First, we propose instruction-response alignment (IRA), an efficient client-side metric for quality evaluation requiring only low-cost inference. We validate that higher-IRA data corresponds to more relevant and easier-to-learn question-answer pairs. Second, mirroring the human easy-to-hard knowledge acquisition process, we design a quality-aware hierarchical FL training framework, where the LLM is progressively fine-tuned from high- to low-IRA data in a collaborative manner. The framework also supports adaptive data quality assessment at each hierarchy, enabling dynamic adjustments throughout the training process. Extensive experiments on synthetic and real-world datasets show that our method significantly improves LLM performance on mixed-quality data in FL.
no_new_dataset
0.945901
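Editor's note on the FedDQC record above: a hedged sketch of an instruction-response alignment (IRA) style score — the average log-likelihood of response tokens conditioned on the instruction, computable with a single forward pass, matching the abstract's "low-cost inference" claim. It assumes a Hugging-Face-style causal LM that returns `.logits`; the paper's exact IRA definition may differ.

```python
# IRA-style alignment score: mean log-prob of response given instruction.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ira_score(model, instr_ids, resp_ids):
    ids = torch.cat([instr_ids, resp_ids], dim=-1).unsqueeze(0)
    logits = model(ids).logits[0]                 # (L, V); logits[i] predicts ids[i+1]
    start = instr_ids.numel() - 1                 # first position predicting a response token
    logp = F.log_softmax(logits[start:start + resp_ids.numel()], dim=-1)
    return logp.gather(1, resp_ids.unsqueeze(1)).mean().item()
```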
2410.12207
Xianren Zhang
Xianren Zhang, Xianfeng Tang, Hui Liu, Zongyu Wu, Qi He, Dongwon Lee and Suhang Wang
Divide-Verify-Refine: Can LLMs Self-Align with Complex Instructions?
Under review
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent studies show LLMs struggle with complex instructions involving multiple constraints (e.g., length, format, sentiment). Existing works address this issue by fine-tuning, which heavily relies on fine-tuning data quality and is computationally expensive. An alternative is leveraging LLMs' self-correction to refine responses for better constraint adherence. However, this is limited by the feedback quality, as LLMs cannot generate reliable feedback or detect errors. Moreover, its effectiveness relies on few-shot examples illustrating response modifications. As constraints in complex instructions are diverse, manually crafting such examples for each constraint type can be labor-intensive and sub-optimal. To address these two challenges, we propose the Divide-Verify-Refine (DVR) framework with three steps: (1) Divide complex instructions into single constraints and prepare appropriate tools; (2) Verify responses using tools that provide rigorous check and textual guidance (e.g., Python toolkit for format checks or pre-trained classifiers for content analysis); (3) Refine: To maximize refinement effectiveness, we propose dynamic few-shot prompting, where a refinement repository collects successful refinements, and these examples are selectively retrieved for future refinements. Recognizing the lack of complexity in existing datasets, we create a new dataset of complex instructions. DVR doubles Llama3.1-8B's constraint adherence and triples Mistral-7B's performance.
[ { "version": "v1", "created": "Wed, 16 Oct 2024 04:01:55 GMT" }, { "version": "v2", "created": "Thu, 27 Feb 2025 22:16:18 GMT" } ]
2025-03-03T00:00:00
[ [ "Zhang", "Xianren", "" ], [ "Tang", "Xianfeng", "" ], [ "Liu", "Hui", "" ], [ "Wu", "Zongyu", "" ], [ "He", "Qi", "" ], [ "Lee", "Dongwon", "" ], [ "Wang", "Suhang", "" ] ]
TITLE: Divide-Verify-Refine: Can LLMs Self-Align with Complex Instructions? ABSTRACT: Recent studies show LLMs struggle with complex instructions involving multiple constraints (e.g., length, format, sentiment). Existing works address this issue by fine-tuning, which heavily relies on fine-tuning data quality and is computationally expensive. An alternative is leveraging LLMs' self-correction to refine responses for better constraint adherence. However, this is limited by the feedback quality, as LLMs cannot generate reliable feedback or detect errors. Moreover, its effectiveness relies on few-shot examples illustrating response modifications. As constraints in complex instructions are diverse, manually crafting such examples for each constraint type can be labor-intensive and sub-optimal. To address these two challenges, we propose the Divide-Verify-Refine (DVR) framework with three steps: (1) Divide complex instructions into single constraints and prepare appropriate tools; (2) Verify responses using tools that provide rigorous check and textual guidance (e.g., Python toolkit for format checks or pre-trained classifiers for content analysis); (3) Refine: To maximize refinement effectiveness, we propose dynamic few-shot prompting, where a refinement repository collects successful refinements, and these examples are selectively retrieved for future refinements. Recognizing the lack of complexity in existing datasets, we create a new dataset of complex instructions. DVR doubles Llama3.1-8B's constraint adherence and triples Mistral-7B's performance.
new_dataset
0.942082
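Editor's note on the DVR record above: example verify-step checkers in the spirit of the framework's tool-based verification — rigorous programmatic checks that return both a verdict and textual guidance for the refine step. The specific constraints (word count, bullet count) are made-up examples, not the paper's toolkit.

```python
# Toy constraint checkers: (ok, guidance) pairs for the refine step.
def check_length(text, max_words):
    n = len(text.split())
    ok = n <= max_words
    return ok, "OK" if ok else f"Response has {n} words; shorten to <= {max_words}."

def check_bullets(text, n_bullets):
    n = sum(1 for line in text.splitlines() if line.strip().startswith("-"))
    ok = n == n_bullets
    return ok, "OK" if ok else f"Found {n} bullet points; produce exactly {n_bullets}."

print(check_length("one two three four", max_words=3))
print(check_bullets("- a\n- b", n_bullets=3))
```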
2410.12337
Linfeng Xu
Linfeng Xu, Fanman Meng, Qingbo Wu, Lili Pan, Heqian Qiu, Lanxiao Wang, Kailong Chen, Kanglei Geng, Yilei Qian, Haojie Wang, Shuchang Zhou, Shimou Ling, Zejia Liu, Nanlin Chen, Yingjie Xu, Shaoxu Cheng, Bowen Tan, Ziyong Xu, Hongliang Li
ARIC: An Activity Recognition Dataset in Classroom Surveillance Images
arXiv admin note: text overlap with arXiv:2409.03354. Updated the description for ARIC supplement
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The application of activity recognition in the ``AI + Education" field is gaining increasing attention. However, current work mainly focuses on the recognition of activities in manually captured videos and a limited number of activity types, with little attention given to recognizing activities in surveillance images from real classrooms. Activity recognition in classroom surveillance images faces multiple challenges, such as class imbalance and high activity similarity. To address this gap, we constructed a novel multimodal dataset focused on classroom surveillance image activity recognition called ARIC (Activity Recognition In Classroom). The ARIC dataset has the advantages of multiple perspectives, 32 activity categories, three modalities, and real-world classroom scenarios. In addition to the general activity recognition tasks, we also provide settings for continual learning and few-shot continual learning. We hope that the ARIC dataset can act as a facilitator for future analysis and research on open teaching scenarios. You can download preliminary data from https://ivipclab.github.io/publication_ARIC/ARIC.
[ { "version": "v1", "created": "Wed, 16 Oct 2024 07:59:07 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 12:45:25 GMT" } ]
2025-03-03T00:00:00
[ [ "Xu", "Linfeng", "" ], [ "Meng", "Fanman", "" ], [ "Wu", "Qingbo", "" ], [ "Pan", "Lili", "" ], [ "Qiu", "Heqian", "" ], [ "Wang", "Lanxiao", "" ], [ "Chen", "Kailong", "" ], [ "Geng", "Kanglei", "" ], [ "Qian", "Yilei", "" ], [ "Wang", "Haojie", "" ], [ "Zhou", "Shuchang", "" ], [ "Ling", "Shimou", "" ], [ "Liu", "Zejia", "" ], [ "Chen", "Nanlin", "" ], [ "Xu", "Yingjie", "" ], [ "Cheng", "Shaoxu", "" ], [ "Tan", "Bowen", "" ], [ "Xu", "Ziyong", "" ], [ "Li", "Hongliang", "" ] ]
TITLE: ARIC: An Activity Recognition Dataset in Classroom Surveillance Images ABSTRACT: The application of activity recognition in the ``AI + Education" field is gaining increasing attention. However, current work mainly focuses on the recognition of activities in manually captured videos and a limited number of activity types, with little attention given to recognizing activities in surveillance images from real classrooms. Activity recognition in classroom surveillance images faces multiple challenges, such as class imbalance and high activity similarity. To address this gap, we constructed a novel multimodal dataset focused on classroom surveillance image activity recognition called ARIC (Activity Recognition In Classroom). The ARIC dataset has the advantages of multiple perspectives, 32 activity categories, three modalities, and real-world classroom scenarios. In addition to the general activity recognition tasks, we also provide settings for continual learning and few-shot continual learning. We hope that the ARIC dataset can act as a facilitator for future analysis and research on open teaching scenarios. You can download preliminary data from https://ivipclab.github.io/publication_ARIC/ARIC.
new_dataset
0.962356
2410.13322
Giovanni Braglia
Giovanni Braglia, Davide Tebaldi, Andr\'e Eugenio Lazzaretti and Luigi Biagiotti
Arc-Length-Based Warping for Robot Skill Synthesis from Multiple Demonstrations
8 pages, 7 figures
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
In robotics, Learning from Demonstration (LfD) aims to transfer skills to robots by using multiple demonstrations of the same task. These demonstrations are recorded and processed to extract a consistent skill representation. This process typically requires temporal alignment through techniques such as Dynamic Time Warping (DTW). In this paper, we consider a novel algorithm, named Spatial Sampling (SS), specifically designed for robot trajectories, that enables time-independent alignment of the trajectories by providing an arc-length parametrization of the signals. This approach eliminates the need for temporal alignment, enhancing the accuracy and robustness of skill representation, especially when recorded movements are subject to intermittent motions or extremely variable speeds, a common characteristic of operations based on kinesthetic teaching, where the operator may encounter difficulties in guiding the end-effector smoothly. To prove this, we built a custom, publicly available dataset of robot recordings to test real-world movements, where the user tracks the same geometric path multiple times, with motion laws that vary greatly and are subject to starting and stopping. SS demonstrates better performance than state-of-the-art algorithms in terms of (i) trajectory synchronization and (ii) the quality of the extracted skill.
[ { "version": "v1", "created": "Thu, 17 Oct 2024 08:25:44 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 15:25:53 GMT" } ]
2025-03-03T00:00:00
[ [ "Braglia", "Giovanni", "" ], [ "Tebaldi", "Davide", "" ], [ "Lazzaretti", "André Eugenio", "" ], [ "Biagiotti", "Luigi", "" ] ]
TITLE: Arc-Length-Based Warping for Robot Skill Synthesis from Multiple Demonstrations ABSTRACT: In robotics, Learning from Demonstration (LfD) aims to transfer skills to robots by using multiple demonstrations of the same task. These demonstrations are recorded and processed to extract a consistent skill representation. This process typically requires temporal alignment through techniques such as Dynamic Time Warping (DTW). In this paper, we consider a novel algorithm, named Spatial Sampling (SS), specifically designed for robot trajectories, that enables time-independent alignment of the trajectories by providing an arc-length parametrization of the signals. This approach eliminates the need for temporal alignment, enhancing the accuracy and robustness of skill representation, especially when recorded movements are subject to intermittent motions or extremely variable speeds, a common characteristic of operations based on kinesthetic teaching, where the operator may encounter difficulties in guiding the end-effector smoothly. To prove this, we built a custom, publicly available dataset of robot recordings to test real-world movements, where the user tracks the same geometric path multiple times, with motion laws that vary greatly and are subject to starting and stopping. SS demonstrates better performance than state-of-the-art algorithms in terms of (i) trajectory synchronization and (ii) the quality of the extracted skill.
new_dataset
0.955194
2410.14668
Jie He
Xiongtao Zhou, Jie He, Lanyu Chen, Jingyu Li, Haojing Chen, V\'ictor Guti\'errez-Basulto, Jeff Z. Pan, Hanjie Chen
MiCEval: Unveiling Multimodal Chain of Thought's Quality via Image Description and Reasoning Steps
NAACL 2025
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Multimodal Chain of Thought (MCoT) is a popular prompting strategy for improving the performance of multimodal large language models (MLLMs) across a range of complex reasoning tasks. Despite its popularity, there is a notable absence of automated methods for evaluating the quality of reasoning steps in MCoT. To address this gap, we propose Multimodal Chain-of-Thought Evaluation (MiCEval), a framework designed to assess the correctness of reasoning chains by evaluating the quality of both the description and each reasoning step. The evaluation of the description component focuses on the accuracy of the image descriptions, while the evaluation of the reasoning steps assesses the quality of each step as it is conditionally generated based on the preceding steps. MiCEval is built upon a fine-grained dataset with annotations that rate each step according to correctness, relevance, and informativeness. Extensive experiments on four state-of-the-art MLLMs show that step-wise evaluations using MiCEval align more closely with human judgments compared to existing methods based on cosine similarity or fine-tuning approaches. MiCEval datasets and code can be found at https://github.com/alenai97/MiCEval.
[ { "version": "v1", "created": "Fri, 18 Oct 2024 17:57:40 GMT" }, { "version": "v2", "created": "Mon, 21 Oct 2024 21:42:46 GMT" }, { "version": "v3", "created": "Sat, 16 Nov 2024 18:47:18 GMT" }, { "version": "v4", "created": "Fri, 28 Feb 2025 12:57:03 GMT" } ]
2025-03-03T00:00:00
[ [ "Zhou", "Xiongtao", "" ], [ "He", "Jie", "" ], [ "Chen", "Lanyu", "" ], [ "Li", "Jingyu", "" ], [ "Chen", "Haojing", "" ], [ "Gutiérrez-Basulto", "Víctor", "" ], [ "Pan", "Jeff Z.", "" ], [ "Chen", "Hanjie", "" ] ]
TITLE: MiCEval: Unveiling Multimodal Chain of Thought's Quality via Image Description and Reasoning Steps ABSTRACT: Multimodal Chain of Thought (MCoT) is a popular prompting strategy for improving the performance of multimodal large language models (MLLMs) across a range of complex reasoning tasks. Despite its popularity, there is a notable absence of automated methods for evaluating the quality of reasoning steps in MCoT. To address this gap, we propose Multimodal Chain-of-Thought Evaluation (MiCEval), a framework designed to assess the correctness of reasoning chains by evaluating the quality of both the description and each reasoning step. The evaluation of the description component focuses on the accuracy of the image descriptions, while the evaluation of the reasoning steps assesses the quality of each step as it is conditionally generated based on the preceding steps. MiCEval is built upon a fine-grained dataset with annotations that rate each step according to correctness, relevance, and informativeness. Extensive experiments on four state-of-the-art MLLMs show that step-wise evaluations using MiCEval align more closely with human judgments compared to existing methods based on cosine similarity or fine-tuning approaches. MiCEval datasets and code can be found at https://github.com/alenai97/MiCEval.
new_dataset
0.958654
2410.18148
Shaowu Pan
Nithin Somasekharan, Shaowu Pan
Beyond the Kolmogorov Barrier: A Learnable Weighted Hybrid Autoencoder for Model Order Reduction
31 pages
null
null
null
cs.LG cs.AI physics.comp-ph stat.ML
http://creativecommons.org/publicdomain/zero/1.0/
Representation learning for high-dimensional, complex physical systems aims to identify a low-dimensional intrinsic latent space, which is crucial for reduced-order modeling and modal analysis. To overcome the well-known Kolmogorov barrier, deep autoencoders (AEs) have been introduced in recent years, but they often suffer from poor convergence behavior as the rank of the latent space increases. To address this issue, we propose the learnable weighted hybrid autoencoder, a hybrid approach that combines the strengths of singular value decomposition (SVD) with deep autoencoders through a learnable weighted framework. We find that the introduction of learnable weighting parameters is essential -- without them, the resulting model would either collapse into a standard POD or fail to exhibit the desired convergence behavior. Interestingly, we empirically find that our trained model has a sharpness thousands of times smaller than that of other models. Our experiments on classical chaotic PDE systems, including the 1D Kuramoto-Sivashinsky and forced isotropic turbulence datasets, demonstrate that our approach significantly improves generalization performance compared to several competing methods. Additionally, when combined with time series modeling techniques (e.g., Koopman operator, LSTM), the proposed technique offers significant improvements for surrogate modeling of high-dimensional multi-scale PDE systems.
[ { "version": "v1", "created": "Wed, 23 Oct 2024 00:04:26 GMT" }, { "version": "v2", "created": "Sat, 22 Feb 2025 00:06:01 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 17:12:31 GMT" } ]
2025-03-03T00:00:00
[ [ "Somasekharan", "Nithin", "" ], [ "Pan", "Shaowu", "" ] ]
TITLE: Beyond the Kolmogorov Barrier: A Learnable Weighted Hybrid Autoencoder for Model Order Reduction ABSTRACT: Representation learning for high-dimensional, complex physical systems aims to identify a low-dimensional intrinsic latent space, which is crucial for reduced-order modeling and modal analysis. To overcome the well-known Kolmogorov barrier, deep autoencoders (AEs) have been introduced in recent years, but they often suffer from poor convergence behavior as the rank of the latent space increases. To address this issue, we propose the learnable weighted hybrid autoencoder, a hybrid approach that combines the strengths of singular value decomposition (SVD) with deep autoencoders through a learnable weighted framework. We find that the introduction of learnable weighting parameters is essential -- without them, the resulting model would either collapse into a standard POD or fail to exhibit the desired convergence behavior. Interestingly, we empirically find that our trained model has a sharpness thousands of times smaller than that of other models. Our experiments on classical chaotic PDE systems, including the 1D Kuramoto-Sivashinsky and forced isotropic turbulence datasets, demonstrate that our approach significantly improves generalization performance compared to several competing methods. Additionally, when combined with time series modeling techniques (e.g., Koopman operator, LSTM), the proposed technique offers significant improvements for surrogate modeling of high-dimensional multi-scale PDE systems.
no_new_dataset
0.949012
2410.18456
Bingyu Yang
Bingyu Yang, Qingyao Tian, Huai Liao, Xinyan Huang, Jinlin Wu, Jingdi Hu, Hongbin Liu
Progressive Curriculum Learning with Scale-Enhanced U-Net for Continuous Airway Segmentation
null
null
null
null
eess.IV cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continuous and accurate segmentation of airways in chest CT images is essential for preoperative planning and real-time bronchoscopy navigation. Despite advances in deep learning for medical image segmentation, maintaining airway continuity remains a challenge, particularly due to intra-class imbalance between large and small branches and blurred CT scan details. To address these challenges, we propose a progressive curriculum learning pipeline and a Scale-Enhanced U-Net (SE-UNet) to enhance segmentation continuity. Specifically, our progressive curriculum learning pipeline consists of three stages: extracting main airways, identifying small airways, and repairing discontinuities. The cropping sampling strategy in each stage reduces feature interference between airways of different scales, effectively addressing the challenge of intra-class imbalance. In the third training stage, we present an Adaptive Topology-Responsive Loss (ATRL) to guide the network to focus on airway continuity. The progressive training pipeline shares the same SE-UNet, integrating multi-scale inputs and Detail Information Enhancers (DIEs) to enhance information flow and effectively capture the intricate details of small airways. Additionally, we propose a robust airway tree parsing method and hierarchical evaluation metrics to provide more clinically relevant and precise analysis. Experiments on both in-house and public datasets demonstrate that our method outperforms existing approaches, significantly improving the accuracy of small airways and the completeness of the airway tree. The code will be released upon publication.
[ { "version": "v1", "created": "Thu, 24 Oct 2024 06:10:09 GMT" }, { "version": "v2", "created": "Sun, 10 Nov 2024 12:13:17 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 15:04:56 GMT" } ]
2025-03-03T00:00:00
[ [ "Yang", "Bingyu", "" ], [ "Tian", "Qingyao", "" ], [ "Liao", "Huai", "" ], [ "Huang", "Xinyan", "" ], [ "Wu", "Jinlin", "" ], [ "Hu", "Jingdi", "" ], [ "Liu", "Hongbin", "" ] ]
TITLE: Progressive Curriculum Learning with Scale-Enhanced U-Net for Continuous Airway Segmentation ABSTRACT: Continuous and accurate segmentation of airways in chest CT images is essential for preoperative planning and real-time bronchoscopy navigation. Despite advances in deep learning for medical image segmentation, maintaining airway continuity remains a challenge, particularly due to intra-class imbalance between large and small branches and blurred CT scan details. To address these challenges, we propose a progressive curriculum learning pipeline and a Scale-Enhanced U-Net (SE-UNet) to enhance segmentation continuity. Specifically, our progressive curriculum learning pipeline consists of three stages: extracting main airways, identifying small airways, and repairing discontinuities. The cropping sampling strategy in each stage reduces feature interference between airways of different scales, effectively addressing the challenge of intra-class imbalance. In the third training stage, we present an Adaptive Topology-Responsive Loss (ATRL) to guide the network to focus on airway continuity. The progressive training pipeline shares the same SE-UNet, integrating multi-scale inputs and Detail Information Enhancers (DIEs) to enhance information flow and effectively capture the intricate details of small airways. Additionally, we propose a robust airway tree parsing method and hierarchical evaluation metrics to provide more clinically relevant and precise analysis. Experiments on both in-house and public datasets demonstrate that our method outperforms existing approaches, significantly improving the accuracy of small airways and the completeness of the airway tree. The code will be released upon publication.
no_new_dataset
0.952131
2410.18514
Shen Nie
Shen Nie, Fengqi Zhu, Chao Du, Tianyu Pang, Qian Liu, Guangtao Zeng, Min Lin, Chongxuan Li
Scaling up Masked Diffusion Models on Text
null
null
null
null
cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Masked diffusion models (MDMs) have shown promise in language modeling, yet their scalability and effectiveness in core language tasks, such as text generation and language understanding, remain underexplored. This paper establishes the first scaling law for MDMs, demonstrating a scaling rate comparable to autoregressive models (ARMs) and a relatively small compute gap. Motivated by their scalability, we train a family of MDMs with up to 1.1 billion (B) parameters to systematically evaluate their performance against ARMs of comparable or larger sizes. Fully leveraging the probabilistic formulation of MDMs, we propose a simple yet effective unsupervised classifier-free guidance that effectively exploits large-scale unpaired data, boosting performance for conditional inference. In language understanding, the 1.1B MDM outperforms the 1.1B TinyLlama model trained on the same data across four of eight zero-shot benchmarks. Notably, it achieves competitive math reasoning ability with the 7B Llama-2 model on the GSM8K dataset. In text generation, MDMs with 16 times more pre-training time offer a flexible trade-off against ARMs with the accelerated sampling technique KV-Cache: MDMs match ARMs in performance while being 1.4 times faster during sampling. Moreover, MDMs address challenging tasks for ARMs by effectively handling bidirectional reasoning and adapting to temporal shifts in data. Notably, a 1.1B MDM breaks the reverse curse encountered by much larger ARMs with significantly more data and computation, such as 13B Llama-2 and 175B GPT-3. Our code is available at https://github.com/ML-GSAI/SMDM.
[ { "version": "v1", "created": "Thu, 24 Oct 2024 08:01:22 GMT" }, { "version": "v2", "created": "Fri, 20 Dec 2024 03:55:07 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 07:02:59 GMT" } ]
2025-03-03T00:00:00
[ [ "Nie", "Shen", "" ], [ "Zhu", "Fengqi", "" ], [ "Du", "Chao", "" ], [ "Pang", "Tianyu", "" ], [ "Liu", "Qian", "" ], [ "Zeng", "Guangtao", "" ], [ "Lin", "Min", "" ], [ "Li", "Chongxuan", "" ] ]
TITLE: Scaling up Masked Diffusion Models on Text ABSTRACT: Masked diffusion models (MDMs) have shown promise in language modeling, yet their scalability and effectiveness in core language tasks, such as text generation and language understanding, remain underexplored. This paper establishes the first scaling law for MDMs, demonstrating a scaling rate comparable to autoregressive models (ARMs) and a relatively small compute gap. Motivated by their scalability, we train a family of MDMs with up to 1.1 billion (B) parameters to systematically evaluate their performance against ARMs of comparable or larger sizes. Fully leveraging the probabilistic formulation of MDMs, we propose a simple yet effective unsupervised classifier-free guidance that effectively exploits large-scale unpaired data, boosting performance for conditional inference. In language understanding, the 1.1B MDM outperforms the 1.1B TinyLlama model trained on the same data across four of eight zero-shot benchmarks. Notably, it achieves competitive math reasoning ability with the 7B Llama-2 model on the GSM8K dataset. In text generation, MDMs with 16 times more pre-training time offer a flexible trade-off against ARMs with the accelerated sampling technique KV-Cache: MDMs match ARMs in performance while being 1.4 times faster during sampling. Moreover, MDMs address challenging tasks for ARMs by effectively handling bidirectional reasoning and adapting to temporal shifts in data. Notably, a 1.1B MDM breaks the reverse curse encountered by much larger ARMs with significantly more data and computation, such as 13B Llama-2 and 175B GPT-3. Our code is available at https://github.com/ML-GSAI/SMDM.
no_new_dataset
0.94256
2410.18868
Katharina Friedl
Katharina Friedl, No\'emie Jaquier, Jens Lundell, Tamim Asfour, Danica Kragic
A Riemannian Framework for Learning Reduced-order Lagrangian Dynamics
28 pages, 16 figures. Accepted for publication in ICLR'25
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By incorporating physical consistency as inductive bias, deep neural networks display increased generalization capabilities and data efficiency in learning nonlinear dynamic models. However, the complexity of these models generally increases with the system dimensionality, requiring larger datasets, more complex deep networks, and significant computational effort. We propose a novel geometric network architecture to learn physically-consistent reduced-order dynamic parameters that accurately describe the original high-dimensional system behavior. This is achieved by building on recent advances in model-order reduction and by adopting a Riemannian perspective to jointly learn a non-linear structure-preserving latent space and the associated low-dimensional dynamics. Our approach enables accurate long-term predictions of the high-dimensional dynamics of rigid and deformable systems with increased data efficiency by inferring interpretable and physically-plausible reduced Lagrangian models.
[ { "version": "v1", "created": "Thu, 24 Oct 2024 15:53:21 GMT" }, { "version": "v2", "created": "Fri, 29 Nov 2024 17:02:31 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 16:12:10 GMT" } ]
2025-03-03T00:00:00
[ [ "Friedl", "Katharina", "" ], [ "Jaquier", "Noémie", "" ], [ "Lundell", "Jens", "" ], [ "Asfour", "Tamim", "" ], [ "Kragic", "Danica", "" ] ]
TITLE: A Riemannian Framework for Learning Reduced-order Lagrangian Dynamics ABSTRACT: By incorporating physical consistency as inductive bias, deep neural networks display increased generalization capabilities and data efficiency in learning nonlinear dynamic models. However, the complexity of these models generally increases with the system dimensionality, requiring larger datasets, more complex deep networks, and significant computational effort. We propose a novel geometric network architecture to learn physically-consistent reduced-order dynamic parameters that accurately describe the original high-dimensional system behavior. This is achieved by building on recent advances in model-order reduction and by adopting a Riemannian perspective to jointly learn a non-linear structure-preserving latent space and the associated low-dimensional dynamics. Our approach enables accurate long-term predictions of the high-dimensional dynamics of rigid and deformable systems with increased data efficiency by inferring interpretable and physically-plausible reduced Lagrangian models.
no_new_dataset
0.950411
2410.18967
Zhe Gan
Zhangheng Li, Keen You, Haotian Zhang, Di Feng, Harsh Agrawal, Xiujun Li, Mohana Prasad Sathya Moorthy, Jeff Nichols, Yinfei Yang, Zhe Gan
Ferret-UI 2: Mastering Universal User Interface Understanding Across Platforms
Accepted to ICLR 2025
null
null
null
cs.CV cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Building a generalist model for user interface (UI) understanding is challenging due to various foundational issues, such as platform diversity, resolution variation, and data limitation. In this paper, we introduce Ferret-UI 2, a multimodal large language model (MLLM) designed for universal UI understanding across a wide range of platforms, including iPhone, Android, iPad, Webpage, and AppleTV. Building on the foundation of Ferret-UI, Ferret-UI 2 introduces three key innovations: support for multiple platform types, high-resolution perception through adaptive scaling, and advanced task training data generation powered by GPT-4o with set-of-mark visual prompting. These advancements enable Ferret-UI 2 to perform complex, user-centered interactions, making it highly versatile and adaptable for the expanding diversity of platform ecosystems. Extensive empirical experiments on referring, grounding, user-centric advanced tasks (comprising 9 subtasks $\times$ 5 platforms), GUIDE next-action prediction dataset, and GUI-World multi-platform benchmark demonstrate that Ferret-UI 2 significantly outperforms Ferret-UI, and also shows strong cross-platform transfer capabilities.
[ { "version": "v1", "created": "Thu, 24 Oct 2024 17:58:31 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 00:29:14 GMT" } ]
2025-03-03T00:00:00
[ [ "Li", "Zhangheng", "" ], [ "You", "Keen", "" ], [ "Zhang", "Haotian", "" ], [ "Feng", "Di", "" ], [ "Agrawal", "Harsh", "" ], [ "Li", "Xiujun", "" ], [ "Moorthy", "Mohana Prasad Sathya", "" ], [ "Nichols", "Jeff", "" ], [ "Yang", "Yinfei", "" ], [ "Gan", "Zhe", "" ] ]
TITLE: Ferret-UI 2: Mastering Universal User Interface Understanding Across Platforms ABSTRACT: Building a generalist model for user interface (UI) understanding is challenging due to various foundational issues, such as platform diversity, resolution variation, and data limitation. In this paper, we introduce Ferret-UI 2, a multimodal large language model (MLLM) designed for universal UI understanding across a wide range of platforms, including iPhone, Android, iPad, Webpage, and AppleTV. Building on the foundation of Ferret-UI, Ferret-UI 2 introduces three key innovations: support for multiple platform types, high-resolution perception through adaptive scaling, and advanced task training data generation powered by GPT-4o with set-of-mark visual prompting. These advancements enable Ferret-UI 2 to perform complex, user-centered interactions, making it highly versatile and adaptable for the expanding diversity of platform ecosystems. Extensive empirical experiments on referring, grounding, user-centric advanced tasks (comprising 9 subtasks $\times$ 5 platforms), GUIDE next-action prediction dataset, and GUI-World multi-platform benchmark demonstrate that Ferret-UI 2 significantly outperforms Ferret-UI, and also shows strong cross-platform transfer capabilities.
new_dataset
0.965544
2411.00418
Chenghua Huang
Chenghua Huang, Zhizhen Fan, Lu Wang, Fangkai Yang, Pu Zhao, Zeqi Lin, Qingwei Lin, Dongmei Zhang, Saravan Rajmohan, Qi Zhang
Self-Evolved Reward Learning for LLMs
23 pages,6 figures,Accepted to ICLR 2025
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement Learning from Human Feedback (RLHF) is a crucial technique for aligning language models with human preferences, playing a pivotal role in the success of conversational models like GPT-4, ChatGPT, and Llama 2. A core challenge in employing RLHF lies in training a reliable reward model (RM), which relies on high-quality labels typically provided by human experts or advanced AI systems. These methods can be costly and may introduce biases that affect the language model's responses. As language models improve, human input may become less effective in further enhancing their performance. In this paper, we propose Self-Evolved Reward Learning (SER), a novel approach where the RM generates additional training data to iteratively improve itself. We conducted extensive experiments on multiple datasets such as HH-RLHF and UltraFeedback, using models like Mistral and Llama 3, and compared SER against various baselines. Our results demonstrate that even with limited human-annotated data, learning from self-feedback can robustly enhance RM performance, thereby boosting the capabilities of large language models (LLMs).
[ { "version": "v1", "created": "Fri, 1 Nov 2024 07:29:03 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 03:37:09 GMT" } ]
2025-03-03T00:00:00
[ [ "Huang", "Chenghua", "" ], [ "Fan", "Zhizhen", "" ], [ "Wang", "Lu", "" ], [ "Yang", "Fangkai", "" ], [ "Zhao", "Pu", "" ], [ "Lin", "Zeqi", "" ], [ "Lin", "Qingwei", "" ], [ "Zhang", "Dongmei", "" ], [ "Rajmohan", "Saravan", "" ], [ "Zhang", "Qi", "" ] ]
TITLE: Self-Evolved Reward Learning for LLMs ABSTRACT: Reinforcement Learning from Human Feedback (RLHF) is a crucial technique for aligning language models with human preferences, playing a pivotal role in the success of conversational models like GPT-4, ChatGPT, and Llama 2. A core challenge in employing RLHF lies in training a reliable reward model (RM), which relies on high-quality labels typically provided by human experts or advanced AI systems. These methods can be costly and may introduce biases that affect the language model's responses. As language models improve, human input may become less effective in further enhancing their performance. In this paper, we propose Self-Evolved Reward Learning (SER), a novel approach where the RM generates additional training data to iteratively improve itself. We conducted extensive experiments on multiple datasets such as HH-RLHF and UltraFeedback, using models like Mistral and Llama 3, and compared SER against various baselines. Our results demonstrate that even with limited human-annotated data, learning from self-feedback can robustly enhance RM performance, thereby boosting the capabilities of large language models (LLMs).
no_new_dataset
0.944485
2411.03753
Zihan Yu
Zihan Yu, Jingtao Ding, Yong Li
Symbolic regression via MDLformer-guided search: from minimizing prediction error to minimizing description length
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Symbolic regression, the task of discovering the formula that best fits given data, is typically based on heuristic search. These methods usually update candidate formulas iteratively to obtain new ones with lower prediction errors. However, since formulas with similar function shapes may have completely different symbolic forms, the prediction error does not decrease monotonically as the search approaches the target formula, causing the low recovery rate of existing methods. To solve this problem, we propose a novel search objective based on the minimum description length, which reflects the distance from the target and decreases monotonically as the search approaches the correct form of the target formula. To estimate the minimum description length of any input data, we design a neural network, MDLformer, which enables robust and scalable estimation through large-scale training. With the MDLformer's output as the search objective, we implement a symbolic regression method, SR4MDL, that can effectively recover the correct mathematical form of the formula. Extensive experiments illustrate its excellent performance in recovering formulas from data. Our method successfully recovers around 50 formulas across two benchmark datasets comprising 133 problems, outperforming state-of-the-art methods by 43.92%. Experiments on 122 unseen black-box problems further demonstrate its generalization performance. We release our code at https://github.com/tsinghua-fib-lab/SR4MDL .
[ { "version": "v1", "created": "Wed, 6 Nov 2024 08:29:46 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 07:48:42 GMT" } ]
2025-03-03T00:00:00
[ [ "Yu", "Zihan", "" ], [ "Ding", "Jingtao", "" ], [ "Li", "Yong", "" ] ]
TITLE: Symbolic regression via MDLformer-guided search: from minimizing prediction error to minimizing description length ABSTRACT: Symbolic regression, the task of discovering the formula that best fits given data, is typically based on heuristic search. These methods usually update candidate formulas iteratively to obtain new ones with lower prediction errors. However, since formulas with similar function shapes may have completely different symbolic forms, the prediction error does not decrease monotonically as the search approaches the target formula, causing the low recovery rate of existing methods. To solve this problem, we propose a novel search objective based on the minimum description length, which reflects the distance from the target and decreases monotonically as the search approaches the correct form of the target formula. To estimate the minimum description length of any input data, we design a neural network, MDLformer, which enables robust and scalable estimation through large-scale training. With the MDLformer's output as the search objective, we implement a symbolic regression method, SR4MDL, that can effectively recover the correct mathematical form of the formula. Extensive experiments illustrate its excellent performance in recovering formulas from data. Our method successfully recovers around 50 formulas across two benchmark datasets comprising 133 problems, outperforming state-of-the-art methods by 43.92%. Experiments on 122 unseen black-box problems further demonstrate its generalization performance. We release our code at https://github.com/tsinghua-fib-lab/SR4MDL .
no_new_dataset
0.944587
2411.04847
Fujie Zhang
Fujie Zhang, Peiqi Yu, Biao Yi, Baolei Zhang, Tong Li, Zheli Liu
Prompt-Guided Internal States for Hallucination Detection of Large Language Models
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) have demonstrated remarkable capabilities across a variety of tasks in different domains. However, they sometimes generate responses that are logically coherent but factually incorrect or misleading, which is known as LLM hallucinations. Data-driven supervised methods train hallucination detectors by leveraging the internal states of LLMs, but detectors trained on specific domains often struggle to generalize well to other domains. In this paper, we aim to enhance the cross-domain performance of supervised detectors with only in-domain data. We propose a novel framework, prompt-guided internal states for hallucination detection of LLMs, namely PRISM. By utilizing appropriate prompts to guide changes to the structure related to text truthfulness in LLMs' internal states, we make this structure more salient and consistent across texts from different domains. We integrated our framework with existing hallucination detection methods and conducted experiments on datasets from different domains. The experimental results indicate that our framework significantly enhances the cross-domain generalization of existing hallucination detection methods.
[ { "version": "v1", "created": "Thu, 7 Nov 2024 16:33:48 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 02:41:06 GMT" } ]
2025-03-03T00:00:00
[ [ "Zhang", "Fujie", "" ], [ "Yu", "Peiqi", "" ], [ "Yi", "Biao", "" ], [ "Zhang", "Baolei", "" ], [ "Li", "Tong", "" ], [ "Liu", "Zheli", "" ] ]
TITLE: Prompt-Guided Internal States for Hallucination Detection of Large Language Models ABSTRACT: Large Language Models (LLMs) have demonstrated remarkable capabilities across a variety of tasks in different domains. However, they sometimes generate responses that are logically coherent but factually incorrect or misleading, which is known as LLM hallucinations. Data-driven supervised methods train hallucination detectors by leveraging the internal states of LLMs, but detectors trained on specific domains often struggle to generalize well to other domains. In this paper, we aim to enhance the cross-domain performance of supervised detectors with only in-domain data. We propose a novel framework, prompt-guided internal states for hallucination detection of LLMs, namely PRISM. By utilizing appropriate prompts to guide changes to the structure related to text truthfulness in LLMs' internal states, we make this structure more salient and consistent across texts from different domains. We integrated our framework with existing hallucination detection methods and conducted experiments on datasets from different domains. The experimental results indicate that our framework significantly enhances the cross-domain generalization of existing hallucination detection methods.
no_new_dataset
0.953275
2411.05692
Abhisek Ray Mr.
Abhisek Ray and Ayush Raj and Maheshkumar H. Kolekar
Autoregressive Adaptive Hypergraph Transformer for Skeleton-based Activity Recognition
Accepted to WACV 2025
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extracting multiscale contextual information and higher-order correlations among skeleton sequences using Graph Convolutional Networks (GCNs) alone is inadequate for effective action classification. Hypergraph convolution addresses the above issues but cannot harness the long-range dependencies. The transformer proves to be effective in capturing these dependencies and making complex contextual features accessible. We propose an Autoregressive Adaptive HyperGraph Transformer (AutoregAd-HGformer) model for in-phase (autoregressive and discrete) and out-phase (adaptive) hypergraph generation. The vector quantized in-phase hypergraph equipped with powerful autoregressive learned priors produces a more robust and informative representation suitable for hyperedge formation. The out-phase hypergraph generator provides a model-agnostic hyperedge learning technique to align the attributes with input skeleton embedding. The hybrid (supervised and unsupervised) learning in AutoregAd-HGformer explores the action-dependent feature along spatial, temporal, and channel dimensions. The extensive experimental results and ablation study indicate the superiority of our model over state-of-the-art hypergraph architectures on the NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets.
[ { "version": "v1", "created": "Fri, 8 Nov 2024 16:45:52 GMT" }, { "version": "v2", "created": "Thu, 27 Feb 2025 19:34:59 GMT" } ]
2025-03-03T00:00:00
[ [ "Ray", "Abhisek", "" ], [ "Raj", "Ayush", "" ], [ "Kolekar", "Maheshkumar H.", "" ] ]
TITLE: Autoregressive Adaptive Hypergraph Transformer for Skeleton-based Activity Recognition ABSTRACT: Extracting multiscale contextual information and higher-order correlations among skeleton sequences using Graph Convolutional Networks (GCNs) alone is inadequate for effective action classification. Hypergraph convolution addresses the above issues but cannot harness the long-range dependencies. The transformer proves to be effective in capturing these dependencies and making complex contextual features accessible. We propose an Autoregressive Adaptive HyperGraph Transformer (AutoregAd-HGformer) model for in-phase (autoregressive and discrete) and out-phase (adaptive) hypergraph generation. The vector quantized in-phase hypergraph equipped with powerful autoregressive learned priors produces a more robust and informative representation suitable for hyperedge formation. The out-phase hypergraph generator provides a model-agnostic hyperedge learning technique to align the attributes with input skeleton embedding. The hybrid (supervised and unsupervised) learning in AutoregAd-HGformer explores the action-dependent feature along spatial, temporal, and channel dimensions. The extensive experimental results and ablation study indicate the superiority of our model over state-of-the-art hypergraph architectures on the NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets.
no_new_dataset
0.949059
2411.06655
Shu Wang
Shu Wang, Lei Ji, Renxi Wang, Wenxiao Zhao, Haokun Liu, Yifan Hou, Ying Nian Wu
Explore the Reasoning Capability of LLMs in the Chess Testbed
NAACL 2025 Main Conference. Data and models are available: https://mate-chess.github.io/
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reasoning is a central capability of human intelligence. In recent years, with the advent of large-scale datasets, pretrained large language models have emerged with new capabilities, including reasoning. However, these models still struggle with long-term, complex reasoning tasks, such as playing chess. Based on the observation that expert chess players employ a dual approach combining long-term strategic play with short-term tactical play along with language explanation, we propose improving the reasoning capability of large language models in chess by integrating annotated strategies and tactics. Specifically, we collect a dataset named MATE, which consists of 1 million chess positions with candidate moves annotated by chess experts for strategy and tactics. We finetune the LLaMA-3-8B model and compare it against state-of-the-art commercial language models in the task of selecting better chess moves. Our experiments show that our models perform better than GPT, Claude, and Gemini models. We find that language explanations can enhance the reasoning capability of large language models.
[ { "version": "v1", "created": "Mon, 11 Nov 2024 01:42:56 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 11:58:28 GMT" } ]
2025-03-03T00:00:00
[ [ "Wang", "Shu", "" ], [ "Ji", "Lei", "" ], [ "Wang", "Renxi", "" ], [ "Zhao", "Wenxiao", "" ], [ "Liu", "Haokun", "" ], [ "Hou", "Yifan", "" ], [ "Wu", "Ying Nian", "" ] ]
TITLE: Explore the Reasoning Capability of LLMs in the Chess Testbed ABSTRACT: Reasoning is a central capability of human intelligence. In recent years, with the advent of large-scale datasets, pretrained large language models have emerged with new capabilities, including reasoning. However, these models still struggle with long-term, complex reasoning tasks, such as playing chess. Based on the observation that expert chess players employ a dual approach combining long-term strategic play with short-term tactical play along with language explanation, we propose improving the reasoning capability of large language models in chess by integrating annotated strategies and tactics. Specifically, we collect a dataset named MATE, which consists of 1 million chess positions with candidate moves annotated by chess experts for strategy and tactics. We finetune the LLaMA-3-8B model and compare it against state-of-the-art commercial language models in the task of selecting better chess moves. Our experiments show that our models perform better than GPT, Claude, and Gemini models. We find that language explanations can enhance the reasoning capability of large language models.
new_dataset
0.960621
2411.11285
Ranjan Sapkota
Ranjan Sapkota, Achyut Paudel, Manoj Karkee
Zero-Shot Automatic Annotation and Instance Segmentation using LLM-Generated Datasets: Eliminating Field Imaging and Manual Annotation for Deep Learning Model Development
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Currently, deep learning-based instance segmentation for various applications (e.g., Agriculture) is predominantly performed using a labor-intensive process involving extensive field data collection using sophisticated sensors, followed by careful manual annotation of images, presenting significant logistical and financial challenges to researchers and organizations. The process also slows down the model development and training process. In this study, we present a novel method for deep learning-based instance segmentation of apples in commercial orchards that eliminates the need for labor-intensive field data collection and manual annotation. Utilizing a Large Language Model (LLM), we synthetically generated orchard images and automatically annotated them using the Segment Anything Model (SAM) integrated with a YOLO11 base model. This method significantly reduces reliance on physical sensors and manual data processing, presenting a major advancement in "Agricultural AI". The synthetic, auto-annotated dataset was used to train the YOLO11 model for Apple instance segmentation, which was then validated on real orchard images. The results showed that the automatically generated annotations achieved a Dice Coefficient of 0.9513 and an IoU of 0.9303, validating the accuracy and overlap of the mask annotations. All YOLO11 configurations, trained solely on these synthetic datasets with automated annotations, accurately recognized and delineated apples, highlighting the method's efficacy. Specifically, the YOLO11m-seg configuration achieved a mask precision of 0.902 and a mask mAP@50 of 0.833 on test images collected from a commercial orchard. Additionally, the YOLO11l-seg configuration outperformed other models in validation on 40 LLM-generated images, achieving the highest mask precision and mAP@50 metrics. Keywords: YOLO, SAM, SAMv2, YOLO11, YOLOv11, Segment Anything, YOLO-SAM
[ { "version": "v1", "created": "Mon, 18 Nov 2024 05:11:29 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 00:44:36 GMT" } ]
2025-03-03T00:00:00
[ [ "Sapkota", "Ranjan", "" ], [ "Paudel", "Achyut", "" ], [ "Karkee", "Manoj", "" ] ]
TITLE: Zero-Shot Automatic Annotation and Instance Segmentation using LLM-Generated Datasets: Eliminating Field Imaging and Manual Annotation for Deep Learning Model Development ABSTRACT: Currently, deep learning-based instance segmentation for various applications (e.g., Agriculture) is predominantly performed using a labor-intensive process involving extensive field data collection using sophisticated sensors, followed by careful manual annotation of images, presenting significant logistical and financial challenges to researchers and organizations. The process also slows down the model development and training process. In this study, we present a novel method for deep learning-based instance segmentation of apples in commercial orchards that eliminates the need for labor-intensive field data collection and manual annotation. Utilizing a Large Language Model (LLM), we synthetically generated orchard images and automatically annotated them using the Segment Anything Model (SAM) integrated with a YOLO11 base model. This method significantly reduces reliance on physical sensors and manual data processing, presenting a major advancement in "Agricultural AI". The synthetic, auto-annotated dataset was used to train the YOLO11 model for Apple instance segmentation, which was then validated on real orchard images. The results showed that the automatically generated annotations achieved a Dice Coefficient of 0.9513 and an IoU of 0.9303, validating the accuracy and overlap of the mask annotations. All YOLO11 configurations, trained solely on these synthetic datasets with automated annotations, accurately recognized and delineated apples, highlighting the method's efficacy. Specifically, the YOLO11m-seg configuration achieved a mask precision of 0.902 and a mask mAP@50 of 0.833 on test images collected from a commercial orchard. Additionally, the YOLO11l-seg configuration outperformed other models in validation on 40 LLM-generated images, achieving the highest mask precision and mAP@50 metrics. Keywords: YOLO, SAM, SAMv2, YOLO11, YOLOv11, Segment Anything, YOLO-SAM
no_new_dataset
0.945601
2411.17645
Yujie Dai
Yujie Dai, Brian Sullivan, Axel Montout, Amy Dillon, Chris Waller, Peter Acs, Rachel Denholm, Philip Williams, Alastair D Hay, Raul Santos-Rodriguez, Andrew Dowsey
Explainable AI for Classifying UTI Risk Groups Using a Real-World Linked EHR and Pathology Lab Dataset
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of machine learning and AI on electronic health records (EHRs) holds substantial potential for clinical insight. However, this approach faces challenges due to data heterogeneity, sparsity, temporal misalignment, and limited labeled outcomes. In this context, we leverage a linked EHR dataset of approximately one million de-identified individuals from Bristol, North Somerset, and South Gloucestershire, UK, to characterize urinary tract infections (UTIs). We implemented a data pre-processing and curation pipeline that transforms the raw EHR data into a structured format suitable for developing predictive models focused on data fairness, accountability and transparency. Given the limited availability and biases of ground truth UTI outcomes, we introduce a UTI risk estimation framework informed by clinical expertise to estimate UTI risk across individual patient timelines. Pairwise XGBoost models are trained using this framework to differentiate UTI risk categories with explainable AI techniques applied to identify key predictors and support interpretability. Our findings reveal differences in clinical and demographic predictors across risk groups. While this study highlights the potential of AI-driven insights to support UTI clinical decision-making, further investigation of patient sub-strata and extensive validation are needed to ensure robustness and applicability in clinical practice.
[ { "version": "v1", "created": "Tue, 26 Nov 2024 18:10:51 GMT" }, { "version": "v2", "created": "Mon, 13 Jan 2025 16:01:14 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 15:16:36 GMT" } ]
2025-03-03T00:00:00
[ [ "Dai", "Yujie", "" ], [ "Sullivan", "Brian", "" ], [ "Montout", "Axel", "" ], [ "Dillon", "Amy", "" ], [ "Waller", "Chris", "" ], [ "Acs", "Peter", "" ], [ "Denholm", "Rachel", "" ], [ "Williams", "Philip", "" ], [ "Hay", "Alastair D", "" ], [ "Santos-Rodriguez", "Raul", "" ], [ "Dowsey", "Andrew", "" ] ]
TITLE: Explainable AI for Classifying UTI Risk Groups Using a Real-World Linked EHR and Pathology Lab Dataset ABSTRACT: The use of machine learning and AI on electronic health records (EHRs) holds substantial potential for clinical insight. However, this approach faces challenges due to data heterogeneity, sparsity, temporal misalignment, and limited labeled outcomes. In this context, we leverage a linked EHR dataset of approximately one million de-identified individuals from Bristol, North Somerset, and South Gloucestershire, UK, to characterize urinary tract infections (UTIs). We implemented a data pre-processing and curation pipeline that transforms the raw EHR data into a structured format suitable for developing predictive models focused on data fairness, accountability and transparency. Given the limited availability and biases of ground truth UTI outcomes, we introduce a UTI risk estimation framework informed by clinical expertise to estimate UTI risk across individual patient timelines. Pairwise XGBoost models are trained using this framework to differentiate UTI risk categories with explainable AI techniques applied to identify key predictors and support interpretability. Our findings reveal differences in clinical and demographic predictors across risk groups. While this study highlights the potential of AI-driven insights to support UTI clinical decision-making, further investigation of patient sub-strata and extensive validation are needed to ensure robustness and applicability in clinical practice.
no_new_dataset
0.945349
2412.02370
Eerik Alamikkotervo
Eerik Alamikkotervo, Henrik Toikka, Kari Tammi, Risto Ojala
Trajectory-based Road Autolabeling with Lidar-Camera Fusion in Winter Conditions
Small bugs fixed, noise filtering removed as it was removing useful points, failure case analysis added, dataset published
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Robust road segmentation in all road conditions is required for safe autonomous driving and advanced driver assistance systems. Supervised deep learning methods provide accurate road segmentation in the domain of their training data but cannot be trusted in out-of-distribution scenarios. Including the whole distribution in the training set is challenging as each sample must be labeled by hand. Trajectory-based self-supervised methods offer a potential solution as they can learn from the traversed route without manual labels. However, existing trajectory-based methods use learning schemes that rely only on the camera or only on the lidar. In this paper, trajectory-based learning is implemented jointly with lidar and camera for increased performance. Our method outperforms recent standalone camera- and lidar-based methods when evaluated with a challenging winter driving dataset including countryside and suburb driving scenes. The source code is available at https://github.com/eerik98/lidar-camera-road-autolabeling.git
[ { "version": "v1", "created": "Tue, 3 Dec 2024 10:54:37 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 12:28:56 GMT" } ]
2025-03-03T00:00:00
[ [ "Alamikkotervo", "Eerik", "" ], [ "Toikka", "Henrik", "" ], [ "Tammi", "Kari", "" ], [ "Ojala", "Risto", "" ] ]
TITLE: Trajectory-based Road Autolabeling with Lidar-Camera Fusion in Winter Conditions ABSTRACT: Robust road segmentation in all road conditions is required for safe autonomous driving and advanced driver assistance systems. Supervised deep learning methods provide accurate road segmentation in the domain of their training data but cannot be trusted in out-of-distribution scenarios. Including the whole distribution in the training set is challenging as each sample must be labeled by hand. Trajectory-based self-supervised methods offer a potential solution as they can learn from the traversed route without manual labels. However, existing trajectory-based methods use learning schemes that rely only on the camera or only on the lidar. In this paper, trajectory-based learning is implemented jointly with lidar and camera for increased performance. Our method outperforms recent standalone camera- and lidar-based methods when evaluated with a challenging winter driving dataset including countryside and suburb driving scenes. The source code is available at https://github.com/eerik98/lidar-camera-road-autolabeling.git
no_new_dataset
0.945901
2412.03084
Deep Gupta Dr.
Ajinkya Deshpande, Deep Gupta, Ankit Bhurane, Nisha Meshram, Sneha Singh, Petia Radeva
Hybrid deep learning-based strategy for the hepatocellular carcinoma cancer grade classification of H&E stained liver histopathology images
14 figures, 9 tables
null
null
null
eess.IV cs.CV cs.LG q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Hepatocellular carcinoma (HCC) is a common type of liver cancer whose early-stage diagnosis is a common challenge, mainly due to the manual assessment of hematoxylin and eosin-stained whole slide images, which is a time-consuming process and may lead to variability in decision-making. For accurate detection of HCC, we propose a hybrid deep learning-based architecture that uses transfer learning to extract the features from pre-trained convolutional neural network (CNN) models and a classifier made up of a sequence of fully connected layers. This study uses the publicly available The Cancer Genome Atlas Hepatocellular Carcinoma (TCGA-LIHC) database (n=491) for model development and the database of Kasturba Gandhi Medical College (KMC), India, for validation. The pre-processing step involves patch extraction, colour normalization, and augmentation, resulting in 3920 patches for the TCGA dataset. The developed hybrid deep neural network, consisting of a CNN-based pre-trained feature extractor and a customized artificial neural network-based classifier, is trained using five-fold cross-validation. For this study, eight different state-of-the-art models are trained and tested as feature extractors for the proposed hybrid model. The proposed hybrid model with a ResNet50-based feature extractor provided a sensitivity, specificity, F1-score, accuracy, and AUC of 100.00%, 100.00%, 100.00%, 100.00%, and 1.00, respectively, on the TCGA database. On the KMC database, EfficientNetb3 resulted in the optimal choice of feature extractor, giving a sensitivity, specificity, F1-score, accuracy, and AUC of 96.97, 98.85, 96.71, 96.71, and 0.99, respectively. The proposed hybrid models showed an improvement in accuracy of 2% and 4% over the pre-trained models on the TCGA-LIHC and KMC databases.
[ { "version": "v1", "created": "Wed, 4 Dec 2024 07:26:36 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 12:24:33 GMT" } ]
2025-03-03T00:00:00
[ [ "Deshpande", "Ajinkya", "" ], [ "Gupta", "Deep", "" ], [ "Bhurane", "Ankit", "" ], [ "Meshram", "Nisha", "" ], [ "Singh", "Sneha", "" ], [ "Radeva", "Petia", "" ] ]
TITLE: Hybrid deep learning-based strategy for the hepatocellular carcinoma cancer grade classification of H&E stained liver histopathology images ABSTRACT: Hepatocellular carcinoma (HCC) is a common type of liver cancer whose early-stage diagnosis is a common challenge, mainly due to the manual assessment of hematoxylin and eosin-stained whole slide images, which is a time-consuming process and may lead to variability in decision-making. For accurate detection of HCC, we propose a hybrid deep learning-based architecture that uses transfer learning to extract the features from pre-trained convolutional neural network (CNN) models and a classifier made up of a sequence of fully connected layers. This study uses the publicly available The Cancer Genome Atlas Hepatocellular Carcinoma (TCGA-LIHC) database (n=491) for model development and the database of Kasturba Gandhi Medical College (KMC), India, for validation. The pre-processing step involves patch extraction, colour normalization, and augmentation, resulting in 3920 patches for the TCGA dataset. The developed hybrid deep neural network, consisting of a CNN-based pre-trained feature extractor and a customized artificial neural network-based classifier, is trained using five-fold cross-validation. For this study, eight different state-of-the-art models are trained and tested as feature extractors for the proposed hybrid model. The proposed hybrid model with a ResNet50-based feature extractor provided a sensitivity, specificity, F1-score, accuracy, and AUC of 100.00%, 100.00%, 100.00%, 100.00%, and 1.00, respectively, on the TCGA database. On the KMC database, EfficientNetb3 resulted in the optimal choice of feature extractor, giving a sensitivity, specificity, F1-score, accuracy, and AUC of 96.97, 98.85, 96.71, 96.71, and 0.99, respectively. The proposed hybrid models showed an improvement in accuracy of 2% and 4% over the pre-trained models on the TCGA-LIHC and KMC databases.
no_new_dataset
0.957636
2412.03844
Jiaqi Gu
Jingyu Lin, Jiaqi Gu, Lubin Fan, Bojian Wu, Yujing Lou, Renjie Chen, Ligang Liu, Jieping Ye
HybridGS: Decoupling Transients and Statics with 2D and 3D Gaussian Splatting
Accepted by CVPR 2025. Project page: https://gujiaqivadin.github.io/hybridgs/ Code: https://github.com/Yeyuqqwx/HybridGS Data: https://huggingface.co/Eto63277/HybridGS/tree/main
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Generating high-quality novel view renderings of 3D Gaussian Splatting (3DGS) in scenes featuring transient objects is challenging. We propose a novel hybrid representation, termed HybridGS, using 2D Gaussians for transient objects per image and maintaining traditional 3D Gaussians for the whole static scene. Note that 3DGS itself is better suited for modeling static scenes that assume multi-view consistency, but transient objects appear occasionally and do not adhere to this assumption; thus, we model them as planar objects from a single view, represented with 2D Gaussians. Our novel representation decomposes the scene from the perspective of fundamental viewpoint consistency, making it more reasonable. Additionally, we present a novel multi-view regulated supervision method for 3DGS that leverages information from co-visible regions, further enhancing the distinctions between the transients and statics. Then, we propose a straightforward yet effective multi-stage training strategy to ensure robust training and high-quality view synthesis across various settings. Experiments on benchmark datasets show our state-of-the-art performance of novel view synthesis in both indoor and outdoor scenes, even in the presence of distracting elements.
[ { "version": "v1", "created": "Thu, 5 Dec 2024 03:20:35 GMT" }, { "version": "v2", "created": "Tue, 10 Dec 2024 04:59:24 GMT" }, { "version": "v3", "created": "Thu, 27 Feb 2025 02:48:54 GMT" }, { "version": "v4", "created": "Fri, 28 Feb 2025 09:49:45 GMT" } ]
2025-03-03T00:00:00
[ [ "Lin", "Jingyu", "" ], [ "Gu", "Jiaqi", "" ], [ "Fan", "Lubin", "" ], [ "Wu", "Bojian", "" ], [ "Lou", "Yujing", "" ], [ "Chen", "Renjie", "" ], [ "Liu", "Ligang", "" ], [ "Ye", "Jieping", "" ] ]
TITLE: HybridGS: Decoupling Transients and Statics with 2D and 3D Gaussian Splatting ABSTRACT: Generating high-quality novel view renderings with 3D Gaussian Splatting (3DGS) in scenes featuring transient objects is challenging. We propose a novel hybrid representation, termed HybridGS, using 2D Gaussians for transient objects in each image while maintaining traditional 3D Gaussians for the whole static scene. Note that 3DGS itself is better suited to modeling static scenes, which assume multi-view consistency, whereas transient objects appear only occasionally and do not adhere to this assumption; we therefore model them as planar objects from a single view, represented with 2D Gaussians. Our novel representation decomposes the scene from the perspective of fundamental viewpoint consistency, making it more principled. Additionally, we present a novel multi-view regulated supervision method for 3DGS that leverages information from co-visible regions, further sharpening the distinction between transients and statics. We then propose a straightforward yet effective multi-stage training strategy to ensure robust training and high-quality view synthesis across various settings. Experiments on benchmark datasets show state-of-the-art novel view synthesis performance in both indoor and outdoor scenes, even in the presence of distracting elements.
no_new_dataset
0.944331
2412.06071
Juyong Jiang
Fan Wang, Juyong Jiang, Chansung Park, Sunghun Kim, Jing Tang
KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models
The first three authors contributed equally to this work; Accepted by ICLR 2025
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasing sizes of large language models (LLMs) result in significant computational overhead and memory usage when adapting these models to specific tasks or domains. Various parameter-efficient fine-tuning (PEFT) methods have been devised to mitigate these challenges by training a small set of parameters for the task-specific updates of the model weights. Among PEFT methods, LoRA stands out for its simplicity and efficiency, inspiring the development of a series of variants. However, LoRA and its successors disregard the knowledge that is noisy or irrelevant to the targeted task, detrimentally impacting model performance and leading to suboptimality. To address this limitation, we introduce Knowledge-aware Singular-value Adaptation (KaSA), a PEFT method that leverages singular value decomposition (SVD) with knowledge-aware singular values to dynamically activate knowledge based on its relevance to the task at hand. We conduct extensive experiments across a range of LLMs on tasks spanning natural language understanding (NLU), generation (NLG), instruction following, and commonsense reasoning. The experimental results demonstrate that KaSA consistently outperforms FFT and 14 popular PEFT baselines across 16 benchmarks and 4 synthetic datasets, underscoring our method's efficacy and adaptability. The source code of our method is available at https://github.com/juyongjiang/KaSA.
[ { "version": "v1", "created": "Sun, 8 Dec 2024 21:26:22 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 05:46:45 GMT" } ]
2025-03-03T00:00:00
[ [ "Wang", "Fan", "" ], [ "Jiang", "Juyong", "" ], [ "Park", "Chansung", "" ], [ "Kim", "Sunghun", "" ], [ "Tang", "Jing", "" ] ]
TITLE: KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models ABSTRACT: The increasing sizes of large language models (LLMs) result in significant computational overhead and memory usage when adapting these models to specific tasks or domains. Various parameter-efficient fine-tuning (PEFT) methods have been devised to mitigate these challenges by training a small set of parameters for the task-specific updates of the model weights. Among PEFT methods, LoRA stands out for its simplicity and efficiency, inspiring the development of a series of variants. However, LoRA and its successors disregard the knowledge that is noisy or irrelevant to the targeted task, detrimentally impacting model performance and leading to suboptimality. To address this limitation, we introduce Knowledge-aware Singular-value Adaptation (KaSA), a PEFT method that leverages singular value decomposition (SVD) with knowledge-aware singular values to dynamically activate knowledge based on its relevance to the task at hand. We conduct extensive experiments across a range of LLMs on tasks spanning natural language understanding (NLU), generation (NLG), instruction following, and commonsense reasoning. The experimental results demonstrate that KaSA consistently outperforms FFT and 14 popular PEFT baselines across 16 benchmarks and 4 synthetic datasets, underscoring our method's efficacy and adaptability. The source code of our method is available at https://github.com/juyongjiang/KaSA.
no_new_dataset
0.943348
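A minimal sketch of the SVD-based adaptation idea behind the KaSA record above: the frozen weight is decomposed once, and a small vector of learnable singular values re-weights (activates or suppresses) the retained directions during fine-tuning. The gating form, rank, and zero initialization are illustrative assumptions; the actual knowledge-aware mechanism is described in the paper.

```python
# Sketch of an SVD-based adapter: frozen base weight plus a low-rank update
# parameterized only by learnable singular values over fixed SVD directions.
import torch
import torch.nn as nn

class SVDAdapterLinear(nn.Module):
    def __init__(self, weight: torch.Tensor, rank: int = 8):
        super().__init__()
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("weight", weight)    # frozen base weight
        self.register_buffer("U", U[:, :rank])    # top-r left singular vectors
        self.register_buffer("Vh", Vh[:rank, :])  # top-r right singular vectors
        # learnable "knowledge-aware" singular values, initialized at zero
        self.delta_s = nn.Parameter(torch.zeros(rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta_w = self.U @ torch.diag(self.delta_s) @ self.Vh
        return x @ (self.weight + delta_w).T

layer = SVDAdapterLinear(torch.randn(64, 32), rank=8)
out = layer(torch.randn(4, 32))
print(out.shape)  # torch.Size([4, 64])
```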
2412.08467
Zun Wang
Zun Wang, Jialu Li, Yicong Hong, Songze Li, Kunchang Li, Shoubin Yu, Yi Wang, Yu Qiao, Yali Wang, Mohit Bansal, Limin Wang
Bootstrapping Language-Guided Navigation Learning with Self-Refining Data Flywheel
28 pages, Code and data are available at https://github.com/wz0919/VLN-SRDF
null
null
null
cs.CV cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Creating high-quality data for training robust language-instructed agents is a long-lasting challenge in embodied AI. In this paper, we introduce a Self-Refining Data Flywheel (SRDF) that generates high-quality and large-scale navigational instruction-trajectory pairs by iteratively refining the data pool through the collaboration between two models, the instruction generator and the navigator, without any human-in-the-loop annotation. Specifically, SRDF starts with using a base generator to create an initial data pool for training a base navigator, followed by applying the trained navigator to filter the data pool. This leads to higher-fidelity data to train a better generator, which can, in turn, produce higher-quality data for training the next-round navigator. Such a flywheel establishes a data self-refining process, yielding a continuously improved and highly effective dataset for large-scale language-guided navigation learning. Our experiments demonstrate that after several flywheel rounds, the navigator elevates the performance boundary from 70% to 78% SPL on the classic R2R test set, surpassing human performance (76%) for the first time. Meanwhile, this process results in a superior generator, evidenced by a SPICE increase from 23.5 to 26.2, better than all previous VLN instruction generation methods. Finally, we demonstrate the scalability of our method through increasing environment and instruction diversity, and the generalization ability of our pre-trained navigator across various downstream navigation tasks, surpassing state-of-the-art methods by a large margin in all cases.
[ { "version": "v1", "created": "Wed, 11 Dec 2024 15:32:24 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 08:06:39 GMT" } ]
2025-03-03T00:00:00
[ [ "Wang", "Zun", "" ], [ "Li", "Jialu", "" ], [ "Hong", "Yicong", "" ], [ "Li", "Songze", "" ], [ "Li", "Kunchang", "" ], [ "Yu", "Shoubin", "" ], [ "Wang", "Yi", "" ], [ "Qiao", "Yu", "" ], [ "Wang", "Yali", "" ], [ "Bansal", "Mohit", "" ], [ "Wang", "Limin", "" ] ]
TITLE: Bootstrapping Language-Guided Navigation Learning with Self-Refining Data Flywheel ABSTRACT: Creating high-quality data for training robust language-instructed agents is a long-lasting challenge in embodied AI. In this paper, we introduce a Self-Refining Data Flywheel (SRDF) that generates high-quality and large-scale navigational instruction-trajectory pairs by iteratively refining the data pool through the collaboration between two models, the instruction generator and the navigator, without any human-in-the-loop annotation. Specifically, SRDF starts with using a base generator to create an initial data pool for training a base navigator, followed by applying the trained navigator to filter the data pool. This leads to higher-fidelity data to train a better generator, which can, in turn, produce higher-quality data for training the next-round navigator. Such a flywheel establishes a data self-refining process, yielding a continuously improved and highly effective dataset for large-scale language-guided navigation learning. Our experiments demonstrate that after several flywheel rounds, the navigator elevates the performance boundary from 70% to 78% SPL on the classic R2R test set, surpassing human performance (76%) for the first time. Meanwhile, this process results in a superior generator, evidenced by a SPICE increase from 23.5 to 26.2, better than all previous VLN instruction generation methods. Finally, we demonstrate the scalability of our method through increasing environment and instruction diversity, and the generalization ability of our pre-trained navigator across various downstream navigation tasks, surpassing state-of-the-art methods by a large margin in all cases.
no_new_dataset
0.945349
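The flywheel in the record above is, at its core, a loop in which a generator and a navigator alternately improve each other through filtered data. The toy below captures only that control flow; every component is a trivial numeric stand-in for the real VLN models.

```python
# Toy skeleton of a self-refining data flywheel: the generator proposes
# pairs, the navigator filters them, and both improve on the surviving pool.
import random

def generate_pairs(generator_quality: float, n: int = 1000):
    # each pair carries a latent "fidelity" score in [0, 1]
    return [random.betavariate(2 + 8 * generator_quality, 2) for _ in range(n)]

def navigator_filter(pairs, navigator_skill: float):
    # a better navigator keeps only pairs it can reliably follow
    threshold = 0.5 + 0.3 * navigator_skill
    return [p for p in pairs if p >= threshold]

gen_q, nav_s = 0.2, 0.2
for round_id in range(1, 4):
    pool = generate_pairs(gen_q)
    kept = navigator_filter(pool, nav_s)
    avg = sum(kept) / max(len(kept), 1)
    gen_q = min(1.0, 0.8 * gen_q + 0.4 * avg)  # "retrain" generator on kept data
    nav_s = min(1.0, 0.8 * nav_s + 0.4 * avg)  # "retrain" navigator on kept data
    print(f"round {round_id}: kept {len(kept)} pairs, mean fidelity {avg:.2f}")
```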
2412.11441
Yuning Han
Yuning Han, Bingyin Zhao, Rui Chu, Feng Luo, Biplab Sikdar, Yingjie Lao
UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models
null
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent studies show that diffusion models (DMs) are vulnerable to backdoor attacks. Existing backdoor attacks impose unconcealed triggers (e.g., a gray box and eyeglasses) that contain evident patterns, rendering remarkable attack effects yet easy detection upon human inspection and defensive algorithms. While it is possible to improve stealthiness by reducing the strength of the backdoor, doing so can significantly compromise its generality and effectiveness. In this paper, we propose UIBDiffusion, the universal imperceptible backdoor attack for diffusion models, which allows us to achieve superior attack and generation performance while evading state-of-the-art defenses. We propose a novel trigger generation approach based on universal adversarial perturbations (UAPs) and reveal that such perturbations, which are initially devised for fooling pre-trained discriminative models, can be adapted as potent imperceptible backdoor triggers for DMs. We evaluate UIBDiffusion on multiple types of DMs with different kinds of samplers across various datasets and targets. Experimental results demonstrate that UIBDiffusion brings three advantages: 1) Universality, the imperceptible trigger is universal (i.e., image and model agnostic) where a single trigger is effective to any images and all diffusion models with different samplers; 2) Utility, it achieves comparable generation quality (e.g., FID) and even better attack success rate (i.e., ASR) at low poison rates compared to the prior works; and 3) Undetectability, UIBDiffusion is plausible to human perception and can bypass Elijah and TERD, the SOTA defenses against backdoors for DMs. We will release our backdoor triggers and code.
[ { "version": "v1", "created": "Mon, 16 Dec 2024 04:47:55 GMT" }, { "version": "v2", "created": "Tue, 31 Dec 2024 05:07:06 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 04:36:39 GMT" } ]
2025-03-03T00:00:00
[ [ "Han", "Yuning", "" ], [ "Zhao", "Bingyin", "" ], [ "Chu", "Rui", "" ], [ "Luo", "Feng", "" ], [ "Sikdar", "Biplab", "" ], [ "Lao", "Yingjie", "" ] ]
TITLE: UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models ABSTRACT: Recent studies show that diffusion models (DMs) are vulnerable to backdoor attacks. Existing backdoor attacks impose unconcealed triggers (e.g., a gray box and eyeglasses) that contain evident patterns, rendering remarkable attack effects yet easy detection upon human inspection and defensive algorithms. While it is possible to improve stealthiness by reducing the strength of the backdoor, doing so can significantly compromise its generality and effectiveness. In this paper, we propose UIBDiffusion, the universal imperceptible backdoor attack for diffusion models, which allows us to achieve superior attack and generation performance while evading state-of-the-art defenses. We propose a novel trigger generation approach based on universal adversarial perturbations (UAPs) and reveal that such perturbations, which are initially devised for fooling pre-trained discriminative models, can be adapted as potent imperceptible backdoor triggers for DMs. We evaluate UIBDiffusion on multiple types of DMs with different kinds of samplers across various datasets and targets. Experimental results demonstrate that UIBDiffusion brings three advantages: 1) Universality, the imperceptible trigger is universal (i.e., image and model agnostic) where a single trigger is effective to any images and all diffusion models with different samplers; 2) Utility, it achieves comparable generation quality (e.g., FID) and even better attack success rate (i.e., ASR) at low poison rates compared to the prior works; and 3) Undetectability, UIBDiffusion is plausible to human perception and can bypass Elijah and TERD, the SOTA defenses against backdoors for DMs. We will release our backdoor triggers and code.
no_new_dataset
0.942929
2412.12693
Wenyu Zhang
Wenyu Zhang, Wei En Ng, Lixin Ma, Yuwen Wang, Jungqi Zhao, Allison Koenecke, Boyang Li, Lu Wang
SPHERE: Unveiling Spatial Blind Spots in Vision-Language Models Through Hierarchical Evaluation
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Current vision-language models may grasp basic spatial cues and simple directions (e.g. left, right, front, back), but struggle with the multi-dimensional spatial reasoning necessary for human-like understanding and real-world applications. To address this gap, we develop SPHERE (Spatial Perception and Hierarchical Evaluation of REasoning), a hierarchical evaluation framework supported by a new human-annotated dataset. SPHERE systematically probes models across increasing levels of complexity, from fundamental skills to multi-skill integration and high-level reasoning that combines spatial, visual, and logical understanding. Benchmark evaluation of state-of-the-art models reveals significant deficiencies, especially in reasoning about distance and proximity, understanding both egocentric and allocentric perspectives, and applying spatial logic in physical contexts. These findings expose critical blind spots in existing models and underscore the need for more advanced spatial reasoning techniques, driving the development of vision-language models that align more closely with human spatial cognition. The SPHERE benchmark is available at https://github.com/zwenyu/SPHERE-VLM.
[ { "version": "v1", "created": "Tue, 17 Dec 2024 09:10:55 GMT" }, { "version": "v2", "created": "Mon, 17 Feb 2025 10:28:00 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 15:14:37 GMT" } ]
2025-03-03T00:00:00
[ [ "Zhang", "Wenyu", "" ], [ "Ng", "Wei En", "" ], [ "Ma", "Lixin", "" ], [ "Wang", "Yuwen", "" ], [ "Zhao", "Jungqi", "" ], [ "Koenecke", "Allison", "" ], [ "Li", "Boyang", "" ], [ "Wang", "Lu", "" ] ]
TITLE: SPHERE: Unveiling Spatial Blind Spots in Vision-Language Models Through Hierarchical Evaluation ABSTRACT: Current vision-language models may grasp basic spatial cues and simple directions (e.g. left, right, front, back), but struggle with the multi-dimensional spatial reasoning necessary for human-like understanding and real-world applications. To address this gap, we develop SPHERE (Spatial Perception and Hierarchical Evaluation of REasoning), a hierarchical evaluation framework supported by a new human-annotated dataset. SPHERE systematically probes models across increasing levels of complexity, from fundamental skills to multi-skill integration and high-level reasoning that combines spatial, visual, and logical understanding. Benchmark evaluation of state-of-the-art models reveals significant deficiencies, especially in reasoning about distance and proximity, understanding both egocentric and allocentric perspectives, and applying spatial logic in physical contexts. These findings expose critical blind spots in existing models and underscore the need for more advanced spatial reasoning techniques, driving the development of vision-language models that align more closely with human spatial cognition. The SPHERE benchmark is available at https://github.com/zwenyu/SPHERE-VLM.
new_dataset
0.957318
2412.13211
Arth Shukla
Arth Shukla, Stone Tao, Hao Su
ManiSkill-HAB: A Benchmark for Low-Level Manipulation in Home Rearrangement Tasks
null
null
null
null
cs.RO cs.AI cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
High-quality benchmarks are the foundation for embodied AI research, enabling significant advancements in long-horizon navigation, manipulation and rearrangement tasks. However, as frontier tasks in robotics get more advanced, they require faster simulation speed, more intricate test environments, and larger demonstration datasets. To this end, we present MS-HAB, a holistic benchmark for low-level manipulation and in-home object rearrangement. First, we provide a GPU-accelerated implementation of the Home Assistant Benchmark (HAB). We support realistic low-level control and achieve over 3x the speed of prior magical grasp implementations at a fraction of the GPU memory usage. Second, we train extensive reinforcement learning (RL) and imitation learning (IL) baselines for future work to compare against. Finally, we develop a rule-based trajectory filtering system to sample specific demonstrations from our RL policies which match predefined criteria for robot behavior and safety. Combining demonstration filtering with our fast environments enables efficient, controlled data generation at scale.
[ { "version": "v1", "created": "Mon, 9 Dec 2024 01:29:24 GMT" }, { "version": "v2", "created": "Fri, 20 Dec 2024 05:21:39 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 10:10:33 GMT" } ]
2025-03-03T00:00:00
[ [ "Shukla", "Arth", "" ], [ "Tao", "Stone", "" ], [ "Su", "Hao", "" ] ]
TITLE: ManiSkill-HAB: A Benchmark for Low-Level Manipulation in Home Rearrangement Tasks ABSTRACT: High-quality benchmarks are the foundation for embodied AI research, enabling significant advancements in long-horizon navigation, manipulation and rearrangement tasks. However, as frontier tasks in robotics get more advanced, they require faster simulation speed, more intricate test environments, and larger demonstration datasets. To this end, we present MS-HAB, a holistic benchmark for low-level manipulation and in-home object rearrangement. First, we provide a GPU-accelerated implementation of the Home Assistant Benchmark (HAB). We support realistic low-level control and achieve over 3x the speed of prior magical grasp implementations at a fraction of the GPU memory usage. Second, we train extensive reinforcement learning (RL) and imitation learning (IL) baselines for future work to compare against. Finally, we develop a rule-based trajectory filtering system to sample specific demonstrations from our RL policies which match predefined criteria for robot behavior and safety. Combining demonstration filtering with our fast environments enables efficient, controlled data generation at scale.
no_new_dataset
0.940463
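The rule-based trajectory filtering step in the record above amounts to applying predefined success and safety predicates to each demonstration. A minimal sketch, with field names and thresholds invented for illustration:

```python
# Keep only demonstrations that satisfy predefined behavior/safety rules.
from dataclasses import dataclass

@dataclass
class Trajectory:
    success: bool
    max_joint_velocity: float
    collided_with_static: bool

def keep(t: Trajectory, vel_limit: float = 1.5) -> bool:
    return (t.success and not t.collided_with_static
            and t.max_joint_velocity <= vel_limit)

demos = [
    Trajectory(True, 1.2, False),
    Trajectory(True, 2.4, False),   # too fast: filtered out
    Trajectory(False, 0.9, False),  # failed: filtered out
]
filtered = [t for t in demos if keep(t)]
print(len(filtered))  # 1
```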
2412.13299
Eichi Takaya
Eichi Takaya and Shinnosuke Yamamoto
In-context learning for medical image segmentation
null
null
null
null
eess.IV cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
Annotation of medical images, such as MRI and CT scans, is crucial for evaluating treatment efficacy and planning radiotherapy. However, the extensive workload of medical professionals limits their ability to annotate large image datasets, posing a bottleneck for AI applications in medical imaging. To address this, we propose In-context Cascade Segmentation (ICS), a novel method that minimizes annotation requirements while achieving high segmentation accuracy for sequential medical images. ICS builds on the UniverSeg framework, which performs few-shot segmentation using support images without additional training. By iteratively adding the inference results of each slice to the support set, ICS propagates information forward and backward through the sequence, ensuring inter-slice consistency. We evaluate the proposed method on the HVSMR dataset, which includes segmentation tasks for eight cardiac regions. Experimental results demonstrate that ICS significantly improves segmentation performance in complex anatomical regions, particularly in maintaining boundary consistency across slices, compared to baseline methods. The study also highlights the impact of the number and position of initial support slices on segmentation accuracy. ICS offers a promising solution for reducing annotation burdens while delivering robust segmentation results, paving the way for its broader adoption in clinical and research applications.
[ { "version": "v1", "created": "Tue, 17 Dec 2024 19:59:08 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 06:19:59 GMT" } ]
2025-03-03T00:00:00
[ [ "Takaya", "Eichi", "" ], [ "Yamamoto", "Shinnosuke", "" ] ]
TITLE: In-context learning for medical image segmentation ABSTRACT: Annotation of medical images, such as MRI and CT scans, is crucial for evaluating treatment efficacy and planning radiotherapy. However, the extensive workload of medical professionals limits their ability to annotate large image datasets, posing a bottleneck for AI applications in medical imaging. To address this, we propose In-context Cascade Segmentation (ICS), a novel method that minimizes annotation requirements while achieving high segmentation accuracy for sequential medical images. ICS builds on the UniverSeg framework, which performs few-shot segmentation using support images without additional training. By iteratively adding the inference results of each slice to the support set, ICS propagates information forward and backward through the sequence, ensuring inter-slice consistency. We evaluate the proposed method on the HVSMR dataset, which includes segmentation tasks for eight cardiac regions. Experimental results demonstrate that ICS significantly improves segmentation performance in complex anatomical regions, particularly in maintaining boundary consistency across slices, compared to baseline methods. The study also highlights the impact of the number and position of initial support slices on segmentation accuracy. ICS offers a promising solution for reducing annotation burdens while delivering robust segmentation results, paving the way for its broader adoption in clinical and research applications.
no_new_dataset
0.943764
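The cascade mechanism in the record above is easy to state in code: segment each slice with the current support set, then append the prediction to the support set before moving to the next slice. The sketch below uses a toy stand-in for the UniverSeg-style few-shot model; a real model would condition on the query slice.

```python
# Sketch of in-context cascade segmentation over a slice sequence.
import numpy as np

def segment(query: np.ndarray, support: list) -> np.ndarray:
    # toy few-shot "model": average the support masks, threshold at 0.5
    masks = np.stack([m for _, m in support])
    return (masks.mean(axis=0) > 0.5).astype(np.uint8)

slices = [np.random.rand(64, 64) for _ in range(5)]
support = [(slices[0], np.ones((64, 64), dtype=np.uint8))]  # one annotated slice
for s in slices[1:]:                       # forward pass through the volume
    pred = segment(s, support)
    support.append((s, pred))              # cascade: grow the support set
print(len(support))  # 5
```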
2412.14613
Nakamasa Inoue
Masanari Ohi, Masahiro Kaneko, Naoaki Okazaki, Nakamasa Inoue
Multi-modal, Multi-task, Multi-criteria Automatic Evaluation with Vision Language Models
null
null
null
null
cs.CL cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision-language models (VLMs) have shown impressive abilities across a range of multi-modal tasks. However, existing metrics for evaluating the quality of text generated by VLMs typically focus on an overall evaluation for a specific task, such as image captioning. While the overall evaluation is essential for any task, the criteria prioritized can differ depending on the task, making it challenging for current metrics to adapt to multi-task scenarios. To address this limitation, we propose HarmonicEval, a reference-free comprehensive evaluation metric that aggregates criterion-wise scores to produce the overall score in a bottom-up manner. Furthermore, we construct the Multi-task Multi-criteria Human Evaluation (MMHE) dataset, which comprises 18,000 expert human judgments across four multi-modal tasks. Our experiments demonstrate that HarmonicEval achieves higher correlations with human judgments than conventional metrics while providing numerical scores for each criterion.
[ { "version": "v1", "created": "Thu, 19 Dec 2024 08:03:16 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 03:04:05 GMT" } ]
2025-03-03T00:00:00
[ [ "Ohi", "Masanari", "" ], [ "Kaneko", "Masahiro", "" ], [ "Okazaki", "Naoaki", "" ], [ "Inoue", "Nakamasa", "" ] ]
TITLE: Multi-modal, Multi-task, Multi-criteria Automatic Evaluation with Vision Language Models ABSTRACT: Vision-language models (VLMs) have shown impressive abilities across a range of multi-modal tasks. However, existing metrics for evaluating the quality of text generated by VLMs typically focus on an overall evaluation for a specific task, such as image captioning. While the overall evaluation is essential for any task, the criteria prioritized can differ depending on the task, making it challenging for current metrics to adapt to multi-task scenarios. To address this limitation, we propose HarmonicEval, a reference-free comprehensive evaluation metric that aggregates criterion-wise scores to produce the overall score in a bottom-up manner. Furthermore, we construct the Multi-task Multi-criteria Human Evaluation (MMHE) dataset, which comprises 18,000 expert human judgments across four multi-modal tasks. Our experiments demonstrate that HarmonicEval achieves higher correlations with human judgments than conventional metrics while providing numerical scores for each criterion.
new_dataset
0.958654
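The name HarmonicEval suggests a harmonic-mean-style aggregation of criterion scores; the snippet below assumes exactly that, purely for illustration - the paper's actual bottom-up aggregation rule may differ.

```python
# Assumed aggregation: harmonic mean of criterion-wise scores, which is
# naturally pulled toward the weakest criterion.
from statistics import harmonic_mean

criterion_scores = {"fluency": 4.0, "relevance": 3.5, "correctness": 2.0}
overall = harmonic_mean(criterion_scores.values())
print(round(overall, 2))  # 2.9: dominated by the low correctness score
```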
2412.16100
Bishwamittra Ghosh
Bishwamittra Ghosh, Sarah Hasan, Naheed Anjum Arafat, Arijit Khan
Logical Consistency of Large Language Models in Fact-checking
Published at ICLR 2025
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
In recent years, large language models (LLMs) have demonstrated significant success in performing varied natural language tasks such as language translation, question-answering, summarizing, fact-checking, etc. Despite LLMs' impressive ability to generate human-like texts, LLMs are infamous for their inconsistent responses - a meaning-preserving change in the input query results in an inconsistent response, a behavior attributed to vulnerabilities of LLMs such as hallucination. Consequently, existing research focuses on simple paraphrasing-based consistency assessment of LLMs, and ignores complex queries that necessitate an even better understanding of logical reasoning by an LLM. Our work therefore addresses the logical inconsistency of LLMs under complex logical queries with primitive logical operators, e.g., negation, conjunction, and disjunction. As a test bed, we consider retrieval-augmented LLMs on a fact-checking task involving propositional logic queries from knowledge graphs (KGs). Our contributions are threefold. Benchmark: We introduce three logical fact-checking datasets over KGs for community development towards logically consistent LLMs. Assessment: We propose consistency measures of LLMs on propositional logic queries and demonstrate that existing LLMs lack logical consistency, especially on complex queries. Improvement: We employ supervised fine-tuning to improve the logical consistency of LLMs on the complex fact-checking task with KG contexts. We have made our source code and benchmarks available.
[ { "version": "v1", "created": "Fri, 20 Dec 2024 17:42:25 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 17:02:23 GMT" } ]
2025-03-03T00:00:00
[ [ "Ghosh", "Bishwamittra", "" ], [ "Hasan", "Sarah", "" ], [ "Arafat", "Naheed Anjum", "" ], [ "Khan", "Arijit", "" ] ]
TITLE: Logical Consistency of Large Language Models in Fact-checking ABSTRACT: In recent years, large language models (LLMs) have demonstrated significant success in performing varied natural language tasks such as language translation, question-answering, summarizing, fact-checking, etc. Despite LLMs' impressive ability to generate human-like texts, LLMs are infamous for their inconsistent responses - a meaning-preserving change in the input query results in an inconsistent response, a behavior attributed to vulnerabilities of LLMs such as hallucination. Consequently, existing research focuses on simple paraphrasing-based consistency assessment of LLMs, and ignores complex queries that necessitate an even better understanding of logical reasoning by an LLM. Our work therefore addresses the logical inconsistency of LLMs under complex logical queries with primitive logical operators, e.g., negation, conjunction, and disjunction. As a test bed, we consider retrieval-augmented LLMs on a fact-checking task involving propositional logic queries from knowledge graphs (KGs). Our contributions are threefold. Benchmark: We introduce three logical fact-checking datasets over KGs for community development towards logically consistent LLMs. Assessment: We propose consistency measures of LLMs on propositional logic queries and demonstrate that existing LLMs lack logical consistency, especially on complex queries. Improvement: We employ supervised fine-tuning to improve the logical consistency of LLMs on the complex fact-checking task with KG contexts. We have made our source code and benchmarks available.
new_dataset
0.960287
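A consistency measure of the kind described in the record above checks whether a model's verdicts on compound queries respect propositional logic. A toy version with a hard-coded stand-in oracle:

```python
# Toy consistency check: an LLM's verdicts on compound fact-checking queries
# should agree with the logic of its verdicts on the atomic facts.
verdict = {"A": True, "B": False, "not A": False,
           "A and B": True,   # inconsistent: requires B to be true as well
           "A or B": True}

checks = [
    ("negation", verdict["not A"] == (not verdict["A"])),
    ("conjunction", verdict["A and B"] == (verdict["A"] and verdict["B"])),
    ("disjunction", verdict["A or B"] == (verdict["A"] or verdict["B"])),
]
consistency = sum(ok for _, ok in checks) / len(checks)
print(f"consistency = {consistency:.2f}")  # 0.67: the conjunction check fails
```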
2501.04903
Nathan Phelps
Nathan Phelps, Daniel J. Lizotte, and Douglas G. Woolford
Towards understanding the bias in decision trees
null
null
null
null
stat.ML cs.LG
http://creativecommons.org/licenses/by/4.0/
There is a widespread and longstanding belief that machine learning models are biased towards the majority (or negative) class when learning from imbalanced data, leading them to neglect or ignore the minority (or positive) class. In this study, we show that this belief is not necessarily correct for decision trees, and that their bias can actually be in the opposite direction. Motivated by a recent simulation study that suggested that decision trees can be biased towards the minority class, our paper aims to reconcile the conflict between that study and decades of other works. First, we critically evaluate past literature on this problem, finding that failing to consider the data generating process has led to incorrect conclusions about the bias in decision trees. We then prove that, under specific conditions related to the predictors, decision trees fit to purity and trained on a dataset with only one positive case are biased towards the minority class. Finally, we demonstrate that splits in a decision tree are also biased when there is more than one positive case. Our findings have implications on the use of popular tree-based models, such as random forests.
[ { "version": "v1", "created": "Thu, 9 Jan 2025 01:31:30 GMT" }, { "version": "v2", "created": "Mon, 27 Jan 2025 18:22:59 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 14:03:56 GMT" } ]
2025-03-03T00:00:00
[ [ "Phelps", "Nathan", "" ], [ "Lizotte", "Daniel J.", "" ], [ "Woolford", "Douglas G.", "" ] ]
TITLE: Towards understanding the bias in decision trees ABSTRACT: There is a widespread and longstanding belief that machine learning models are biased towards the majority (or negative) class when learning from imbalanced data, leading them to neglect or ignore the minority (or positive) class. In this study, we show that this belief is not necessarily correct for decision trees, and that their bias can actually be in the opposite direction. Motivated by a recent simulation study that suggested that decision trees can be biased towards the minority class, our paper aims to reconcile the conflict between that study and decades of other works. First, we critically evaluate past literature on this problem, finding that failing to consider the data generating process has led to incorrect conclusions about the bias in decision trees. We then prove that, under specific conditions related to the predictors, decision trees fit to purity and trained on a dataset with only one positive case are biased towards the minority class. Finally, we demonstrate that splits in a decision tree are also biased when there is more than one positive case. Our findings have implications on the use of popular tree-based models, such as random forests.
no_new_dataset
0.94625
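The setting studied in the record above - a tree grown to purity on data with a single positive case - is easy to reproduce in miniature. The simulation below is illustrative only; as the paper stresses, the direction of the bias depends on the data-generating process.

```python
# Tiny simulation: one positive case at the extreme of a single predictor,
# tree grown to purity, then measure how often new points are labeled positive.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = np.zeros(200, dtype=int)
y[np.argmax(X[:, 0])] = 1            # a single positive case

tree = DecisionTreeClassifier(random_state=0).fit(X, y)  # grown to purity
X_new = rng.normal(size=(100_000, 1))
print(tree.predict(X_new).mean())    # fraction of new points predicted positive
```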
2501.06842
Tianjin Huang
Tianjin Huang, Ziquan Zhu, Gaojie Jin, Lu Liu, Zhangyang Wang, Shiwei Liu
SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training
null
null
null
null
cs.LG cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) have demonstrated exceptional performance across diverse tasks, yet their training remains highly resource-intensive and susceptible to critical challenges such as training instability. A predominant source of this instability stems from gradient and loss spikes, which disrupt the learning process, often leading to costly interventions like checkpoint recovery and experiment restarts, further amplifying inefficiencies. This paper presents a comprehensive investigation into gradient spikes observed during LLM training, revealing their prevalence across multiple architectures and datasets. Our analysis shows that these spikes can be up to $1000\times$ larger than typical gradients, substantially deteriorating model performance. To address this issue, we propose Spike-Aware Adam with Momentum Reset SPAM, a novel optimizer designed to counteract gradient spikes through momentum reset and spike-aware gradient clipping. Extensive experiments, including both pre-training and fine-tuning, demonstrate that SPAM consistently surpasses Adam and its variants across various tasks, including (1) LLM pre-training from 60M to 1B, (2) 4-bit LLM pre-training,(3) reinforcement learning, and (4) Time Series Forecasting. Additionally, SPAM facilitates memory-efficient training by enabling sparse momentum, where only a subset of momentum terms are maintained and updated. When operating under memory constraints, SPAM outperforms state-of-the-art memory-efficient optimizers such as GaLore and Adam-Mini. Our work underscores the importance of mitigating gradient spikes in LLM training and introduces an effective optimization strategy that enhances both training stability and resource efficiency at scale. Code is available at https://github.com/TianjinYellow/SPAM-Optimizer.git
[ { "version": "v1", "created": "Sun, 12 Jan 2025 15:21:22 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 15:15:31 GMT" } ]
2025-03-03T00:00:00
[ [ "Huang", "Tianjin", "" ], [ "Zhu", "Ziquan", "" ], [ "Jin", "Gaojie", "" ], [ "Liu", "Lu", "" ], [ "Wang", "Zhangyang", "" ], [ "Liu", "Shiwei", "" ] ]
TITLE: SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training ABSTRACT: Large Language Models (LLMs) have demonstrated exceptional performance across diverse tasks, yet their training remains highly resource-intensive and susceptible to critical challenges such as training instability. A predominant source of this instability stems from gradient and loss spikes, which disrupt the learning process, often leading to costly interventions like checkpoint recovery and experiment restarts, further amplifying inefficiencies. This paper presents a comprehensive investigation into gradient spikes observed during LLM training, revealing their prevalence across multiple architectures and datasets. Our analysis shows that these spikes can be up to $1000\times$ larger than typical gradients, substantially deteriorating model performance. To address this issue, we propose Spike-Aware Adam with Momentum Reset SPAM, a novel optimizer designed to counteract gradient spikes through momentum reset and spike-aware gradient clipping. Extensive experiments, including both pre-training and fine-tuning, demonstrate that SPAM consistently surpasses Adam and its variants across various tasks, including (1) LLM pre-training from 60M to 1B, (2) 4-bit LLM pre-training,(3) reinforcement learning, and (4) Time Series Forecasting. Additionally, SPAM facilitates memory-efficient training by enabling sparse momentum, where only a subset of momentum terms are maintained and updated. When operating under memory constraints, SPAM outperforms state-of-the-art memory-efficient optimizers such as GaLore and Adam-Mini. Our work underscores the importance of mitigating gradient spikes in LLM training and introduces an effective optimization strategy that enhances both training stability and resource efficiency at scale. Code is available at https://github.com/TianjinYellow/SPAM-Optimizer.git
no_new_dataset
0.947137
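The two mechanisms named in the record above - momentum reset and spike-aware gradient clipping - can be sketched on top of Adam-style moment updates. The clipping rule, threshold, and reset interval below are assumptions for illustration, not the paper's settings.

```python
# SPAM-flavored step: periodically reset moments; clip gradient entries that
# spike far beyond their running second-moment scale.
import torch

def spam_like_step(p, grad, state, lr=1e-3, betas=(0.9, 0.999),
                   eps=1e-8, spike_factor=50.0, reset_every=500):
    state["t"] += 1
    if state["t"] % reset_every == 0:
        state["m"].zero_()                    # momentum reset
        state["v"].zero_()
    if bool((state["v"] > 0).any()):          # skip clipping right after a reset
        scale = state["v"].sqrt() + eps
        grad = torch.clamp(grad, min=-spike_factor * scale,
                           max=spike_factor * scale)
    state["m"].mul_(betas[0]).add_(grad, alpha=1 - betas[0])
    state["v"].mul_(betas[1]).addcmul_(grad, grad, value=1 - betas[1])
    m_hat = state["m"] / (1 - betas[0] ** state["t"])
    v_hat = state["v"] / (1 - betas[1] ** state["t"])
    p.data.add_(m_hat / (v_hat.sqrt() + eps), alpha=-lr)

p = torch.nn.Parameter(torch.randn(10))
state = {"t": 0, "m": torch.zeros(10), "v": torch.zeros(10)}
for _ in range(3):
    spam_like_step(p, torch.randn(10), state)
print(p.data[:3])
```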
2501.09768
Mohamed Bayan Kmainasi
Mohamed Bayan Kmainasi, Ali Ezzat Shahroor, Amani Al-Ghraibah
Can Large Language Models Predict the Outcome of Judicial Decisions?
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) have shown exceptional capabilities in Natural Language Processing (NLP) across diverse domains. However, their application in specialized tasks such as Legal Judgment Prediction (LJP) for low-resource languages like Arabic remains underexplored. In this work, we address this gap by developing an Arabic LJP dataset, collected and preprocessed from Saudi commercial court judgments. We benchmark state-of-the-art open-source LLMs, including LLaMA-3.2-3B and LLaMA-3.1-8B, under varying configurations such as zero-shot, one-shot, and fine-tuning using LoRA. Additionally, we employed a comprehensive evaluation framework that integrates both quantitative metrics (such as BLEU, ROUGE, and BERT) and qualitative assessments (including Coherence, Legal Language, Clarity, etc.) using an LLM. Our results demonstrate that fine-tuned smaller models achieve comparable performance to larger models in task-specific contexts while offering significant resource efficiency. Furthermore, we investigate the impact of fine-tuning the model on a diverse set of instructions, offering valuable insights into the development of a more human-centric and adaptable LLM. We have made the dataset, code, and models publicly available to provide a solid foundation for future research in Arabic legal NLP.
[ { "version": "v1", "created": "Wed, 15 Jan 2025 11:32:35 GMT" }, { "version": "v2", "created": "Wed, 5 Feb 2025 12:17:36 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 18:27:21 GMT" } ]
2025-03-03T00:00:00
[ [ "Kmainasi", "Mohamed Bayan", "" ], [ "Shahroor", "Ali Ezzat", "" ], [ "Al-Ghraibah", "Amani", "" ] ]
TITLE: Can Large Language Models Predict the Outcome of Judicial Decisions? ABSTRACT: Large Language Models (LLMs) have shown exceptional capabilities in Natural Language Processing (NLP) across diverse domains. However, their application in specialized tasks such as Legal Judgment Prediction (LJP) for low-resource languages like Arabic remains underexplored. In this work, we address this gap by developing an Arabic LJP dataset, collected and preprocessed from Saudi commercial court judgments. We benchmark state-of-the-art open-source LLMs, including LLaMA-3.2-3B and LLaMA-3.1-8B, under varying configurations such as zero-shot, one-shot, and fine-tuning using LoRA. Additionally, we employed a comprehensive evaluation framework that integrates both quantitative metrics (such as BLEU, ROUGE, and BERT) and qualitative assessments (including Coherence, Legal Language, Clarity, etc.) using an LLM. Our results demonstrate that fine-tuned smaller models achieve comparable performance to larger models in task-specific contexts while offering significant resource efficiency. Furthermore, we investigate the impact of fine-tuning the model on a diverse set of instructions, offering valuable insights into the development of a more human-centric and adaptable LLM. We have made the dataset, code, and models publicly available to provide a solid foundation for future research in Arabic legal NLP.
new_dataset
0.955858
2501.12087
Branislava Jankovic
Branislava Jankovic, Sabina Jangirova, Waseem Ullah, Latif U. Khan, Mohsen Guizani
UAV-Assisted Real-Time Disaster Detection Using Optimized Transformer Model
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dangerous surroundings and difficult-to-reach landscapes introduce significant complications for adequate disaster management and recuperation. These problems can be solved by engaging unmanned aerial vehicles (UAVs) provided with embedded platforms and optical sensors. In this work, we focus on enabling onboard aerial image processing to ensure proper and real-time disaster detection. Such a setting usually causes challenges due to the limited hardware resources of UAVs. However, privacy, connectivity, and latency issues can be avoided. We suggest a UAV-assisted edge framework for disaster detection, leveraging our proposed model optimized for onboard real-time aerial image classification. The optimization of the model is achieved using post-training quantization techniques. To address the limited number of disaster cases in existing benchmark datasets and therefore ensure real-world adoption of our model, we construct a novel dataset, DisasterEye, featuring disaster scenes captured by UAVs and individuals on-site. Experimental results reveal the efficacy of our model, reaching high accuracy with lowered inference latency and memory use on both traditional machines and resource-limited devices. This shows that the scalability and adaptability of our method make it a powerful solution for real-time disaster management on resource-constrained UAV platforms.
[ { "version": "v1", "created": "Tue, 21 Jan 2025 12:29:45 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 10:42:30 GMT" } ]
2025-03-03T00:00:00
[ [ "Jankovic", "Branislava", "" ], [ "Jangirova", "Sabina", "" ], [ "Ullah", "Waseem", "" ], [ "Khan", "Latif U.", "" ], [ "Guizani", "Mohsen", "" ] ]
TITLE: UAV-Assisted Real-Time Disaster Detection Using Optimized Transformer Model ABSTRACT: Dangerous surroundings and difficult-to-reach landscapes introduce significant complications for adequate disaster management and recuperation. These problems can be solved by engaging unmanned aerial vehicles (UAVs) provided with embedded platforms and optical sensors. In this work, we focus on enabling onboard aerial image processing to ensure proper and real-time disaster detection. Such a setting usually causes challenges due to the limited hardware resources of UAVs. However, privacy, connectivity, and latency issues can be avoided. We suggest a UAV-assisted edge framework for disaster detection, leveraging our proposed model optimized for onboard real-time aerial image classification. The optimization of the model is achieved using post-training quantization techniques. To address the limited number of disaster cases in existing benchmark datasets and therefore ensure real-world adoption of our model, we construct a novel dataset, DisasterEye, featuring disaster scenes captured by UAVs and individuals on-site. Experimental results reveal the efficacy of our model, reaching high accuracy with lowered inference latency and memory use on both traditional machines and resource-limited devices. This shows that the scalability and adaptability of our method make it a powerful solution for real-time disaster management on resource-constrained UAV platforms.
new_dataset
0.959345
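Post-training quantization of the kind mentioned in the record above is a one-line transformation in PyTorch; the model here is a stand-in for the paper's transformer classifier, and dynamic int8 quantization is just one of several post-training schemes.

```python
# Post-training dynamic quantization: linear-layer weights stored in int8.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 5))
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 5])
```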
2501.15089
Zhan Ling
Zhan Ling, Kang Liu, Kai Yan, Yifan Yang, Weijian Lin, Ting-Han Fan, Lingfeng Shen, Zhengyin Du, Jiecao Chen
LongReason: A Synthetic Long-Context Reasoning Benchmark via Context Expansion
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) have demonstrated remarkable progress in understanding long-context inputs. However, benchmarks for evaluating the long-context reasoning abilities of LLMs fall behind the pace. Existing benchmarks often focus on a narrow range of tasks or those that do not demand complex reasoning. To address this gap and enable a more comprehensive evaluation of the long-context reasoning capabilities of current LLMs, we propose a new synthetic benchmark, LongReason, which is constructed by synthesizing long-context reasoning questions from a varied set of short-context reasoning questions through context expansion. LongReason consists of 794 multiple-choice reasoning questions with diverse reasoning patterns across three task categories: reading comprehension, logical inference, and mathematical word problems. We evaluate 21 LLMs on LongReason, revealing that most models experience significant performance drops as context length increases. Our further analysis shows that even state-of-the-art LLMs still have significant room for improvement in providing robust reasoning across different tasks. We have open-sourced LongReason under https://huggingface.co/datasets/lz1bytedance/LongReason to support the comprehensive evaluation of LLMs' long-context reasoning capabilities.
[ { "version": "v1", "created": "Sat, 25 Jan 2025 05:32:14 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 07:53:20 GMT" } ]
2025-03-03T00:00:00
[ [ "Ling", "Zhan", "" ], [ "Liu", "Kang", "" ], [ "Yan", "Kai", "" ], [ "Yang", "Yifan", "" ], [ "Lin", "Weijian", "" ], [ "Fan", "Ting-Han", "" ], [ "Shen", "Lingfeng", "" ], [ "Du", "Zhengyin", "" ], [ "Chen", "Jiecao", "" ] ]
TITLE: LongReason: A Synthetic Long-Context Reasoning Benchmark via Context Expansion ABSTRACT: Large language models (LLMs) have demonstrated remarkable progress in understanding long-context inputs. However, benchmarks for evaluating the long-context reasoning abilities of LLMs fall behind the pace. Existing benchmarks often focus on a narrow range of tasks or those that do not demand complex reasoning. To address this gap and enable a more comprehensive evaluation of the long-context reasoning capabilities of current LLMs, we propose a new synthetic benchmark, LongReason, which is constructed by synthesizing long-context reasoning questions from a varied set of short-context reasoning questions through context expansion. LongReason consists of 794 multiple-choice reasoning questions with diverse reasoning patterns across three task categories: reading comprehension, logical inference, and mathematical word problems. We evaluate 21 LLMs on LongReason, revealing that most models experience significant performance drops as context length increases. Our further analysis shows that even state-of-the-art LLMs still have significant room for improvement in providing robust reasoning across different tasks. We have open-sourced LongReason under https://huggingface.co/datasets/lz1bytedance/LongReason to support the comprehensive evaluation of LLMs' long-context reasoning capabilities.
new_dataset
0.964288
2501.15296
Ayan Sengupta
Ayan Sengupta, Siddhant Chaudhary, Tanmoy Chakraborty
You Only Prune Once: Designing Calibration-Free Model Compression With Policy Learning
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The ever-increasing size of large language models (LLMs) presents significant challenges for deployment due to their heavy computational and memory requirements. Current model pruning techniques attempt to alleviate these issues by relying heavily on external calibration datasets to determine which parameters to prune or compress, thus limiting their flexibility and scalability across different compression ratios. Moreover, these methods often cause severe performance degradation, particularly in downstream tasks, when subjected to higher compression rates. In this paper, we propose PruneNet, a novel model compression method that addresses these limitations by reformulating model pruning as a policy learning process. PruneNet decouples the pruning process from the model architecture, eliminating the need for calibration datasets. It learns a stochastic pruning policy to assess parameter importance solely based on intrinsic model properties while preserving the spectral structure to minimize information loss. PruneNet can compress the LLaMA-2-7B model in just 15 minutes, achieving over 80% retention of its zero-shot performance with a 30% compression ratio, outperforming existing methods that retain only 75% performance. Furthermore, on complex multitask language understanding tasks, PruneNet demonstrates its robustness by preserving up to 80% performance of the original model, proving itself a superior alternative to conventional structured compression techniques.
[ { "version": "v1", "created": "Sat, 25 Jan 2025 18:26:39 GMT" }, { "version": "v2", "created": "Wed, 19 Feb 2025 06:34:23 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 15:23:40 GMT" } ]
2025-03-03T00:00:00
[ [ "Sengupta", "Ayan", "" ], [ "Chaudhary", "Siddhant", "" ], [ "Chakraborty", "Tanmoy", "" ] ]
TITLE: You Only Prune Once: Designing Calibration-Free Model Compression With Policy Learning ABSTRACT: The ever-increasing size of large language models (LLMs) presents significant challenges for deployment due to their heavy computational and memory requirements. Current model pruning techniques attempt to alleviate these issues by relying heavily on external calibration datasets to determine which parameters to prune or compress, thus limiting their flexibility and scalability across different compression ratios. Moreover, these methods often cause severe performance degradation, particularly in downstream tasks, when subjected to higher compression rates. In this paper, we propose PruneNet, a novel model compression method that addresses these limitations by reformulating model pruning as a policy learning process. PruneNet decouples the pruning process from the model architecture, eliminating the need for calibration datasets. It learns a stochastic pruning policy to assess parameter importance solely based on intrinsic model properties while preserving the spectral structure to minimize information loss. PruneNet can compress the LLaMA-2-7B model in just 15 minutes, achieving over 80% retention of its zero-shot performance with a 30% compression ratio, outperforming existing methods that retain only 75% performance. Furthermore, on complex multitask language understanding tasks, PruneNet demonstrates its robustness by preserving up to 80% performance of the original model, proving itself a superior alternative to conventional structured compression techniques.
no_new_dataset
0.944177
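PruneNet, per the record above, learns a stochastic pruning policy from intrinsic model properties. As a far simpler calibration-free stand-in, the sketch below ranks output rows of a linear layer by weight norm - an intrinsic property requiring no calibration data - and keeps the top fraction. This is plain magnitude pruning, not the paper's policy-learning method.

```python
# Calibration-free structured pruning: keep the highest-norm output rows.
import torch
import torch.nn as nn

def prune_rows(layer: nn.Linear, keep_ratio: float = 0.7) -> nn.Linear:
    norms = layer.weight.norm(dim=1)                 # intrinsic importance
    k = max(1, int(keep_ratio * layer.out_features))
    idx = norms.topk(k).indices.sort().values
    pruned = nn.Linear(layer.in_features, k, bias=layer.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(layer.weight[idx])
        if layer.bias is not None:
            pruned.bias.copy_(layer.bias[idx])
    return pruned

layer = nn.Linear(128, 64)
print(prune_rows(layer).weight.shape)  # torch.Size([44, 128])
```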
2501.15889
Federico Errica
Federico Errica, Henrik Christiansen, Viktor Zaverkin, Mathias Niepert, Francesco Alesiani
Adaptive Width Neural Networks
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For almost 70 years, researchers have mostly relied on hyper-parameter tuning to pick the width of neural networks' layers out of many possible choices. This paper challenges the status quo by introducing an easy-to-use technique to learn an unbounded width of a neural network's layer during training. The technique does not rely on alternate optimization nor hand-crafted gradient heuristics; rather, it jointly optimizes the width and the parameters of each layer via simple backpropagation. We apply the technique to a broad range of data domains such as tables, images, texts, and graphs, showing how the width adapts to the task's difficulty. By imposing a soft ordering of importance among neurons, it is possible to truncate the trained network at virtually zero cost, achieving a smooth trade-off between performance and compute resources in a structured way. Alternatively, one can dynamically compress the network with no performance degradation. In light of recent foundation models trained on large datasets, believed to require billions of parameters and where hyper-parameter tuning is unfeasible due to huge training costs, our approach stands as a viable alternative for width learning.
[ { "version": "v1", "created": "Mon, 27 Jan 2025 09:25:56 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 08:28:08 GMT" } ]
2025-03-03T00:00:00
[ [ "Errica", "Federico", "" ], [ "Christiansen", "Henrik", "" ], [ "Zaverkin", "Viktor", "" ], [ "Niepert", "Mathias", "" ], [ "Alesiani", "Francesco", "" ] ]
TITLE: Adaptive Width Neural Networks ABSTRACT: For almost 70 years, researchers have mostly relied on hyper-parameter tuning to pick the width of neural networks' layers out of many possible choices. This paper challenges the status quo by introducing an easy-to-use technique to learn an unbounded width of a neural network's layer during training. The technique does not rely on alternate optimization nor hand-crafted gradient heuristics; rather, it jointly optimizes the width and the parameters of each layer via simple backpropagation. We apply the technique to a broad range of data domains such as tables, images, texts, and graphs, showing how the width adapts to the task's difficulty. By imposing a soft ordering of importance among neurons, it is possible to truncate the trained network at virtually zero cost, achieving a smooth trade-off between performance and compute resources in a structured way. Alternatively, one can dynamically compress the network with no performance degradation. In light of recent foundation models trained on large datasets, believed to require billions of parameters and where hyper-parameter tuning is unfeasible due to huge training costs, our approach stands as a viable alternative for width learning.
no_new_dataset
0.946151
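One way to realize the "soft ordering of importance among neurons" described in the record above is a learnable gate that decays with neuron index, so the layer can later be truncated where the gates fall off. The gating form below is an assumption, not the paper's parameterization.

```python
# A layer whose effective width is controlled by one learnable scalar:
# gates decay with neuron index, imposing a soft importance ordering.
import torch
import torch.nn as nn

class SoftWidthLayer(nn.Module):
    def __init__(self, d_in: int, d_max: int):
        super().__init__()
        self.lin = nn.Linear(d_in, d_max)
        self.theta = nn.Parameter(torch.tensor(5.0))  # sets effective width

    def gates(self) -> torch.Tensor:
        idx = torch.arange(self.lin.out_features, dtype=torch.float32)
        return torch.sigmoid(self.theta - 0.1 * idx)  # decays with index

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.lin(x)) * self.gates()

layer = SoftWidthLayer(16, 128)
print((layer.gates() > 0.5).sum().item())  # 50: current effective width
```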
2501.16239
Antoine Olivier
Alexandre Filiot, Nicolas Dop, Oussama Tchita, Auriane Riou, R\'emy Dubois, Thomas Peeters, Daria Valter, Marin Scalbert, Charlie Saillard, Genevi\`eve Robin, Antoine Olivier
Distilling foundation models for robust and efficient models in digital pathology
Preprint
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
In recent years, the advent of foundation models (FM) for digital pathology has relied heavily on scaling the pre-training datasets and the model size, yielding large and powerful models. While it resulted in improving the performance on diverse downstream tasks, it also introduced increased computational cost and inference time. In this work, we explore the distillation of a large foundation model into a smaller one, reducing the number of parameters by several orders of magnitude. Leveraging distillation techniques, our distilled model, H0-mini, achieves nearly comparable performance to large FMs at a significantly reduced inference cost. It is evaluated on several public benchmarks, achieving 3rd place on the HEST benchmark and 5th place on the EVA benchmark. Additionally, a robustness analysis conducted on the PLISM dataset demonstrates that our distilled model reaches excellent robustness to variations in staining and scanning conditions, significantly outperforming other state-of-the-art models. This opens new perspectives to design lightweight and robust models for digital pathology, without compromising on performance.
[ { "version": "v1", "created": "Mon, 27 Jan 2025 17:35:39 GMT" }, { "version": "v2", "created": "Tue, 28 Jan 2025 17:09:41 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 15:44:24 GMT" } ]
2025-03-03T00:00:00
[ [ "Filiot", "Alexandre", "" ], [ "Dop", "Nicolas", "" ], [ "Tchita", "Oussama", "" ], [ "Riou", "Auriane", "" ], [ "Dubois", "Rémy", "" ], [ "Peeters", "Thomas", "" ], [ "Valter", "Daria", "" ], [ "Scalbert", "Marin", "" ], [ "Saillard", "Charlie", "" ], [ "Robin", "Geneviève", "" ], [ "Olivier", "Antoine", "" ] ]
TITLE: Distilling foundation models for robust and efficient models in digital pathology ABSTRACT: In recent years, the advent of foundation models (FM) for digital pathology has relied heavily on scaling the pre-training datasets and the model size, yielding large and powerful models. While it resulted in improving the performance on diverse downstream tasks, it also introduced increased computational cost and inference time. In this work, we explore the distillation of a large foundation model into a smaller one, reducing the number of parameters by several orders of magnitude. Leveraging distillation techniques, our distilled model, H0-mini, achieves nearly comparable performance to large FMs at a significantly reduced inference cost. It is evaluated on several public benchmarks, achieving 3rd place on the HEST benchmark and 5th place on the EVA benchmark. Additionally, a robustness analysis conducted on the PLISM dataset demonstrates that our distilled model reaches excellent robustness to variations in staining and scanning conditions, significantly outperforming other state-of-the-art models. This opens new perspectives to design lightweight and robust models for digital pathology, without compromising on performance.
no_new_dataset
0.948632
2502.01674
Akhilbaran Ghosh
Priyam Ganguly, Akhilbaran Ghosh
Efficient Brain Tumor Classification with Lightweight CNN Architecture: A Novel Approach
Accepted in FMLDS 2024
2024 IEEE International Conference on Future Machine Learning and Data Science (FMLDS)
10.1109/FMLDS63805.2024.00065
null
eess.IV cs.CV
http://creativecommons.org/licenses/by/4.0/
Brain tumor classification using MRI images is critical in medical diagnostics, where early and accurate detection significantly impacts patient outcomes. While recent advancements in deep learning (DL), particularly CNNs, have shown promise, many models struggle with balancing accuracy and computational efficiency and often lack robustness across diverse datasets. To address these challenges, we propose a novel model architecture integrating separable convolutions and squeeze-and-excitation (SE) blocks, designed to enhance feature extraction while maintaining computational efficiency. Our model further incorporates batch normalization and dropout to prevent overfitting, ensuring stable and reliable performance. The proposed model is lightweight because it uses separable convolutions, which reduce the number of parameters, and incorporates global average pooling instead of fully connected layers to minimize computational complexity while maintaining high accuracy. As shown by extensive experiments, our model outperforms comparable models by about 0.5% to 1.0% in accuracy and 1.5% to 2.5% in loss reduction. It achieves a validation accuracy of 99.22% and a test accuracy of 98.44%. These results highlight the model's ability to generalize effectively across different brain tumor types, offering robust tools for clinical applications. Our work sets a new benchmark in the field, providing a foundation for future research in optimizing the accuracy and efficiency of DL models for medical image analysis.
[ { "version": "v1", "created": "Sat, 1 Feb 2025 21:06:42 GMT" } ]
2025-03-03T00:00:00
[ [ "Ganguly", "Priyam", "" ], [ "Ghosh", "Akhilbaran", "" ] ]
TITLE: Efficient Brain Tumor Classification with Lightweight CNN Architecture: A Novel Approach ABSTRACT: Brain tumor classification using MRI images is critical in medical diagnostics, where early and accurate detection significantly impacts patient outcomes. While recent advancements in deep learning (DL), particularly CNNs, have shown promise, many models struggle with balancing accuracy and computational efficiency and often lack robustness across diverse datasets. To address these challenges, we propose a novel model architecture integrating separable convolutions and squeeze-and-excitation (SE) blocks, designed to enhance feature extraction while maintaining computational efficiency. Our model further incorporates batch normalization and dropout to prevent overfitting, ensuring stable and reliable performance. The proposed model is lightweight because it uses separable convolutions, which reduce the number of parameters, and incorporates global average pooling instead of fully connected layers to minimize computational complexity while maintaining high accuracy. As shown by extensive experiments, our model outperforms comparable models by about 0.5% to 1.0% in accuracy and 1.5% to 2.5% in loss reduction. It achieves a validation accuracy of 99.22% and a test accuracy of 98.44%. These results highlight the model's ability to generalize effectively across different brain tumor types, offering robust tools for clinical applications. Our work sets a new benchmark in the field, providing a foundation for future research in optimizing the accuracy and efficiency of DL models for medical image analysis.
no_new_dataset
0.949059
2502.06136
Sagar Barad
Rucha Bhalchandra Joshi, Sagar Prakash Barad, Nidhi Tiwari and Subhankar Mishra
Graph Neural Networks at a Fraction
12 pages, 2 figures, accepted at PAKDD 2025
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Neural Networks (GNNs) have emerged as powerful tools for learning representations of graph-structured data. In addition to real-valued GNNs, quaternion GNNs also perform well on tasks involving graph-structured data. With the aim of reducing the energy footprint, we reduce the model size while maintaining accuracy comparable to that of the original-sized GNNs. This paper introduces Quaternion Message Passing Neural Networks (QMPNNs), a framework that leverages quaternion space to compute node representations. Our approach offers a generalizable method for incorporating quaternion representations into GNN architectures at one-fourth of the original parameter count. Furthermore, we present a novel perspective on Graph Lottery Tickets, redefining their applicability within the context of GNNs and QMPNNs. We specifically aim to find the initialization lottery from the subnetwork of the GNNs that can achieve comparable performance to the original GNN upon training, thereby reducing the trainable model parameters even further. To validate the effectiveness of our proposed QMPNN framework and LTH for both GNNs and QMPNNs, we evaluate their performance on real-world datasets across three fundamental graph-based tasks: node classification, link prediction, and graph classification.
[ { "version": "v1", "created": "Mon, 10 Feb 2025 03:55:09 GMT" }, { "version": "v2", "created": "Tue, 11 Feb 2025 06:30:25 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 06:26:53 GMT" } ]
2025-03-03T00:00:00
[ [ "Joshi", "Rucha Bhalchandra", "" ], [ "Barad", "Sagar Prakash", "" ], [ "Tiwari", "Nidhi", "" ], [ "Mishra", "Subhankar", "" ] ]
TITLE: Graph Neural Networks at a Fraction ABSTRACT: Graph Neural Networks (GNNs) have emerged as powerful tools for learning representations of graph-structured data. In addition to real-valued GNNs, quaternion GNNs also perform well on tasks involving graph-structured data. With the aim of reducing the energy footprint, we reduce the model size while maintaining accuracy comparable to that of the original-sized GNNs. This paper introduces Quaternion Message Passing Neural Networks (QMPNNs), a framework that leverages quaternion space to compute node representations. Our approach offers a generalizable method for incorporating quaternion representations into GNN architectures at one-fourth of the original parameter count. Furthermore, we present a novel perspective on Graph Lottery Tickets, redefining their applicability within the context of GNNs and QMPNNs. We specifically aim to find the initialization lottery from the subnetwork of the GNNs that can achieve comparable performance to the original GNN upon training, thereby reducing the trainable model parameters even further. To validate the effectiveness of our proposed QMPNN framework and LTH for both GNNs and QMPNNs, we evaluate their performance on real-world datasets across three fundamental graph-based tasks: node classification, link prediction, and graph classification.
no_new_dataset
0.950595
2502.07138
Girish A. Koushik
Girish A. Koushik, Diptesh Kanojia, Helen Treharne
Towards a Robust Framework for Multimodal Hate Detection: A Study on Video vs. Image-based Content
Accepted to the MM4SG Workshop at the WebConf 2025
Companion Proceedings of the ACM Web Conference 2025 (WWW Companion '25), April 28-May 2, 2025, Sydney, NSW, Australia
10.1145/3701716.3718382
979-8-4007-1331-6/2025/04
cs.CV cs.CL cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Social media platforms enable the propagation of hateful content across different modalities such as textual, auditory, and visual, necessitating effective detection methods. While recent approaches have shown promise in handling individual modalities, their effectiveness across different modality combinations remains unexplored. This paper presents a systematic analysis of fusion-based approaches for multimodal hate detection, focusing on their performance across video and image-based content. Our comprehensive evaluation reveals significant modality-specific limitations: while simple embedding fusion achieves state-of-the-art performance on video content (HateMM dataset) with a 9.9 percentage-point F1-score improvement, it struggles with complex image-text relationships in memes (Hateful Memes dataset). Through detailed ablation studies and error analysis, we demonstrate how current fusion approaches fail to capture nuanced cross-modal interactions, particularly in cases involving benign confounders. Our findings provide crucial insights for developing more robust hate detection systems and highlight the need for modality-specific architectural considerations. The code is available at https://github.com/gak97/Video-vs-Meme-Hate.
[ { "version": "v1", "created": "Tue, 11 Feb 2025 00:07:40 GMT" } ]
2025-03-03T00:00:00
[ [ "Koushik", "Girish A.", "" ], [ "Kanojia", "Diptesh", "" ], [ "Treharne", "Helen", "" ] ]
TITLE: Towards a Robust Framework for Multimodal Hate Detection: A Study on Video vs. Image-based Content ABSTRACT: Social media platforms enable the propagation of hateful content across different modalities such as textual, auditory, and visual, necessitating effective detection methods. While recent approaches have shown promise in handling individual modalities, their effectiveness across different modality combinations remains unexplored. This paper presents a systematic analysis of fusion-based approaches for multimodal hate detection, focusing on their performance across video and image-based content. Our comprehensive evaluation reveals significant modality-specific limitations: while simple embedding fusion achieves state-of-the-art performance on video content (HateMM dataset) with a 9.9 percentage-point F1-score improvement, it struggles with complex image-text relationships in memes (Hateful Memes dataset). Through detailed ablation studies and error analysis, we demonstrate how current fusion approaches fail to capture nuanced cross-modal interactions, particularly in cases involving benign confounders. Our findings provide crucial insights for developing more robust hate detection systems and highlight the need for modality-specific architectural considerations. The code is available at https://github.com/gak97/Video-vs-Meme-Hate.
no_new_dataset
0.944995
2502.10636
Hamed Rahimi
Hamed Rahimi, Adil Bahaj, Mouad Abrini, Mahdi Khoramshahi, Mounir Ghogho, Mohamed Chetouani
USER-VLM 360: Personalized Vision Language Models with User-aware Tuning for Social Human-Robot Interactions
null
null
null
null
cs.AI cs.HC cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
The integration of vision-language models into robotic systems constitutes a significant advancement in enabling machines to interact with their surroundings in a more intuitive manner. While VLMs offer rich multimodal reasoning, existing approaches lack user-specific adaptability, often relying on generic interaction paradigms that fail to account for individual behavioral, contextual, or socio-emotional nuances. When customization is attempted, ethical concerns arise from unmitigated biases in user data, risking exclusion or unfair treatment. To address these dual challenges, we propose User-VLM 360{\deg}, a holistic framework integrating multimodal user modeling with bias-aware optimization. Our approach features: (1) user-aware tuning that adapts interactions in real time using visual-linguistic signals; (2) bias mitigation via preference optimization; and (3) curated 360{\deg} socio-emotive interaction datasets annotated with demographic, emotion, and relational metadata. Evaluations across eight benchmarks demonstrate state-of-the-art results: +35.3% F1 in personalized VQA, +47.5% F1 in facial features understanding, 15% bias reduction, and 30X speedup over baselines. Ablation studies confirm component efficacy, and deployment on the Pepper robot validates real-time adaptability across diverse users. We open-source parameter-efficient 3B/10B models and an ethical verification framework for responsible adaptation.
[ { "version": "v1", "created": "Sat, 15 Feb 2025 02:25:49 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 09:38:19 GMT" } ]
2025-03-03T00:00:00
[ [ "Rahimi", "Hamed", "" ], [ "Bahaj", "Adil", "" ], [ "Abrini", "Mouad", "" ], [ "Khoramshahi", "Mahdi", "" ], [ "Ghogho", "Mounir", "" ], [ "Chetouani", "Mohamed", "" ] ]
TITLE: USER-VLM 360: Personalized Vision Language Models with User-aware Tuning for Social Human-Robot Interactions ABSTRACT: The integration of vision-language models into robotic systems constitutes a significant advancement in enabling machines to interact with their surroundings in a more intuitive manner. While VLMs offer rich multimodal reasoning, existing approaches lack user-specific adaptability, often relying on generic interaction paradigms that fail to account for individual behavioral, contextual, or socio-emotional nuances. When customization is attempted, ethical concerns arise from unmitigated biases in user data, risking exclusion or unfair treatment. To address these dual challenges, we propose User-VLM 360{\deg}, a holistic framework integrating multimodal user modeling with bias-aware optimization. Our approach features: (1) user-aware tuning that adapts interactions in real time using visual-linguistic signals; (2) bias mitigation via preference optimization; and (3) curated 360{\deg} socio-emotive interaction datasets annotated with demographic, emotion, and relational metadata. Evaluations across eight benchmarks demonstrate state-of-the-art results: +35.3% F1 in personalized VQA, +47.5% F1 in facial features understanding, 15% bias reduction, and 30X speedup over baselines. Ablation studies confirm component efficacy, and deployment on the Pepper robot validates real-time adaptability across diverse users. We open-source parameter-efficient 3B/10B models and an ethical verification framework for responsible adaptation.
no_new_dataset
0.942981
2502.11037
Xin Gao
Xin Gao, Jian Pu
Deep Incomplete Multi-view Learning via Cyclic Permutation of VAEs
10 pages, 4 figures, ICLR 2025
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Multi-View Representation Learning (MVRL) aims to derive a unified representation from multi-view data by leveraging shared and complementary information across views. However, when views are irregularly missing, the incomplete data can lead to representations that lack sufficiency and consistency. To address this, we propose Multi-View Permutation of Variational Auto-Encoders (MVP), which excavates invariant relationships between views in incomplete data. MVP establishes inter-view correspondences in the latent space of Variational Auto-Encoders, enabling the inference of missing views and the aggregation of more sufficient information. To derive a valid Evidence Lower Bound (ELBO) for learning, we apply permutations to randomly reorder variables for cross-view generation and then partition them by views to maintain invariant meanings under permutations. Additionally, we enhance consistency by introducing an informational prior with cyclic permutations of posteriors, which turns the regularization term into a similarity measure across distributions. We demonstrate the effectiveness of our approach on seven diverse datasets with varying missing ratios, achieving superior performance in multi-view clustering and generation tasks.
[ { "version": "v1", "created": "Sun, 16 Feb 2025 08:36:43 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 06:04:20 GMT" } ]
2025-03-03T00:00:00
[ [ "Gao", "Xin", "" ], [ "Pu", "Jian", "" ] ]
TITLE: Deep Incomplete Multi-view Learning via Cyclic Permutation of VAEs ABSTRACT: Multi-View Representation Learning (MVRL) aims to derive a unified representation from multi-view data by leveraging shared and complementary information across views. However, when views are irregularly missing, the incomplete data can lead to representations that lack sufficiency and consistency. To address this, we propose Multi-View Permutation of Variational Auto-Encoders (MVP), which excavates invariant relationships between views in incomplete data. MVP establishes inter-view correspondences in the latent space of Variational Auto-Encoders, enabling the inference of missing views and the aggregation of more sufficient information. To derive a valid Evidence Lower Bound (ELBO) for learning, we apply permutations to randomly reorder variables for cross-view generation and then partition them by views to maintain invariant meanings under permutations. Additionally, we enhance consistency by introducing an informational prior with cyclic permutations of posteriors, which turns the regularization term into a similarity measure across distributions. We demonstrate the effectiveness of our approach on seven diverse datasets with varying missing ratios, achieving superior performance in multi-view clustering and generation tasks.
no_new_dataset
0.946695
2502.11742
Jianyi Peng
Jianyi Peng, Fan Lu, Bin Li, Yuan Huang, Sanqing Qu, Guang Chen
Range and Bird's Eye View Fused Cross-Modal Visual Place Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image-to-point cloud cross-modal Visual Place Recognition (VPR) is a challenging task where the query is an RGB image, and the database samples are LiDAR point clouds. Compared to single-modal VPR, this approach benefits from the widespread availability of RGB cameras and the robustness of point clouds in providing accurate spatial geometry and distance information. However, current methods rely on intermediate modalities that capture either the vertical or horizontal field of view, limiting their ability to fully exploit the complementary information from both sensors. In this work, we propose an innovative initial retrieval + re-rank method that effectively combines information from range (or RGB) images and Bird's Eye View (BEV) images. Our approach relies solely on a computationally efficient global descriptor similarity search process to achieve re-ranking. Additionally, we introduce a novel similarity label supervision technique to maximize the utility of limited training data. Specifically, we employ the average distance between points to approximate appearance similarity and incorporate an adaptive margin, based on similarity differences, into the vanilla triplet loss. Experimental results on the KITTI dataset demonstrate that our method significantly outperforms state-of-the-art approaches.
[ { "version": "v1", "created": "Mon, 17 Feb 2025 12:29:26 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 10:10:21 GMT" } ]
2025-03-03T00:00:00
[ [ "Peng", "Jianyi", "" ], [ "Lu", "Fan", "" ], [ "Li", "Bin", "" ], [ "Huang", "Yuan", "" ], [ "Qu", "Sanqing", "" ], [ "Chen", "Guang", "" ] ]
TITLE: Range and Bird's Eye View Fused Cross-Modal Visual Place Recognition ABSTRACT: Image-to-point cloud cross-modal Visual Place Recognition (VPR) is a challenging task where the query is an RGB image, and the database samples are LiDAR point clouds. Compared to single-modal VPR, this approach benefits from the widespread availability of RGB cameras and the robustness of point clouds in providing accurate spatial geometry and distance information. However, current methods rely on intermediate modalities that capture either the vertical or horizontal field of view, limiting their ability to fully exploit the complementary information from both sensors. In this work, we propose an innovative initial retrieval + re-rank method that effectively combines information from range (or RGB) images and Bird's Eye View (BEV) images. Our approach relies solely on a computationally efficient global descriptor similarity search process to achieve re-ranking. Additionally, we introduce a novel similarity label supervision technique to maximize the utility of limited training data. Specifically, we employ the average distance between points to approximate appearance similarity and incorporate an adaptive margin, based on similarity differences, into the vanilla triplet loss. Experimental results on the KITTI dataset demonstrate that our method significantly outperforms state-of-the-art approaches.
no_new_dataset
0.949529
2502.15835
Zhuchen Cao
Zhuchen Cao, Sven Apel, Adish Singla, Vera Demberg
Pragmatic Reasoning improves LLM Code Generation
null
null
null
null
cs.CL cs.AI cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) have demonstrated impressive potential in translating natural language (NL) instructions into program code. However, user instructions often contain inherent ambiguities, making it challenging for LLMs to generate code that accurately reflects the user's true intent. To address this challenge, researchers have proposed to produce multiple candidates of the program code and then rerank them to identify the best solution. In this paper, we propose CodeRSA, a novel code candidate reranking mechanism built upon the Rational Speech Act (RSA) framework, designed to guide LLMs toward more comprehensive pragmatic reasoning about user intent. We evaluate CodeRSA using one of the latest LLMs on a popular code generation dataset. Our experimental results show that CodeRSA consistently outperforms common baselines, surpasses the state-of-the-art approach in most cases, and demonstrates robust overall performance. These findings underscore the effectiveness of integrating pragmatic reasoning into code candidate reranking, offering a promising direction for enhancing code generation quality in LLMs.
[ { "version": "v1", "created": "Thu, 20 Feb 2025 12:44:26 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 13:40:42 GMT" } ]
2025-03-03T00:00:00
[ [ "Cao", "Zhuchen", "" ], [ "Apel", "Sven", "" ], [ "Singla", "Adish", "" ], [ "Demberg", "Vera", "" ] ]
TITLE: Pragmatic Reasoning improves LLM Code Generation ABSTRACT: Large Language Models (LLMs) have demonstrated impressive potential in translating natural language (NL) instructions into program code. However, user instructions often contain inherent ambiguities, making it challenging for LLMs to generate code that accurately reflects the user's true intent. To address this challenge, researchers have proposed to produce multiple candidates of the program code and then rerank them to identify the best solution. In this paper, we propose CodeRSA, a novel code candidate reranking mechanism built upon the Rational Speech Act (RSA) framework, designed to guide LLMs toward more comprehensive pragmatic reasoning about user intent. We evaluate CodeRSA using one of the latest LLMs on a popular code generation dataset. Our experimental results show that CodeRSA consistently outperforms common baselines, surpasses the state-of-the-art approach in most cases, and demonstrates robust overall performance. These findings underscore the effectiveness of integrating pragmatic reasoning into code candidate reranking, offering a promising direction for enhancing code generation quality in LLMs.
no_new_dataset
0.95018
2502.16622
Luis Lara
Luis Lara, Lucia Eve Berger, Rajesh Raju
Diagnosing COVID-19 Severity from Chest X-Ray Images Using ViT and CNN Architectures
Upon reflection, the final version of this work does not meet the authors' personal standards for thoroughness and clarity. As a result, the authors have chosen to withdraw the paper to prevent the dissemination of work that may not fully reflect the level of quality they strive to maintain
null
null
null
eess.IV cs.CV
http://creativecommons.org/licenses/by/4.0/
The COVID-19 pandemic strained healthcare resources and prompted discussion about how machine learning can alleviate physician burdens and contribute to diagnosis. Chest x-rays (CXRs) are used for diagnosis of COVID-19, but few studies predict the severity of a patient's condition from CXRs. In this study, we produce a large COVID severity dataset by merging three sources and investigate the efficacy of transfer learning using ImageNet- and CXR-pretrained models and vision transformers (ViTs) in both severity regression and classification tasks. A pretrained DenseNet161 model performed the best on the three-class severity prediction problem, reaching 80% accuracy overall and 77.3%, 83.9%, and 70% on mild, moderate and severe cases, respectively. The ViT had the best regression results, with a mean absolute error of 0.5676 compared to radiologist-predicted severity scores. The project's source code is publicly available.
[ { "version": "v1", "created": "Sun, 23 Feb 2025 15:50:42 GMT" }, { "version": "v2", "created": "Thu, 27 Feb 2025 13:20:09 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 14:34:45 GMT" } ]
2025-03-03T00:00:00
[ [ "Lara", "Luis", "" ], [ "Berger", "Lucia Eve", "" ], [ "Raju", "Rajesh", "" ] ]
TITLE: Diagnosing COVID-19 Severity from Chest X-Ray Images Using ViT and CNN Architectures ABSTRACT: The COVID-19 pandemic strained healthcare resources and prompted discussion about how machine learning can alleviate physician burdens and contribute to diagnosis. Chest x-rays (CXRs) are used for diagnosis of COVID-19, but few studies predict the severity of a patient's condition from CXRs. In this study, we produce a large COVID severity dataset by merging three sources and investigate the efficacy of transfer learning using ImageNet- and CXR-pretrained models and vision transformers (ViTs) in both severity regression and classification tasks. A pretrained DenseNet161 model performed the best on the three-class severity prediction problem, reaching 80% accuracy overall and 77.3%, 83.9%, and 70% on mild, moderate and severe cases, respectively. The ViT had the best regression results, with a mean absolute error of 0.5676 compared to radiologist-predicted severity scores. The project's source code is publicly available.
new_dataset
0.910863
2502.16680
Li Rui
Rui Li, Xiaowei Zhao
AeroReformer: Aerial Referring Transformer for UAV-based Referring Image Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a novel and challenging task, referring segmentation combines computer vision and natural language processing to localize and segment objects based on textual descriptions. While referring image segmentation (RIS) has been extensively studied in natural images, little attention has been given to aerial imagery, particularly from unmanned aerial vehicles (UAVs). The unique challenges of UAV imagery, including complex spatial scales, occlusions, and varying object orientations, render existing RIS approaches ineffective. A key limitation has been the lack of UAV-specific datasets, as manually annotating pixel-level masks and generating textual descriptions is labour-intensive and time-consuming. To address this gap, we design an automatic labelling pipeline that leverages pre-existing UAV segmentation datasets and Multimodal Large Language Models (MLLM) for generating textual descriptions. Furthermore, we propose Aerial Referring Transformer (AeroReformer), a novel framework for UAV referring image segmentation (UAV-RIS), featuring a Vision-Language Cross-Attention Module (VLCAM) for effective cross-modal understanding and a Rotation-Aware Multi-Scale Fusion (RAMSF) decoder to enhance segmentation accuracy in aerial scenes. Extensive experiments on two newly developed datasets demonstrate the superiority of AeroReformer over existing methods, establishing a new benchmark for UAV-RIS. The datasets and code will be publicly available at: https://github.com/lironui/AeroReformer.
[ { "version": "v1", "created": "Sun, 23 Feb 2025 18:49:00 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 17:19:00 GMT" } ]
2025-03-03T00:00:00
[ [ "Li", "Rui", "" ], [ "Zhao", "Xiaowei", "" ] ]
TITLE: AeroReformer: Aerial Referring Transformer for UAV-based Referring Image Segmentation ABSTRACT: As a novel and challenging task, referring segmentation combines computer vision and natural language processing to localize and segment objects based on textual descriptions. While referring image segmentation (RIS) has been extensively studied in natural images, little attention has been given to aerial imagery, particularly from unmanned aerial vehicles (UAVs). The unique challenges of UAV imagery, including complex spatial scales, occlusions, and varying object orientations, render existing RIS approaches ineffective. A key limitation has been the lack of UAV-specific datasets, as manually annotating pixel-level masks and generating textual descriptions is labour-intensive and time-consuming. To address this gap, we design an automatic labelling pipeline that leverages pre-existing UAV segmentation datasets and Multimodal Large Language Models (MLLM) for generating textual descriptions. Furthermore, we propose Aerial Referring Transformer (AeroReformer), a novel framework for UAV referring image segmentation (UAV-RIS), featuring a Vision-Language Cross-Attention Module (VLCAM) for effective cross-modal understanding and a Rotation-Aware Multi-Scale Fusion (RAMSF) decoder to enhance segmentation accuracy in aerial scenes. Extensive experiments on two newly developed datasets demonstrate the superiority of AeroReformer over existing methods, establishing a new benchmark for UAV-RIS. The datasets and code will be publicly available at: https://github.com/lironui/AeroReformer.
new_dataset
0.963643
2502.17009
Enea Monzio Compagnoni Mr.
Enea Monzio Compagnoni, Rustem Islamov, Frank Norbert Proske, Aurelien Lucchi
Unbiased and Sign Compression in Distributed Learning: Comparing Noise Resilience via SDEs
Accepted at AISTATS 2025 (Oral). arXiv admin note: substantial text overlap with arXiv:2411.15958
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Distributed methods are essential for handling machine learning pipelines comprising large-scale models and datasets. However, their benefits often come at the cost of increased communication overhead between the central server and agents, which can become the main bottleneck, making training costly or even unfeasible in such systems. Compression methods such as quantization and sparsification can alleviate this issue. Still, their robustness to large and heavy-tailed gradient noise, a phenomenon sometimes observed in language modeling, remains poorly understood. This work addresses this gap by analyzing Distributed Compressed SGD (DCSGD) and Distributed SignSGD (DSignSGD) using stochastic differential equations (SDEs). Our results show that DCSGD with unbiased compression is more vulnerable to noise in stochastic gradients, while DSignSGD remains robust, even under large and heavy-tailed noise. Additionally, we propose new scaling rules for hyperparameter tuning to mitigate performance degradation due to compression. These findings are empirically validated across multiple deep learning architectures and datasets, providing practical recommendations for distributed optimization.
[ { "version": "v1", "created": "Mon, 24 Feb 2025 09:39:17 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 00:12:11 GMT" } ]
2025-03-03T00:00:00
[ [ "Compagnoni", "Enea Monzio", "" ], [ "Islamov", "Rustem", "" ], [ "Proske", "Frank Norbert", "" ], [ "Lucchi", "Aurelien", "" ] ]
TITLE: Unbiased and Sign Compression in Distributed Learning: Comparing Noise Resilience via SDEs ABSTRACT: Distributed methods are essential for handling machine learning pipelines comprising large-scale models and datasets. However, their benefits often come at the cost of increased communication overhead between the central server and agents, which can become the main bottleneck, making training costly or even unfeasible in such systems. Compression methods such as quantization and sparsification can alleviate this issue. Still, their robustness to large and heavy-tailed gradient noise, a phenomenon sometimes observed in language modeling, remains poorly understood. This work addresses this gap by analyzing Distributed Compressed SGD (DCSGD) and Distributed SignSGD (DSignSGD) using stochastic differential equations (SDEs). Our results show that DCSGD with unbiased compression is more vulnerable to noise in stochastic gradients, while DSignSGD remains robust, even under large and heavy-tailed noise. Additionally, we propose new scaling rules for hyperparameter tuning to mitigate performance degradation due to compression. These findings are empirically validated across multiple deep learning architectures and datasets, providing practical recommendations for distributed optimization.
no_new_dataset
0.944944
2502.17184
Yuming Yang
Yuming Yang, Yang Nan, Junjie Ye, Shihan Dou, Xiao Wang, Shuo Li, Huijie Lv, Mingqi Wu, Tao Gui, Qi Zhang, Xuanjing Huang
Measuring Data Diversity for Instruction Tuning: A Systematic Analysis and A Reliable Metric
16 pages. The related codes and resources will be released later. Project page: https://github.com/UmeanNever/NovelSum
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data diversity is crucial for the instruction tuning of large language models. Existing studies have explored various diversity-aware data selection methods to construct high-quality datasets and enhance model performance. However, the fundamental problem of precisely defining and measuring data diversity remains underexplored, limiting clear guidance for data engineering. To address this, we systematically analyze 11 existing diversity measurement methods by evaluating their correlation with model performance through extensive fine-tuning experiments. Our results indicate that a reliable diversity measure should properly account for both inter-sample differences and the information distribution in the sample space. Building on this, we propose NovelSum, a new diversity metric based on sample-level "novelty." Experiments on both simulated and real-world data show that NovelSum accurately captures diversity variations and achieves a 0.97 correlation with instruction-tuned model performance, highlighting its value in guiding data engineering practices. With NovelSum as an optimization objective, we further develop a greedy, diversity-oriented data selection strategy that outperforms existing approaches, validating both the effectiveness and practical significance of our metric.
[ { "version": "v1", "created": "Mon, 24 Feb 2025 14:20:22 GMT" }, { "version": "v2", "created": "Tue, 25 Feb 2025 06:56:39 GMT" }, { "version": "v3", "created": "Thu, 27 Feb 2025 12:59:58 GMT" }, { "version": "v4", "created": "Fri, 28 Feb 2025 08:44:08 GMT" } ]
2025-03-03T00:00:00
[ [ "Yang", "Yuming", "" ], [ "Nan", "Yang", "" ], [ "Ye", "Junjie", "" ], [ "Dou", "Shihan", "" ], [ "Wang", "Xiao", "" ], [ "Li", "Shuo", "" ], [ "Lv", "Huijie", "" ], [ "Wu", "Mingqi", "" ], [ "Gui", "Tao", "" ], [ "Zhang", "Qi", "" ], [ "Huang", "Xuanjing", "" ] ]
TITLE: Measuring Data Diversity for Instruction Tuning: A Systematic Analysis and A Reliable Metric ABSTRACT: Data diversity is crucial for the instruction tuning of large language models. Existing studies have explored various diversity-aware data selection methods to construct high-quality datasets and enhance model performance. However, the fundamental problem of precisely defining and measuring data diversity remains underexplored, limiting clear guidance for data engineering. To address this, we systematically analyze 11 existing diversity measurement methods by evaluating their correlation with model performance through extensive fine-tuning experiments. Our results indicate that a reliable diversity measure should properly account for both inter-sample differences and the information distribution in the sample space. Building on this, we propose NovelSum, a new diversity metric based on sample-level "novelty." Experiments on both simulated and real-world data show that NovelSum accurately captures diversity variations and achieves a 0.97 correlation with instruction-tuned model performance, highlighting its value in guiding data engineering practices. With NovelSum as an optimization objective, we further develop a greedy, diversity-oriented data selection strategy that outperforms existing approaches, validating both the effectiveness and practical significance of our metric.
no_new_dataset
0.94428
2502.17481
Cheol-Hui Lee
Cheol-Hui Lee, Hakseung Kim, Byung C. Yoon, Dong-Joo Kim
Toward Foundational Model for Sleep Analysis Using a Multimodal Hybrid Self-Supervised Learning Framework
18 pages, 5 figures
null
null
null
eess.SP cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sleep is essential for maintaining human health and quality of life. Analyzing physiological signals during sleep is critical in assessing sleep quality and diagnosing sleep disorders. However, manual diagnoses by clinicians are time-intensive and subjective. Despite advances in deep learning that have enhanced automation, these approaches remain heavily dependent on large-scale labeled datasets. This study introduces SynthSleepNet, a multimodal hybrid self-supervised learning framework designed for analyzing polysomnography (PSG) data. SynthSleepNet effectively integrates masked prediction and contrastive learning to leverage complementary features across multiple modalities, including electroencephalogram (EEG), electrooculography (EOG), electromyography (EMG), and electrocardiogram (ECG). This approach enables the model to learn highly expressive representations of PSG data. Furthermore, a temporal context module based on Mamba was developed to efficiently capture contextual information across signals. SynthSleepNet achieved superior performance compared to state-of-the-art methods across three downstream tasks: sleep-stage classification, apnea detection, and hypopnea detection, with accuracies of 89.89%, 99.75%, and 89.60%, respectively. The model demonstrated robust performance in a semi-supervised learning environment with limited labels, achieving accuracies of 87.98%, 99.37%, and 77.52% in the same tasks. These results underscore the potential of the model as a foundational tool for the comprehensive analysis of PSG data. SynthSleepNet demonstrates comprehensively superior performance across multiple downstream tasks compared to other methodologies and is thus expected to set a new standard for sleep disorder monitoring and diagnostic systems.
[ { "version": "v1", "created": "Tue, 18 Feb 2025 10:11:50 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 18:56:25 GMT" } ]
2025-03-03T00:00:00
[ [ "Lee", "Cheol-Hui", "" ], [ "Kim", "Hakseung", "" ], [ "Yoon", "Byung C.", "" ], [ "Kim", "Dong-Joo", "" ] ]
TITLE: Toward Foundational Model for Sleep Analysis Using a Multimodal Hybrid Self-Supervised Learning Framework ABSTRACT: Sleep is essential for maintaining human health and quality of life. Analyzing physiological signals during sleep is critical in assessing sleep quality and diagnosing sleep disorders. However, manual diagnoses by clinicians are time-intensive and subjective. Despite advances in deep learning that have enhanced automation, these approaches remain heavily dependent on large-scale labeled datasets. This study introduces SynthSleepNet, a multimodal hybrid self-supervised learning framework designed for analyzing polysomnography (PSG) data. SynthSleepNet effectively integrates masked prediction and contrastive learning to leverage complementary features across multiple modalities, including electroencephalogram (EEG), electrooculography (EOG), electromyography (EMG), and electrocardiogram (ECG). This approach enables the model to learn highly expressive representations of PSG data. Furthermore, a temporal context module based on Mamba was developed to efficiently capture contextual information across signals. SynthSleepNet achieved superior performance compared to state-of-the-art methods across three downstream tasks: sleep-stage classification, apnea detection, and hypopnea detection, with accuracies of 89.89%, 99.75%, and 89.60%, respectively. The model demonstrated robust performance in a semi-supervised learning environment with limited labels, achieving accuracies of 87.98%, 99.37%, and 77.52% in the same tasks. These results underscore the potential of the model as a foundational tool for the comprehensive analysis of PSG data. SynthSleepNet demonstrates comprehensively superior performance across multiple downstream tasks compared to other methodologies and is thus expected to set a new standard for sleep disorder monitoring and diagnostic systems.
no_new_dataset
0.948775
2502.17690
Zhixin Lu
Zhixin Lu, {\L}ukasz Ku\'smierz, Stefan Mihalas
A Fokker-Planck-Based Loss Function that Bridges Dynamics with Density Estimation
Under review by the ICML
null
null
null
nlin.CD cs.LG
http://creativecommons.org/licenses/by/4.0/
We have derived a novel loss function from the Fokker-Planck equation that links dynamical system models with their probability density functions, demonstrating its utility in model identification and density estimation. In the first application, we show that this loss function can enable the extraction of dynamical parameters from non-temporal datasets, including timestamp-free measurements from steady non-equilibrium systems such as noisy Lorenz systems and gene regulatory networks. In the second application, when coupled with a density estimator, this loss facilitates density estimation when the dynamic equations are known. For density estimation, we propose a density estimator that integrates a Gaussian Mixture Model with a normalizing flow model. It simultaneously estimates normalized density, energy, and score functions from both empirical data and dynamics. It is compatible with a variety of data-based training methodologies, including maximum likelihood and score matching. It features a latent space akin to a modern Hopfield network, where the inherent Hopfield energy effectively assigns low densities to sparsely populated data regions, addressing common challenges in neural density estimators. Additionally, this Hopfield-like energy enables direct and rapid data manipulation through the Concave-Convex Procedure (CCCP) rule, facilitating tasks such as denoising and clustering. Our work demonstrates a principled framework for leveraging the complex interdependencies between dynamics and density estimation, as illustrated through synthetic examples that clarify the underlying theoretical intuitions.
[ { "version": "v1", "created": "Mon, 24 Feb 2025 22:27:25 GMT" }, { "version": "v2", "created": "Thu, 27 Feb 2025 22:11:09 GMT" } ]
2025-03-03T00:00:00
[ [ "Lu", "Zhixin", "" ], [ "Kuśmierz", "Łukasz", "" ], [ "Mihalas", "Stefan", "" ] ]
TITLE: A Fokker-Planck-Based Loss Function that Bridges Dynamics with Density Estimation ABSTRACT: We have derived a novel loss function from the Fokker-Planck equation that links dynamical system models with their probability density functions, demonstrating its utility in model identification and density estimation. In the first application, we show that this loss function can enable the extraction of dynamical parameters from non-temporal datasets, including timestamp-free measurements from steady non-equilibrium systems such as noisy Lorenz systems and gene regulatory networks. In the second application, when coupled with a density estimator, this loss facilitates density estimation when the dynamic equations are known. For density estimation, we propose a density estimator that integrates a Gaussian Mixture Model with a normalizing flow model. It simultaneously estimates normalized density, energy, and score functions from both empirical data and dynamics. It is compatible with a variety of data-based training methodologies, including maximum likelihood and score matching. It features a latent space akin to a modern Hopfield network, where the inherent Hopfield energy effectively assigns low densities to sparsely populated data regions, addressing common challenges in neural density estimators. Additionally, this Hopfield-like energy enables direct and rapid data manipulation through the Concave-Convex Procedure (CCCP) rule, facilitating tasks such as denoising and clustering. Our work demonstrates a principled framework for leveraging the complex interdependencies between dynamics and density estimation, as illustrated through synthetic examples that clarify the underlying theoretical intuitions.
no_new_dataset
0.947962
2502.17749
Shinwoo Park
Shinwoo Park, Hyundong Jin, Jeong-won Cha, Yo-Sub Han
Detection of LLM-Paraphrased Code and Identification of the Responsible LLM Using Coding Style Features
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent progress in large language models (LLMs) for code generation has raised serious concerns about intellectual property protection. Malicious users can exploit LLMs to produce paraphrased versions of proprietary code that closely resemble the original. While the potential for LLM-assisted code paraphrasing continues to grow, research on detecting it remains limited, underscoring an urgent need for a detection system. We respond to this need by proposing two tasks. The first task is to detect whether code generated by an LLM is a paraphrased version of original human-written code. The second task is to identify which LLM is used to paraphrase the original code. For these tasks, we construct a dataset LPcode consisting of pairs of human-written code and LLM-paraphrased code using various LLMs. We statistically confirm significant differences in the coding styles of human-written and LLM-paraphrased code, particularly in terms of naming consistency, code structure, and readability. Based on these findings, we develop LPcodedec, a detection method that identifies paraphrase relationships between human-written and LLM-generated code, and discovers which LLM was used for the paraphrasing. LPcodedec outperforms the best baselines in two tasks, improving F1 scores by 2.64% and 15.17% while achieving speedups of 1,343x and 213x, respectively. Our code and data are available at https://github.com/Shinwoo-Park/detecting_llm_paraphrased_code_via_coding_style_features.
[ { "version": "v1", "created": "Tue, 25 Feb 2025 00:58:06 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 08:06:00 GMT" } ]
2025-03-03T00:00:00
[ [ "Park", "Shinwoo", "" ], [ "Jin", "Hyundong", "" ], [ "Cha", "Jeong-won", "" ], [ "Han", "Yo-Sub", "" ] ]
TITLE: Detection of LLM-Paraphrased Code and Identification of the Responsible LLM Using Coding Style Features ABSTRACT: Recent progress in large language models (LLMs) for code generation has raised serious concerns about intellectual property protection. Malicious users can exploit LLMs to produce paraphrased versions of proprietary code that closely resemble the original. While the potential for LLM-assisted code paraphrasing continues to grow, research on detecting it remains limited, underscoring an urgent need for a detection system. We respond to this need by proposing two tasks. The first task is to detect whether code generated by an LLM is a paraphrased version of original human-written code. The second task is to identify which LLM is used to paraphrase the original code. For these tasks, we construct a dataset LPcode consisting of pairs of human-written code and LLM-paraphrased code using various LLMs. We statistically confirm significant differences in the coding styles of human-written and LLM-paraphrased code, particularly in terms of naming consistency, code structure, and readability. Based on these findings, we develop LPcodedec, a detection method that identifies paraphrase relationships between human-written and LLM-generated code, and discovers which LLM was used for the paraphrasing. LPcodedec outperforms the best baselines in two tasks, improving F1 scores by 2.64% and 15.17% while achieving speedups of 1,343x and 213x, respectively. Our code and data are available at https://github.com/Shinwoo-Park/detecting_llm_paraphrased_code_via_coding_style_features.
new_dataset
0.968411
2502.18842
Muhammad Angga Muttaqien
Muhammad A. Muttaqien, Tomohiro Motoda, Ryo Hanai, Domae Yukiyasu
Attention-Guided Integration of CLIP and SAM for Precise Object Masking in Robotic Manipulation
null
null
null
null
cs.RO cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper introduces a novel pipeline to enhance the precision of object masking for robotic manipulation within the specific domain of masking products in convenience stores. The approach integrates two advanced AI models, CLIP and SAM, focusing on their synergistic combination and the effective use of multimodal data (image and text). Emphasis is placed on utilizing gradient-based attention mechanisms and customized datasets to fine-tune performance. While CLIP, SAM, and Grad-CAM are established components, their integration within this structured pipeline represents a significant contribution to the field. The resulting segmented masks, generated through this combined approach, can be effectively utilized as inputs for robotic systems, enabling more precise and adaptive object manipulation in the context of convenience store products.
[ { "version": "v1", "created": "Wed, 26 Feb 2025 05:30:46 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 02:20:15 GMT" } ]
2025-03-03T00:00:00
[ [ "Muttaqien", "Muhammad A.", "" ], [ "Motoda", "Tomohiro", "" ], [ "Hanai", "Ryo", "" ], [ "Yukiyasu", "Domae", "" ] ]
TITLE: Attention-Guided Integration of CLIP and SAM for Precise Object Masking in Robotic Manipulation ABSTRACT: This paper introduces a novel pipeline to enhance the precision of object masking for robotic manipulation within the specific domain of masking products in convenience stores. The approach integrates two advanced AI models, CLIP and SAM, focusing on their synergistic combination and the effective use of multimodal data (image and text). Emphasis is placed on utilizing gradient-based attention mechanisms and customized datasets to fine-tune performance. While CLIP, SAM, and Grad-CAM are established components, their integration within this structured pipeline represents a significant contribution to the field. The resulting segmented masks, generated through this combined approach, can be effectively utilized as inputs for robotic systems, enabling more precise and adaptive object manipulation in the context of convenience store products.
no_new_dataset
0.950915
2502.18860
Md Mehrab Tanjim
Md Mehrab Tanjim, Ryan A. Rossi, Mike Rimer, Xiang Chen, Sungchul Kim, Vaishnavi Muppala, Tong Yu, Zhengmian Hu, Ritwik Sinha, Wei Zhang, Iftikhar Ahamath Burhanuddin, Franck Dernoncourt
Exploring Rewriting Approaches for Different Conversational Tasks
Preprint
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Conversational assistants often require a question rewriting algorithm that leverages a subset of past interactions to provide a more meaningful (accurate) answer to the user's question or request. However, the exact rewriting approach may often depend on the use case and application-specific tasks supported by the conversational assistant, among other constraints. In this paper, we systematically investigate two different approaches, denoted as rewriting and fusion, on two fundamentally different generation tasks, including a text-to-text generation task and a multimodal generative task that takes as input text and generates a visualization or data table that answers the user's question. Our results indicate that the specific rewriting or fusion approach highly depends on the underlying use case and generative task. In particular, we find that for a conversational question-answering assistant, the query rewriting approach performs best, whereas for a data analysis assistant that generates visualizations and data tables based on the user's conversation with the assistant, the fusion approach works best. Notably, we explore two datasets for the data analysis assistant use case, for short and long conversations, and we find that query fusion always performs better, whereas for the conversational text-based question-answering, the query rewrite approach performs best.
[ { "version": "v1", "created": "Wed, 26 Feb 2025 06:05:29 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 04:18:19 GMT" } ]
2025-03-03T00:00:00
[ [ "Tanjim", "Md Mehrab", "" ], [ "Rossi", "Ryan A.", "" ], [ "Rimer", "Mike", "" ], [ "Chen", "Xiang", "" ], [ "Kim", "Sungchul", "" ], [ "Muppala", "Vaishnavi", "" ], [ "Yu", "Tong", "" ], [ "Hu", "Zhengmian", "" ], [ "Sinha", "Ritwik", "" ], [ "Zhang", "Wei", "" ], [ "Burhanuddin", "Iftikhar Ahamath", "" ], [ "Dernoncourt", "Franck", "" ] ]
TITLE: Exploring Rewriting Approaches for Different Conversational Tasks ABSTRACT: Conversational assistants often require a question rewriting algorithm that leverages a subset of past interactions to provide a more meaningful (accurate) answer to the user's question or request. However, the exact rewriting approach may often depend on the use case and application-specific tasks supported by the conversational assistant, among other constraints. In this paper, we systematically investigate two different approaches, denoted as rewriting and fusion, on two fundamentally different generation tasks, including a text-to-text generation task and a multimodal generative task that takes as input text and generates a visualization or data table that answers the user's question. Our results indicate that the specific rewriting or fusion approach highly depends on the underlying use case and generative task. In particular, we find that for a conversational question-answering assistant, the query rewriting approach performs best, whereas for a data analysis assistant that generates visualizations and data tables based on the user's conversation with the assistant, the fusion approach works best. Notably, we explore two datasets for the data analysis assistant use case, for short and long conversations, and we find that query fusion always performs better, whereas for the conversational text-based question-answering, the query rewrite approach performs best.
no_new_dataset
0.949153
2502.19104
Michelle Kappl
Michelle Kappl
Are All Spanish Doctors Male? Evaluating Gender Bias in German Machine Translation
ISCA/ITG Workshop on Diversity in Large Speech and Language Models
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We present WinoMTDE, a new gender bias evaluation test set designed to assess occupational stereotyping and underrepresentation in German machine translation (MT) systems. Building on the automatic evaluation method introduced by arXiv:1906.00591v1, we extend the approach to German, a language with grammatical gender. The WinoMTDE dataset comprises 288 German sentences balanced with regard to gender as well as stereotype, the latter annotated using German labor statistics. We conduct a large-scale evaluation of five widely used MT systems and a large language model. Our results reveal persistent bias in most models, with the LLM outperforming traditional systems. The dataset and evaluation code are publicly available under https://github.com/michellekappl/mt_gender_german.
[ { "version": "v1", "created": "Wed, 26 Feb 2025 12:46:59 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 15:00:01 GMT" } ]
2025-03-03T00:00:00
[ [ "Kappl", "Michelle", "" ] ]
TITLE: Are All Spanish Doctors Male? Evaluating Gender Bias in German Machine Translation ABSTRACT: We present WinoMTDE, a new gender bias evaluation test set designed to assess occupational stereotyping and underrepresentation in German machine translation (MT) systems. Building on the automatic evaluation method introduced by arXiv:1906.00591v1, we extend the approach to German, a language with grammatical gender. The WinoMTDE dataset comprises 288 German sentences balanced with regard to gender as well as stereotype, the latter annotated using German labor statistics. We conduct a large-scale evaluation of five widely used MT systems and a large language model. Our results reveal persistent bias in most models, with the LLM outperforming traditional systems. The dataset and evaluation code are publicly available under https://github.com/michellekappl/mt_gender_german.
new_dataset
0.955402
2502.19635
Youran Zhou
Youran Zhou, Mohamed Reda Bouadjenek, Sunil Aryal
Developing robust methods to handle missing data in real-world applications effectively
This work was presented at the ECML PKDD 2024 PhD Forum. https://ecmlpkdd.org/2024/program-accepted-phd-forum/
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Missing data is a pervasive challenge spanning diverse data types, including tabular data, sensor data, time series, and images, among others. Its origins are multifaceted, resulting in various missing mechanisms. Prior research in this field has predominantly revolved around the assumption of the Missing Completely At Random (MCAR) mechanism. However, Missing At Random (MAR) and Missing Not At Random (MNAR) mechanisms, though equally prevalent, have often remained underexplored despite their significant influence. This PhD project presents a comprehensive research agenda designed to investigate the implications of diverse missing data mechanisms. The principal aim is to devise robust methodologies capable of effectively handling missing data while accommodating the unique characteristics of MCAR, MAR, and MNAR mechanisms. By addressing these gaps, this research contributes to an enriched understanding of the challenges posed by missing data across various industries and data modalities. It seeks to provide practical solutions that enable the effective management of missing data, empowering researchers and practitioners to leverage incomplete datasets confidently.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 00:00:28 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 11:26:39 GMT" } ]
2025-03-03T00:00:00
[ [ "Zhou", "Youran", "" ], [ "Bouadjenek", "Mohamed Reda", "" ], [ "Aryal", "Sunil", "" ] ]
TITLE: Developing robust methods to handle missing data in real-world applications effectively ABSTRACT: Missing data is a pervasive challenge spanning diverse data types, including tabular data, sensor data, time series, and images. Its origins are multifaceted, resulting in various missing mechanisms. Prior research in this field has predominantly revolved around the assumption of the Missing Completely At Random (MCAR) mechanism. However, Missing At Random (MAR) and Missing Not At Random (MNAR) mechanisms, though equally prevalent, have often remained underexplored despite their significant influence. This PhD project presents a comprehensive research agenda designed to investigate the implications of diverse missing data mechanisms. The principal aim is to devise robust methodologies capable of effectively handling missing data while accommodating the unique characteristics of the MCAR, MAR, and MNAR mechanisms. By addressing these gaps, this research contributes to an enriched understanding of the challenges posed by missing data across various industries and data modalities. It seeks to provide practical solutions that enable the effective management of missing data, empowering researchers and practitioners to leverage incomplete datasets confidently.
no_new_dataset
0.943295
2502.19751
Zeqi Ma
Jiaxing Li and Lin Jiang and Zeqi Ma and Kaihang Jiang and Xiaozhao Fang and Jie Wen
Lightweight Contrastive Distilled Hashing for Online Cross-modal Retrieval
Accepted by AAAI 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Deep online cross-modal hashing has recently gained much attention from researchers, owing to its promising applications such as low storage requirements, fast retrieval efficiency, and cross-modality adaptivity. However, there still exist some technical hurdles that hinder its applications, e.g., 1) how to extract the coexistent semantic relevance of cross-modal data, 2) how to achieve competitive performance when handling real-time data streams, 3) how to transfer the knowledge learned from offline to online training in a lightweight manner. To address these problems, this paper proposes a lightweight contrastive distilled hashing (LCDH) method for cross-modal retrieval, which innovatively bridges offline and online cross-modal hashing via similarity matrix approximation in a knowledge distillation framework. Specifically, in the teacher network, LCDH first extracts the cross-modal features by contrastive language-image pre-training (CLIP), which are further fed into an attention module for representation enhancement after feature fusion. Then, the output of the attention module is fed into an FC layer to obtain hash codes that align the sizes of the similarity matrices for online and offline training. In the student network, LCDH extracts the visual and textual features by lightweight models, and then the features are fed into an FC layer to generate binary codes. Finally, by approximating the similarity matrices, the performance of online hashing in the lightweight student network can be enhanced by the supervision of coexistent semantic relevance distilled from the teacher network. Experimental results on three widely used datasets demonstrate that LCDH outperforms some state-of-the-art methods.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 04:31:17 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 02:33:25 GMT" } ]
2025-03-03T00:00:00
[ [ "Li", "Jiaxing", "" ], [ "Jiang", "Lin", "" ], [ "Ma", "Zeqi", "" ], [ "Jiang", "Kaihang", "" ], [ "Fang", "Xiaozhao", "" ], [ "Wen", "Jie", "" ] ]
TITLE: Lightweight Contrastive Distilled Hashing for Online Cross-modal Retrieval ABSTRACT: Deep online cross-modal hashing has recently gained much attention from researchers, owing to its promising applications such as low storage requirements, fast retrieval efficiency, and cross-modality adaptivity. However, there still exist some technical hurdles that hinder its applications, e.g., 1) how to extract the coexistent semantic relevance of cross-modal data, 2) how to achieve competitive performance when handling real-time data streams, 3) how to transfer the knowledge learned from offline to online training in a lightweight manner. To address these problems, this paper proposes a lightweight contrastive distilled hashing (LCDH) method for cross-modal retrieval, which innovatively bridges offline and online cross-modal hashing via similarity matrix approximation in a knowledge distillation framework. Specifically, in the teacher network, LCDH first extracts the cross-modal features by contrastive language-image pre-training (CLIP), which are further fed into an attention module for representation enhancement after feature fusion. Then, the output of the attention module is fed into an FC layer to obtain hash codes that align the sizes of the similarity matrices for online and offline training. In the student network, LCDH extracts the visual and textual features by lightweight models, and then the features are fed into an FC layer to generate binary codes. Finally, by approximating the similarity matrices, the performance of online hashing in the lightweight student network can be enhanced by the supervision of coexistent semantic relevance distilled from the teacher network. Experimental results on three widely used datasets demonstrate that LCDH outperforms some state-of-the-art methods.
no_new_dataset
0.949576
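The similarity-matrix approximation at the heart of LCDH (2502.19751 above) can be illustrated with a short sketch. This is a minimal, hedged reading of the distillation objective implied by the abstract: the tanh relaxation of the student codes, the cosine-similarity form of the matrices, and the MSE loss are assumptions, not the paper's verified formulation.

```python
import torch
import torch.nn.functional as F

def similarity_distillation_loss(student_logits: torch.Tensor,
                                 teacher_codes: torch.Tensor) -> torch.Tensor:
    """Align the student's pairwise similarity matrix with the teacher's.

    student_logits: [batch, bits] real-valued outputs of the student FC layer.
    teacher_codes:  [batch, bits] hash codes produced by the teacher network.
    """
    # Relax the student's binary codes with tanh so the loss is differentiable.
    s = F.normalize(torch.tanh(student_logits), dim=1)
    t = F.normalize(torch.sign(teacher_codes), dim=1)
    # Pairwise cosine-similarity matrices over the batch.
    sim_student = s @ s.t()
    sim_teacher = t @ t.t()
    return F.mse_loss(sim_student, sim_teacher)

# Usage: loss = similarity_distillation_loss(student_fc_out, teacher_hash_out)
```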
2502.19989
Surajit Ghosh
Hugo Retief, Mariangel Garcia Andarcia, Chris Dickens, Surajit Ghosh
Dam Volume Prediction Model Development Using ML Algorithms
22 pages, 18 Figures and 4 Tables
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Reliable reservoir volume estimates are crucial for water resource management, especially in arid and semi-arid regions. The present study investigates applying three machine learning regression techniques - Gradient Boosting, Random Forest, and ElasticNet - to predict key dam performance characteristics of the Loskop Dam in South Africa. The models were trained and validated on a dataset comprising geospatial elevation measurements paired with corresponding reservoir supply capacity values. The best-performing approach was a threshold-based blended model that combined Random Forest for higher volumes with Ridge regression for lower volumes. This model achieved an RMSE of 4.88 MCM and an R2 of 0.99. These findings highlight the ability of ensemble learning techniques to capture complex relationships in dam datasets and underscore their practical utility for reliable dam performance modelling in real-world water resource management scenarios.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 11:14:14 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 04:28:01 GMT" } ]
2025-03-03T00:00:00
[ [ "Retief", "Hugo", "" ], [ "Andarcia", "Mariangel Garcia", "" ], [ "Dickens", "Chris", "" ], [ "Ghosh", "Surajit", "" ] ]
TITLE: Dam Volume Prediction Model Development Using ML Algorithms ABSTRACT: Reliable reservoir volume estimates are crucial for water resource management, especially in arid and semi-arid regions. The present study investigates applying three machine learning regression techniques - Gradient Boosting, Random Forest, and ElasticNet - to predict key dam performance characteristics of the Loskop Dam in South Africa. The models were trained and validated on a dataset comprising geospatial elevation measurements paired with corresponding reservoir supply capacity values. The best-performing approach was a threshold-based blended model that combined Random Forest for higher volumes with Ridge regression for lower volumes. This model achieved an RMSE of 4.88 MCM and an R2 of 0.99. These findings highlight the ability of ensemble learning techniques to capture complex relationships in dam datasets and underscore their practical utility for reliable dam performance modelling in real-world water resource management scenarios.
no_new_dataset
0.940681
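The threshold-based blended model described in 2502.19989 above is straightforward to reproduce in scikit-learn. The sketch below is a hedged illustration only: the threshold value, the routing rule (using the Ridge model's rough estimate to pick a regime), and the hyperparameters are assumptions rather than the study's reported configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

class ThresholdBlendedRegressor:
    """Ridge regression below a volume threshold, Random Forest above it."""

    def __init__(self, threshold: float):
        self.threshold = threshold  # assumed cutoff in MCM, not the study's value
        self.low = Ridge(alpha=1.0)
        self.high = RandomForestRegressor(n_estimators=200, random_state=0)

    def fit(self, X: np.ndarray, y: np.ndarray) -> "ThresholdBlendedRegressor":
        low_mask = y < self.threshold
        self.low.fit(X[low_mask], y[low_mask])
        self.high.fit(X[~low_mask], y[~low_mask])
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        # Route each sample by the Ridge model's rough estimate of its regime.
        rough = self.low.predict(X)
        return np.where(rough < self.threshold, rough, self.high.predict(X))

# Usage: model = ThresholdBlendedRegressor(threshold=50.0).fit(X_train, y_train)
```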
2502.20037
Hongyu Deng
Hongyu Deng, Tianfan Xue, He Chen
FuseGrasp: Radar-Camera Fusion for Robotic Grasping of Transparent Objects
16 pages, 20 figures, accepted by IEEE TMC
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transparent objects are prevalent in everyday environments, but their distinct physical properties pose significant challenges for camera-guided robotic arms. Current research is mainly dependent on camera-only approaches, which often falter in suboptimal conditions, such as low-light environments. In response to this challenge, we present FuseGrasp, the first radar-camera fusion system tailored to enhance transparent object manipulation. FuseGrasp exploits the weak penetrating property of millimeter-wave (mmWave) signals, which causes transparent materials to appear opaque, and combines it with the precise motion control of a robotic arm to acquire high-quality mmWave radar images of transparent objects. The system employs a carefully designed deep neural network to fuse radar and camera imagery, thereby improving depth completion and elevating the success rate of object grasping. Nevertheless, training FuseGrasp effectively is non-trivial due to limited radar image datasets for transparent objects. We address this issue by utilizing a large RGB-D dataset and propose an effective two-stage training approach: we first pre-train FuseGrasp on a large public RGB-D dataset of transparent objects, then fine-tune it on a self-built small RGB-D-Radar dataset. Furthermore, as a byproduct, FuseGrasp can determine the composition of transparent objects, such as glass or plastic, leveraging the material identification capability of mmWave radar. This identification result facilitates the robotic arm in modulating its grip force appropriately. Extensive testing reveals that FuseGrasp significantly improves the accuracy of depth reconstruction and material identification for transparent objects. Moreover, real-world robotic trials have confirmed that FuseGrasp markedly enhances the handling of transparent items. A video demonstration of FuseGrasp is available at https://youtu.be/MWDqv0sRSok.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 12:27:07 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 08:42:19 GMT" } ]
2025-03-03T00:00:00
[ [ "Deng", "Hongyu", "" ], [ "Xue", "Tianfan", "" ], [ "Chen", "He", "" ] ]
TITLE: FuseGrasp: Radar-Camera Fusion for Robotic Grasping of Transparent Objects ABSTRACT: Transparent objects are prevalent in everyday environments, but their distinct physical properties pose significant challenges for camera-guided robotic arms. Current research is mainly dependent on camera-only approaches, which often falter in suboptimal conditions, such as low-light environments. In response to this challenge, we present FuseGrasp, the first radar-camera fusion system tailored to enhance transparent object manipulation. FuseGrasp exploits the weak penetrating property of millimeter-wave (mmWave) signals, which causes transparent materials to appear opaque, and combines it with the precise motion control of a robotic arm to acquire high-quality mmWave radar images of transparent objects. The system employs a carefully designed deep neural network to fuse radar and camera imagery, thereby improving depth completion and elevating the success rate of object grasping. Nevertheless, training FuseGrasp effectively is non-trivial due to limited radar image datasets for transparent objects. We address this issue by utilizing a large RGB-D dataset and propose an effective two-stage training approach: we first pre-train FuseGrasp on a large public RGB-D dataset of transparent objects, then fine-tune it on a self-built small RGB-D-Radar dataset. Furthermore, as a byproduct, FuseGrasp can determine the composition of transparent objects, such as glass or plastic, leveraging the material identification capability of mmWave radar. This identification result facilitates the robotic arm in modulating its grip force appropriately. Extensive testing reveals that FuseGrasp significantly improves the accuracy of depth reconstruction and material identification for transparent objects. Moreover, real-world robotic trials have confirmed that FuseGrasp markedly enhances the handling of transparent items. A video demonstration of FuseGrasp is available at https://youtu.be/MWDqv0sRSok.
no_new_dataset
0.915583
2502.20077
Zijie Zhou
Zijie Zhou, Zhangshuo Qi, Luqi Cheng, Guangming Xiong
SegLocNet: Multimodal Localization Network for Autonomous Driving via Bird's-Eye-View Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robust and accurate localization is critical for autonomous driving. Traditional GNSS-based localization methods suffer from signal occlusion and multipath effects in urban environments. Meanwhile, methods relying on high-definition (HD) maps are constrained by the high costs associated with the construction and maintenance of HD maps. Standard-definition (SD) map-based methods, on the other hand, often exhibit unsatisfactory performance or poor generalization ability due to overfitting. To address these challenges, we propose SegLocNet, a multimodal GNSS-free localization network that achieves precise localization using bird's-eye-view (BEV) semantic segmentation. SegLocNet employs a BEV segmentation network to generate semantic maps from multiple sensor inputs, followed by an exhaustive matching process to estimate the vehicle's ego pose. This approach avoids the limitations of regression-based pose estimation and maintains high interpretability and generalization. By introducing a unified map representation, our method can be applied to both HD and SD maps without any modifications to the network architecture, thereby balancing localization accuracy and area coverage. Extensive experiments on the nuScenes and Argoverse datasets demonstrate that our method outperforms the current state-of-the-art methods, and that our method can accurately estimate the ego pose in urban environments without relying on GNSS, while maintaining strong generalization ability. Our code and pre-trained model will be released publicly.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 13:34:55 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 14:25:18 GMT" } ]
2025-03-03T00:00:00
[ [ "Zhou", "Zijie", "" ], [ "Qi", "Zhangshuo", "" ], [ "Cheng", "Luqi", "" ], [ "Xiong", "Guangming", "" ] ]
TITLE: SegLocNet: Multimodal Localization Network for Autonomous Driving via Bird's-Eye-View Segmentation ABSTRACT: Robust and accurate localization is critical for autonomous driving. Traditional GNSS-based localization methods suffer from signal occlusion and multipath effects in urban environments. Meanwhile, methods relying on high-definition (HD) maps are constrained by the high costs associated with the construction and maintenance of HD maps. Standard-definition (SD) map-based methods, on the other hand, often exhibit unsatisfactory performance or poor generalization ability due to overfitting. To address these challenges, we propose SegLocNet, a multimodal GNSS-free localization network that achieves precise localization using bird's-eye-view (BEV) semantic segmentation. SegLocNet employs a BEV segmentation network to generate semantic maps from multiple sensor inputs, followed by an exhaustive matching process to estimate the vehicle's ego pose. This approach avoids the limitations of regression-based pose estimation and maintains high interpretability and generalization. By introducing a unified map representation, our method can be applied to both HD and SD maps without any modifications to the network architecture, thereby balancing localization accuracy and area coverage. Extensive experiments on the nuScenes and Argoverse datasets demonstrate that our method outperforms the current state-of-the-art methods, and that our method can accurately estimate the ego pose in urban environments without relying on GNSS, while maintaining strong generalization ability. Our code and pre-trained model will be released publicly.
no_new_dataset
0.94868
2502.20104
Xuzheng Yang
Xuzheng Yang, Junzhuo Liu, Peng Wang, Guoqing Wang, Yang Yang, Heng Tao Shen
New Dataset and Methods for Fine-Grained Compositional Referring Expression Comprehension via Specialist-MLLM Collaboration
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Referring Expression Comprehension (REC) is a foundational cross-modal task that evaluates the interplay of language understanding, image comprehension, and language-to-image grounding. To advance this field, we introduce a new REC dataset with two key features. First, it is designed with controllable difficulty levels, requiring fine-grained reasoning across object categories, attributes, and relationships. Second, it incorporates negative text and images generated through fine-grained editing, explicitly testing a model's ability to reject non-existent targets, an often-overlooked yet critical challenge in existing datasets. To address fine-grained compositional REC, we propose novel methods based on a Specialist-MLLM collaboration framework, leveraging their complementary strengths: Specialist Models handle simpler tasks efficiently, while MLLMs are better suited for complex reasoning. Based on this synergy, we introduce two collaborative strategies. The first, Slow-Fast Adaptation (SFA), employs a routing mechanism to adaptively delegate simple tasks to Specialist Models and complex tasks to MLLMs. Additionally, common error patterns in both models are mitigated through a target-refocus strategy. The second, Candidate Region Selection (CRS), generates multiple bounding box candidates based on the Specialist Model and uses the advanced reasoning capabilities of MLLMs to identify the correct target. Extensive experiments on our dataset and other challenging compositional benchmarks validate the effectiveness of our approaches. The SFA strategy achieves a trade-off between localization accuracy and efficiency, and the CRS strategy greatly boosts the performance of both Specialist Models and MLLMs. We aim for this work to offer valuable insights into solving complex real-world tasks by strategically combining existing tools for maximum effectiveness, rather than reinventing them.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 13:58:44 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 07:36:32 GMT" } ]
2025-03-03T00:00:00
[ [ "Yang", "Xuzheng", "" ], [ "Liu", "Junzhuo", "" ], [ "Wang", "Peng", "" ], [ "Wang", "Guoqing", "" ], [ "Yang", "Yang", "" ], [ "Shen", "Heng Tao", "" ] ]
TITLE: New Dataset and Methods for Fine-Grained Compositional Referring Expression Comprehension via Specialist-MLLM Collaboration ABSTRACT: Referring Expression Comprehension (REC) is a foundational cross-modal task that evaluates the interplay of language understanding, image comprehension, and language-to-image grounding. To advance this field, we introduce a new REC dataset with two key features. First, it is designed with controllable difficulty levels, requiring fine-grained reasoning across object categories, attributes, and relationships. Second, it incorporates negative text and images generated through fine-grained editing, explicitly testing a model's ability to reject non-existent targets, an often-overlooked yet critical challenge in existing datasets. To address fine-grained compositional REC, we propose novel methods based on a Specialist-MLLM collaboration framework, leveraging their complementary strengths: Specialist Models handle simpler tasks efficiently, while MLLMs are better suited for complex reasoning. Based on this synergy, we introduce two collaborative strategies. The first, Slow-Fast Adaptation (SFA), employs a routing mechanism to adaptively delegate simple tasks to Specialist Models and complex tasks to MLLMs. Additionally, common error patterns in both models are mitigated through a target-refocus strategy. The second, Candidate Region Selection (CRS), generates multiple bounding box candidates based on the Specialist Model and uses the advanced reasoning capabilities of MLLMs to identify the correct target. Extensive experiments on our dataset and other challenging compositional benchmarks validate the effectiveness of our approaches. The SFA strategy achieves a trade-off between localization accuracy and efficiency, and the CRS strategy greatly boosts the performance of both Specialist Models and MLLMs. We aim for this work to offer valuable insights into solving complex real-world tasks by strategically combining existing tools for maximum effectiveness, rather than reinventing them.
no_new_dataset
0.831006
2502.20246
Chi Chien Tsai
Chi-Chien Tsai, Chia-Mu Yu, Ying-Dar Lin, Yu-Sung Wu, Wei-Bin Lee
Beyond Natural Language Perplexity: Detecting Dead Code Poisoning in Code Generation Datasets
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The increasing adoption of large language models (LLMs) for code-related tasks has raised concerns about the security of their training datasets. One critical threat is dead code poisoning, where syntactically valid but functionally redundant code is injected into training data to manipulate model behavior. Such attacks can degrade the performance of neural code search systems, leading to biased or insecure code suggestions. Existing detection methods, such as token-level perplexity analysis, fail to effectively identify dead code due to the structural and contextual characteristics of programming languages. In this paper, we propose DePA (Dead Code Perplexity Analysis), a novel line-level detection and cleansing method tailored to the structural properties of code. DePA computes line-level perplexity by leveraging the contextual relationships between code lines and identifies anomalous lines by comparing their perplexity to the overall distribution within the file. Our experiments on benchmark datasets demonstrate that DePA significantly outperforms existing methods, achieving a 0.14-0.19 improvement in detection F1-score and a 44-65% increase in poisoned segment localization precision. Furthermore, DePA enhances detection speed by 0.62-23x, making it practical for large-scale dataset cleansing. Overall, by addressing the unique challenges of dead code poisoning, DePA provides a robust and efficient solution for safeguarding the integrity of code generation model training datasets.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 16:30:00 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 08:39:27 GMT" } ]
2025-03-03T00:00:00
[ [ "Tsai", "Chi-Chien", "" ], [ "Yu", "Chia-Mu", "" ], [ "Lin", "Ying-Dar", "" ], [ "Wu", "Yu-Sung", "" ], [ "Lee", "Wei-Bin", "" ] ]
TITLE: Beyond Natural Language Perplexity: Detecting Dead Code Poisoning in Code Generation Datasets ABSTRACT: The increasing adoption of large language models (LLMs) for code-related tasks has raised concerns about the security of their training datasets. One critical threat is dead code poisoning, where syntactically valid but functionally redundant code is injected into training data to manipulate model behavior. Such attacks can degrade the performance of neural code search systems, leading to biased or insecure code suggestions. Existing detection methods, such as token-level perplexity analysis, fail to effectively identify dead code due to the structural and contextual characteristics of programming languages. In this paper, we propose DePA (Dead Code Perplexity Analysis), a novel line-level detection and cleansing method tailored to the structural properties of code. DePA computes line-level perplexity by leveraging the contextual relationships between code lines and identifies anomalous lines by comparing their perplexity to the overall distribution within the file. Our experiments on benchmark datasets demonstrate that DePA significantly outperforms existing methods, achieving a 0.14-0.19 improvement in detection F1-score and a 44-65% increase in poisoned segment localization precision. Furthermore, DePA enhances detection speed by 0.62-23x, making it practical for large-scale dataset cleansing. Overall, by addressing the unique challenges of dead code poisoning, DePA provides a robust and efficient solution for safeguarding the integrity of code generation model training datasets.
no_new_dataset
0.943243
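The core idea of DePA (2502.20246 above) - score each source line by its perplexity given the preceding lines, then flag statistical outliers within the file - can be sketched with an off-the-shelf causal LM. Everything below is illustrative: the GPT-2 backbone, the z-score outlier rule, and the per-line token counting (which ignores BPE merges across line boundaries) are assumptions, not the paper's exact procedure.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def line_perplexities(code: str) -> list[float]:
    """Perplexity of each line conditioned on all preceding lines."""
    ppls, context = [], ""
    for line in code.splitlines():
        # (For long files, truncate context to the model's window in practice.)
        n_ctx = len(tok(context).input_ids) if context else 0
        ids = tok(context + line + "\n", return_tensors="pt").input_ids
        logits = model(ids).logits
        start = max(n_ctx, 1)  # the very first token has no predictor
        targets = ids[0, start:]
        log_probs = torch.log_softmax(logits[0, start - 1:-1], dim=-1)
        nll = -log_probs[torch.arange(len(targets)), targets].mean().item()
        ppls.append(math.exp(nll))
        context += line + "\n"
    return ppls

def flag_poisoned_lines(ppls: list[float], z: float = 2.0) -> list[int]:
    """Indices whose perplexity is a z-score outlier w.r.t. the whole file."""
    mu = sum(ppls) / len(ppls)
    sd = (sum((p - mu) ** 2 for p in ppls) / len(ppls)) ** 0.5 or 1.0
    return [i for i, p in enumerate(ppls) if (p - mu) / sd > z]
```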
2502.20272
Yixu Feng
Qingsen Yan, Yixu Feng, Cheng Zhang, Guansong Pang, Kangbiao Shi, Peng Wu, Wei Dong, Jinqiu Sun, Yanning Zhang
HVI: A New Color Space for Low-light Image Enhancement
Qingsen Yan, Yixu Feng, and Cheng Zhang contributed equally to this work
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Low-Light Image Enhancement (LLIE) is a crucial computer vision task that aims to restore detailed visual information from corrupted low-light images. Many existing LLIE methods are based on the standard RGB (sRGB) space and often produce color bias and brightness artifacts due to the inherent high color sensitivity of sRGB. While converting the images to the Hue, Saturation and Value (HSV) color space helps resolve the brightness issue, it introduces significant red and black noise artifacts. To address this issue, we propose a new color space for LLIE, namely Horizontal/Vertical-Intensity (HVI), defined by polarized HS maps and learnable intensity. The former enforces small distances for red coordinates to remove the red artifacts, while the latter compresses the low-light regions to remove the black artifacts. To fully leverage the chromatic and intensity information, a novel Color and Intensity Decoupling Network (CIDNet) is further introduced to learn an accurate photometric mapping function under different lighting conditions in the HVI space. Comprehensive results from benchmark and ablation experiments show that the proposed HVI color space with CIDNet outperforms the state-of-the-art methods on 10 datasets. The code is available at https://github.com/Fediory/HVI-CIDNet.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 16:59:51 GMT" }, { "version": "v2", "created": "Fri, 28 Feb 2025 11:13:24 GMT" } ]
2025-03-03T00:00:00
[ [ "Yan", "Qingsen", "" ], [ "Feng", "Yixu", "" ], [ "Zhang", "Cheng", "" ], [ "Pang", "Guansong", "" ], [ "Shi", "Kangbiao", "" ], [ "Wu", "Peng", "" ], [ "Dong", "Wei", "" ], [ "Sun", "Jinqiu", "" ], [ "Zhang", "Yanning", "" ] ]
TITLE: HVI: A New Color Space for Low-light Image Enhancement ABSTRACT: Low-Light Image Enhancement (LLIE) is a crucial computer vision task that aims to restore detailed visual information from corrupted low-light images. Many existing LLIE methods are based on the standard RGB (sRGB) space and often produce color bias and brightness artifacts due to the inherent high color sensitivity of sRGB. While converting the images to the Hue, Saturation and Value (HSV) color space helps resolve the brightness issue, it introduces significant red and black noise artifacts. To address this issue, we propose a new color space for LLIE, namely Horizontal/Vertical-Intensity (HVI), defined by polarized HS maps and learnable intensity. The former enforces small distances for red coordinates to remove the red artifacts, while the latter compresses the low-light regions to remove the black artifacts. To fully leverage the chromatic and intensity information, a novel Color and Intensity Decoupling Network (CIDNet) is further introduced to learn an accurate photometric mapping function under different lighting conditions in the HVI space. Comprehensive results from benchmark and ablation experiments show that the proposed HVI color space with CIDNet outperforms the state-of-the-art methods on 10 datasets. The code is available at https://github.com/Fediory/HVI-CIDNet.
no_new_dataset
0.953751
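As rough intuition for what a polarized hue/saturation representation looks like (2502.20272 above), the sketch below maps HSV hue and saturation to Cartesian (horizontal/vertical) coordinates plus an intensity channel. This is only a fixed-form approximation of the idea: the actual HVI space uses polarized HS maps with a learnable intensity component, which a closed-form conversion like this cannot capture.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def rgb_to_hvi_like(rgb: np.ndarray) -> np.ndarray:
    """Hedged sketch of a polarized HS representation.

    rgb: float array in [0, 1], shape (..., 3).
    Returns an array of the same shape: (horizontal, vertical, intensity).
    """
    hsv = rgb_to_hsv(rgb)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    # Place each pixel's chroma on a circle: saturation is the radius,
    # hue is the angle, so hue wrap-around (red at 0 and 1) becomes continuous.
    return np.stack([s * np.cos(2 * np.pi * h),
                     s * np.sin(2 * np.pi * h),
                     v], axis=-1)
```

Representing hue as an angle rather than a scalar is what removes the discontinuity at red, which is one plausible reading of why polarized HS maps reduce red artifacts.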
2502.20405
Yicheng Fu
James Begin, Namit Agrawal, Eshan Singh, Yicheng Fu, Sean O'Brien, Vasu Sharma, Kevin Zhu
Pause-Tuning for Long-Context Comprehension: A Lightweight Approach to LLM Attention Recalibration
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
LLMs have demonstrated remarkable proficiency in understanding tasks but continue to struggle with long-context comprehension, particularly with content located in the middle of extensive inputs. This limitation, known as the Lost-in-the-Middle (LITM) problem, hinders models from fully processing and utilizing information across lengthy contexts. To address this issue, we introduce pause-tuning, a technique that redistributes attention to enhance comprehension of long-context inputs. Our approach involves fine-tuning language models on datasets with artificially inserted pause tokens, which serve to segment the input into smaller, more manageable parts. We evaluate pause-tuning against alternative approaches using the Needle-in-a-Haystack benchmark, where models must retrieve information embedded within contexts of up to 128K tokens. Experimental results demonstrate significant performance gains, with the LLaMA 3.2 3B Instruct model and the LLaMA 3.1 8B Instruct model improving by 10.61% and 3.57% respectively on average, suggesting that pause-tuning successfully enhances attention redistribution and improves long-context retention. The code and data are available at https://anonymous.4open.science/r/LITM-PauseTokens-7357.
[ { "version": "v1", "created": "Sat, 1 Feb 2025 21:47:15 GMT" } ]
2025-03-03T00:00:00
[ [ "Begin", "James", "" ], [ "Agrawal", "Namit", "" ], [ "Singh", "Eshan", "" ], [ "Fu", "Yicheng", "" ], [ "O'Brien", "Sean", "" ], [ "Sharma", "Vasu", "" ], [ "Zhu", "Kevin", "" ] ]
TITLE: Pause-Tuning for Long-Context Comprehension: A Lightweight Approach to LLM Attention Recalibration ABSTRACT: LLMs have demonstrated remarkable proficiency in understanding tasks but continue to struggle with long-context comprehension, particularly with content located in the middle of extensive inputs. This limitation, known as the Lost-in-the-Middle (LITM) problem, hinders models from fully processing and utilizing information across lengthy contexts. To address this issue, we introduce pause-tuning, a technique that redistributes attention to enhance comprehension of long-context inputs. Our approach involves fine-tuning language models on datasets with artificially inserted pause tokens, which serve to segment the input into smaller, more manageable parts. We evaluate pause-tuning against alternative approaches using the Needle-in-a-Haystack benchmark, where models must retrieve information embedded within contexts of up to 128K tokens. Experimental results demonstrate significant performance gains, with the LLaMA 3.2 3B Instruct model and the LLaMA 3.1 8B Instruct model improving by 10.61% and 3.57% respectively on average, suggesting that pause-tuning successfully enhances attention redistribution and improves long-context retention. The code and data are available at https://anonymous.4open.science/r/LITM-PauseTokens-7357.
no_new_dataset
0.947721
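Preparing training examples for pause-tuning (2502.20405 above) amounts to segmenting long inputs with a special token. The sketch below shows one plausible preprocessing step; the token string `<|pause|>`, the fixed insertion interval, and the GPT-2 tokenizer (used here only so the snippet runs without gated model access) are all assumptions - the paper's actual token and placement policy are not specified in the abstract.

```python
from transformers import AutoTokenizer

PAUSE = "<|pause|>"  # hypothetical pause-token string

tok = AutoTokenizer.from_pretrained("gpt2")
tok.add_special_tokens({"additional_special_tokens": [PAUSE]})
# After loading the model: model.resize_token_embeddings(len(tok))

def insert_pauses(text: str, every_n_tokens: int = 256) -> str:
    """Rebuild the text with a pause token after every fixed-size chunk."""
    ids = tok(text, add_special_tokens=False).input_ids
    chunks = [ids[i:i + every_n_tokens]
              for i in range(0, len(ids), every_n_tokens)]
    return PAUSE.join(tok.decode(c) for c in chunks)
```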
2502.20411
Saeed Reza Kheradpisheh
Mohammadnavid Ghader, Saeed Reza Kheradpisheh, Bahar Farahani, Mahmood Fazlali
Backpropagation-free Spiking Neural Networks with the Forward-Forward Algorithm
null
null
null
null
cs.NE cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Spiking Neural Networks (SNNs) offer a biologically inspired computational paradigm that emulates neuronal activity through discrete spike-based processing. Despite their advantages, training SNNs with traditional backpropagation (BP) remains challenging due to computational inefficiencies and a lack of biological plausibility. This study explores the Forward-Forward (FF) algorithm as an alternative learning framework for SNNs. Unlike backpropagation, which relies on forward and backward passes, the FF algorithm employs two forward passes, enabling localized learning, enhanced computational efficiency, and improved compatibility with neuromorphic hardware. We introduce an FF-based SNN training framework and evaluate its performance across both non-spiking (MNIST, Fashion-MNIST, CIFAR-10) and spiking (Neuro-MNIST, SHD) datasets. Experimental results demonstrate that our model surpasses existing FF-based SNNs by over 5% on MNIST and Fashion-MNIST while achieving accuracy comparable to state-of-the-art backpropagation-trained SNNs. On more complex tasks such as CIFAR-10 and SHD, our approach outperforms other SNN models by up to 6% and remains competitive with leading backpropagation-trained SNNs. These findings highlight the FF algorithm's potential to advance SNN training methodologies and neuromorphic computing by addressing key limitations of backpropagation.
[ { "version": "v1", "created": "Wed, 19 Feb 2025 12:44:26 GMT" } ]
2025-03-03T00:00:00
[ [ "Ghader", "Mohammadnavid", "" ], [ "Kheradpisheh", "Saeed Reza", "" ], [ "Farahani", "Bahar", "" ], [ "Fazlali", "Mahmood", "" ] ]
TITLE: Backpropagation-free Spiking Neural Networks with the Forward-Forward Algorithm ABSTRACT: Spiking Neural Networks (SNNs) offer a biologically inspired computational paradigm that emulates neuronal activity through discrete spike-based processing. Despite their advantages, training SNNs with traditional backpropagation (BP) remains challenging due to computational inefficiencies and a lack of biological plausibility. This study explores the Forward-Forward (FF) algorithm as an alternative learning framework for SNNs. Unlike backpropagation, which relies on forward and backward passes, the FF algorithm employs two forward passes, enabling localized learning, enhanced computational efficiency, and improved compatibility with neuromorphic hardware. We introduce an FF-based SNN training framework and evaluate its performance across both non-spiking (MNIST, Fashion-MNIST, CIFAR-10) and spiking (Neuro-MNIST, SHD) datasets. Experimental results demonstrate that our model surpasses existing FF-based SNNs by over 5% on MNIST and Fashion-MNIST while achieving accuracy comparable to state-of-the-art backpropagation-trained SNNs. On more complex tasks such as CIFAR-10 and SHD, our approach outperforms other SNN models by up to 6% and remains competitive with leading backpropagation-trained SNNs. These findings highlight the FF algorithm's potential to advance SNN training methodologies and neuromorphic computing by addressing key limitations of backpropagation.
no_new_dataset
0.948442
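The Forward-Forward training rule referenced in 2502.20411 above is compact enough to sketch. The layer below implements Hinton's original (non-spiking) FF objective - goodness as mean squared activation, pushed above a threshold for positive data and below it for negative data - as a hedged reference point; the paper's spiking adaptation necessarily differs in its neuron model and datasets.

```python
import torch
import torch.nn.functional as F

class FFLayer(torch.nn.Module):
    """One locally-trained layer of a Forward-Forward network."""

    def __init__(self, d_in: int, d_out: int, threshold: float = 2.0, lr: float = 0.03):
        super().__init__()
        self.linear = torch.nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize the input so the previous layer's goodness cannot leak through.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def train_step(self, x_pos: torch.Tensor, x_neg: torch.Tensor):
        g_pos = self.forward(x_pos).pow(2).mean(dim=1)
        g_neg = self.forward(x_neg).pow(2).mean(dim=1)
        # Push positive goodness above the threshold, negative goodness below it.
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach so no gradient flows between layers: two forward passes, no BP.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```

Stacking several such layers and calling `train_step` layer by layer is the whole training loop; no global backward pass is ever taken.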
2502.20422
Zicheng Cai
Zicheng Cai, Yaohua Tang, Yutao Lai, Hua Wang, Zhi Chen and Hao Chen
SEKI: Self-Evolution and Knowledge Inspiration based Neural Architecture Search via Large Language Models
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
We introduce SEKI, a novel large language model (LLM)-based neural architecture search (NAS) method. Inspired by the chain-of-thought (CoT) paradigm in modern LLMs, SEKI operates in two key stages: self-evolution and knowledge distillation. In the self-evolution stage, LLMs initially lack sufficient reference examples, so we implement an iterative refinement mechanism that enhances architectures based on performance feedback. Over time, this process accumulates a repository of high-performance architectures. In the knowledge distillation stage, LLMs analyze common patterns among these architectures to generate new, optimized designs. Combining these two stages, SEKI greatly leverages the capacity of LLMs for NAS without requiring any domain-specific data. Experimental results show that SEKI achieves state-of-the-art (SOTA) performance across various datasets and search spaces while requiring only 0.05 GPU-days, outperforming existing methods in both efficiency and accuracy. Furthermore, SEKI demonstrates strong generalization capabilities, achieving SOTA-competitive results across multiple tasks.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 09:17:49 GMT" } ]
2025-03-03T00:00:00
[ [ "Cai", "Zicheng", "" ], [ "Tang", "Yaohua", "" ], [ "Lai", "Yutao", "" ], [ "Wang", "Hua", "" ], [ "Chen", "Zhi", "" ], [ "Chen", "Hao", "" ] ]
TITLE: SEKI: Self-Evolution and Knowledge Inspiration based Neural Architecture Search via Large Language Models ABSTRACT: We introduce SEKI, a novel large language model (LLM)-based neural architecture search (NAS) method. Inspired by the chain-of-thought (CoT) paradigm in modern LLMs, SEKI operates in two key stages: self-evolution and knowledge distillation. In the self-evolution stage, LLMs initially lack sufficient reference examples, so we implement an iterative refinement mechanism that enhances architectures based on performance feedback. Over time, this process accumulates a repository of high-performance architectures. In the knowledge distillation stage, LLMs analyze common patterns among these architectures to generate new, optimized designs. Combining these two stages, SEKI greatly leverages the capacity of LLMs for NAS without requiring any domain-specific data. Experimental results show that SEKI achieves state-of-the-art (SOTA) performance across various datasets and search spaces while requiring only 0.05 GPU-days, outperforming existing methods in both efficiency and accuracy. Furthermore, SEKI demonstrates strong generalization capabilities, achieving SOTA-competitive results across multiple tasks.
no_new_dataset
0.942029
2502.20480
Chaoyu Li
Chaoyu Li, Sid Padmanabhuni, Maryam Cheema, Hasti Seifi, Pooyan Fazli
VideoA11y: Method and Dataset for Accessible Video Description
ACM CHI 2025
null
null
null
cs.CV cs.HC
http://creativecommons.org/licenses/by/4.0/
Video descriptions are crucial for blind and low vision (BLV) users to access visual content. However, current artificial intelligence models for generating descriptions often fall short due to limitations in the quality of human annotations within training datasets, resulting in descriptions that do not fully meet BLV users' needs. To address this gap, we introduce VideoA11y, an approach that leverages multimodal large language models (MLLMs) and video accessibility guidelines to generate descriptions tailored for BLV individuals. Using this method, we have curated VideoA11y-40K, the largest and most comprehensive dataset of 40,000 videos described for BLV users. Rigorous experiments across 15 video categories, involving 347 sighted participants, 40 BLV participants, and seven professional describers, showed that VideoA11y descriptions outperform novice human annotations and are comparable to trained human annotations in clarity, accuracy, objectivity, descriptiveness, and user satisfaction. We evaluated models on VideoA11y-40K using both standard and custom metrics, demonstrating that MLLMs fine-tuned on this dataset produce high-quality accessible descriptions. Code and dataset are available at https://people-robots.github.io/VideoA11y.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 19:44:31 GMT" } ]
2025-03-03T00:00:00
[ [ "Li", "Chaoyu", "" ], [ "Padmanabhuni", "Sid", "" ], [ "Cheema", "Maryam", "" ], [ "Seifi", "Hasti", "" ], [ "Fazli", "Pooyan", "" ] ]
TITLE: VideoA11y: Method and Dataset for Accessible Video Description ABSTRACT: Video descriptions are crucial for blind and low vision (BLV) users to access visual content. However, current artificial intelligence models for generating descriptions often fall short due to limitations in the quality of human annotations within training datasets, resulting in descriptions that do not fully meet BLV users' needs. To address this gap, we introduce VideoA11y, an approach that leverages multimodal large language models (MLLMs) and video accessibility guidelines to generate descriptions tailored for BLV individuals. Using this method, we have curated VideoA11y-40K, the largest and most comprehensive dataset of 40,000 videos described for BLV users. Rigorous experiments across 15 video categories, involving 347 sighted participants, 40 BLV participants, and seven professional describers, showed that VideoA11y descriptions outperform novice human annotations and are comparable to trained human annotations in clarity, accuracy, objectivity, descriptiveness, and user satisfaction. We evaluated models on VideoA11y-40K using both standard and custom metrics, demonstrating that MLLMs fine-tuned on this dataset produce high-quality accessible descriptions. Code and dataset are available at https://people-robots.github.io/VideoA11y.
new_dataset
0.961134
2502.20493
Vijay Srinivas Tida
Vijay Srinivas Tida, Md Imran Hossen, Liqun Shan, Sai Venkatesh Chilukoti, Sonya Hsu, Xiali Hei
Unified Kernel-Segregated Transpose Convolution Operation
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
The optimization of the transpose convolution layer for deep learning applications is achieved with the kernel segregation mechanism. However, kernel segregation has disadvantages, such as computing extra elements to obtain the output feature map with odd dimensions while launching a thread. To mitigate this problem, we introduce a unified kernel segregation approach that limits the usage of memory and computational resources by employing one unified kernel to execute four sub-kernels. The findings reveal that the suggested approach achieves an average computational speedup of 2.03x (3.89x) when tested on specific datasets with an RTX 2070 GPU (Intel Xeon CPU). The ablation study shows an average computational speedup of 3.5x when evaluating the transpose convolution layers from well-known Generative Adversarial Networks (GANs). The implementation of the proposed method for the transpose convolution layers in the EB-GAN model demonstrates significant memory savings of up to 35 MB.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 19:56:25 GMT" } ]
2025-03-03T00:00:00
[ [ "Tida", "Vijay Srinivas", "" ], [ "Hossen", "Md Imran", "" ], [ "Shan", "Liqun", "" ], [ "Chilukoti", "Sai Venkatesh", "" ], [ "Hsu", "Sonya", "" ], [ "Hei", "Xiali", "" ] ]
TITLE: Unified Kernel-Segregated Transpose Convolution Operation ABSTRACT: The optimization of the transpose convolution layer for deep learning applications is achieved with the kernel segregation mechanism. However, kernel segregation has disadvantages, such as computing extra elements to obtain the output feature map with odd dimensions while launching a thread. To mitigate this problem, we introduce a unified kernel segregation approach that limits the usage of memory and computational resources by employing one unified kernel to execute four sub-kernels. The findings reveal that the suggested approach achieves an average computational speedup of 2.03x (3.89x) when tested on specific datasets with an RTX 2070 GPU (Intel Xeon CPU). The ablation study shows an average computational speedup of 3.5x when evaluating the transpose convolution layers from well-known Generative Adversarial Networks (GANs). The implementation of the proposed method for the transpose convolution layers in the EB-GAN model demonstrates significant memory savings of up to 35 MB.
no_new_dataset
0.949342
2502.20504
Julius Broomfield
Julius Broomfield, Kartik Sharma, Srijan Kumar
A Thousand Words or An Image: Studying the Influence of Persona Modality in Multimodal LLMs
null
null
null
null
cs.CL cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) have recently demonstrated remarkable advancements in embodying diverse personas, enhancing their effectiveness as conversational agents and virtual assistants. Consequently, LLMs have made significant strides in processing and integrating multimodal information. However, even though human personas can be expressed in both text and image, the extent to which the modality of a persona impacts the embodiment by the LLM remains largely unexplored. In this paper, we investigate how different modalities influence the expressiveness of personas in multimodal LLMs. To this end, we create a novel modality-parallel dataset of 40 diverse personas varying in age, gender, occupation, and location. This consists of four modalities to equivalently represent a persona: image-only, text-only, a combination of image and small text, and typographical images, where text is visually stylized to convey persona-related attributes. We then create a systematic evaluation framework with 60 questions and corresponding metrics to assess how well LLMs embody each persona across its attributes and scenarios. Comprehensive experiments on 5 multimodal LLMs show that personas represented by detailed text show more linguistic habits, while typographical images often show more consistency with the persona. Our results reveal that LLMs often overlook persona-specific details conveyed through images, highlighting underlying limitations and paving the way for future research to bridge this gap. We release the data and code at https://github.com/claws-lab/persona-modality.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 20:25:00 GMT" } ]
2025-03-03T00:00:00
[ [ "Broomfield", "Julius", "" ], [ "Sharma", "Kartik", "" ], [ "Kumar", "Srijan", "" ] ]
TITLE: A Thousand Words or An Image: Studying the Influence of Persona Modality in Multimodal LLMs ABSTRACT: Large language models (LLMs) have recently demonstrated remarkable advancements in embodying diverse personas, enhancing their effectiveness as conversational agents and virtual assistants. Consequently, LLMs have made significant strides in processing and integrating multimodal information. However, even though human personas can be expressed in both text and image, the extent to which the modality of a persona impacts the embodiment by the LLM remains largely unexplored. In this paper, we investigate how different modalities influence the expressiveness of personas in multimodal LLMs. To this end, we create a novel modality-parallel dataset of 40 diverse personas varying in age, gender, occupation, and location. This consists of four modalities to equivalently represent a persona: image-only, text-only, a combination of image and small text, and typographical images, where text is visually stylized to convey persona-related attributes. We then create a systematic evaluation framework with 60 questions and corresponding metrics to assess how well LLMs embody each persona across its attributes and scenarios. Comprehensive experiments on 5 multimodal LLMs show that personas represented by detailed text show more linguistic habits, while typographical images often show more consistency with the persona. Our results reveal that LLMs often overlook persona-specific details conveyed through images, highlighting underlying limitations and paving the way for future research to bridge this gap. We release the data and code at https://github.com/claws-lab/persona-modality.
new_dataset
0.961929
2502.20508
Soumyabrata Chaudhuri
Soumyabrata Chaudhuri, Pranav Purkar, Ritwik Raghav, Shubhojit Mallick, Manish Gupta, Abhik Jana, Shreya Ghosh
TripCraft: A Benchmark for Spatio-Temporally Fine Grained Travel Planning
27 pages, 18 Tables and 6 Figures
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Recent advancements in probing Large Language Models (LLMs) have explored their latent potential as personalized travel planning agents, yet existing benchmarks remain limited in real-world applicability. Existing datasets, such as TravelPlanner and TravelPlanner+, suffer from reliance on semi-synthetic data, spatial inconsistencies, and a lack of key travel constraints, making them inadequate for practical itinerary generation. To address these gaps, we introduce TripCraft, a spatiotemporally coherent travel planning dataset that integrates real-world constraints, including public transit schedules, event availability, diverse attraction categories, and user personas for enhanced personalization. To evaluate LLM-generated plans beyond existing binary validation methods, we propose five continuous evaluation metrics, namely Temporal Meal Score, Temporal Attraction Score, Spatial Score, Ordering Score, and Persona Score, which assess itinerary quality across multiple dimensions. Our parameter-informed setting significantly enhances meal scheduling, improving the Temporal Meal Score from 61% to 80% in a 7-day scenario. TripCraft establishes a new benchmark for LLM-driven personalized travel planning, offering a more realistic, constraint-aware framework for itinerary generation. The dataset and codebase will be made publicly available upon acceptance.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 20:33:28 GMT" } ]
2025-03-03T00:00:00
[ [ "Chaudhuri", "Soumyabrata", "" ], [ "Purkar", "Pranav", "" ], [ "Raghav", "Ritwik", "" ], [ "Mallick", "Shubhojit", "" ], [ "Gupta", "Manish", "" ], [ "Jana", "Abhik", "" ], [ "Ghosh", "Shreya", "" ] ]
TITLE: TripCraft: A Benchmark for Spatio-Temporally Fine Grained Travel Planning ABSTRACT: Recent advancements in probing Large Language Models (LLMs) have explored their latent potential as personalized travel planning agents, yet existing benchmarks remain limited in real-world applicability. Existing datasets, such as TravelPlanner and TravelPlanner+, suffer from reliance on semi-synthetic data, spatial inconsistencies, and a lack of key travel constraints, making them inadequate for practical itinerary generation. To address these gaps, we introduce TripCraft, a spatiotemporally coherent travel planning dataset that integrates real-world constraints, including public transit schedules, event availability, diverse attraction categories, and user personas for enhanced personalization. To evaluate LLM-generated plans beyond existing binary validation methods, we propose five continuous evaluation metrics, namely Temporal Meal Score, Temporal Attraction Score, Spatial Score, Ordering Score, and Persona Score, which assess itinerary quality across multiple dimensions. Our parameter-informed setting significantly enhances meal scheduling, improving the Temporal Meal Score from 61% to 80% in a 7-day scenario. TripCraft establishes a new benchmark for LLM-driven personalized travel planning, offering a more realistic, constraint-aware framework for itinerary generation. The dataset and codebase will be made publicly available upon acceptance.
new_dataset
0.958265
2502.20513
Smit Desai
Smit Desai, Mateusz Dubiel, Nima Zargham, Thomas Mildner, Laura Spillner
Personas Evolved: Designing Ethical LLM-Based Conversational Agent Personalities
null
null
null
null
cs.HC cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The emergence of Large Language Models (LLMs) has revolutionized Conversational User Interfaces (CUIs), enabling more dynamic, context-aware, and human-like interactions across diverse domains, from social sciences to healthcare. However, the rapid adoption of LLM-based personas raises critical ethical and practical concerns, including bias, manipulation, and unforeseen social consequences. Unlike traditional CUIs, where personas are carefully designed with clear intent, LLM-based personas generate responses dynamically from vast datasets, making their behavior less predictable and harder to govern. This workshop aims to bridge the gap between CUI and broader AI communities by fostering a cross-disciplinary dialogue on the responsible design and evaluation of LLM-based personas. Bringing together researchers, designers, and practitioners, we will explore best practices, develop ethical guidelines, and promote frameworks that ensure transparency, inclusivity, and user-centered interactions. By addressing these challenges collaboratively, we seek to shape the future of LLM-driven CUIs in ways that align with societal values and expectations.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 20:46:54 GMT" } ]
2025-03-03T00:00:00
[ [ "Desai", "Smit", "" ], [ "Dubiel", "Mateusz", "" ], [ "Zargham", "Nima", "" ], [ "Mildner", "Thomas", "" ], [ "Spillner", "Laura", "" ] ]
TITLE: Personas Evolved: Designing Ethical LLM-Based Conversational Agent Personalities ABSTRACT: The emergence of Large Language Models (LLMs) has revolutionized Conversational User Interfaces (CUIs), enabling more dynamic, context-aware, and human-like interactions across diverse domains, from social sciences to healthcare. However, the rapid adoption of LLM-based personas raises critical ethical and practical concerns, including bias, manipulation, and unforeseen social consequences. Unlike traditional CUIs, where personas are carefully designed with clear intent, LLM-based personas generate responses dynamically from vast datasets, making their behavior less predictable and harder to govern. This workshop aims to bridge the gap between CUI and broader AI communities by fostering a cross-disciplinary dialogue on the responsible design and evaluation of LLM-based personas. Bringing together researchers, designers, and practitioners, we will explore best practices, develop ethical guidelines, and promote frameworks that ensure transparency, inclusivity, and user-centered interactions. By addressing these challenges collaboratively, we seek to shape the future of LLM-driven CUIs in ways that align with societal values and expectations.
no_new_dataset
0.94801
2502.20516
Hu Wang
Hu Wang, Ibrahim Almakky, Congbo Ma, Numan Saeed, Mohammad Yaqub
In-Model Merging for Enhancing the Robustness of Medical Imaging Classification Models
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Model merging is an effective strategy for combining multiple models to enhance performance, and it is more efficient than ensemble learning as it does not introduce extra computation at inference. However, limited research explores whether the merging process can occur within one model and enhance the model's robustness, which is particularly critical in the medical imaging domain. In this paper, we are the first to propose in-model merging (InMerge), a novel approach that enhances a model's robustness by selectively merging similar convolutional kernels in the deep layers of a single convolutional neural network (CNN) during the training process for classification. We also analytically reveal important characteristics that affect how in-model merging should be performed, serving as an insightful reference for the community. We demonstrate the feasibility and effectiveness of this technique for different CNN architectures on 4 prevalent datasets. The proposed InMerge-trained model surpasses the typically-trained model by a substantial margin. The code will be made public.
[ { "version": "v1", "created": "Thu, 27 Feb 2025 20:52:55 GMT" } ]
2025-03-03T00:00:00
[ [ "Wang", "Hu", "" ], [ "Almakky", "Ibrahim", "" ], [ "Ma", "Congbo", "" ], [ "Saeed", "Numan", "" ], [ "Yaqub", "Mohammad", "" ] ]
TITLE: In-Model Merging for Enhancing the Robustness of Medical Imaging Classification Models ABSTRACT: Model merging is an effective strategy for combining multiple models to enhance performance, and it is more efficient than ensemble learning as it does not introduce extra computation at inference. However, limited research explores whether the merging process can occur within one model and enhance the model's robustness, which is particularly critical in the medical imaging domain. In this paper, we are the first to propose in-model merging (InMerge), a novel approach that enhances a model's robustness by selectively merging similar convolutional kernels in the deep layers of a single convolutional neural network (CNN) during the training process for classification. We also analytically reveal important characteristics that affect how in-model merging should be performed, serving as an insightful reference for the community. We demonstrate the feasibility and effectiveness of this technique for different CNN architectures on 4 prevalent datasets. The proposed InMerge-trained model surpasses the typically-trained model by a substantial margin. The code will be made public.
no_new_dataset
0.951278
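The notion of merging similar kernels inside one network (InMerge, 2502.20516 above) can be illustrated with a small utility. The cosine-similarity criterion, the 0.95 threshold, and the in-place pairwise averaging below are guesses at one reasonable selection rule - the abstract does not spell out InMerge's actual criterion, and this sketch should not be read as the authors' method.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def merge_similar_kernels(conv: torch.nn.Conv2d, sim_threshold: float = 0.95):
    """Average pairs of output-channel kernels whose cosine similarity
    exceeds a threshold (an assumed criterion, not the paper's rule)."""
    w = conv.weight.data                     # shape: [out_c, in_c, k, k]
    flat = F.normalize(w.flatten(1), dim=1)  # one unit vector per kernel
    sim = flat @ flat.t()                    # pairwise cosine similarities
    merged = set()
    for i in range(w.size(0)):
        if i in merged:
            continue
        for j in range(i + 1, w.size(0)):
            if j not in merged and sim[i, j] > sim_threshold:
                avg = (w[i] + w[j]) / 2
                w[i].copy_(avg)
                w[j].copy_(avg)
                merged.add(j)

# Usage during training, e.g. every few epochs on deep layers only:
#   for m in model.modules():
#       if isinstance(m, torch.nn.Conv2d):
#           merge_similar_kernels(m)
```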