id: string
submitter: string
authors: string
title: string
comments: string
journal-ref: string
doi: string
report-no: string
categories: string
license: string
abstract: string
versions: list
update_date: timestamp[s]
authors_parsed: sequence
prompt: string
2411.10332
Yongliang Wu
Yongliang Wu, Xinting Hu, Yuyang Sun, Yizhou Zhou, Wenbo Zhu, Fengyun Rao, Bernt Schiele, Xu Yang
Number it: Temporal Grounding Videos like Flipping Manga
Accepted by CVPR 2025
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video Large Language Models (Vid-LLMs) have made remarkable advancements in comprehending video content for QA dialogue. However, they struggle to extend this visual understanding to tasks requiring precise temporal localization, known as Video Temporal Grounding (VTG). To address this gap, we introduce Number-Prompt (NumPro), a novel method that empowers Vid-LLMs to bridge visual comprehension with temporal grounding by adding unique numerical identifiers to each video frame. Treating a video as a sequence of numbered frame images, NumPro transforms VTG into an intuitive process: flipping through manga panels in sequence. This allows Vid-LLMs to "read" event timelines, accurately linking visual content with corresponding temporal information. Our experiments demonstrate that NumPro significantly boosts VTG performance of top-tier Vid-LLMs without additional computational cost. Furthermore, fine-tuning on a NumPro-enhanced dataset defines a new state-of-the-art for VTG, surpassing previous top-performing methods by up to 6.9% in mIoU for moment retrieval and 8.5% in mAP for highlight detection. The code will be available at https://github.com/yongliang-wu/NumPro.
[ { "version": "v1", "created": "Fri, 15 Nov 2024 16:32:34 GMT" }, { "version": "v2", "created": "Thu, 28 Nov 2024 02:57:24 GMT" }, { "version": "v3", "created": "Fri, 21 Mar 2025 12:40:26 GMT" } ]
2025-03-24T00:00:00
[ [ "Wu", "Yongliang", "" ], [ "Hu", "Xinting", "" ], [ "Sun", "Yuyang", "" ], [ "Zhou", "Yizhou", "" ], [ "Zhu", "Wenbo", "" ], [ "Rao", "Fengyun", "" ], [ "Schiele", "Bernt", "" ], [ "Yang", "Xu", "" ] ]
TITLE: Number it: Temporal Grounding Videos like Flipping Manga ABSTRACT: Video Large Language Models (Vid-LLMs) have made remarkable advancements in comprehending video content for QA dialogue. However, they struggle to extend this visual understanding to tasks requiring precise temporal localization, known as Video Temporal Grounding (VTG). To address this gap, we introduce Number-Prompt (NumPro), a novel method that empowers Vid-LLMs to bridge visual comprehension with temporal grounding by adding unique numerical identifiers to each video frame. Treating a video as a sequence of numbered frame images, NumPro transforms VTG into an intuitive process: flipping through manga panels in sequence. This allows Vid-LLMs to "read" event timelines, accurately linking visual content with corresponding temporal information. Our experiments demonstrate that NumPro significantly boosts VTG performance of top-tier Vid-LLMs without additional computational cost. Furthermore, fine-tuning on a NumPro-enhanced dataset defines a new state-of-the-art for VTG, surpassing previous top-performing methods by up to 6.9% in mIoU for moment retrieval and 8.5% in mAP for highlight detection. The code will be available at https://github.com/yongliang-wu/NumPro.
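NumPro's core operation, stamping a visible frame index onto each frame, is simple to reproduce. Below is a minimal sketch assuming frames arrive as a list of RGB PIL images; the overlay position, size, and color are illustrative choices, not the paper's exact settings.

```python
# Hypothetical sketch of NumPro-style frame numbering; overlay placement
# and color are assumptions, not the paper's exact configuration.
from PIL import Image, ImageDraw

def number_frames(frames):
    """Stamp a unique numerical identifier onto each video frame."""
    numbered = []
    for idx, frame in enumerate(frames):
        frame = frame.copy()
        draw = ImageDraw.Draw(frame)
        w, h = frame.size
        # Draw the frame index in the bottom-right corner.
        draw.text((w - 60, h - 40), str(idx), fill=(255, 0, 0))
        numbered.append(frame)
    return numbered
```

Feeding the numbered frames to a Vid-LLM lets the model report event boundaries as frame numbers, which map back to timestamps given the sampling rate.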
2411.12287
Dongyoung Go
Dongyoung Go, Taesun Whang, Chanhee Lee, Hwa-Yeon Kim, Sunghoon Park, Seunghwan Ji, Jinho Kim, Dongchan Kim, Young-Bum Kim
CUE-M: Contextual Understanding and Enhanced Search with Multimodal Large Language Model
Preprint. Under review
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The integration of Retrieval-Augmented Generation (RAG) with Multimodal Large Language Models (MLLMs) has revolutionized information retrieval and expanded the practical applications of AI. However, current systems struggle to accurately interpret user intent, employ diverse retrieval strategies, and effectively filter unintended or inappropriate responses, limiting their effectiveness. This paper introduces Contextual Understanding and Enhanced Search with MLLM (CUE-M), a novel multimodal search framework that addresses these challenges through a multi-stage pipeline comprising image context enrichment, intent refinement, contextual query generation, external API integration, and relevance-based filtering. CUE-M incorporates a robust filtering pipeline combining image-based, text-based, and multimodal classifiers, dynamically adapting to instance- and category-specific concerns defined by organizational policies. Extensive experiments on real-world datasets and public benchmarks on knowledge-based VQA and safety demonstrate that CUE-M outperforms baselines and establishes new state-of-the-art results, advancing the capabilities of multimodal retrieval systems.
[ { "version": "v1", "created": "Tue, 19 Nov 2024 07:16:48 GMT" }, { "version": "v2", "created": "Fri, 6 Dec 2024 05:43:58 GMT" }, { "version": "v3", "created": "Fri, 21 Mar 2025 00:37:43 GMT" } ]
2025-03-24T00:00:00
[ [ "Go", "Dongyoung", "" ], [ "Whang", "Taesun", "" ], [ "Lee", "Chanhee", "" ], [ "Kim", "Hwa-Yeon", "" ], [ "Park", "Sunghoon", "" ], [ "Ji", "Seunghwan", "" ], [ "Kim", "Jinho", "" ], [ "Kim", "Dongchan", "" ], [ "Kim", "Young-Bum", "" ] ]
TITLE: CUE-M: Contextual Understanding and Enhanced Search with Multimodal Large Language Model ABSTRACT: The integration of Retrieval-Augmented Generation (RAG) with Multimodal Large Language Models (MLLMs) has revolutionized information retrieval and expanded the practical applications of AI. However, current systems struggle to accurately interpret user intent, employ diverse retrieval strategies, and effectively filter unintended or inappropriate responses, limiting their effectiveness. This paper introduces Contextual Understanding and Enhanced Search with MLLM (CUE-M), a novel multimodal search framework that addresses these challenges through a multi-stage pipeline comprising image context enrichment, intent refinement, contextual query generation, external API integration, and relevance-based filtering. CUE-M incorporates a robust filtering pipeline combining image-based, text-based, and multimodal classifiers, dynamically adapting to instance- and category-specific concerns defined by organizational policies. Extensive experiments on real-world datasets and public benchmarks on knowledge-based VQA and safety demonstrate that CUE-M outperforms baselines and establishes new state-of-the-art results, advancing the capabilities of multimodal retrieval systems.
2411.13945
Stein Stroobants
Stein Stroobants, Christophe de Wagter, Guido C.H.E. De Croon
Neuromorphic Attitude Estimation and Control
null
null
null
null
cs.RO cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
The real-world application of small drones is mostly hampered by energy limitations. Neuromorphic computing promises extremely energy-efficient AI for autonomous flight but is still challenging to train and deploy on real robots. To reap the maximal benefits from neuromorphic computing, it is necessary to perform all autonomy functions end-to-end on a single neuromorphic chip, from low-level attitude control to high-level navigation. This research presents the first neuromorphic control system using a spiking neural network (SNN) to effectively map a drone's raw sensory input directly to motor commands. We apply this method to low-level attitude estimation and control for a quadrotor, deploying the SNN on a tiny Crazyflie. We propose a modular SNN, separately training and then merging estimation and control sub-networks. The SNN is trained with imitation learning, using a flight dataset of sensory-motor pairs. Post-training, the network is deployed on the Crazyflie, issuing control commands from sensor inputs at 500 Hz. Furthermore, we augmented the training data by flying a controller with additional excitation and by time-shifting the target data to enhance the predictive capabilities of the SNN. On the real drone, the perception-to-control SNN tracks attitude commands with an average error of 3.0 degrees, compared to 2.7 degrees for the regular flight stack. We also show the benefits of the proposed learning modifications for reducing the average tracking error and reducing oscillations. Our work shows the feasibility of performing neuromorphic end-to-end control, laying the basis for highly energy-efficient and low-latency neuromorphic autopilots.
[ { "version": "v1", "created": "Thu, 21 Nov 2024 08:54:45 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 07:57:38 GMT" } ]
2025-03-24T00:00:00
[ [ "Stroobants", "Stein", "" ], [ "de Wagter", "Christophe", "" ], [ "De Croon", "Guido C. H. E.", "" ] ]
TITLE: Neuromorphic Attitude Estimation and Control ABSTRACT: The real-world application of small drones is mostly hampered by energy limitations. Neuromorphic computing promises extremely energy-efficient AI for autonomous flight but is still challenging to train and deploy on real robots. To reap the maximal benefits from neuromorphic computing, it is necessary to perform all autonomy functions end-to-end on a single neuromorphic chip, from low-level attitude control to high-level navigation. This research presents the first neuromorphic control system using a spiking neural network (SNN) to effectively map a drone's raw sensory input directly to motor commands. We apply this method to low-level attitude estimation and control for a quadrotor, deploying the SNN on a tiny Crazyflie. We propose a modular SNN, separately training and then merging estimation and control sub-networks. The SNN is trained with imitation learning, using a flight dataset of sensory-motor pairs. Post-training, the network is deployed on the Crazyflie, issuing control commands from sensor inputs at 500 Hz. Furthermore, we augmented the training data by flying a controller with additional excitation and by time-shifting the target data to enhance the predictive capabilities of the SNN. On the real drone, the perception-to-control SNN tracks attitude commands with an average error of 3.0 degrees, compared to 2.7 degrees for the regular flight stack. We also show the benefits of the proposed learning modifications for reducing the average tracking error and reducing oscillations. Our work shows the feasibility of performing neuromorphic end-to-end control, laying the basis for highly energy-efficient and low-latency neuromorphic autopilots.
2411.16173
Junho Kim
Junho Kim, Hyunjun Kim, Hosu Lee, Yong Man Ro
SALOVA: Segment-Augmented Long Video Assistant for Targeted Retrieval and Routing in Long-Form Video Analysis
Project page: https://ivy-lvlm.github.io/SALOVA/
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite advances in Large Multi-modal Models, applying them to long and untrimmed video content remains challenging due to limitations in context length and substantial memory overhead. These constraints often lead to significant information loss and reduced relevance in the model responses. With the exponential growth of video data across web platforms, understanding long-form video is crucial for advancing generalized intelligence. In this paper, we introduce SALOVA: Segment-Augmented LOng Video Assistant, a novel video-LLM framework designed to enhance the comprehension of lengthy video content through a targeted retrieval process. To achieve this, we address two main challenges: (i) We present the SceneWalk dataset, a high-quality collection of 87.8K long videos, each densely captioned at the segment level to enable models to capture scene continuity and maintain rich descriptive context. (ii) We develop robust architectural designs integrating a dynamic routing mechanism and a spatio-temporal projector to efficiently retrieve and process relevant video segments based on user queries. Our framework mitigates the limitations of current video-LMMs by allowing for precise identification and retrieval of relevant video segments in response to queries, thereby improving the contextual relevance of the generated responses. Through extensive experiments, SALOVA demonstrates enhanced capability in processing complex long-form videos, maintaining contextual integrity across extended sequences.
[ { "version": "v1", "created": "Mon, 25 Nov 2024 08:04:47 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 10:44:15 GMT" } ]
2025-03-24T00:00:00
[ [ "Kim", "Junho", "" ], [ "Kim", "Hyunjun", "" ], [ "Lee", "Hosu", "" ], [ "Ro", "Yong Man", "" ] ]
TITLE: SALOVA: Segment-Augmented Long Video Assistant for Targeted Retrieval and Routing in Long-Form Video Analysis ABSTRACT: Despite advances in Large Multi-modal Models, applying them to long and untrimmed video content remains challenging due to limitations in context length and substantial memory overhead. These constraints often lead to significant information loss and reduced relevance in the model responses. With the exponential growth of video data across web platforms, understanding long-form video is crucial for advancing generalized intelligence. In this paper, we introduce SALOVA: Segment-Augmented LOng Video Assistant, a novel video-LLM framework designed to enhance the comprehension of lengthy video content through a targeted retrieval process. To achieve this, we address two main challenges: (i) We present the SceneWalk dataset, a high-quality collection of 87.8K long videos, each densely captioned at the segment level to enable models to capture scene continuity and maintain rich descriptive context. (ii) We develop robust architectural designs integrating a dynamic routing mechanism and a spatio-temporal projector to efficiently retrieve and process relevant video segments based on user queries. Our framework mitigates the limitations of current video-LMMs by allowing for precise identification and retrieval of relevant video segments in response to queries, thereby improving the contextual relevance of the generated responses. Through extensive experiments, SALOVA demonstrates enhanced capability in processing complex long-form videos, maintaining contextual integrity across extended sequences.
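The retrieval intuition behind the framework can be illustrated with a cosine-similarity lookup over segment embeddings. This is only a sketch: SALOVA's actual routing is a learned mechanism, and the embedding arrays here are hypothetical inputs.

```python
# Toy segment retrieval by cosine similarity; SALOVA's real router is
# learned, so this only conveys the retrieval idea.
import numpy as np

def retrieve_segments(query_emb, segment_embs, top_k=3):
    """Return indices of the top_k segments most similar to the query."""
    sims = segment_embs @ query_emb / (
        np.linalg.norm(segment_embs, axis=1) * np.linalg.norm(query_emb))
    return np.argsort(-sims)[:top_k]
```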
2411.18276
Wenbo Cui
Wenbo Cui, Chengyang Zhao, Songlin Wei, Jiazhao Zhang, Haoran Geng, Yaran Chen, Haoran Li, He Wang
GAPartManip: A Large-scale Part-centric Dataset for Material-Agnostic Articulated Object Manipulation
Accepted by ICRA 2025. Project page: https://pku-epic.github.io/GAPartManip/
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
Effectively manipulating articulated objects in household scenarios is a crucial step toward achieving general embodied artificial intelligence. Mainstream research in 3D vision has primarily focused on manipulation through depth perception and pose detection. However, in real-world environments, these methods often face challenges due to imperfect depth perception, such as with transparent lids and reflective handles. Moreover, they generally lack the diversity in part-based interactions required for flexible and adaptable manipulation. To address these challenges, we introduce a large-scale part-centric dataset for articulated object manipulation that features both photo-realistic material randomization and detailed annotations of part-oriented, scene-level actionable interaction poses. We evaluate the effectiveness of our dataset by integrating it with several state-of-the-art methods for depth estimation and interaction pose prediction. Additionally, we propose a novel modular framework that delivers superior and robust performance for generalizable articulated object manipulation. Our extensive experiments demonstrate that our dataset significantly improves the performance of depth perception and actionable interaction pose prediction in both simulation and real-world scenarios. More information and demos can be found at: https://pku-epic.github.io/GAPartManip/.
[ { "version": "v1", "created": "Wed, 27 Nov 2024 12:11:23 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 07:52:16 GMT" } ]
2025-03-24T00:00:00
[ [ "Cui", "Wenbo", "" ], [ "Zhao", "Chengyang", "" ], [ "Wei", "Songlin", "" ], [ "Zhang", "Jiazhao", "" ], [ "Geng", "Haoran", "" ], [ "Chen", "Yaran", "" ], [ "Li", "Haoran", "" ], [ "Wang", "He", "" ] ]
TITLE: GAPartManip: A Large-scale Part-centric Dataset for Material-Agnostic Articulated Object Manipulation ABSTRACT: Effectively manipulating articulated objects in household scenarios is a crucial step toward achieving general embodied artificial intelligence. Mainstream research in 3D vision has primarily focused on manipulation through depth perception and pose detection. However, in real-world environments, these methods often face challenges due to imperfect depth perception, such as with transparent lids and reflective handles. Moreover, they generally lack the diversity in part-based interactions required for flexible and adaptable manipulation. To address these challenges, we introduce a large-scale part-centric dataset for articulated object manipulation that features both photo-realistic material randomization and detailed annotations of part-oriented, scene-level actionable interaction poses. We evaluate the effectiveness of our dataset by integrating it with several state-of-the-art methods for depth estimation and interaction pose prediction. Additionally, we propose a novel modular framework that delivers superior and robust performance for generalizable articulated object manipulation. Our extensive experiments demonstrate that our dataset significantly improves the performance of depth perception and actionable interaction pose prediction in both simulation and real-world scenarios. More information and demos can be found at: https://pku-epic.github.io/GAPartManip/.
2412.03473
Ziwen Li
Ziwen Li, Jiaxin Huang, Runnan Chen, Yunlong Che, Yandong Guo, Tongliang Liu, Fakhri Karray, Mingming Gong
UrbanGS: Semantic-Guided Gaussian Splatting for Urban Scene Reconstruction
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Reconstructing urban scenes is challenging due to their complex geometries and the presence of potentially dynamic objects. 3D Gaussian Splatting (3DGS)-based methods have shown strong performance, but existing approaches often incorporate manual 3D annotations to improve dynamic object modeling, which is impractical due to high labeling costs. Some methods leverage 4D Gaussian Splatting (4DGS) to represent the entire scene, but they treat static and dynamic objects uniformly, leading to unnecessary updates for static elements and ultimately degrading reconstruction quality. To address these issues, we propose UrbanGS, which leverages 2D semantic maps and an existing dynamic Gaussian approach to distinguish static objects from the scene, enabling separate processing of definite static and potentially dynamic elements. Specifically, for definite static regions, we enforce global consistency to prevent unintended changes in the dynamic Gaussians and introduce a K-nearest neighbor (KNN)-based regularization to improve local coherence on low-textured ground surfaces. Notably, for potentially dynamic objects, we aggregate temporal information using learnable time embeddings, allowing each Gaussian to model deformations over time. Extensive experiments on real-world datasets demonstrate that our approach outperforms state-of-the-art methods in reconstruction quality and efficiency, accurately preserving static content while capturing dynamic elements.
[ { "version": "v1", "created": "Wed, 4 Dec 2024 16:59:49 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 10:30:57 GMT" } ]
2025-03-24T00:00:00
[ [ "Li", "Ziwen", "" ], [ "Huang", "Jiaxin", "" ], [ "Chen", "Runnan", "" ], [ "Che", "Yunlong", "" ], [ "Guo", "Yandong", "" ], [ "Liu", "Tongliang", "" ], [ "Karray", "Fakhri", "" ], [ "Gong", "Mingming", "" ] ]
TITLE: UrbanGS: Semantic-Guided Gaussian Splatting for Urban Scene Reconstruction ABSTRACT: Reconstructing urban scenes is challenging due to their complex geometries and the presence of potentially dynamic objects. 3D Gaussian Splatting (3DGS)-based methods have shown strong performance, but existing approaches often incorporate manual 3D annotations to improve dynamic object modeling, which is impractical due to high labeling costs. Some methods leverage 4D Gaussian Splatting (4DGS) to represent the entire scene, but they treat static and dynamic objects uniformly, leading to unnecessary updates for static elements and ultimately degrading reconstruction quality. To address these issues, we propose UrbanGS, which leverages 2D semantic maps and an existing dynamic Gaussian approach to distinguish static objects from the scene, enabling separate processing of definite static and potentially dynamic elements. Specifically, for definite static regions, we enforce global consistency to prevent unintended changes in the dynamic Gaussians and introduce a K-nearest neighbor (KNN)-based regularization to improve local coherence on low-textured ground surfaces. Notably, for potentially dynamic objects, we aggregate temporal information using learnable time embeddings, allowing each Gaussian to model deformations over time. Extensive experiments on real-world datasets demonstrate that our approach outperforms state-of-the-art methods in reconstruction quality and efficiency, accurately preserving static content while capturing dynamic elements.
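A KNN-based regularizer of the kind mentioned for low-textured ground can be sketched as a smoothness penalty between each Gaussian and its nearest neighbors. The neighbor count and squared-difference loss below are assumptions for illustration, not UrbanGS's exact formulation.

```python
# Hedged sketch of a KNN local-coherence regularizer over Gaussian centers;
# k and the squared-difference loss are illustrative assumptions.
import torch

def knn_regularizer(positions, features, k=8):
    """Penalize attribute differences between each Gaussian and its k neighbors."""
    dists = torch.cdist(positions, positions)                   # (N, N) pairwise
    knn_idx = dists.topk(k + 1, largest=False).indices[:, 1:]   # drop self match
    neighbor_feats = features[knn_idx]                          # (N, k, F)
    return ((features.unsqueeze(1) - neighbor_feats) ** 2).mean()
```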
2412.06699
Huachen Gao
Baorui Ma, Huachen Gao, Haoge Deng, Zhengxiong Luo, Tiejun Huang, Lulu Tang, Xinlong Wang
You See it, You Got it: Learning 3D Creation on Pose-Free Videos at Scale
Accepted by CVPR 2025, Project Page: https://vision.baai.ac.cn/see3d
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recent 3D generation models typically rely on limited-scale 3D 'gold-labels' or 2D diffusion priors for 3D content creation. However, their performance is upper-bounded by constrained 3D priors due to the lack of scalable learning paradigms. In this work, we present See3D, a visual-conditional multi-view diffusion model trained on large-scale Internet videos for open-world 3D creation. The model aims to Get 3D knowledge by solely Seeing the visual contents from the vast and rapidly growing video data -- You See it, You Got it. To achieve this, we first scale up the training data using a proposed data curation pipeline that automatically filters out multi-view inconsistencies and insufficient observations from source videos. This results in a high-quality, richly diverse, large-scale dataset of multi-view images, termed WebVi3D, containing 320M frames from 16M video clips. Nevertheless, learning generic 3D priors from videos without explicit 3D geometry or camera pose annotations is nontrivial, and annotating poses for web-scale videos is prohibitively expensive. To eliminate the need for pose conditions, we introduce an innovative visual condition: a purely 2D-inductive visual signal generated by adding time-dependent noise to the masked video data. Finally, we introduce a novel visual-conditional 3D generation framework by integrating See3D into a warping-based pipeline for high-fidelity 3D generation. Our numerical and visual comparisons on single and sparse reconstruction benchmarks show that See3D, trained on cost-effective and scalable video data, achieves notable zero-shot and open-world generation capabilities, markedly outperforming models trained on costly and constrained 3D datasets. Please refer to our project page at: https://vision.baai.ac.cn/see3d
[ { "version": "v1", "created": "Mon, 9 Dec 2024 17:44:56 GMT" }, { "version": "v2", "created": "Sat, 14 Dec 2024 15:42:05 GMT" }, { "version": "v3", "created": "Fri, 21 Mar 2025 08:55:03 GMT" } ]
2025-03-24T00:00:00
[ [ "Ma", "Baorui", "" ], [ "Gao", "Huachen", "" ], [ "Deng", "Haoge", "" ], [ "Luo", "Zhengxiong", "" ], [ "Huang", "Tiejun", "" ], [ "Tang", "Lulu", "" ], [ "Wang", "Xinlong", "" ] ]
TITLE: You See it, You Got it: Learning 3D Creation on Pose-Free Videos at Scale ABSTRACT: Recent 3D generation models typically rely on limited-scale 3D 'gold-labels' or 2D diffusion priors for 3D content creation. However, their performance is upper-bounded by constrained 3D priors due to the lack of scalable learning paradigms. In this work, we present See3D, a visual-conditional multi-view diffusion model trained on large-scale Internet videos for open-world 3D creation. The model aims to Get 3D knowledge by solely Seeing the visual contents from the vast and rapidly growing video data -- You See it, You Got it. To achieve this, we first scale up the training data using a proposed data curation pipeline that automatically filters out multi-view inconsistencies and insufficient observations from source videos. This results in a high-quality, richly diverse, large-scale dataset of multi-view images, termed WebVi3D, containing 320M frames from 16M video clips. Nevertheless, learning generic 3D priors from videos without explicit 3D geometry or camera pose annotations is nontrivial, and annotating poses for web-scale videos is prohibitively expensive. To eliminate the need for pose conditions, we introduce an innovative visual condition: a purely 2D-inductive visual signal generated by adding time-dependent noise to the masked video data. Finally, we introduce a novel visual-conditional 3D generation framework by integrating See3D into a warping-based pipeline for high-fidelity 3D generation. Our numerical and visual comparisons on single and sparse reconstruction benchmarks show that See3D, trained on cost-effective and scalable video data, achieves notable zero-shot and open-world generation capabilities, markedly outperforming models trained on costly and constrained 3D datasets. Please refer to our project page at: https://vision.baai.ac.cn/see3d
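The described visual condition, time-dependent noise applied to masked video, can be sketched as follows, assuming a (T, C, H, W) tensor, a binary mask of observed pixels, and a linear schedule; the schedule and masking convention are assumptions, not See3D's exact recipe.

```python
# Sketch of a time-dependent-noise visual condition; the linear schedule
# and mask convention (1 = observed, kept clean) are assumptions.
import torch

def visual_condition(video, mask, t, num_steps=1000):
    """Corrupt unobserved pixels with noise whose scale grows with timestep t."""
    alpha = 1.0 - t / num_steps          # signal weight shrinks over time
    noise = torch.randn_like(video)
    corrupted = alpha * video + (1.0 - alpha) * noise
    return torch.where(mask.bool(), video, corrupted)
```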
2412.08376
Siyan Dong
Siyan Dong, Shuzhe Wang, Shaohui Liu, Lulu Cai, Qingnan Fan, Juho Kannala, Yanchao Yang
Reloc3r: Large-Scale Training of Relative Camera Pose Regression for Generalizable, Fast, and Accurate Visual Localization
CVPR 2025
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual localization aims to determine the camera pose of a query image relative to a database of posed images. In recent years, deep neural networks that directly regress camera poses have gained popularity due to their fast inference capabilities. However, existing methods struggle to either generalize well to new scenes or provide accurate camera pose estimates. To address these issues, we present Reloc3r, a simple yet effective visual localization framework. It consists of an elegantly designed relative pose regression network and a minimalist motion averaging module for absolute pose estimation. Trained on approximately eight million posed image pairs, Reloc3r achieves surprisingly good performance and generalization ability. We conduct extensive experiments on six public datasets, consistently demonstrating the effectiveness and efficiency of the proposed method. It provides high-quality camera pose estimates in real time and generalizes to novel scenes. Code: https://github.com/ffrivera0/reloc3r.
[ { "version": "v1", "created": "Wed, 11 Dec 2024 13:36:18 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 10:18:18 GMT" } ]
2025-03-24T00:00:00
[ [ "Dong", "Siyan", "" ], [ "Wang", "Shuzhe", "" ], [ "Liu", "Shaohui", "" ], [ "Cai", "Lulu", "" ], [ "Fan", "Qingnan", "" ], [ "Kannala", "Juho", "" ], [ "Yang", "Yanchao", "" ] ]
TITLE: Reloc3r: Large-Scale Training of Relative Camera Pose Regression for Generalizable, Fast, and Accurate Visual Localization ABSTRACT: Visual localization aims to determine the camera pose of a query image relative to a database of posed images. In recent years, deep neural networks that directly regress camera poses have gained popularity due to their fast inference capabilities. However, existing methods struggle to either generalize well to new scenes or provide accurate camera pose estimates. To address these issues, we present Reloc3r, a simple yet effective visual localization framework. It consists of an elegantly designed relative pose regression network and a minimalist motion averaging module for absolute pose estimation. Trained on approximately eight million posed image pairs, Reloc3r achieves surprisingly good performance and generalization ability. We conduct extensive experiments on six public datasets, consistently demonstrating the effectiveness and efficiency of the proposed method. It provides high-quality camera pose estimates in real time and generalizes to novel scenes. Code: https://github.com/ffrivera0/reloc3r.
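The relationship between a regressed relative pose and the absolute query pose reduces to matrix composition. The sketch below assumes 4x4 homogeneous matrices and the convention that T_rel maps query-camera coordinates into the database camera's frame; the translation-only averaging is the simplest stand-in for a motion averaging module, not the paper's implementation.

```python
# Sketch of pose chaining plus a naive averaging step; the pose convention
# and the translation-only averaging are simplifying assumptions.
import numpy as np

def absolute_pose(T_db, T_rel):
    """Chain a known database pose with a regressed relative pose."""
    return T_db @ T_rel

def average_translation(poses):
    """Average translations from several database neighbors (rotations
    would need proper averaging, e.g. on quaternions)."""
    return np.mean([T[:3, 3] for T in poses], axis=0)
```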
2412.09010
Yusuke Sakemi Ph.D.
Yusuke Sakemi, Yuji Okamoto, Takashi Morie, Sou Nobukawa, Takeo Hosomi, Kazuyuki Aihara
Harnessing Nonidealities in Analog In-Memory Computing Circuits: A Physical Modeling Approach for Neuromorphic Systems
Title changed
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Large-scale deep learning models are increasingly constrained by their immense energy consumption, limiting their scalability and applicability for edge intelligence. In-memory computing (IMC) offers a promising solution by addressing the von Neumann bottleneck inherent in traditional deep learning accelerators, significantly reducing energy consumption. However, the analog nature of IMC introduces hardware nonidealities that degrade model performance and reliability. This paper presents a novel approach to directly train physical models of IMC, formulated as ordinary-differential-equation (ODE)-based physical neural networks (PNNs). To enable the training of large-scale networks, we propose a technique called differentiable spike-time discretization (DSTD), which reduces the computational cost of ODE-based PNNs, achieving up to a 20-fold speedup and a 100-fold reduction in memory. We demonstrate that such large-scale networks enhance the learning performance by exploiting hardware nonidealities on the CIFAR-10 dataset. The proposed bottom-up methodology is validated through post-layout SPICE simulations on the IMC circuit with nonideal characteristics using the sky130 process. The proposed PNN approach reduces the discrepancy between the model behavior and circuit dynamics by at least an order of magnitude. This work paves the way for leveraging nonideal physical devices, such as non-volatile resistive memories, for energy-efficient deep learning applications.
[ { "version": "v1", "created": "Thu, 12 Dec 2024 07:22:23 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 03:08:11 GMT" } ]
2025-03-24T00:00:00
[ [ "Sakemi", "Yusuke", "" ], [ "Okamoto", "Yuji", "" ], [ "Morie", "Takashi", "" ], [ "Nobukawa", "Sou", "" ], [ "Hosomi", "Takeo", "" ], [ "Aihara", "Kazuyuki", "" ] ]
TITLE: Harnessing Nonidealities in Analog In-Memory Computing Circuits: A Physical Modeling Approach for Neuromorphic Systems ABSTRACT: Large-scale deep learning models are increasingly constrained by their immense energy consumption, limiting their scalability and applicability for edge intelligence. In-memory computing (IMC) offers a promising solution by addressing the von Neumann bottleneck inherent in traditional deep learning accelerators, significantly reducing energy consumption. However, the analog nature of IMC introduces hardware nonidealities that degrade model performance and reliability. This paper presents a novel approach to directly train physical models of IMC, formulated as ordinary-differential-equation (ODE)-based physical neural networks (PNNs). To enable the training of large-scale networks, we propose a technique called differentiable spike-time discretization (DSTD), which reduces the computational cost of ODE-based PNNs, achieving up to a 20-fold speedup and a 100-fold reduction in memory. We demonstrate that such large-scale networks enhance the learning performance by exploiting hardware nonidealities on the CIFAR-10 dataset. The proposed bottom-up methodology is validated through post-layout SPICE simulations on the IMC circuit with nonideal characteristics using the sky130 process. The proposed PNN approach reduces the discrepancy between the model behavior and circuit dynamics by at least an order of magnitude. This work paves the way for leveraging nonideal physical devices, such as non-volatile resistive memories, for energy-efficient deep learning applications.
2412.12406
Meisam Kabiri
Meisam Kabiri, Holger Voos
Global SLAM Using 5G ToA Integration: Performance Analysis with Unknown Base Stations and Loop Closure Alternatives
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
This paper presents a novel approach that integrates 5G Time of Arrival (ToA) measurements into ORB-SLAM3 to enable global localization and enhance mapping capabilities for indoor drone navigation. We extend ORB-SLAM3's optimization pipeline to jointly process ToA data from 5G base stations alongside visual and inertial measurements while estimating system biases. This integration transforms the inherently local SLAM estimates into globally referenced trajectories and effectively resolves scale ambiguity in monocular configurations. Our method is evaluated using both Aerolab indoor datasets with RGB-D cameras and the EuRoC MAV benchmark, complemented by simulated 5G ToA measurements at 28 GHz and 78 GHz frequencies using MATLAB and QuaDRiGa. Extensive experiments across multiple SLAM configurations demonstrate that ToA integration enables consistent global positioning across all modes while maintaining local accuracy. For monocular configurations, ToA integration successfully resolves scale ambiguity and improves consistency. We further investigate scenarios with unknown base station positions and demonstrate that ToA measurements can effectively serve as an alternative to loop closure for drift correction. We also analyze how different geometric arrangements of base stations impact SLAM performance. Comparative analysis with state-of-the-art methods, including UWB-VO, confirms our approach's robustness even with lower measurement frequencies and sequential base station operation. The results validate that 5G ToA integration provides substantial benefits for global SLAM applications, particularly in challenging indoor environments where accurate positioning is critical.
[ { "version": "v1", "created": "Mon, 16 Dec 2024 23:17:40 GMT" }, { "version": "v2", "created": "Sat, 28 Dec 2024 13:57:49 GMT" }, { "version": "v3", "created": "Thu, 16 Jan 2025 16:55:40 GMT" }, { "version": "v4", "created": "Mon, 17 Mar 2025 21:48:33 GMT" }, { "version": "v5", "created": "Fri, 21 Mar 2025 13:48:46 GMT" } ]
2025-03-24T00:00:00
[ [ "Kabiri", "Meisam", "" ], [ "Voos", "Holger", "" ] ]
TITLE: Global SLAM Using 5G ToA Integration: Performance Analysis with Unknown Base Stations and Loop Closure Alternatives ABSTRACT: This paper presents a novel approach that integrates 5G Time of Arrival (ToA) measurements into ORB-SLAM3 to enable global localization and enhance mapping capabilities for indoor drone navigation. We extend ORB-SLAM3's optimization pipeline to jointly process ToA data from 5G base stations alongside visual and inertial measurements while estimating system biases. This integration transforms the inherently local SLAM estimates into globally referenced trajectories and effectively resolves scale ambiguity in monocular configurations. Our method is evaluated using both Aerolab indoor datasets with RGB-D cameras and the EuRoC MAV benchmark, complemented by simulated 5G ToA measurements at 28 GHz and 78 GHz frequencies using MATLAB and QuaDRiGa. Extensive experiments across multiple SLAM configurations demonstrate that ToA integration enables consistent global positioning across all modes while maintaining local accuracy. For monocular configurations, ToA integration successfully resolves scale ambiguity and improves consistency. We further investigate scenarios with unknown base station positions and demonstrate that ToA measurements can effectively serve as an alternative to loop closure for drift correction. We also analyze how different geometric arrangements of base stations impact SLAM performance. Comparative analysis with state-of-the-art methods, including UWB-VO, confirms our approach's robustness even with lower measurement frequencies and sequential base station operation. The results validate that 5G ToA integration provides substantial benefits for global SLAM applications, particularly in challenging indoor environments where accurate positioning is critical.
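A ToA factor of the kind added to the optimization pipeline compares the geometric range to a base station against the range implied by the measured arrival time. The sketch below is a generic residual under assumed names (position p, base station b, measured time t_meas, clock bias), not the paper's exact factor formulation.

```python
# Generic ToA residual sketch; variable names and the shared clock-bias
# model are assumptions for illustration.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def toa_residual(p, b, t_meas, clock_bias):
    """Geometric range minus the range implied by the ToA measurement."""
    predicted_range = np.linalg.norm(p - b)
    measured_range = C * (t_meas - clock_bias)
    return predicted_range - measured_range
```

Stacking one such residual per base station alongside the visual-inertial terms is what anchors the otherwise local SLAM trajectory in a global frame and resolves monocular scale.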
2412.15289
Xiaoning Dong
Xiaoning Dong, Wenbo Hu, Wei Xu, Tianxing He
SATA: A Paradigm for LLM Jailbreak via Simple Assistive Task Linkage
null
null
null
null
cs.CR cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) have made significant advancements across various tasks, but their safety alignment remains a major concern. Exploring jailbreak prompts can expose LLMs' vulnerabilities and guide efforts to secure them. Existing methods primarily design sophisticated instructions for the LLM to follow, or rely on multiple iterations, which could hinder the performance and efficiency of jailbreaks. In this work, we propose a novel jailbreak paradigm, Simple Assistive Task Linkage (SATA), which can effectively circumvent LLM safeguards and elicit harmful responses. Specifically, SATA first masks harmful keywords within a malicious query to generate a relatively benign query containing one or multiple [MASK] special tokens. It then employs a simple assistive task such as a masked language model task or an element lookup by position task to encode the semantics of the masked keywords. Finally, SATA links the assistive task with the masked query to jointly perform the jailbreak. Extensive experiments show that SATA achieves state-of-the-art performance and outperforms baselines by a large margin. Specifically, on the AdvBench dataset, with the masked language model (MLM) assistive task, SATA achieves an overall attack success rate (ASR) of 85% and a harmful score (HS) of 4.57, and with the element lookup by position (ELP) assistive task, SATA attains an overall ASR of 76% and an HS of 4.43.
[ { "version": "v1", "created": "Thu, 19 Dec 2024 05:57:37 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 13:00:44 GMT" } ]
2025-03-24T00:00:00
[ [ "Dong", "Xiaoning", "" ], [ "Hu", "Wenbo", "" ], [ "Xu", "Wei", "" ], [ "He", "Tianxing", "" ] ]
TITLE: SATA: A Paradigm for LLM Jailbreak via Simple Assistive Task Linkage ABSTRACT: Large language models (LLMs) have made significant advancements across various tasks, but their safety alignment remains a major concern. Exploring jailbreak prompts can expose LLMs' vulnerabilities and guide efforts to secure them. Existing methods primarily design sophisticated instructions for the LLM to follow, or rely on multiple iterations, which could hinder the performance and efficiency of jailbreaks. In this work, we propose a novel jailbreak paradigm, Simple Assistive Task Linkage (SATA), which can effectively circumvent LLM safeguards and elicit harmful responses. Specifically, SATA first masks harmful keywords within a malicious query to generate a relatively benign query containing one or multiple [MASK] special tokens. It then employs a simple assistive task such as a masked language model task or an element lookup by position task to encode the semantics of the masked keywords. Finally, SATA links the assistive task with the masked query to jointly perform the jailbreak. Extensive experiments show that SATA achieves state-of-the-art performance and outperforms baselines by a large margin. Specifically, on the AdvBench dataset, with the masked language model (MLM) assistive task, SATA achieves an overall attack success rate (ASR) of 85% and a harmful score (HS) of 4.57, and with the element lookup by position (ELP) assistive task, SATA attains an overall ASR of 76% and an HS of 4.43.
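The masking step is straightforward string substitution, as the toy illustration below shows; the keyword list and query are hypothetical placeholders, not examples from the paper.

```python
# Toy illustration of SATA's masking step; keywords and query are
# hypothetical placeholders.
def mask_keywords(query, keywords):
    """Replace flagged keywords with [MASK] to obtain a benign-looking query."""
    masked = query
    for kw in keywords:
        masked = masked.replace(kw, "[MASK]")
    return masked

benign = mask_keywords("explain how to do X step by step", ["do X"])
# -> "explain how to [MASK] step by step"; an assistive task (e.g. MLM)
# then encodes the semantics of the masked span.
```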
2412.18600
Hongjie Li
Hongjie Li, Hong-Xing Yu, Jiaman Li, Jiajun Wu
ZeroHSI: Zero-Shot 4D Human-Scene Interaction by Video Generation
Project website: https://awfuact.github.io/zerohsi/ The first two authors contribute equally
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human-scene interaction (HSI) generation is crucial for applications in embodied AI, virtual reality, and robotics. Yet, existing methods cannot synthesize interactions in unseen environments such as in-the-wild scenes or reconstructed scenes, as they rely on paired 3D scenes and captured human motion data for training, which are unavailable for unseen environments. We present ZeroHSI, a novel approach that enables zero-shot 4D human-scene interaction synthesis, eliminating the need for training on any MoCap data. Our key insight is to distill human-scene interactions from state-of-the-art video generation models, which have been trained on vast amounts of natural human movements and interactions, and use differentiable rendering to reconstruct human-scene interactions. ZeroHSI can synthesize realistic human motions in both static scenes and environments with dynamic objects, without requiring any ground-truth motion data. We evaluate ZeroHSI on a curated dataset of various indoor and outdoor scenes with different interaction prompts, demonstrating its ability to generate diverse and contextually appropriate human-scene interactions.
[ { "version": "v1", "created": "Tue, 24 Dec 2024 18:55:38 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 16:17:28 GMT" } ]
2025-03-24T00:00:00
[ [ "Li", "Hongjie", "" ], [ "Yu", "Hong-Xing", "" ], [ "Li", "Jiaman", "" ], [ "Wu", "Jiajun", "" ] ]
TITLE: ZeroHSI: Zero-Shot 4D Human-Scene Interaction by Video Generation ABSTRACT: Human-scene interaction (HSI) generation is crucial for applications in embodied AI, virtual reality, and robotics. Yet, existing methods cannot synthesize interactions in unseen environments such as in-the-wild scenes or reconstructed scenes, as they rely on paired 3D scenes and captured human motion data for training, which are unavailable for unseen environments. We present ZeroHSI, a novel approach that enables zero-shot 4D human-scene interaction synthesis, eliminating the need for training on any MoCap data. Our key insight is to distill human-scene interactions from state-of-the-art video generation models, which have been trained on vast amounts of natural human movements and interactions, and use differentiable rendering to reconstruct human-scene interactions. ZeroHSI can synthesize realistic human motions in both static scenes and environments with dynamic objects, without requiring any ground-truth motion data. We evaluate ZeroHSI on a curated dataset of various indoor and outdoor scenes with different interaction prompts, demonstrating its ability to generate diverse and contextually appropriate human-scene interactions.
2412.20735
Li Yang
Yang Li, Dong Du, Linfeng Song, Chen Li, Weikang Wang, Tao Yang, Haitao Mi
HunyuanProver: A Scalable Data Synthesis Framework and Guided Tree Search for Automated Theorem Proving
null
null
null
null
cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
We introduce HunyuanProver, a language model finetuned from Hunyuan 7B for interactive automatic theorem proving with Lean 4. To alleviate the data sparsity issue, we design a scalable framework to iteratively synthesize data at low cost. In addition, guided tree search algorithms are designed to enable effective "system 2 thinking" in the prover. HunyuanProver achieves state-of-the-art (SOTA) performance on major benchmarks. Specifically, it achieves a pass rate of 68.4% on miniF2F-test, compared to the current SOTA result of 65.9%. It proves 4 IMO statements (imo_1960_p2, imo_1962_p2, imo_1964_p2, and imo_1983_p6) in miniF2F-test. To benefit the community, we will open-source a dataset of 30k synthesized instances, where each instance contains the original question in natural language, the converted statement by autoformalization, and the proof by HunyuanProver.
[ { "version": "v1", "created": "Mon, 30 Dec 2024 06:18:33 GMT" }, { "version": "v2", "created": "Tue, 31 Dec 2024 10:48:14 GMT" }, { "version": "v3", "created": "Fri, 21 Mar 2025 02:00:37 GMT" } ]
2025-03-24T00:00:00
[ [ "Li", "Yang", "" ], [ "Du", "Dong", "" ], [ "Song", "Linfeng", "" ], [ "Li", "Chen", "" ], [ "Wang", "Weikang", "" ], [ "Yang", "Tao", "" ], [ "Mi", "Haitao", "" ] ]
TITLE: HunyuanProver: A Scalable Data Synthesis Framework and Guided Tree Search for Automated Theorem Proving ABSTRACT: We introduce HunyuanProver, a language model finetuned from Hunyuan 7B for interactive automatic theorem proving with Lean 4. To alleviate the data sparsity issue, we design a scalable framework to iteratively synthesize data at low cost. In addition, guided tree search algorithms are designed to enable effective "system 2 thinking" in the prover. HunyuanProver achieves state-of-the-art (SOTA) performance on major benchmarks. Specifically, it achieves a pass rate of 68.4% on miniF2F-test, compared to the current SOTA result of 65.9%. It proves 4 IMO statements (imo_1960_p2, imo_1962_p2, imo_1964_p2, and imo_1983_p6) in miniF2F-test. To benefit the community, we will open-source a dataset of 30k synthesized instances, where each instance contains the original question in natural language, the converted statement by autoformalization, and the proof by HunyuanProver.
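Guided tree search over proof states is commonly realized as best-first search with a learned scoring function. The skeleton below is a structural sketch under hypothetical callables (policy, apply_tactic, score) and a hypothetical goal interface, not the paper's implementation.

```python
# Best-first proof search skeleton; all callables and the goal interface
# are hypothetical stand-ins for learned components.
import heapq

def best_first_search(root_goal, policy, apply_tactic, score, budget=1000):
    frontier = [(-score(root_goal), 0, root_goal)]
    counter = 1  # tie-breaker so goal objects are never compared directly
    while frontier and budget > 0:
        _, _, goal = heapq.heappop(frontier)
        if goal.is_solved():
            return goal
        for tactic in policy(goal):
            child = apply_tactic(goal, tactic)
            if child is not None:
                heapq.heappush(frontier, (-score(child), counter, child))
                counter += 1
        budget -= 1
    return None
```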
2501.00174
Marco Siino
Marco Siino, Ilenia Tinnirello, Marco La Cascia
The Text Classification Pipeline: Starting Shallow going Deeper
null
null
null
null
cs.CL cs.AI cs.IR
http://creativecommons.org/licenses/by/4.0/
Text classification stands as a cornerstone within the realm of Natural Language Processing (NLP), particularly when viewed through the lens of computer science and engineering. The past decade has seen deep learning revolutionize text classification, propelling advancements in text retrieval, categorization, information extraction, and summarization. The scholarly literature includes datasets, models, and evaluation criteria, with English being the predominant language of focus, despite studies involving Arabic, Chinese, Hindi, and others. The efficacy of text classification models relies heavily on their ability to capture intricate textual relationships and non-linear correlations, necessitating a comprehensive examination of the entire text classification pipeline. In the NLP domain, a plethora of text representation techniques and model architectures have emerged, with Large Language Models (LLMs) and Generative Pre-trained Transformers (GPTs) at the forefront. These models are adept at transforming extensive textual data into meaningful vector representations encapsulating semantic information. The multidisciplinary nature of text classification, encompassing data mining, linguistics, and information retrieval, highlights the importance of collaborative research to advance the field. This work integrates traditional and contemporary text mining methodologies, fostering a holistic understanding of text classification.
[ { "version": "v1", "created": "Mon, 30 Dec 2024 23:01:19 GMT" }, { "version": "v2", "created": "Thu, 20 Mar 2025 19:18:07 GMT" } ]
2025-03-24T00:00:00
[ [ "Siino", "Marco", "" ], [ "Tinnirello", "Ilenia", "" ], [ "La Cascia", "Marco", "" ] ]
TITLE: The Text Classification Pipeline: Starting Shallow going Deeper ABSTRACT: Text classification stands as a cornerstone within the realm of Natural Language Processing (NLP), particularly when viewed through the lens of computer science and engineering. The past decade has seen deep learning revolutionize text classification, propelling advancements in text retrieval, categorization, information extraction, and summarization. The scholarly literature includes datasets, models, and evaluation criteria, with English being the predominant language of focus, despite studies involving Arabic, Chinese, Hindi, and others. The efficacy of text classification models relies heavily on their ability to capture intricate textual relationships and non-linear correlations, necessitating a comprehensive examination of the entire text classification pipeline. In the NLP domain, a plethora of text representation techniques and model architectures have emerged, with Large Language Models (LLMs) and Generative Pre-trained Transformers (GPTs) at the forefront. These models are adept at transforming extensive textual data into meaningful vector representations encapsulating semantic information. The multidisciplinary nature of text classification, encompassing data mining, linguistics, and information retrieval, highlights the importance of collaborative research to advance the field. This work integrates traditional and contemporary text mining methodologies, fostering a holistic understanding of text classification.
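The "shallow" end of the pipeline the survey starts from is easy to make concrete: sparse features plus a linear classifier. Below is a minimal scikit-learn sketch with a placeholder two-document corpus.

```python
# Minimal shallow text classification pipeline: TF-IDF features plus a
# linear classifier. The tiny corpus is a placeholder for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["the match ended in a late draw", "the new GPU doubles throughput"]
labels = ["sports", "tech"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["faster inference on new hardware"]))  # likely ['tech']
```

The deeper stages the survey covers swap the vectorizer for learned representations (embeddings, transformers) while keeping the same fit/predict pipeline shape.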
2501.08837
Olga Zatsarynna
Olga Zatsarynna, Emad Bahrami, Yazan Abu Farha, Gianpiero Francesca, Juergen Gall
MANTA: Diffusion Mamba for Efficient and Effective Stochastic Long-Term Dense Anticipation
Accepted to CVPR2025
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long-term dense action anticipation is very challenging since it requires predicting actions and their durations several minutes into the future based on provided video observations. To model the uncertainty of future outcomes, stochastic models predict several potential future action sequences for the same observation. Recent work has further proposed to incorporate uncertainty modelling for observed frames by simultaneously predicting per-frame past and future actions in a unified manner. While such joint modelling of actions is beneficial, it requires long-range temporal capabilities to connect events across distant past and future time points. However, the previous work struggles to achieve such a long-range understanding due to its limited and/or sparse receptive field. To alleviate this issue, we propose a novel MANTA (MAmba for ANTicipation) network. Our model enables effective long-term temporal modelling even for very long sequences while maintaining linear complexity in sequence length. We demonstrate that our approach achieves state-of-the-art results on three datasets - Breakfast, 50Salads, and Assembly101 - while also significantly improving computational and memory efficiency. Our code is available at https://github.com/olga-zats/DIFF_MANTA .
[ { "version": "v1", "created": "Wed, 15 Jan 2025 14:46:44 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 17:04:07 GMT" } ]
2025-03-24T00:00:00
[ [ "Zatsarynna", "Olga", "" ], [ "Bahrami", "Emad", "" ], [ "Farha", "Yazan Abu", "" ], [ "Francesca", "Gianpiero", "" ], [ "Gall", "Juergen", "" ] ]
TITLE: MANTA: Diffusion Mamba for Efficient and Effective Stochastic Long-Term Dense Anticipation ABSTRACT: Long-term dense action anticipation is very challenging since it requires predicting actions and their durations several minutes into the future based on provided video observations. To model the uncertainty of future outcomes, stochastic models predict several potential future action sequences for the same observation. Recent work has further proposed to incorporate uncertainty modelling for observed frames by simultaneously predicting per-frame past and future actions in a unified manner. While such joint modelling of actions is beneficial, it requires long-range temporal capabilities to connect events across distant past and future time points. However, the previous work struggles to achieve such a long-range understanding due to its limited and/or sparse receptive field. To alleviate this issue, we propose a novel MANTA (MAmba for ANTicipation) network. Our model enables effective long-term temporal modelling even for very long sequences while maintaining linear complexity in sequence length. We demonstrate that our approach achieves state-of-the-art results on three datasets - Breakfast, 50Salads, and Assembly101 - while also significantly improving computational and memory efficiency. Our code is available at https://github.com/olga-zats/DIFF_MANTA .
2501.08909
Ruizhen Gu
Ruizhen Gu and Jos\'e Miguel Rojas and Donghwan Shin
Software Testing for Extended Reality Applications: A Systematic Mapping Study
51 pages, 10 figures
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Extended Reality (XR) is an emerging technology spanning diverse application domains and offering immersive user experiences. However, its unique characteristics, such as six degrees of freedom interactions, present significant testing challenges distinct from traditional 2D GUI applications, demanding novel testing techniques to build high-quality XR applications. This paper presents the first systematic mapping study on software testing for XR applications. We selected 34 studies focusing on techniques and empirical approaches in XR software testing for detailed examination. The studies are classified and reviewed to address the current research landscape, test facets, and evaluation methodologies in the XR testing domain. Additionally, we provide a repository summarising the mapping study, including datasets and tools referenced in the selected studies, to support future research and practical applications. Our study highlights open challenges in XR testing and proposes actionable future research directions to address the gaps and advance the field of XR software testing.
[ { "version": "v1", "created": "Wed, 15 Jan 2025 16:19:12 GMT" }, { "version": "v2", "created": "Thu, 20 Mar 2025 18:11:30 GMT" } ]
2025-03-24T00:00:00
[ [ "Gu", "Ruizhen", "" ], [ "Rojas", "José Miguel", "" ], [ "Shin", "Donghwan", "" ] ]
TITLE: Software Testing for Extended Reality Applications: A Systematic Mapping Study ABSTRACT: Extended Reality (XR) is an emerging technology spanning diverse application domains and offering immersive user experiences. However, its unique characteristics, such as six degrees of freedom interactions, present significant testing challenges distinct from traditional 2D GUI applications, demanding novel testing techniques to build high-quality XR applications. This paper presents the first systematic mapping study on software testing for XR applications. We selected 34 studies focusing on techniques and empirical approaches in XR software testing for detailed examination. The studies are classified and reviewed to address the current research landscape, test facets, and evaluation methodologies in the XR testing domain. Additionally, we provide a repository summarising the mapping study, including datasets and tools referenced in the selected studies, to support future research and practical applications. Our study highlights open challenges in XR testing and proposes actionable future research directions to address the gaps and advance the field of XR software testing.
2501.11006
Shashikant Ilager Mr
Shashikant Ilager, Lukas Florian Briem, Ivona Brandic
GREEN-CODE: Learning to Optimize Energy Efficiency in LLM-based Code Generation
Under submission in ACM/IEEE conference, 11 pages
null
null
null
cs.DC cs.AI cs.PF cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) are becoming integral to daily life, showcasing their vast potential across various Natural Language Processing (NLP) tasks. Beyond NLP, LLMs are increasingly used in software development tasks, such as code completion, modification, bug fixing, and code translation. Software engineers widely use tools like GitHub Copilot and Amazon Q, streamlining workflows and automating tasks with high accuracy. While the resource and energy intensity of LLM training is often highlighted, inference can be even more resource-intensive over time, as it is a continuous process with a high number of invocations. Therefore, developing resource-efficient alternatives for LLM inference is crucial for sustainability. This work proposes GREEN-CODE, a framework for energy-aware code generation in LLMs. GREEN-CODE performs dynamic early exit during LLM inference. We train a Reinforcement Learning (RL) agent that learns to balance the trade-offs between accuracy, latency, and energy consumption. Our approach is evaluated on two open-source LLMs, Llama 3.2 3B and OPT 2.7B, using the JavaCorpus and PY150 datasets. Results show that our method reduces energy consumption by 23-50% on average for code generation tasks without significantly affecting accuracy.
[ { "version": "v1", "created": "Sun, 19 Jan 2025 10:44:03 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 15:07:55 GMT" } ]
2025-03-24T00:00:00
[ [ "Ilager", "Shashikant", "" ], [ "Briem", "Lukas Florian", "" ], [ "Brandic", "Ivona", "" ] ]
TITLE: GREEN-CODE: Learning to Optimize Energy Efficiency in LLM-based Code Generation ABSTRACT: Large Language Models (LLMs) are becoming integral to daily life, showcasing their vast potential across various Natural Language Processing (NLP) tasks. Beyond NLP, LLMs are increasingly used in software development tasks, such as code completion, modification, bug fixing, and code translation. Software engineers widely use tools like GitHub Copilot and Amazon Q, streamlining workflows and automating tasks with high accuracy. While the resource and energy intensity of LLM training is often highlighted, inference can be even more resource-intensive over time, as it is a continuous process with a high number of invocations. Therefore, developing resource-efficient alternatives for LLM inference is crucial for sustainability. This work proposes GREEN-CODE, a framework for energy-aware code generation in LLMs. GREEN-CODE performs dynamic early exit during LLM inference. We train a Reinforcement Learning (RL) agent that learns to balance the trade-offs between accuracy, latency, and energy consumption. Our approach is evaluated on two open-source LLMs, Llama 3.2 3B and OPT 2.7B, using the JavaCorpus and PY150 datasets. Results show that our method reduces energy consumption by 23-50% on average for code generation tasks without significantly affecting accuracy.
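To make the dynamic early-exit mechanism described in the GREEN-CODE abstract concrete, below is a minimal Python sketch. It is not the authors' implementation: `layer_forward`, `exit_confidence`, and the fixed threshold are hypothetical stand-ins for the transformer layers and the RL-learned exit policy.

```python
import math
import random

random.seed(0)
NUM_LAYERS = 32
EXIT_THRESHOLD = 0.7  # hypothetical; the paper learns the exit decision with RL

def layer_forward(hidden, layer_idx):
    # Stand-in for a transformer layer; real models transform hidden-state tensors.
    return hidden + random.uniform(0.05, 0.2)

def exit_confidence(hidden):
    # Stand-in for an exit head; real systems often attach an intermediate
    # LM head and use the max softmax probability as a confidence signal.
    return 1.0 - math.exp(-hidden)

def decode_token_with_early_exit(hidden=0.0):
    """Run layers until the exit policy is confident, then decode."""
    for layer_idx in range(NUM_LAYERS):
        hidden = layer_forward(hidden, layer_idx)
        if exit_confidence(hidden) >= EXIT_THRESHOLD:
            return hidden, layer_idx + 1  # remaining layers are skipped
    return hidden, NUM_LAYERS

_, layers_used = decode_token_with_early_exit()
print(f"decoded after {layers_used}/{NUM_LAYERS} layers")
```

The RL agent described in the abstract would replace the fixed threshold, trading a small accuracy loss for the reported energy reduction by skipping the remaining layers.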
2502.00212
Kefan Dong
Kefan Dong, Tengyu Ma
STP: Self-play LLM Theorem Provers with Iterative Conjecturing and Proving
25 pages, 5 figures
null
null
null
cs.LG cs.AI cs.LO
http://creativecommons.org/licenses/by-nc-nd/4.0/
A fundamental challenge in formal theorem proving by LLMs is the lack of high-quality training data. Although reinforcement learning or expert iteration partially mitigates this issue by alternating between the LLM generating proofs and finetuning it on correctly generated ones, performance quickly plateaus due to the scarcity of correct proofs (sparse rewards). To keep improving the models with limited data, we draw inspiration from mathematicians, who continuously develop new results, partly by proposing novel conjectures or exercises (which are often variants of known results) and attempting to solve them. We design the Self-play Theorem Prover (STP) that simultaneously takes on two roles, conjecturer and prover, each providing training signals to the other. The conjecturer is trained iteratively on previously generated conjectures that are barely provable by the current prover, which incentivizes it to generate increasingly challenging conjectures over time. The prover attempts to prove the conjectures with standard expert iteration. We evaluate STP with both Lean and Isabelle formal verifiers. With 51.3 billion tokens generated during the training in Lean, STP proves 28.5% of the statements in the LeanWorkbook dataset, doubling the previous best result of 13.2% achieved through expert iteration. The final model achieves state-of-the-art performance among whole-proof generation methods on miniF2F-test (65.0%, pass@3200), Proofnet-test (23.9%, pass@3200) and PutnamBench (8/644, pass@3200). We release our code, model, and dataset at this URL: https://github.com/kfdong/STP.
[ { "version": "v1", "created": "Fri, 31 Jan 2025 23:01:48 GMT" }, { "version": "v2", "created": "Tue, 4 Feb 2025 07:20:28 GMT" }, { "version": "v3", "created": "Tue, 11 Feb 2025 03:52:52 GMT" }, { "version": "v4", "created": "Fri, 21 Mar 2025 03:27:55 GMT" } ]
2025-03-24T00:00:00
[ [ "Dong", "Kefan", "" ], [ "Ma", "Tengyu", "" ] ]
TITLE: STP: Self-play LLM Theorem Provers with Iterative Conjecturing and Proving ABSTRACT: A fundamental challenge in formal theorem proving by LLMs is the lack of high-quality training data. Although reinforcement learning or expert iteration partially mitigates this issue by alternating between the LLM generating proofs and finetuning it on correctly generated ones, performance quickly plateaus due to the scarcity of correct proofs (sparse rewards). To keep improving the models with limited data, we draw inspiration from mathematicians, who continuously develop new results, partly by proposing novel conjectures or exercises (which are often variants of known results) and attempting to solve them. We design the Self-play Theorem Prover (STP) that simultaneously takes on two roles, conjecturer and prover, each providing training signals to the other. The conjecturer is trained iteratively on previously generated conjectures that are barely provable by the current prover, which incentivizes it to generate increasingly challenging conjectures over time. The prover attempts to prove the conjectures with standard expert iteration. We evaluate STP with both Lean and Isabelle formal verifiers. With 51.3 billion tokens generated during the training in Lean, STP proves 28.5% of the statements in the LeanWorkbook dataset, doubling the previous best result of 13.2% achieved through expert iteration. The final model achieves state-of-the-art performance among whole-proof generation methods on miniF2F-test (65.0%, pass@3200), Proofnet-test (23.9%, pass@3200) and PutnamBench (8/644, pass@3200). We release our code, model, and dataset at this URL: https://github.com/kfdong/STP.
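The conjecturer-prover loop in the STP abstract can be sketched in a few lines. Everything below is a hedged illustration: `generate_conjectures`, `attempt_proof`, and `finetune` are stubs (a real system samples from LLMs and checks proofs with a formal verifier such as Lean), and the 0-25% pass-rate band is an assumed proxy for "barely provable".

```python
import random

random.seed(0)

def generate_conjectures(conjecturer, seeds, n=8):
    # Stub: a real conjecturer LLM proposes variants of known results.
    return [f"{s} (variant {i})" for s in seeds for i in range(n)]

def attempt_proof(prover, conjecture, k=16):
    # Stub: sample k proof attempts, verify formally, return the pass rate.
    return random.random()

def finetune(model, examples):
    # Stub for an actual finetuning step on the selected examples.
    return model

conjecturer, prover = object(), object()
seeds = ["theorem add_comm", "theorem mul_assoc"]

for round_idx in range(3):
    conjectures = generate_conjectures(conjecturer, seeds)
    rates = {c: attempt_proof(prover, c) for c in conjectures}
    # "Barely provable": solved occasionally but not reliably, so these give
    # useful training signal to both roles.
    barely = [c for c, r in rates.items() if 0.0 < r <= 0.25]
    solved = [c for c, r in rates.items() if r > 0.0]
    conjecturer = finetune(conjecturer, barely)  # pushed toward harder conjectures
    prover = finetune(prover, solved)            # standard expert iteration
    print(f"round {round_idx}: {len(barely)} barely-provable conjectures")
```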
2502.02454
Yuxin Qi
Quan Zhang, Yuxin Qi, Xi Tang, Jinwei Fang, Xi Lin, Ke Zhang, Chun Yuan
IMDPrompter: Adapting SAM to Image Manipulation Detection by Cross-View Automated Prompt Learning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using extensive training data from SA-1B, the Segment Anything Model (SAM) has demonstrated exceptional generalization and zero-shot capabilities, attracting widespread attention in areas such as medical image segmentation and remote sensing image segmentation. However, its performance in the field of image manipulation detection remains largely unexplored and unconfirmed. There are two main challenges in applying SAM to image manipulation detection: a) reliance on manual prompts, and b) the difficulty of single-view information in supporting cross-dataset generalization. To address these challenges, we develop a cross-view prompt learning paradigm called IMDPrompter based on SAM. Benefiting from the design of automated prompts, IMDPrompter no longer relies on manual guidance, enabling automated detection and localization. Additionally, we propose components such as Cross-view Feature Perception, Optimal Prompt Selection, and Cross-View Prompt Consistency, which facilitate cross-view perceptual learning and guide SAM to generate accurate masks. Extensive experimental results from five datasets (CASIA, Columbia, Coverage, IMD2020, and NIST16) validate the effectiveness of our proposed method.
[ { "version": "v1", "created": "Tue, 4 Feb 2025 16:20:41 GMT" }, { "version": "v2", "created": "Mon, 17 Mar 2025 09:53:55 GMT" }, { "version": "v3", "created": "Fri, 21 Mar 2025 08:02:45 GMT" } ]
2025-03-24T00:00:00
[ [ "Zhang", "Quan", "" ], [ "Qi", "Yuxin", "" ], [ "Tang", "Xi", "" ], [ "Fang", "Jinwei", "" ], [ "Lin", "Xi", "" ], [ "Zhang", "Ke", "" ], [ "Yuan", "Chun", "" ] ]
TITLE: IMDPrompter: Adapting SAM to Image Manipulation Detection by Cross-View Automated Prompt Learning ABSTRACT: Using extensive training data from SA-1B, the Segment Anything Model (SAM) has demonstrated exceptional generalization and zero-shot capabilities, attracting widespread attention in areas such as medical image segmentation and remote sensing image segmentation. However, its performance in the field of image manipulation detection remains largely unexplored and unconfirmed. There are two main challenges in applying SAM to image manipulation detection: a) reliance on manual prompts, and b) the difficulty of single-view information in supporting cross-dataset generalization. To address these challenges, we develop a cross-view prompt learning paradigm called IMDPrompter based on SAM. Benefiting from the design of automated prompts, IMDPrompter no longer relies on manual guidance, enabling automated detection and localization. Additionally, we propose components such as Cross-view Feature Perception, Optimal Prompt Selection, and Cross-View Prompt Consistency, which facilitate cross-view perceptual learning and guide SAM to generate accurate masks. Extensive experimental results from five datasets (CASIA, Columbia, Coverage, IMD2020, and NIST16) validate the effectiveness of our proposed method.
2502.03490
Nora Belrose
David Johnston, Nora Belrose
Examining Two Hop Reasoning Through Information Content Scaling
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Prior work has found that transformers have an inconsistent ability to learn to answer latent two-hop questions -- questions of the form "Who is Bob's mother's boss?" We study why this is the case by examining how transformers' capacity to learn datasets of two-hop questions and answers (two-hop QA) scales with their size, motivated by prior work on transformer knowledge capacity for simple factual memorization. We find that capacity scaling and generalization both support the hypothesis that latent two-hop QA requires transformers to learn each fact twice, while two-hop QA with chain of thought does not. We also show that with appropriate dataset parameters, it is possible to "trap" very small models in a regime where they memorize answers to two-hop questions independently, even though they would perform better if they could learn to answer them with function composition. Our findings show that measurement of capacity scaling can complement existing interpretability methods, though there are challenges in using it for this purpose.
[ { "version": "v1", "created": "Wed, 5 Feb 2025 02:13:04 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 03:49:12 GMT" } ]
2025-03-24T00:00:00
[ [ "Johnston", "David", "" ], [ "Belrose", "Nora", "" ] ]
TITLE: Examining Two Hop Reasoning Through Information Content Scaling ABSTRACT: Prior work has found that transformers have an inconsistent ability to learn to answer latent two-hop questions -- questions of the form "Who is Bob's mother's boss?" We study why this is the case by examining how transformers' capacity to learn datasets of two-hop questions and answers (two-hop QA) scales with their size, motivated by prior work on transformer knowledge capacity for simple factual memorization. We find that capacity scaling and generalization both support the hypothesis that latent two-hop QA requires transformers to learn each fact twice, while two-hop QA with chain of thought does not. We also show that with appropriate dataset parameters, it is possible to "trap" very small models in a regime where they memorize answers to two-hop questions independently, even though they would perform better if they could learn to answer them with function composition. Our findings show that measurement of capacity scaling can complement existing interpretability methods, though there are challenges in using it for this purpose.
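The "learn each fact twice" hypothesis is easiest to see with a concrete two-hop dataset. The sketch below generates synthetic atomic facts and composed questions of the kind the abstract studies; the relation names and dataset size are arbitrary choices for illustration, not the paper's setup.

```python
import random

random.seed(0)
people = [f"p{i}" for i in range(100)]

# Two atomic relations; a latent two-hop question composes them.
mother = {p: random.choice(people) for p in people}
boss = {p: random.choice(people) for p in people}

atomic_facts = [f"{p}'s mother is {mother[p]}." for p in people]
atomic_facts += [f"{p}'s boss is {boss[p]}." for p in people]

# Answering requires composing both hops -- or memorizing each question/answer
# pair independently, the capacity-inefficient regime small models can be
# "trapped" in according to the abstract.
two_hop_qa = [(f"Who is {p}'s mother's boss?", boss[mother[p]]) for p in people]

print(atomic_facts[0])
print(two_hop_qa[0])
```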
2502.07411
Sheng Zhou
Sheng Zhou, Junbin Xiao, Qingyun Li, Yicong Li, Xun Yang, Dan Guo, Meng Wang, Tat-Seng Chua, Angela Yao
EgoTextVQA: Towards Egocentric Scene-Text Aware Video Question Answering
Accepted by CVPR 2025
null
null
null
cs.CV cs.MM
http://creativecommons.org/licenses/by-nc-sa/4.0/
We introduce EgoTextVQA, a novel and rigorously constructed benchmark for egocentric QA assistance involving scene text. EgoTextVQA contains 1.5K ego-view videos and 7K scene-text aware questions that reflect real user needs in outdoor driving and indoor housekeeping activities. The questions are designed to elicit identification and reasoning on scene text in an egocentric and dynamic environment. With EgoTextVQA, we comprehensively evaluate 10 prominent multimodal large language models. Currently, all models struggle, and the best results (Gemini 1.5 Pro) are around 33\% accuracy, highlighting the severe deficiency of these techniques in egocentric QA assistance. Our further investigations suggest that precise temporal grounding and multi-frame reasoning, along with high resolution and auxiliary scene-text inputs, are key for better performance. With thorough analyses and heuristic suggestions, we hope EgoTextVQA can serve as a solid testbed for research in egocentric scene-text QA assistance. Our dataset is released at: https://github.com/zhousheng97/EgoTextVQA.
[ { "version": "v1", "created": "Tue, 11 Feb 2025 09:45:06 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 14:21:30 GMT" } ]
2025-03-24T00:00:00
[ [ "Zhou", "Sheng", "" ], [ "Xiao", "Junbin", "" ], [ "Li", "Qingyun", "" ], [ "Li", "Yicong", "" ], [ "Yang", "Xun", "" ], [ "Guo", "Dan", "" ], [ "Wang", "Meng", "" ], [ "Chua", "Tat-Seng", "" ], [ "Yao", "Angela", "" ] ]
TITLE: EgoTextVQA: Towards Egocentric Scene-Text Aware Video Question Answering ABSTRACT: We introduce EgoTextVQA, a novel and rigorously constructed benchmark for egocentric QA assistance involving scene text. EgoTextVQA contains 1.5K ego-view videos and 7K scene-text aware questions that reflect real user needs in outdoor driving and indoor housekeeping activities. The questions are designed to elicit identification and reasoning on scene text in an egocentric and dynamic environment. With EgoTextVQA, we comprehensively evaluate 10 prominent multimodal large language models. Currently, all models struggle, and the best results (Gemini 1.5 Pro) are around 33\% accuracy, highlighting the severe deficiency of these techniques in egocentric QA assistance. Our further investigations suggest that precise temporal grounding and multi-frame reasoning, along with high resolution and auxiliary scene-text inputs, are key for better performance. With thorough analyses and heuristic suggestions, we hope EgoTextVQA can serve as a solid testbed for research in egocentric scene-text QA assistance. Our dataset is released at: https://github.com/zhousheng97/EgoTextVQA.
2502.08317
Jiarui Wu
Jiarui Wu, Zhuo Liu, Hangfeng He
Mitigating Hallucinations in Multimodal Spatial Relations through Constraint-Aware Prompting
19 pages
null
null
null
cs.CL cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spatial relation hallucinations pose a persistent challenge in large vision-language models (LVLMs), leading them to generate incorrect predictions about object positions and spatial configurations within an image. To address this issue, we propose a constraint-aware prompting framework designed to reduce spatial relation hallucinations. Specifically, we introduce two types of constraints: (1) bidirectional constraint, which ensures consistency in pairwise object relations, and (2) transitivity constraint, which enforces relational dependence across multiple objects. By incorporating these constraints, LVLMs can produce more spatially coherent and consistent outputs. We evaluate our method on three widely-used spatial relation datasets, demonstrating performance improvements over existing approaches. Additionally, a systematic analysis of various bidirectional relation analysis choices and transitivity reference selections highlights the broader potential of our method to incorporate constraints that mitigate spatial relation hallucinations.
[ { "version": "v1", "created": "Wed, 12 Feb 2025 11:32:19 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 03:39:57 GMT" } ]
2025-03-24T00:00:00
[ [ "Wu", "Jiarui", "" ], [ "Liu", "Zhuo", "" ], [ "He", "Hangfeng", "" ] ]
TITLE: Mitigating Hallucinations in Multimodal Spatial Relations through Constraint-Aware Prompting ABSTRACT: Spatial relation hallucinations pose a persistent challenge in large vision-language models (LVLMs), leading them to generate incorrect predictions about object positions and spatial configurations within an image. To address this issue, we propose a constraint-aware prompting framework designed to reduce spatial relation hallucinations. Specifically, we introduce two types of constraints: (1) bidirectional constraint, which ensures consistency in pairwise object relations, and (2) transitivity constraint, which enforces relational dependence across multiple objects. By incorporating these constraints, LVLMs can produce more spatially coherent and consistent outputs. We evaluate our method on three widely-used spatial relation datasets, demonstrating performance improvements over existing approaches. Additionally, a systematic analysis of various bidirectional relation analysis choices and transitivity reference selections highlights the broader potential of our method to incorporate constraints that mitigate spatial relation hallucinations.
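The two constraint types are easy to state as consistency checks. The sketch below is only an illustration of the idea, not the paper's prompts: `ask_model` is a stub standing in for an LVLM query, and the inverse-relation table is an assumption.

```python
INVERSE = {"left of": "right of", "right of": "left of",
           "above": "below", "below": "above"}

def ask_model(a, b):
    # Stub for querying an LVLM about the spatial relation between a and b.
    answers = {("cup", "plate"): "left of", ("plate", "cup"): "right of",
               ("cup", "lamp"): "left of", ("plate", "lamp"): "left of",
               ("lamp", "cup"): "right of", ("lamp", "plate"): "right of"}
    return answers[(a, b)]

def bidirectional_consistent(a, b):
    # Constraint (1): rel(a, b) must be the inverse of rel(b, a).
    return ask_model(b, a) == INVERSE[ask_model(a, b)]

def transitive_consistent(a, b, c):
    # Constraint (2): if a is left of b and b is left of c, a must be left of c.
    if ask_model(a, b) == ask_model(b, c) == "left of":
        return ask_model(a, c) == "left of"
    return True  # constraint not triggered

print(bidirectional_consistent("cup", "plate"))   # True
print(transitive_consistent("cup", "plate", "lamp"))  # True
```

In the paper these constraints are injected through prompting rather than checked post hoc, but the consistency conditions being enforced are the same.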
2502.10127
Gamal Elghazaly Dr.
Gamal Elghazaly and Raphael Frank
Leveraging V2X for Collaborative HD Maps Construction Using Scene Graph Generation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
High-Definition (HD) maps play a crucial role in autonomous vehicle navigation, complementing onboard perception sensors for improved accuracy and safety. Traditional HD map generation relies on dedicated mapping vehicles, which are costly and fail to capture real-time infrastructure changes. This paper presents HDMapLaneNet, a novel framework leveraging V2X communication and Scene Graph Generation to collaboratively construct a localized geometric layer of HD maps. The approach extracts lane centerlines from front-facing camera images, represents them as graphs, and transmits the data for global aggregation to the cloud via V2X. Preliminary results on the nuScenes dataset demonstrate superior association prediction performance compared to a state-of-the-art method.
[ { "version": "v1", "created": "Fri, 14 Feb 2025 12:56:10 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 16:34:23 GMT" } ]
2025-03-24T00:00:00
[ [ "Elghazaly", "Gamal", "" ], [ "Frank", "Raphael", "" ] ]
TITLE: Leveraging V2X for Collaborative HD Maps Construction Using Scene Graph Generation ABSTRACT: High-Definition (HD) maps play a crucial role in autonomous vehicle navigation, complementing onboard perception sensors for improved accuracy and safety. Traditional HD map generation relies on dedicated mapping vehicles, which are costly and fail to capture real-time infrastructure changes. This paper presents HDMapLaneNet, a novel framework leveraging V2X communication and Scene Graph Generation to collaboratively construct a localized geometric layer of HD maps. The approach extracts lane centerlines from front-facing camera images, represents them as graphs, and transmits the data for global aggregation to the cloud via V2X. Preliminary results on the nuScenes dataset demonstrate superior association prediction performance compared to a state-of-the-art method.
2502.12537
Sina Montazeri
Sina Montazeri, Haseebullah Jumakhan, Amir Mirzaeinia
Finding Optimal Trading History in Reinforcement Learning for Stock Market Trading
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper investigates the optimization of temporal windows in Financial Deep Reinforcement Learning (DRL) models using 2D Convolutional Neural Networks (CNNs). We introduce a novel approach to treating the temporal field as a hyperparameter and examine its impact on model performance across various datasets and feature arrangements. We propose that this temporal field can and should be treated as a hyperparameter of the CNN policy. We examine the significance of this temporal field by iteratively expanding the window of observations presented to the CNN policy during the deep reinforcement learning process. Our iterative process involves progressively increasing the observation period from two weeks to twelve weeks, allowing us to examine the effects of different temporal windows on the model's performance. This window expansion is implemented in two settings. In one setting, we rearrange the features in the dataset to group them by company, allowing the model to have a full view of company data in its observation window and CNN kernel. In the second setting, we do not group the features by company, and features are arranged by category. Our study reveals that shorter temporal windows are most effective when the features are not grouped by company. However, the model utilizes longer temporal windows and yields better performance once we introduce the feature rearrangement. To examine the consistency of our findings, we repeated our experiment on two datasets containing the same thirty companies from the Dow Jones Index but with different features in each dataset and consistently observed the above-mentioned patterns. The result is a trading model that significantly outperforms funds from global financial services firms, such as the Global X Guru fund by Mirae Asset.
[ { "version": "v1", "created": "Tue, 18 Feb 2025 04:50:00 GMT" }, { "version": "v2", "created": "Wed, 19 Feb 2025 10:24:59 GMT" } ]
2025-03-24T00:00:00
[ [ "Montazeri", "Sina", "" ], [ "Jumakhan", "Haseebullah", "" ], [ "Mirzaeinia", "Amir", "" ] ]
TITLE: Finding Optimal Trading History in Reinforcement Learning for Stock Market Trading ABSTRACT: This paper investigates the optimization of temporal windows in Financial Deep Reinforcement Learning (DRL) models using 2D Convolutional Neural Networks (CNNs). We introduce a novel approach to treating the temporal field as a hyperparameter and examine its impact on model performance across various datasets and feature arrangements. We propose that this temporal field can and should be treated as a hyperparameter of the CNN policy. We examine the significance of this temporal field by iteratively expanding the window of observations presented to the CNN policy during the deep reinforcement learning process. Our iterative process involves progressively increasing the observation period from two weeks to twelve weeks, allowing us to examine the effects of different temporal windows on the model's performance. This window expansion is implemented in two settings. In one setting, we rearrange the features in the dataset to group them by company, allowing the model to have a full view of company data in its observation window and CNN kernel. In the second setting, we do not group the features by company, and features are arranged by category. Our study reveals that shorter temporal windows are most effective when the features are not grouped by company. However, the model utilizes longer temporal windows and yields better performance once we introduce the feature rearrangement. To examine the consistency of our findings, we repeated our experiment on two datasets containing the same thirty companies from the Dow Jones Index but with different features in each dataset and consistently observed the above-mentioned patterns. The result is a trading model that significantly outperforms funds from global financial services firms, such as the Global X Guru fund by Mirae Asset.
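The two feature arrangements and the swept window length can be shown with a short NumPy sketch. The shapes, names, and window values below are hypothetical stand-ins for the paper's data pipeline, chosen only to make the idea concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
days, companies, feats = 120, 30, 4            # hypothetical dataset shape
data = rng.normal(size=(days, companies, feats))

def observation(t, window, group_by_company=True):
    """Build the CNN input for day t from the previous `window` days."""
    block = data[t - window:t]                  # (window, companies, feats)
    if group_by_company:
        # Columns ordered c0_f0..c0_f3, c1_f0, ... so each company's features
        # are contiguous and fit inside a CNN kernel's receptive field.
        return block.reshape(window, companies * feats)
    # Grouped by feature category instead: f0 for all companies, then f1, ...
    return block.transpose(0, 2, 1).reshape(window, feats * companies)

for window in (10, 30, 60):                     # roughly 2 to 12 trading weeks
    obs = observation(t=100, window=window)
    print(window, obs.shape)                    # `window` is the swept hyperparameter
```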
2502.15749
Joonghyuk Hahn
Joonghyuk Hahn, Hyeseon Ahn, Jungin Kim, Soohan Lim, Yo-Sub Han
TCProF: Time-Complexity Prediction SSL Framework
26 pages, 13 figures; accepted to NAACL 2025
null
null
null
cs.SE cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Time complexity is a theoretical measure of the amount of time an algorithm needs to execute. In practice, developers implement algorithms as code snippets under limited resources, making the calculation of a code's time complexity a fundamental task. However, determining the precise time complexity of a code is theoretically undecidable. In response, recent advancements have leaned toward deploying datasets for code time complexity prediction and initiating preliminary experiments for this challenge. We investigate the challenge in low-resource scenarios where only a few labeled instances are given for training. Remarkably, we are the first to introduce TCProF: a Time-Complexity Prediction SSL Framework as an effective solution for code time complexity prediction in low-resource settings. TCProF significantly boosts performance by integrating our augmentation, symbolic modules, and a co-training mechanism, achieving a more than 60% improvement over self-training approaches. We further provide an extensive comparative analysis between TCProF, ChatGPT, and Gemini-Pro, offering a detailed evaluation of our approach. Our code is at https://github.com/peer0/few-shot-tc.
[ { "version": "v1", "created": "Mon, 10 Feb 2025 12:39:33 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 01:48:59 GMT" } ]
2025-03-24T00:00:00
[ [ "Hahn", "Joonghyuk", "" ], [ "Ahn", "Hyeseon", "" ], [ "Kim", "Jungin", "" ], [ "Lim", "Soohan", "" ], [ "Han", "Yo-Sub", "" ] ]
TITLE: TCProF: Time-Complexity Prediction SSL Framework ABSTRACT: Time complexity is a theoretical measure of the amount of time an algorithm needs to execute. In practice, developers implement algorithms as code snippets under limited resources, making the calculation of a code's time complexity a fundamental task. However, determining the precise time complexity of a code is theoretically undecidable. In response, recent advancements have leaned toward deploying datasets for code time complexity prediction and initiating preliminary experiments for this challenge. We investigate the challenge in low-resource scenarios where only a few labeled instances are given for training. Remarkably, we are the first to introduce TCProF: a Time-Complexity Prediction SSL Framework as an effective solution for code time complexity prediction in low-resource settings. TCProF significantly boosts performance by integrating our augmentation, symbolic modules, and a co-training mechanism, achieving a more than 60% improvement over self-training approaches. We further provide an extensive comparative analysis between TCProF, ChatGPT, and Gemini-Pro, offering a detailed evaluation of our approach. Our code is at https://github.com/peer0/few-shot-tc.
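The co-training mechanism named in the abstract can be sketched generically: two models trained on different views of the data exchange confident pseudo-labels. This is a standard co-training loop under toy assumptions (synthetic data, logistic regression, a 0.95 confidence threshold), not TCProF's actual modules.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))
y = (X.sum(axis=1) > 0).astype(int)

# Two "views" of each sample (e.g. raw-token features vs. symbolic features).
view_a, view_b = X[:, :4], X[:, 4:]
labeled = np.arange(20)                     # low-resource: only 20 gold labels
unlabeled = np.arange(20, 400)

model_a = LogisticRegression().fit(view_a[labeled], y[labeled])
model_b = LogisticRegression().fit(view_b[labeled], y[labeled])

for _ in range(5):  # co-training rounds
    pseudo_a = model_a.predict(view_a[unlabeled])
    pseudo_b = model_b.predict(view_b[unlabeled])
    conf_a = model_a.predict_proba(view_a[unlabeled]).max(axis=1) > 0.95
    conf_b = model_b.predict_proba(view_b[unlabeled]).max(axis=1) > 0.95
    # Each model is retrained on gold labels plus the *other* model's
    # confident pseudo-labels -- the co-training exchange.
    model_a = LogisticRegression().fit(
        np.vstack([view_a[labeled], view_a[unlabeled][conf_b]]),
        np.concatenate([y[labeled], pseudo_b[conf_b]]))
    model_b = LogisticRegression().fit(
        np.vstack([view_b[labeled], view_b[unlabeled][conf_a]]),
        np.concatenate([y[labeled], pseudo_a[conf_a]]))

print("view-A accuracy:", (model_a.predict(view_a) == y).mean())
```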
2502.18321
Shuyi Chen
Shuyi Chen, Ferdinando Fioretto, Feng Qiu, Shixiang Zhu
Global-Decision-Focused Neural ODEs for Proactive Grid Resilience Management
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extreme hazard events such as wildfires and hurricanes increasingly threaten power systems, causing widespread outages and disrupting critical services. Recently, predict-then-optimize approaches have gained traction in grid operations, where system functionality forecasts are first generated and then used as inputs for downstream decision-making. However, this two-stage method often results in a misalignment between prediction and optimization objectives, leading to suboptimal resource allocation. To address this, we propose predict-all-then-optimize-globally (PATOG), a framework that integrates outage prediction with globally optimized interventions. At its core, our global-decision-focused (GDF) neural ODE model captures outage dynamics while optimizing resilience strategies in a decision-aware manner. Unlike conventional methods, our approach ensures spatially and temporally coherent decision-making, improving both predictive accuracy and operational efficiency. Experiments on synthetic and real-world datasets demonstrate significant improvements in outage prediction consistency and grid resilience.
[ { "version": "v1", "created": "Tue, 25 Feb 2025 16:15:35 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 15:16:16 GMT" } ]
2025-03-24T00:00:00
[ [ "Chen", "Shuyi", "" ], [ "Fioretto", "Ferdinando", "" ], [ "Qiu", "Feng", "" ], [ "Zhu", "Shixiang", "" ] ]
TITLE: Global-Decision-Focused Neural ODEs for Proactive Grid Resilience Management ABSTRACT: Extreme hazard events such as wildfires and hurricanes increasingly threaten power systems, causing widespread outages and disrupting critical services. Recently, predict-then-optimize approaches have gained traction in grid operations, where system functionality forecasts are first generated and then used as inputs for downstream decision-making. However, this two-stage method often results in a misalignment between prediction and optimization objectives, leading to suboptimal resource allocation. To address this, we propose predict-all-then-optimize-globally (PATOG), a framework that integrates outage prediction with globally optimized interventions. At its core, our global-decision-focused (GDF) neural ODE model captures outage dynamics while optimizing resilience strategies in a decision-aware manner. Unlike conventional methods, our approach ensures spatially and temporally coherent decision-making, improving both predictive accuracy and operational efficiency. Experiments on synthetic and real-world datasets demonstrate significant improvements in outage prediction consistency and grid resilience.
2503.01924
Yuhang Wang
Wang YuHang, Junkang Guo, Aolei Liu, Kaihao Wang, Zaitong Wu, Zhenyu Liu, Wenfei Yin, Jian Liu
TAET: Two-Stage Adversarial Equalization Training on Long-Tailed Distributions
8 pages of main content and 5 pages of appendices; accepted by CVPR 2025
Computer Vision and Pattern Recognition 2025
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adversarial robustness is a critical challenge in deploying deep neural networks for real-world applications. While adversarial training is a widely recognized defense strategy, most existing studies focus on balanced datasets, overlooking the prevalence of long-tailed distributions in real-world data, which significantly complicates robustness. This paper provides a comprehensive analysis of adversarial training under long-tailed distributions and identifies limitations in the current state-of-the-art method, AT-BSL, in achieving robust performance under such conditions. To address these challenges, we propose a novel training framework, TAET, which integrates an initial stabilization phase followed by a stratified equalization adversarial training phase. Additionally, prior work on long-tailed robustness has largely ignored the crucial evaluation metric of balanced accuracy. To bridge this gap, we introduce the concept of balanced robustness, a comprehensive metric tailored for assessing robustness under long-tailed distributions. Extensive experiments demonstrate that our method surpasses existing advanced defenses, achieving significant improvements in both memory and computational efficiency. This work represents a substantial advancement in addressing robustness challenges in real-world applications. Our code is available at: https://github.com/BuhuiOK/TAET-Two-Stage-Adversarial-Equalization-Training-on-Long-Tailed-Distributions.
[ { "version": "v1", "created": "Sun, 2 Mar 2025 12:07:00 GMT" }, { "version": "v2", "created": "Thu, 20 Mar 2025 08:49:42 GMT" }, { "version": "v3", "created": "Fri, 21 Mar 2025 09:56:29 GMT" } ]
2025-03-24T00:00:00
[ [ "YuHang", "Wang", "" ], [ "Guo", "Junkang", "" ], [ "Liu", "Aolei", "" ], [ "Wang", "Kaihao", "" ], [ "Wu", "Zaitong", "" ], [ "Liu", "Zhenyu", "" ], [ "Yin", "Wenfei", "" ], [ "Liu", "Jian", "" ] ]
TITLE: TAET: Two-Stage Adversarial Equalization Training on Long-Tailed Distributions ABSTRACT: Adversarial robustness is a critical challenge in deploying deep neural networks for real-world applications. While adversarial training is a widely recognized defense strategy, most existing studies focus on balanced datasets, overlooking the prevalence of long-tailed distributions in real-world data, which significantly complicates robustness. This paper provides a comprehensive analysis of adversarial training under long-tailed distributions and identifies limitations in the current state-of-the-art method, AT-BSL, in achieving robust performance under such conditions. To address these challenges, we propose a novel training framework, TAET, which integrates an initial stabilization phase followed by a stratified equalization adversarial training phase. Additionally, prior work on long-tailed robustness has largely ignored the crucial evaluation metric of balanced accuracy. To bridge this gap, we introduce the concept of balanced robustness, a comprehensive metric tailored for assessing robustness under long-tailed distributions. Extensive experiments demonstrate that our method surpasses existing advanced defenses, achieving significant improvements in both memory and computational efficiency. This work represents a substantial advancement in addressing robustness challenges in real-world applications. Our code is available at: https://github.com/BuhuiOK/TAET-Two-Stage-Adversarial-Equalization-Training-on-Long-Tailed-Distributions.
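The two-stage schedule described in the TAET abstract can be illustrated with a skeleton: a clean stabilization phase followed by an adversarial phase whose perturbation budget is stratified by class. This is a generic PyTorch sketch under stated assumptions, not the authors' method: the PGD attack, the frequency-scaled budget rule, and all hyperparameters are illustrative stand-ins.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha=0.01, steps=5):
    """Generic L-inf PGD; `eps` may vary per sample (the stratified budget)."""
    x_adv = (x + torch.empty_like(x).uniform_(-1.0, 1.0) * eps).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project to ball
    return x_adv

model = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 3))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Hypothetical long-tailed class frequencies; here rarer classes get a smaller
# budget so they are not overwhelmed (one possible equalization rule).
class_freq = torch.tensor([0.8, 0.15, 0.05])
eps_per_class = 0.03 * class_freq / class_freq.max()

x = torch.randn(64, 10)
y = torch.randint(0, 3, (64,))

for epoch in range(20):
    if epoch < 5:
        # Stage 1: initial stabilization on clean data.
        loss = F.cross_entropy(model(x), y)
    else:
        # Stage 2: stratified (per-class budget) adversarial training.
        eps = eps_per_class[y].unsqueeze(1)
        x_adv = pgd_attack(model, x, y, eps)
        loss = F.cross_entropy(model(x_adv), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```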
2503.03234
Dakarai Crowder
Dakarai Crowder, Kojo Vandyck, Xiping Sun, James McCann, Wenzhen Yuan
Social Gesture Recognition in spHRI: Leveraging Fabric-Based Tactile Sensing on Humanoid Robots
Accepted to ICRA 25. 8 pages, 8 figures
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Humans are able to convey different messages using only touch. Equipping robots with the ability to understand social touch adds another modality in which humans and robots can communicate. In this paper, we present a social gesture recognition system using a fabric-based, large-scale tactile sensor placed onto the arms of a humanoid robot. We built a social gesture dataset using multiple participants and extracted temporal features for classification. By collecting tactile data on a humanoid robot, our system provides insights into human-robot social touch, and shows that fabric-based sensors could be a promising way to advance the development of spHRI systems for more natural and effective communication.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 07:24:00 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 22:36:51 GMT" }, { "version": "v3", "created": "Fri, 21 Mar 2025 16:50:42 GMT" } ]
2025-03-24T00:00:00
[ [ "Crowder", "Dakarai", "" ], [ "Vandyck", "Kojo", "" ], [ "Sun", "Xiping", "" ], [ "McCann", "James", "" ], [ "Yuan", "Wenzhen", "" ] ]
TITLE: Social Gesture Recognition in spHRI: Leveraging Fabric-Based Tactile Sensing on Humanoid Robots ABSTRACT: Humans are able to convey different messages using only touch. Equipping robots with the ability to understand social touch adds another modality in which humans and robots can communicate. In this paper, we present a social gesture recognition system using a fabric-based, large-scale tactile sensor placed onto the arms of a humanoid robot. We built a social gesture dataset using multiple participants and extracted temporal features for classification. By collecting tactile data on a humanoid robot, our system provides insights into human-robot social touch, and shows that fabric-based sensors could be a promising way to advance the development of spHRI systems for more natural and effective communication.
2503.03750
Mantas Mazeika
Richard Ren, Arunim Agarwal, Mantas Mazeika, Cristina Menghini, Robert Vacareanu, Brad Kenstler, Mick Yang, Isabelle Barrass, Alice Gatti, Xuwang Yin, Eduardo Trevino, Matias Geralnik, Adam Khoja, Dean Lee, Summer Yue, Dan Hendrycks
The MASK Benchmark: Disentangling Honesty From Accuracy in AI Systems
Website: https://www.mask-benchmark.ai
null
null
null
cs.LG cs.AI cs.CL cs.CY
http://creativecommons.org/licenses/by/4.0/
As large language models (LLMs) become more capable and agentic, the requirement for trust in their outputs grows significantly, yet at the same time concerns have been mounting that models may learn to lie in pursuit of their goals. To address these concerns, a body of work has emerged around the notion of "honesty" in LLMs, along with interventions aimed at mitigating deceptive behaviors. However, evaluations of honesty are currently highly limited, with no benchmark combining large scale and applicability to all models. Moreover, many benchmarks claiming to measure honesty in fact simply measure accuracy--the correctness of a model's beliefs--in disguise. In this work, we introduce a large-scale human-collected dataset for measuring honesty directly, allowing us to disentangle accuracy from honesty for the first time. Across a diverse set of LLMs, we find that while larger models obtain higher accuracy on our benchmark, they do not become more honest. Surprisingly, while most frontier LLMs obtain high scores on truthfulness benchmarks, we find a substantial propensity in frontier LLMs to lie when pressured to do so, resulting in low honesty scores on our benchmark. We find that simple methods, such as representation engineering interventions, can improve honesty. These results underscore the growing need for robust evaluations and effective interventions to ensure LLMs remain trustworthy.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 18:59:23 GMT" }, { "version": "v2", "created": "Thu, 20 Mar 2025 23:06:17 GMT" } ]
2025-03-24T00:00:00
[ [ "Ren", "Richard", "" ], [ "Agarwal", "Arunim", "" ], [ "Mazeika", "Mantas", "" ], [ "Menghini", "Cristina", "" ], [ "Vacareanu", "Robert", "" ], [ "Kenstler", "Brad", "" ], [ "Yang", "Mick", "" ], [ "Barrass", "Isabelle", "" ], [ "Gatti", "Alice", "" ], [ "Yin", "Xuwang", "" ], [ "Trevino", "Eduardo", "" ], [ "Geralnik", "Matias", "" ], [ "Khoja", "Adam", "" ], [ "Lee", "Dean", "" ], [ "Yue", "Summer", "" ], [ "Hendrycks", "Dan", "" ] ]
TITLE: The MASK Benchmark: Disentangling Honesty From Accuracy in AI Systems ABSTRACT: As large language models (LLMs) become more capable and agentic, the requirement for trust in their outputs grows significantly, yet at the same time concerns have been mounting that models may learn to lie in pursuit of their goals. To address these concerns, a body of work has emerged around the notion of "honesty" in LLMs, along with interventions aimed at mitigating deceptive behaviors. However, evaluations of honesty are currently highly limited, with no benchmark combining large scale and applicability to all models. Moreover, many benchmarks claiming to measure honesty in fact simply measure accuracy--the correctness of a model's beliefs--in disguise. In this work, we introduce a large-scale human-collected dataset for measuring honesty directly, allowing us to disentangle accuracy from honesty for the first time. Across a diverse set of LLMs, we find that while larger models obtain higher accuracy on our benchmark, they do not become more honest. Surprisingly, while most frontier LLMs obtain high scores on truthfulness benchmarks, we find a substantial propensity in frontier LLMs to lie when pressured to do so, resulting in low honesty scores on our benchmark. We find that simple methods, such as representation engineering interventions, can improve honesty. These results underscore the growing need for robust evaluations and effective interventions to ensure LLMs remain trustworthy.
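The disentanglement at the heart of the MASK abstract, honesty (does the stated answer match the model's own belief?) versus accuracy (does the belief match the ground truth?), can be shown with a toy scoring sketch. The elicitation functions below are stubs; the benchmark's actual elicitation uses pressure prompts and aggregated neutral queries.

```python
def elicit_belief(model, question):
    # Stub: in practice, ask the question neutrally (no pressure), possibly
    # aggregating over paraphrases, to estimate the model's belief.
    return model["belief"][question]

def elicit_statement(model, question):
    # Stub: ask the same question under a pressure prompt that gives the
    # model an incentive to lie.
    return model["pressured"][question]

def score(model, questions, ground_truth):
    honest = sum(elicit_statement(model, q) == elicit_belief(model, q)
                 for q in questions)
    accurate = sum(elicit_belief(model, q) == ground_truth[q]
                   for q in questions)
    n = len(questions)
    return honest / n, accurate / n

truth = {"q1": "yes", "q2": "no"}
model = {"belief": {"q1": "yes", "q2": "no"},    # accurate beliefs...
         "pressured": {"q1": "no", "q2": "no"}}  # ...but lies on q1 under pressure
print(score(model, ["q1", "q2"], truth))          # honesty 0.5, accuracy 1.0
```

The toy model is perfectly accurate yet only 50% honest, which is exactly the kind of gap the benchmark is designed to expose.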
2503.06146
Ziyue Huang
Ziyue Huang, Yongchao Feng, Shuai Yang, Ziqi Liu, Qingjie Liu, and Yunhong Wang
OpenRSD: Towards Open-prompts for Object Detection in Remote Sensing Images
11 pages, 4 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Remote sensing object detection has made significant progress, but most studies still focus on closed-set detection, limiting generalization across diverse datasets. Open-vocabulary object detection (OVD) provides a solution by leveraging multimodal associations between text prompts and visual features. However, existing OVD methods for remote sensing (RS) images are constrained by small-scale datasets and fail to address the unique challenges of remote sensing interpretation, including oriented object detection and the need for both high precision and real-time performance in diverse scenarios. To tackle these challenges, we propose OpenRSD, a universal open-prompt RS object detection framework. OpenRSD supports multimodal prompts and integrates multi-task detection heads to balance accuracy and real-time requirements. Additionally, we design a multi-stage training pipeline to enhance the generalization of the model. Evaluated on seven public datasets, OpenRSD demonstrates superior performance in oriented and horizontal bounding box detection, with real-time inference capabilities suitable for large-scale RS image analysis. Compared to YOLO-World, OpenRSD exhibits an 8.7\% higher average precision and achieves an inference speed of 20.8 FPS. Codes and models will be released.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 10:08:46 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 06:47:18 GMT" } ]
2025-03-24T00:00:00
[ [ "Huang", "Ziyue", "" ], [ "Feng", "Yongchao", "" ], [ "Yang", "Shuai", "" ], [ "Liu", "Ziqi", "" ], [ "Liu", "Qingjie", "" ], [ "Wang", "Yunhong", "" ] ]
TITLE: OpenRSD: Towards Open-prompts for Object Detection in Remote Sensing Images ABSTRACT: Remote sensing object detection has made significant progress, but most studies still focus on closed-set detection, limiting generalization across diverse datasets. Open-vocabulary object detection (OVD) provides a solution by leveraging multimodal associations between text prompts and visual features. However, existing OVD methods for remote sensing (RS) images are constrained by small-scale datasets and fail to address the unique challenges of remote sensing interpretation, including oriented object detection and the need for both high precision and real-time performance in diverse scenarios. To tackle these challenges, we propose OpenRSD, a universal open-prompt RS object detection framework. OpenRSD supports multimodal prompts and integrates multi-task detection heads to balance accuracy and real-time requirements. Additionally, we design a multi-stage training pipeline to enhance the generalization of the model. Evaluated on seven public datasets, OpenRSD demonstrates superior performance in oriented and horizontal bounding box detection, with real-time inference capabilities suitable for large-scale RS image analysis. Compared to YOLO-World, OpenRSD exhibits an 8.7\% higher average precision and achieves an inference speed of 20.8 FPS. Codes and models will be released.
2503.06158
Ziruo Hao
Ziruo Hao, Zhenhua Cui, Tao Yang, Bo Hu, Xiaofeng Wu, Hui Feng
Invariant Federated Learning for Edge Intelligence: Mitigating Heterogeneity and Asynchrony via Exit Strategy and Invariant Penalty
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper provides an invariant federated learning system for resource-constrained edge intelligence. This framework can avoid the impact of heterogeneity and asynchrony through an exit strategy and an invariant penalty. We decompose local information into two orthogonal components to measure the contribution or impact of heterogeneous and asynchronous clients. We argue that the exit of abnormal clients can preserve the model's effectiveness on most clients. Meanwhile, to ensure the model's performance on exited abnormal clients and those that lack training resources, we propose Federated Learning with Invariant Penalty for Generalization (FedIPG) based on the invariant orthogonal decomposition of parameters. Theoretical proof shows that FedIPG reduces the Out-Of-Distribution prediction loss without increasing the communication burden. The performance of FedIPG combined with an exit strategy is tested empirically at multiple scales on four datasets. The results show that our system can enhance In-Distribution performance and outperform the state-of-the-art algorithm in Out-Of-Distribution generalization while maintaining model convergence. Additionally, the visualization experiments show that FedIPG exhibits preliminary causal behavior by ignoring confounding features.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 10:47:27 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 12:03:44 GMT" } ]
2025-03-24T00:00:00
[ [ "Hao", "Ziruo", "" ], [ "Cui", "Zhenhua", "" ], [ "Yang", "Tao", "" ], [ "Hu", "Bo", "" ], [ "Wu", "Xiaofeng", "" ], [ "Feng", "Hui", "" ] ]
TITLE: Invariant Federated Learning for Edge Intelligence: Mitigating Heterogeneity and Asynchrony via Exit Strategy and Invariant Penalty ABSTRACT: This paper provides an invariant federated learning system for resource-constrained edge intelligence. This framework can avoid the impact of heterogeneity and asynchrony through an exit strategy and an invariant penalty. We decompose local information into two orthogonal components to measure the contribution or impact of heterogeneous and asynchronous clients. We argue that the exit of abnormal clients can preserve the model's effectiveness on most clients. Meanwhile, to ensure the model's performance on exited abnormal clients and those that lack training resources, we propose Federated Learning with Invariant Penalty for Generalization (FedIPG) based on the invariant orthogonal decomposition of parameters. Theoretical proof shows that FedIPG reduces the Out-Of-Distribution prediction loss without increasing the communication burden. The performance of FedIPG combined with an exit strategy is tested empirically at multiple scales on four datasets. The results show that our system can enhance In-Distribution performance and outperform the state-of-the-art algorithm in Out-Of-Distribution generalization while maintaining model convergence. Additionally, the visualization experiments show that FedIPG exhibits preliminary causal behavior by ignoring confounding features.
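The orthogonal decomposition that the abstract builds on can be sketched in NumPy: split each client update into a component parallel to a reference direction and an orthogonal residual, then use the residual both as an exit signal and as a penalty. The reference direction, thresholds, and penalty weight below are assumptions for illustration, not FedIPG's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
global_dir = rng.normal(size=16)
global_dir /= np.linalg.norm(global_dir)   # reference (e.g. global update) direction

def decompose(update, direction):
    """Split a client update into parallel and orthogonal components."""
    parallel = (update @ direction) * direction
    return parallel, update - parallel

# Three simulated clients; the last one is "abnormal" (large deviation).
clients = [global_dir + rng.normal(scale=s, size=16) for s in (0.1, 0.1, 2.0)]

kept = []
for i, u in enumerate(clients):
    par, orth = decompose(u, global_dir)
    ratio = np.linalg.norm(orth) / (np.linalg.norm(par) + 1e-12)
    if ratio > 1.0:                          # hypothetical exit threshold
        print(f"client {i} exits (orthogonal ratio {ratio:.2f})")
        continue
    penalty = 0.1 * np.linalg.norm(orth) ** 2  # invariant-penalty-style term
    kept.append((u, penalty))

aggregate = np.mean([u for u, _ in kept], axis=0)
print("aggregated over", len(kept), "clients")
```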
2503.07390
Boeun Kim
Boeun Kim, Hea In Jeong, JungHoon Sung, Yihua Cheng, Jeongmin Lee, Ju Yong Chang, Sang-Il Choi, Younggeun Choi, Saim Shin, Jungho Kim, Hyung Jin Chang
PersonaBooth: Personalized Text-to-Motion Generation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper introduces Motion Personalization, a new task that generates personalized motions aligned with text descriptions using several basic motions containing Persona. To support this novel task, we introduce a new large-scale motion dataset called PerMo (PersonaMotion), which captures the unique personas of multiple actors. We also propose PersonaBooth, a multi-modal finetuning method for a pretrained motion diffusion model. PersonaBooth addresses two main challenges: i) A significant distribution gap between the persona-focused PerMo dataset and the pretraining datasets, which lack persona-specific data, and ii) the difficulty of capturing a consistent persona from motions that vary in content (action type). To tackle the dataset distribution gap, we introduce a persona token to accept new persona features and perform multi-modal adaptation for both text and visuals during finetuning. To capture a consistent persona, we incorporate a contrastive learning technique to enhance intra-cohesion among samples with the same persona. Furthermore, we introduce a context-aware fusion mechanism to maximize the integration of persona cues from multiple input motions. PersonaBooth outperforms state-of-the-art motion style transfer methods, establishing a new benchmark for motion personalization.
[ { "version": "v1", "created": "Mon, 10 Mar 2025 14:38:00 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 09:23:01 GMT" }, { "version": "v3", "created": "Fri, 21 Mar 2025 04:22:55 GMT" } ]
2025-03-24T00:00:00
[ [ "Kim", "Boeun", "" ], [ "Jeong", "Hea In", "" ], [ "Sung", "JungHoon", "" ], [ "Cheng", "Yihua", "" ], [ "Lee", "Jeongmin", "" ], [ "Chang", "Ju Yong", "" ], [ "Choi", "Sang-Il", "" ], [ "Choi", "Younggeun", "" ], [ "Shin", "Saim", "" ], [ "Kim", "Jungho", "" ], [ "Chang", "Hyung Jin", "" ] ]
TITLE: PersonaBooth: Personalized Text-to-Motion Generation ABSTRACT: This paper introduces Motion Personalization, a new task that generates personalized motions aligned with text descriptions using several basic motions containing Persona. To support this novel task, we introduce a new large-scale motion dataset called PerMo (PersonaMotion), which captures the unique personas of multiple actors. We also propose PersonaBooth, a multi-modal finetuning method for a pretrained motion diffusion model. PersonaBooth addresses two main challenges: i) A significant distribution gap between the persona-focused PerMo dataset and the pretraining datasets, which lack persona-specific data, and ii) the difficulty of capturing a consistent persona from motions that vary in content (action type). To tackle the dataset distribution gap, we introduce a persona token to accept new persona features and perform multi-modal adaptation for both text and visuals during finetuning. To capture a consistent persona, we incorporate a contrastive learning technique to enhance intra-cohesion among samples with the same persona. Furthermore, we introduce a context-aware fusion mechanism to maximize the integration of persona cues from multiple input motions. PersonaBooth outperforms state-of-the-art motion style transfer methods, establishing a new benchmark for motion personalization.
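The "intra-cohesion" objective mentioned in the abstract is in the family of supervised contrastive losses, where samples sharing a persona label are pulled together. Below is a generic SupCon-style sketch in PyTorch, not PersonaBooth's exact loss; the temperature and batch layout are arbitrary.

```python
import torch
import torch.nn.functional as F

def persona_contrastive_loss(features, persona_ids, temperature=0.1):
    """Pull together embeddings of motions that share a persona label."""
    z = F.normalize(features, dim=1)
    sim = z @ z.T / temperature                       # pairwise similarities
    n = z.size(0)
    mask_self = torch.eye(n, dtype=torch.bool)
    same = persona_ids.unsqueeze(0) == persona_ids.unsqueeze(1)
    positives = (same & ~mask_self).float()
    # Log-softmax over all other samples, averaged over positive pairs.
    logits = sim.masked_fill(mask_self, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    loss = -(log_prob * positives).sum(dim=1) / positives.sum(dim=1).clamp(min=1)
    return loss.mean()

feats = torch.randn(8, 32, requires_grad=True)        # motion embeddings
ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])          # persona labels
persona_contrastive_loss(feats, ids).backward()       # differentiable end to end
```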
2503.08140
Ethan Griffiths
Ethan Griffiths, Maryam Haghighat, Simon Denman, Clinton Fookes, Milad Ramezani
HOTFormerLoc: Hierarchical Octree Transformer for Versatile Lidar Place Recognition Across Ground and Aerial Views
16 pages, 13 figures, 10 tables, Accepted to CVPR 2025
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present HOTFormerLoc, a novel and versatile Hierarchical Octree-based TransFormer, for large-scale 3D place recognition in both ground-to-ground and ground-to-aerial scenarios across urban and forest environments. We propose an octree-based multi-scale attention mechanism that captures spatial and semantic features across granularities. To address the variable density of point distributions from spinning lidar, we present cylindrical octree attention windows to reflect the underlying distribution during attention. We introduce relay tokens to enable efficient global-local interactions and multi-scale representation learning at reduced computational cost. Our pyramid attentional pooling then synthesises a robust global descriptor for end-to-end place recognition in challenging environments. In addition, we introduce CS-Wild-Places, a novel 3D cross-source dataset featuring point cloud data from aerial and ground lidar scans captured in dense forests. Point clouds in CS-Wild-Places contain representational gaps and distinctive attributes such as varying point densities and noise patterns, making it a challenging benchmark for cross-view localisation in the wild. HOTFormerLoc achieves a top-1 average recall improvement of 5.5% - 11.5% on the CS-Wild-Places benchmark. Furthermore, it consistently outperforms SOTA 3D place recognition methods, with an average performance gain of 4.9% on well-established urban and forest datasets. The code and CS-Wild-Places benchmark are available at https://csiro-robotics.github.io/HOTFormerLoc.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 07:59:45 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 07:00:11 GMT" } ]
2025-03-24T00:00:00
[ [ "Griffiths", "Ethan", "" ], [ "Haghighat", "Maryam", "" ], [ "Denman", "Simon", "" ], [ "Fookes", "Clinton", "" ], [ "Ramezani", "Milad", "" ] ]
TITLE: HOTFormerLoc: Hierarchical Octree Transformer for Versatile Lidar Place Recognition Across Ground and Aerial Views ABSTRACT: We present HOTFormerLoc, a novel and versatile Hierarchical Octree-based TransFormer, for large-scale 3D place recognition in both ground-to-ground and ground-to-aerial scenarios across urban and forest environments. We propose an octree-based multi-scale attention mechanism that captures spatial and semantic features across granularities. To address the variable density of point distributions from spinning lidar, we present cylindrical octree attention windows to reflect the underlying distribution during attention. We introduce relay tokens to enable efficient global-local interactions and multi-scale representation learning at reduced computational cost. Our pyramid attentional pooling then synthesises a robust global descriptor for end-to-end place recognition in challenging environments. In addition, we introduce CS-Wild-Places, a novel 3D cross-source dataset featuring point cloud data from aerial and ground lidar scans captured in dense forests. Point clouds in CS-Wild-Places contain representational gaps and distinctive attributes such as varying point densities and noise patterns, making it a challenging benchmark for cross-view localisation in the wild. HOTFormerLoc achieves a top-1 average recall improvement of 5.5% - 11.5% on the CS-Wild-Places benchmark. Furthermore, it consistently outperforms SOTA 3D place recognition methods, with an average performance gain of 4.9% on well-established urban and forest datasets. The code and CS-Wild-Places benchmark are available at https://csiro-robotics.github.io/HOTFormerLoc.
2503.09050
Alvin Kimbowa
Alvin Kimbowa and Arjun Parmar and Maziar Badii and David Liu and Matthew Harkey and Ilker Hacihaliloglu
Mono2D: A Trainable Monogenic Layer for Robust Knee Cartilage Segmentation on Out-of-Distribution 2D Ultrasound Data
11 pages, removed unrelated LaTeX template figure from last page
null
null
null
eess.IV cs.CV
http://creativecommons.org/licenses/by/4.0/
Automated knee cartilage segmentation using point-of-care ultrasound devices and deep-learning networks has the potential to enhance the management of knee osteoarthritis. However, segmentation algorithms often struggle with domain shifts caused by variations in ultrasound devices and acquisition parameters, limiting their generalizability. In this paper, we propose Mono2D, a monogenic layer that extracts multi-scale, contrast- and intensity-invariant local phase features using trainable bandpass quadrature filters. This layer mitigates domain shifts, improving generalization to out-of-distribution domains. Mono2D is integrated before the first layer of a segmentation network, and its parameters are jointly trained alongside the network's parameters. We evaluated Mono2D on a multi-domain 2D ultrasound knee cartilage dataset for single-source domain generalization (SSDG). Our results demonstrate that Mono2D outperforms other SSDG methods in terms of Dice score and mean average surface distance. To further assess its generalizability, we evaluate Mono2D on a multi-site prostate MRI dataset, where it continues to outperform other SSDG methods, highlighting its potential to improve domain generalization in medical imaging. Nevertheless, further evaluation on diverse datasets is still necessary to assess its clinical utility.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 04:27:45 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 15:07:07 GMT" } ]
2025-03-24T00:00:00
[ [ "Kimbowa", "Alvin", "" ], [ "Parmar", "Arjun", "" ], [ "Badii", "Maziar", "" ], [ "Liu", "David", "" ], [ "Harkey", "Matthew", "" ], [ "Hacihaliloglu", "Ilker", "" ] ]
TITLE: Mono2D: A Trainable Monogenic Layer for Robust Knee Cartilage Segmentation on Out-of-Distribution 2D Ultrasound Data ABSTRACT: Automated knee cartilage segmentation using point-of-care ultrasound devices and deep-learning networks has the potential to enhance the management of knee osteoarthritis. However, segmentation algorithms often struggle with domain shifts caused by variations in ultrasound devices and acquisition parameters, limiting their generalizability. In this paper, we propose Mono2D, a monogenic layer that extracts multi-scale, contrast- and intensity-invariant local phase features using trainable bandpass quadrature filters. This layer mitigates domain shifts, improving generalization to out-of-distribution domains. Mono2D is integrated before the first layer of a segmentation network, and its parameters are jointly trained alongside the network's parameters. We evaluated Mono2D on a multi-domain 2D ultrasound knee cartilage dataset for single-source domain generalization (SSDG). Our results demonstrate that Mono2D outperforms other SSDG methods in terms of Dice score and mean average surface distance. To further assess its generalizability, we evaluate Mono2D on a multi-site prostate MRI dataset, where it continues to outperform other SSDG methods, highlighting its potential to improve domain generalization in medical imaging. Nevertheless, further evaluation on diverse datasets is still necessary to assess its clinical utility.
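For readers unfamiliar with monogenic local phase, the sketch below computes it the standard way: a bandpass (log-Gabor) filter paired with the Riesz transform as its quadrature counterpart. In the paper the bandpass filter parameters are trainable; here `f0` and `sigma` are fixed illustrative values, and this is not the authors' layer.

```python
import numpy as np

def monogenic_local_phase(img, f0=0.1, sigma=0.55):
    """Local phase via a log-Gabor bandpass filter and the Riesz transform."""
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC term
    log_gabor = np.exp(-(np.log(radius / f0))**2 / (2 * np.log(sigma)**2))
    log_gabor[0, 0] = 0.0                   # zero out the DC component
    # Riesz transform kernels: the odd (quadrature) part of the filter pair.
    riesz1 = -1j * u / radius
    riesz2 = -1j * v / radius
    F_img = np.fft.fft2(img)
    even = np.real(np.fft.ifft2(F_img * log_gabor))
    odd1 = np.real(np.fft.ifft2(F_img * log_gabor * riesz1))
    odd2 = np.real(np.fft.ifft2(F_img * log_gabor * riesz2))
    odd = np.sqrt(odd1**2 + odd2**2)
    return np.arctan2(odd, even)            # contrast-invariant local phase

phase = monogenic_local_phase(np.random.rand(64, 64))
print(phase.shape)
```

Because local phase depends on the ratio of odd to even responses rather than on signal magnitude, it is invariant to contrast and intensity scaling, which is the property the abstract leans on for domain generalization.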
2503.09417
Dima Taji
Dima Taji and Daniel Zeman
Towards Generating Automatic Anaphora Annotations
7 pages, 0 figures, 2 tables
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Training models that can perform well on various NLP tasks requires large amounts of data, and this becomes more apparent with nuanced tasks such as anaphora and coreference resolution. To combat the prohibitive costs of creating manually gold-annotated data, this paper explores two methods to automatically create datasets with coreferential annotations: direct conversion from existing datasets, and parsing using multilingual models capable of handling new and unseen languages. The paper details the current progress on those two fronts, as well as the challenges the efforts currently face, and our approach to overcoming these challenges.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 14:15:57 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 13:00:05 GMT" } ]
2025-03-24T00:00:00
[ [ "Taji", "Dima", "" ], [ "Zeman", "Daniel", "" ] ]
TITLE: Towards Generating Automatic Anaphora Annotations ABSTRACT: Training models that can perform well on various NLP tasks requires large amounts of data, and this becomes more apparent with nuanced tasks such as anaphora and coreference resolution. To combat the prohibitive costs of creating manually annotated gold data, this paper explores two methods to automatically create datasets with coreferential annotations: direct conversion from existing datasets, and parsing using multilingual models capable of handling new and unseen languages. The paper details the current progress on those two fronts, as well as the challenges these efforts currently face, and our approach to overcoming these challenges.
2503.10148
Jialin Zhu
Jialin Zhu, Jiangbei Yue, Feixiang He, He Wang
3D Student Splatting and Scooping
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, 3D Gaussian Splatting (3DGS) has provided a new framework for novel view synthesis and sparked a new wave of research in neural rendering and related applications. As 3DGS is becoming a foundational component of many models, any improvement on 3DGS itself can bring huge benefits. To this end, we aim to improve the fundamental paradigm and formulation of 3DGS. We argue that as an unnormalized mixture model, it needs to be neither Gaussians nor splatting. We subsequently propose a new mixture model consisting of flexible Student's t distributions, with both positive (splatting) and negative (scooping) densities. We name our model Student Splatting and Scooping, or SSS. While providing better expressivity, SSS also poses new challenges in learning. Therefore, we also propose a new principled sampling approach for optimization. Through exhaustive evaluation and comparison across multiple datasets, settings, and metrics, we demonstrate that SSS outperforms existing methods in terms of quality and parameter efficiency, e.g. achieving matching or better quality with similar numbers of components, and obtaining comparable results while reducing the component number by as much as 82%.
[ { "version": "v1", "created": "Thu, 13 Mar 2025 08:20:54 GMT" }, { "version": "v2", "created": "Sat, 15 Mar 2025 13:33:29 GMT" }, { "version": "v3", "created": "Fri, 21 Mar 2025 02:26:08 GMT" } ]
2025-03-24T00:00:00
[ [ "Zhu", "Jialin", "" ], [ "Yue", "Jiangbei", "" ], [ "He", "Feixiang", "" ], [ "Wang", "He", "" ] ]
TITLE: 3D Student Splatting and Scooping ABSTRACT: Recently, 3D Gaussian Splatting (3DGS) has provided a new framework for novel view synthesis and sparked a new wave of research in neural rendering and related applications. As 3DGS is becoming a foundational component of many models, any improvement on 3DGS itself can bring huge benefits. To this end, we aim to improve the fundamental paradigm and formulation of 3DGS. We argue that as an unnormalized mixture model, it needs to be neither Gaussians nor splatting. We subsequently propose a new mixture model consisting of flexible Student's t distributions, with both positive (splatting) and negative (scooping) densities. We name our model Student Splatting and Scooping, or SSS. While providing better expressivity, SSS also poses new challenges in learning. Therefore, we also propose a new principled sampling approach for optimization. Through exhaustive evaluation and comparison across multiple datasets, settings, and metrics, we demonstrate that SSS outperforms existing methods in terms of quality and parameter efficiency, e.g. achieving matching or better quality with similar numbers of components, and obtaining comparable results while reducing the component number by as much as 82%.
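As a concrete picture of the signed mixture, the sketch below evaluates an unnormalized 2D mixture of Student's t components in which positive weights splat density and negative weights scoop it away. The clamp at zero and the toy component parameters are assumptions for illustration; the paper's rendering pipeline and sampling-based optimization are not reproduced here.

import numpy as np
from scipy.special import gammaln

def student_t_pdf(x, mu, cov, nu):
    """Multivariate Student's t density; x may be a grid of points."""
    d = mu.shape[0]
    diff = x - mu
    maha = np.einsum('...i,ij,...j->...', diff, np.linalg.inv(cov), diff)
    log_norm = (gammaln((nu + d) / 2) - gammaln(nu / 2)
                - 0.5 * (d * np.log(nu * np.pi) + np.log(np.linalg.det(cov))))
    return np.exp(log_norm - 0.5 * (nu + d) * np.log1p(maha / nu))

def sss_density(x, components):
    """Unnormalized signed mixture: positive weights splat, negative scoop."""
    total = np.zeros(x.shape[:-1])
    for w, mu, cov, nu in components:
        total += w * student_t_pdf(x, mu, cov, nu)
    return np.maximum(total, 0.0)  # clamp: an assumption for this sketch

components = [
    (+1.0, np.zeros(2), np.eye(2), 3.0),                 # splat
    (-0.4, np.array([0.5, 0.0]), 0.2 * np.eye(2), 3.0),  # scoop carves a dent
]
grid = np.stack(np.meshgrid(np.linspace(-3, 3, 64), np.linspace(-3, 3, 64)), axis=-1)
density = sss_density(grid, components)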
2503.10705
Weiran Huang
Haoyuan Gao, Zicong Zhang, Yuqi Wei, Linglan Zhao, Guilin Li, Yexin Li, Linghe Kong, Weiran Huang
Enhanced Continual Learning of Vision-Language Models with Model Fusion
Accepted by ICLR 2025 workshop
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision-Language Models (VLMs) represent a breakthrough in artificial intelligence by integrating visual and textual modalities to achieve impressive zero-shot capabilities. However, VLMs are susceptible to catastrophic forgetting when sequentially fine-tuned on multiple downstream tasks. Existing continual learning methods for VLMs often rely heavily on additional reference datasets, compromise zero-shot performance, or are limited to parameter-efficient fine-tuning scenarios. In this paper, we propose Continual Decoupling-Unifying (ConDU), a novel approach that introduces model fusion into continual learning for VLMs. ConDU maintains a unified model along with task triggers and prototype sets, employing an iterative process of decoupling task-specific models for previous tasks and unifying them with the model for the newly learned task. Additionally, we introduce an inference strategy for zero-shot scenarios by aggregating predictions from multiple decoupled task-specific models. Extensive experiments across various settings show that ConDU achieves up to a 2\% improvement in average performance across all seen tasks compared to state-of-the-art baselines, while also enhancing zero-shot capabilities relative to the original VLM.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 15:48:13 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 09:15:37 GMT" } ]
2025-03-24T00:00:00
[ [ "Gao", "Haoyuan", "" ], [ "Zhang", "Zicong", "" ], [ "Wei", "Yuqi", "" ], [ "Zhao", "Linglan", "" ], [ "Li", "Guilin", "" ], [ "Li", "Yexin", "" ], [ "Kong", "Linghe", "" ], [ "Huang", "Weiran", "" ] ]
TITLE: Enhanced Continual Learning of Vision-Language Models with Model Fusion ABSTRACT: Vision-Language Models (VLMs) represent a breakthrough in artificial intelligence by integrating visual and textual modalities to achieve impressive zero-shot capabilities. However, VLMs are susceptible to catastrophic forgetting when sequentially fine-tuned on multiple downstream tasks. Existing continual learning methods for VLMs often rely heavily on additional reference datasets, compromise zero-shot performance, or are limited to parameter-efficient fine-tuning scenarios. In this paper, we propose Continual Decoupling-Unifying (ConDU), a novel approach that introduces model fusion into continual learning for VLMs. ConDU maintains a unified model along with task triggers and prototype sets, employing an iterative process of decoupling task-specific models for previous tasks and unifying them with the model for the newly learned task. Additionally, we introduce an inference strategy for zero-shot scenarios by aggregating predictions from multiple decoupled task-specific models. Extensive experiments across various settings show that ConDU achieves up to a 2\% improvement in average performance across all seen tasks compared to state-of-the-art baselines, while also enhancing zero-shot capabilities relative to the original VLM.
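The decoupling-unifying cycle can be reduced to arithmetic on parameter deltas. The toy below unifies task models by summing their deltas from a shared base, then recovers one task model exactly by subtracting the others' deltas. ConDU's actual iterative procedure with task triggers and prototype sets is richer, so treat this only as a sketch of the fusion idea.

import numpy as np

def unify(base, task_models):
    """Fuse task-specific models by summing their deltas from the base."""
    return {k: base[k] + sum(m[k] - base[k] for m in task_models)
            for k in base}

def decouple(base, unified, other_task_models):
    """Recover one task-specific model by removing the other tasks' deltas."""
    return {k: unified[k] - sum(m[k] - base[k] for m in other_task_models)
            for k in unified}

rng = np.random.default_rng(0)
base = {"w": rng.normal(size=(4, 4))}
task_a = {"w": base["w"] + 0.1 * rng.normal(size=(4, 4))}
task_b = {"w": base["w"] + 0.1 * rng.normal(size=(4, 4))}

merged = unify(base, [task_a, task_b])
recovered_a = decouple(base, merged, [task_b])
assert np.allclose(recovered_a["w"], task_a["w"])  # exact round trip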
2503.10838
Russell Scheinberg
So Young Lee, Russell Scheinberg, Amber Shore, Ameeta Agrawal
Who Relies More on World Knowledge and Bias for Syntactic Ambiguity Resolution: Humans or LLMs?
Accepted at NAACL 2025 main
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This study explores how recent large language models (LLMs) navigate relative clause attachment ambiguity and use world knowledge biases for disambiguation in six typologically diverse languages: English, Chinese, Japanese, Korean, Russian, and Spanish. We describe the process of creating a novel dataset -- MultiWho -- for fine-grained evaluation of relative clause attachment preferences in ambiguous and unambiguous contexts. Our experiments with three LLMs indicate that, contrary to humans, LLMs consistently exhibit a preference for local attachment, displaying limited responsiveness to syntactic variations or language-specific attachment patterns. Although LLMs performed well in unambiguous cases, they rigidly prioritized world knowledge biases, lacking the flexibility of human language processing. These findings highlight the need for more diverse, pragmatically nuanced multilingual training to improve LLMs' handling of complex structures and human-like comprehension.
[ { "version": "v1", "created": "Thu, 13 Mar 2025 19:44:15 GMT" }, { "version": "v2", "created": "Thu, 20 Mar 2025 19:35:30 GMT" } ]
2025-03-24T00:00:00
[ [ "Lee", "So Young", "" ], [ "Scheinberg", "Russell", "" ], [ "Shore", "Amber", "" ], [ "Agrawal", "Ameeta", "" ] ]
TITLE: Who Relies More on World Knowledge and Bias for Syntactic Ambiguity Resolution: Humans or LLMs? ABSTRACT: This study explores how recent large language models (LLMs) navigate relative clause attachment ambiguity and use world knowledge biases for disambiguation in six typologically diverse languages: English, Chinese, Japanese, Korean, Russian, and Spanish. We describe the process of creating a novel dataset -- MultiWho -- for fine-grained evaluation of relative clause attachment preferences in ambiguous and unambiguous contexts. Our experiments with three LLMs indicate that, contrary to humans, LLMs consistently exhibit a preference for local attachment, displaying limited responsiveness to syntactic variations or language-specific attachment patterns. Although LLMs performed well in unambiguous cases, they rigidly prioritized world knowledge biases, lacking the flexibility of human language processing. These findings highlight the need for more diverse, pragmatically nuanced multilingual training to improve LLMs' handling of complex structures and human-like comprehension.
2503.11919
Jeonghwan Park
Jeonghwan Park, Kang Li, Huiyu Zhou
k-fold Subsampling based Sequential Backward Feature Elimination
8 pages
International Conference on Pattern Recognition Applications and Methods, 2016
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present a new wrapper feature selection algorithm for human detection. This algorithm is a hybrid feature selection approach combining the benefits of filter and wrapper methods. It allows the selection of an optimal feature vector that well represents the shapes of the subjects in the images. In detail, the proposed feature selection algorithm adopts the k-fold subsampling and sequential backward elimination approach, while the standard linear support vector machine (SVM) is used as the classifier for human detection. We apply the proposed algorithm to the publicly accessible INRIA and ETH pedestrian full image datasets with the PASCAL VOC evaluation criteria. Compared to other state-of-the-art algorithms, our feature-selection-based approach can improve the detection speed of the SVM classifier by over 50% with up to 2% better detection accuracy. Our algorithm also outperforms the equivalent systems introduced in the deformable part model approach with around 9% improvement in the detection accuracy.
[ { "version": "v1", "created": "Fri, 14 Mar 2025 23:10:08 GMT" } ]
2025-03-24T00:00:00
[ [ "Park", "Jeonghwan", "" ], [ "Li", "Kang", "" ], [ "Zhou", "Huiyu", "" ] ]
TITLE: k-fold Subsampling based Sequential Backward Feature Elimination ABSTRACT: We present a new wrapper feature selection algorithm for human detection. This algorithm is a hybrid feature selection approach combining the benefits of filter and wrapper methods. It allows the selection of an optimal feature vector that well represents the shapes of the subjects in the images. In detail, the proposed feature selection algorithm adopts the k-fold subsampling and sequential backward elimination approach, while the standard linear support vector machine (SVM) is used as the classifier for human detection. We apply the proposed algorithm to the publicly accessible INRIA and ETH pedestrian full image datasets with the PASCAL VOC evaluation criteria. Compared to other state-of-the-art algorithms, our feature-selection-based approach can improve the detection speed of the SVM classifier by over 50% with up to 2% better detection accuracy. Our algorithm also outperforms the equivalent systems introduced in the deformable part model approach with around 9% improvement in the detection accuracy.
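A generic rendering of the idea follows: greedy backward elimination scored by k-fold cross-validation of a linear SVM, dropping at each step the feature whose removal costs the least accuracy. The scoring and stopping rule here are plain-vanilla assumptions, not the exact subsampling variant the paper evaluates.

import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

def sbfe(X, y, n_keep, k=5, seed=0):
    """Sequential backward feature elimination with k-fold CV of a linear SVM."""
    kept = list(range(X.shape[1]))
    cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    while len(kept) > n_keep:
        scores = []
        for f in kept:
            trial = [g for g in kept if g != f]
            acc = cross_val_score(LinearSVC(dual=False), X[:, trial], y, cv=cv).mean()
            scores.append((acc, f))
        _, cheapest = max(scores)  # removing this feature costs the least accuracy
        kept.remove(cheapest)
    return kept

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # only features 0 and 3 matter
print(sbfe(X, y, n_keep=2))  # expected to retain features 0 and 3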
2503.11921
Atoosa Malemir Chegini
Atoosa Malemir Chegini, Keivan Rezaei, Hamid Eghbalzadeh, Soheil Feizi
RePanda: Pandas-powered Tabular Verification and Reasoning
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Fact-checking tabular data is essential for ensuring the accuracy of structured information. However, existing methods often rely on black-box models with opaque reasoning. We introduce RePanda, a structured fact verification approach that translates claims into executable pandas queries, enabling interpretable and verifiable reasoning. To train RePanda, we construct PanTabFact, a structured dataset derived from the TabFact train set, where claims are paired with executable queries generated using DeepSeek-Chat and refined through automated error correction. Fine-tuning DeepSeek-coder-7B-instruct-v1.5 on PanTabFact, RePanda achieves 84.09% accuracy on the TabFact test set. To evaluate Out-of-Distribution (OOD) generalization, we interpret question-answer pairs from WikiTableQuestions as factual claims and refer to this dataset as WikiFact. Without additional fine-tuning, RePanda achieves 84.72% accuracy on WikiFact, significantly outperforming all other baselines and demonstrating strong OOD robustness. Notably, these results closely match the zero-shot performance of DeepSeek-Chat (671B), indicating that our fine-tuning approach effectively distills structured reasoning from a much larger model into a compact, locally executable 7B model. Beyond fact verification, RePanda extends to tabular question answering by generating executable queries that retrieve precise answers. To support this, we introduce PanWiki, a dataset mapping WikiTableQuestions to pandas queries. Fine-tuning on PanWiki, RePanda achieves 75.1% accuracy in direct answer retrieval. These results highlight the effectiveness of structured execution-based reasoning for tabular verification and question answering. We have publicly released the dataset on Hugging Face at datasets/AtoosaChegini/PanTabFact.
[ { "version": "v1", "created": "Fri, 14 Mar 2025 23:12:36 GMT" }, { "version": "v2", "created": "Thu, 20 Mar 2025 19:10:27 GMT" } ]
2025-03-24T00:00:00
[ [ "Chegini", "Atoosa Malemir", "" ], [ "Rezaei", "Keivan", "" ], [ "Eghbalzadeh", "Hamid", "" ], [ "Feizi", "Soheil", "" ] ]
TITLE: RePanda: Pandas-powered Tabular Verification and Reasoning ABSTRACT: Fact-checking tabular data is essential for ensuring the accuracy of structured information. However, existing methods often rely on black-box models with opaque reasoning. We introduce RePanda, a structured fact verification approach that translates claims into executable pandas queries, enabling interpretable and verifiable reasoning. To train RePanda, we construct PanTabFact, a structured dataset derived from the TabFact train set, where claims are paired with executable queries generated using DeepSeek-Chat and refined through automated error correction. Fine-tuning DeepSeek-coder-7B-instruct-v1.5 on PanTabFact, RePanda achieves 84.09% accuracy on the TabFact test set. To evaluate Out-of-Distribution (OOD) generalization, we interpret question-answer pairs from WikiTableQuestions as factual claims and refer to this dataset as WikiFact. Without additional fine-tuning, RePanda achieves 84.72% accuracy on WikiFact, significantly outperforming all other baselines and demonstrating strong OOD robustness. Notably, these results closely match the zero-shot performance of DeepSeek-Chat (671B), indicating that our fine-tuning approach effectively distills structured reasoning from a much larger model into a compact, locally executable 7B model. Beyond fact verification, RePanda extends to tabular question answering by generating executable queries that retrieve precise answers. To support this, we introduce PanWiki, a dataset mapping WikiTableQuestions to pandas queries. Fine-tuning on PanWiki, RePanda achieves 75.1% accuracy in direct answer retrieval. These results highlight the effectiveness of structured execution-based reasoning for tabular verification and question answering. We have publicly released the dataset on Hugging Face at datasets/AtoosaChegini/PanTabFact.
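To make the claim-to-query idea concrete, here is a hand-written example of what an executable pandas verification program looksks like in spirit; the table, the claim, and the query string are all illustrative, not outputs of the fine-tuned model.

import pandas as pd

# A table and a claim of the kind found in TabFact.
table = pd.DataFrame({
    "team": ["arsenal", "chelsea", "liverpool"],
    "wins": [23, 21, 19],
    "losses": [5, 7, 9],
})
claim = "arsenal recorded more than 20 wins"

# The kind of executable verification program RePanda is trained to emit
# (this particular query string is hand-written for illustration).
query = "team == 'arsenal' and wins > 20"
verdict = not table.query(query).empty
print(verdict)  # True: the claim is supported by the table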
2503.12261
Rajasekar Gnana Praveen
R. Gnana Praveen, Jahangir Alam, Eric Charton
United we stand, Divided we fall: Handling Weak Complementary Relationships for Audio-Visual Emotion Recognition in Valence-Arousal Space
Achieved 2nd place in the valence-arousal challenge; submission to the CVPR 2025 Workshop
null
null
null
cs.CV cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Audio and visual modalities are two predominant contact-free channels in videos, which are often expected to carry a complementary relationship with each other. However, they may not always complement each other, resulting in poor audio-visual feature representations. In this paper, we introduce Gated Recursive Joint Cross Attention (GRJCA), which uses a gating mechanism that can adaptively choose the most relevant features to effectively capture the synergistic relationships across audio and visual modalities. Specifically, we improve the performance of Recursive Joint Cross-Attention (RJCA) by introducing a gating mechanism to control the flow of information between the input features and the attended features of multiple iterations, depending on the strength of their complementary relationship. For instance, if the modalities exhibit strong complementary relationships, the gating mechanism emphasizes the cross-attended features; otherwise, it emphasizes the non-attended features. To further improve the performance of the system, we also explore a hierarchical gating approach by introducing a gating mechanism at every iteration, followed by high-level gating across the gated outputs of each iteration. The proposed approach improves the performance of the RJCA model by adding more flexibility to deal with weak complementary relationships across audio and visual modalities. Extensive experiments are conducted on the challenging Affwild2 dataset to demonstrate the robustness of the proposed approach. By effectively handling the weak complementary relationships across the audio and visual modalities, the proposed model achieves a Concordance Correlation Coefficient (CCC) of 0.561 (0.623) and 0.620 (0.660) for valence and arousal respectively on the test set (validation set).
[ { "version": "v1", "created": "Sat, 15 Mar 2025 21:03:20 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 16:51:33 GMT" } ]
2025-03-24T00:00:00
[ [ "Praveen", "R. Gnana", "" ], [ "Alam", "Jahangir", "" ], [ "Charton", "Eric", "" ] ]
TITLE: United we stand, Divided we fall: Handling Weak Complementary Relationships for Audio-Visual Emotion Recognition in Valence-Arousal Space ABSTRACT: Audio and visual modalities are two predominant contact-free channels in videos, which are often expected to carry a complementary relationship with each other. However, they may not always complement each other, resulting in poor audio-visual feature representations. In this paper, we introduce Gated Recursive Joint Cross Attention (GRJCA), which uses a gating mechanism that can adaptively choose the most relevant features to effectively capture the synergistic relationships across audio and visual modalities. Specifically, we improve the performance of Recursive Joint Cross-Attention (RJCA) by introducing a gating mechanism to control the flow of information between the input features and the attended features of multiple iterations, depending on the strength of their complementary relationship. For instance, if the modalities exhibit strong complementary relationships, the gating mechanism emphasizes the cross-attended features; otherwise, it emphasizes the non-attended features. To further improve the performance of the system, we also explore a hierarchical gating approach by introducing a gating mechanism at every iteration, followed by high-level gating across the gated outputs of each iteration. The proposed approach improves the performance of the RJCA model by adding more flexibility to deal with weak complementary relationships across audio and visual modalities. Extensive experiments are conducted on the challenging Affwild2 dataset to demonstrate the robustness of the proposed approach. By effectively handling the weak complementary relationships across the audio and visual modalities, the proposed model achieves a Concordance Correlation Coefficient (CCC) of 0.561 (0.623) and 0.620 (0.660) for valence and arousal respectively on the test set (validation set).
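A minimal sketch of the gating step, assuming a per-dimension sigmoid gate computed from both streams (the paper may parameterize the gate differently): the gate blends cross-attended features with the raw input features, leaning toward the input when the complementary relationship is weak.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(x_input, x_attended, W_g, b_g):
    """Blend attended and input features with a learned per-dimension gate."""
    g = sigmoid(np.concatenate([x_input, x_attended], axis=-1) @ W_g + b_g)
    return g * x_attended + (1.0 - g) * x_input

rng = np.random.default_rng(0)
d = 8
x_in = rng.normal(size=(4, d))   # e.g. audio features over 4 time steps
x_att = rng.normal(size=(4, d))  # same features after joint cross-attention
W_g = 0.1 * rng.normal(size=(2 * d, d))
b_g = np.zeros(d)
fused = gated_fusion(x_in, x_att, W_g, b_g)  # shape (4, d)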
2503.12760
Brian Cho
Brian Cho, Ana-Roxana Pop, Ariel Evnine, Nathan Kallus
SNPL: Simultaneous Policy Learning and Evaluation for Safe Multi-Objective Policy Improvement
null
null
null
null
stat.ML cs.LG econ.EM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To design effective digital interventions, experimenters face the challenge of learning decision policies that balance multiple objectives using offline data. Often, they aim to develop policies that maximize goal outcomes, while ensuring there are no undesirable changes in guardrail outcomes. To provide credible recommendations, experimenters must not only identify policies that satisfy the desired changes in goal and guardrail outcomes, but also offer probabilistic guarantees about the changes these policies induce. In practice, however, policy classes are often large, and digital experiments tend to produce datasets with small effect sizes relative to noise. In this setting, standard approaches such as data splitting or multiple testing often result in unstable policy selection and/or insufficient statistical power. In this paper, we provide safe noisy policy learning (SNPL), a novel approach that leverages the concept of algorithmic stability to address these challenges. Our method enables policy learning while simultaneously providing high-confidence guarantees using the entire dataset, avoiding the need for data-splitting. We present finite-sample and asymptotic versions of our algorithm that ensure the recommended policy satisfies high-probability guarantees for avoiding guardrail regressions and/or achieving goal outcome improvements. We test both variants of our approach empirically on a real-world application of personalizing SMS delivery. Our results on real-world data suggest that our approach offers dramatic improvements in settings with large policy classes and low signal-to-noise across both finite-sample and asymptotic safety guarantees, offering up to 300\% improvements in detection rates and 150\% improvements in policy gains at significantly smaller sample sizes.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 02:53:53 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 17:38:14 GMT" } ]
2025-03-24T00:00:00
[ [ "Cho", "Brian", "" ], [ "Pop", "Ana-Roxana", "" ], [ "Evnine", "Ariel", "" ], [ "Kallus", "Nathan", "" ] ]
TITLE: SNPL: Simultaneous Policy Learning and Evaluation for Safe Multi-Objective Policy Improvement ABSTRACT: To design effective digital interventions, experimenters face the challenge of learning decision policies that balance multiple objectives using offline data. Often, they aim to develop policies that maximize goal outcomes, while ensuring there are no undesirable changes in guardrail outcomes. To provide credible recommendations, experimenters must not only identify policies that satisfy the desired changes in goal and guardrail outcomes, but also offer probabilistic guarantees about the changes these policies induce. In practice, however, policy classes are often large, and digital experiments tend to produce datasets with small effect sizes relative to noise. In this setting, standard approaches such as data splitting or multiple testing often result in unstable policy selection and/or insufficient statistical power. In this paper, we provide safe noisy policy learning (SNPL), a novel approach that leverages the concept of algorithmic stability to address these challenges. Our method enables policy learning while simultaneously providing high-confidence guarantees using the entire dataset, avoiding the need for data-splitting. We present finite-sample and asymptotic versions of our algorithm that ensure the recommended policy satisfies high-probability guarantees for avoiding guardrail regressions and/or achieving goal outcome improvements. We test both variants of our approach empirically on a real-world application of personalizing SMS delivery. Our results on real-world data suggest that our approach offers dramatic improvements in settings with large policy classes and low signal-to-noise across both finite-sample and asymptotic safety guarantees, offering up to 300\% improvements in detection rates and 150\% improvements in policy gains at significantly smaller sample sizes.
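For flavor only, the snippet below shows the simplest kind of high-confidence guardrail certificate: a one-sided t-interval on per-user guardrail effects. SNPL's actual guarantees come from algorithmic stability and reuse the full dataset for both learning and evaluation, which this naive check does not capture; the names and thresholds are illustrative.

import numpy as np
from scipy import stats

def passes_guardrail(effects, alpha=0.05, min_effect=0.0):
    """One-sided check that a policy does not regress a guardrail metric:
    the lower confidence bound on the mean per-user effect must clear
    `min_effect` with probability 1 - alpha."""
    n = len(effects)
    mean = effects.mean()
    se = effects.std(ddof=1) / np.sqrt(n)
    lower = mean - stats.t.ppf(1 - alpha, df=n - 1) * se
    return lower >= min_effect

rng = np.random.default_rng(0)
guardrail_effects = rng.normal(loc=0.02, scale=0.5, size=5000)  # synthetic
print(passes_guardrail(guardrail_effects))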
2503.13938
Qingyao Xu
Qingyao Xu, Siheng Chen, Guang Chen, Yanfeng Wang, Ya Zhang
ChatBEV: A Visual Language Model that Understands BEV Maps
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Traffic scene understanding is essential for intelligent transportation systems and autonomous driving, ensuring safe and efficient vehicle operation. While recent advancements in VLMs have shown promise for holistic scene understanding, the application of VLMs to traffic scenarios, particularly using BEV maps, remains underexplored. Existing methods often suffer from limited task design and narrow data coverage, hindering comprehensive scene understanding. To address these challenges, we introduce ChatBEV-QA, a novel BEV VQA benchmark containing over 137k questions, designed to encompass a wide range of scene understanding tasks, including global scene understanding, vehicle-lane interactions, and vehicle-vehicle interactions. This benchmark is constructed using a novel data collection pipeline that generates scalable and informative VQA data for BEV maps. We further fine-tune a specialized vision-language model, ChatBEV, enabling it to interpret diverse question prompts and extract relevant context-aware information from BEV maps. Additionally, we propose a language-driven traffic scene generation pipeline, where ChatBEV facilitates map understanding and text-aligned navigation guidance, significantly enhancing the generation of realistic and consistent traffic scenarios. The dataset, code and the fine-tuned model will be released.
[ { "version": "v1", "created": "Tue, 18 Mar 2025 06:12:38 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 02:17:52 GMT" } ]
2025-03-24T00:00:00
[ [ "Xu", "Qingyao", "" ], [ "Chen", "Siheng", "" ], [ "Chen", "Guang", "" ], [ "Wang", "Yanfeng", "" ], [ "Zhang", "Ya", "" ] ]
TITLE: ChatBEV: A Visual Language Model that Understands BEV Maps ABSTRACT: Traffic scene understanding is essential for intelligent transportation systems and autonomous driving, ensuring safe and efficient vehicle operation. While recent advancements in VLMs have shown promise for holistic scene understanding, the application of VLMs to traffic scenarios, particularly using BEV maps, remains underexplored. Existing methods often suffer from limited task design and narrow data coverage, hindering comprehensive scene understanding. To address these challenges, we introduce ChatBEV-QA, a novel BEV VQA benchmark containing over 137k questions, designed to encompass a wide range of scene understanding tasks, including global scene understanding, vehicle-lane interactions, and vehicle-vehicle interactions. This benchmark is constructed using a novel data collection pipeline that generates scalable and informative VQA data for BEV maps. We further fine-tune a specialized vision-language model, ChatBEV, enabling it to interpret diverse question prompts and extract relevant context-aware information from BEV maps. Additionally, we propose a language-driven traffic scene generation pipeline, where ChatBEV facilitates map understanding and text-aligned navigation guidance, significantly enhancing the generation of realistic and consistent traffic scenarios. The dataset, code and the fine-tuned model will be released.
2503.14275
Jiang Qin
Jiang Qin, Senmao Li, Alexandra Gomez-Villa, Shiqi Yang, Yaxing Wang, Kai Wang, Joost van de Weijer
Free-Lunch Color-Texture Disentanglement for Stylized Image Generation
Code will be available at https://deepffff.github.io/sadis.github.io/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in Text-to-Image (T2I) diffusion models have transformed image generation, enabling significant progress in stylized generation using only a few style reference images. However, current diffusion-based methods struggle with fine-grained style customization due to challenges in controlling multiple style attributes, such as color and texture. This paper introduces the first tuning-free approach to achieve free-lunch color-texture disentanglement in stylized T2I generation, addressing the need for independently controlled style elements for the Disentangled Stylized Image Generation (DisIG) problem. Our approach leverages the Image-Prompt Additivity property in the CLIP image embedding space to develop techniques for separating and extracting Color-Texture Embeddings (CTE) from individual color and texture reference images. To ensure that the color palette of the generated image aligns closely with the color reference, we apply a whitening and coloring transformation to enhance color consistency. Additionally, to prevent texture loss due to the signal-leak bias inherent in diffusion training, we introduce a noise term that preserves textural fidelity during the Regularized Whitening and Coloring Transformation (RegWCT). Through these methods, our Style Attributes Disentanglement approach (SADis) delivers a more precise and customizable solution for stylized image generation. Experiments on images from the WikiArt and StyleDrop datasets demonstrate that, both qualitatively and quantitatively, SADis surpasses state-of-the-art stylization methods in the DisIG task. Code will be released at https://deepffff.github.io/sadis.github.io/.
[ { "version": "v1", "created": "Tue, 18 Mar 2025 14:10:43 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 08:42:51 GMT" } ]
2025-03-24T00:00:00
[ [ "Qin", "Jiang", "" ], [ "Li", "Senmao", "" ], [ "Gomez-Villa", "Alexandra", "" ], [ "Yang", "Shiqi", "" ], [ "Wang", "Yaxing", "" ], [ "Wang", "Kai", "" ], [ "van de Weijer", "Joost", "" ] ]
TITLE: Free-Lunch Color-Texture Disentanglement for Stylized Image Generation ABSTRACT: Recent advances in Text-to-Image (T2I) diffusion models have transformed image generation, enabling significant progress in stylized generation using only a few style reference images. However, current diffusion-based methods struggle with fine-grained style customization due to challenges in controlling multiple style attributes, such as color and texture. This paper introduces the first tuning-free approach to achieve free-lunch color-texture disentanglement in stylized T2I generation, addressing the need for independently controlled style elements for the Disentangled Stylized Image Generation (DisIG) problem. Our approach leverages the Image-Prompt Additivity property in the CLIP image embedding space to develop techniques for separating and extracting Color-Texture Embeddings (CTE) from individual color and texture reference images. To ensure that the color palette of the generated image aligns closely with the color reference, we apply a whitening and coloring transformation to enhance color consistency. Additionally, to prevent texture loss due to the signal-leak bias inherent in diffusion training, we introduce a noise term that preserves textural fidelity during the Regularized Whitening and Coloring Transformation (RegWCT). Through these methods, our Style Attributes Disentanglement approach (SADis) delivers a more precise and customizable solution for stylized image generation. Experiments on images from the WikiArt and StyleDrop datasets demonstrate that, both qualitatively and quantitatively, SADis surpasses state-of-the-art stylization methods in the DisIG task. Code will be released at https://deepffff.github.io/sadis.github.io/.
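The whitening-and-coloring step is standard enough to sketch: map the content features to identity covariance, then recolor them with the style statistics. The sketch below operates on rows of generic feature vectors and omits the regularizing noise term that the paper's RegWCT adds against signal-leak bias.

import numpy as np

def whiten_color(content, style, eps=1e-5):
    """Give `content` rows the mean and covariance of `style` rows."""
    mu_c, mu_s = content.mean(0), style.mean(0)
    cov_c = np.cov(content, rowvar=False) + eps * np.eye(content.shape[1])
    cov_s = np.cov(style, rowvar=False) + eps * np.eye(style.shape[1])

    # Matrix square roots via eigendecomposition (covariances are symmetric PSD).
    ec, Ec = np.linalg.eigh(cov_c)
    es, Es = np.linalg.eigh(cov_s)
    whiten = Ec @ np.diag(ec ** -0.5) @ Ec.T
    color = Es @ np.diag(es ** 0.5) @ Es.T

    return (content - mu_c) @ whiten @ color + mu_s

rng = np.random.default_rng(0)
content_feats = rng.normal(size=(256, 16))            # e.g. rows of embeddings
style_feats = 2.0 * rng.normal(size=(256, 16)) + 1.0
out = whiten_color(content_feats, style_feats)
print(np.allclose(out.mean(0), style_feats.mean(0), atol=1e-6))  # True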
2503.14878
Murtaza Zohair
Murtaza Zohair, Vidushi Sharma, Eduardo A. Soares, Khanh Nguyen, Maxwell Giammona, Linda Sundberg, Andy Tek, Emilio A. V. Vital, Young-Hye La
Chemical Foundation Model Guided Design of High Ionic Conductivity Electrolyte Formulations
null
null
null
null
cond-mat.mtrl-sci physics.chem-ph
http://creativecommons.org/licenses/by/4.0/
Designing optimal formulations is a major challenge in developing electrolytes for the next generation of rechargeable batteries due to the vast combinatorial design space and complex interplay between multiple constituents. Machine learning (ML) offers a powerful tool to uncover underlying chemical design rules and accelerate the process of formulation discovery. In this work, we present an approach to design new formulations that can achieve target performance, using a generalizable chemical foundation model. The chemical foundation model is fine-tuned on an experimental dataset of 13,666 ionic conductivity values curated from the lithium-ion battery literature. The fine-tuned model is used to discover 7 novel high conductivity electrolyte formulations through generative screening, improving the conductivity of LiFSI and LiDFOB based electrolytes by 82% and 172%, respectively. These findings highlight a generalizable workflow that is highly adaptable to the discovery of chemical mixtures with tailored properties to address challenges in energy storage and beyond.
[ { "version": "v1", "created": "Wed, 19 Mar 2025 04:14:19 GMT" }, { "version": "v2", "created": "Thu, 20 Mar 2025 18:50:35 GMT" } ]
2025-03-24T00:00:00
[ [ "Zohair", "Murtaza", "" ], [ "Sharma", "Vidushi", "" ], [ "Soares", "Eduardo A.", "" ], [ "Nguyen", "Khanh", "" ], [ "Giammona", "Maxwell", "" ], [ "Sundberg", "Linda", "" ], [ "Tek", "Andy", "" ], [ "Vital", "Emilio A. V.", "" ], [ "La", "Young-Hye", "" ] ]
TITLE: Chemical Foundation Model Guided Design of High Ionic Conductivity Electrolyte Formulations ABSTRACT: Designing optimal formulations is a major challenge in developing electrolytes for the next generation of rechargeable batteries due to the vast combinatorial design space and complex interplay between multiple constituents. Machine learning (ML) offers a powerful tool to uncover underlying chemical design rules and accelerate the process of formulation discovery. In this work, we present an approach to design new formulations that can achieve target performance, using a generalizable chemical foundation model. The chemical foundation model is fine-tuned on an experimental dataset of 13,666 ionic conductivity values curated from the lithium-ion battery literature. The fine-tuned model is used to discover 7 novel high conductivity electrolyte formulations through generative screening, improving the conductivity of LiFSI and LiDFOB based electrolytes by 82% and 172%, respectively. These findings highlight a generalizable workflow that is highly adaptable to the discovery of chemical mixtures with tailored properties to address challenges in energy storage and beyond.
2503.14897
Vaibhav Rathore
Vaibhav Rathore, Shubhranil B, Saikat Dutta, Sarthak Mehrotra, Zsolt Kira, Biplab Banerjee
When Domain Generalization meets Generalized Category Discovery: An Adaptive Task-Arithmetic Driven Approach
Accepted at CVPR 2025 (Main Conference)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Generalized Category Discovery (GCD) clusters base and novel classes in a target domain using supervision from a source domain with only base classes. Current methods often falter with distribution shifts and typically require access to target data during training, which can sometimes be impractical. To address this issue, we introduce the novel paradigm of Domain Generalization in GCD (DG-GCD), where only source data is available for training, while the target domain, with a distinct data distribution, remains unseen until inference. To this end, our solution, DG2CD-Net, aims to construct a domain-independent, discriminative embedding space for GCD. The core innovation is an episodic training strategy that enhances cross-domain generalization by adapting a base model on tasks derived from source and synthetic domains generated by a foundation model. Each episode focuses on a cross-domain GCD task, diversifying task setups over episodes and combining open-set domain adaptation with a novel margin loss and representation learning for optimizing the feature space progressively. To capture the effects of fine-tuning on the base model, we extend task arithmetic by adaptively weighting the local task vectors concerning the fine-tuned models based on their GCD performance on a validation distribution. This episodic update mechanism boosts the adaptability of the base model to unseen targets. Experiments across three datasets confirm that DG2CD-Net outperforms existing GCD methods customized for DG-GCD.
[ { "version": "v1", "created": "Wed, 19 Mar 2025 04:48:16 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 14:15:36 GMT" } ]
2025-03-24T00:00:00
[ [ "Rathore", "Vaibhav", "" ], [ "B", "Shubhranil", "" ], [ "Dutta", "Saikat", "" ], [ "Mehrotra", "Sarthak", "" ], [ "Kira", "Zsolt", "" ], [ "Banerjee", "Biplab", "" ] ]
TITLE: When Domain Generalization meets Generalized Category Discovery: An Adaptive Task-Arithmetic Driven Approach ABSTRACT: Generalized Category Discovery (GCD) clusters base and novel classes in a target domain using supervision from a source domain with only base classes. Current methods often falter with distribution shifts and typically require access to target data during training, which can sometimes be impractical. To address this issue, we introduce the novel paradigm of Domain Generalization in GCD (DG-GCD), where only source data is available for training, while the target domain, with a distinct data distribution, remains unseen until inference. To this end, our solution, DG2CD-Net, aims to construct a domain-independent, discriminative embedding space for GCD. The core innovation is an episodic training strategy that enhances cross-domain generalization by adapting a base model on tasks derived from source and synthetic domains generated by a foundation model. Each episode focuses on a cross-domain GCD task, diversifying task setups over episodes and combining open-set domain adaptation with a novel margin loss and representation learning for optimizing the feature space progressively. To capture the effects of fine-tuning on the base model, we extend task arithmetic by adaptively weighting the local task vectors concerning the fine-tuned models based on their GCD performance on a validation distribution. This episodic update mechanism boosts the adaptability of the base model to unseen targets. Experiments across three datasets confirm that DG2CD-Net outperforms existing GCD methods customized for DG-GCD.
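The adaptive weighting of local task vectors can be pictured as a score-weighted merge. In this sketch the weights come from a softmax over validation scores; the softmax form and the temperature are assumptions standing in for whatever weighting the paper actually fits.

import numpy as np

def softmax(x, temperature=1.0):
    z = np.asarray(x) / temperature
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def adaptive_task_arithmetic(base, task_vectors, val_scores, temperature=0.1):
    """Merge per-episode task vectors into the base weights, weighting each
    by its (softmax-normalized) validation score."""
    w = softmax(val_scores, temperature)
    return {k: base[k] + sum(wi * tv[k] for wi, tv in zip(w, task_vectors))
            for k in base}

rng = np.random.default_rng(0)
base = {"w": rng.normal(size=(4, 4))}
task_vecs = [{"w": 0.05 * rng.normal(size=(4, 4))} for _ in range(3)]
val_scores = [0.61, 0.48, 0.72]  # hypothetical per-episode clustering accuracies
merged = adaptive_task_arithmetic(base, task_vecs, val_scores)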
2503.15300
Weixiao Gao
Weixiao Gao, Liangliang Nan, Hugo Ledoux
SUM Parts: Benchmarking Part-Level Semantic Segmentation of Urban Meshes
CVPR 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Semantic segmentation in urban scene analysis has mainly focused on images or point clouds, while textured meshes - offering richer spatial representation - remain underexplored. This paper introduces SUM Parts, the first large-scale dataset for urban textured meshes with part-level semantic labels, covering about 2.5 km2 with 21 classes. The dataset was created using our own annotation tool, which supports both face- and texture-based annotations with efficient interactive selection. We also provide a comprehensive evaluation of 3D semantic segmentation and interactive annotation methods on this dataset. Our project page is available at https://tudelft3d.github.io/SUMParts/.
[ { "version": "v1", "created": "Wed, 19 Mar 2025 15:22:23 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 13:58:31 GMT" } ]
2025-03-24T00:00:00
[ [ "Gao", "Weixiao", "" ], [ "Nan", "Liangliang", "" ], [ "Ledoux", "Hugo", "" ] ]
TITLE: SUM Parts: Benchmarking Part-Level Semantic Segmentation of Urban Meshes ABSTRACT: Semantic segmentation in urban scene analysis has mainly focused on images or point clouds, while textured meshes - offering richer spatial representation - remain underexplored. This paper introduces SUM Parts, the first large-scale dataset for urban textured meshes with part-level semantic labels, covering about 2.5 km2 with 21 classes. The dataset was created using our own annotation tool, which supports both face- and texture-based annotations with efficient interactive selection. We also provide a comprehensive evaluation of 3D semantic segmentation and interactive annotation methods on this dataset. Our project page is available at https://tudelft3d.github.io/SUMParts/.
2503.15463
Jianan Li
Jia-Nan Li, Jian Guan, Songhao Wu, Wei Wu, Rui Yan
From 1,000,000 Users to Every User: Scaling Up Personalized Preference for User-level Alignment
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Large language models (LLMs) have traditionally been aligned through one-size-fits-all approaches that assume uniform human preferences, fundamentally overlooking the diversity in user values and needs. This paper introduces a comprehensive framework for scalable personalized alignment of LLMs. We establish a systematic preference space characterizing psychological and behavioral dimensions, alongside diverse persona representations for robust preference inference in real-world scenarios. Building upon this foundation, we introduce \textsc{AlignX}, a large-scale dataset of over 1.3 million personalized preference examples, and develop two complementary alignment approaches: \textit{in-context alignment} directly conditioning on persona representations and \textit{preference-bridged alignment} modeling intermediate preference distributions. Extensive experiments demonstrate substantial improvements over existing methods, with an average 17.06\% accuracy gain across four benchmarks while exhibiting a strong adaptation capability to novel preferences, robustness to limited user data, and precise preference controllability. These results validate our framework's effectiveness, advancing toward truly user-adaptive AI systems.
[ { "version": "v1", "created": "Wed, 19 Mar 2025 17:41:46 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 10:33:21 GMT" } ]
2025-03-24T00:00:00
[ [ "Li", "Jia-Nan", "" ], [ "Guan", "Jian", "" ], [ "Wu", "Songhao", "" ], [ "Wu", "Wei", "" ], [ "Yan", "Rui", "" ] ]
TITLE: From 1,000,000 Users to Every User: Scaling Up Personalized Preference for User-level Alignment ABSTRACT: Large language models (LLMs) have traditionally been aligned through one-size-fits-all approaches that assume uniform human preferences, fundamentally overlooking the diversity in user values and needs. This paper introduces a comprehensive framework for scalable personalized alignment of LLMs. We establish a systematic preference space characterizing psychological and behavioral dimensions, alongside diverse persona representations for robust preference inference in real-world scenarios. Building upon this foundation, we introduce \textsc{AlignX}, a large-scale dataset of over 1.3 million personalized preference examples, and develop two complementary alignment approaches: \textit{in-context alignment} directly conditioning on persona representations and \textit{preference-bridged alignment} modeling intermediate preference distributions. Extensive experiments demonstrate substantial improvements over existing methods, with an average 17.06\% accuracy gain across four benchmarks while exhibiting a strong adaptation capability to novel preferences, robustness to limited user data, and precise preference controllability. These results validate our framework's effectiveness, advancing toward truly user-adaptive AI systems.
2503.15868
Soumitri Chattopadhyay
Debabrata Mandal and Soumitri Chattopadhyay and Guansen Tong and Praneeth Chakravarthula
UniCoRN: Latent Diffusion-based Unified Controllable Image Restoration Network across Multiple Degradations
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Image restoration is essential for enhancing degraded images across computer vision tasks. However, most existing methods address only a single type of degradation (e.g., blur, noise, or haze) at a time, limiting their real-world applicability where multiple degradations often occur simultaneously. In this paper, we propose UniCoRN, a unified image restoration approach capable of handling multiple degradation types simultaneously using a multi-head diffusion model. Specifically, we uncover the potential of low-level visual cues extracted from images in guiding a controllable diffusion model for real-world image restoration and we design a multi-head control network adaptable via a mixture-of-experts strategy. We train our model without any prior assumption of specific degradations, through a smartly designed curriculum learning recipe. Additionally, we also introduce MetaRestore, a metalens imaging benchmark containing images with multiple degradations and artifacts. Extensive evaluations on several challenging datasets, including our benchmark, demonstrate that our method achieves significant performance gains and can robustly restore images with severe degradations. Project page: https://codejaeger.github.io/unicorn-gh
[ { "version": "v1", "created": "Thu, 20 Mar 2025 05:42:13 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 15:24:45 GMT" } ]
2025-03-24T00:00:00
[ [ "Mandal", "Debabrata", "" ], [ "Chattopadhyay", "Soumitri", "" ], [ "Tong", "Guansen", "" ], [ "Chakravarthula", "Praneeth", "" ] ]
TITLE: UniCoRN: Latent Diffusion-based Unified Controllable Image Restoration Network across Multiple Degradations ABSTRACT: Image restoration is essential for enhancing degraded images across computer vision tasks. However, most existing methods address only a single type of degradation (e.g., blur, noise, or haze) at a time, limiting their real-world applicability where multiple degradations often occur simultaneously. In this paper, we propose UniCoRN, a unified image restoration approach capable of handling multiple degradation types simultaneously using a multi-head diffusion model. Specifically, we uncover the potential of low-level visual cues extracted from images in guiding a controllable diffusion model for real-world image restoration and we design a multi-head control network adaptable via a mixture-of-experts strategy. We train our model without any prior assumption of specific degradations, through a smartly designed curriculum learning recipe. Additionally, we also introduce MetaRestore, a metalens imaging benchmark containing images with multiple degradations and artifacts. Extensive evaluations on several challenging datasets, including our benchmark, demonstrate that our method achieves significant performance gains and can robustly restore images with severe degradations. Project page: https://codejaeger.github.io/unicorn-gh
2503.15879
DongGeon Lee
DongGeon Lee, Ahjeong Park, Hyeri Lee, Hyeonseo Nam, Yunho Maeng
Typed-RAG: Type-aware Multi-Aspect Decomposition for Non-Factoid Question Answering
Accepted to NAACL 2025 SRW
null
null
null
cs.CL cs.IR
http://creativecommons.org/licenses/by/4.0/
Non-factoid question-answering (NFQA) poses a significant challenge due to its open-ended nature, diverse intents, and the need for multi-aspect reasoning, which renders conventional factoid QA approaches, including retrieval-augmented generation (RAG), inadequate. Unlike factoid questions, non-factoid questions (NFQs) lack definitive answers and require synthesizing information from multiple sources across various reasoning dimensions. To address these limitations, we introduce Typed-RAG, a type-aware multi-aspect decomposition framework within the RAG paradigm for NFQA. Typed-RAG classifies NFQs into distinct types -- such as debate, experience, and comparison -- and applies aspect-based decomposition to refine retrieval and generation strategies. By decomposing multi-aspect NFQs into single-aspect sub-queries and aggregating the results, Typed-RAG generates more informative and contextually relevant responses. To evaluate Typed-RAG, we introduce Wiki-NFQA, a benchmark dataset covering diverse NFQ types. Experimental results demonstrate that Typed-RAG outperforms baselines, thereby highlighting the importance of type-aware decomposition for effective retrieval and generation in NFQA. Our code and dataset are available at https://github.com/TeamNLP/Typed-RAG.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 06:04:12 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 05:50:37 GMT" } ]
2025-03-24T00:00:00
[ [ "Lee", "DongGeon", "" ], [ "Park", "Ahjeong", "" ], [ "Lee", "Hyeri", "" ], [ "Nam", "Hyeonseo", "" ], [ "Maeng", "Yunho", "" ] ]
TITLE: Typed-RAG: Type-aware Multi-Aspect Decomposition for Non-Factoid Question Answering ABSTRACT: Non-factoid question-answering (NFQA) poses a significant challenge due to its open-ended nature, diverse intents, and the need for multi-aspect reasoning, which renders conventional factoid QA approaches, including retrieval-augmented generation (RAG), inadequate. Unlike factoid questions, non-factoid questions (NFQs) lack definitive answers and require synthesizing information from multiple sources across various reasoning dimensions. To address these limitations, we introduce Typed-RAG, a type-aware multi-aspect decomposition framework within the RAG paradigm for NFQA. Typed-RAG classifies NFQs into distinct types -- such as debate, experience, and comparison -- and applies aspect-based decomposition to refine retrieval and generation strategies. By decomposing multi-aspect NFQs into single-aspect sub-queries and aggregating the results, Typed-RAG generates more informative and contextually relevant responses. To evaluate Typed-RAG, we introduce Wiki-NFQA, a benchmark dataset covering diverse NFQ types. Experimental results demonstrate that Typed-RAG outperforms baselines, thereby highlighting the importance of type-aware decomposition for effective retrieval and generation in NFQA. Our code and dataset are available at https://github.com/TeamNLP/Typed-RAG.
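The classify-decompose-aggregate flow can be written down as a small pipeline. Every function below (`classify_type`, `decompose`, `rag_answer`, `aggregate`) is a hypothetical stand-in: the real system prompts an LLM for type classification and aspect decomposition, whereas this sketch uses trivial string rules just to show the control flow.

NFQ_TYPES = ["debate", "experience", "comparison", "instruction", "reason"]

def classify_type(question: str) -> str:
    """Stand-in type classifier; the real system would prompt an LLM."""
    return "comparison" if " or " in question else "reason"

def decompose(question: str, qtype: str) -> list[str]:
    """Split a multi-aspect NFQ into single-aspect sub-queries."""
    if qtype == "comparison":
        left, right = question.split(" or ", 1)
        return [left.strip() + "?", right.strip().rstrip("?") + "?"]
    return [question]

def typed_rag(question: str, rag_answer, aggregate) -> str:
    qtype = classify_type(question)
    partials = [rag_answer(sub) for sub in decompose(question, qtype)]
    return aggregate(qtype, partials)

answer = typed_rag(
    "Is it better to learn piano as a child or as an adult?",
    rag_answer=lambda q: f"[retrieved evidence for: {q}]",
    aggregate=lambda t, parts: f"({t}) " + " vs. ".join(parts),
)
print(answer)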
2503.15886
Hui Liu
Hui Liu, Wenya Wang, Kecheng Chen, Jie Liu, Yibing Liu, Tiexin Qin, Peisong He, Xinghao Jiang, Haoliang Li
Enhancing Zero-Shot Image Recognition in Vision-Language Models through Human-like Concept Guidance
21 pages, 7 figures, 7 tables
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
In zero-shot image recognition tasks, humans demonstrate remarkable flexibility in classifying unseen categories by composing known simpler concepts. However, existing vision-language models (VLMs), despite achieving significant progress through large-scale natural language supervision, often underperform in real-world applications because of sub-optimal prompt engineering and the inability to adapt effectively to target classes. To address these issues, we propose a Concept-guided Human-like Bayesian Reasoning (CHBR) framework. Grounded in Bayes' theorem, CHBR models the concept used in human image recognition as latent variables and formulates this task by summing across potential concepts, weighted by a prior distribution and a likelihood function. To tackle the intractable computation over an infinite concept space, we introduce an importance sampling algorithm that iteratively prompts large language models (LLMs) to generate discriminative concepts, emphasizing inter-class differences. We further propose three heuristic approaches involving Average Likelihood, Confidence Likelihood, and Test Time Augmentation (TTA) Likelihood, which dynamically refine the combination of concepts based on the test image. Extensive evaluations across fifteen datasets demonstrate that CHBR consistently outperforms existing state-of-the-art zero-shot generalization methods.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 06:20:13 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 02:55:26 GMT" } ]
2025-03-24T00:00:00
[ [ "Liu", "Hui", "" ], [ "Wang", "Wenya", "" ], [ "Chen", "Kecheng", "" ], [ "Liu", "Jie", "" ], [ "Liu", "Yibing", "" ], [ "Qin", "Tiexin", "" ], [ "He", "Peisong", "" ], [ "Jiang", "Xinghao", "" ], [ "Li", "Haoliang", "" ] ]
TITLE: Enhancing Zero-Shot Image Recognition in Vision-Language Models through Human-like Concept Guidance ABSTRACT: In zero-shot image recognition tasks, humans demonstrate remarkable flexibility in classifying unseen categories by composing known simpler concepts. However, existing vision-language models (VLMs), despite achieving significant progress through large-scale natural language supervision, often underperform in real-world applications because of sub-optimal prompt engineering and the inability to adapt effectively to target classes. To address these issues, we propose a Concept-guided Human-like Bayesian Reasoning (CHBR) framework. Grounded in Bayes' theorem, CHBR models the concept used in human image recognition as latent variables and formulates this task by summing across potential concepts, weighted by a prior distribution and a likelihood function. To tackle the intractable computation over an infinite concept space, we introduce an importance sampling algorithm that iteratively prompts large language models (LLMs) to generate discriminative concepts, emphasizing inter-class differences. We further propose three heuristic approaches involving Average Likelihood, Confidence Likelihood, and Test Time Augmentation (TTA) Likelihood, which dynamically refine the combination of concepts based on the test image. Extensive evaluations across fifteen datasets demonstrate that CHBR consistently outperforms existing state-of-the-art zero-shot generalization methods.
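The Bayesian aggregation at the heart of CHBR, a prior-weighted sum over concepts of per-concept class likelihoods, fits in a few lines: p(y|x) is proportional to sum_c p(y|x,c) p(c). The prompt template, the softmax likelihood, and the placeholder `clip_score` function are all assumptions for illustration; the paper additionally draws the concepts by importance sampling from an LLM.

import numpy as np

def chbr_predict(image_emb, class_names, concepts, prior, clip_score):
    """Aggregate class posteriors over sampled concepts, weighted by a prior."""
    posterior = np.zeros(len(class_names))
    for concept, p_c in zip(concepts, prior):
        # Likelihood of each class given this concept: softmax over similarities
        # between the image and per-class text prompts mentioning the concept.
        sims = np.array([clip_score(image_emb, f"{y}, which has {concept}")
                         for y in class_names])
        p_y_given_xc = np.exp(sims) / np.exp(sims).sum()
        posterior += p_c * p_y_given_xc
    return posterior / posterior.sum()

# Toy stand-ins: a fixed embedding and a random placeholder similarity.
rng = np.random.default_rng(0)
fake_emb = rng.normal(size=8)
score = lambda emb, text: float(rng.normal())  # placeholder for CLIP similarity
probs = chbr_predict(fake_emb, ["cat", "dog"], ["whiskers", "floppy ears"],
                     prior=[0.5, 0.5], clip_score=score)
print(probs, probs.sum())  # a normalized two-class posterior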
2503.16047
Akinyemi Sadeeq Akintola
Bisola Faith Kayode, Akinyemi Sadeeq Akintola, Oluwole Fagbohun, Egonna Anaesiuba-Bristol, Onyekachukwu Ojumah, Oluwagbade Odimayo, Toyese Oloyede, Aniema Inyang, Teslim Kazeem, Habeeb Alli, Udodirim Ibem Offia, Prisca Chinazor Amajuoyi
Temporal-Spatial Attention Network (TSAN) for DoS Attack Detection in Network Traffic
19 pages, 5 figures
null
null
null
cs.CR cs.AI
http://creativecommons.org/licenses/by/4.0/
Denial-of-Service (DoS) attacks remain a critical threat to network security, disrupting services and causing significant economic losses. Traditional detection methods, including statistical and rule-based models, struggle to adapt to evolving attack patterns. To address this challenge, we propose a novel Temporal-Spatial Attention Network (TSAN) architecture for detecting Denial of Service (DoS) attacks in network traffic. By leveraging both temporal and spatial features of network traffic, our approach captures complex traffic patterns and anomalies that traditional methods might miss. The TSAN model incorporates transformer-based temporal encoding, convolutional spatial encoding, and a cross-attention mechanism to fuse these complementary feature spaces. Additionally, we employ multi-task learning with auxiliary tasks to enhance the model's robustness. Experimental results on the NSL-KDD dataset demonstrate that TSAN outperforms state-of-the-art models, achieving superior accuracy, precision, recall, and F1-score while maintaining computational efficiency for real-time deployment. The proposed architecture offers an optimal balance between detection accuracy and computational overhead, making it highly suitable for real-world network security applications.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 11:31:45 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 17:40:15 GMT" } ]
2025-03-24T00:00:00
[ [ "Kayode", "Bisola Faith", "" ], [ "Akintola", "Akinyemi Sadeeq", "" ], [ "Fagbohun", "Oluwole", "" ], [ "Anaesiuba-Bristol", "Egonna", "" ], [ "Ojumah", "Onyekachukwu", "" ], [ "Odimayo", "Oluwagbade", "" ], [ "Oloyede", "Toyese", "" ], [ "Inyang", "Aniema", "" ], [ "Kazeem", "Teslim", "" ], [ "Alli", "Habeeb", "" ], [ "Offia", "Udodirim Ibem", "" ], [ "Amajuoyi", "Prisca Chinazor", "" ] ]
TITLE: Temporal-Spatial Attention Network (TSAN) for DoS Attack Detection in Network Traffic ABSTRACT: Denial-of-Service (DoS) attacks remain a critical threat to network security, disrupting services and causing significant economic losses. Traditional detection methods, including statistical and rule-based models, struggle to adapt to evolving attack patterns. To address this challenge, we propose a novel Temporal-Spatial Attention Network (TSAN) architecture for detecting Denial of Service (DoS) attacks in network traffic. By leveraging both temporal and spatial features of network traffic, our approach captures complex traffic patterns and anomalies that traditional methods might miss. The TSAN model incorporates transformer-based temporal encoding, convolutional spatial encoding, and a cross-attention mechanism to fuse these complementary feature spaces. Additionally, we employ multi-task learning with auxiliary tasks to enhance the model's robustness. Experimental results on the NSL-KDD dataset demonstrate that TSAN outperforms state-of-the-art models, achieving superior accuracy, precision, recall, and F1-score while maintaining computational efficiency for real-time deployment. The proposed architecture offers an optimal balance between detection accuracy and computational overhead, making it highly suitable for real-world network security applications.
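The cross-attention fusion step reduces to scaled dot-product attention in which temporal tokens query spatial tokens. The sketch below omits the learned query/key/value projections of a full transformer layer; the token counts, feature dimension, and mean pooling are assumptions for illustration.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Scaled dot-product cross-attention: temporal tokens (queries) attend
    to spatial tokens (keys/values), fusing the two feature spaces."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores) @ keys_values

rng = np.random.default_rng(0)
temporal = rng.normal(size=(20, 32))  # e.g. transformer-encoded packet windows
spatial = rng.normal(size=(10, 32))   # e.g. CNN-encoded per-flow feature maps
fused = cross_attention(temporal, spatial)  # shape (20, 32)
pooled = fused.mean(axis=0)                 # ready for a DoS classification head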
2503.16252
Liwen Zhang
Zhaowei Liu, Xin Guo, Fangqi Lou, Lingfeng Zeng, Jinyi Niu, Zixuan Wang, Jiajie Xu, Weige Cai, Ziwei Yang, Xueqian Zhao, Chao Li, Sheng Xu, Dezhi Chen, Yun Chen, Zuo Bai and Liwen Zhang
Fin-R1: A Large Language Model for Financial Reasoning through Reinforcement Learning
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reasoning large language models are rapidly evolving across various domains. However, their capabilities in handling complex financial tasks still require in-depth exploration. In this paper, we introduce Fin-R1, a reasoning large language model specifically designed for the financial sector. Fin-R1 is built using a two-stage architecture, leveraging a financial reasoning dataset distilled and processed based on DeepSeek-R1. Through supervised fine-tuning (SFT) and reinforcement learning (RL) training, it demonstrates performance close to DeepSeek-R1 with a parameter size of 7 billion across a range of financial reasoning tasks. It achieves state-of-the-art (SOTA) performance on the FinQA and ConvFinQA tasks among the LLMs in our evaluation, surpassing larger models on other tasks as well. Fin-R1 showcases strong reasoning and decision-making capabilities, providing solutions to various problems encountered in the financial domain. Our code is available at https://github.com/SUFE-AIFLM-Lab/Fin-R1.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 15:46:18 GMT" }, { "version": "v2", "created": "Fri, 21 Mar 2025 01:57:58 GMT" } ]
2025-03-24T00:00:00
[ [ "Liu", "Zhaowei", "" ], [ "Guo", "Xin", "" ], [ "Lou", "Fangqi", "" ], [ "Zeng", "Lingfeng", "" ], [ "Niu", "Jinyi", "" ], [ "Wang", "Zixuan", "" ], [ "Xu", "Jiajie", "" ], [ "Cai", "Weige", "" ], [ "Yang", "Ziwei", "" ], [ "Zhao", "Xueqian", "" ], [ "Li", "Chao", "" ], [ "Xu", "Sheng", "" ], [ "Chen", "Dezhi", "" ], [ "Chen", "Yun", "" ], [ "Bai", "Zuo", "" ], [ "Zhang", "Liwen", "" ] ]
TITLE: Fin-R1: A Large Language Model for Financial Reasoning through Reinforcement Learning ABSTRACT: Reasoning large language models are rapidly evolving across various domains. However, their capabilities in handling complex financial tasks still require in-depth exploration. In this paper, we introduce Fin-R1, a reasoning large language model specifically designed for the financial sector. Fin-R1 is built using a two-stage architecture, leveraging a financial reasoning dataset distilled and processed based on DeepSeek-R1. Through supervised fine-tuning (SFT) and reinforcement learning (RL) training, it demonstrates performance close to DeepSeek-R1 with a parameter size of 7 billion across a range of financial reasoning tasks. It achieves state-of-the-art (SOTA) performance on the FinQA and ConvFinQA tasks among the LLMs in our evaluation, surpassing larger models on other tasks as well. Fin-R1 showcases strong reasoning and decision-making capabilities, providing solutions to various problems encountered in the financial domain. Our code is available at https://github.com/SUFE-AIFLM-Lab/Fin-R1.
2503.16454
Haidong Wang
Haidong Wang, Qia Shan, JianHua Zhang, PengFei Xiao, Ao Liu
An Audio-Visual Fusion Emotion Generation Model Based on Neuroanatomical Alignment
null
null
null
null
cs.HC cs.AI
http://creativecommons.org/licenses/by/4.0/
In the field of affective computing, traditional methods for generating emotions predominantly rely on deep learning techniques and large-scale emotion datasets. However, deep learning techniques are often complex and difficult to interpret, and standardized large-scale emotional datasets are difficult and costly to establish. To tackle these challenges, we introduce a novel framework named Audio-Visual Fusion for Brain-like Emotion Learning (AVF-BEL). In contrast to conventional brain-inspired emotion learning methods, this approach improves the audio-visual emotion fusion and generation model through the integration of modular components, thereby enabling more lightweight and interpretable emotion learning and generation processes. The framework simulates the integration of the visual, auditory, and emotional pathways of the brain, optimizes the fusion of emotional features across visual and auditory modalities, and improves upon the traditional Brain Emotional Learning (BEL) model. The experimental results indicate a significant improvement in the similarity of the audio-visual fusion emotion learning and generation model compared to the single-modality visual and auditory models. Ultimately, this aligns with the fundamental phenomenon of heightened emotion generation facilitated by the integrated impact of visual and auditory stimuli. This contribution not only enhances the interpretability and efficiency of affective intelligence but also provides new insights and pathways for advancing affective computing technology. Our source code can be accessed here: https://github.com/OpenHUTB/emotion.
[ { "version": "v1", "created": "Fri, 21 Feb 2025 14:26:58 GMT" } ]
2025-03-24T00:00:00
[ [ "Wang", "Haidong", "" ], [ "Shan", "Qia", "" ], [ "Zhang", "JianHua", "" ], [ "Xiao", "PengFei", "" ], [ "Liu", "Ao", "" ] ]
TITLE: An Audio-Visual Fusion Emotion Generation Model Based on Neuroanatomical Alignment ABSTRACT: In the field of affective computing, traditional methods for generating emotions predominantly rely on deep learning techniques and large-scale emotion datasets. However, deep learning techniques are often complex and difficult to interpret, and standardized large-scale emotional datasets are difficult and costly to establish. To tackle these challenges, we introduce a novel framework named Audio-Visual Fusion for Brain-like Emotion Learning (AVF-BEL). In contrast to conventional brain-inspired emotion learning methods, this approach improves the audio-visual emotion fusion and generation model through the integration of modular components, thereby enabling more lightweight and interpretable emotion learning and generation processes. The framework simulates the integration of the visual, auditory, and emotional pathways of the brain, optimizes the fusion of emotional features across visual and auditory modalities, and improves upon the traditional Brain Emotional Learning (BEL) model. The experimental results indicate a significant improvement in the similarity of the audio-visual fusion emotion learning and generation model compared to the single-modality visual and auditory models. Ultimately, this aligns with the fundamental phenomenon of heightened emotion generation facilitated by the integrated impact of visual and auditory stimuli. This contribution not only enhances the interpretability and efficiency of affective intelligence but also provides new insights and pathways for advancing affective computing technology. Our source code can be accessed here: https://github.com/OpenHUTB/emotion.
2503.16465
Pengzhou Cheng
Pengzhou Cheng, Zheng Wu, Zongru Wu, Aston Zhang, Zhuosheng Zhang, Gongshen Liu
OS-Kairos: Adaptive Interaction for MLLM-Powered GUI Agents
25 pages, 24 figures, 11 tables
null
null
null
cs.HC cs.AI
http://creativecommons.org/licenses/by/4.0/
Autonomous graphical user interface (GUI) agents powered by multimodal large language models have shown great promise. However, a critical yet underexplored issue persists: over-execution, where the agent executes tasks in a fully autonomous way, without adequately assessing its action confidence, which compromises adaptive human-agent collaboration. This poses substantial risks in complex scenarios, such as those involving ambiguous user instructions, unexpected interruptions, and environmental hijacks. To address the issue, we introduce OS-Kairos, an adaptive GUI agent capable of predicting confidence levels at each interaction step and efficiently deciding whether to act autonomously or seek human intervention. OS-Kairos is developed through two key mechanisms: (i) collaborative probing, which annotates confidence scores at each interaction step; and (ii) confidence-driven interaction, which leverages these confidence scores to enable adaptive interaction. Experimental results show that OS-Kairos substantially outperforms existing models on our curated dataset featuring complex scenarios, as well as on established benchmarks such as AITZ and Meta-GUI, with 24.59\%$\sim$87.29\% improvements in task success rate. OS-Kairos facilitates adaptive human-agent collaboration, prioritizing effectiveness, generality, scalability, and efficiency for real-world GUI interaction. The dataset and codes are available at https://github.com/Wuzheng02/OS-Kairos.
[ { "version": "v1", "created": "Wed, 26 Feb 2025 12:31:16 GMT" } ]
2025-03-24T00:00:00
[ [ "Cheng", "Pengzhou", "" ], [ "Wu", "Zheng", "" ], [ "Wu", "Zongru", "" ], [ "Zhang", "Aston", "" ], [ "Zhang", "Zhuosheng", "" ], [ "Liu", "Gongshen", "" ] ]
TITLE: OS-Kairos: Adaptive Interaction for MLLM-Powered GUI Agents ABSTRACT: Autonomous graphical user interface (GUI) agents powered by multimodal large language models have shown great promise. However, a critical yet underexplored issue persists: over-execution, where the agent executes tasks in a fully autonomous way, without adequately assessing its action confidence, which compromises adaptive human-agent collaboration. This poses substantial risks in complex scenarios, such as those involving ambiguous user instructions, unexpected interruptions, and environmental hijacks. To address the issue, we introduce OS-Kairos, an adaptive GUI agent capable of predicting confidence levels at each interaction step and efficiently deciding whether to act autonomously or seek human intervention. OS-Kairos is developed through two key mechanisms: (i) collaborative probing, which annotates confidence scores at each interaction step; and (ii) confidence-driven interaction, which leverages these confidence scores to enable adaptive interaction. Experimental results show that OS-Kairos substantially outperforms existing models on our curated dataset featuring complex scenarios, as well as on established benchmarks such as AITZ and Meta-GUI, with 24.59\%$\sim$87.29\% improvements in task success rate. OS-Kairos facilitates adaptive human-agent collaboration, prioritizing effectiveness, generality, scalability, and efficiency for real-world GUI interaction. The dataset and codes are available at https://github.com/Wuzheng02/OS-Kairos.
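The confidence-driven interaction mechanism lends itself to a simple gating loop. The sketch below is hypothetical: `agent.propose` and the threshold value stand in for whatever interface and calibration OS-Kairos actually uses.

```python
# Hypothetical sketch of confidence-driven interaction: act autonomously
# only when the per-step confidence clears a threshold, otherwise defer
# to the human. `agent` and its methods are placeholders.
TAU = 0.8  # assumed confidence threshold; would be tuned in practice

def run_step(agent, observation, instruction):
    action, confidence = agent.propose(observation, instruction)
    if confidence >= TAU:
        return action                       # autonomous execution
    # Low confidence: surface the proposal and ask for intervention.
    print(f"Agent unsure (p={confidence:.2f}) about action: {action}")
    reply = input("Approve, or type a corrected action: ").strip()
    return action if reply.lower() in ("", "y", "yes") else reply
```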
2503.16478
Rishabh Vishwakarma
Rishabh Vishwakarma, Caroline Brophy, and Catherine Hurley
PieGlyph: An R package for creating axis invariant pie-glyphs for 2d plots
PieGlyph is available on CRAN at https://CRAN.R-project.org/package=PieGlyph, while the development version can be downloaded from GitHub at https://github.com/rishvish/PieGlyph. Package vignettes are available at https://rishvish.github.io/PieGlyph/
null
null
null
cs.HC stat.AP
http://creativecommons.org/licenses/by-sa/4.0/
Effective visualisation of multidimensional data is crucial for generating insights. Glyph-based visualisations, which encode data dimensions onto multiple visual channels such as colour, shape, and size, provide an effective means of representing complex datasets. Pie-chart glyphs (pie-glyphs) are one such approach, where multiple data attributes are mapped to slices within a pie chart. This paper introduces the PieGlyph R package, which enables users to overlay any 2D plot with axis-invariant pie-glyphs, offering a compact and intuitive representation of multidimensional data. Unlike existing R packages such as scatterpie or ggforce, PieGlyph generates pie-glyphs independently of the plot axes by employing a nested coordinate system, ensuring they remain circular regardless of changes to the underlying coordinate system. This enhances interpretability, particularly when visualising spatial data, as users can select the most appropriate map projection without distorting the glyphs' shape. Pie-glyphs are also particularly well-suited for visualising compositional data, where there is a natural sum-to-one constraint on the data attributes. PieGlyph is developed under the Grammar of Graphics paradigm using the ggplot2 framework and supports the generation of interactive pie-glyphs through the ggiraph package. Designed to integrate seamlessly with all features and extensions offered by ggplot2 and ggiraph, PieGlyph provides users with full flexibility in customising every aspect of the visualisation. This paper outlines the conceptual framework of PieGlyph, compares it with existing alternatives, and demonstrates its applications through example visualisations.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 10:53:04 GMT" } ]
2025-03-24T00:00:00
[ [ "Vishwakarma", "Rishabh", "" ], [ "Brophy", "Caroline", "" ], [ "Hurley", "Catherine", "" ] ]
TITLE: PieGlyph: An R package for creating axis invariant pie-glyphs for 2d plots ABSTRACT: Effective visualisation of multidimensional data is crucial for generating insights. Glyph-based visualisations, which encode data dimensions onto multiple visual channels such as colour, shape, and size, provide an effective means of representing complex datasets. Pie-chart glyphs (pie-glyphs) are one such approach, where multiple data attributes are mapped to slices within a pie chart. This paper introduces the PieGlyph R package, which enables users to overlay any 2D plot with axis-invariant pie-glyphs, offering a compact and intuitive representation of multidimensional data. Unlike existing R packages such as scatterpie or ggforce, PieGlyph generates pie-glyphs independently of the plot axes by employing a nested coordinate system, ensuring they remain circular regardless of changes to the underlying coordinate system. This enhances interpretability, particularly when visualising spatial data, as users can select the most appropriate map projection without distorting the glyphs' shape. Pie-glyphs are also particularly well-suited for visualising compositional data, where there is a natural sum-to-one constraint on the data attributes. PieGlyph is developed under the Grammar of Graphics paradigm using the ggplot2 framework and supports the generation of interactive pie-glyphs through the ggiraph package. Designed to integrate seamlessly with all features and extensions offered by ggplot2 and ggiraph, PieGlyph provides users with full flexibility in customising every aspect of the visualisation. This paper outlines the conceptual framework of PieGlyph, compares it with existing alternatives, and demonstrates its applications through example visualisations.
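PieGlyph itself is an R/ggplot2 package; to keep all code examples in this document in one language, here is the underlying axis-invariance idea sketched in Python/matplotlib: pie wedges passed as scatter markers are rendered in screen space and therefore stay circular regardless of axis scaling. The data and sizes are made up.

```python
# Sketch of axis-invariant pie glyphs: wedge paths used as scatter
# *markers* live in screen space, so they stay circular however the
# axes are scaled. Toy data; not the PieGlyph package itself.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x, y = rng.random(8), rng.random(8) * 1000   # very different axis scales
fracs = [0.2, 0.5, 0.3]                      # one toy composition for all points

start = 0.0
for frac, color in zip(fracs, ["C0", "C1", "C2"]):
    theta = 2 * np.pi * np.linspace(start, start + frac, 30)
    # Wedge path: the centre plus an arc of the unit circle.
    verts = np.vstack([[0.0, 0.0],
                       np.column_stack([np.cos(theta), np.sin(theta)])])
    # Custom vertex markers are normalised to a unit cell, so scale the
    # marker size by the path's extent to keep slices proportionate.
    plt.scatter(x, y, marker=verts, s=np.abs(verts).max() ** 2 * 800,
                color=color, edgecolor="black")
    start += frac

plt.savefig("pie_glyphs.png")
```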
2503.16480
Ramit Debnath
Yara Kyrychenko, Jon Roozenbeek, Brandon Davidson, Sander van der Linden and Ramit Debnath
Human Preferences for Constructive Interactions in Language Model Alignment
1 Figure, 1 Table, 11 pages
null
null
null
cs.HC cs.AI cs.CL cs.CY
http://creativecommons.org/licenses/by/4.0/
As large language models (LLMs) enter the mainstream, aligning them to foster constructive dialogue rather than exacerbate societal divisions is critical. Using an individualized and multicultural alignment dataset of over 7,500 conversations of individuals from 74 countries engaging with 21 LLMs, we examined how linguistic attributes linked to constructive interactions are reflected in human preference data used for training AI. We found that users consistently preferred well-reasoned and nuanced responses while rejecting those high in personal storytelling. However, users who believed that AI should reflect their values tended to place less preference on reasoning in LLM responses and more on curiosity. Encouragingly, we observed that users could set the tone for how constructive their conversation would be, as LLMs mirrored linguistic attributes, including toxicity, in user queries.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 15:08:41 GMT" } ]
2025-03-24T00:00:00
[ [ "Kyrychenko", "Yara", "" ], [ "Roozenbeek", "Jon", "" ], [ "Davidson", "Brandon", "" ], [ "van der Linden", "Sander", "" ], [ "Debnath", "Ramit", "" ] ]
TITLE: Human Preferences for Constructive Interactions in Language Model Alignment ABSTRACT: As large language models (LLMs) enter the mainstream, aligning them to foster constructive dialogue rather than exacerbate societal divisions is critical. Using an individualized and multicultural alignment dataset of over 7,500 conversations of individuals from 74 countries engaging with 21 LLMs, we examined how linguistic attributes linked to constructive interactions are reflected in human preference data used for training AI. We found that users consistently preferred well-reasoned and nuanced responses while rejecting those high in personal storytelling. However, users who believed that AI should reflect their values tended to place less preference on reasoning in LLM responses and more on curiosity. Encouragingly, we observed that users could set the tone for how constructive their conversation would be, as LLMs mirrored linguistic attributes, including toxicity, in user queries.
2503.16481
Subham Agrawal
Subham Agrawal and Nico Ostermann-Myrau and Nils Dengler and Maren Bennewitz
Pedestrians and Robots: A Novel Dataset for Learning Distinct Social Navigation Forces
null
null
null
null
cs.HC cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasing use of robots in human-centric public spaces, such as shopping malls, sidewalks, and hospitals, requires an understanding of how pedestrians respond to their presence. However, existing research lacks comprehensive datasets that capture the full range of pedestrian behaviors, e.g., including avoidance, neutrality, and attraction in the presence of robots. Such datasets can be used to effectively learn models capable of accurately predicting diverse responses of pedestrians to robot presence, which are crucial for advancing robot navigation strategies and optimizing pedestrian-aware motion planning. In this paper, we address these challenges by collecting a novel dataset of pedestrian motion in two outdoor locations under three distinct conditions, i.e., no robot presence, a stationary robot, and a moving robot. Thus, unlike existing datasets, ours explicitly encapsulates variations in pedestrian behavior across the different robot conditions. Using our dataset, we propose a novel Neural Social Robot Force Model (NSRFM), an extension of the traditional Social Force Model that integrates neural networks and robot-induced forces to better predict pedestrian behavior in the presence of robots. We validate the NSRFM by comparing its generated trajectories on different real-world datasets. Furthermore, we implemented it in simulation to enable the learning and benchmarking of robot navigation strategies based on their impact on pedestrian movement. Our results demonstrate the model's effectiveness in replicating real-world pedestrian reactions and its utility in developing, evaluating, and benchmarking social robot navigation algorithms.
[ { "version": "v1", "created": "Wed, 5 Mar 2025 17:02:29 GMT" } ]
2025-03-24T00:00:00
[ [ "Agrawal", "Subham", "" ], [ "Ostermann-Myrau", "Nico", "" ], [ "Dengler", "Nils", "" ], [ "Bennewitz", "Maren", "" ] ]
TITLE: Pedestrians and Robots: A Novel Dataset for Learning Distinct Social Navigation Forces ABSTRACT: The increasing use of robots in human-centric public spaces, such as shopping malls, sidewalks, and hospitals, requires an understanding of how pedestrians respond to their presence. However, existing research lacks comprehensive datasets that capture the full range of pedestrian behaviors, e.g., including avoidance, neutrality, and attraction in the presence of robots. Such datasets can be used to effectively learn models capable of accurately predicting diverse responses of pedestrians to robot presence, which are crucial for advancing robot navigation strategies and optimizing pedestrian-aware motion planning. In this paper, we address these challenges by collecting a novel dataset of pedestrian motion in two outdoor locations under three distinct conditions, i.e., no robot presence, a stationary robot, and a moving robot. Thus, unlike existing datasets, ours explicitly encapsulates variations in pedestrian behavior across the different robot conditions. Using our dataset, we propose a novel Neural Social Robot Force Model (NSRFM), an extension of the traditional Social Force Model that integrates neural networks and robot-induced forces to better predict pedestrian behavior in the presence of robots. We validate the NSRFM by comparing its generated trajectories on different real-world datasets. Furthermore, we implemented it in simulation to enable the learning and benchmarking of robot navigation strategies based on their impact on pedestrian movement. Our results demonstrate the model's effectiveness in replicating real-world pedestrian reactions and its utility in developing, evaluating, and benchmarking social robot navigation algorithms.
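A hedged sketch of the modelling idea: a classic exponential social-force term plus a small learned network for the robot-induced force. The constants and network shape are illustrative assumptions, not the published NSRFM.

```python
# Minimal sketch of a neural social-force model: classic repulsive social
# force plus a learned robot-induced residual. Constants are assumed.
import numpy as np
import torch
import torch.nn as nn

A, B = 2.0, 0.3  # repulsion strength / range (typical SFM-style values)

def social_force(p_i, p_j):
    """Exponential repulsion of pedestrian i away from agent j (classic SFM)."""
    d = p_i - p_j
    dist = np.linalg.norm(d) + 1e-9
    return A * np.exp(-dist / B) * (d / dist)

# Learned residual: relative robot state -> extra force on the pedestrian.
robot_force_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))

def total_force(p_ped, p_others, p_robot, v_robot):
    f = sum(social_force(p_ped, p_o) for p_o in p_others)
    rel = np.concatenate([p_robot - p_ped, v_robot])   # 4-dim input
    f_robot = robot_force_net(torch.tensor(rel, dtype=torch.float32))
    return f + f_robot.detach().numpy()
```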
2503.16500
Jorge de Heuvel
Jorge de Heuvel, Daniel Marta, Simon Holk, Iolanda Leite, Maren Bennewitz
The Impact of VR and 2D Interfaces on Human Feedback in Preference-Based Robot Learning
null
null
null
null
cs.HC cs.RO
http://creativecommons.org/licenses/by/4.0/
Aligning robot navigation with human preferences is essential for ensuring comfortable and predictable robot movement in shared spaces, facilitating seamless human-robot coexistence. While preference-based learning methods, such as reinforcement learning from human feedback (RLHF), enable this alignment, the choice of the preference collection interface may influence the process. Traditional 2D interfaces provide structured views but lack spatial depth, whereas immersive VR offers richer perception, potentially affecting preference articulation. This study systematically examines how the interface modality impacts human preference collection and navigation policy alignment. We introduce a novel dataset of 2,325 human preference queries collected through both VR and 2D interfaces, revealing significant differences in user experience, preference consistency, and policy outcomes. Our findings highlight the trade-offs between immersion, perception, and preference reliability, emphasizing the importance of interface selection in preference-based robot learning. The dataset will be publicly released to support future research.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 21:02:47 GMT" } ]
2025-03-24T00:00:00
[ [ "de Heuvel", "Jorge", "" ], [ "Marta", "Daniel", "" ], [ "Holk", "Simon", "" ], [ "Leite", "Iolanda", "" ], [ "Bennewitz", "Maren", "" ] ]
TITLE: The Impact of VR and 2D Interfaces on Human Feedback in Preference-Based Robot Learning ABSTRACT: Aligning robot navigation with human preferences is essential for ensuring comfortable and predictable robot movement in shared spaces, facilitating seamless human-robot coexistence. While preference-based learning methods, such as reinforcement learning from human feedback (RLHF), enable this alignment, the choice of the preference collection interface may influence the process. Traditional 2D interfaces provide structured views but lack spatial depth, whereas immersive VR offers richer perception, potentially affecting preference articulation. This study systematically examines how the interface modality impacts human preference collection and navigation policy alignment. We introduce a novel dataset of 2,325 human preference queries collected through both VR and 2D interfaces, revealing significant differences in user experience, preference consistency, and policy outcomes. Our findings highlight the trade-offs between immersion, perception, and preference reliability, emphasizing the importance of interface selection in preference-based robot learning. The dataset will be publicly released to support future research.
2503.16505
Dimitrios Tsirmpas
Dimitris Tsirmpas and Ion Androutsopoulos and John Pavlopoulos
Scalable Evaluation of Online Moderation Strategies via Synthetic Simulations
25 pages, 6 tables, 9 figures
null
null
null
cs.HC cs.CL cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Despite the ever-growing importance of online moderation, there has been no large-scale study evaluating the effectiveness of alternative moderation strategies. This is largely due to the lack of appropriate datasets, and the difficulty of getting human discussants, moderators, and evaluators involved in multiple experiments. In this paper, we propose a methodology for leveraging synthetic experiments performed exclusively by Large Language Models (LLMs) to initially bypass the need for human participation in experiments involving online moderation. We evaluate six LLM moderation configurations: two currently used real-life moderation strategies (guidelines issued for human moderators for online moderation and real-life facilitation), two baseline strategies (guidelines elicited for LLM alignment work, and LLM moderation with minimal prompting), a baseline with no moderator at all, as well as our own proposed strategy inspired by a Reinforcement Learning (RL) formulation of the problem. We find that our own moderation strategy significantly outperforms established moderation guidelines, as well as out-of-the-box LLM moderation. We also find that smaller LLMs, with less intensive instruction-tuning, can create more varied discussions than larger models. In order to run these experiments, we create and release an efficient, purpose-built, open-source Python framework, dubbed "SynDisco", to easily simulate hundreds of discussions using LLM user-agents and moderators. Additionally, we release the Virtual Moderation Dataset (VMD), a large dataset of LLM-generated and LLM-annotated discussions, generated by three families of open-source LLMs, accompanied by an exploratory analysis of the dataset.
[ { "version": "v1", "created": "Thu, 13 Mar 2025 08:13:07 GMT" } ]
2025-03-24T00:00:00
[ [ "Tsirmpas", "Dimitris", "" ], [ "Androutsopoulos", "Ion", "" ], [ "Pavlopoulos", "John", "" ] ]
TITLE: Scalable Evaluation of Online Moderation Strategies via Synthetic Simulations ABSTRACT: Despite the ever-growing importance of online moderation, there has been no large-scale study evaluating the effectiveness of alternative moderation strategies. This is largely due to the lack of appropriate datasets, and the difficulty of getting human discussants, moderators, and evaluators involved in multiple experiments. In this paper, we propose a methodology for leveraging synthetic experiments performed exclusively by Large Language Models (LLMs) to initially bypass the need for human participation in experiments involving online moderation. We evaluate six LLM moderation configurations: two currently used real-life moderation strategies (guidelines issued for human moderators for online moderation and real-life facilitation), two baseline strategies (guidelines elicited for LLM alignment work, and LLM moderation with minimal prompting), a baseline with no moderator at all, as well as our own proposed strategy inspired by a Reinforcement Learning (RL) formulation of the problem. We find that our own moderation strategy significantly outperforms established moderation guidelines, as well as out-of-the-box LLM moderation. We also find that smaller LLMs, with less intensive instruction-tuning, can create more varied discussions than larger models. In order to run these experiments, we create and release an efficient, purpose-built, open-source Python framework, dubbed "SynDisco", to easily simulate hundreds of discussions using LLM user-agents and moderators. Additionally, we release the Virtual Moderation Dataset (VMD), a large dataset of LLM-generated and LLM-annotated discussions, generated by three families of open-source LLMs, accompanied by an exploratory analysis of the dataset.
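The simulation pattern the abstract describes, LLM user-agents taking turns while an LLM moderator optionally intervenes, can be sketched as below. Here `chat(system_prompt, history)` is a placeholder for any LLM completion call and is not SynDisco's actual API.

```python
# Sketch of an LLM-simulated moderated discussion loop; `chat` stands in
# for any LLM client and none of these names are SynDisco's API.
def simulate_discussion(chat, user_personas, moderator_prompt, topic, turns=10):
    history = [f"Topic: {topic}"]
    for t in range(turns):
        persona = user_personas[t % len(user_personas)]
        reply = chat(persona["system_prompt"], history)
        history.append(f"{persona['name']}: {reply}")
        # The moderator reads the transcript and may intervene; an empty
        # string is interpreted as staying silent this turn.
        action = chat(moderator_prompt, history)
        if action.strip():
            history.append(f"Moderator: {action}")
    return history
```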
2503.16511
Tingkai Liu
Tingkai Liu, Ari S. Benjamin, Anthony M. Zador
Token-Level Uncertainty-Aware Objective for Language Model Post-Training
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the current work, we connect token-level uncertainty in causal language modeling to two types of training objectives: 1) masked maximum likelihood (MLE) and 2) self-distillation. We show that masked MLE is effective in reducing epistemic uncertainty and serves as an effective token-level automatic curriculum learning technique. However, masked MLE is prone to overfitting and requires self-distillation regularization to improve or maintain performance on out-of-distribution tasks. We demonstrate significant performance gains via the proposed training objective - combined masked MLE and self-distillation - across multiple architectures (Gemma, LLaMA, Phi) and datasets (Alpaca, ShareGPT, GSM8K), mitigating overfitting while maintaining adaptability during post-training. Our findings suggest that uncertainty-aware training provides an effective mechanism for enhancing language model training.
[ { "version": "v1", "created": "Sat, 15 Mar 2025 00:32:14 GMT" } ]
2025-03-24T00:00:00
[ [ "Liu", "Tingkai", "" ], [ "Benjamin", "Ari S.", "" ], [ "Zador", "Anthony M.", "" ] ]
TITLE: Token-Level Uncertainty-Aware Objective for Language Model Post-Training ABSTRACT: In the current work, we connect token-level uncertainty in causal language modeling to two types of training objectives: 1) masked maximum likelihood (MLE) and 2) self-distillation. We show that masked MLE is effective in reducing epistemic uncertainty and serves as an effective token-level automatic curriculum learning technique. However, masked MLE is prone to overfitting and requires self-distillation regularization to improve or maintain performance on out-of-distribution tasks. We demonstrate significant performance gains via the proposed training objective - combined masked MLE and self-distillation - across multiple architectures (Gemma, LLaMA, Phi) and datasets (Alpaca, ShareGPT, GSM8K), mitigating overfitting while maintaining adaptability during post-training. Our findings suggest that uncertainty-aware training provides an effective mechanism for enhancing language model training.
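A minimal sketch of the combined objective, under the assumption that the mask keeps the highest-loss (most uncertain) tokens and that self-distillation is a KL term against a frozen reference copy of the model. The masking rule and the weighting are illustrative, not the paper's exact recipe.

```python
# Hedged sketch: token-level masked MLE plus a self-distillation KL term
# against a frozen reference. Masking rule and `lam` are assumptions.
import torch
import torch.nn.functional as F

def combined_loss(logits, ref_logits, labels, keep_frac=0.5, lam=0.1):
    # logits, ref_logits: (B, T, V); labels: (B, T)
    per_tok = F.cross_entropy(logits.transpose(1, 2), labels,
                              reduction="none")                # (B, T)
    # Mask: train only on the most uncertain (highest-loss) tokens.
    k = max(1, int(keep_frac * per_tok.numel()))
    thresh = per_tok.flatten().kthvalue(per_tok.numel() - k + 1).values
    mask = (per_tok >= thresh).float()
    mle = (per_tok * mask).sum() / mask.sum().clamp(min=1)
    # Self-distillation: KL divergence pulling the student's distribution
    # toward the frozen reference, computed over all tokens.
    kl = F.kl_div(F.log_softmax(logits, dim=-1),
                  F.log_softmax(ref_logits.detach(), dim=-1),
                  log_target=True, reduction="batchmean")
    return mle + lam * kl
```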
2503.16520
Ji-Eun Han
Ji-Eun Han, Yoonseok Heo
Not All Personas Are Worth It: Culture-Reflective Persona Data Augmentation
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Incorporating personas into conversational AI models is crucial for achieving authentic and engaging interactions. However, the cultural diversity and adaptability of existing persona datasets are often overlooked, reducing their efficacy in building culturally aware AI systems. To address this issue, we propose a two-step pipeline for generating culture-specific personas and introduce KoPersona, a dataset comprising 200,000 personas designed to capture Korean cultural values, behaviors, and social nuances. A comprehensive evaluation through various metrics validates the quality of KoPersona and its relevance to Korean culture. This work not only contributes to persona-based research, but also establishes a scalable approach for creating culturally relevant personas adaptable to various languages and cultural contexts.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 01:23:57 GMT" } ]
2025-03-24T00:00:00
[ [ "Han", "Ji-Eun", "" ], [ "Heo", "Yoonseok", "" ] ]
TITLE: Not All Personas Are Worth It: Culture-Reflective Persona Data Augmentation ABSTRACT: Incorporating personas into conversational AI models is crucial for achieving authentic and engaging interactions. However, the cultural diversity and adaptability of existing persona datasets are often overlooked, reducing their efficacy in building culturally aware AI systems. To address this issue, we propose a two-step pipeline for generating culture-specific personas and introduce KoPersona, a dataset comprising 200,000 personas designed to capture Korean cultural values, behaviors, and social nuances. A comprehensive evaluation through various metrics validates the quality of KoPersona and its relevance to Korean culture. This work not only contributes to persona-based research, but also establishes a scalable approach for creating culturally relevant personas adaptable to various languages and cultural contexts.
2503.16522
Yongjia Ma
Yongjia Ma, Donglin Di, Xuan Liu, Xiaokai Chen, Lei Fan, Wei Chen, Tonghua Su
Adams Bashforth Moulton Solver for Inversion and Editing in Rectified Flow
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Rectified flow models have achieved remarkable performance in image and video generation tasks. However, existing numerical solvers face a trade-off between fast sampling and high-accuracy solutions, limiting their effectiveness in downstream applications such as reconstruction and editing. To address this challenge, we propose leveraging the Adams-Bashforth-Moulton (ABM) predictor-corrector method to enhance the accuracy of ODE solving in rectified flow models. Specifically, we introduce ABM-Solver, which integrates a multi-step predictor-corrector approach to reduce local truncation errors and employs Adaptive Step Size Adjustment to improve sampling speed. Furthermore, to effectively preserve non-edited regions while facilitating semantic modifications, we introduce a Mask Guided Feature Injection module. We estimate self-similarity to generate a spatial mask that differentiates preserved regions from those available for editing. Extensive experiments on multiple high-resolution image datasets validate that ABM-Solver significantly improves inversion precision and editing quality, outperforming existing solvers without requiring additional training or optimization.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 02:17:33 GMT" } ]
2025-03-24T00:00:00
[ [ "Ma", "Yongjia", "" ], [ "Di", "Donglin", "" ], [ "Liu", "Xuan", "" ], [ "Chen", "Xiaokai", "" ], [ "Fan", "Lei", "" ], [ "Chen", "Wei", "" ], [ "Su", "Tonghua", "" ] ]
TITLE: Adams Bashforth Moulton Solver for Inversion and Editing in Rectified Flow ABSTRACT: Rectified flow models have achieved remarkable performance in image and video generation tasks. However, existing numerical solvers face a trade-off between fast sampling and high-accuracy solutions, limiting their effectiveness in downstream applications such as reconstruction and editing. To address this challenge, we propose leveraging the Adams-Bashforth-Moulton (ABM) predictor-corrector method to enhance the accuracy of ODE solving in rectified flow models. Specifically, we introduce ABM-Solver, which integrates a multi-step predictor-corrector approach to reduce local truncation errors and employs Adaptive Step Size Adjustment to improve sampling speed. Furthermore, to effectively preserve non-edited regions while facilitating semantic modifications, we introduce a Mask Guided Feature Injection module. We estimate self-similarity to generate a spatial mask that differentiates preserved regions from those available for editing. Extensive experiments on multiple high-resolution image datasets validate that ABM-Solver significantly improves inversion precision and editing quality, outperforming existing solvers without requiring additional training or optimization.
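The numerical core here is standard: an explicit Adams-Bashforth predictor followed by an implicit Adams-Moulton corrector. The second-order variant below shows one step for dx/dt = f(t, x); wiring it into a rectified-flow sampler and the paper's adaptive step-size rule are omitted.

```python
# One Adams-Bashforth(2) predictor / Adams-Moulton(2) corrector step for
# dx/dt = f(t, x). This is the textbook method, not ABM-Solver itself.
def abm2_step(f, t, x, f_prev, h):
    # Predictor (explicit AB2): x + h*(3/2 f_n - 1/2 f_{n-1}).
    f_cur = f(t, x)
    x_pred = x + h * (1.5 * f_cur - 0.5 * f_prev)
    # Corrector (implicit AM2 / trapezoidal rule, one fixed-point pass):
    # x + h/2 * (f_n + f(t+h, x_pred)).
    x_corr = x + h * 0.5 * (f_cur + f(t + h, x_pred))
    return x_corr, f_cur  # f_cur becomes f_prev for the next step
```

The gap between the predicted and corrected states also gives a cheap local error estimate, which is the usual basis for adaptive step-size adjustment.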
2503.16525
Huan Yang
Huan Yang, Renji Zhang and Deyu Zhang
KVShare: Semantic-Aware Key-Value Cache Sharing for Efficient Large Language Model Inference
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents KVShare, a multi-user Key-Value (KV) Cache sharing technology based on semantic similarity, designed to enhance the inference efficiency of Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs). Addressing the limitations of existing prefix caching (strict text prefix matching) and semantic caching (loss of response diversity), KVShare achieves fine-grained KV cache reuse through semantic alignment algorithms and differential editing operations. Experiments on real-world user conversation datasets demonstrate that KVShare improves KV cache hit rates by over 60%, while maintaining output quality comparable to full computation (no significant degradation in BLEU and Rouge-L metrics). This approach effectively reduces GPU resource consumption and is applicable to scenarios with repetitive queries, such as healthcare and education.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 16:43:35 GMT" } ]
2025-03-24T00:00:00
[ [ "Yang", "Huan", "" ], [ "Zhang", "Renji", "" ], [ "Zhang", "Deyu", "" ] ]
TITLE: KVShare: Semantic-Aware Key-Value Cache Sharing for Efficient Large Language Model Inference ABSTRACT: This paper presents KVShare, a multi-user Key-Value (KV) Cache sharing technology based on semantic similarity, designed to enhance the inference efficiency of Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs). Addressing the limitations of existing prefix caching (strict text prefix matching) and semantic caching (loss of response diversity), KVShare achieves fine-grained KV cache reuse through semantic alignment algorithms and differential editing operations. Experiments on real-world user conversation datasets demonstrate that KVShare improves KV cache hit rates by over 60%, while maintaining output quality comparable to full computation (no significant degradation in BLEU and Rouge-L metrics). This approach effectively reduces GPU resource consumption and is applicable to scenarios with repetitive queries, such as healthcare and education.
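A toy sketch of the cache-lookup side of the idea: reuse a stored KV cache when a new query's embedding is sufficiently similar to a cached query's. The `embed` function, threshold, and linear scan are assumptions; the paper's semantic alignment and differential editing steps are not shown.

```python
# Toy semantic KV-cache lookup by embedding similarity; all details here
# are illustrative assumptions, not KVShare's implementation.
import numpy as np

class SemanticKVCache:
    def __init__(self, embed, threshold=0.9):
        self.embed = embed              # text -> unit-norm embedding vector
        self.threshold = threshold      # assumed similarity cutoff
        self.entries = []               # list of (embedding, kv_cache) pairs

    def lookup(self, query):
        q = self.embed(query)
        best_kv, best_sim = None, -1.0
        for emb, kv in self.entries:
            sim = float(np.dot(q, emb))  # cosine similarity on unit vectors
            if sim > best_sim:
                best_kv, best_sim = kv, sim
        return best_kv if best_sim >= self.threshold else None

    def insert(self, query, kv_cache):
        self.entries.append((self.embed(query), kv_cache))
```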
2503.16527
Ang Li
Ang Li, Haozhe Chen, Hongseok Namkoong, Tianyi Peng
LLM Generated Persona is a Promise with a Catch
null
null
null
null
cs.CL cs.AI cs.CY cs.SI
http://creativecommons.org/licenses/by/4.0/
The use of large language models (LLMs) to simulate human behavior has gained significant attention, particularly through personas that approximate individual characteristics. Persona-based simulations hold promise for transforming disciplines that rely on population-level feedback, including social science, economic analysis, marketing research, and business operations. Traditional methods to collect realistic persona data face significant challenges. They are prohibitively expensive and logistically challenging due to privacy constraints, and often fail to capture multi-dimensional attributes, particularly subjective qualities. Consequently, synthetic persona generation with LLMs offers a scalable, cost-effective alternative. However, current approaches rely on ad hoc and heuristic generation techniques that do not guarantee methodological rigor or simulation precision, resulting in systematic biases in downstream tasks. Through extensive large-scale experiments including presidential election forecasts and general opinion surveys of the U.S. population, we reveal that these biases can lead to significant deviations from real-world outcomes. Our findings underscore the need to develop a rigorous science of persona generation and outline the methodological innovations, organizational and institutional support, and empirical foundations required to enhance the reliability and scalability of LLM-driven persona simulations. To support further research and development in this area, we have open-sourced approximately one million generated personas, available for public access and analysis at https://huggingface.co/datasets/Tianyi-Lab/Personas.
[ { "version": "v1", "created": "Tue, 18 Mar 2025 03:11:27 GMT" } ]
2025-03-24T00:00:00
[ [ "Li", "Ang", "" ], [ "Chen", "Haozhe", "" ], [ "Namkoong", "Hongseok", "" ], [ "Peng", "Tianyi", "" ] ]
TITLE: LLM Generated Persona is a Promise with a Catch ABSTRACT: The use of large language models (LLMs) to simulate human behavior has gained significant attention, particularly through personas that approximate individual characteristics. Persona-based simulations hold promise for transforming disciplines that rely on population-level feedback, including social science, economic analysis, marketing research, and business operations. Traditional methods to collect realistic persona data face significant challenges. They are prohibitively expensive and logistically challenging due to privacy constraints, and often fail to capture multi-dimensional attributes, particularly subjective qualities. Consequently, synthetic persona generation with LLMs offers a scalable, cost-effective alternative. However, current approaches rely on ad hoc and heuristic generation techniques that do not guarantee methodological rigor or simulation precision, resulting in systematic biases in downstream tasks. Through extensive large-scale experiments including presidential election forecasts and general opinion surveys of the U.S. population, we reveal that these biases can lead to significant deviations from real-world outcomes. Our findings underscore the need to develop a rigorous science of persona generation and outline the methodological innovations, organizational and institutional support, and empirical foundations required to enhance the reliability and scalability of LLM-driven persona simulations. To support further research and development in this area, we have open-sourced approximately one million generated personas, available for public access and analysis at https://huggingface.co/datasets/Tianyi-Lab/Personas.
2503.16530
Chengfeng Dou
Chengfeng Dou, Ying Zhang, Zhi Jin, Wenpin Jiao, Haiyan Zhao, Yongqiang Zhao, Zhengwei Tao
Enhancing LLM Generation with Knowledge Hypergraph for Evidence-Based Medicine
null
null
null
null
cs.CL cs.AI cs.IR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Evidence-based medicine (EBM) plays a crucial role in the application of large language models (LLMs) in healthcare, as it provides reliable support for medical decision-making processes. Although it benefits from current retrieval-augmented generation~(RAG) technologies, it still faces two significant challenges: the collection of dispersed evidence and the efficient organization of this evidence to support the complex queries necessary for EBM. To tackle these issues, we propose using LLMs to gather scattered evidence from multiple sources and present a knowledge hypergraph-based evidence management model to integrate this evidence while capturing intricate relationships. Furthermore, to better support complex queries, we have developed an Importance-Driven Evidence Prioritization (IDEP) algorithm that utilizes the LLM to generate multiple evidence features, each with an associated importance score, which are then used to rank the evidence and produce the final retrieval results. Experimental results from six datasets demonstrate that our approach outperforms existing RAG techniques in application domains of interest to EBM, such as medical quizzing, hallucination detection, and decision support. Testsets and the constructed knowledge graph can be accessed at https://drive.google.com/file/d/1WJ9QTokK3MdkjEmwuFQxwH96j_Byawj_/view?usp=drive_link.
[ { "version": "v1", "created": "Tue, 18 Mar 2025 09:17:31 GMT" } ]
2025-03-24T00:00:00
[ [ "Dou", "Chengfeng", "" ], [ "Zhang", "Ying", "" ], [ "Jin", "Zhi", "" ], [ "Jiao", "Wenpin", "" ], [ "Zhao", "Haiyan", "" ], [ "Zhao", "Yongqiang", "" ], [ "Tao", "Zhengwei", "" ] ]
TITLE: Enhancing LLM Generation with Knowledge Hypergraph for Evidence-Based Medicine ABSTRACT: Evidence-based medicine (EBM) plays a crucial role in the application of large language models (LLMs) in healthcare, as it provides reliable support for medical decision-making processes. Although it benefits from current retrieval-augmented generation~(RAG) technologies, it still faces two significant challenges: the collection of dispersed evidence and the efficient organization of this evidence to support the complex queries necessary for EBM. To tackle these issues, we propose using LLMs to gather scattered evidence from multiple sources and present a knowledge hypergraph-based evidence management model to integrate this evidence while capturing intricate relationships. Furthermore, to better support complex queries, we have developed an Importance-Driven Evidence Prioritization (IDEP) algorithm that utilizes the LLM to generate multiple evidence features, each with an associated importance score, which are then used to rank the evidence and produce the final retrieval results. Experimental results from six datasets demonstrate that our approach outperforms existing RAG techniques in application domains of interest to EBM, such as medical quizzing, hallucination detection, and decision support. Testsets and the constructed knowledge graph can be accessed at https://drive.google.com/file/d/1WJ9QTokK3MdkjEmwuFQxwH96j_Byawj_/view?usp=drive_link.
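The IDEP step can be sketched as a scoring loop, assuming a hypothetical `llm_features` helper that returns (feature, importance) pairs for each piece of evidence and a `relevance` function scoring a feature against the query; the real algorithm's details may differ.

```python
# Hedged sketch of importance-driven evidence prioritization: rank each
# piece of evidence by its importance-weighted relevance to the query.
# `llm_features` and `relevance` are hypothetical helpers.
def rank_evidence(query, evidence_list, llm_features, relevance, top_k=5):
    scored = []
    for ev in evidence_list:
        feats = llm_features(ev)   # [(feature_text, importance_score), ...]
        score = sum(w * relevance(query, feat) for feat, w in feats)
        scored.append((score, ev))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [ev for _, ev in scored[:top_k]]
```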
2503.16532
Meisam Jamshidi Seikavandi
Meisam Jamshidi Seikavandi, Jostein Fimland, Maria Barrett, Paolo Burelli
Modelling Emotions in Face-to-Face Setting: The Interplay of Eye-Tracking, Personality, and Temporal Dynamics
null
null
null
null
cs.HC cs.AI
http://creativecommons.org/licenses/by/4.0/
Accurate emotion recognition is pivotal for nuanced and engaging human-computer interactions, yet remains difficult to achieve, especially in dynamic, conversation-like settings. In this study, we showcase how integrating eye-tracking data, temporal dynamics, and personality traits can substantially enhance the detection of both perceived and felt emotions. Seventy-three participants viewed short, speech-containing videos from the CREMA-D dataset while eye-tracking signals (pupil size, fixation patterns) were recorded, alongside Big Five personality assessments and self-reported emotional states. Our neural network models combined these diverse inputs, including stimulus emotion labels for contextual cues, and yielded marked performance gains compared to the state of the art. Specifically, perceived valence predictions reached a macro F1-score of 0.76, and models incorporating personality traits and stimulus information demonstrated significant improvements in felt emotion accuracy. These results highlight the benefit of unifying physiological, individual and contextual factors to address the subjectivity and complexity of emotional expression. Beyond validating the role of user-specific data in capturing subtle internal states, our findings inform the design of future affective computing and human-agent systems, paving the way for more adaptive and cross-individual emotional intelligence in real-world interactions.
[ { "version": "v1", "created": "Tue, 18 Mar 2025 13:15:32 GMT" } ]
2025-03-24T00:00:00
[ [ "Seikavandi", "Meisam Jamshidi", "" ], [ "Fimland", "Jostein", "" ], [ "Barrett", "Maria", "" ], [ "Burelli", "Paolo", "" ] ]
TITLE: Modelling Emotions in Face-to-Face Setting: The Interplay of Eye-Tracking, Personality, and Temporal Dynamics ABSTRACT: Accurate emotion recognition is pivotal for nuanced and engaging human-computer interactions, yet remains difficult to achieve, especially in dynamic, conversation-like settings. In this study, we showcase how integrating eye-tracking data, temporal dynamics, and personality traits can substantially enhance the detection of both perceived and felt emotions. Seventy-three participants viewed short, speech-containing videos from the CREMA-D dataset while eye-tracking signals (pupil size, fixation patterns) were recorded, alongside Big Five personality assessments and self-reported emotional states. Our neural network models combined these diverse inputs, including stimulus emotion labels for contextual cues, and yielded marked performance gains compared to the state of the art. Specifically, perceived valence predictions reached a macro F1-score of 0.76, and models incorporating personality traits and stimulus information demonstrated significant improvements in felt emotion accuracy. These results highlight the benefit of unifying physiological, individual and contextual factors to address the subjectivity and complexity of emotional expression. Beyond validating the role of user-specific data in capturing subtle internal states, our findings inform the design of future affective computing and human-agent systems, paving the way for more adaptive and cross-individual emotional intelligence in real-world interactions.
2503.16538
Bastian P\"atzold
Bastian P\"atzold, Jan Nogga, Sven Behnke
Leveraging Vision-Language Models for Open-Vocabulary Instance Segmentation and Tracking
Submitted to IEEE Robotics and Automation Letters (RA-L)
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a novel approach that leverages the capabilities of vision-language models (VLMs) by integrating them with established approaches for open-vocabulary detection (OVD), instance segmentation, and tracking. We utilize VLM-generated structured descriptions to identify visible object instances, collect application-relevant attributes, and inform an open-vocabulary detector to extract corresponding bounding boxes that are passed to a video segmentation model providing precise segmentation masks and tracking capabilities. Once initialized, this model can then directly extract segmentation masks, allowing processing of image streams in real time with minimal computational overhead. Tracks can be updated online as needed by generating new structured descriptions and corresponding open-vocabulary detections. This combines the descriptive power of VLMs with the grounding capability of OVD and the pixel-level understanding and speed of video segmentation. Our evaluation across datasets and robotics platforms demonstrates the broad applicability of this approach, showcasing its ability to extract task-specific attributes from non-standard objects in dynamic environments.
[ { "version": "v1", "created": "Tue, 18 Mar 2025 20:18:42 GMT" } ]
2025-03-24T00:00:00
[ [ "Pätzold", "Bastian", "" ], [ "Nogga", "Jan", "" ], [ "Behnke", "Sven", "" ] ]
TITLE: Leveraging Vision-Language Models for Open-Vocabulary Instance Segmentation and Tracking ABSTRACT: This paper introduces a novel approach that leverages the capabilities of vision-language models (VLMs) by integrating them with established approaches for open-vocabulary detection (OVD), instance segmentation, and tracking. We utilize VLM-generated structured descriptions to identify visible object instances, collect application-relevant attributes, and inform an open-vocabulary detector to extract corresponding bounding boxes that are passed to a video segmentation model providing precise segmentation masks and tracking capabilities. Once initialized, this model can then directly extract segmentation masks, allowing processing of image streams in real time with minimal computational overhead. Tracks can be updated online as needed by generating new structured descriptions and corresponding open-vocabulary detections. This combines the descriptive power of VLMs with the grounding capability of OVD and the pixel-level understanding and speed of video segmentation. Our evaluation across datasets and robotics platforms demonstrates the broad applicability of this approach, showcasing its ability to extract task-specific attributes from non-standard objects in dynamic environments.
2503.16542
Shiyi Jiang
Shiyi Jiang, Farshad Firouzi, Krishnendu Chakrabarty
Defending Against Gradient Inversion Attacks for Biomedical Images via Learnable Data Perturbation
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
The increasing need for sharing healthcare data and collaborating on clinical research has raised privacy concerns. Health information leakage due to malicious attacks can lead to serious problems such as misdiagnoses and patient identification issues. Privacy-preserving machine learning (PPML) and privacy-enhancing technologies, particularly federated learning (FL), have emerged in recent years as innovative solutions to balance privacy protection with data utility; however, they also suffer from inherent privacy vulnerabilities. Gradient inversion attacks constitute major threats to data sharing in federated learning. Researchers have proposed many defenses against gradient inversion attacks. However, current defense methods for healthcare data lack generalizability, i.e., existing solutions may not be applicable to data from a broader range of populations. In addition, most existing defense methods are tested using non-healthcare data, which raises concerns about their applicability to real-world healthcare systems. In this study, we present a defense against gradient inversion attacks in federated learning. We achieve this using latent data perturbation and minimax optimization, utilizing both general and medical image datasets. Our method is compared to two baselines, and the results show that our approach can outperform the baselines with a reduction of 12.5% in the attacker's accuracy in classifying reconstructed images. The proposed method also yields an increase of over 12.4% in Mean Squared Error (MSE) between the original and reconstructed images at the same level of model utility of around 90% client classification accuracy. The results suggest the potential of a generalizable defense for healthcare data.
[ { "version": "v1", "created": "Wed, 19 Mar 2025 01:53:23 GMT" } ]
2025-03-24T00:00:00
[ [ "Jiang", "Shiyi", "" ], [ "Firouzi", "Farshad", "" ], [ "Chakrabarty", "Krishnendu", "" ] ]
TITLE: Defending Against Gradient Inversion Attacks for Biomedical Images via Learnable Data Perturbation ABSTRACT: The increasing need for sharing healthcare data and collaborating on clinical research has raised privacy concerns. Health information leakage due to malicious attacks can lead to serious problems such as misdiagnoses and patient identification issues. Privacy-preserving machine learning (PPML) and privacy-enhancing technologies, particularly federated learning (FL), have emerged in recent years as innovative solutions to balance privacy protection with data utility; however, they also suffer from inherent privacy vulnerabilities. Gradient inversion attacks constitute major threats to data sharing in federated learning. Researchers have proposed many defenses against gradient inversion attacks. However, current defense methods for healthcare data lack generalizability, i.e., existing solutions may not be applicable to data from a broader range of populations. In addition, most existing defense methods are tested using non-healthcare data, which raises concerns about their applicability to real-world healthcare systems. In this study, we present a defense against gradient inversion attacks in federated learning. We achieve this using latent data perturbation and minimax optimization, utilizing both general and medical image datasets. Our method is compared to two baselines, and the results show that our approach can outperform the baselines with a reduction of 12.5% in the attacker's accuracy in classifying reconstructed images. The proposed method also yields an increase of over 12.4% in Mean Squared Error (MSE) between the original and reconstructed images at the same level of model utility of around 90% client classification accuracy. The results suggest the potential of a generalizable defense for healthcare data.
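One plausible reading of "latent data perturbation and minimax optimization" is the adversarial game sketched below: a learnable perturbation keeps task loss low while maximizing a simulated attacker's reconstruction error. All modules are stand-ins, and this is an interpretation rather than the paper's exact formulation.

```python
# Conceptual minimax defense sketch: perturb latent features so that a
# classifier stays accurate while a simulated attacker's reconstruction
# gets worse. encoder/perturb/classifier/attacker are placeholder modules.
import torch
import torch.nn.functional as F

def defense_step(encoder, perturb, classifier, attacker, x, y, alpha=1.0):
    z = perturb(encoder(x))                      # perturbed latent features
    task_loss = F.cross_entropy(classifier(z), y)
    recon = attacker(z)                          # attacker tries to invert z
    attack_loss = F.mse_loss(recon, x)
    # Minimize the utility loss while maximizing the attacker's error
    # (the attacker itself would be trained on the opposite objective).
    return task_loss - alpha * attack_loss
```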
2503.16544
Donghuo Zeng
Donghuo Zeng, Roberto Legaspi, Yuewen Sun, Xinshuai Dong, Kazushi Ikeda, Peter Spirtes, Kun Zhang
Causal Discovery and Counterfactual Reasoning to Optimize Persuasive Dialogue Policies
21 pages, 8 figures
null
null
null
cs.CL cs.AI cs.HC
http://creativecommons.org/licenses/by/4.0/
Tailoring persuasive conversations to users leads to more effective persuasion. However, existing dialogue systems often struggle to adapt to dynamically evolving user states. This paper presents a novel method that leverages causal discovery and counterfactual reasoning for optimizing system persuasion capability and outcomes. We employ the Greedy Relaxation of the Sparsest Permutation (GRaSP) algorithm to identify causal relationships between user and system utterance strategies, treating user strategies as states and system strategies as actions. GRaSP identifies user strategies as causal factors influencing system responses, which inform Bidirectional Conditional Generative Adversarial Networks (BiCoGAN) in generating counterfactual utterances for the system. Subsequently, we use the Dueling Double Deep Q-Network (D3QN) model to utilize counterfactual data to determine the best policy for selecting system utterances. Our experiments with the PersuasionForGood dataset show measurable improvements in persuasion outcomes using our approach over baseline methods. The observed increase in cumulative rewards and Q-values highlights the effectiveness of causal discovery in enhancing counterfactual reasoning and optimizing reinforcement learning policies for online dialogue systems.
[ { "version": "v1", "created": "Wed, 19 Mar 2025 06:06:10 GMT" } ]
2025-03-24T00:00:00
[ [ "Zeng", "Donghuo", "" ], [ "Legaspi", "Roberto", "" ], [ "Sun", "Yuewen", "" ], [ "Dong", "Xinshuai", "" ], [ "Ikeda", "Kazushi", "" ], [ "Spirtes", "Peter", "" ], [ "Zhang", "Kun", "" ] ]
TITLE: Causal Discovery and Counterfactual Reasoning to Optimize Persuasive Dialogue Policies ABSTRACT: Tailoring persuasive conversations to users leads to more effective persuasion. However, existing dialogue systems often struggle to adapt to dynamically evolving user states. This paper presents a novel method that leverages causal discovery and counterfactual reasoning for optimizing system persuasion capability and outcomes. We employ the Greedy Relaxation of the Sparsest Permutation (GRaSP) algorithm to identify causal relationships between user and system utterance strategies, treating user strategies as states and system strategies as actions. GRaSP identifies user strategies as causal factors influencing system responses, which inform Bidirectional Conditional Generative Adversarial Networks (BiCoGAN) in generating counterfactual utterances for the system. Subsequently, we use the Dueling Double Deep Q-Network (D3QN) model to utilize counterfactual data to determine the best policy for selecting system utterances. Our experiments with the PersuasionForGood dataset show measurable improvements in persuasion outcomes using our approach over baseline methods. The observed increase in cumulative rewards and Q-values highlights the effectiveness of causal discovery in enhancing counterfactual reasoning and optimizing reinforcement learning policies for online dialogue systems.
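The reinforcement learning component named in the abstract is standard enough to sketch: a dueling Q-network head plus the double-DQN target that decouples action selection from evaluation. Dimensions, reward scale, and discount below are illustrative assumptions; the GRaSP and BiCoGAN stages are not reproduced.

```python
# Minimal D3QN (Dueling Double DQN) building blocks in PyTorch; a sketch only.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage A(s, a)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        h = self.trunk(s)
        v, a = self.value(h), self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(online, target, s_next, reward, done, gamma=0.99):
    # Double DQN: the online net picks the action, the target net evaluates it.
    with torch.no_grad():
        best_a = online(s_next).argmax(dim=1, keepdim=True)
        q_next = target(s_next).gather(1, best_a).squeeze(1)
        return reward + gamma * (1.0 - done) * q_next

online, target = DuelingQNet(16, 8), DuelingQNet(16, 8)
target.load_state_dict(online.state_dict())
y = double_dqn_target(online, target, torch.randn(4, 16),
                      torch.ones(4), torch.zeros(4))
print(y.shape)  # torch.Size([4])
```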
2503.16550
Sun Yudao
Yudao Sun, Juan Yin, Juan Zhao, Fan Zhang, Yongheng Liu, Hongji Chen
Unified Enhancement of the Generalization and Robustness of Language Models via Bi-Stage Optimization
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Neural network language models (LMs) are confronted with significant challenges in generalization and robustness. Currently, many studies focus on improving either generalization or robustness in isolation, without methods addressing both aspects simultaneously, which presents a significant challenge in developing LMs that are both robust and generalized. In this paper, we propose a bi-stage optimization framework to uniformly enhance both the generalization and robustness of LMs, termed UEGR. Specifically, during the forward propagation stage, we enrich the output probability distributions of adversarial samples by adaptive dropout to generate diverse sub-models, and incorporate JS divergence and adversarial losses of these output distributions to reinforce output stability. During the backward propagation stage, we compute parameter saliency scores and selectively update only the most critical parameters to minimize unnecessary deviations and consolidate the model's resilience. Theoretical analysis shows that our framework includes gradient regularization to limit the model's sensitivity to input perturbations and selective parameter updates to flatten the loss landscape, thus improving both generalization and robustness. The experimental results show that our method significantly improves the generalization and robustness of LMs compared to other existing methods across 13 publicly available language datasets, achieving state-of-the-art (SOTA) performance.
[ { "version": "v1", "created": "Wed, 19 Mar 2025 13:50:36 GMT" } ]
2025-03-24T00:00:00
[ [ "Sun", "Yudao", "" ], [ "Yin", "Juan", "" ], [ "Zhao", "Juan", "" ], [ "Zhang", "Fan", "" ], [ "Liu", "Yongheng", "" ], [ "Chen", "Hongji", "" ] ]
TITLE: Unified Enhancement of the Generalization and Robustness of Language Models via Bi-Stage Optimization ABSTRACT: Neural network language models (LMs) are confronted with significant challenges in generalization and robustness. Currently, many studies focus on improving either generalization or robustness in isolation, without methods addressing both aspects simultaneously, which presents a significant challenge in developing LMs that are both robust and generalized. In this paper, we propose a bi-stage optimization framework to uniformly enhance both the generalization and robustness of LMs, termed UEGR. Specifically, during the forward propagation stage, we enrich the output probability distributions of adversarial samples by adaptive dropout to generate diverse sub-models, and incorporate JS divergence and adversarial losses of these output distributions to reinforce output stability. During the backward propagation stage, we compute parameter saliency scores and selectively update only the most critical parameters to minimize unnecessary deviations and consolidate the model's resilience. Theoretical analysis shows that our framework includes gradient regularization to limit the model's sensitivity to input perturbations and selective parameter updates to flatten the loss landscape, thus improving both generalization and robustness. The experimental results show that our method significantly improves the generalization and robustness of LMs compared to other existing methods across 13 publicly available language datasets, achieving state-of-the-art (SOTA) performance.
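The forward-propagation idea lends itself to a short sketch: run the same input through several dropout-active passes (each acting as a different sub-model) and penalize disagreement with a Jensen-Shannon divergence over the output distributions. The network, dropout rate, and number of views below are assumptions for illustration.

```python
# Hedged sketch of a JS-divergence consistency loss over dropout sub-models.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Dropout(0.3),
                      nn.Linear(64, 5))

def js_consistency_loss(x: torch.Tensor, n_views: int = 3) -> torch.Tensor:
    model.train()  # keep dropout active so each pass samples a sub-model
    probs = [F.softmax(model(x), dim=-1) for _ in range(n_views)]
    mean_p = torch.stack(probs).mean(0)
    # Generalized JS divergence = mean_i KL(p_i || mean_p);
    # note F.kl_div(log_q, p) computes KL(p || q).
    return torch.stack(
        [F.kl_div(mean_p.log(), p, reduction="batchmean") for p in probs]
    ).mean()

print(js_consistency_loss(torch.randn(8, 32)).item())
```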
2503.16553
Zhenlin Qin
Zhenlin Qin, Leizhen Wang, Francisco Camara Pereira, Zhenliang Ma
A Foundational Individual Mobility Prediction Model based on Open-Source Large Language Models
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) are widely applied to domain-specific tasks due to their massive general knowledge and remarkable inference capacities. Current studies have shown immense potential in applying LLMs to individual mobility prediction problems. However, most LLM-based mobility prediction models only train on specific datasets or use single well-designed prompts, leading to difficulty in adapting to different cities and users with diverse contexts. To fill these gaps, this paper proposes a unified fine-tuning framework to train a foundational open-source LLM-based mobility prediction model. We conducted extensive experiments on six real-world mobility datasets to validate the proposed model. The results showed that the proposed model achieved the best performance in prediction accuracy and transferability over state-of-the-art models based on deep learning and LLMs.
[ { "version": "v1", "created": "Wed, 19 Mar 2025 15:08:37 GMT" } ]
2025-03-24T00:00:00
[ [ "Qin", "Zhenlin", "" ], [ "Wang", "Leizhen", "" ], [ "Pereira", "Francisco Camara", "" ], [ "Ma", "Zhenlinag", "" ] ]
TITLE: A Foundational Individual Mobility Prediction Model based on Open-Source Large Language Models ABSTRACT: Large Language Models (LLMs) are widely applied to domain-specific tasks due to their massive general knowledge and remarkable inference capacities. Current studies have shown immense potential in applying LLMs to individual mobility prediction problems. However, most LLM-based mobility prediction models only train on specific datasets or use single well-designed prompts, leading to difficulty in adapting to different cities and users with diverse contexts. To fill these gaps, this paper proposes a unified fine-tuning framework to train a foundational open-source LLM-based mobility prediction model. We conducted extensive experiments on six real-world mobility datasets to validate the proposed model. The results showed that the proposed model achieved the best performance in prediction accuracy and transferability over state-of-the-art models based on deep learning and LLMs.
2503.16556
Sabeen Ahmed
Sabeen Ahmed, Nathan Parker, Margaret Park, Daniel Jeong, Lauren Peres, Evan W. Davis, Jennifer B. Permuth, Erin Siegel, Matthew B. Schabath, Yasin Yilmaz, and Ghulam Rasool
Reliable Radiologic Skeletal Muscle Area Assessment -- A Biomarker for Cancer Cachexia Diagnosis
47 pages, 19 figures, 9 tables
null
null
null
eess.IV cs.AI cs.CE cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Cancer cachexia is a common metabolic disorder characterized by severe muscle atrophy which is associated with poor prognosis and quality of life. Monitoring skeletal muscle area (SMA) longitudinally through computed tomography (CT) scans, an imaging modality routinely acquired in cancer care, is an effective way to identify and track this condition. However, existing tools often lack full automation and exhibit inconsistent accuracy, limiting their potential for integration into clinical workflows. To address these challenges, we developed SMAART-AI (Skeletal Muscle Assessment-Automated and Reliable Tool-based on AI), an end-to-end automated pipeline powered by deep learning models (nnU-Net 2D) trained on mid-third lumbar level CT images with 5-fold cross-validation, ensuring generalizability and robustness. SMAART-AI incorporates an uncertainty-based mechanism to flag high-error SMA predictions for expert review, enhancing reliability. We combined the SMA, skeletal muscle index, BMI, and clinical data to train a multi-layer perceptron (MLP) model designed to predict cachexia at the time of cancer diagnosis. Tested on the gastroesophageal cancer dataset, SMAART-AI achieved a Dice score of 97.80% +/- 0.93%, with SMA estimated across all four datasets in this study at a median absolute error of 2.48% compared to manual annotations with SliceOmatic. Uncertainty metrics (variance, entropy, and coefficient of variation) strongly correlated with SMA prediction errors (0.83, 0.76, and 0.73, respectively). The MLP model predicts cachexia with 79% precision, providing clinicians with a reliable tool for early diagnosis and intervention. By combining automation, accuracy, and uncertainty awareness, SMAART-AI bridges the gap between research and clinical application, offering a transformative approach to managing cancer cachexia.
[ { "version": "v1", "created": "Wed, 19 Mar 2025 19:07:59 GMT" } ]
2025-03-24T00:00:00
[ [ "Ahmed", "Sabeen", "" ], [ "Parker", "Nathan", "" ], [ "Park", "Margaret", "" ], [ "Jeong", "Daniel", "" ], [ "Peres", "Lauren", "" ], [ "Davis", "Evan W.", "" ], [ "Permuth", "Jennifer B.", "" ], [ "Siegel", "Erin", "" ], [ "Schabath", "Matthew B.", "" ], [ "Yilmaz", "Yasin", "" ], [ "Rasool", "Ghulam", "" ] ]
TITLE: Reliable Radiologic Skeletal Muscle Area Assessment -- A Biomarker for Cancer Cachexia Diagnosis ABSTRACT: Cancer cachexia is a common metabolic disorder characterized by severe muscle atrophy which is associated with poor prognosis and quality of life. Monitoring skeletal muscle area (SMA) longitudinally through computed tomography (CT) scans, an imaging modality routinely acquired in cancer care, is an effective way to identify and track this condition. However, existing tools often lack full automation and exhibit inconsistent accuracy, limiting their potential for integration into clinical workflows. To address these challenges, we developed SMAART-AI (Skeletal Muscle Assessment-Automated and Reliable Tool-based on AI), an end-to-end automated pipeline powered by deep learning models (nnU-Net 2D) trained on mid-third lumbar level CT images with 5-fold cross-validation, ensuring generalizability and robustness. SMAART-AI incorporates an uncertainty-based mechanism to flag high-error SMA predictions for expert review, enhancing reliability. We combined the SMA, skeletal muscle index, BMI, and clinical data to train a multi-layer perceptron (MLP) model designed to predict cachexia at the time of cancer diagnosis. Tested on the gastroesophageal cancer dataset, SMAART-AI achieved a Dice score of 97.80% +/- 0.93%, with SMA estimated across all four datasets in this study at a median absolute error of 2.48% compared to manual annotations with SliceOmatic. Uncertainty metrics (variance, entropy, and coefficient of variation) strongly correlated with SMA prediction errors (0.83, 0.76, and 0.73, respectively). The MLP model predicts cachexia with 79% precision, providing clinicians with a reliable tool for early diagnosis and intervention. By combining automation, accuracy, and uncertainty awareness, SMAART-AI bridges the gap between research and clinical application, offering a transformative approach to managing cancer cachexia.
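The three uncertainty metrics quoted above are easy to state precisely; a small sketch, assuming they are computed over Monte-Carlo forward passes of the segmentation model (the sampling scheme is an assumption, only the three quantities mirror the abstract):

```python
# Variance, entropy, and coefficient of variation over MC predictions.
import numpy as np

def uncertainty_metrics(mc_probs: np.ndarray, eps: float = 1e-8):
    """mc_probs: (n_passes, H, W) foreground probabilities."""
    mean_p = mc_probs.mean(axis=0)
    variance = mc_probs.var(axis=0).mean()
    entropy = -(mean_p * np.log(mean_p + eps)
                + (1 - mean_p) * np.log(1 - mean_p + eps)).mean()
    cv = (mc_probs.std(axis=0) / (mean_p + eps)).mean()  # coefficient of variation
    return variance, entropy, cv

mc = np.random.beta(2, 5, size=(10, 64, 64))  # 10 stochastic forward passes
print(uncertainty_metrics(mc))
```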
2503.16557
Abdulaziz Al Mannai
Abdulaziz Al Mannai
Investigating Cultural Dimensions and Technological Acceptance: The Adoption of Electronic Performance and Tracking Systems in Qatar's Football Sector
null
null
null
null
cs.CY cs.LG
http://creativecommons.org/licenses/by/4.0/
Qatar's football sector has undergone a substantial technological transformation with the implementation of Electronic Performance and Tracking Systems (EPTS). This study examines the impact of cultural and technological factors on EPTS adoption, using Hofstede's Cultural Dimensions Theory and the Technology Acceptance Model (TAM) as theoretical frameworks. An initial exploratory study involved ten participants, followed by an expanded dataset comprising thirty stakeholders, including players, coaches, and staff from Qatari football organizations. Multiple regression analysis was conducted to evaluate the relationships between perceived usefulness, perceived ease of use, power distance, innovation receptiveness, integration complexity, and overall adoption. The results indicate that perceived usefulness, innovation receptiveness, and lower power distance significantly drive EPTS adoption, while ease of use is marginally significant and integration complexity is non-significant in this sample. These findings provide practical insights for sports technology stakeholders in Qatar and emphasize the importance of aligning cultural considerations with technological readiness for successful EPTS integration.
[ { "version": "v1", "created": "Wed, 19 Mar 2025 19:50:09 GMT" } ]
2025-03-24T00:00:00
[ [ "Mannai", "Abdulaziz Al", "" ] ]
TITLE: Investigating Cultural Dimensions and Technological Acceptance: The Adoption of Electronic Performance and Tracking Systems in Qatar's Football Sector ABSTRACT: Qatar's football sector has undergone a substantial technological transformation with the implementation of Electronic Performance and Tracking Systems (EPTS). This study examines the impact of cultural and technological factors on EPTS adoption, using Hofstede's Cultural Dimensions Theory and the Technology Acceptance Model (TAM) as theoretical frameworks. An initial exploratory study involved ten participants, followed by an expanded dataset comprising thirty stakeholders, including players, coaches, and staff from Qatari football organizations. Multiple regression analysis was conducted to evaluate the relationships between perceived usefulness, perceived ease of use, power distance, innovation receptiveness, integration complexity, and overall adoption. The results indicate that perceived usefulness, innovation receptiveness, and lower power distance significantly drive EPTS adoption, while ease of use is marginally significant and integration complexity is non-significant in this sample. These findings provide practical insights for sports technology stakeholders in Qatar and emphasize the importance of aligning cultural considerations with technological readiness for successful EPTS integration.
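The analysis itself is a standard multiple regression; a hedged sketch on synthetic data, with variable names mirroring the constructs in the abstract (the coefficients below are invented for illustration, not the study's estimates):

```python
# Multiple regression of EPTS adoption on TAM and cultural predictors.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 30  # matches the expanded sample size
X = rng.normal(size=(n, 5))  # PU, PEOU, power distance, innovation, complexity
adoption = 0.6 * X[:, 0] + 0.4 * X[:, 3] - 0.3 * X[:, 2] + rng.normal(0, 0.5, n)

reg = LinearRegression().fit(X, adoption)
names = ["usefulness", "ease_of_use", "power_distance", "innovation", "complexity"]
print(dict(zip(names, reg.coef_.round(2))), "R2:", round(reg.score(X, adoption), 2))
```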
2503.16561
Ibrahim Al Azhar
Ibrahim Al Azher, Miftahul Jannat Mokarrama, Zhishuai Guo, Sagnik Ray Choudhury, Hamed Alhoori
FutureGen: LLM-RAG Approach to Generate the Future Work of Scientific Article
19 pages, 5 figures
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The future work section of a scientific article outlines potential research directions by identifying gaps and limitations of a current study. This section serves as a valuable resource for early-career researchers seeking unexplored areas and experienced researchers looking for new projects or collaborations. In this study, we generate future work suggestions from key sections of a scientific article alongside related papers and analyze how the trends have evolved. We experimented with various Large Language Models (LLMs) and integrated Retrieval-Augmented Generation (RAG) to enhance the generation process. We incorporate an LLM feedback mechanism to improve the quality of the generated content and propose an LLM-as-a-judge approach for evaluation. Our results demonstrated that the RAG-based approach with LLM feedback outperforms other methods evaluated through qualitative and quantitative metrics. Moreover, we conduct a human evaluation to assess the LLM as an extractor and judge. The code and dataset for this project are available (code: HuggingFace).
[ { "version": "v1", "created": "Thu, 20 Mar 2025 06:14:02 GMT" } ]
2025-03-24T00:00:00
[ [ "Azher", "Ibrahim Al", "" ], [ "Mokarrama", "Miftahul Jannat", "" ], [ "Guo", "Zhishuai", "" ], [ "Choudhury", "Sagnik Ray", "" ], [ "Alhoori", "Hamed", "" ] ]
TITLE: FutureGen: LLM-RAG Approach to Generate the Future Work of Scientific Article ABSTRACT: The future work section of a scientific article outlines potential research directions by identifying gaps and limitations of a current study. This section serves as a valuable resource for early-career researchers seeking unexplored areas and experienced researchers looking for new projects or collaborations. In this study, we generate future work suggestions from key sections of a scientific article alongside related papers and analyze how the trends have evolved. We experimented with various Large Language Models (LLMs) and integrated Retrieval-Augmented Generation (RAG) to enhance the generation process. We incorporate an LLM feedback mechanism to improve the quality of the generated content and propose an LLM-as-a-judge approach for evaluation. Our results demonstrated that the RAG-based approach with LLM feedback outperforms other methods evaluated through qualitative and quantitative metrics. Moreover, we conduct a human evaluation to assess the LLM as an extractor and judge. The code and dataset for this project are available (code: HuggingFace).
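The retrieval step at the heart of a RAG pipeline like the one described can be sketched with TF-IDF similarity; the corpus and query are toy stand-ins, and the paper's LLM generation and feedback stages are not reproduced.

```python
# Minimal retrieval: rank passages for a query before prompting an LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "We study limitations of transformer context length.",
    "Future work includes scaling retrieval to full-text archives.",
    "Our dataset covers abstracts from three venues.",
]
query = "What future research directions does the paper suggest?"

vec = TfidfVectorizer().fit(corpus)
scores = cosine_similarity(vec.transform([query]), vec.transform(corpus))[0]
top = scores.argsort()[::-1][:2]  # top-2 passages to place in the prompt
print([corpus[i] for i in top])
```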
2503.16567
Paolo Burelli
Laurits Dixen, Stefan Heinrich and Paolo Burelli
Exploring Deep Learning Models for EEG Neural Decoding
null
null
10.1007/978-3-031-82487-6_12
null
cs.LG eess.SP q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Neural decoding is an important method in cognitive neuroscience that aims to decode brain representations from recorded neural activity using a multivariate machine learning model. The THINGS initiative provides a large EEG dataset of 46 subjects watching rapidly shown images. Here, we test the feasibility of using this method for decoding high-level object features using recent deep learning models. From this, we create a derivative dataset of living vs non-living entities, test 15 different deep learning models spanning 5 different architectures, and compare them to a SOTA linear model. We show that the linear model is not able to solve the decoding task, while almost all the deep learning models are successful, suggesting that in some cases non-linear models are needed to decode neural representations. We also run a comparative study of the models' performance on individual object categories, and suggest how artificial neural networks can be used to study brain activity.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 08:02:09 GMT" } ]
2025-03-24T00:00:00
[ [ "Dixen", "Laurits", "" ], [ "Heinrich", "Stefan", "" ], [ "Burelli", "Paolo", "" ] ]
TITLE: Exploring Deep Learning Models for EEG Neural Decoding ABSTRACT: Neural decoding is an important method in cognitive neuroscience that aims to decode brain representations from recorded neural activity using a multivariate machine learning model. The THINGS initiative provides a large EEG dataset of 46 subjects watching rapidly shown images. Here, we test the feasibility of using this method for decoding high-level object features using recent deep learning models. From this, we create a derivative dataset of living vs non-living entities, test 15 different deep learning models spanning 5 different architectures, and compare them to a SOTA linear model. We show that the linear model is not able to solve the decoding task, while almost all the deep learning models are successful, suggesting that in some cases non-linear models are needed to decode neural representations. We also run a comparative study of the models' performance on individual object categories, and suggest how artificial neural networks can be used to study brain activity.
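The linear-versus-nonlinear comparison in the abstract can be illustrated on synthetic data; a sketch assuming a deliberately non-linear decision boundary (THINGS-EEG preprocessing and the actual architectures are not reproduced):

```python
# Linear decoder vs a small non-linear decoder on the same features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 64))                   # 400 trials x 64 channels
y = (np.sin(X[:, 0] * X[:, 1]) > 0).astype(int)  # non-linear boundary

linear = LogisticRegression(max_iter=1000)
nonlinear = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
print("linear   :", cross_val_score(linear, X, y, cv=5).mean())
print("nonlinear:", cross_val_score(nonlinear, X, y, cv=5).mean())
```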
2503.16572
Shu Yang
Shu Yang, Chengting Yu, Lei Liu, Hanzhi Ma, Aili Wang, Erping Li
Efficient ANN-Guided Distillation: Aligning Rate-based Features of Spiking Neural Networks through Hybrid Block-wise Replacement
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spiking Neural Networks (SNNs) have garnered considerable attention as a potential alternative to Artificial Neural Networks (ANNs). Recent studies have highlighted SNNs' potential on large-scale datasets. For SNN training, two main approaches exist: direct training and ANN-to-SNN (ANN2SNN) conversion. To fully leverage existing ANN models in guiding SNN learning, either direct ANN-to-SNN conversion or ANN-SNN distillation training can be employed. In this paper, we propose an ANN-SNN distillation framework from the ANN-to-SNN perspective, designed with a block-wise replacement strategy for ANN-guided learning. By generating intermediate hybrid models that progressively align SNN feature spaces to those of ANN through rate-based features, our framework naturally incorporates rate-based backpropagation as a training method. Our approach achieves results comparable to or better than state-of-the-art SNN distillation methods, showing both training and learning efficiency.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 09:04:38 GMT" } ]
2025-03-24T00:00:00
[ [ "Yang", "Shu", "" ], [ "Yu", "Chengting", "" ], [ "Liu", "Lei", "" ], [ "Ma", "Hanzhi", "" ], [ "Wang", "Aili", "" ], [ "Li", "Erping", "" ] ]
TITLE: Efficient ANN-Guided Distillation: Aligning Rate-based Features of Spiking Neural Networks through Hybrid Block-wise Replacement ABSTRACT: Spiking Neural Networks (SNNs) have garnered considerable attention as a potential alternative to Artificial Neural Networks (ANNs). Recent studies have highlighted SNNs' potential on large-scale datasets. For SNN training, two main approaches exist: direct training and ANN-to-SNN (ANN2SNN) conversion. To fully leverage existing ANN models in guiding SNN learning, either direct ANN-to-SNN conversion or ANN-SNN distillation training can be employed. In this paper, we propose an ANN-SNN distillation framework from the ANN-to-SNN perspective, designed with a block-wise replacement strategy for ANN-guided learning. By generating intermediate hybrid models that progressively align SNN feature spaces to those of ANN through rate-based features, our framework naturally incorporates rate-based backpropagation as a training method. Our approach achieves results comparable to or better than state-of-the-art SNN distillation methods, showing both training and learning efficiency.
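The rate-based alignment idea is compact enough to sketch: average spikes over the time dimension to obtain firing-rate features, then match them to the guiding ANN block's features with an MSE loss. Shapes and the spike source below are illustrative assumptions.

```python
# Rate-based feature alignment between an SNN block and its ANN counterpart.
import torch
import torch.nn.functional as F

T, B, C = 8, 4, 128                           # time steps, batch, channels
spikes = (torch.rand(T, B, C) < 0.2).float()  # binary spike trains
ann_feat = torch.randn(B, C)                  # features from the guiding ANN block

rate_feat = spikes.mean(dim=0)                # rate coding: firing rate per neuron
align_loss = F.mse_loss(rate_feat, ann_feat)
print(align_loss.item())
```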
2503.16575
Han Yuan
Bo Hu, Han Yuan, Vlad Pandelea, Wuqiong Luo, Yingzhu Zhao, Zheng Ma
Extract, Match, and Score: An Evaluation Paradigm for Long Question-context-answer Triplets in Financial Analysis
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
The rapid advancement of large language models (LLMs) has sparked widespread adoption across diverse applications, making robust evaluation frameworks crucial for assessing their performance. While conventional evaluation metrics remain applicable for shorter texts, their efficacy diminishes when evaluating the quality of long-form answers. This limitation is particularly critical in real-world scenarios involving extended questions, extensive context, and long-form answers, such as financial analysis or regulatory compliance. In this paper, we use a practical financial use case to illustrate applications that handle "long question-context-answer triplets". We construct a real-world financial dataset comprising long triplets and demonstrate the inadequacies of traditional metrics. To address this, we propose an effective Extract, Match, and Score (EMS) evaluation approach tailored to the complexities of long-form LLMs' outputs, providing practitioners with a reliable methodology for assessing LLMs' performance in complex real-world scenarios.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 09:38:44 GMT" } ]
2025-03-24T00:00:00
[ [ "Hu", "Bo", "" ], [ "Yuan", "Han", "" ], [ "Pandelea", "Vlad", "" ], [ "Luo", "Wuqiong", "" ], [ "Zhao", "Yingzhu", "" ], [ "Ma", "Zheng", "" ] ]
TITLE: Extract, Match, and Score: An Evaluation Paradigm for Long Question-context-answer Triplets in Financial Analysis ABSTRACT: The rapid advancement of large language models (LLMs) has sparked widespread adoption across diverse applications, making robust evaluation frameworks crucial for assessing their performance. While conventional evaluation metrics remain applicable for shorter texts, their efficacy diminishes when evaluating the quality of long-form answers. This limitation is particularly critical in real-world scenarios involving extended questions, extensive context, and long-form answers, such as financial analysis or regulatory compliance. In this paper, we use a practical financial use case to illustrate applications that handle "long question-context-answer triplets". We construct a real-world financial dataset comprising long triplets and demonstrate the inadequacies of traditional metrics. To address this, we propose an effective Extract, Match, and Score (EMS) evaluation approach tailored to the complexities of long-form LLMs' outputs, providing practitioners with a reliable methodology for assessing LLMs' performance in complex real-world scenarios.
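A hedged sketch of the Match and Score stages under simple assumptions: extracted claims are fuzzily matched against reference facts and the best-match similarities are averaged. The paper's LLM-based extraction is replaced here by pre-split sentences, and `SequenceMatcher` stands in for whatever matcher the authors use.

```python
# Toy Match-and-Score over extracted claims vs reference facts.
from difflib import SequenceMatcher

def ems_score(extracted: list[str], reference: list[str]) -> float:
    """Average best-match similarity of each extracted claim to a reference fact."""
    if not extracted:
        return 0.0
    best = [max(SequenceMatcher(None, e, r).ratio() for r in reference)
            for e in extracted]
    return sum(best) / len(best)

extracted = ["revenue grew 12% year over year", "debt ratio declined"]
reference = ["Revenue grew 12% YoY.", "The debt ratio declined slightly."]
print(round(ems_score(extracted, reference), 3))
```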
2503.16577
Bilal Ahmad
Bilal Ahmad, Jinfu Chen, Haibao Chen
Feature selection strategies for optimized heart disease diagnosis using ML and DL models
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heart disease remains one of the leading causes of morbidity and mortality worldwide, necessitating the development of effective diagnostic tools to enable early diagnosis and clinical decision-making. This study evaluates the impact of the feature selection techniques Mutual Information (MI), Analysis of Variance (ANOVA), and Chi-Square on the predictive performance of various machine learning (ML) and deep learning (DL) models using a dataset of clinical indicators for heart disease. Eleven ML/DL models were assessed using metrics such as precision, recall, AUC score, F1-score, and accuracy. Results indicate that MI outperformed other methods, particularly for advanced models like neural networks, achieving the highest accuracy of 82.3% and recall score of 0.94. Logistic regression (accuracy 82.1%) and random forest (accuracy 80.99%) also demonstrated improved performance with MI. Simpler models such as Naive Bayes and decision trees achieved comparable results with ANOVA and Chi-Square, yielding accuracies of 76.45% and 75.99%, respectively, making them computationally efficient alternatives. Conversely, k-Nearest Neighbors (KNN) and Support Vector Machines (SVM) exhibited lower performance, with accuracies ranging between 51.52% and 54.43%, regardless of the feature selection method. This study provides a comprehensive comparison of feature selection methods for heart disease prediction, demonstrating the critical role of feature selection in optimizing model performance. The results offer practical guidance for selecting appropriate feature selection techniques based on the chosen classification algorithm, contributing to the development of more accurate and efficient diagnostic tools for enhanced clinical decision-making in cardiology.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 09:59:01 GMT" } ]
2025-03-24T00:00:00
[ [ "Ahmad", "Bilal", "" ], [ "Chen", "Jinfu", "" ], [ "Chen", "Haibao", "" ] ]
TITLE: Feature selection strategies for optimized heart disease diagnosis using ML and DL models ABSTRACT: Heart disease remains one of the leading causes of morbidity and mortality worldwide, necessitating the development of effective diagnostic tools to enable early diagnosis and clinical decision-making. This study evaluates the impact of the feature selection techniques Mutual Information (MI), Analysis of Variance (ANOVA), and Chi-Square on the predictive performance of various machine learning (ML) and deep learning (DL) models using a dataset of clinical indicators for heart disease. Eleven ML/DL models were assessed using metrics such as precision, recall, AUC score, F1-score, and accuracy. Results indicate that MI outperformed other methods, particularly for advanced models like neural networks, achieving the highest accuracy of 82.3% and recall score of 0.94. Logistic regression (accuracy 82.1%) and random forest (accuracy 80.99%) also demonstrated improved performance with MI. Simpler models such as Naive Bayes and decision trees achieved comparable results with ANOVA and Chi-Square, yielding accuracies of 76.45% and 75.99%, respectively, making them computationally efficient alternatives. Conversely, k-Nearest Neighbors (KNN) and Support Vector Machines (SVM) exhibited lower performance, with accuracies ranging between 51.52% and 54.43%, regardless of the feature selection method. This study provides a comprehensive comparison of feature selection methods for heart disease prediction, demonstrating the critical role of feature selection in optimizing model performance. The results offer practical guidance for selecting appropriate feature selection techniques based on the chosen classification algorithm, contributing to the development of more accurate and efficient diagnostic tools for enhanced clinical decision-making in cardiology.
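The three feature-selection scores compared in the study are all available in scikit-learn; a sketch on a synthetic stand-in for the clinical dataset (chi-square requires non-negative inputs, hence the scaling):

```python
# MI, ANOVA F-test, and chi-square feature scores side by side.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=300, n_features=13, random_state=0)
X_pos = MinMaxScaler().fit_transform(X)  # non-negative copy for chi-square

mi = mutual_info_classif(X, y, random_state=0)
f_scores, _ = f_classif(X, y)            # ANOVA F-test
chi_scores, _ = chi2(X_pos, y)

for name, s in [("MI", mi), ("ANOVA", f_scores), ("Chi2", chi_scores)]:
    print(name, "top features:", np.argsort(s)[::-1][:5])
```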
2503.16578
Yang Chen
Yang Chen and Hui Wang and Shiyao Wang and Junyang Chen and Jiabei He and Jiaming Zhou and Xi Yang and Yequan Wang and Yonghua Lin and Yong Qin
SeniorTalk: A Chinese Conversation Dataset with Rich Annotations for Super-Aged Seniors
null
null
null
null
cs.CL cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
While voice technologies increasingly serve aging populations, current systems exhibit significant performance gaps due to inadequate training data capturing elderly-specific vocal characteristics like presbyphonia and dialectal variations. The limited data available on super-aged individuals in existing elderly speech datasets, coupled with overly simple recording styles and annotation dimensions, exacerbates this issue. To address the critical scarcity of speech data from individuals aged 75 and above, we introduce SeniorTalk, a carefully annotated Chinese spoken dialogue dataset. This dataset contains 55.53 hours of speech from 101 natural conversations involving 202 participants, ensuring a strategic balance across gender, region, and age. Through detailed annotation across multiple dimensions, it can support a wide range of speech tasks. We perform extensive experiments on speaker verification, speaker diarization, speech recognition, and speech editing tasks, offering crucial insights for the development of speech technologies targeting this age group.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 11:31:47 GMT" } ]
2025-03-24T00:00:00
[ [ "Chen", "Yang", "" ], [ "Wang", "Hui", "" ], [ "Wang", "Shiyao", "" ], [ "Chen", "Junyang", "" ], [ "He", "Jiabei", "" ], [ "Zhou", "Jiaming", "" ], [ "Yang", "Xi", "" ], [ "Wang", "Yequan", "" ], [ "Lin", "Yonghua", "" ], [ "Qin", "Yong", "" ] ]
TITLE: SeniorTalk: A Chinese Conversation Dataset with Rich Annotations for Super-Aged Seniors ABSTRACT: While voice technologies increasingly serve aging populations, current systems exhibit significant performance gaps due to inadequate training data capturing elderly-specific vocal characteristics like presbyphonia and dialectal variations. The limited data available on super-aged individuals in existing elderly speech datasets, coupled with overly simple recording styles and annotation dimensions, exacerbates this issue. To address the critical scarcity of speech data from individuals aged 75 and above, we introduce SeniorTalk, a carefully annotated Chinese spoken dialogue dataset. This dataset contains 55.53 hours of speech from 101 natural conversations involving 202 participants, ensuring a strategic balance across gender, region, and age. Through detailed annotation across multiple dimensions, it can support a wide range of speech tasks. We perform extensive experiments on speaker verification, speaker diarization, speech recognition, and speech editing tasks, offering crucial insights for the development of speech technologies targeting this age group.
2503.16581
Arbi Haza Nasution
Zahra Khalila, Arbi Haza Nasution, Winda Monika, Aytug Onan, Yohei Murakami, Yasir Bin Ismail Radi, Noor Mohammad Osmani
Investigating Retrieval-Augmented Generation in Quranic Studies: A Study of 13 Open-Source Large Language Models
11 pages, keywords: Large-language-models; retrieval-augmented generation; question answering; Quranic studies; Islamic teachings
International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025
10.14569/IJACSA.2025.01602134
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Accurate and contextually faithful responses are critical when applying large language models (LLMs) to sensitive and domain-specific tasks, such as answering queries related to Quranic studies. General-purpose LLMs often struggle with hallucinations, where generated responses deviate from authoritative sources, raising concerns about their reliability in religious contexts. This challenge highlights the need for systems that can integrate domain-specific knowledge while maintaining response accuracy, relevance, and faithfulness. In this study, we investigate 13 open-source LLMs categorized into large (e.g., Llama3:70b, Gemma2:27b, QwQ:32b), medium (e.g., Gemma2:9b, Llama3:8b), and small (e.g., Llama3.2:3b, Phi3:3.8b). Retrieval-Augmented Generation (RAG) is used to compensate for the limitations of standalone models. This research utilizes a descriptive dataset of Quranic surahs including the meanings, historical context, and qualities of the 114 surahs, allowing the model to gather relevant knowledge before responding. The models are evaluated using three key metrics set by human evaluators: context relevance, answer faithfulness, and answer relevance. The findings reveal that large models consistently outperform smaller models in capturing query semantics and producing accurate, contextually grounded responses. The Llama3.2:3b model, despite its small size, performs very well on faithfulness (4.619) and relevance (4.857), showing the promise of well-optimized smaller architectures. This article examines the trade-offs between model size, computational efficiency, and response quality while using LLMs in domain-specific applications.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 13:26:30 GMT" } ]
2025-03-24T00:00:00
[ [ "Khalila", "Zahra", "" ], [ "Nasution", "Arbi Haza", "" ], [ "Monika", "Winda", "" ], [ "Onan", "Aytug", "" ], [ "Murakami", "Yohei", "" ], [ "Radi", "Yasir Bin Ismail", "" ], [ "Osmani", "Noor Mohammad", "" ] ]
TITLE: Investigating Retrieval-Augmented Generation in Quranic Studies: A Study of 13 Open-Source Large Language Models ABSTRACT: Accurate and contextually faithful responses are critical when applying large language models (LLMs) to sensitive and domain-specific tasks, such as answering queries related to Quranic studies. General-purpose LLMs often struggle with hallucinations, where generated responses deviate from authoritative sources, raising concerns about their reliability in religious contexts. This challenge highlights the need for systems that can integrate domain-specific knowledge while maintaining response accuracy, relevance, and faithfulness. In this study, we investigate 13 open-source LLMs categorized into large (e.g., Llama3:70b, Gemma2:27b, QwQ:32b), medium (e.g., Gemma2:9b, Llama3:8b), and small (e.g., Llama3.2:3b, Phi3:3.8b). Retrieval-Augmented Generation (RAG) is used to compensate for the limitations of standalone models. This research utilizes a descriptive dataset of Quranic surahs including the meanings, historical context, and qualities of the 114 surahs, allowing the model to gather relevant knowledge before responding. The models are evaluated using three key metrics set by human evaluators: context relevance, answer faithfulness, and answer relevance. The findings reveal that large models consistently outperform smaller models in capturing query semantics and producing accurate, contextually grounded responses. The Llama3.2:3b model, despite its small size, performs very well on faithfulness (4.619) and relevance (4.857), showing the promise of well-optimized smaller architectures. This article examines the trade-offs between model size, computational efficiency, and response quality while using LLMs in domain-specific applications.
2503.16582
Ruiqi Yang
Ruiqi Yang, Jianxu Wang, Wei Yuan, Xun Wang, Mei Li
Machine Learning-Based Genomic Linguistic Analysis (Gene Sequence Feature Learning): A Case Study on Predicting Heavy Metal Response Genes in Rice
null
null
null
null
cs.LG cs.AI q-bio.GN
http://creativecommons.org/licenses/by/4.0/
This study explores the application of machine learning-based genetic linguistics for identifying heavy metal response genes in rice (Oryza sativa). By integrating convolutional neural networks and random forest algorithms, we developed a hybrid model capable of extracting and learning meaningful features from gene sequences, such as k-mer frequencies and physicochemical properties. The model was trained and tested on datasets of genes, achieving high predictive performance (precision: 0.89, F1-score: 0.82). RNA-seq and qRT-PCR experiments conducted on rice leaves exposed to Hg0 revealed differential expression of genes associated with heavy metal responses, which validated the model's predictions. Co-expression network analysis identified 103 related genes, and a literature review indicated that these genes are highly likely to be involved in heavy metal-related biological processes. By integrating and comparing the analysis results with those of differentially expressed genes (DEGs), the validity of the new machine learning method was further demonstrated. This study highlights the efficacy of combining machine learning with genetic linguistics for large-scale gene prediction. It demonstrates a cost-effective and efficient approach for uncovering molecular mechanisms underlying heavy metal responses, with potential applications in developing stress-tolerant crop varieties.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 13:41:31 GMT" } ]
2025-03-24T00:00:00
[ [ "Yang", "Ruiqi", "" ], [ "Wang", "Jianxu", "" ], [ "Yuan", "Wei", "" ], [ "Wang", "Xun", "" ], [ "Li", "Mei", "" ] ]
TITLE: Machine Learning-Based Genomic Linguistic Analysis (Gene Sequence Feature Learning): A Case Study on Predicting Heavy Metal Response Genes in Rice ABSTRACT: This study explores the application of machine learning-based genetic linguistics for identifying heavy metal response genes in rice (Oryza sativa). By integrating convolutional neural networks and random forest algorithms, we developed a hybrid model capable of extracting and learning meaningful features from gene sequences, such as k-mer frequencies and physicochemical properties. The model was trained and tested on datasets of genes, achieving high predictive performance (precision: 0.89, F1-score: 0.82). RNA-seq and qRT-PCR experiments conducted on rice leaves exposed to Hg0 revealed differential expression of genes associated with heavy metal responses, which validated the model's predictions. Co-expression network analysis identified 103 related genes, and a literature review indicated that these genes are highly likely to be involved in heavy metal-related biological processes. By integrating and comparing the analysis results with those of differentially expressed genes (DEGs), the validity of the new machine learning method was further demonstrated. This study highlights the efficacy of combining machine learning with genetic linguistics for large-scale gene prediction. It demonstrates a cost-effective and efficient approach for uncovering molecular mechanisms underlying heavy metal responses, with potential applications in developing stress-tolerant crop varieties.
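Of the sequence features named in the abstract, k-mer frequencies are the most self-contained; a sketch of their extraction (the downstream hybrid CNN plus random-forest model is not reproduced):

```python
# k-mer frequency vector for a DNA sequence, in a fixed vocabulary order.
from collections import Counter
from itertools import product

def kmer_frequencies(seq: str, k: int = 3) -> list[float]:
    """Return frequencies of all 4**k DNA k-mers."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    vocab = ["".join(p) for p in product("ACGT", repeat=k)]
    return [counts[m] / total for m in vocab]

vec = kmer_frequencies("ATGCGTACGTTAGC", k=3)
print(len(vec), round(sum(vec), 6))  # 64 features; frequencies sum to 1
```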
2503.16584
Xin Huang
Xin Huang, Shiyao Zhu, Ziyu Wang, Yaping He, Hao Jin, Zhengkui Liu
EVA-MED: An Enhanced Valence-Arousal Multimodal Emotion Dataset for Emotion Recognition
10 pages, 6 figures, 1 table
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
We introduce a novel multimodal emotion recognition dataset that enhances the precision of the Valence-Arousal Model while accounting for individual differences. This dataset includes electroencephalography (EEG), electrocardiography (ECG), and pulse interval (PI) from 64 participants. Data collection employed two emotion induction paradigms: video stimuli that targeted different valence levels (positive, neutral, and negative) and the Mannheim Multicomponent Stress Test (MMST), which induced high arousal through cognitive, emotional, and social stressors. To enrich the dataset, participants' personality traits, anxiety, depression, and emotional states were assessed using validated questionnaires. By capturing a broad spectrum of affective responses while accounting for individual differences, this dataset provides a robust resource for precise emotion modeling. The integration of multimodal physiological data with psychological assessments lays a strong foundation for personalized emotion recognition. We anticipate this resource will support the development of more accurate, adaptive, and individualized emotion recognition systems across diverse applications.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 14:41:50 GMT" } ]
2025-03-24T00:00:00
[ [ "Huang", "Xin", "" ], [ "Zhu", "Shiyao", "" ], [ "Wang", "Ziyu", "" ], [ "He", "Yaping", "" ], [ "Jin", "Hao", "" ], [ "Liu", "Zhengkui", "" ] ]
TITLE: EVA-MED: An Enhanced Valence-Arousal Multimodal Emotion Dataset for Emotion Recognition ABSTRACT: We introduce a novel multimodal emotion recognition dataset that enhances the precision of the Valence-Arousal Model while accounting for individual differences. This dataset includes electroencephalography (EEG), electrocardiography (ECG), and pulse interval (PI) from 64 participants. Data collection employed two emotion induction paradigms: video stimuli that targeted different valence levels (positive, neutral, and negative) and the Mannheim Multicomponent Stress Test (MMST), which induced high arousal through cognitive, emotional, and social stressors. To enrich the dataset, participants' personality traits, anxiety, depression, and emotional states were assessed using validated questionnaires. By capturing a broad spectrum of affective responses while accounting for individual differences, this dataset provides a robust resource for precise emotion modeling. The integration of multimodal physiological data with psychological assessments lays a strong foundation for personalized emotion recognition. We anticipate this resource will support the development of more accurate, adaptive, and individualized emotion recognition systems across diverse applications.
2503.16585
M. Hadi Amini
Hadi Amini, Md Jueal Mia, Yasaman Saadati, Ahmed Imteaj, Seyedsina Nabavirazavi, Urmish Thakker, Md Zarif Hossain, Awal Ahmed Fime, S.S. Iyengar
Distributed LLMs and Multimodal Large Language Models: A Survey on Advances, Challenges, and Future Directions
null
null
null
null
cs.CL cs.CV cs.DC cs.LG
http://creativecommons.org/licenses/by/4.0/
Language models (LMs) are machine learning models designed to predict linguistic patterns by estimating the probability of word sequences based on large-scale datasets, such as text. LMs have a wide range of applications in natural language processing (NLP) tasks, including autocomplete and machine translation. Although larger datasets typically enhance LM performance, scalability remains a challenge due to constraints in computational power and resources. Distributed computing strategies offer essential solutions for improving scalability and managing the growing computational demand. Further, the use of sensitive datasets in training and deployment raises significant privacy concerns. Recent research has focused on developing decentralized techniques to enable distributed training and inference while utilizing diverse computational resources and enabling edge AI. This paper presents a survey on distributed solutions for various LMs, including large language models (LLMs), vision language models (VLMs), multimodal LLMs (MLLMs), and small language models (SLMs). While LLMs focus on processing and generating text, MLLMs are designed to handle multiple modalities of data (e.g., text, images, and audio) and to integrate them for broader applications. To this end, this paper reviews key advancements across the MLLM pipeline, including distributed training, inference, fine-tuning, and deployment, while also identifying the contributions, limitations, and future areas of improvement. Further, it categorizes the literature based on six primary focus areas of decentralization. Our analysis describes gaps in current methodologies for enabling distributed solutions for LMs and outlines future research directions, emphasizing the need for novel solutions to enhance the robustness and applicability of distributed LMs.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 15:18:25 GMT" } ]
2025-03-24T00:00:00
[ [ "Amini", "Hadi", "" ], [ "Mia", "Md Jueal", "" ], [ "Saadati", "Yasaman", "" ], [ "Imteaj", "Ahmed", "" ], [ "Nabavirazavi", "Seyedsina", "" ], [ "Thakker", "Urmish", "" ], [ "Hossain", "Md Zarif", "" ], [ "Fime", "Awal Ahmed", "" ], [ "Iyengar", "S. S.", "" ] ]
TITLE: Distributed LLMs and Multimodal Large Language Models: A Survey on Advances, Challenges, and Future Directions ABSTRACT: Language models (LMs) are machine learning models designed to predict linguistic patterns by estimating the probability of word sequences based on large-scale datasets, such as text. LMs have a wide range of applications in natural language processing (NLP) tasks, including autocomplete and machine translation. Although larger datasets typically enhance LM performance, scalability remains a challenge due to constraints in computational power and resources. Distributed computing strategies offer essential solutions for improving scalability and managing the growing computational demand. Further, the use of sensitive datasets in training and deployment raises significant privacy concerns. Recent research has focused on developing decentralized techniques to enable distributed training and inference while utilizing diverse computational resources and enabling edge AI. This paper presents a survey on distributed solutions for various LMs, including large language models (LLMs), vision language models (VLMs), multimodal LLMs (MLLMs), and small language models (SLMs). While LLMs focus on processing and generating text, MLLMs are designed to handle multiple modalities of data (e.g., text, images, and audio) and to integrate them for broader applications. To this end, this paper reviews key advancements across the MLLM pipeline, including distributed training, inference, fine-tuning, and deployment, while also identifying the contributions, limitations, and future areas of improvement. Further, it categorizes the literature based on six primary focus areas of decentralization. Our analysis describes gaps in current methodologies for enabling distributed solutions for LMs and outlines future research directions, emphasizing the need for novel solutions to enhance the robustness and applicability of distributed LMs.
2503.16591
Luigi Piccinelli
Luigi Piccinelli, Christos Sakaridis, Mattia Segu, Yung-Hsu Yang, Siyuan Li, Wim Abbeloos, Luc Van Gool
UniK3D: Universal Camera Monocular 3D Estimation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Monocular 3D estimation is crucial for visual perception. However, current methods fall short by relying on oversimplified assumptions, such as pinhole camera models or rectified images. These limitations severely restrict their general applicability, causing poor performance in real-world scenarios with fisheye or panoramic images and resulting in substantial context loss. To address this, we present UniK3D, the first generalizable method for monocular 3D estimation able to model any camera. Our method introduces a spherical 3D representation which allows for better disentanglement of camera and scene geometry and enables accurate metric 3D reconstruction for unconstrained camera models. Our camera component features a novel, model-independent representation of the pencil of rays, achieved through a learned superposition of spherical harmonics. We also introduce an angular loss, which, together with the camera module design, prevents the contraction of the 3D outputs for wide-view cameras. A comprehensive zero-shot evaluation on 13 diverse datasets demonstrates the state-of-the-art performance of UniK3D across 3D, depth, and camera metrics, with substantial gains in challenging large-field-of-view and panoramic settings, while maintaining top accuracy in conventional pinhole small-field-of-view domains. Code and models are available at github.com/lpiccinelli-eth/unik3d .
[ { "version": "v1", "created": "Thu, 20 Mar 2025 17:49:23 GMT" } ]
2025-03-24T00:00:00
[ [ "Piccinelli", "Luigi", "" ], [ "Sakaridis", "Christos", "" ], [ "Segu", "Mattia", "" ], [ "Yang", "Yung-Hsu", "" ], [ "Li", "Siyuan", "" ], [ "Abbeloos", "Wim", "" ], [ "Van Gool", "Luc", "" ] ]
TITLE: UniK3D: Universal Camera Monocular 3D Estimation ABSTRACT: Monocular 3D estimation is crucial for visual perception. However, current methods fall short by relying on oversimplified assumptions, such as pinhole camera models or rectified images. These limitations severely restrict their general applicability, causing poor performance in real-world scenarios with fisheye or panoramic images and resulting in substantial context loss. To address this, we present UniK3D, the first generalizable method for monocular 3D estimation able to model any camera. Our method introduces a spherical 3D representation which allows for better disentanglement of camera and scene geometry and enables accurate metric 3D reconstruction for unconstrained camera models. Our camera component features a novel, model-independent representation of the pencil of rays, achieved through a learned superposition of spherical harmonics. We also introduce an angular loss, which, together with the camera module design, prevents the contraction of the 3D outputs for wide-view cameras. A comprehensive zero-shot evaluation on 13 diverse datasets demonstrates the state-of-the-art performance of UniK3D across 3D, depth, and camera metrics, with substantial gains in challenging large-field-of-view and panoramic settings, while maintaining top accuracy in conventional pinhole small-field-of-view domains. Code and models are available at github.com/lpiccinelli-eth/unik3d .
2503.16614
Victor Aguiar Evangelista De Farias
Maria de Lourdes M. Silva, Andr\'e L. C. Mendon\c{c}a, Eduardo R. D. Neto, Iago C. Chaves, Felipe T. Brito, Victor A. E. Farias, Javam C. Machado
Classification of User Reports for Detection of Faulty Computer Components using NLP Models: A Case Study
9 pages, 2 figures
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Computer manufacturers typically offer platforms for users to report faults. However, there remains a significant gap in these platforms' ability to effectively utilize textual reports, which prevents users from describing their issues in their own words. In this context, Natural Language Processing (NLP) offers a promising solution by enabling the analysis of user-generated text. This paper presents an innovative approach that employs NLP models to classify user reports for detecting faulty computer components, such as CPU, memory, motherboard, video card, and more. In this work, we build a dataset of 341 user reports obtained from many sources. Additionally, through extensive experimental evaluation, our approach achieved an accuracy of 79% on this dataset.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 18:11:26 GMT" } ]
2025-03-24T00:00:00
[ [ "Silva", "Maria de Lourdes M.", "" ], [ "Mendonça", "André L. C.", "" ], [ "Neto", "Eduardo R. D.", "" ], [ "Chaves", "Iago C.", "" ], [ "Brito", "Felipe T.", "" ], [ "Farias", "Victor A. E.", "" ], [ "Machado", "Javam C.", "" ] ]
TITLE: Classification of User Reports for Detection of Faulty Computer Components using NLP Models: A Case Study ABSTRACT: Computer manufacturers typically offer platforms for users to report faults. However, there remains a significant gap in these platforms' ability to effectively utilize textual reports, which prevents users from describing their issues in their own words. In this context, Natural Language Processing (NLP) offers a promising solution by enabling the analysis of user-generated text. This paper presents an innovative approach that employs NLP models to classify user reports for detecting faulty computer components, such as CPU, memory, motherboard, video card, and more. In this work, we build a dataset of 341 user reports obtained from many sources. Additionally, through extensive experimental evaluation, our approach achieved an accuracy of 79% on this dataset.
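A minimal sketch of the classification task: TF-IDF features plus a linear classifier mapping free-text fault reports to components. The example reports and labels are invented stand-ins for the 341-report dataset, and the paper's actual NLP models may differ.

```python
# Text classification of fault reports into hardware components.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "screen flickers and artifacts appear in games",
    "system freezes under load, fans spin loudly",
    "blue screen mentioning memory management",
    "no display output after boot beep codes",
]
labels = ["video card", "cpu", "memory", "video card"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(reports, labels)
print(clf.predict(["random crashes when copying large files to RAM disk"]))
```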
2503.16616
Xiaoran Zhang
Xiaoran Zhang, Byung-Woo Hong, Hyoungseob Park, Daniel H. Pak, Anne-Marie Rickmann, Lawrence H. Staib, James S. Duncan, Alex Wong
Progressive Test Time Energy Adaptation for Medical Image Segmentation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We propose a model-agnostic, progressive test-time energy adaptation approach for medical image segmentation. Maintaining model performance across diverse medical datasets is challenging, as distribution shifts arise from inconsistent imaging protocols and patient variations. Unlike domain adaptation methods that require multiple passes through target data - impractical in clinical settings - our approach adapts pretrained models progressively as they process test data. Our method leverages a shape energy model trained on source data, which assigns an energy score at the patch level to segmentation maps: low energy represents in-distribution (accurate) shapes, while high energy signals out-of-distribution (erroneous) predictions. By minimizing this energy score at test time, we refine the segmentation model to align with the target distribution. To validate the effectiveness and adaptability, we evaluated our framework on eight public MRI (bSSFP, T1- and T2-weighted) and X-ray datasets spanning cardiac, spinal cord, and lung segmentation. We consistently outperform baselines both quantitatively and qualitatively.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 18:15:50 GMT" } ]
2025-03-24T00:00:00
[ [ "Zhang", "Xiaoran", "" ], [ "Hong", "Byung-Woo", "" ], [ "Park", "Hyoungseob", "" ], [ "Pak", "Daniel H.", "" ], [ "Rickmann", "Anne-Marie", "" ], [ "Staib", "Lawrence H.", "" ], [ "Duncan", "James S.", "" ], [ "Wong", "Alex", "" ] ]
TITLE: Progressive Test Time Energy Adaptation for Medical Image Segmentation ABSTRACT: We propose a model-agnostic, progressive test-time energy adaptation approach for medical image segmentation. Maintaining model performance across diverse medical datasets is challenging, as distribution shifts arise from inconsistent imaging protocols and patient variations. Unlike domain adaptation methods that require multiple passes through target data - impractical in clinical settings - our approach adapts pretrained models progressively as they process test data. Our method leverages a shape energy model trained on source data, which assigns an energy score at the patch level to segmentation maps: low energy represents in-distribution (accurate) shapes, while high energy signals out-of-distribution (erroneous) predictions. By minimizing this energy score at test time, we refine the segmentation model to align with the target distribution. To validate the effectiveness and adaptability, we evaluated our framework on eight public MRI (bSSFP, T1- and T2-weighted) and X-ray datasets spanning cardiac, spinal cord, and lung segmentation. We consistently outperform baselines both quantitatively and qualitatively.
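The adaptation loop itself is simple to sketch: at test time, take a few gradient steps on the segmentation model's parameters to lower an energy score assigned to its own predictions. Both networks below are toy stand-ins; the paper's patch-level shape energy model is not reproduced.

```python
# Hedged sketch of test-time energy minimization for segmentation.
import torch
import torch.nn as nn

seg_model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 2, 3, padding=1))
energy_model = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                             nn.Linear(8, 1))  # scalar energy per image
opt = torch.optim.SGD(seg_model.parameters(), lr=1e-3)

x = torch.randn(2, 1, 32, 32)  # a batch of test images
for _ in range(3):             # a few adaptation steps per batch
    probs = torch.softmax(seg_model(x), dim=1)
    energy = energy_model(probs).mean()  # low energy = plausible shape
    opt.zero_grad()
    energy.backward()
    opt.step()
print(float(energy))
```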
2503.16628
Moshiur Rahman Tonmoy
Moshiur Rahman Tonmoy, Md. Mithun Hossain, Nilanjan Dey, M. F. Mridha
MobilePlantViT: A Mobile-friendly Hybrid ViT for Generalized Plant Disease Image Classification
Submitted to a journal for peer-review under IEEE Transactions series
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Plant diseases significantly threaten global food security by reducing crop yields and undermining agricultural sustainability. AI-driven automated classification has emerged as a promising solution, with deep learning models demonstrating impressive performance in plant disease identification. However, deploying these models on mobile and edge devices remains challenging due to high computational demands and resource constraints, highlighting the need for lightweight, accurate solutions for accessible smart agriculture systems. To address this, we propose MobilePlantViT, a novel hybrid Vision Transformer (ViT) architecture designed for generalized plant disease classification, which optimizes resource efficiency while maintaining high performance. Extensive experiments across diverse plant disease datasets of varying scales show our model's effectiveness and strong generalizability, achieving test accuracies ranging from 80% to over 99%. Notably, with only 0.69 million parameters, our architecture outperforms the smallest versions of MobileViTv1 and MobileViTv2, despite their higher parameter counts. These results underscore the potential of our approach for real-world, AI-powered automated plant disease classification in sustainable and resource-efficient smart agriculture systems. All code will be available in the GitHub repository: https://github.com/moshiurtonmoy/MobilePlantViT
[ { "version": "v1", "created": "Thu, 20 Mar 2025 18:34:02 GMT" } ]
2025-03-24T00:00:00
[ [ "Tonmoy", "Moshiur Rahman", "" ], [ "Hossain", "Md. Mithun", "" ], [ "Dey", "Nilanjan", "" ], [ "Mridha", "M. F.", "" ] ]
TITLE: MobilePlantViT: A Mobile-friendly Hybrid ViT for Generalized Plant Disease Image Classification ABSTRACT: Plant diseases significantly threaten global food security by reducing crop yields and undermining agricultural sustainability. AI-driven automated classification has emerged as a promising solution, with deep learning models demonstrating impressive performance in plant disease identification. However, deploying these models on mobile and edge devices remains challenging due to high computational demands and resource constraints, highlighting the need for lightweight, accurate solutions for accessible smart agriculture systems. To address this, we propose MobilePlantViT, a novel hybrid Vision Transformer (ViT) architecture designed for generalized plant disease classification, which optimizes resource efficiency while maintaining high performance. Extensive experiments across diverse plant disease datasets of varying scales show our model's effectiveness and strong generalizability, achieving test accuracies ranging from 80% to over 99%. Notably, with only 0.69 million parameters, our architecture outperforms the smallest versions of MobileViTv1 and MobileViTv2, despite their higher parameter counts. These results underscore the potential of our approach for real-world, AI-powered automated plant disease classification in sustainable and resource-efficient smart agriculture systems. All code will be available in the GitHub repository: https://github.com/moshiurtonmoy/MobilePlantViT
2503.16630
Dana Cohen-Bar
Dana Cohen-Bar, Daniel Cohen-Or, Gal Chechik, Yoni Kasten
TriTex: Learning Texture from a Single Mesh via Triplane Semantic Features
Project page: https://danacohen95.github.io/TriTex/
null
null
null
cs.GR cs.CV
http://creativecommons.org/licenses/by/4.0/
As 3D content creation continues to grow, transferring semantic textures between 3D meshes remains a significant challenge in computer graphics. While recent methods leverage text-to-image diffusion models for texturing, they often struggle to preserve the appearance of the source texture during texture transfer. We present TriTex, a novel approach that learns a volumetric texture field from a single textured mesh by mapping semantic features to surface colors. Using an efficient triplane-based architecture, our method enables semantic-aware texture transfer to a novel target mesh. Despite training on just one example, it generalizes effectively to diverse shapes within the same category. Extensive evaluation on our newly created benchmark dataset shows that TriTex achieves superior texture transfer quality and fast inference times compared to existing methods. Our approach advances single-example texture transfer, providing a practical solution for maintaining visual coherence across related 3D models in applications like game development and simulation.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 18:35:03 GMT" } ]
2025-03-24T00:00:00
[ [ "Cohen-Bar", "Dana", "" ], [ "Cohen-Or", "Daniel", "" ], [ "Chechik", "Gal", "" ], [ "Kasten", "Yoni", "" ] ]
TITLE: TriTex: Learning Texture from a Single Mesh via Triplane Semantic Features ABSTRACT: As 3D content creation continues to grow, transferring semantic textures between 3D meshes remains a significant challenge in computer graphics. While recent methods leverage text-to-image diffusion models for texturing, they often struggle to preserve the appearance of the source texture during texture transfer. We present TriTex, a novel approach that learns a volumetric texture field from a single textured mesh by mapping semantic features to surface colors. Using an efficient triplane-based architecture, our method enables semantic-aware texture transfer to a novel target mesh. Despite training on just one example, it generalizes effectively to diverse shapes within the same category. Extensive evaluation on our newly created benchmark dataset shows that TriTex achieves superior texture transfer quality and fast inference times compared to existing methods. Our approach advances single-example texture transfer, providing a practical solution for maintaining visual coherence across related 3D models in applications like game development and simulation.
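A hedged sketch of the triplane feature lookup such an architecture relies on: a 3D query point is projected onto three axis-aligned feature planes, the bilinear samples are aggregated (summation here is an illustrative choice), and a small head maps features to color:

import torch
import torch.nn.functional as F

def triplane_features(planes, pts):
    # planes: three (1, C, H, W) feature maps; pts: (N, 3) query points in [-1, 1]
    xy, xz, yz = pts[:, [0, 1]], pts[:, [0, 2]], pts[:, [1, 2]]
    feats = 0
    for plane, uv in zip(planes, (xy, xz, yz)):
        grid = uv.view(1, -1, 1, 2)  # grid_sample expects (1, H_out, W_out, 2)
        feats = feats + F.grid_sample(plane, grid, align_corners=True)[0, :, :, 0].T
    return feats  # (N, C) aggregated bilinear samples from the three planes

planes = [torch.randn(1, 16, 64, 64) for _ in range(3)]
to_color = torch.nn.Linear(16, 3)          # small head mapping features to RGB
pts = torch.rand(8, 3) * 2 - 1
print(to_color(triplane_features(planes, pts)).shape)  # torch.Size([8, 3])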
2503.16635
Yinchi Zhou
Yinchi Zhou, Huidong Xie, Menghua Xia, Qiong Liu, Bo Zhou, Tianqi Chen, Jun Hou, Liang Guo, Xinyuan Zheng, Hanzhong Wang, Biao Li, Axel Rominger, Kuangyu Shi, Nicha C. Dvorneka, and Chi Liu
Fed-NDIF: A Noise-Embedded Federated Diffusion Model For Low-Count Whole-Body PET Denoising
null
null
null
null
eess.IV cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Low-count positron emission tomography (LCPET) imaging can reduce patients' exposure to radiation but often suffers from increased image noise and reduced lesion detectability, necessitating effective denoising techniques. Diffusion models have shown promise in LCPET denoising for recovering degraded image quality. However, training such models requires large and diverse datasets, which are challenging to obtain in the medical domain. To address data scarcity and privacy concerns, we combine diffusion models with federated learning -- a decentralized training approach where models are trained individually at different sites, and their parameters are aggregated on a central server over multiple iterations. The variation in scanner types and image noise levels within and across institutions poses additional challenges for federated learning in LCPET denoising. In this study, we propose a novel noise-embedded federated learning diffusion model (Fed-NDIF) to address these challenges, leveraging a multicenter dataset and varying count levels. Our approach incorporates liver normalized standard deviation (NSTD) noise embedding into a 2.5D diffusion model and utilizes the Federated Averaging (FedAvg) algorithm to aggregate locally trained models into a global model, which is subsequently fine-tuned on local datasets to optimize performance and obtain personalized models. Extensive validation on datasets from the University of Bern, Ruijin Hospital in Shanghai, and Yale-New Haven Hospital demonstrates the superior performance of our method in enhancing image quality and improving lesion quantification. The Fed-NDIF model shows significant improvements in PSNR, SSIM, and NMSE of the entire 3D volume, as well as enhanced lesion detectability and quantification, compared to local diffusion models and federated UNet-based models.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 18:37:46 GMT" } ]
2025-03-24T00:00:00
[ [ "Zhou", "Yinchi", "" ], [ "Xie", "Huidong", "" ], [ "Xia", "Menghua", "" ], [ "Liu", "Qiong", "" ], [ "Zhou", "Bo", "" ], [ "Chen", "Tianqi", "" ], [ "Hou", "Jun", "" ], [ "Guo", "Liang", "" ], [ "Zheng", "Xinyuan", "" ], [ "Wang", "Hanzhong", "" ], [ "Li", "Biao", "" ], [ "Rominger", "Axel", "" ], [ "Shi", "Kuangyu", "" ], [ "Dvorneka", "Nicha C.", "" ], [ "Liu", "Chi", "" ] ]
TITLE: Fed-NDIF: A Noise-Embedded Federated Diffusion Model For Low-Count Whole-Body PET Denoising ABSTRACT: Low-count positron emission tomography (LCPET) imaging can reduce patients' exposure to radiation but often suffers from increased image noise and reduced lesion detectability, necessitating effective denoising techniques. Diffusion models have shown promise in LCPET denoising for recovering degraded image quality. However, training such models requires large and diverse datasets, which are challenging to obtain in the medical domain. To address data scarcity and privacy concerns, we combine diffusion models with federated learning -- a decentralized training approach where models are trained individually at different sites, and their parameters are aggregated on a central server over multiple iterations. The variation in scanner types and image noise levels within and across institutions poses additional challenges for federated learning in LCPET denoising. In this study, we propose a novel noise-embedded federated learning diffusion model (Fed-NDIF) to address these challenges, leveraging a multicenter dataset and varying count levels. Our approach incorporates liver normalized standard deviation (NSTD) noise embedding into a 2.5D diffusion model and utilizes the Federated Averaging (FedAvg) algorithm to aggregate locally trained models into a global model, which is subsequently fine-tuned on local datasets to optimize performance and obtain personalized models. Extensive validation on datasets from the University of Bern, Ruijin Hospital in Shanghai, and Yale-New Haven Hospital demonstrates the superior performance of our method in enhancing image quality and improving lesion quantification. The Fed-NDIF model shows significant improvements in PSNR, SSIM, and NMSE of the entire 3D volume, as well as enhanced lesion detectability and quantification, compared to local diffusion models and federated UNet-based models.
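A minimal sketch of the server-side Federated Averaging (FedAvg) step named in this abstract; the noise embedding, diffusion model, and local fine-tuning stages are omitted:

import copy
import torch

def fedavg(local_state_dicts, weights):
    # weights: per-site data fractions summing to 1 (standard FedAvg weighting)
    global_state = copy.deepcopy(local_state_dicts[0])
    for key in global_state:
        global_state[key] = sum(w * sd[key] for w, sd in zip(weights, local_state_dicts))
    return global_state

# Toy usage: average two linear layers trained at different "sites".
site_models = [torch.nn.Linear(4, 2).state_dict() for _ in range(2)]
global_state = fedavg(site_models, weights=[0.6, 0.4])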
2503.16639
Thomas Kreutz
Thomas Kreutz, Max M\"uhlh\"auser, Alejandro Sanchez Guinea
Whenever, Wherever: Towards Orchestrating Crowd Simulations with Spatio-Temporal Spawn Dynamics
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Realistic crowd simulations are essential for immersive virtual environments, relying on both individual behaviors (microscopic dynamics) and overall crowd patterns (macroscopic characteristics). While recent data-driven methods like deep reinforcement learning improve microscopic realism, they often overlook critical macroscopic features such as crowd density and flow, which are governed by spatio-temporal spawn dynamics, namely, when and where agents enter a scene. Traditional methods, like random spawn rates, stochastic processes, or fixed schedules, are not guaranteed to capture the underlying complexity and often lack diversity and realism. To address this issue, we propose a novel approach called nTPP-GMM that models spatio-temporal spawn dynamics using Neural Temporal Point Processes (nTPPs) coupled with a spawn-conditional Gaussian Mixture Model (GMM) for agent spawn and goal positions. We evaluate our approach by orchestrating crowd simulations of three diverse real-world datasets with nTPP-GMM. Our experiments demonstrate that orchestration with nTPP-GMM leads to realistic simulations that reflect real-world crowd scenarios and allow crowd analysis.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 18:46:41 GMT" } ]
2025-03-24T00:00:00
[ [ "Kreutz", "Thomas", "" ], [ "Mühlhäuser", "Max", "" ], [ "Guinea", "Alejandro Sanchez", "" ] ]
TITLE: Whenever, Wherever: Towards Orchestrating Crowd Simulations with Spatio-Temporal Spawn Dynamics ABSTRACT: Realistic crowd simulations are essential for immersive virtual environments, relying on both individual behaviors (microscopic dynamics) and overall crowd patterns (macroscopic characteristics). While recent data-driven methods like deep reinforcement learning improve microscopic realism, they often overlook critical macroscopic features such as crowd density and flow, which are governed by spatio-temporal spawn dynamics, namely, when and where agents enter a scene. Traditional methods, like random spawn rates, stochastic processes, or fixed schedules, are not guaranteed to capture the underlying complexity and often lack diversity and realism. To address this issue, we propose a novel approach called nTPP-GMM that models spatio-temporal spawn dynamics using Neural Temporal Point Processes (nTPPs) coupled with a spawn-conditional Gaussian Mixture Model (GMM) for agent spawn and goal positions. We evaluate our approach by orchestrating crowd simulations of three diverse real-world datasets with nTPP-GMM. Our experiments demonstrate that orchestration with nTPP-GMM leads to realistic simulations that reflect real-world crowd scenarios and allow crowd analysis.
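An illustrative sketch of the sampling pattern described above, with a homogeneous Poisson process standing in for the learned neural temporal point process and an unconditional GMM standing in for the spawn-conditional one:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for the learned nTPP: exponential inter-arrival times (a Poisson process).
spawn_times = np.cumsum(rng.exponential(scale=2.0, size=20))

# Stand-in for the spawn-conditional GMM over (spawn_x, spawn_y, goal_x, goal_y),
# fitted here on random data in place of real trajectories.
gmm = GaussianMixture(n_components=3, random_state=0).fit(rng.normal(size=(200, 4)))
positions, _ = gmm.sample(len(spawn_times))
for t, p in zip(spawn_times[:3], positions[:3]):
    print(f"t={t:.2f} spawn={p[:2].round(2)} goal={p[2:].round(2)}")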
2503.16661
Nikos Kanakaris
Alejandro Ariza-Casabona, Nikos Kanakaris, Daniele Malitesta
ContextGNN goes to Elliot: Towards Benchmarking Relational Deep Learning for Static Link Prediction (aka Personalized Item Recommendation)
null
null
null
null
cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
Relational deep learning (RDL) stands among the most exciting advances in machine learning for relational databases, leveraging the representational power of message passing graph neural networks (GNNs) to derive useful knowledge and run predictive tasks on tables connected through primary-to-foreign key links. The RDL paradigm has lately been successfully applied to recommendation through its most recent representative architecture, namely ContextGNN. While acknowledging ContextGNN's improved performance on real-world recommendation datasets and tasks, preliminary tests for the more traditional static link prediction task (aka personalized item recommendation) on the popular Amazon Book dataset have demonstrated that ContextGNN still has room for improvement compared to other state-of-the-art GNN-based recommender systems. To this end, in this paper, we integrate ContextGNN within Elliot, a popular framework for reproducibility and benchmarking analyses, counting around 50 state-of-the-art recommendation models from the literature to date. On this basis, we run preliminary experiments on three standard recommendation datasets and against six state-of-the-art GNN-based recommender systems, confirming trends similar to those observed by the authors in their original paper. The code is publicly available on GitHub: https://github.com/danielemalitesta/Rel-DeepLearning-RecSys.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 19:17:09 GMT" } ]
2025-03-24T00:00:00
[ [ "Ariza-Casabona", "Alejandro", "" ], [ "Kanakaris", "Nikos", "" ], [ "Malitesta", "Daniele", "" ] ]
TITLE: ContextGNN goes to Elliot: Towards Benchmarking Relational Deep Learning for Static Link Prediction (aka Personalized Item Recommendation) ABSTRACT: Relational deep learning (RDL) stands among the most exciting advances in machine learning for relational databases, leveraging the representational power of message passing graph neural networks (GNNs) to derive useful knowledge and run predictive tasks on tables connected through primary-to-foreign key links. The RDL paradigm has lately been successfully applied to recommendation through its most recent representative architecture, namely ContextGNN. While acknowledging ContextGNN's improved performance on real-world recommendation datasets and tasks, preliminary tests for the more traditional static link prediction task (aka personalized item recommendation) on the popular Amazon Book dataset have demonstrated that ContextGNN still has room for improvement compared to other state-of-the-art GNN-based recommender systems. To this end, in this paper, we integrate ContextGNN within Elliot, a popular framework for reproducibility and benchmarking analyses, counting around 50 state-of-the-art recommendation models from the literature to date. On this basis, we run preliminary experiments on three standard recommendation datasets and against six state-of-the-art GNN-based recommender systems, confirming trends similar to those observed by the authors in their original paper. The code is publicly available on GitHub: https://github.com/danielemalitesta/Rel-DeepLearning-RecSys.
2503.16664
Martin Kosteln\'ik
Martin Kosteln\'ik, Karel Bene\v{s}, Michal Hradi\v{s}
TextBite: A Historical Czech Document Dataset for Logical Page Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Logical page segmentation is an important step in document analysis, enabling better semantic representations, information retrieval, and text understanding. Previous approaches define logical segmentation either through text or geometric objects, relying on OCR or precise geometry. To avoid the need for OCR, we define the task purely as segmentation in the image domain. Furthermore, to ensure the evaluation remains unaffected by geometrical variations that do not impact text segmentation, we propose to use only foreground text pixels in the evaluation metric and disregard all background pixels. To support research in logical document segmentation, we introduce TextBite, a dataset of historical Czech documents spanning the 18th to 20th centuries, featuring diverse layouts from newspapers, dictionaries, and handwritten records. The dataset comprises 8,449 page images with 78,863 annotated segments of logically and thematically coherent text. We propose a set of baseline methods combining text region detection and relation prediction. The dataset, baselines and evaluation framework can be accessed at https://github.com/DCGM/textbite-dataset.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 19:19:12 GMT" } ]
2025-03-24T00:00:00
[ [ "Kostelník", "Martin", "" ], [ "Beneš", "Karel", "" ], [ "Hradiš", "Michal", "" ] ]
TITLE: TextBite: A Historical Czech Document Dataset for Logical Page Segmentation ABSTRACT: Logical page segmentation is an important step in document analysis, enabling better semantic representations, information retrieval, and text understanding. Previous approaches define logical segmentation either through text or geometric objects, relying on OCR or precise geometry. To avoid the need for OCR, we define the task purely as segmentation in the image domain. Furthermore, to ensure the evaluation remains unaffected by geometrical variations that do not impact text segmentation, we propose to use only foreground text pixels in the evaluation metric and disregard all background pixels. To support research in logical document segmentation, we introduce TextBite, a dataset of historical Czech documents spanning the 18th to 20th centuries, featuring diverse layouts from newspapers, dictionaries, and handwritten records. The dataset comprises 8,449 page images with 78,863 annotated segments of logically and thematically coherent text. We propose a set of baseline methods combining text region detection and relation prediction. The dataset, baselines and evaluation framework can be accessed at https://github.com/DCGM/textbite-dataset.
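A hedged sketch of the evaluation idea stated in this abstract, scoring segment agreement only on foreground text pixels; the function and the exact agreement measure are illustrative, not the paper's metric:

import numpy as np

def foreground_agreement(pred_segments, gt_segments, text_mask):
    # Score segment ids only where text_mask marks foreground text pixels,
    # so background geometry cannot affect the result.
    fg = text_mask.astype(bool)
    return float((pred_segments[fg] == gt_segments[fg]).mean())

# Toy 4x4 page: segment-id maps plus a foreground text mask.
pred = np.array([[1, 1, 2, 2]] * 4)
gt = np.array([[1, 1, 1, 2]] * 4)
mask = np.array([[1, 1, 0, 1]] * 4)
print(foreground_agreement(pred, gt, mask))  # 1.0: the disagreeing column is background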
2503.16667
Oriol Vendrell-Gallart
Oriol Vendrell-Gallart, Nima Negarandeh, Zahra Zanjani Foumani, Mahsa Amiri, Lorenzo Valdevit, Ramin Bostanabad
A preliminary data fusion study to assess the feasibility of Foundation Process-Property Models in Laser Powder Bed Fusion
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Foundation models are at the forefront of an increasing number of critical applications. For technologies such as additive manufacturing (AM), these models have the potential to dramatically accelerate process optimization and, in turn, the design of next-generation materials. A major challenge that impedes the construction of foundation process-property models is data scarcity. To understand the impact of this challenge, and since foundation models rely on data fusion, in this work we conduct controlled experiments where we focus on the transferability of information across different material systems and properties. More specifically, we generate experimental datasets from 17-4 PH and 316L stainless steels (SSs) in Laser Powder Bed Fusion (LPBF) where we measure the effect of five process parameters on porosity and hardness. We then leverage Gaussian processes (GPs) for process-property modeling in various configurations to test if knowledge about one material system or property can be leveraged to build more accurate machine learning models for other material systems or properties. Through extensive cross-validation studies and probing the GPs' interpretable hyperparameters, we study the intricate relation among data size and dimensionality, complexity of the process-property relations, noise, and characteristics of machine learning models. Our findings highlight the need for structured learning approaches that incorporate domain knowledge in building foundation process-property models rather than relying on uninformed data fusion in data-limited applications.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 19:29:38 GMT" } ]
2025-03-24T00:00:00
[ [ "Vendrell-Gallart", "Oriol", "" ], [ "Negarandeh", "Nima", "" ], [ "Foumani", "Zahra Zanjani", "" ], [ "Amiri", "Mahsa", "" ], [ "Valdevit", "Lorenzo", "" ], [ "Bostanabad", "Ramin", "" ] ]
TITLE: A preliminary data fusion study to assess the feasibility of Foundation Process-Property Models in Laser Powder Bed Fusion ABSTRACT: Foundation models are at the forefront of an increasing number of critical applications. For technologies such as additive manufacturing (AM), these models have the potential to dramatically accelerate process optimization and, in turn, the design of next-generation materials. A major challenge that impedes the construction of foundation process-property models is data scarcity. To understand the impact of this challenge, and since foundation models rely on data fusion, in this work we conduct controlled experiments where we focus on the transferability of information across different material systems and properties. More specifically, we generate experimental datasets from 17-4 PH and 316L stainless steels (SSs) in Laser Powder Bed Fusion (LPBF) where we measure the effect of five process parameters on porosity and hardness. We then leverage Gaussian processes (GPs) for process-property modeling in various configurations to test if knowledge about one material system or property can be leveraged to build more accurate machine learning models for other material systems or properties. Through extensive cross-validation studies and probing the GPs' interpretable hyperparameters, we study the intricate relation among data size and dimensionality, complexity of the process-property relations, noise, and characteristics of machine learning models. Our findings highlight the need for structured learning approaches that incorporate domain knowledge in building foundation process-property models rather than relying on uninformed data fusion in data-limited applications.
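A minimal sketch of GP-based process-property modeling with interpretable per-parameter (ARD) lengthscales, as the abstract describes probing; the five inputs and the toy porosity response are placeholders for measured LPBF data:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(size=(30, 5))                              # five normalized process parameters
y = X @ rng.normal(size=5) + 0.05 * rng.normal(size=30)    # toy porosity response

kernel = RBF(length_scale=np.ones(5)) + WhiteKernel()      # ARD lengthscales + noise term
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
print(gp.kernel_)  # fitted per-parameter lengthscales hint at each input's relevance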
2503.16669
Yichen Huang
Yichen Huang, Zachary Novack, Koichi Saito, Jiatong Shi, Shinji Watanabe, Yuki Mitsufuji, John Thickstun, Chris Donahue
Aligning Text-to-Music Evaluation with Human Preferences
null
null
null
null
cs.SD cs.AI eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite significant recent advances in generative acoustic text-to-music (TTM) modeling, robust evaluation of these models lags behind, relying in particular on the popular Fr\'echet Audio Distance (FAD). In this work, we rigorously study the design space of reference-based divergence metrics for evaluating TTM models through (1) designing four synthetic meta-evaluations to measure sensitivity to particular musical desiderata, and (2) collecting and evaluating on MusicPrefs, the first open-source dataset of human preferences for TTM systems. We find that not only is the standard FAD setup inconsistent on both synthetic and human preference data, but that nearly all existing metrics fail to effectively capture desiderata, and are only weakly correlated with human perception. We propose a new metric, the MAUVE Audio Divergence (MAD), computed on representations from a self-supervised audio embedding model. We find that this metric effectively captures diverse musical desiderata (average rank correlation 0.84 for MAD vs. 0.49 for FAD) and also correlates more strongly with MusicPrefs (0.62 vs. 0.14).
[ { "version": "v1", "created": "Thu, 20 Mar 2025 19:31:04 GMT" } ]
2025-03-24T00:00:00
[ [ "Huang", "Yichen", "" ], [ "Novack", "Zachary", "" ], [ "Saito", "Koichi", "" ], [ "Shi", "Jiatong", "" ], [ "Watanabe", "Shinji", "" ], [ "Mitsufuji", "Yuki", "" ], [ "Thickstun", "John", "" ], [ "Donahue", "Chris", "" ] ]
TITLE: Aligning Text-to-Music Evaluation with Human Preferences ABSTRACT: Despite significant recent advances in generative acoustic text-to-music (TTM) modeling, robust evaluation of these models lags behind, relying in particular on the popular Fr\'echet Audio Distance (FAD). In this work, we rigorously study the design space of reference-based divergence metrics for evaluating TTM models through (1) designing four synthetic meta-evaluations to measure sensitivity to particular musical desiderata, and (2) collecting and evaluating on MusicPrefs, the first open-source dataset of human preferences for TTM systems. We find that not only is the standard FAD setup inconsistent on both synthetic and human preference data, but that nearly all existing metrics fail to effectively capture desiderata, and are only weakly correlated with human perception. We propose a new metric, the MAUVE Audio Divergence (MAD), computed on representations from a self-supervised audio embedding model. We find that this metric effectively captures diverse musical desiderata (average rank correlation 0.84 for MAD vs. 0.49 for FAD) and also correlates more strongly with MusicPrefs (0.62 vs. 0.14).
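A hedged sketch of how a MAUVE-style divergence between reference and generated audio embeddings could be computed with the open-source mauve-text package; the random arrays stand in for embeddings from a self-supervised audio encoder, and this is not the paper's exact MAD implementation:

import numpy as np
import mauve  # pip install mauve-text

# Stand-ins for self-supervised embeddings of reference audio and TTM generations.
ref_emb = np.random.randn(256, 512).astype(np.float32)
gen_emb = np.random.randn(256, 512).astype(np.float32)

out = mauve.compute_mauve(p_features=ref_emb, q_features=gen_emb)
print(out.mauve)  # similarity score between the two embedding distributions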
2503.16674
Molly Kennedy
Molly Kennedy, Ayyoob Imani, Timo Spinde, Hinrich Sch\"utze
Through the LLM Looking Glass: A Socratic Self-Assessment of Donkeys, Elephants, and Markets
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While detecting and avoiding bias in LLM-generated text is becoming increasingly important, media bias often remains subtle and subjective, making it particularly difficult to identify and mitigate. In this study, we assess media bias in LLM-generated content and LLMs' ability to detect subtle ideological bias. We conduct this evaluation using two datasets, PoliGen and EconoLex, covering political and economic discourse, respectively. We evaluate eight widely used LLMs by prompting them to generate articles and analyze their ideological preferences via self-assessment. By using self-assessment, the study aims to directly measure the models' biases rather than relying on external interpretations, thereby minimizing subjective judgments about media bias. Our results reveal a consistent preference for Democratic over Republican positions across all models. Conversely, in economic topics, biases vary among Western LLMs, while those developed in China lean more strongly toward socialism.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 19:40:40 GMT" } ]
2025-03-24T00:00:00
[ [ "Kennedy", "Molly", "" ], [ "Imani", "Ayyoob", "" ], [ "Spinde", "Timo", "" ], [ "Schütze", "Hinrich", "" ] ]
TITLE: Through the LLM Looking Glass: A Socratic Self-Assessment of Donkeys, Elephants, and Markets ABSTRACT: While detecting and avoiding bias in LLM-generated text is becoming increasingly important, media bias often remains subtle and subjective, making it particularly difficult to identify and mitigate. In this study, we assess media bias in LLM-generated content and LLMs' ability to detect subtle ideological bias. We conduct this evaluation using two datasets, PoliGen and EconoLex, covering political and economic discourse, respectively. We evaluate eight widely used LLMs by prompting them to generate articles and analyze their ideological preferences via self-assessment. By using self-assessment, the study aims to directly measure the models' biases rather than relying on external interpretations, thereby minimizing subjective judgments about media bias. Our results reveal a consistent preference for Democratic over Republican positions across all models. Conversely, in economic topics, biases vary among Western LLMs, while those developed in China lean more strongly toward socialism.
2503.16693
Zhan Cheng
Zhan Cheng, Bolin Shen, Tianming Sha, Yuan Gao, Shibo Li and Yushun Dong
ATOM: A Framework of Detecting Query-Based Model Extraction Attacks for Graph Neural Networks
null
null
null
null
cs.LG cs.CR
http://creativecommons.org/licenses/by/4.0/
Graph Neural Networks (GNNs) have gained traction in Graph-based Machine Learning as a Service (GMLaaS) platforms, yet they remain vulnerable to graph-based model extraction attacks (MEAs), where adversaries reconstruct surrogate models by querying the victim model. Existing defense mechanisms, such as watermarking and fingerprinting, suffer from poor real-time performance, susceptibility to evasion, or reliance on post-attack verification, making them inadequate for handling the dynamic characteristics of graph-based MEA variants. To address these limitations, we propose ATOM, a novel real-time MEA detection framework tailored for GNNs. ATOM integrates sequential modeling and reinforcement learning to dynamically detect evolving attack patterns, while leveraging $k$-core embedding to capture structural properties, enhancing detection precision. Furthermore, we provide theoretical analysis to characterize query behaviors and optimize detection strategies. Extensive experiments on multiple real-world datasets demonstrate that ATOM outperforms existing approaches in detection performance and remains stable across different time steps, thereby offering a more effective defense mechanism for GMLaaS environments.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 20:25:32 GMT" } ]
2025-03-24T00:00:00
[ [ "Cheng", "Zhan", "" ], [ "Shen", "Bolin", "" ], [ "Sha", "Tianming", "" ], [ "Gao", "Yuan", "" ], [ "Li", "Shibo", "" ], [ "Dong", "Yushun", "" ] ]
TITLE: ATOM: A Framework of Detecting Query-Based Model Extraction Attacks for Graph Neural Networks ABSTRACT: Graph Neural Networks (GNNs) have gained traction in Graph-based Machine Learning as a Service (GMLaaS) platforms, yet they remain vulnerable to graph-based model extraction attacks (MEAs), where adversaries reconstruct surrogate models by querying the victim model. Existing defense mechanisms, such as watermarking and fingerprinting, suffer from poor real-time performance, susceptibility to evasion, or reliance on post-attack verification, making them inadequate for handling the dynamic characteristics of graph-based MEA variants. To address these limitations, we propose ATOM, a novel real-time MEA detection framework tailored for GNNs. ATOM integrates sequential modeling and reinforcement learning to dynamically detect evolving attack patterns, while leveraging $k$-core embedding to capture structural properties, enhancing detection precision. Furthermore, we provide theoretical analysis to characterize query behaviors and optimize detection strategies. Extensive experiments on multiple real-world datasets demonstrate that ATOM outperforms existing approaches in detection performance and remains stable across different time steps, thereby offering a more effective defense mechanism for GMLaaS environments.
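A small sketch of the k-core structural signal mentioned above: each queried node's core number is a cheap feature a sequential detector could consume (the detector itself, and the rest of the framework, are omitted):

import networkx as nx

G = nx.karate_club_graph()
core = nx.core_number(G)     # k-core index of every node in the victim graph
queried = [0, 33, 2]         # a hypothetical attacker's query sequence
print([core[v] for v in queried])  # per-query structural features over time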
2503.16718
Massa Baali
Massa Baali, Xiang Li, Hao Chen, Rita Singh, Bhiksha Raj
CAARMA: Class Augmentation with Adversarial Mixup Regularization
null
null
null
null
cs.SD cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Speaker verification is a typical zero-shot learning task, where inference of unseen classes is performed by comparing embeddings of test instances to known examples. The models performing inference must hence naturally generate embeddings that cluster same-class instances compactly, while maintaining separation across classes. In order to learn to do so, they are typically trained on a large number of classes (speakers), often using specialized losses. However, real-world speaker datasets often lack the class diversity needed to effectively learn this in a generalizable manner. We introduce CAARMA, a class augmentation framework that addresses this problem by generating synthetic classes through data mixing in the embedding space, expanding the number of training classes. To ensure the authenticity of the synthetic classes, we adopt a novel adversarial refinement mechanism that minimizes categorical distinctions between synthetic and real classes. We evaluate CAARMA on multiple speaker verification tasks, as well as other representative zero-shot comparison-based speech analysis tasks, and obtain consistent improvements: our framework demonstrates a significant improvement of 8\% over all baseline models. Code for CAARMA will be released.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 21:41:16 GMT" } ]
2025-03-24T00:00:00
[ [ "Baali", "Massa", "" ], [ "Li", "Xiang", "" ], [ "Chen", "Hao", "" ], [ "Singh", "Rita", "" ], [ "Raj", "Bhiksha", "" ] ]
TITLE: CAARMA: Class Augmentation with Adversarial Mixup Regularization ABSTRACT: Speaker verification is a typical zero-shot learning task, where inference of unseen classes is performed by comparing embeddings of test instances to known examples. The models performing inference must hence naturally generate embeddings that cluster same-class instances compactly, while maintaining separation across classes. In order to learn to do so, they are typically trained on a large number of classes (speakers), often using specialized losses. However, real-world speaker datasets often lack the class diversity needed to effectively learn this in a generalizable manner. We introduce CAARMA, a class augmentation framework that addresses this problem by generating synthetic classes through data mixing in the embedding space, expanding the number of training classes. To ensure the authenticity of the synthetic classes, we adopt a novel adversarial refinement mechanism that minimizes categorical distinctions between synthetic and real classes. We evaluate CAARMA on multiple speaker verification tasks, as well as other representative zero-shot comparison-based speech analysis tasks, and obtain consistent improvements: our framework demonstrates a significant improvement of 8\% over all baseline models. Code for CAARMA will be released.
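A minimal sketch of embedding-space mixup for synthesizing an extra class, as the abstract describes; the Beta mixing scheme and dimensions are illustrative assumptions, and the adversarial refinement stage is omitted:

import torch

def synth_class_embeddings(emb_a, emb_b, n=16, alpha=0.4):
    # Mix embeddings of two real speakers to populate one synthetic class.
    lam = torch.distributions.Beta(alpha, alpha).sample((n, 1))
    idx_a = torch.randint(len(emb_a), (n,))
    idx_b = torch.randint(len(emb_b), (n,))
    return lam * emb_a[idx_a] + (1 - lam) * emb_b[idx_b]

fake_class = synth_class_embeddings(torch.randn(50, 192), torch.randn(50, 192))
print(fake_class.shape)  # torch.Size([16, 192])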
2503.16724
Zhaoxin Li
Zhaoxin Li, Zhang Xi-Jia, Batuhan Altundas, Letian Chen, Rohan Paleja, Matthew Gombolay
Towards Automated Semantic Interpretability in Reinforcement Learning via Vision-Language Models
null
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semantic Interpretability in Reinforcement Learning (RL) enables transparency, accountability, and safer deployment by making the agent's decisions understandable and verifiable. Achieving this, however, requires a feature space composed of human-understandable concepts, which traditionally rely on human specification and fail to generalize to unseen environments. In this work, we introduce Semantically Interpretable Reinforcement Learning with Vision-Language Models Empowered Automation (SILVA), an automated framework that leverages pre-trained vision-language models (VLMs) for semantic feature extraction and interpretable tree-based models for policy optimization. SILVA first queries a VLM to identify relevant semantic features for an unseen environment, then extracts these features from the environment. Finally, it trains an Interpretable Control Tree via RL, mapping the extracted features to actions in a transparent and interpretable manner. To address the computational inefficiency of extracting features directly with VLMs, we develop a feature extraction pipeline that generates a dataset for training a lightweight convolutional network, which is subsequently used during RL. By leveraging VLMs to automate tree-based RL, SILVA removes the reliance on human annotation previously required by interpretable models while also overcoming the inability of VLMs alone to generate valid robot policies, enabling semantically interpretable reinforcement learning without a human in the loop.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 21:53:19 GMT" } ]
2025-03-24T00:00:00
[ [ "Li", "Zhaoxin", "" ], [ "Xi-Jia", "Zhang", "" ], [ "Altundas", "Batuhan", "" ], [ "Chen", "Letian", "" ], [ "Paleja", "Rohan", "" ], [ "Gombolay", "Matthew", "" ] ]
TITLE: Towards Automated Semantic Interpretability in Reinforcement Learning via Vision-Language Models ABSTRACT: Semantic Interpretability in Reinforcement Learning (RL) enables transparency, accountability, and safer deployment by making the agent's decisions understandable and verifiable. Achieving this, however, requires a feature space composed of human-understandable concepts, which traditionally rely on human specification and fail to generalize to unseen environments. In this work, we introduce Semantically Interpretable Reinforcement Learning with Vision-Language Models Empowered Automation (SILVA), an automated framework that leverages pre-trained vision-language models (VLMs) for semantic feature extraction and interpretable tree-based models for policy optimization. SILVA first queries a VLM to identify relevant semantic features for an unseen environment, then extracts these features from the environment. Finally, it trains an Interpretable Control Tree via RL, mapping the extracted features to actions in a transparent and interpretable manner. To address the computational inefficiency of extracting features directly with VLMs, we develop a feature extraction pipeline that generates a dataset for training a lightweight convolutional network, which is subsequently used during RL. By leveraging VLMs to automate tree-based RL, SILVA removes the reliance on human annotation previously required by interpretable models while also overcoming the inability of VLMs alone to generate valid robot policies, enabling semantically interpretable reinforcement learning without a human in the loop.
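An illustrative sketch of the transparency a tree-based policy provides: a shallow decision tree over named semantic features prints as readable rules. The feature names and the supervised stand-in for RL training below are hypothetical:

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["dist_to_goal", "obstacle_left", "obstacle_right"]
feats = rng.uniform(size=(500, 3))                   # semantic features per state
actions = (feats[:, 1] > feats[:, 2]).astype(int)    # toy labels standing in for RL

tree = DecisionTreeClassifier(max_depth=3).fit(feats, actions)
print(export_text(tree, feature_names=feature_names))  # human-readable policy rules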
2503.16730
Srijan Sengupta
Subhankar Bhadra and Marianna Pensky and Srijan Sengupta
Scalable community detection in massive networks via predictive assignment
null
null
null
null
stat.ME cs.SI physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Massive network datasets are becoming increasingly common in scientific applications. Existing community detection methods encounter significant computational challenges for such massive networks for two reasons. First, the full network needs to be stored and analyzed on a single server, leading to high memory costs. Second, existing methods typically use matrix factorization or iterative optimization using the full network, resulting in high runtimes. We propose a strategy called \textit{predictive assignment} to enable computationally efficient community detection while ensuring statistical accuracy. The core idea is to avoid large-scale matrix computations by breaking up the task into a smaller matrix computation plus a large number of vector computations that can be carried out in parallel. Under the proposed method, community detection is carried out on a small subgraph to estimate the relevant model parameters. Next, each remaining node is assigned to a community based on these estimates. We prove that predictive assignment achieves strong consistency under the stochastic blockmodel and its degree-corrected version. We also demonstrate the empirical performance of predictive assignment on simulated networks and two large real-world datasets: DBLP (Digital Bibliography \& Library Project), a computer science bibliographical database, and the Twitch Gamers Social Network.
[ { "version": "v1", "created": "Thu, 20 Mar 2025 22:14:06 GMT" } ]
2025-03-24T00:00:00
[ [ "Bhadra", "Subhankar", "" ], [ "Pensky", "Marianna", "" ], [ "Sengupta", "Srijan", "" ] ]
TITLE: Scalable community detection in massive networks via predictive assignment ABSTRACT: Massive network datasets are becoming increasingly common in scientific applications. Existing community detection methods encounter significant computational challenges for such massive networks for two reasons. First, the full network needs to be stored and analyzed on a single server, leading to high memory costs. Second, existing methods typically use matrix factorization or iterative optimization using the full network, resulting in high runtimes. We propose a strategy called \textit{predictive assignment} to enable computationally efficient community detection while ensuring statistical accuracy. The core idea is to avoid large-scale matrix computations by breaking up the task into a smaller matrix computation plus a large number of vector computations that can be carried out in parallel. Under the proposed method, community detection is carried out on a small subgraph to estimate the relevant model parameters. Next, each remaining node is assigned to a community based on these estimates. We prove that predictive assignment achieves strong consistency under the stochastic blockmodel and its degree-corrected version. We also demonstrate the empirical performance of predictive assignment on simulated networks and two large real-world datasets: DBLP (Digital Bibliography \& Library Project), a computer science bibliographical database, and the Twitch Gamers Social Network.
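A hedged sketch of the two-step strategy outlined above: fit communities on a small subgraph, then assign each remaining node via simple per-node computations. Spectral clustering below is a stand-in for the paper's blockmodel fitting, and the majority-vote assignment rule is illustrative:

import numpy as np
import networkx as nx
from sklearn.cluster import SpectralClustering

G = nx.planted_partition_graph(2, 100, 0.15, 0.02, seed=0)   # toy 2-block network
sub = list(range(0, 200, 4))                                 # small subsample of nodes
A_sub = nx.to_numpy_array(G, nodelist=sub)

# Step 1: community detection on the small subgraph only.
labels_sub = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(A_sub + 1e-6)

# Step 2: assign each remaining node to the community it connects to most;
# each assignment is an independent vector computation, so this parallelizes.
sub_set = set(sub)
for v in list(G)[:5]:
    if v in sub_set:
        continue
    votes = np.zeros(2)
    for u, lab in zip(sub, labels_sub):
        votes[lab] += G.has_edge(v, u)
    print(v, "->", int(votes.argmax()))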