Dataset schema (17 columns):

Column           Type                 Range / values
id               string               length 9-16
submitter        string               length 3-64
authors          string               length 5-6.63k
title            string               length 7-245
comments         string               length 1-482
journal-ref      string               length 4-382
doi              string               length 9-151
report-no        string (classes)     984 distinct values
categories       string               length 5-108
license          string (classes)     9 distinct values
abstract         string               length 83-3.41k
versions         list                 length 1-20
update_date      timestamp[s]         2007-05-23 00:00:00 to 2025-04-11 00:00:00
authors_parsed   list                 length 1-427
prompt           string               length 166-3.49k
label            string (classes)     2 distinct values
prob             float64              0.5-0.98
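As a quick orientation before the records below, here is a minimal sketch of how a dataset with this schema might be loaded and filtered with the Hugging Face `datasets` library. The repository ID is a placeholder, not the dataset's actual Hub location.

```python
# Minimal sketch: loading and filtering a dataset with the schema above.
# The repository ID is a placeholder, not the dataset's actual Hub location.
from datasets import load_dataset

ds = load_dataset("user/arxiv-new-dataset-labels", split="train")  # hypothetical repo ID

# Keep only records the classifier marks as introducing a new dataset
# with reasonably high confidence (the `prob` column spans 0.5-0.98).
new_datasets = ds.filter(lambda r: r["label"] == "new_dataset" and r["prob"] >= 0.9)

for record in new_datasets.select(range(min(3, len(new_datasets)))):
    print(record["id"], "-", record["title"])
```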
2503.08207
Denan Li
Denan Li, Jiyuan Yang, Xiangkai Chen, Lintao Yu, Shi Liu
To Use or Not to Use a Universal Force Field
21 pages, 5 figures
null
null
null
physics.comp-ph cond-mat.mtrl-sci cs.LG
http://creativecommons.org/licenses/by/4.0/
Artificial intelligence (AI) is revolutionizing scientific research, particularly in computational materials science, by enabling more accurate and efficient simulations. Machine learning force fields (MLFFs) have emerged as powerful tools for molecular dynamics (MD) simulations, potentially offering quantum-mechanical accuracy with the efficiency of classical MD. This Perspective evaluates the viability of universal MLFFs for simulating complex materials systems from the standpoint of a potential practitioner. Using the temperature-driven ferroelectric-paraelectric phase transition of PbTiO$_3$ as a benchmark, we assess leading universal force fields, including CHGNet, MACE, M3GNet, and GPTFF, alongside specialized models like UniPero. While universal MLFFs trained on PBE-derived datasets perform well in predicting equilibrium properties, they largely fail to capture realistic finite-temperature phase transitions under constant-pressure MD, often exhibiting unphysical instabilities. These shortcomings stem from inherited biases in exchange-correlation functionals and limited generalization to anharmonic interactions governing dynamic behavior. However, fine-tuning universal models or employing system-specific MLFFs like UniPero successfully restores predictive accuracy. We advocate for hybrid approaches combining universal pretraining with targeted optimization, improved error quantification frameworks, and community-driven benchmarks to advance MLFFs as robust tools for computational materials discovery.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 09:23:01 GMT" } ]
2025-03-12T00:00:00
[ [ "Li", "Denan", "" ], [ "Yang", "Jiyuan", "" ], [ "Chen", "Xiangkai", "" ], [ "Yu", "Lintao", "" ], [ "Liu", "Shi", "" ] ]
TITLE: To Use or Not to Use a Universal Force Field ABSTRACT: Artificial intelligence (AI) is revolutionizing scientific research, particularly in computational materials science, by enabling more accurate and efficient simulations. Machine learning force fields (MLFFs) have emerged as powerful tools for molecular dynamics (MD) simulations, potentially offering quantum-mechanical accuracy with the efficiency of classical MD. This Perspective evaluates the viability of universal MLFFs for simulating complex materials systems from the standpoint of a potential practitioner. Using the temperature-driven ferroelectric-paraelectric phase transition of PbTiO$_3$ as a benchmark, we assess leading universal force fields, including CHGNet, MACE, M3GNet, and GPTFF, alongside specialized models like UniPero. While universal MLFFs trained on PBE-derived datasets perform well in predicting equilibrium properties, they largely fail to capture realistic finite-temperature phase transitions under constant-pressure MD, often exhibiting unphysical instabilities. These shortcomings stem from inherited biases in exchange-correlation functionals and limited generalization to anharmonic interactions governing dynamic behavior. However, fine-tuning universal models or employing system-specific MLFFs like UniPero successfully restores predictive accuracy. We advocate for hybrid approaches combining universal pretraining with targeted optimization, improved error quantification frameworks, and community-driven benchmarks to advance MLFFs as robust tools for computational materials discovery.
no_new_dataset
0.944791
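To make the benchmark in this record concrete, the sketch below shows a single-point evaluation of a pretrained universal MLFF on cubic PbTiO$_3$ with ASE. It assumes the `mace-torch` package's `mace_mp` calculator; the lattice constant is an approximate experimental value, and this is an illustration, not the paper's actual protocol.

```python
# Hedged sketch: single-point evaluation of a universal MLFF on cubic PbTiO3
# with ASE. Assumes the `mace-torch` package; the lattice constant is an
# approximate experimental cubic value, not one taken from the paper.
from ase import Atoms
from mace.calculators import mace_mp

a = 3.97  # approximate cubic lattice constant of PbTiO3 (Angstrom)
ptio3 = Atoms(
    "PbTiO3",
    scaled_positions=[
        (0.0, 0.0, 0.0),   # Pb at the cube corner
        (0.5, 0.5, 0.5),   # Ti at the body center
        (0.5, 0.5, 0.0),   # O at the face centers
        (0.5, 0.0, 0.5),
        (0.0, 0.5, 0.5),
    ],
    cell=[a, a, a],
    pbc=True,
)

ptio3.calc = mace_mp(model="medium", device="cpu")  # pretrained universal MACE
print("energy (eV):", ptio3.get_potential_energy())
print("max |force| (eV/A):", abs(ptio3.get_forces()).max())
```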
2503.08217
Guangting Zheng
Guangting Zheng, Jiajun Deng, Xiaomeng Chu, Yu Yuan, Houqiang Li and Yanyong Zhang
S3R-GS: Streamlining the Pipeline for Large-Scale Street Scene Reconstruction
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, 3D Gaussian Splatting (3DGS) has reshaped the field of photorealistic 3D reconstruction, achieving impressive rendering quality and speed. However, when applied to large-scale street scenes, existing methods suffer from rapidly escalating per-viewpoint reconstruction costs as scene size increases, leading to significant computational overhead. After revisiting the conventional pipeline, we identify three key factors accounting for this issue: unnecessary local-to-global transformations, excessive 3D-to-2D projections, and inefficient rendering of distant content. To address these challenges, we propose S3R-GS, a 3DGS framework that Streamlines the pipeline for large-scale Street Scene Reconstruction, effectively mitigating these limitations. Moreover, most existing street 3DGS methods rely on ground-truth 3D bounding boxes to separate dynamic and static components, but 3D bounding boxes are difficult to obtain, limiting real-world applicability. To address this, we propose an alternative solution with 2D boxes, which are easier to annotate or can be predicted by off-the-shelf vision foundation models. Such designs together make S3R-GS readily adapt to large, in-the-wild scenarios. Extensive experiments demonstrate that S3R-GS enhances rendering quality and significantly accelerates reconstruction. Remarkably, when applied to videos from the challenging Argoverse2 dataset, it achieves state-of-the-art PSNR and SSIM, reducing reconstruction time to below 50%--and even 20%--of competing methods.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 09:37:13 GMT" } ]
2025-03-12T00:00:00
[ [ "Zheng", "Guangting", "" ], [ "Deng", "Jiajun", "" ], [ "Chu", "Xiaomeng", "" ], [ "Yuan", "Yu", "" ], [ "Li", "Houqiang", "" ], [ "Zhang", "Yanyong", "" ] ]
TITLE: S3R-GS: Streamlining the Pipeline for Large-Scale Street Scene Reconstruction ABSTRACT: Recently, 3D Gaussian Splatting (3DGS) has reshaped the field of photorealistic 3D reconstruction, achieving impressive rendering quality and speed. However, when applied to large-scale street scenes, existing methods suffer from rapidly escalating per-viewpoint reconstruction costs as scene size increases, leading to significant computational overhead. After revisiting the conventional pipeline, we identify three key factors accounting for this issue: unnecessary local-to-global transformations, excessive 3D-to-2D projections, and inefficient rendering of distant content. To address these challenges, we propose S3R-GS, a 3DGS framework that Streamlines the pipeline for large-scale Street Scene Reconstruction, effectively mitigating these limitations. Moreover, most existing street 3DGS methods rely on ground-truth 3D bounding boxes to separate dynamic and static components, but 3D bounding boxes are difficult to obtain, limiting real-world applicability. To address this, we propose an alternative solution with 2D boxes, which are easier to annotate or can be predicted by off-the-shelf vision foundation models. Such designs together make S3R-GS readily adapt to large, in-the-wild scenarios. Extensive experiments demonstrate that S3R-GS enhances rendering quality and significantly accelerates reconstruction. Remarkably, when applied to videos from the challenging Argoverse2 dataset, it achieves state-of-the-art PSNR and SSIM, reducing reconstruction time to below 50%--and even 20%--of competing methods.
no_new_dataset
0.942348
2503.08218
Kaiqiang Xiong
Kaiqiang Xiong, Ying Feng, Qi Zhang, Jianbo Jiao, Yang Zhao, Zhihao Liang, Huachen Gao, Ronggang Wang
MVD-HuGaS: Human Gaussians from a Single Image via 3D Human Multi-view Diffusion Prior
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
3D human reconstruction from a single image is a challenging problem and has been extensively studied in the literature. Recently, some methods have resorted to diffusion models for guidance, optimizing a 3D representation via Score Distillation Sampling (SDS) or generating one back-view image to facilitate reconstruction. However, these methods tend to produce unsatisfactory artifacts (\textit{e.g.} flattened human structure or over-smoothed results caused by inconsistent priors from multiple views) and struggle with real-world generalization in the wild. In this work, we present \emph{MVD-HuGaS}, enabling free-view 3D human rendering from a single image via a multi-view human diffusion model. We first generate multi-view images from the single reference image with an enhanced multi-view diffusion model, which is fine-tuned on high-quality 3D human datasets to incorporate 3D geometry priors and human structure priors. To infer accurate camera poses from the sparse generated multi-view images for reconstruction, an alignment module is introduced to facilitate joint optimization of 3D Gaussians and camera poses. Furthermore, we propose a depth-based Facial Distortion Mitigation module to refine the generated facial regions, thereby improving the overall fidelity of the reconstruction. Finally, leveraging the refined multi-view images, along with their accurate camera poses, MVD-HuGaS optimizes the 3D Gaussians of the target human for high-fidelity free-view renderings. Extensive experiments on Thuman2.0 and 2K2K datasets show that the proposed MVD-HuGaS achieves state-of-the-art performance on single-view 3D human rendering.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 09:37:15 GMT" } ]
2025-03-12T00:00:00
[ [ "Xiong", "Kaiqiang", "" ], [ "Feng", "Ying", "" ], [ "Zhang", "Qi", "" ], [ "Jiao", "Jianbo", "" ], [ "Zhao", "Yang", "" ], [ "Liang", "Zhihao", "" ], [ "Gao", "Huachen", "" ], [ "Wang", "Ronggang", "" ] ]
TITLE: MVD-HuGaS: Human Gaussians from a Single Image via 3D Human Multi-view Diffusion Prior ABSTRACT: 3D human reconstruction from a single image is a challenging problem and has been extensively studied in the literature. Recently, some methods have resorted to diffusion models for guidance, optimizing a 3D representation via Score Distillation Sampling (SDS) or generating one back-view image to facilitate reconstruction. However, these methods tend to produce unsatisfactory artifacts (\textit{e.g.} flattened human structure or over-smoothed results caused by inconsistent priors from multiple views) and struggle with real-world generalization in the wild. In this work, we present \emph{MVD-HuGaS}, enabling free-view 3D human rendering from a single image via a multi-view human diffusion model. We first generate multi-view images from the single reference image with an enhanced multi-view diffusion model, which is fine-tuned on high-quality 3D human datasets to incorporate 3D geometry priors and human structure priors. To infer accurate camera poses from the sparse generated multi-view images for reconstruction, an alignment module is introduced to facilitate joint optimization of 3D Gaussians and camera poses. Furthermore, we propose a depth-based Facial Distortion Mitigation module to refine the generated facial regions, thereby improving the overall fidelity of the reconstruction. Finally, leveraging the refined multi-view images, along with their accurate camera poses, MVD-HuGaS optimizes the 3D Gaussians of the target human for high-fidelity free-view renderings. Extensive experiments on Thuman2.0 and 2K2K datasets show that the proposed MVD-HuGaS achieves state-of-the-art performance on single-view 3D human rendering.
no_new_dataset
0.9549
2503.08221
Junbin Xiao
Junbin Xiao, Nanxin Huang, Hao Qiu, Zhulin Tao, Xun Yang, Richang Hong, Meng Wang, Angela Yao
EgoBlind: Towards Egocentric Visual Assistance for the Blind People
Preprint. Under Review
null
null
null
cs.CV cs.AI cs.MM
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present EgoBlind, the first egocentric VideoQA dataset collected from blind individuals to evaluate the assistive capabilities of contemporary multimodal large language models (MLLMs). EgoBlind comprises 1,210 videos that record the daily lives of real blind users from a first-person perspective. It also features 4,927 questions directly posed or generated and verified by blind individuals to reflect their needs for visual assistance under various scenarios. We provide each question with an average of 3 reference answers to alleviate subjective evaluation. Using EgoBlind, we comprehensively evaluate 15 leading MLLMs and find that all models struggle, with the best performers achieving accuracy around 56\%, far behind human performance of 87.4\%. To guide future advancements, we identify and summarize major limitations of existing MLLMs in egocentric visual assistance for the blind and provide heuristic suggestions for improvement. With these efforts, we hope EgoBlind can serve as a valuable foundation for developing more effective AI assistants that enhance the independence of blind individuals.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 09:40:31 GMT" } ]
2025-03-12T00:00:00
[ [ "Xiao", "Junbin", "" ], [ "Huang", "Nanxin", "" ], [ "Qiu", "Hao", "" ], [ "Tao", "Zhulin", "" ], [ "Yang", "Xun", "" ], [ "Hong", "Richang", "" ], [ "Wang", "Meng", "" ], [ "Yao", "Angela", "" ] ]
TITLE: EgoBlind: Towards Egocentric Visual Assistance for the Blind People ABSTRACT: We present EgoBlind, the first egocentric VideoQA dataset collected from blind individuals to evaluate the assistive capabilities of contemporary multimodal large language models (MLLMs). EgoBlind comprises 1,210 videos that record the daily lives of real blind users from a first-person perspective. It also features 4,927 questions directly posed or generated and verified by blind individuals to reflect their needs for visual assistance under various scenarios. We provide each question with an average of 3 reference answers to alleviate subjective evaluation. Using EgoBlind, we comprehensively evaluate 15 leading MLLMs and find that all models struggle, with the best performers achieving accuracy around 56\%, far behind human performance of 87.4\%. To guide future advancements, we identify and summarize major limitations of existing MLLMs in egocentric visual assistance for the blind and provide heuristic suggestions for improvement. With these efforts, we hope EgoBlind can serve as a valuable foundation for developing more effective AI assistants that enhance the independence of blind individuals.
new_dataset
0.961316
2503.08239
Muhammad Ahmad
Saad Sohail, Muhammad Usama, Usman Ghous, Manuel Mazzara, Salvatore Distefano, Muhammad Ahmad
EnergyFormer: Energy Attention with Fourier Embedding for Hyperspectral Image Classification
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Hyperspectral imaging (HSI) provides rich spectral-spatial information across hundreds of contiguous bands, enabling precise material discrimination in applications such as environmental monitoring, agriculture, and urban analysis. However, the high dimensionality and spectral variability of HSI data pose significant challenges for feature extraction and classification. This paper presents EnergyFormer, a transformer-based framework designed to address these challenges through three key innovations: (1) Multi-Head Energy Attention (MHEA), which optimizes an energy function to selectively enhance critical spectral-spatial features, improving feature discrimination; (2) Fourier Position Embedding (FoPE), which adaptively encodes spectral and spatial dependencies to reinforce long-range interactions; and (3) Enhanced Convolutional Block Attention Module (ECBAM), which selectively amplifies informative wavelength bands and spatial structures, enhancing representation learning. Extensive experiments on the WHU-Hi-HanChuan, Salinas, and Pavia University datasets demonstrate that EnergyFormer achieves exceptional overall accuracies of 99.28\%, 98.63\%, and 98.72\%, respectively, outperforming state-of-the-art CNN, transformer, and Mamba-based models. The source code will be made available at https://github.com/mahmad000.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 10:03:35 GMT" } ]
2025-03-12T00:00:00
[ [ "Sohail", "Saad", "" ], [ "Usama", "Muhammad", "" ], [ "Ghous", "Usman", "" ], [ "Mazzara", "Manuel", "" ], [ "Distefano", "Salvatore", "" ], [ "Ahmad", "Muhammad", "" ] ]
TITLE: EnergyFormer: Energy Attention with Fourier Embedding for Hyperspectral Image Classification ABSTRACT: Hyperspectral imaging (HSI) provides rich spectral-spatial information across hundreds of contiguous bands, enabling precise material discrimination in applications such as environmental monitoring, agriculture, and urban analysis. However, the high dimensionality and spectral variability of HSI data pose significant challenges for feature extraction and classification. This paper presents EnergyFormer, a transformer-based framework designed to address these challenges through three key innovations: (1) Multi-Head Energy Attention (MHEA), which optimizes an energy function to selectively enhance critical spectral-spatial features, improving feature discrimination; (2) Fourier Position Embedding (FoPE), which adaptively encodes spectral and spatial dependencies to reinforce long-range interactions; and (3) Enhanced Convolutional Block Attention Module (ECBAM), which selectively amplifies informative wavelength bands and spatial structures, enhancing representation learning. Extensive experiments on the WHU-Hi-HanChuan, Salinas, and Pavia University datasets demonstrate that EnergyFormer achieves exceptional overall accuracies of 99.28\%, 98.63\%, and 98.72\%, respectively, outperforming state-of-the-art CNN, transformer, and Mamba-based models. The source code will be made available at https://github.com/mahmad000.
no_new_dataset
0.955817
2503.08240
Lachlan Simpson
Lachlan Simpson, Federico Costanza, Kyle Millar, Adriel Cheng, Cheng-Chew Lim, Hong Gunn Chew
Tangentially Aligned Integrated Gradients for User-Friendly Explanations
To appear in the proceedings of the 32nd Irish Conference on Artificial Intelligence and Cognitive Science
null
null
null
cs.LG math.DG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Integrated gradients is a prevalent technique within machine learning for addressing the black-box problem of neural networks. The explanations given by integrated gradients depend on a choice of base-point. The choice of base-point is not a priori obvious and can lead to drastically different explanations. There is a longstanding hypothesis that data lies on a low-dimensional Riemannian manifold. The quality of explanations on a manifold can be measured by the extent to which an explanation for a point lies in its tangent space. In this work, we propose that the base-point should be chosen such that it maximises the tangential alignment of the explanation. We formalise the notion of tangential alignment and provide theoretical conditions under which a base-point choice will provide explanations lying in the tangent space. We demonstrate how to approximate the optimal base-point on several well-known image classification datasets. Furthermore, we compare the optimal base-point choice with common base-points and three gradient explainability models.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 10:04:13 GMT" } ]
2025-03-12T00:00:00
[ [ "Simpson", "Lachlan", "" ], [ "Costanza", "Federico", "" ], [ "Millar", "Kyle", "" ], [ "Cheng", "Adriel", "" ], [ "Lim", "Cheng-Chew", "" ], [ "Chew", "Hong Gunn", "" ] ]
TITLE: Tangentially Aligned Integrated Gradients for User-Friendly Explanations ABSTRACT: Integrated gradients is a prevalent technique within machine learning for addressing the black-box problem of neural networks. The explanations given by integrated gradients depend on a choice of base-point. The choice of base-point is not a priori obvious and can lead to drastically different explanations. There is a longstanding hypothesis that data lies on a low-dimensional Riemannian manifold. The quality of explanations on a manifold can be measured by the extent to which an explanation for a point lies in its tangent space. In this work, we propose that the base-point should be chosen such that it maximises the tangential alignment of the explanation. We formalise the notion of tangential alignment and provide theoretical conditions under which a base-point choice will provide explanations lying in the tangent space. We demonstrate how to approximate the optimal base-point on several well-known image classification datasets. Furthermore, we compare the optimal base-point choice with common base-points and three gradient explainability models.
no_new_dataset
0.951953
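The base-point dependence this record highlights is easy to see in code. Below is a sketch of the standard Riemann-sum approximation of integrated gradients with an explicit base-point argument; it is the usual formulation, not the paper's tangential-alignment selection criterion, and the toy model is illustrative.

```python
# Sketch of standard integrated gradients with an explicit base-point, to show
# how the attribution depends on that choice. This is the usual Riemann-sum
# approximation, not the paper's tangential-alignment criterion.
import torch

def integrated_gradients(model, x, baseline, steps=64):
    # Interpolate between the base-point and the input along a straight line.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = (baseline + alphas * (x - baseline)).requires_grad_(True)
    out = model(path).sum()
    grads = torch.autograd.grad(out, path)[0]   # dF/dx along the path
    avg_grad = grads.mean(dim=0)                # average over alpha
    return (x - baseline) * avg_grad            # IG attribution per feature

model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))
x = torch.randn(4)
zero_base = torch.zeros(4)
rand_base = torch.randn(4)
# Different base-points generally give drastically different explanations.
print(integrated_gradients(model, x, zero_base))
print(integrated_gradients(model, x, rand_base))
```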
2503.08246
Peter Macgregor
Seiyun Shin, Ilan Shomorony, Peter Macgregor
Dynamic DBSCAN with Euler Tour Sequences
AISTATS 2025
null
null
null
cs.LG cs.DS
http://creativecommons.org/licenses/by/4.0/
We propose a fast and dynamic algorithm for Density-Based Spatial Clustering of Applications with Noise (DBSCAN) that efficiently supports online updates. Traditional DBSCAN algorithms, designed for batch processing, become computationally expensive when applied to dynamic datasets, particularly in large-scale applications where data continuously evolves. To address this challenge, our algorithm leverages the Euler Tour Trees data structure, enabling dynamic clustering updates without the need to reprocess the entire dataset. This approach preserves near-optimal accuracy in density estimation, as achieved by the state-of-the-art static DBSCAN method (Esfandiari et al., 2021). Our method achieves an improved time complexity of $O(d \log^3(n) + \log^4(n))$ for every data point insertion and deletion, where $n$ and $d$ denote the total number of updates and the data dimension, respectively. Empirical studies also demonstrate significant speedups over conventional DBSCANs in real-time clustering of dynamic datasets, while maintaining comparable or superior clustering quality.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 10:08:39 GMT" } ]
2025-03-12T00:00:00
[ [ "Shin", "Seiyun", "" ], [ "Shomorony", "Ilan", "" ], [ "Macgregor", "Peter", "" ] ]
TITLE: Dynamic DBSCAN with Euler Tour Sequences ABSTRACT: We propose a fast and dynamic algorithm for Density-Based Spatial Clustering of Applications with Noise (DBSCAN) that efficiently supports online updates. Traditional DBSCAN algorithms, designed for batch processing, become computationally expensive when applied to dynamic datasets, particularly in large-scale applications where data continuously evolves. To address this challenge, our algorithm leverages the Euler Tour Trees data structure, enabling dynamic clustering updates without the need to reprocess the entire dataset. This approach preserves near-optimal accuracy in density estimation, as achieved by the state-of-the-art static DBSCAN method (Esfandiari et al., 2021). Our method achieves an improved time complexity of $O(d \log^3(n) + \log^4(n))$ for every data point insertion and deletion, where $n$ and $d$ denote the total number of updates and the data dimension, respectively. Empirical studies also demonstrate significant speedups over conventional DBSCANs in real-time clustering of dynamic datasets, while maintaining comparable or superior clustering quality.
no_new_dataset
0.94699
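For contrast with the dynamic algorithm in this record, the sketch below shows the naive baseline it improves on: batch DBSCAN re-run from scratch after every update. The parameters are illustrative; the paper's Euler Tour Tree machinery replaces the full re-clustering step.

```python
# Baseline the paper improves on: batch DBSCAN re-run from scratch after every
# update. A sketch of the naive dynamic loop; the proposed method replaces the
# full re-clustering with Euler Tour Tree updates.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
points = list(rng.normal(size=(100, 2)))

def recluster(pts, eps=0.3, min_samples=5):
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(np.asarray(pts))

labels = recluster(points)
# Naive insertion: a full re-clustering per point, vs. the paper's
# O(d log^3 n + log^4 n) cost per insertion/deletion.
points.append(rng.normal(size=2))
labels = recluster(points)
print("clusters:", len(set(labels)) - (1 if -1 in labels else 0))
```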
2503.08251
Arshia Afzal
Arshia Afzal and Volkan Cevher and Mahsa Shoaran
MT-NAM: An Efficient and Adaptive Model for Epileptic Seizure Detection
Submitted to IEEE-TBME
null
null
null
eess.SP cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Enhancing the accuracy and efficiency of machine learning algorithms employed in neural interface systems is crucial for advancing next-generation intelligent therapeutic devices. However, current systems often utilize basic machine learning models that do not fully exploit the natural structure of brain signals. Additionally, existing learning models used for neural signal processing often demonstrate low speed and efficiency during inference. To address these challenges, this study introduces Micro Tree-based NAM (MT-NAM), a distilled model based on the recently proposed Neural Additive Models (NAM). The MT-NAM achieves a remarkable 100$\times$ improvement in inference speed compared to standard NAM, without compromising accuracy. We evaluate our approach on the CHB-MIT scalp EEG dataset, which includes recordings from 24 patients with varying numbers of sessions and seizures. NAM achieves an 85.3\% window-based sensitivity and 95\% specificity. Interestingly, our proposed MT-NAM shows only a 2\% reduction in sensitivity compared to the original NAM. To regain this sensitivity, we utilize a test-time template adjuster (T3A) as an update mechanism, enabling our model to achieve higher sensitivity during test time by accommodating transient shifts in neural signals. With this online update approach, MT-NAM achieves the same sensitivity as the standard NAM while achieving approximately 50$\times$ acceleration in inference speed.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 10:14:53 GMT" } ]
2025-03-12T00:00:00
[ [ "Afzal", "Arshia", "" ], [ "Cevher", "Volkan", "" ], [ "Shoaran", "Mahsa", "" ] ]
TITLE: MT-NAM: An Efficient and Adaptive Model for Epileptic Seizure Detection ABSTRACT: Enhancing the accuracy and efficiency of machine learning algorithms employed in neural interface systems is crucial for advancing next-generation intelligent therapeutic devices. However, current systems often utilize basic machine learning models that do not fully exploit the natural structure of brain signals. Additionally, existing learning models used for neural signal processing often demonstrate low speed and efficiency during inference. To address these challenges, this study introduces Micro Tree-based NAM (MT-NAM), a distilled model based on the recently proposed Neural Additive Models (NAM). The MT-NAM achieves a remarkable 100$\times$ improvement in inference speed compared to standard NAM, without compromising accuracy. We evaluate our approach on the CHB-MIT scalp EEG dataset, which includes recordings from 24 patients with varying numbers of sessions and seizures. NAM achieves an 85.3\% window-based sensitivity and 95\% specificity. Interestingly, our proposed MT-NAM shows only a 2\% reduction in sensitivity compared to the original NAM. To regain this sensitivity, we utilize a test-time template adjuster (T3A) as an update mechanism, enabling our model to achieve higher sensitivity during test time by accommodating transient shifts in neural signals. With this online update approach, MT-NAM achieves the same sensitivity as the standard NAM while achieving approximately 50$\times$ acceleration in inference speed.
no_new_dataset
0.946349
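Since this record builds on Neural Additive Models, a minimal NAM sketch may help: one small subnetwork per input feature, with the prediction formed as the sum of per-feature contributions plus a bias. This shows the architecture MT-NAM starts from; the micro-tree distillation and T3A update mechanism are not reproduced here.

```python
# Sketch of a plain Neural Additive Model (the starting point MT-NAM distills):
# one small subnetwork per input feature, with the prediction as the sum of
# per-feature contributions. The micro-tree distillation itself is not shown.
import torch
import torch.nn as nn

class TinyNAM(nn.Module):
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            [nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
             for _ in range(n_features)]
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):                      # x: (batch, n_features)
        contribs = [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)]
        return torch.cat(contribs, dim=1).sum(dim=1, keepdim=True) + self.bias

model = TinyNAM(n_features=8)
scores = model(torch.randn(4, 8))              # (4, 1) per-window seizure scores
print(scores.shape)
```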
2503.08270
Chengjun Yu
Chengjun Yu, Wei Zhai, Yuhang Yang, Yang Cao, Zheng-Jun Zha
HERO: Human Reaction Generation from Videos
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human reaction generation represents a significant research domain for interactive AI, as humans constantly interact with their surroundings. Previous works focus mainly on synthesizing the reactive motion given a human motion sequence. This paradigm limits interaction categories to human-human interactions and ignores emotions that may influence reaction generation. In this work, we propose to generate 3D human reactions from RGB videos, which involves a wider range of interaction categories and naturally provides information about expressions that may reflect the subject's emotions. To address this task, we present HERO, a simple yet powerful framework for Human rEaction geneRation from videOs. HERO considers both global and frame-level local representations of the video to extract the interaction intention, and then uses the extracted interaction intention to guide the synthesis of the reaction. In addition, local visual representations are continuously injected into the model to maximize the exploitation of the dynamic properties inherent in videos. Furthermore, the ViMo dataset, containing paired Video-Motion data, is collected to support the task. In addition to human-human interactions, these video-motion pairs also cover animal-human interactions and scene-human interactions. Extensive experiments demonstrate the superiority of our methodology. The code and dataset will be publicly available at https://jackyu6.github.io/HERO.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 10:39:32 GMT" } ]
2025-03-12T00:00:00
[ [ "Yu", "Chengjun", "" ], [ "Zhai", "Wei", "" ], [ "Yang", "Yuhang", "" ], [ "Cao", "Yang", "" ], [ "Zha", "Zheng-Jun", "" ] ]
TITLE: HERO: Human Reaction Generation from Videos ABSTRACT: Human reaction generation represents a significant research domain for interactive AI, as humans constantly interact with their surroundings. Previous works focus mainly on synthesizing the reactive motion given a human motion sequence. This paradigm limits interaction categories to human-human interactions and ignores emotions that may influence reaction generation. In this work, we propose to generate 3D human reactions from RGB videos, which involves a wider range of interaction categories and naturally provides information about expressions that may reflect the subject's emotions. To address this task, we present HERO, a simple yet powerful framework for Human rEaction geneRation from videOs. HERO considers both global and frame-level local representations of the video to extract the interaction intention, and then uses the extracted interaction intention to guide the synthesis of the reaction. In addition, local visual representations are continuously injected into the model to maximize the exploitation of the dynamic properties inherent in videos. Furthermore, the ViMo dataset, containing paired Video-Motion data, is collected to support the task. In addition to human-human interactions, these video-motion pairs also cover animal-human interactions and scene-human interactions. Extensive experiments demonstrate the superiority of our methodology. The code and dataset will be publicly available at https://jackyu6.github.io/HERO.
no_new_dataset
0.936981
2503.08271
Wenzhe Niu
Wenzhe Niu, Zongxia Xie, Yanru Sun, Wei He, Man Xu, Chao Hao
LangTime: A Language-Guided Unified Model for Time Series Forecasting with Proximal Policy Optimization
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent research has shown an increasing interest in utilizing pre-trained large language models (LLMs) for a variety of time series applications. However, there are three main challenges when using LLMs as foundational models for time series forecasting: (1) cross-domain generalization; (2) cross-modality alignment; (3) error accumulation in autoregressive frameworks. To address these challenges, we propose LangTime, a language-guided unified model for time series forecasting that incorporates cross-domain pre-training with reinforcement learning-based fine-tuning. Specifically, LangTime constructs Temporal Comprehension Prompts (TCPs), which include dataset-wise and channel-wise instructions, to facilitate domain adaptation and condense time series into a single token, enabling LLMs to better understand and align temporal data. To improve autoregressive forecasting, we introduce TimePPO, a reinforcement learning-based fine-tuning algorithm. TimePPO mitigates error accumulation by leveraging a multidimensional reward function tailored for time series and a repeat-based value estimation strategy. Extensive experiments demonstrate that LangTime achieves state-of-the-art cross-domain forecasting performance, while TimePPO fine-tuning effectively enhances the stability and accuracy of autoregressive forecasting.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 10:40:39 GMT" } ]
2025-03-12T00:00:00
[ [ "Niu", "Wenzhe", "" ], [ "Xie", "Zongxia", "" ], [ "Sun", "Yanru", "" ], [ "He", "Wei", "" ], [ "Xu", "Man", "" ], [ "Hao", "Chao", "" ] ]
TITLE: LangTime: A Language-Guided Unified Model for Time Series Forecasting with Proximal Policy Optimization ABSTRACT: Recent research has shown an increasing interest in utilizing pre-trained large language models (LLMs) for a variety of time series applications. However, there are three main challenges when using LLMs as foundational models for time series forecasting: (1) cross-domain generalization; (2) cross-modality alignment; (3) error accumulation in autoregressive frameworks. To address these challenges, we propose LangTime, a language-guided unified model for time series forecasting that incorporates cross-domain pre-training with reinforcement learning-based fine-tuning. Specifically, LangTime constructs Temporal Comprehension Prompts (TCPs), which include dataset-wise and channel-wise instructions, to facilitate domain adaptation and condense time series into a single token, enabling LLMs to better understand and align temporal data. To improve autoregressive forecasting, we introduce TimePPO, a reinforcement learning-based fine-tuning algorithm. TimePPO mitigates error accumulation by leveraging a multidimensional reward function tailored for time series and a repeat-based value estimation strategy. Extensive experiments demonstrate that LangTime achieves state-of-the-art cross-domain forecasting performance, while TimePPO fine-tuning effectively enhances the stability and accuracy of autoregressive forecasting.
no_new_dataset
0.947478
2503.08276
Miao Zhang
Jun Yin, Yangfan He, Miao Zhang, Pengyu Zeng, Tianyi Wang, Shuai Lu, Xueqian Wang
PromptLNet: Region-Adaptive Aesthetic Enhancement via Prompt Guidance in Low-Light Enhancement Net
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning and improving large language models through human preference feedback has become a mainstream approach, but it has rarely been applied to the field of low-light image enhancement. Existing low-light enhancement evaluations typically rely on objective metrics (such as FID, PSNR, etc.), which often result in models that perform well objectively but lack aesthetic quality. Moreover, most low-light enhancement models are primarily designed for global brightening, lacking detailed refinement. Therefore, the generated images often require additional local adjustments, leading to research gaps in practical applications. To bridge this gap, we propose the following innovations: 1) We collect human aesthetic evaluation text pairs and aesthetic scores from multiple low-light image datasets (e.g., LOL, LOL2, LOM, DCIM, MEF, etc.) to train a low-light image aesthetic evaluation model, supplemented by an optimization algorithm designed to fine-tune the diffusion model. 2) We propose a prompt-driven brightness adjustment module capable of performing fine-grained brightness and aesthetic adjustments for specific instances or regions. 3) We evaluate our method alongside existing state-of-the-art algorithms on mainstream benchmarks. Experimental results show that our method not only outperforms traditional methods in terms of visual quality but also provides greater flexibility and controllability, paving the way for improved aesthetic quality.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 10:45:08 GMT" } ]
2025-03-12T00:00:00
[ [ "Yin", "Jun", "" ], [ "He", "Yangfan", "" ], [ "Zhang", "Miao", "" ], [ "Zeng", "Pengyu", "" ], [ "Wang", "Tianyi", "" ], [ "Lu", "Shuai", "" ], [ "Wang", "Xueqian", "" ] ]
TITLE: PromptLNet: Region-Adaptive Aesthetic Enhancement via Prompt Guidance in Low-Light Enhancement Net ABSTRACT: Learning and improving large language models through human preference feedback has become a mainstream approach, but it has rarely been applied to the field of low-light image enhancement. Existing low-light enhancement evaluations typically rely on objective metrics (such as FID, PSNR, etc.), which often result in models that perform well objectively but lack aesthetic quality. Moreover, most low-light enhancement models are primarily designed for global brightening, lacking detailed refinement. Therefore, the generated images often require additional local adjustments, leading to research gaps in practical applications. To bridge this gap, we propose the following innovations: 1) We collect human aesthetic evaluation text pairs and aesthetic scores from multiple low-light image datasets (e.g., LOL, LOL2, LOM, DCIM, MEF, etc.) to train a low-light image aesthetic evaluation model, supplemented by an optimization algorithm designed to fine-tune the diffusion model. 2) We propose a prompt-driven brightness adjustment module capable of performing fine-grained brightness and aesthetic adjustments for specific instances or regions. 3) We evaluate our method alongside existing state-of-the-art algorithms on mainstream benchmarks. Experimental results show that our method not only outperforms traditional methods in terms of visual quality but also provides greater flexibility and controllability, paving the way for improved aesthetic quality.
no_new_dataset
0.950041
2503.08290
Sachin Verma
Sachin Verma, Frank Lindseth, Gabriel Kiss
SegDesicNet: Lightweight Semantic Segmentation in Remote Sensing with Geo-Coordinate Embeddings for Domain Adaptation
https://openaccess.thecvf.com/content/WACV2025/papers/Verma_SegDesicNet_Lightweight_Semantic_Segmentation_in_Remote_Sensing_with_Geo-Coordinate_Embeddings_WACV_2025_paper.pdf
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2025
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Semantic segmentation is essential for analyzing high-definition remote sensing images (HRSIs) because it allows the precise classification of objects and regions at the pixel level. However, remote sensing data present challenges owing to geographical location, weather, and environmental variations, making it difficult for semantic segmentation models to generalize across diverse scenarios. Existing methods are often limited to specific data domains and require expert annotators and specialized equipment for semantic labeling. In this study, we propose a novel unsupervised domain adaptation technique for remote sensing semantic segmentation by utilizing geographical coordinates that are readily accessible in remote sensing setups as metadata in a dataset. To bridge the domain gap, we propose a novel approach that combines an image's location-encoding trait with the spherical nature of Earth's surface. Our proposed SegDesicNet module regresses the GRID positional encoding of the geo-coordinates projected onto the unit sphere to obtain the domain loss. Our experimental results demonstrate that the proposed SegDesicNet outperforms state-of-the-art domain adaptation methods in remote sensing image segmentation, achieving an improvement of approximately 6\% in the mean intersection over union (MIoU) with a ~27\% drop in parameter count on benchmarked subsets of the publicly available FLAIR #1 dataset. We also benchmarked our method's performance on a custom split of the ISPRS Potsdam dataset. Our algorithm seeks to reduce the modeling disparity between artificial neural networks and human comprehension of the physical world, making the technology more human-centric and scalable.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 11:01:18 GMT" } ]
2025-03-12T00:00:00
[ [ "Verma", "Sachin", "" ], [ "Lindseth", "Frank", "" ], [ "Kiss", "Gabriel", "" ] ]
TITLE: SegDesicNet: Lightweight Semantic Segmentation in Remote Sensing with Geo-Coordinate Embeddings for Domain Adaptation ABSTRACT: Semantic segmentation is essential for analyzing high-definition remote sensing images (HRSIs) because it allows the precise classification of objects and regions at the pixel level. However, remote sensing data present challenges owing to geographical location, weather, and environmental variations, making it difficult for semantic segmentation models to generalize across diverse scenarios. Existing methods are often limited to specific data domains and require expert annotators and specialized equipment for semantic labeling. In this study, we propose a novel unsupervised domain adaptation technique for remote sensing semantic segmentation by utilizing geographical coordinates that are readily accessible in remote sensing setups as metadata in a dataset. To bridge the domain gap, we propose a novel approach that combines an image's location-encoding trait with the spherical nature of Earth's surface. Our proposed SegDesicNet module regresses the GRID positional encoding of the geo-coordinates projected onto the unit sphere to obtain the domain loss. Our experimental results demonstrate that the proposed SegDesicNet outperforms state-of-the-art domain adaptation methods in remote sensing image segmentation, achieving an improvement of approximately 6\% in the mean intersection over union (MIoU) with a ~27\% drop in parameter count on benchmarked subsets of the publicly available FLAIR #1 dataset. We also benchmarked our method's performance on a custom split of the ISPRS Potsdam dataset. Our algorithm seeks to reduce the modeling disparity between artificial neural networks and human comprehension of the physical world, making the technology more human-centric and scalable.
no_new_dataset
0.9549
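The geo-coordinate embedding idea in this record can be sketched in a few lines: project latitude/longitude onto the unit sphere, then expand the 3D point with multi-frequency sinusoidal features. The paper's exact GRID encoding is not reproduced here, so the sinusoidal expansion below is a generic stand-in.

```python
# Sketch of the geo-coordinate embedding idea: project latitude/longitude onto
# the unit sphere, then expand with sinusoidal features. The paper's exact
# "GRID" encoding is not shown here; this is a generic stand-in.
import numpy as np

def unit_sphere(lat_deg, lon_deg):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def sinusoidal_encoding(xyz, n_freqs=4):
    # Multi-frequency sin/cos features of the 3D point, transformer-style.
    freqs = 2.0 ** np.arange(n_freqs)
    scaled = np.outer(freqs, xyz).ravel()
    return np.concatenate([np.sin(scaled), np.cos(scaled)])

emb = sinusoidal_encoding(unit_sphere(52.52, 13.405))  # e.g. Berlin
print(emb.shape)  # (24,) = 2 * 3 coords * 4 frequencies
```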
2503.08293
Alberto Miguel Diez
Alberto Miguel-Diez, Adrián Campazas-Vega, Claudia Álvarez-Aparicio, Gonzalo Esteban-Costales, Ángel Manuel Guerrero-Higueras
A systematic literature review of unsupervised learning algorithms for anomalous traffic detection based on flows
This article has been accepted for publication in Logic Journal of the IGPL Published by Oxford University Press
null
null
null
cs.CR cs.LG cs.NI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The constant increase of devices connected to the Internet, and therefore of cyber-attacks, makes it necessary to analyze network traffic in order to recognize malicious activity. Traditional packet-based analysis methods are insufficient because in large networks the amount of traffic is so high that it is unfeasible to review all communications. For this reason, flow-based analysis is a suitable approach for this situation, and it will have to be used in future 5G networks, as the number of packets will increase dramatically. If this is also combined with unsupervised learning models, new threats can be detected for which the models have not been trained. This paper presents a systematic review of the literature on unsupervised learning algorithms for detecting anomalies in network flows, following the PRISMA guideline. A total of 63 scientific articles have been reviewed, with 13 of them analyzed in depth. The results obtained show that the autoencoder is the most used option, followed by SVM, ALAD, and SOM. In addition, all the datasets used for anomaly detection have been collected, including some specialized in IoT or containing real data collected from honeypots.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 11:06:00 GMT" } ]
2025-03-12T00:00:00
[ [ "Miguel-Diez", "Alberto", "" ], [ "Campazas-Vega", "Adrián", "" ], [ "Álvarez-Aparicio", "Claudia", "" ], [ "Esteban-Costales", "Gonzalo", "" ], [ "Guerrero-Higueras", "Ángel Manuel", "" ] ]
TITLE: A systematic literature review of unsupervised learning algorithms for anomalous traffic detection based on flows ABSTRACT: The constant increase of devices connected to the Internet, and therefore of cyber-attacks, makes it necessary to analyze network traffic in order to recognize malicious activity. Traditional packet-based analysis methods are insufficient because in large networks the amount of traffic is so high that it is unfeasible to review all communications. For this reason, flow-based analysis is a suitable approach for this situation, and it will have to be used in future 5G networks, as the number of packets will increase dramatically. If this is also combined with unsupervised learning models, new threats can be detected for which the models have not been trained. This paper presents a systematic review of the literature on unsupervised learning algorithms for detecting anomalies in network flows, following the PRISMA guideline. A total of 63 scientific articles have been reviewed, with 13 of them analyzed in depth. The results obtained show that the autoencoder is the most used option, followed by SVM, ALAD, and SOM. In addition, all the datasets used for anomaly detection have been collected, including some specialized in IoT or containing real data collected from honeypots.
no_new_dataset
0.940298
2503.08298
George Papadakis
Jakub Maciejewski, Konstantinos Nikoletos, George Papadakis, Yannis Velegrakis
Progressive Entity Resolution: A Design Space Exploration
null
null
10.1145/3709715
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
Entity Resolution (ER) is typically implemented as a batch task that processes all available data before identifying duplicate records. However, applications with time or computational constraints, e.g., those running in the cloud, require a progressive approach that produces results in a pay-as-you-go fashion. Numerous algorithms have been proposed for Progressive ER in the literature. In this work, we propose a novel framework for Progressive Entity Resolution that organizes relevant techniques into four consecutive steps: (i) filtering, which reduces the search space to the most likely candidate matches, (ii) weighting, which associates every pair of candidate matches with a similarity score, (iii) scheduling, which prioritizes the execution of the candidate matches so that the real duplicates precede the non-matching pairs, and (iv) matching, which applies a complex matching function to the pairs in the order defined by the previous step. We associate each step with existing and novel techniques, illustrating that our framework overall generates a superset of the main existing works in the field. We select the most representative combinations resulting from our framework and fine-tune them over 10 established datasets for Record Linkage and 8 for Deduplication, with our results indicating that our taxonomy yields a wide range of high-performing progressive techniques both in terms of effectiveness and time efficiency.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 11:10:15 GMT" } ]
2025-03-12T00:00:00
[ [ "Maciejewski", "Jakub", "" ], [ "Nikoletos", "Konstantinos", "" ], [ "Papadakis", "George", "" ], [ "Velegrakis", "Yannis", "" ] ]
TITLE: Progressive Entity Resolution: A Design Space Exploration ABSTRACT: Entity Resolution (ER) is typically implemented as a batch task that processes all available data before identifying duplicate records. However, applications with time or computational constraints, e.g., those running in the cloud, require a progressive approach that produces results in a pay-as-you-go fashion. Numerous algorithms have been proposed for Progressive ER in the literature. In this work, we propose a novel framework for Progressive Entity Resolution that organizes relevant techniques into four consecutive steps: (i) filtering, which reduces the search space to the most likely candidate matches, (ii) weighting, which associates every pair of candidate matches with a similarity score, (iii) scheduling, which prioritizes the execution of the candidate matches so that the real duplicates precede the non-matching pairs, and (iv) matching, which applies a complex matching function to the pairs in the order defined by the previous step. We associate each step with existing and novel techniques, illustrating that our framework overall generates a superset of the main existing works in the field. We select the most representative combinations resulting from our framework and fine-tune them over 10 established datasets for Record Linkage and 8 for Deduplication, with our results indicating that our taxonomy yields a wide range of high-performing progressive techniques both in terms of effectiveness and time efficiency.
no_new_dataset
0.951908
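The four-step pipeline named in this record (filter, weight, schedule, match) maps directly onto a short loop. The sketch below is a toy instantiation with placeholder choices: first-letter blocking as the filter, a string-similarity ratio as the weight, and a threshold standing in for the expensive matcher.

```python
# Sketch of the four-step progressive pipeline from the abstract: filter to
# candidate pairs, weight them, schedule by descending score, then apply an
# expensive matcher pay-as-you-go. Blocking key and matcher are placeholders.
from itertools import combinations
from difflib import SequenceMatcher

records = [
    {"id": 1, "name": "Jon Smith"},
    {"id": 2, "name": "John Smith"},
    {"id": 3, "name": "Jane Doe"},
]

# (i) filtering: cheap blocking on the first letter of the name
candidates = [(a, b) for a, b in combinations(records, 2)
              if a["name"][0] == b["name"][0]]

# (ii) weighting: cheap similarity score per candidate pair
def weight(a, b):
    return SequenceMatcher(None, a["name"], b["name"]).ratio()

# (iii) scheduling: likely duplicates first
scheduled = sorted(candidates, key=lambda p: weight(*p), reverse=True)

# (iv) matching: expensive verification, emitted progressively
for a, b in scheduled:
    if weight(a, b) > 0.8:          # placeholder for a complex matcher
        print("match:", a["id"], b["id"])
```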
2503.08308
Zhuo Zhi
Zhuo Zhi, Chen Feng, Adam Daneshmend, Mine Orlu, Andreas Demosthenous, Lu Yin, Da Li, Ziquan Liu, Miguel R. D. Rodrigues
Seeing and Reasoning with Confidence: Supercharging Multimodal LLMs with an Uncertainty-Aware Agentic Framework
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Multimodal large language models (MLLMs) show promise in tasks like visual question answering (VQA) but still face challenges in multimodal reasoning. Recent works adapt agentic frameworks or chain-of-thought (CoT) reasoning to improve performance. However, CoT-based multimodal reasoning often demands costly data annotation and fine-tuning, while agentic approaches relying on external tools risk introducing unreliable output from these tools. In this paper, we propose Seeing and Reasoning with Confidence (SRICE), a training-free multimodal reasoning framework that integrates external vision models with uncertainty quantification (UQ) into an MLLM to address these challenges. Specifically, SRICE guides the inference process by allowing the MLLM to autonomously select regions of interest through multi-stage interactions with the help of external tools. We propose a conformal prediction-based approach to calibrate the output of external tools and select the optimal tool by estimating the uncertainty of the MLLM's output. Our experiments show that the average improvement of SRICE over the base MLLM is 4.6% on five datasets, and the performance on some datasets even outperforms fine-tuning-based methods, revealing the significance of ensuring reliable tool use in an MLLM agent.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 11:18:53 GMT" } ]
2025-03-12T00:00:00
[ [ "Zhi", "Zhuo", "" ], [ "Feng", "Chen", "" ], [ "Daneshmend", "Adam", "" ], [ "Orlu", "Mine", "" ], [ "Demosthenous", "Andreas", "" ], [ "Yin", "Lu", "" ], [ "Li", "Da", "" ], [ "Liu", "Ziquan", "" ], [ "Rodrigues", "Miguel R. D.", "" ] ]
TITLE: Seeing and Reasoning with Confidence: Supercharging Multimodal LLMs with an Uncertainty-Aware Agentic Framework ABSTRACT: Multimodal large language models (MLLMs) show promise in tasks like visual question answering (VQA) but still face challenges in multimodal reasoning. Recent works adapt agentic frameworks or chain-of-thought (CoT) reasoning to improve performance. However, CoT-based multimodal reasoning often demands costly data annotation and fine-tuning, while agentic approaches relying on external tools risk introducing unreliable output from these tools. In this paper, we propose Seeing and Reasoning with Confidence (SRICE), a training-free multimodal reasoning framework that integrates external vision models with uncertainty quantification (UQ) into an MLLM to address these challenges. Specifically, SRICE guides the inference process by allowing the MLLM to autonomously select regions of interest through multi-stage interactions with the help of external tools. We propose a conformal prediction-based approach to calibrate the output of external tools and select the optimal tool by estimating the uncertainty of the MLLM's output. Our experiments show that the average improvement of SRICE over the base MLLM is 4.6% on five datasets, and the performance on some datasets even outperforms fine-tuning-based methods, revealing the significance of ensuring reliable tool use in an MLLM agent.
no_new_dataset
0.946646
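The conformal-prediction calibration mentioned in this record follows a standard split-conformal recipe, sketched below: score a held-out calibration set, take a finite-sample-corrected quantile, and accept only tool outputs whose nonconformity falls below that threshold. The scores here are synthetic stand-ins, not SRICE's actual calibration signal.

```python
# Sketch of the split-conformal calibration idea SRICE applies to external
# tools: score a held-out calibration set, take an empirical quantile, and
# keep only tool outputs whose nonconformity falls below it.
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    # Finite-sample-corrected (1 - alpha) quantile of nonconformity scores.
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q, 1.0))

rng = np.random.default_rng(0)
cal_scores = rng.random(500)          # e.g. 1 - tool confidence on calibration data
tau = conformal_threshold(cal_scores, alpha=0.1)

new_scores = rng.random(5)
accept = new_scores <= tau            # tool outputs deemed reliable at level alpha
print(tau, accept)
```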
2503.08316
Georgios Katranis
Georgios Katranis, Frederik Plahl, Joachim Grimstadt, Ilshat Mamaev, Silvia Vock, Andrey Morozov
Dynamic Risk Assessment for Human-Robot Collaboration Using a Heuristics-based Approach
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human-robot collaboration (HRC) introduces significant safety challenges, particularly in protecting human operators working alongside collaborative robots (cobots). While current ISO standards emphasize risk assessment and hazard identification, these procedures are often insufficient for addressing the complexity of HRC environments, which involve numerous design factors and dynamic interactions. This publication presents a method for objective hazard analysis to support Dynamic Risk Assessment, extending beyond reliance on expert knowledge. The approach monitors scene parameters, such as the distance between human body parts and the cobot, as well as the cobot's Cartesian velocity. Additionally, an anthropocentric parameter focusing on the orientation of the human head within the collaborative workspace is introduced. These parameters are transformed into hazard indicators using non-linear heuristic functions. The hazard indicators are then aggregated to estimate the total hazard level of a given scenario. The proposed method is evaluated using an industrial dataset that depicts various interactions between a human operator and a cobot.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 11:25:47 GMT" } ]
2025-03-12T00:00:00
[ [ "Katranis", "Georgios", "" ], [ "Plahl", "Frederik", "" ], [ "Grimstadt", "Joachim", "" ], [ "Mamaev", "Ilshat", "" ], [ "Vock", "Silvia", "" ], [ "Morozov", "Andrey", "" ] ]
TITLE: Dynamic Risk Assessment for Human-Robot Collaboration Using a Heuristics-based Approach ABSTRACT: Human-robot collaboration (HRC) introduces significant safety challenges, particularly in protecting human operators working alongside collaborative robots (cobots). While current ISO standards emphasize risk assessment and hazard identification, these procedures are often insufficient for addressing the complexity of HRC environments, which involve numerous design factors and dynamic interactions. This publication presents a method for objective hazard analysis to support Dynamic Risk Assessment, extending beyond reliance on expert knowledge. The approach monitors scene parameters, such as the distance between human body parts and the cobot, as well as the cobot's Cartesian velocity. Additionally, an anthropocentric parameter focusing on the orientation of the human head within the collaborative workspace is introduced. These parameters are transformed into hazard indicators using non-linear heuristic functions. The hazard indicators are then aggregated to estimate the total hazard level of a given scenario. The proposed method is evaluated using an industrial dataset that depicts various interactions between a human operator and a cobot.
new_dataset
0.963609
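The heuristic-indicator idea in this record can be illustrated with toy functions: non-linear maps from scene parameters (human-cobot distance, Cartesian speed, head orientation) to hazard values in [0, 1], aggregated into one level. The functional forms, constants, and max-aggregation below are illustrative assumptions, not the paper's calibrated heuristics.

```python
# Sketch of heuristic hazard indicators as described: non-linear functions of
# scene parameters, aggregated into one hazard level. The specific functional
# forms and weights are illustrative assumptions, not the paper's.
import numpy as np

def distance_hazard(d_m, d_safe=1.0):
    # Rises steeply as the human-cobot distance drops below a safe margin.
    return 1.0 / (1.0 + np.exp(8.0 * (d_m - d_safe)))

def velocity_hazard(v_ms, v_max=1.5):
    # Saturating hazard in the cobot's Cartesian speed.
    return np.tanh(v_ms / v_max)

def head_orientation_hazard(angle_deg):
    # Higher hazard when the operator faces away from the cobot.
    return 0.5 * (1.0 - np.cos(np.radians(angle_deg)))

indicators = [distance_hazard(0.6), velocity_hazard(0.9), head_orientation_hazard(120)]
total_hazard = max(indicators)   # conservative aggregation (worst indicator)
print(round(total_hazard, 3))
```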
2503.08323
Morteza Rohanian
Morteza Rohanian, Tarun Mehra, Nicola Miglino, Farhad Nooralahzadeh, Michael Krauthammer, Andreas Wicki
Towards Scalable and Cross-Lingual Specialist Language Models for Oncology
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Clinical oncology generates vast, unstructured data that often contain inconsistencies, missing information, and ambiguities, making it difficult to extract reliable insights for data-driven decision-making. General-purpose large language models (LLMs) struggle with these challenges due to their lack of domain-specific reasoning, including specialized clinical terminology, context-dependent interpretations, and multi-modal data integration. We address these issues with an oncology-specialized, efficient, and adaptable NLP framework that combines instruction tuning, retrieval-augmented generation (RAG), and graph-based knowledge integration. Our lightweight models prove effective at oncology-specific tasks, such as named entity recognition (e.g., identifying cancer diagnoses), entity linking (e.g., linking entities to standardized ontologies), TNM staging, document classification (e.g., cancer subtype classification from pathology reports), and treatment response prediction. Our framework emphasizes adaptability and resource efficiency. We include minimal German instructions, collected at the University Hospital Zurich (USZ), to test whether small amounts of non-English language data can effectively transfer knowledge across languages. This approach mirrors our motivation for lightweight models, which balance strong performance with reduced computational costs, making them suitable for resource-limited healthcare settings. We validated our models on oncology datasets, demonstrating strong results in named entity recognition, relation extraction, and document classification.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 11:34:57 GMT" } ]
2025-03-12T00:00:00
[ [ "Rohanian", "Morteza", "" ], [ "Mehra", "Tarun", "" ], [ "Miglino", "Nicola", "" ], [ "Nooralahzadeh", "Farhad", "" ], [ "Krauthammer", "Michael", "" ], [ "Wicki", "Andreas", "" ] ]
TITLE: Towards Scalable and Cross-Lingual Specialist Language Models for Oncology ABSTRACT: Clinical oncology generates vast, unstructured data that often contain inconsistencies, missing information, and ambiguities, making it difficult to extract reliable insights for data-driven decision-making. General-purpose large language models (LLMs) struggle with these challenges due to their lack of domain-specific reasoning, including specialized clinical terminology, context-dependent interpretations, and multi-modal data integration. We address these issues with an oncology-specialized, efficient, and adaptable NLP framework that combines instruction tuning, retrieval-augmented generation (RAG), and graph-based knowledge integration. Our lightweight models prove effective at oncology-specific tasks, such as named entity recognition (e.g., identifying cancer diagnoses), entity linking (e.g., linking entities to standardized ontologies), TNM staging, document classification (e.g., cancer subtype classification from pathology reports), and treatment response prediction. Our framework emphasizes adaptability and resource efficiency. We include minimal German instructions, collected at the University Hospital Zurich (USZ), to test whether small amounts of non-English language data can effectively transfer knowledge across languages. This approach mirrors our motivation for lightweight models, which balance strong performance with reduced computational costs, making them suitable for resource-limited healthcare settings. We validated our models on oncology datasets, demonstrating strong results in named entity recognition, relation extraction, and document classification.
no_new_dataset
0.951504
2503.08328
Liang Yu
Liang Yu, Lai Tu, Xiang Bai
MFRS: A Multi-Frequency Reference Series Approach to Scalable and Accurate Time-Series Forecasting
null
null
null
null
cs.LG cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multivariate time-series forecasting holds immense value across diverse applications, requiring methods to effectively capture complex temporal and inter-variable dynamics. A key challenge lies in uncovering the intrinsic patterns that govern predictability, beyond conventional designs that focus on network architectures to explore latent relationships or temporal dependencies. Inspired by signal decomposition, this paper posits that time series predictability is derived from periodic characteristics at different frequencies. Consequently, we propose a novel time series forecasting method based on multi-frequency reference series correlation analysis. Through spectral analysis on long-term training data, we identify dominant spectral components and their harmonics to design base-pattern reference series. Unlike signal decomposition, which represents the original series as a linear combination of basis signals, our method uses a transformer model to compute cross-attention between the original series and reference series, capturing essential features for forecasting. Experiments on major open and synthetic datasets show state-of-the-art performance. Furthermore, by focusing on attention with a small number of reference series rather than pairwise variable attention, our method ensures scalability and broad applicability. The source code is available at: https://github.com/yuliang555/MFRS
[ { "version": "v1", "created": "Tue, 11 Mar 2025 11:40:14 GMT" } ]
2025-03-12T00:00:00
[ [ "Yu", "Liang", "" ], [ "Tu", "Lai", "" ], [ "Bai", "Xiang", "" ] ]
TITLE: MFRS: A Multi-Frequency Reference Series Approach to Scalable and Accurate Time-Series Forecasting ABSTRACT: Multivariate time-series forecasting holds immense value across diverse applications, requiring methods to effectively capture complex temporal and inter-variable dynamics. A key challenge lies in uncovering the intrinsic patterns that govern predictability, beyond conventional designs that focus on network architectures to explore latent relationships or temporal dependencies. Inspired by signal decomposition, this paper posits that time series predictability is derived from periodic characteristics at different frequencies. Consequently, we propose a novel time series forecasting method based on multi-frequency reference series correlation analysis. Through spectral analysis on long-term training data, we identify dominant spectral components and their harmonics to design base-pattern reference series. Unlike signal decomposition, which represents the original series as a linear combination of basis signals, our method uses a transformer model to compute cross-attention between the original series and reference series, capturing essential features for forecasting. Experiments on major open and synthetic datasets show state-of-the-art performance. Furthermore, by focusing on attention with a small number of reference series rather than pairwise variable attention, our method ensures scalability and broad applicability. The source code is available at: https://github.com/yuliang555/MFRS
no_new_dataset
0.944022
2503.08335
Soumya Jahagirdar
Soumya Shamarao Jahagirdar, Jayasree Saha, C V Jawahar
Prompt2LVideos: Exploring Prompts for Understanding Long-Form Multimodal Videos
CVIP 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Learning multimodal video understanding typically relies on datasets comprising video clips paired with manually annotated captions. However, this becomes even more challenging when dealing with long-form videos, lasting from minutes to hours, in educational and news domains due to the need for more annotators with subject expertise. Hence, there arises a need for automated solutions. Recent advancements in Large Language Models (LLMs) promise to capture concise and informative content that allows the comprehension of entire videos by leveraging Automatic Speech Recognition (ASR) and Optical Character Recognition (OCR) technologies. ASR provides textual content from audio, while OCR extracts textual content from specific frames. This paper introduces a dataset comprising long-form lectures and news videos. We present baseline approaches to understand their limitations on this dataset and advocate for exploring prompt engineering techniques to comprehend long-form multimodal video datasets comprehensively.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 11:47:48 GMT" } ]
2025-03-12T00:00:00
[ [ "Jahagirdar", "Soumya Shamarao", "" ], [ "Saha", "Jayasree", "" ], [ "Jawahar", "C V", "" ] ]
TITLE: Prompt2LVideos: Exploring Prompts for Understanding Long-Form Multimodal Videos ABSTRACT: Learning multimodal video understanding typically relies on datasets comprising video clips paired with manually annotated captions. However, this becomes even more challenging when dealing with long-form videos, lasting from minutes to hours, in educational and news domains due to the need for more annotators with subject expertise. Hence, there arises a need for automated solutions. Recent advancements in Large Language Models (LLMs) promise to capture concise and informative content that allows the comprehension of entire videos by leveraging Automatic Speech Recognition (ASR) and Optical Character Recognition (OCR) technologies. ASR provides textual content from audio, while OCR extracts textual content from specific frames. This paper introduces a dataset comprising long-form lectures and news videos. We present baseline approaches to understand their limitations on this dataset and advocate for exploring prompt engineering techniques to comprehend long-form multimodal video datasets comprehensively.
new_dataset
0.96862
2503.08336
Runwei Guan
Runwei Guan, Jianan Liu, Ningwei Ouyang, Daizong Liu, Xiaolou Sun, Lianqing Zheng, Ming Xu, Yutao Yue, Hui Xiong
Talk2PC: Enhancing 3D Visual Grounding through LiDAR and Radar Point Clouds Fusion for Autonomous Driving
14 pages, 11 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Embodied outdoor scene understanding forms the foundation for autonomous agents to perceive, analyze, and react to dynamic driving environments. However, existing 3D understanding is predominantly based on 2D Vision-Language Models (VLMs), collecting and processing limited scene-aware contexts. In contrast to 2D planar visual information, point cloud sensors like LiDAR offer rich depth information and fine-grained 3D representations of objects. Meanwhile, the emerging 4D millimeter-wave (mmWave) radar is capable of detecting the motion trend, velocity, and reflection intensity of each object. Therefore, the integration of these two modalities provides more flexible querying conditions for natural language, enabling more accurate 3D visual grounding. To this end, in this paper, we propose a novel method called TPCNet, the first outdoor 3D visual grounding model built upon the paradigm of prompt-guided point cloud sensor combination, including both LiDAR and radar contexts. To adaptively balance the features of these two sensors required by the prompt, we have designed a multi-fusion paradigm called Two-Stage Heterogeneous Modal Adaptive Fusion. Specifically, this paradigm initially employs Bidirectional Agent Cross-Attention (BACA), which feeds dual-sensor features, characterized by global receptive fields, to the text features for querying. Additionally, we have designed a Dynamic Gated Graph Fusion (DGGF) module to locate the regions of interest identified by the queries. To further enhance accuracy, we devise a C3D-RECHead based on the nearest object edge. Our experiments have demonstrated that our TPCNet, along with its individual modules, achieves state-of-the-art performance on both the Talk2Radar and Talk2Car datasets.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 11:48:27 GMT" } ]
2025-03-12T00:00:00
[ [ "Guan", "Runwei", "" ], [ "Liu", "Jianan", "" ], [ "Ouyang", "Ningwei", "" ], [ "Liu", "Daizong", "" ], [ "Sun", "Xiaolou", "" ], [ "Zheng", "Lianqing", "" ], [ "Xu", "Ming", "" ], [ "Yue", "Yutao", "" ], [ "Xiong", "Hui", "" ] ]
TITLE: Talk2PC: Enhancing 3D Visual Grounding through LiDAR and Radar Point Clouds Fusion for Autonomous Driving ABSTRACT: Embodied outdoor scene understanding forms the foundation for autonomous agents to perceive, analyze, and react to dynamic driving environments. However, existing 3D understanding is predominantly based on 2D Vision-Language Models (VLMs), collecting and processing limited scene-aware contexts. In contrast to 2D planar visual information, point cloud sensors like LiDAR offer rich depth information and fine-grained 3D representations of objects. Meanwhile, the emerging 4D millimeter-wave (mmWave) radar is capable of detecting the motion trend, velocity, and reflection intensity of each object. Therefore, the integration of these two modalities provides more flexible querying conditions for natural language, enabling more accurate 3D visual grounding. To this end, in this paper, we propose a novel method called TPCNet, the first outdoor 3D visual grounding model built upon the paradigm of prompt-guided point cloud sensor combination, including both LiDAR and radar contexts. To adaptively balance the features of these two sensors required by the prompt, we have designed a multi-fusion paradigm called Two-Stage Heterogeneous Modal Adaptive Fusion. Specifically, this paradigm initially employs Bidirectional Agent Cross-Attention (BACA), which feeds dual-sensor features, characterized by global receptive fields, to the text features for querying. Additionally, we have designed a Dynamic Gated Graph Fusion (DGGF) module to locate the regions of interest identified by the queries. To further enhance accuracy, we devise a C3D-RECHead based on the nearest object edge. Our experiments have demonstrated that our TPCNet, along with its individual modules, achieves state-of-the-art performance on both the Talk2Radar and Talk2Car datasets.
no_new_dataset
0.948058
2503.08346
Chanyoung Kim
Chanyoung Kim, Dayun Ju, Jinyeong Kim, Woojung Han, Roberto Alcover-Couso and Seong Jae Hwang
Pathology-Aware Adaptive Watermarking for Text-Driven Medical Image Synthesis
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
As recent text-conditioned diffusion models have enabled the generation of high-quality images, concerns over their potential misuse have also grown. This issue is critical in the medical domain, where text-conditioned generated medical images could enable insurance fraud or falsified records, highlighting the urgent need for reliable safeguards against unethical use. While watermarking techniques have emerged as a promising solution in general image domains, their direct application to medical imaging presents significant challenges. A key challenge is preserving fine-grained disease manifestations, as even minor distortions from a watermark may lead to clinical misinterpretation, which compromises diagnostic integrity. To overcome this gap, we present MedSign, a deep learning-based watermarking framework specifically designed for text-to-medical image synthesis, which preserves pathologically significant regions by adaptively adjusting watermark strength. Specifically, we generate a pathology localization map using cross-attention between medical text tokens and the diffusion denoising network, aggregating token-wise attention across layers, heads, and time steps. Leveraging this map, we optimize the LDM decoder to incorporate watermarking during image synthesis, ensuring cohesive integration while minimizing interference in diagnostically critical regions. Experimental results show that our MedSign preserves diagnostic integrity while ensuring watermark robustness, achieving state-of-the-art performance in image quality and detection accuracy on MIMIC-CXR and OIA-ODIR datasets.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 11:55:14 GMT" } ]
2025-03-12T00:00:00
[ [ "Kim", "Chanyoung", "" ], [ "Ju", "Dayun", "" ], [ "Kim", "Jinyeong", "" ], [ "Han", "Woojung", "" ], [ "Alcover-Couso", "Roberto", "" ], [ "Hwang", "Seong Jae", "" ] ]
TITLE: Pathology-Aware Adaptive Watermarking for Text-Driven Medical Image Synthesis ABSTRACT: As recent text-conditioned diffusion models have enabled the generation of high-quality images, concerns over their potential misuse have also grown. This issue is critical in the medical domain, where text-conditioned generated medical images could enable insurance fraud or falsified records, highlighting the urgent need for reliable safeguards against unethical use. While watermarking techniques have emerged as a promising solution in general image domains, their direct application to medical imaging presents significant challenges. A key challenge is preserving fine-grained disease manifestations, as even minor distortions from a watermark may lead to clinical misinterpretation, which compromises diagnostic integrity. To overcome this gap, we present MedSign, a deep learning-based watermarking framework specifically designed for text-to-medical image synthesis, which preserves pathologically significant regions by adaptively adjusting watermark strength. Specifically, we generate a pathology localization map using cross-attention between medical text tokens and the diffusion denoising network, aggregating token-wise attention across layers, heads, and time steps. Leveraging this map, we optimize the LDM decoder to incorporate watermarking during image synthesis, ensuring cohesive integration while minimizing interference in diagnostically critical regions. Experimental results show that our MedSign preserves diagnostic integrity while ensuring watermark robustness, achieving state-of-the-art performance in image quality and detection accuracy on MIMIC-CXR and OIA-ODIR datasets.
no_new_dataset
0.943452
2503.08348
Sangram Patil
H. P. Khandagale, Sangram Patil, V. S. Gavali, S. V. Chavan, P. P. Halkarnikar, Prateek A. Meshram
Design and Implementation of FourCropNet: A CNN-Based System for Efficient Multi-Crop Disease Detection and Management
null
Journal of Information Systems Engineering and Management 2025, 10(7s) e-ISSN: 2468-4376
10.52783/jisem.v10i7s.877
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Plant disease detection is a critical task in agriculture, directly impacting crop yield, food security, and sustainable farming practices. This study proposes FourCropNet, a novel deep learning model designed to detect diseases in multiple crops, including CottonLeaf, Grape, Soybean, and Corn. The model leverages an advanced architecture comprising residual blocks for efficient feature extraction, attention mechanisms to enhance focus on disease-relevant regions, and lightweight layers for computational efficiency. These components collectively enable FourCropNet to achieve superior performance across varying datasets and class complexities, from single-crop datasets to combined datasets with 15 classes. The proposed model was evaluated on diverse datasets, demonstrating high accuracy, specificity, sensitivity, and F1 scores. Notably, FourCropNet achieved the highest accuracy of 99.7% for Grape, 99.5% for Corn, and 95.3% for the combined dataset. Its scalability and ability to generalize across datasets underscore its robustness. Comparative analysis shows that FourCropNet consistently outperforms state-of-the-art models such as MobileNet, VGG16, and EfficientNet across various metrics. FourCropNet's innovative design and consistent performance make it a reliable solution for real-time disease detection in agriculture. This model has the potential to assist farmers in timely disease diagnosis, reducing economic losses and promoting sustainable agricultural practices.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 12:00:56 GMT" } ]
2025-03-12T00:00:00
[ [ "Khandagale", "H. P.", "" ], [ "Patil", "Sangram", "" ], [ "Gavali", "V. S.", "" ], [ "Chavan", "S. V.", "" ], [ "Halkarnikar", "P. P.", "" ], [ "Meshram", "Prateek A.", "" ] ]
TITLE: Design and Implementation of FourCropNet: A CNN-Based System for Efficient Multi-Crop Disease Detection and Management ABSTRACT: Plant disease detection is a critical task in agriculture, directly impacting crop yield, food security, and sustainable farming practices. This study proposes FourCropNet, a novel deep learning model designed to detect diseases in multiple crops, including CottonLeaf, Grape, Soybean, and Corn. The model leverages an advanced architecture comprising residual blocks for efficient feature extraction, attention mechanisms to enhance focus on disease-relevant regions, and lightweight layers for computational efficiency. These components collectively enable FourCropNet to achieve superior performance across varying datasets and class complexities, from single-crop datasets to combined datasets with 15 classes. The proposed model was evaluated on diverse datasets, demonstrating high accuracy, specificity, sensitivity, and F1 scores. Notably, FourCropNet achieved the highest accuracy of 99.7% for Grape, 99.5% for Corn, and 95.3% for the combined dataset. Its scalability and ability to generalize across datasets underscore its robustness. Comparative analysis shows that FourCropNet consistently outperforms state-of-the-art models such as MobileNet, VGG16, and EfficientNet across various metrics. FourCropNet's innovative design and consistent performance make it a reliable solution for real-time disease detection in agriculture. This model has the potential to assist farmers in timely disease diagnosis, reducing economic losses and promoting sustainable agricultural practices.
no_new_dataset
0.945197
2503.08358
Md Faizal Karim
Md Faizal Karim, Mohammed Saad Hashmi, Shreya Bollimuntha, Mahesh Reddy Tapeti, Gaurav Singh, Nagamanikandan Govindan, K Madhava Krishna
DG16M: A Large-Scale Dataset for Dual-Arm Grasping with Force-Optimized Grasps
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Dual-arm robotic grasping is crucial for handling large objects that require stable and coordinated manipulation. While single-arm grasping has been extensively studied, datasets tailored for dual-arm settings remain scarce. We introduce a large-scale dataset of 16 million dual-arm grasps, evaluated under improved force-closure constraints. Additionally, we develop a benchmark dataset containing 300 objects with approximately 30,000 grasps, evaluated in a physics simulation environment, providing a better grasp quality assessment for dual-arm grasp synthesis methods. Finally, we demonstrate the effectiveness of our dataset by training a Dual-Arm Grasp Classifier network that outperforms the state-of-the-art methods by 15\%, achieving higher grasp success rates and improved generalization across objects.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 12:15:20 GMT" } ]
2025-03-12T00:00:00
[ [ "Karim", "Md Faizal", "" ], [ "Hashmi", "Mohammed Saad", "" ], [ "Bollimuntha", "Shreya", "" ], [ "Tapeti", "Mahesh Reddy", "" ], [ "Singh", "Gaurav", "" ], [ "Govindan", "Nagamanikandan", "" ], [ "Krishna", "K Madhava", "" ] ]
TITLE: DG16M: A Large-Scale Dataset for Dual-Arm Grasping with Force-Optimized Grasps ABSTRACT: Dual-arm robotic grasping is crucial for handling large objects that require stable and coordinated manipulation. While single-arm grasping has been extensively studied, datasets tailored for dual-arm settings remain scarce. We introduce a large-scale dataset of 16 million dual-arm grasps, evaluated under improved force-closure constraints. Additionally, we develop a benchmark dataset containing 300 objects with approximately 30,000 grasps, evaluated in a physics simulation environment, providing a better grasp quality assessment for dual-arm grasp synthesis methods. Finally, we demonstrate the effectiveness of our dataset by training a Dual-Arm Grasp Classifier network that outperforms the state-of-the-art methods by 15\%, achieving higher grasp success rates and improved generalization across objects.
new_dataset
0.958731
2503.08363
Zhaiyu Chen
Zhaiyu Chen, Yuqing Wang, Liangliang Nan, Xiao Xiang Zhu
Parametric Point Cloud Completion for Polygonal Surface Reconstruction
CVPR 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Existing polygonal surface reconstruction methods heavily depend on input completeness and struggle with incomplete point clouds. We argue that while current point cloud completion techniques may recover missing points, they are not optimized for polygonal surface reconstruction, where the parametric representation of underlying surfaces remains overlooked. To address this gap, we introduce parametric completion, a novel paradigm for point cloud completion, which recovers parametric primitives instead of individual points to convey high-level geometric structures. Our presented approach, PaCo, enables high-quality polygonal surface reconstruction by leveraging plane proxies that encapsulate both plane parameters and inlier points, proving particularly effective in challenging scenarios with highly incomplete data. Comprehensive evaluations of our approach on the ABC dataset establish its effectiveness with superior performance and set a new standard for polygonal surface reconstruction from incomplete data. Project page: https://parametric-completion.github.io.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 12:20:24 GMT" } ]
2025-03-12T00:00:00
[ [ "Chen", "Zhaiyu", "" ], [ "Wang", "Yuqing", "" ], [ "Nan", "Liangliang", "" ], [ "Zhu", "Xiao Xiang", "" ] ]
TITLE: Parametric Point Cloud Completion for Polygonal Surface Reconstruction ABSTRACT: Existing polygonal surface reconstruction methods heavily depend on input completeness and struggle with incomplete point clouds. We argue that while current point cloud completion techniques may recover missing points, they are not optimized for polygonal surface reconstruction, where the parametric representation of underlying surfaces remains overlooked. To address this gap, we introduce parametric completion, a novel paradigm for point cloud completion, which recovers parametric primitives instead of individual points to convey high-level geometric structures. Our presented approach, PaCo, enables high-quality polygonal surface reconstruction by leveraging plane proxies that encapsulate both plane parameters and inlier points, proving particularly effective in challenging scenarios with highly incomplete data. Comprehensive evaluations of our approach on the ABC dataset establish its effectiveness with superior performance and set a new standard for polygonal surface reconstruction from incomplete data. Project page: https://parametric-completion.github.io.
no_new_dataset
0.952618
2503.08367
Runling Long
Runling Long, Yunlong Wang, Jia Wan, Xiang Deng, Xinting Zhu, Weili Guan, Antoni B. Chan, Liqiang Nie
Embodied Crowd Counting
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Occlusion is one of the fundamental challenges in crowd counting. In the community, various data-driven approaches have been developed to address this issue, yet their effectiveness is limited. This is mainly because most existing crowd counting datasets on which the methods are trained are based on passive cameras, restricting their ability to fully sense the environment. Recently, embodied navigation methods have shown significant potential in precise object detection in interactive scenes. These methods incorporate active camera settings, holding promise in addressing the fundamental issues in crowd counting. However, most existing methods are designed for indoor navigation, showing unknown performance in analyzing complex object distribution in large-scale scenes, such as crowds. Besides, most existing embodied navigation datasets are indoor scenes with limited scale and object quantity, preventing them from being introduced into dense crowd analysis. Based on this, a novel task, Embodied Crowd Counting (ECC), is proposed. We first build up an interactive simulator, Embodied Crowd Counting Dataset (ECCD), which enables large-scale scenes and large object quantities. A prior probability distribution that approximates realistic crowd distribution is introduced to generate crowds. Then, a zero-shot navigation method (ZECC) is proposed. This method contains an MLLM-driven coarse-to-fine navigation mechanism, enabling active Z-axis exploration, and a normal-line-based crowd distribution analysis method for fine counting. Experimental results against baselines show that the proposed method achieves the best trade-off between counting accuracy and navigation cost.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 12:23:34 GMT" } ]
2025-03-12T00:00:00
[ [ "Long", "Runling", "" ], [ "Wang", "Yunlong", "" ], [ "Wan", "Jia", "" ], [ "Deng", "Xiang", "" ], [ "Zhu", "Xinting", "" ], [ "Guan", "Weili", "" ], [ "Chan", "Antoni B.", "" ], [ "Nie", "Liqiang", "" ] ]
TITLE: Embodied Crowd Counting ABSTRACT: Occlusion is one of the fundamental challenges in crowd counting. In the community, various data-driven approaches have been developed to address this issue, yet their effectiveness is limited. This is mainly because most existing crowd counting datasets on which the methods are trained are based on passive cameras, restricting their ability to fully sense the environment. Recently, embodied navigation methods have shown significant potential in precise object detection in interactive scenes. These methods incorporate active camera settings, holding promise in addressing the fundamental issues in crowd counting. However, most existing methods are designed for indoor navigation, showing unknown performance in analyzing complex object distribution in large-scale scenes, such as crowds. Besides, most existing embodied navigation datasets are indoor scenes with limited scale and object quantity, preventing them from being introduced into dense crowd analysis. Based on this, a novel task, Embodied Crowd Counting (ECC), is proposed. We first build up an interactive simulator, Embodied Crowd Counting Dataset (ECCD), which enables large-scale scenes and large object quantities. A prior probability distribution that approximates realistic crowd distribution is introduced to generate crowds. Then, a zero-shot navigation method (ZECC) is proposed. This method contains an MLLM-driven coarse-to-fine navigation mechanism, enabling active Z-axis exploration, and a normal-line-based crowd distribution analysis method for fine counting. Experimental results against baselines show that the proposed method achieves the best trade-off between counting accuracy and navigation cost.
new_dataset
0.841956
2503.08368
Chaoquan Jiang
Chaoquan Jiang, Yunfan Yang, Rui Hu, Jitao Sang
Debiased Prompt Tuning in Vision-Language Model without Annotations
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prompt tuning of Vision-Language Models (VLMs), such as CLIP, has demonstrated the ability to rapidly adapt to various downstream tasks. However, recent studies indicate that tuned VLMs may suffer from the problem of spurious correlations, where the model relies on spurious features (e.g. background and gender) in the data. This may lead to the model having worse robustness on out-of-distribution data. Standard methods for eliminating spurious correlation typically require us to know the spurious attribute labels of each sample, which is hard in the real world. In this work, we explore improving the group robustness of prompt tuning in VLMs without relying on manual annotation of spurious features. We leverage the zero-shot image recognition ability of VLMs to identify spurious features, thus avoiding the cost of manual annotation. By leveraging pseudo-spurious attribute annotations, we further propose a method to automatically adjust the training weights of different groups. Extensive experiments show that our approach efficiently improves the worst-group accuracy on CelebA, Waterbirds, and MetaShift datasets, achieving the smallest robustness gap between the worst-group accuracy and the overall accuracy.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 12:24:54 GMT" } ]
2025-03-12T00:00:00
[ [ "Jiang", "Chaoquan", "" ], [ "Yang", "Yunfan", "" ], [ "Hu", "Rui", "" ], [ "Sang", "Jitao", "" ] ]
TITLE: Debiased Prompt Tuning in Vision-Language Model without Annotations ABSTRACT: Prompt tuning of Vision-Language Models (VLMs), such as CLIP, has demonstrated the ability to rapidly adapt to various downstream tasks. However, recent studies indicate that tuned VLMs may suffer from the problem of spurious correlations, where the model relies on spurious features (e.g. background and gender) in the data. This may lead to the model having worse robustness on out-of-distribution data. Standard methods for eliminating spurious correlation typically require us to know the spurious attribute labels of each sample, which is hard in the real world. In this work, we explore improving the group robustness of prompt tuning in VLMs without relying on manual annotation of spurious features. We leverage the zero-shot image recognition ability of VLMs to identify spurious features, thus avoiding the cost of manual annotation. By leveraging pseudo-spurious attribute annotations, we further propose a method to automatically adjust the training weights of different groups. Extensive experiments show that our approach efficiently improves the worst-group accuracy on CelebA, Waterbirds, and MetaShift datasets, achieving the smallest robustness gap between the worst-group accuracy and the overall accuracy.
no_new_dataset
0.948489
2503.08370
Xucheng Guo
Xucheng Guo, Yiran Shen, Xiaofang Xiao, Yuanfeng Zhou, Lin Wang
Ev-Layout: A Large-scale Event-based Multi-modal Dataset for Indoor Layout Estimation and Tracking
null
null
null
null
cs.GR cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper presents Ev-Layout, a novel large-scale event-based multi-modal dataset designed for indoor layout estimation and tracking. Ev-Layout makes key contributions to the community by (1) utilizing a hybrid data collection platform (with a head-mounted display and VR interface) that integrates both RGB and bio-inspired event cameras to capture indoor layouts in motion, and (2) incorporating time-series data from inertial measurement units (IMUs) and ambient lighting conditions recorded during data collection to highlight the potential impact of motion speed and lighting on layout estimation accuracy. The dataset consists of 2.5K sequences, including over 771.3K RGB images and 10 billion event data points. Of these, 39K images are annotated with indoor layouts, enabling research in both event-based and video-based indoor layout estimation. Based on the dataset, we propose an event-based layout estimation pipeline with a novel event-temporal distribution feature module to effectively aggregate the spatio-temporal information from events. Additionally, we introduce a spatio-temporal feature fusion module that can be easily integrated into a transformer module for fusion purposes. Finally, we conduct benchmarking and extensive experiments on the Ev-Layout dataset, demonstrating that our approach significantly improves the accuracy of dynamic indoor layout estimation compared to existing event-based methods.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 12:26:39 GMT" } ]
2025-03-12T00:00:00
[ [ "Guo", "Xucheng", "" ], [ "Shen", "Yiran", "" ], [ "Xiao", "Xiaofang", "" ], [ "Zhou", "Yuanfeng", "" ], [ "Wang", "Lin", "" ] ]
TITLE: Ev-Layout: A Large-scale Event-based Multi-modal Dataset for Indoor Layout Estimation and Tracking ABSTRACT: This paper presents Ev-Layout, a novel large-scale event-based multi-modal dataset designed for indoor layout estimation and tracking. Ev-Layout makes key contributions to the community by (1) utilizing a hybrid data collection platform (with a head-mounted display and VR interface) that integrates both RGB and bio-inspired event cameras to capture indoor layouts in motion, and (2) incorporating time-series data from inertial measurement units (IMUs) and ambient lighting conditions recorded during data collection to highlight the potential impact of motion speed and lighting on layout estimation accuracy. The dataset consists of 2.5K sequences, including over 771.3K RGB images and 10 billion event data points. Of these, 39K images are annotated with indoor layouts, enabling research in both event-based and video-based indoor layout estimation. Based on the dataset, we propose an event-based layout estimation pipeline with a novel event-temporal distribution feature module to effectively aggregate the spatio-temporal information from events. Additionally, we introduce a spatio-temporal feature fusion module that can be easily integrated into a transformer module for fusion purposes. Finally, we conduct benchmarking and extensive experiments on the Ev-Layout dataset, demonstrating that our approach significantly improves the accuracy of dynamic indoor layout estimation compared to existing event-based methods.
new_dataset
0.968171
2503.08371
Bariscan Bozkurt
Bariscan Bozkurt, Ben Deaner, Dimitri Meunier, Liyuan Xu, Arthur Gretton
Density Ratio-based Proxy Causal Learning Without Density Ratios
AISTATS 2025 accepted, 81 pages
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
We address the setting of Proxy Causal Learning (PCL), which has the goal of estimating causal effects from observed data in the presence of hidden confounding. Proxy methods accomplish this task using two proxy variables related to the latent confounder: a treatment proxy (related to the treatment) and an outcome proxy (related to the outcome). Two approaches have been proposed to perform causal effect estimation given proxy variables; however, only one of these has found mainstream acceptance, since the other was understood to require density ratio estimation, a challenging task in high dimensions. In the present work, we propose a practical and effective implementation of the second approach, which bypasses explicit density ratio estimation and is suitable for continuous and high-dimensional treatments. We employ kernel ridge regression to derive estimators, resulting in simple closed-form solutions for dose-response and conditional dose-response curves, along with consistency guarantees. Our methods empirically demonstrate superior or comparable performance to existing frameworks on synthetic and real-world datasets.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 12:27:54 GMT" } ]
2025-03-12T00:00:00
[ [ "Bozkurt", "Bariscan", "" ], [ "Deaner", "Ben", "" ], [ "Meunier", "Dimitri", "" ], [ "Xu", "Liyuan", "" ], [ "Gretton", "Arthur", "" ] ]
TITLE: Density Ratio-based Proxy Causal Learning Without Density Ratios ABSTRACT: We address the setting of Proxy Causal Learning (PCL), which has the goal of estimating causal effects from observed data in the presence of hidden confounding. Proxy methods accomplish this task using two proxy variables related to the latent confounder: a treatment proxy (related to the treatment) and an outcome proxy (related to the outcome). Two approaches have been proposed to perform causal effect estimation given proxy variables; however, only one of these has found mainstream acceptance, since the other was understood to require density ratio estimation, a challenging task in high dimensions. In the present work, we propose a practical and effective implementation of the second approach, which bypasses explicit density ratio estimation and is suitable for continuous and high-dimensional treatments. We employ kernel ridge regression to derive estimators, resulting in simple closed-form solutions for dose-response and conditional dose-response curves, along with consistency guarantees. Our methods empirically demonstrate superior or comparable performance to existing frameworks on synthetic and real-world datasets.
no_new_dataset
0.949342
2503.08373
Fabian Isensee
Fabian Isensee, Maximilian Rokuss, Lars Kr\"amer, Stefan Dinkelacker, Ashis Ravindran, Florian Stritzke, Benjamin Hamm, Tassilo Wald, Moritz Langenberg, Constantin Ulrich, Jonathan Deissler, Ralf Floca, Klaus Maier-Hein
nnInteractive: Redefining 3D Promptable Segmentation
Fabian Isensee, Maximilian Rokuss and Lars Kr\"amer contributed equally. Each co-first author may list themselves as lead author on their CV
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Accurate and efficient 3D segmentation is essential for both clinical and research applications. While foundation models like SAM have revolutionized interactive segmentation, their 2D design and domain shift limitations make them ill-suited for 3D medical images. Current adaptations address some of these challenges but remain limited, either lacking volumetric awareness, offering restricted interactivity, or supporting only a small set of structures and modalities. Usability also remains a challenge, as current tools are rarely integrated into established imaging platforms and often rely on cumbersome web-based interfaces with restricted functionality. We introduce nnInteractive, the first comprehensive 3D interactive open-set segmentation method. It supports diverse prompts, including points, scribbles, boxes, and a novel lasso prompt, while leveraging intuitive 2D interactions to generate full 3D segmentations. Trained on 120+ diverse volumetric 3D datasets (CT, MRI, PET, 3D Microscopy, etc.), nnInteractive sets a new state-of-the-art in accuracy, adaptability, and usability. Crucially, it is the first method integrated into widely used image viewers (e.g., Napari, MITK), ensuring broad accessibility for real-world clinical and research applications. Extensive benchmarking demonstrates that nnInteractive far surpasses existing methods, setting a new standard for AI-driven interactive 3D segmentation. nnInteractive is publicly available: https://github.com/MIC-DKFZ/napari-nninteractive (Napari plugin), https://www.mitk.org/MITK-nnInteractive (MITK integration), https://github.com/MIC-DKFZ/nnInteractive (Python backend).
[ { "version": "v1", "created": "Tue, 11 Mar 2025 12:30:34 GMT" } ]
2025-03-12T00:00:00
[ [ "Isensee", "Fabian", "" ], [ "Rokuss", "Maximilian", "" ], [ "Krämer", "Lars", "" ], [ "Dinkelacker", "Stefan", "" ], [ "Ravindran", "Ashis", "" ], [ "Stritzke", "Florian", "" ], [ "Hamm", "Benjamin", "" ], [ "Wald", "Tassilo", "" ], [ "Langenberg", "Moritz", "" ], [ "Ulrich", "Constantin", "" ], [ "Deissler", "Jonathan", "" ], [ "Floca", "Ralf", "" ], [ "Maier-Hein", "Klaus", "" ] ]
TITLE: nnInteractive: Redefining 3D Promptable Segmentation ABSTRACT: Accurate and efficient 3D segmentation is essential for both clinical and research applications. While foundation models like SAM have revolutionized interactive segmentation, their 2D design and domain shift limitations make them ill-suited for 3D medical images. Current adaptations address some of these challenges but remain limited, either lacking volumetric awareness, offering restricted interactivity, or supporting only a small set of structures and modalities. Usability also remains a challenge, as current tools are rarely integrated into established imaging platforms and often rely on cumbersome web-based interfaces with restricted functionality. We introduce nnInteractive, the first comprehensive 3D interactive open-set segmentation method. It supports diverse prompts, including points, scribbles, boxes, and a novel lasso prompt, while leveraging intuitive 2D interactions to generate full 3D segmentations. Trained on 120+ diverse volumetric 3D datasets (CT, MRI, PET, 3D Microscopy, etc.), nnInteractive sets a new state-of-the-art in accuracy, adaptability, and usability. Crucially, it is the first method integrated into widely used image viewers (e.g., Napari, MITK), ensuring broad accessibility for real-world clinical and research applications. Extensive benchmarking demonstrates that nnInteractive far surpasses existing methods, setting a new standard for AI-driven interactive 3D segmentation. nnInteractive is publicly available: https://github.com/MIC-DKFZ/napari-nninteractive (Napari plugin), https://www.mitk.org/MITK-nnInteractive (MITK integration), https://github.com/MIC-DKFZ/nnInteractive (Python backend).
no_new_dataset
0.944485
2503.08379
Leandro Car\'isio Fernandes
Leandro Car\'isio Fernandes, Leandro dos Santos Ribeiro, Marcos Vin\'icius Borela de Castro, Leonardo Augusto da Silva Pacheco, Edans Fl\'avius de Oliveira Sandes
JurisTCU: A Brazilian Portuguese Information Retrieval Dataset with Query Relevance Judgments
21 pages
null
null
null
cs.IR cs.CL
http://creativecommons.org/licenses/by/4.0/
This paper introduces JurisTCU, a Brazilian Portuguese dataset for legal information retrieval (LIR). The dataset is freely available and consists of 16,045 jurisprudential documents from the Brazilian Federal Court of Accounts, along with 150 queries annotated with relevance judgments. It addresses the scarcity of Portuguese-language LIR datasets with query relevance annotations. The queries are organized into three groups: real user keyword-based queries, synthetic keyword-based queries, and synthetic question-based queries. Relevance judgments were produced through a hybrid approach combining LLM-based scoring with expert domain validation. We used JurisTCU in 14 experiments using lexical search (document expansion methods) and semantic search (BERT-based and OpenAI embeddings). We show that the document expansion methods significantly improve the performance of standard BM25 search on this dataset, with improvements exceeding 45% in P@10, R@10, and nDCG@10 metrics when evaluating short keyword-based queries. Among the embedding models, the OpenAI models produced the best results, with improvements of approximately 70% in P@10, R@10, and nDCG@10 metrics for short keyword-based queries, suggesting that these dense embeddings capture semantic relationships in this domain, surpassing the reliance on lexical terms. Besides providing the Portuguese-language IR research community with a dataset suitable for evaluating search systems, our results also contribute to enhancing a search system highly relevant to Brazilian citizens.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 12:39:04 GMT" } ]
2025-03-12T00:00:00
[ [ "Fernandes", "Leandro Carísio", "" ], [ "Ribeiro", "Leandro dos Santos", "" ], [ "de Castro", "Marcos Vinícius Borela", "" ], [ "Pacheco", "Leonardo Augusto da Silva", "" ], [ "Sandes", "Edans Flávius de Oliveira", "" ] ]
TITLE: JurisTCU: A Brazilian Portuguese Information Retrieval Dataset with Query Relevance Judgments ABSTRACT: This paper introduces JurisTCU, a Brazilian Portuguese dataset for legal information retrieval (LIR). The dataset is freely available and consists of 16,045 jurisprudential documents from the Brazilian Federal Court of Accounts, along with 150 queries annotated with relevance judgments. It addresses the scarcity of Portuguese-language LIR datasets with query relevance annotations. The queries are organized into three groups: real user keyword-based queries, synthetic keyword-based queries, and synthetic question-based queries. Relevance judgments were produced through a hybrid approach combining LLM-based scoring with expert domain validation. We used JurisTCU in 14 experiments using lexical search (document expansion methods) and semantic search (BERT-based and OpenAI embeddings). We show that the document expansion methods significantly improve the performance of standard BM25 search on this dataset, with improvements exceeding 45% in P@10, R@10, and nDCG@10 metrics when evaluating short keyword-based queries. Among the embedding models, the OpenAI models produced the best results, with improvements of approximately 70% in P@10, R@10, and nDCG@10 metrics for short keyword-based queries, suggesting that these dense embeddings capture semantic relationships in this domain, surpassing the reliance on lexical terms. Besides providing the Portuguese-language IR research community with a dataset suitable for evaluating search systems, our results also contribute to enhancing a search system highly relevant to Brazilian citizens.
new_dataset
0.976423
2503.08382
Jesus Zarzar
Jesus Zarzar, Tom Monnier, Roman Shapovalov, Andrea Vedaldi, David Novotny
Twinner: Shining Light on Digital Twins in a Few Snaps
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present the first large reconstruction model, Twinner, capable of recovering a scene's illumination as well as an object's geometry and material properties from only a few posed images. Twinner is based on the Large Reconstruction Model and innovates in three key ways: 1) We introduce a memory-efficient voxel-grid transformer whose memory scales only quadratically with the size of the voxel grid. 2) To deal with scarcity of high-quality ground-truth PBR-shaded models, we introduce a large fully-synthetic dataset of procedurally-generated PBR-textured objects lit with varied illumination. 3) To narrow the synthetic-to-real gap, we finetune the model on real-life datasets by means of a differentiable physically-based shading model, eschewing the need for ground-truth illumination or material properties, which are challenging to obtain in real life. We demonstrate the efficacy of our model on the real-life StanfordORB benchmark where, given few input views, we achieve reconstruction quality significantly superior to existing feedforward reconstruction networks, and comparable to significantly slower per-scene optimization methods.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 12:43:11 GMT" } ]
2025-03-12T00:00:00
[ [ "Zarzar", "Jesus", "" ], [ "Monnier", "Tom", "" ], [ "Shapovalov", "Roman", "" ], [ "Vedaldi", "Andrea", "" ], [ "Novotny", "David", "" ] ]
TITLE: Twinner: Shining Light on Digital Twins in a Few Snaps ABSTRACT: We present the first large reconstruction model, Twinner, capable of recovering a scene's illumination as well as an object's geometry and material properties from only a few posed images. Twinner is based on the Large Reconstruction Model and innovates in three key ways: 1) We introduce a memory-efficient voxel-grid transformer whose memory scales only quadratically with the size of the voxel grid. 2) To deal with scarcity of high-quality ground-truth PBR-shaded models, we introduce a large fully-synthetic dataset of procedurally-generated PBR-textured objects lit with varied illumination. 3) To narrow the synthetic-to-real gap, we finetune the model on real-life datasets by means of a differentiable physically-based shading model, eschewing the need for ground-truth illumination or material properties, which are challenging to obtain in real life. We demonstrate the efficacy of our model on the real-life StanfordORB benchmark where, given few input views, we achieve reconstruction quality significantly superior to existing feedforward reconstruction networks, and comparable to significantly slower per-scene optimization methods.
new_dataset
0.957118
2503.08384
Susu Sun
Susu Sun, Dominique van Midden, Geert Litjens, Christian F. Baumgartner
Prototype-Based Multiple Instance Learning for Gigapixel Whole Slide Image Classification
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Multiple Instance Learning (MIL) methods have succeeded remarkably in histopathology whole slide image (WSI) analysis. However, most MIL models only offer attention-based explanations that do not faithfully capture the model's decision mechanism and do not allow human-model interaction. To address these limitations, we introduce ProtoMIL, an inherently interpretable MIL model for WSI analysis that offers user-friendly explanations and supports human intervention. Our approach employs a sparse autoencoder to discover human-interpretable concepts from the image feature space, which are then used to train ProtoMIL. The model represents predictions as linear combinations of concepts, making the decision process transparent. Furthermore, ProtoMIL allows users to perform model interventions by altering the input concepts. Experiments on two widely used pathology datasets demonstrate that ProtoMIL achieves a classification performance comparable to state-of-the-art MIL models while offering intuitively understandable explanations. Moreover, we demonstrate that our method can eliminate reliance on diagnostically irrelevant information via human intervention, guiding the model toward being right for the right reason. Code will be publicly available at https://github.com/ss-sun/ProtoMIL.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 12:44:03 GMT" } ]
2025-03-12T00:00:00
[ [ "Sun", "Susu", "" ], [ "van Midden", "Dominique", "" ], [ "Litjens", "Geert", "" ], [ "Baumgartner", "Christian F.", "" ] ]
TITLE: Prototype-Based Multiple Instance Learning for Gigapixel Whole Slide Image Classification ABSTRACT: Multiple Instance Learning (MIL) methods have succeeded remarkably in histopathology whole slide image (WSI) analysis. However, most MIL models only offer attention-based explanations that do not faithfully capture the model's decision mechanism and do not allow human-model interaction. To address these limitations, we introduce ProtoMIL, an inherently interpretable MIL model for WSI analysis that offers user-friendly explanations and supports human intervention. Our approach employs a sparse autoencoder to discover human-interpretable concepts from the image feature space, which are then used to train ProtoMIL. The model represents predictions as linear combinations of concepts, making the decision process transparent. Furthermore, ProtoMIL allows users to perform model interventions by altering the input concepts. Experiments on two widely used pathology datasets demonstrate that ProtoMIL achieves a classification performance comparable to state-of-the-art MIL models while offering intuitively understandable explanations. Moreover, we demonstrate that our method can eliminate reliance on diagnostically irrelevant information via human intervention, guiding the model toward being right for the right reason. Code will be publicly available at https://github.com/ss-sun/ProtoMIL.
no_new_dataset
0.944791
2503.08388
Wa\"el Doulazmi
Valentin Charraut and Thomas Tournaire and Wa\"el Doulazmi and Thibault Buhet
V-Max: Making RL practical for Autonomous Driving
null
null
null
null
cs.LG cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
Learning-based decision-making has the potential to enable generalizable Autonomous Driving (AD) policies, reducing the engineering overhead of rule-based approaches. Imitation Learning (IL) remains the dominant paradigm, benefiting from large-scale human demonstration datasets, but it suffers from inherent limitations such as distribution shift and imitation gaps. Reinforcement Learning (RL) presents a promising alternative, yet its adoption in AD remains limited due to the lack of standardized and efficient research frameworks. To this end, we introduce V-Max, an open research framework providing all the necessary tools to make RL practical for AD. V-Max is built on Waymax, a hardware-accelerated AD simulator designed for large-scale experimentation. We extend it using ScenarioNet's approach, enabling the fast simulation of diverse AD datasets. V-Max integrates a set of observation and reward functions, transformer-based encoders, and training pipelines. Additionally, it includes adversarial evaluation settings and an extensive set of evaluation metrics. Through a large-scale benchmark, we analyze how network architectures, observation functions, training data, and reward shaping impact RL performance.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 12:53:24 GMT" } ]
2025-03-12T00:00:00
[ [ "Charraut", "Valentin", "" ], [ "Tournaire", "Thomas", "" ], [ "Doulazmi", "Waël", "" ], [ "Buhet", "Thibault", "" ] ]
TITLE: V-Max: Making RL practical for Autonomous Driving ABSTRACT: Learning-based decision-making has the potential to enable generalizable Autonomous Driving (AD) policies, reducing the engineering overhead of rule-based approaches. Imitation Learning (IL) remains the dominant paradigm, benefiting from large-scale human demonstration datasets, but it suffers from inherent limitations such as distribution shift and imitation gaps. Reinforcement Learning (RL) presents a promising alternative, yet its adoption in AD remains limited due to the lack of standardized and efficient research frameworks. To this end, we introduce V-Max, an open research framework providing all the necessary tools to make RL practical for AD. V-Max is built on Waymax, a hardware-accelerated AD simulator designed for large-scale experimentation. We extend it using ScenarioNet's approach, enabling the fast simulation of diverse AD datasets. V-Max integrates a set of observation and reward functions, transformer-based encoders, and training pipelines. Additionally, it includes adversarial evaluation settings and an extensive set of evaluation metrics. Through a large-scale benchmark, we analyze how network architectures, observation functions, training data, and reward shaping impact RL performance.
no_new_dataset
0.939192
2503.08410
Marcos Cirne
Marcos Cirne, Hannah Menke, Alhasan Abdellatif, Julien Maes, Florian Doster, Ahmed H. Elsheikh
A Deep-Learning Iterative Stacked Approach for Prediction of Reactive Dissolution in Porous Media
24 pages, 16 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Simulating reactive dissolution of solid minerals in porous media has many subsurface applications, including carbon capture and storage (CCS), geothermal systems and oil & gas recovery. As traditional direct numerical simulators are computationally expensive, it is of paramount importance to develop faster and more efficient alternatives. Deep-learning-based solutions, most of them built upon convolutional neural networks (CNNs), have been recently designed to tackle this problem. However, these solutions were limited to approximating one field over the domain (e.g. velocity field). In this manuscript, we present a novel deep learning approach that incorporates both temporal and spatial information to predict the future states of the dissolution process at a fixed time-step horizon, given a sequence of input states. The overall performance, in terms of speed and prediction accuracy, is demonstrated on a numerical simulation dataset, comparing its prediction results against state-of-the-art approaches, also achieving a speedup of around $10^4$ over traditional numerical simulators.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 13:18:03 GMT" } ]
2025-03-12T00:00:00
[ [ "Cirne", "Marcos", "" ], [ "Menke", "Hannah", "" ], [ "Abdellatif", "Alhasan", "" ], [ "Maes", "Julien", "" ], [ "Doster", "Florian", "" ], [ "Elsheikh", "Ahmed H.", "" ] ]
TITLE: A Deep-Learning Iterative Stacked Approach for Prediction of Reactive Dissolution in Porous Media ABSTRACT: Simulating reactive dissolution of solid minerals in porous media has many subsurface applications, including carbon capture and storage (CCS), geothermal systems and oil & gas recovery. As traditional direct numerical simulators are computationally expensive, it is of paramount importance to develop faster and more efficient alternatives. Deep-learning-based solutions, most of them built upon convolutional neural networks (CNNs), have recently been designed to tackle this problem. However, these solutions were limited to approximating one field over the domain (e.g. velocity field). In this manuscript, we present a novel deep learning approach that incorporates both temporal and spatial information to predict the future states of the dissolution process at a fixed time-step horizon, given a sequence of input states. The overall performance, in terms of speed and prediction accuracy, is demonstrated on a numerical simulation dataset, comparing its prediction results against state-of-the-art approaches, and achieving a speedup of around $10^4$ over traditional numerical simulators.
no_new_dataset
0.943919
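The iterative stacked prediction described in the record above amounts to an autoregressive rollout: a one-step predictor maps a window of past states to the next state, and its own outputs are fed back in to reach longer horizons. The tiny network and tensor shapes below are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of iterative (autoregressive) rollout of a learned
# one-step state predictor; network and shapes are assumptions.
import torch
import torch.nn as nn

class OneStepPredictor(nn.Module):
    def __init__(self, k: int = 4):            # k = input window length
        super().__init__()
        self.net = nn.Sequential(               # tiny conv net over stacked states
            nn.Conv2d(k, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # window: (batch, k, H, W) -> next state (batch, 1, H, W)
        return self.net(window)

def rollout(model: OneStepPredictor, init: torch.Tensor, steps: int) -> torch.Tensor:
    """Iteratively stack predictions: feed each output back as the newest input."""
    window = init                                # (batch, k, H, W)
    preds = []
    for _ in range(steps):
        nxt = model(window)                      # predict one time step ahead
        preds.append(nxt)
        window = torch.cat([window[:, 1:], nxt], dim=1)  # slide the window
    return torch.cat(preds, dim=1)               # (batch, steps, H, W)
```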
2503.08417
Kwan Yun
Kwan Yun, Seokhyeon Hong, Chaelin Kim, Junyong Noh
AnyMoLe: Any Character Motion In-betweening Leveraging Video Diffusion Models
11 pages, 10 figures, CVPR 2025
null
null
null
cs.GR cs.AI cs.CV cs.LG cs.MM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Despite recent advancements in learning-based motion in-betweening, a key limitation has been overlooked: the requirement for character-specific datasets. In this work, we introduce AnyMoLe, a novel method that addresses this limitation by leveraging video diffusion models to generate motion in-between frames for arbitrary characters without external data. Our approach employs a two-stage frame generation process to enhance contextual understanding. Furthermore, to bridge the domain gap between real-world and rendered character animations, we introduce ICAdapt, a fine-tuning technique for video diffusion models. Additionally, we propose a ``motion-video mimicking'' optimization technique, enabling seamless motion generation for characters with arbitrary joint structures using 2D and 3D-aware features. AnyMoLe significantly reduces data dependency while generating smooth and realistic transitions, making it applicable to a wide range of motion in-betweening tasks.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 13:28:59 GMT" } ]
2025-03-12T00:00:00
[ [ "Yun", "Kwan", "" ], [ "Hong", "Seokhyeon", "" ], [ "Kim", "Chaelin", "" ], [ "Noh", "Junyong", "" ] ]
TITLE: AnyMoLe: Any Character Motion In-betweening Leveraging Video Diffusion Models ABSTRACT: Despite recent advancements in learning-based motion in-betweening, a key limitation has been overlooked: the requirement for character-specific datasets. In this work, we introduce AnyMoLe, a novel method that addresses this limitation by leveraging video diffusion models to generate motion in-between frames for arbitrary characters without external data. Our approach employs a two-stage frame generation process to enhance contextual understanding. Furthermore, to bridge the domain gap between real-world and rendered character animations, we introduce ICAdapt, a fine-tuning technique for video diffusion models. Additionally, we propose a ``motion-video mimicking'' optimization technique, enabling seamless motion generation for characters with arbitrary joint structures using 2D and 3D-aware features. AnyMoLe significantly reduces data dependency while generating smooth and realistic transitions, making it applicable to a wide range of motion in-betweening tasks.
no_new_dataset
0.943867
2503.08420
Yihang Wu
Ahmad Chaddad, Yan Hu, Yihang Wu, Binbin Wen, Reem Kateb
Generalizable and Explainable Deep Learning for Medical Image Computing: An Overview
Published in Current Opinion in Biomedical Engineering
null
10.1016/j.cobme.2024.100567
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective. This paper presents an overview of generalizable and explainable artificial intelligence (XAI) in deep learning (DL) for medical imaging, aimed at addressing the urgent need for transparency and explainability in clinical applications. Methodology. We evaluate four CNNs on three medical datasets (brain tumor, skin cancer, and chest x-ray) for medical image classification tasks. In addition, we perform paired t-tests to show the significance of the differences observed between different methods. Furthermore, we propose to combine ResNet50 with five common XAI techniques to obtain explainable results for model prediction, aiming at improving model transparency. We also employ a quantitative metric (confidence increase) to evaluate the usefulness of XAI techniques. Key findings. The experimental results indicate that ResNet50 can achieve feasible accuracy and F1 score in all datasets (e.g., 86.31\% accuracy in skin cancer). Furthermore, the findings show that while certain XAI methods, such as XgradCAM, effectively highlight relevant abnormal regions in medical images, others, like EigenGradCAM, may perform less effectively in specific scenarios. In addition, XgradCAM indicates a higher confidence increase (e.g., 0.12 in glioma tumor) compared to GradCAM++ (0.09) and LayerCAM (0.08). Implications. Based on the experimental results and recent advancements, we outline future research directions to enhance the robustness and generalizability of DL models in the field of biomedical imaging.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 13:31:09 GMT" } ]
2025-03-12T00:00:00
[ [ "Chaddad", "Ahmad", "" ], [ "Hu", "Yan", "" ], [ "Wu", "Yihang", "" ], [ "Wen", "Binbin", "" ], [ "Kateb", "Reem", "" ] ]
TITLE: Generalizable and Explainable Deep Learning for Medical Image Computing: An Overview ABSTRACT: Objective. This paper presents an overview of generalizable and explainable artificial intelligence (XAI) in deep learning (DL) for medical imaging, aimed at addressing the urgent need for transparency and explainability in clinical applications. Methodology. We evaluate four CNNs on three medical datasets (brain tumor, skin cancer, and chest x-ray) for medical image classification tasks. In addition, we perform paired t-tests to show the significance of the differences observed between different methods. Furthermore, we propose to combine ResNet50 with five common XAI techniques to obtain explainable results for model prediction, aiming at improving model transparency. We also employ a quantitative metric (confidence increase) to evaluate the usefulness of XAI techniques. Key findings. The experimental results indicate that ResNet50 can achieve feasible accuracy and F1 score in all datasets (e.g., 86.31\% accuracy in skin cancer). Furthermore, the findings show that while certain XAI methods, such as XgradCAM, effectively highlight relevant abnormal regions in medical images, others, like EigenGradCAM, may perform less effectively in specific scenarios. In addition, XgradCAM indicates a higher confidence increase (e.g., 0.12 in glioma tumor) compared to GradCAM++ (0.09) and LayerCAM (0.08). Implications. Based on the experimental results and recent advancements, we outline future research directions to enhance the robustness and generalizability of DL models in the field of biomedical imaging.
no_new_dataset
0.949529
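The "confidence increase" metric named in the record above is commonly computed by comparing the model's class confidence on the CAM-masked input against the original input. The paper's exact definition may differ; the sketch below follows the usual GradCAM-family formulation under that assumption.

```python
# Hedged sketch of a confidence-increase style metric: how much the model's
# confidence in the predicted class changes when the input is masked to the
# regions a CAM highlights. The paper's exact definition may differ.
import torch

def confidence_increase(model, image: torch.Tensor, cam: torch.Tensor,
                        target_class: int) -> float:
    """image: (1, 3, H, W); cam: (1, 1, H, W) saliency map in [0, 1]."""
    model.eval()
    with torch.no_grad():
        p_full = torch.softmax(model(image), dim=1)[0, target_class]
        masked = image * cam                  # keep only highlighted evidence
        p_mask = torch.softmax(model(masked), dim=1)[0, target_class]
    # Positive when the highlighted regions alone raise class confidence.
    return float(p_mask - p_full)
```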
2503.08437
Shankar Gangisetty
Shankar Gangisetty, Abdul Wasi, Shyam Nandan Rai, C. V. Jawahar, Sajay Raj, Manish Prajapati, Ayesha Choudhary, Aaryadev Chandra, Dev Chandan, Shireen Chand, Suvaditya Mukherjee
ICPR 2024 Competition on Rider Intention Prediction
null
null
10.1007/978-3-031-80139-6_3
null
cs.CV cs.AI cs.HC cs.RO
http://creativecommons.org/licenses/by/4.0/
The recent surge in the vehicle market has led to an alarming increase in road accidents. This underscores the critical importance of enhancing road safety measures, particularly for vulnerable road users like motorcyclists. Hence, we introduce the rider intention prediction (RIP) competition that aims to address challenges in rider safety by proactively predicting maneuvers before they occur, thereby strengthening rider safety. This capability enables the riders to react to the potential incorrect maneuvers flagged by advanced driver assistance systems (ADAS). We collect a new dataset, namely, rider action anticipation dataset (RAAD) for the competition consisting of two tasks: single-view RIP and multi-view RIP. The dataset incorporates a spectrum of traffic conditions and challenging navigational maneuvers on roads with varying lighting conditions. For the competition, we received seventy-five registrations and five team submissions for inference of which we compared the methods of the top three performing teams on both the RIP tasks: one state-space model (Mamba2) and two learning-based approaches (SVM and CNN-LSTM). The results indicate that the state-space model outperformed the other methods across the entire dataset, providing a balanced performance across maneuver classes. The SVM-based RIP method showed the second-best performance when using random sampling and SMOTE. However, the CNN-LSTM method underperformed, primarily due to class imbalance issues, particularly struggling with minority classes. This paper details the proposed RAAD dataset and provides a summary of the submissions for the RIP 2024 competition.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 13:50:37 GMT" } ]
2025-03-12T00:00:00
[ [ "Gangisetty", "Shankar", "" ], [ "Wasi", "Abdul", "" ], [ "Rai", "Shyam Nandan", "" ], [ "Jawahar", "C. V.", "" ], [ "Raj", "Sajay", "" ], [ "Prajapati", "Manish", "" ], [ "Choudhary", "Ayesha", "" ], [ "Chandra", "Aaryadev", "" ], [ "Chandan", "Dev", "" ], [ "Chand", "Shireen", "" ], [ "Mukherjee", "Suvaditya", "" ] ]
TITLE: ICPR 2024 Competition on Rider Intention Prediction ABSTRACT: The recent surge in the vehicle market has led to an alarming increase in road accidents. This underscores the critical importance of enhancing road safety measures, particularly for vulnerable road users like motorcyclists. Hence, we introduce the rider intention prediction (RIP) competition that aims to address challenges in rider safety by proactively predicting maneuvers before they occur, thereby strengthening rider safety. This capability enables the riders to react to the potential incorrect maneuvers flagged by advanced driver assistance systems (ADAS). We collect a new dataset, namely, rider action anticipation dataset (RAAD) for the competition consisting of two tasks: single-view RIP and multi-view RIP. The dataset incorporates a spectrum of traffic conditions and challenging navigational maneuvers on roads with varying lighting conditions. For the competition, we received seventy-five registrations and five team submissions for inference of which we compared the methods of the top three performing teams on both the RIP tasks: one state-space model (Mamba2) and two learning-based approaches (SVM and CNN-LSTM). The results indicate that the state-space model outperformed the other methods across the entire dataset, providing a balanced performance across maneuver classes. The SVM-based RIP method showed the second-best performance when using random sampling and SMOTE. However, the CNN-LSTM method underperformed, primarily due to class imbalance issues, particularly struggling with minority classes. This paper details the proposed RAAD dataset and provides a summary of the submissions for the RIP 2024 competition.
new_dataset
0.971564
2503.08461
Jianian Zhu
Jianian Zhu, Hang Wu, Haojie Wang, Yinghui Li, Biao Hou, Ruixuan Li, Jidong Zhai
FastCache: Optimizing Multimodal LLM Serving through Lightweight KV-Cache Compression Framework
14 pages, 14 figures
null
null
null
cs.MM cs.DC
http://creativecommons.org/licenses/by/4.0/
Serving systems for Multi-modal Large Language Models (MLLMs) commonly employ KV-cache compression to reduce memory footprint. However, existing compression methods introduce significant processing overhead and queuing delays, particularly in concurrent serving scenarios. We present \texttt{FastCache}, a novel serving framework that effectively addresses these challenges through two key innovations: (1) a dynamic batching strategy that optimizes request scheduling across prefill, compression, and decode stages, and (2) an efficient KV-cache memory pool mechanism that eliminates memory fragmentation while maintaining high GPU utilization. Our comprehensive experiments on the GQA and MileBench datasets demonstrate that \texttt{FastCache} achieves up to 19.3$\times$ reduction in Time-To-First-Token (TTFT) and 12.1$\times$ improvement in throughput compared to state-of-the-art baselines. The system maintains stable performance under high-concurrency scenarios (up to 40 req/s) while reducing average memory consumption by 20\%. These results establish \texttt{FastCache} as an efficient solution for real-world LLM serving systems with KV-cache compression.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 14:10:58 GMT" } ]
2025-03-12T00:00:00
[ [ "Zhu", "Jianian", "" ], [ "Wu", "Hang", "" ], [ "Wang", "Haojie", "" ], [ "Li", "Yinghui", "" ], [ "Hou", "Biao", "" ], [ "Li", "Ruixuan", "" ], [ "Zhai", "Jidong", "" ] ]
TITLE: FastCache: Optimizing Multimodal LLM Serving through Lightweight KV-Cache Compression Framework ABSTRACT: Serving systems for Multi-modal Large Language Models (MLLMs) commonly employ KV-cache compression to reduce memory footprint. However, existing compression methods introduce significant processing overhead and queuing delays, particularly in concurrent serving scenarios. We present \texttt{FastCache}, a novel serving framework that effectively addresses these challenges through two key innovations: (1) a dynamic batching strategy that optimizes request scheduling across prefill, compression, and decode stages, and (2) an efficient KV-cache memory pool mechanism that eliminates memory fragmentation while maintaining high GPU utilization. Our comprehensive experiments on the GQA and MileBench datasets demonstrate that \texttt{FastCache} achieves up to 19.3$\times$ reduction in Time-To-First-Token (TTFT) and 12.1$\times$ improvement in throughput compared to state-of-the-art baselines. The system maintains stable performance under high-concurrency scenarios (up to 40 req/s) while reducing average memory consumption by 20\%. These results establish \texttt{FastCache} as an efficient solution for real-world LLM serving systems with KV-cache compression.
no_new_dataset
0.942135
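The fragmentation-free memory pool in the record above follows a standard systems pattern: pre-allocate fixed-size KV blocks in one contiguous buffer and recycle them through a free list, so allocation never fragments. The block bookkeeping below is an assumption standing in for FastCache's more involved design.

```python
# Illustrative sketch of a fixed-size KV block pool; block shape and
# bookkeeping are assumptions, not FastCache's actual implementation.
import torch

class KVBlockPool:
    def __init__(self, n_blocks: int, block_shape, device="cpu"):
        # One big contiguous buffer, carved into equal blocks.
        self.storage = torch.empty((n_blocks, *block_shape), device=device)
        self.free = list(range(n_blocks))       # indices of unused blocks

    def alloc(self) -> int:
        if not self.free:
            raise MemoryError("KV pool exhausted")
        return self.free.pop()

    def release(self, idx: int) -> None:
        self.free.append(idx)                   # block is recycled, not freed

    def view(self, idx: int) -> torch.Tensor:
        return self.storage[idx]                # zero-copy view into the pool

pool = KVBlockPool(n_blocks=256, block_shape=(16, 64))  # usage example
b = pool.alloc()
pool.view(b).zero_()
pool.release(b)
```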
2503.08463
Junyoung Kim
Junyoung Kim, Madhulika Balakumar, Kenneth Ross
A Data Aggregation Visualization System supported by Processing-in-Memory
13 pages, 11 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Data visualization of aggregation queries is one of the most common ways of doing data exploration and data science as it can help identify correlations and patterns in the data. We propose DIVAN, a system that automatically normalizes the one-dimensional axes by frequency to generate large numbers of two-dimensional visualizations. DIVAN normalizes the input data via binning to allocate more pixels to data values that appear more frequently in the dataset. DIVAN can utilize either CPUs or Processing-in-Memory (PIM) architectures to quickly calculate aggregates to support the visualizations. On real world datasets, we show that DIVAN generates visualizations that highlight patterns and correlations, some expected and some unexpected. By using PIM, we can calculate aggregates 45%-64% faster than modern CPUs on large datasets. For use cases with 100 million rows and 32 columns, our system is able to compute 4,960 aggregates (each of size 128x128x128) in about a minute.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 14:12:46 GMT" } ]
2025-03-12T00:00:00
[ [ "Kim", "Junyoung", "" ], [ "Balakumar", "Madhulika", "" ], [ "Ross", "Kenneth", "" ] ]
TITLE: A Data Aggregation Visualization System supported by Processing-in-Memory ABSTRACT: Data visualization of aggregation queries is one of the most common ways of doing data exploration and data science as it can help identify correlations and patterns in the data. We propose DIVAN, a system that automatically normalizes the one-dimensional axes by frequency to generate large numbers of two-dimensional visualizations. DIVAN normalizes the input data via binning to allocate more pixels to data values that appear more frequently in the dataset. DIVAN can utilize either CPUs or Processing-in-Memory (PIM) architectures to quickly calculate aggregates to support the visualizations. On real world datasets, we show that DIVAN generates visualizations that highlight patterns and correlations, some expected and some unexpected. By using PIM, we can calculate aggregates 45%-64% faster than modern CPUs on large datasets. For use cases with 100 million rows and 32 columns, our system is able to compute 4,960 aggregates (each of size 128x128x128) in about a minute.
no_new_dataset
0.948058
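The frequency normalization at the heart of the DIVAN record above is equal-frequency binning: bin edges sit at data quantiles so every bin holds roughly the same number of rows, giving dense value ranges more pixels. A minimal numpy sketch, with the bin count as an arbitrary choice:

```python
# Sketch of frequency-based axis normalization via equal-frequency binning.
import numpy as np

def frequency_bins(values: np.ndarray, n_bins: int = 128) -> np.ndarray:
    """Return n_bins + 1 edges such that bins are (roughly) equally populated."""
    qs = np.linspace(0.0, 1.0, n_bins + 1)
    return np.quantile(values, qs)

def bin_index(values: np.ndarray, edges: np.ndarray) -> np.ndarray:
    """Map each value to its frequency-normalized bin index."""
    idx = np.searchsorted(edges, values, side="right") - 1
    return np.clip(idx, 0, len(edges) - 2)

# Usage: a 2D aggregate is then a histogram over two such binned axes.
x = np.random.exponential(size=1_000_000)
edges = frequency_bins(x)
counts = np.bincount(bin_index(x, edges), minlength=128)  # ~equal counts per bin
```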
2503.08471
Zhuoguang Chen
Zhuoguang Chen, Kenan Li, Xiuyu Yang, Tao Jiang, Yiming Li, Hang Zhao
TrackOcc: Camera-based 4D Panoptic Occupancy Tracking
Accepted at ICRA 2025
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Comprehensive and consistent dynamic scene understanding from camera input is essential for advanced autonomous systems. Traditional camera-based perception tasks like 3D object tracking and semantic occupancy prediction lack either spatial comprehensiveness or temporal consistency. In this work, we introduce a brand-new task, Camera-based 4D Panoptic Occupancy Tracking, which simultaneously addresses panoptic occupancy segmentation and object tracking from camera-only input. Furthermore, we propose TrackOcc, a cutting-edge approach that processes image inputs in a streaming, end-to-end manner with 4D panoptic queries to address the proposed task. Leveraging the localization-aware loss, TrackOcc enhances the accuracy of 4D panoptic occupancy tracking without bells and whistles. Experimental results demonstrate that our method achieves state-of-the-art performance on the Waymo dataset. The source code will be released at https://github.com/Tsinghua-MARS-Lab/TrackOcc.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 14:17:06 GMT" } ]
2025-03-12T00:00:00
[ [ "Chen", "Zhuoguang", "" ], [ "Li", "Kenan", "" ], [ "Yang", "Xiuyu", "" ], [ "Jiang", "Tao", "" ], [ "Li", "Yiming", "" ], [ "Zhao", "Hang", "" ] ]
TITLE: TrackOcc: Camera-based 4D Panoptic Occupancy Tracking ABSTRACT: Comprehensive and consistent dynamic scene understanding from camera input is essential for advanced autonomous systems. Traditional camera-based perception tasks like 3D object tracking and semantic occupancy prediction lack either spatial comprehensiveness or temporal consistency. In this work, we introduce a brand-new task, Camera-based 4D Panoptic Occupancy Tracking, which simultaneously addresses panoptic occupancy segmentation and object tracking from camera-only input. Furthermore, we propose TrackOcc, a cutting-edge approach that processes image inputs in a streaming, end-to-end manner with 4D panoptic queries to address the proposed task. Leveraging the localization-aware loss, TrackOcc enhances the accuracy of 4D panoptic occupancy tracking without bells and whistles. Experimental results demonstrate that our method achieves state-of-the-art performance on the Waymo dataset. The source code will be released at https://github.com/Tsinghua-MARS-Lab/TrackOcc.
no_new_dataset
0.939248
2503.08472
Hao Jiang
Hao Jiang, Yixing Xu, Pradeep Varakantham
Optimizing Ride-Pooling Operations with Extended Pickup and Drop-Off Flexibility
null
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
The Ride-Pool Matching Problem (RMP) is central to on-demand ride-pooling services, where vehicles must be matched with multiple requests while adhering to service constraints such as pickup delays, detour limits, and vehicle capacity. Most existing RMP solutions assume passengers are picked up and dropped off at their original locations, neglecting the potential for passengers to walk to nearby spots to meet vehicles. This assumption restricts the optimization potential in ride-pooling operations. In this paper, we propose a novel matching method that incorporates extended pickup and drop-off areas for passengers. We first design a tree-based approach to efficiently generate feasible matches between passengers and vehicles. Next, we optimize vehicle routes to cover all designated pickup and drop-off locations while minimizing total travel distance. Finally, we employ dynamic assignment strategies to achieve optimal matching outcomes. Experiments on city-scale taxi datasets demonstrate that our method improves the number of served requests by up to 13\% and average travel distance by up to 21\% compared to leading existing solutions, underscoring the potential of leveraging passenger mobility to significantly enhance ride-pooling service efficiency.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 14:17:30 GMT" } ]
2025-03-12T00:00:00
[ [ "Jiang", "Hao", "" ], [ "Xu", "Yixing", "" ], [ "Varakantham", "Pradeep", "" ] ]
TITLE: Optimizing Ride-Pooling Operations with Extended Pickup and Drop-Off Flexibility ABSTRACT: The Ride-Pool Matching Problem (RMP) is central to on-demand ride-pooling services, where vehicles must be matched with multiple requests while adhering to service constraints such as pickup delays, detour limits, and vehicle capacity. Most existing RMP solutions assume passengers are picked up and dropped off at their original locations, neglecting the potential for passengers to walk to nearby spots to meet vehicles. This assumption restricts the optimization potential in ride-pooling operations. In this paper, we propose a novel matching method that incorporates extended pickup and drop-off areas for passengers. We first design a tree-based approach to efficiently generate feasible matches between passengers and vehicles. Next, we optimize vehicle routes to cover all designated pickup and drop-off locations while minimizing total travel distance. Finally, we employ dynamic assignment strategies to achieve optimal matching outcomes. Experiments on city-scale taxi datasets demonstrate that our method improves the number of served requests by up to 13\% and average travel distance by up to 21\% compared to leading existing solutions, underscoring the potential of leveraging passenger mobility to significantly enhance ride-pooling service efficiency.
no_new_dataset
0.945851
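The novel element in the record above is letting passengers walk to nearby spots. In spirit, the matcher then picks, among candidate meeting points within walking range, the one minimizing combined walking and vehicle-detour cost. The cost terms and callables below are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch of extended-pickup meeting-point selection; cost model and
# travel-time callables are assumptions for illustration only.
def best_meeting_point(candidates, walk_time, vehicle_detour,
                       max_walk_s: float = 240.0):
    """candidates: iterable of points; walk_time/vehicle_detour: point -> seconds."""
    best, best_cost = None, float("inf")
    for p in candidates:
        w = walk_time(p)
        if w > max_walk_s:
            continue                       # rider will not walk this far
        cost = w + vehicle_detour(p)       # trade walking against detour
        if cost < best_cost:
            best, best_cost = p, cost
    return best                            # None if no point is reachable
```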
2503.08474
Martin Alexander B\"uchner
Tim Steinke, Martin B\"uchner, Niclas V\"odisch, and Abhinav Valada
Collaborative Dynamic 3D Scene Graphs for Open-Vocabulary Urban Scene Understanding
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mapping and scene representation are fundamental to reliable planning and navigation in mobile robots. While purely geometric maps using voxel grids allow for general navigation, obtaining up-to-date spatial and semantically rich representations that scale to dynamic large-scale environments remains challenging. In this work, we present CURB-OSG, an open-vocabulary dynamic 3D scene graph engine that generates hierarchical decompositions of urban driving scenes via multi-agent collaboration. By fusing the camera and LiDAR observations from multiple perceiving agents with unknown initial poses, our approach generates more accurate maps compared to a single agent while constructing a unified open-vocabulary semantic hierarchy of the scene. Unlike previous methods that rely on ground truth agent poses or are evaluated purely in simulation, CURB-OSG alleviates these constraints. We evaluate the capabilities of CURB-OSG on real-world multi-agent sensor data obtained from multiple sessions of the Oxford Radar RobotCar dataset. We demonstrate improved mapping and object prediction accuracy through multi-agent collaboration as well as evaluate the environment partitioning capabilities of the proposed approach. To foster further research, we release our code and supplementary material at https://ov-curb.cs.uni-freiburg.de.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 14:21:59 GMT" } ]
2025-03-12T00:00:00
[ [ "Steinke", "Tim", "" ], [ "Büchner", "Martin", "" ], [ "Vödisch", "Niclas", "" ], [ "Valada", "Abhinav", "" ] ]
TITLE: Collaborative Dynamic 3D Scene Graphs for Open-Vocabulary Urban Scene Understanding ABSTRACT: Mapping and scene representation are fundamental to reliable planning and navigation in mobile robots. While purely geometric maps using voxel grids allow for general navigation, obtaining up-to-date spatial and semantically rich representations that scale to dynamic large-scale environments remains challenging. In this work, we present CURB-OSG, an open-vocabulary dynamic 3D scene graph engine that generates hierarchical decompositions of urban driving scenes via multi-agent collaboration. By fusing the camera and LiDAR observations from multiple perceiving agents with unknown initial poses, our approach generates more accurate maps compared to a single agent while constructing a unified open-vocabulary semantic hierarchy of the scene. Unlike previous methods that rely on ground truth agent poses or are evaluated purely in simulation, CURB-OSG alleviates these constraints. We evaluate the capabilities of CURB-OSG on real-world multi-agent sensor data obtained from multiple sessions of the Oxford Radar RobotCar dataset. We demonstrate improved mapping and object prediction accuracy through multi-agent collaboration as well as evaluate the environment partitioning capabilities of the proposed approach. To foster further research, we release our code and supplementary material at https://ov-curb.cs.uni-freiburg.de.
no_new_dataset
0.944485
2503.08482
Pouya Shaeri
Pouya Shaeri, Saud AlKhaled, Ariane Middel
A Multimodal Physics-Informed Neural Network Approach for Mean Radiant Temperature Modeling
null
null
null
null
cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Outdoor thermal comfort is a critical determinant of urban livability, particularly in hot desert climates where extreme heat poses challenges to public health, energy consumption, and urban planning. Mean Radiant Temperature ($T_{mrt}$) is a key parameter for evaluating outdoor thermal comfort, especially in urban environments where radiation dynamics significantly impact human thermal exposure. Traditional methods of estimating $T_{mrt}$ rely on field measurements and computational simulations, both of which are resource intensive. This study introduces a Physics-Informed Neural Network (PINN) approach that integrates shortwave and longwave radiation modeling with deep learning techniques. By leveraging a multimodal dataset that includes meteorological data, built environment characteristics, and fisheye image-derived shading information, our model enhances predictive accuracy while maintaining physical consistency. Our experimental results demonstrate that the proposed PINN framework outperforms conventional deep learning models, with the best-performing configurations achieving an RMSE of 3.50 and an $R^2$ of 0.88. This approach highlights the potential of physics-informed machine learning in bridging the gap between computational modeling and real-world applications, offering a scalable and interpretable solution for urban thermal comfort assessments.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 14:36:08 GMT" } ]
2025-03-12T00:00:00
[ [ "Shaeri", "Pouya", "" ], [ "AlKhaled", "Saud", "" ], [ "Middel", "Ariane", "" ] ]
TITLE: A Multimodal Physics-Informed Neural Network Approach for Mean Radiant Temperature Modeling ABSTRACT: Outdoor thermal comfort is a critical determinant of urban livability, particularly in hot desert climates where extreme heat poses challenges to public health, energy consumption, and urban planning. Mean Radiant Temperature ($T_{mrt}$) is a key parameter for evaluating outdoor thermal comfort, especially in urban environments where radiation dynamics significantly impact human thermal exposure. Traditional methods of estimating $T_{mrt}$ rely on field measurements and computational simulations, both of which are resource intensive. This study introduces a Physics-Informed Neural Network (PINN) approach that integrates shortwave and longwave radiation modeling with deep learning techniques. By leveraging a multimodal dataset that includes meteorological data, built environment characteristics, and fisheye image-derived shading information, our model enhances predictive accuracy while maintaining physical consistency. Our experimental results demonstrate that the proposed PINN framework outperforms conventional deep learning models, with the best-performing configurations achieving an RMSE of 3.50 and an $R^2$ of 0.88. This approach highlights the potential of physics-informed machine learning in bridging the gap between computational modeling and real-world applications, offering a scalable and interpretable solution for urban thermal comfort assessments.
no_new_dataset
0.946745
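The "physics-informed" part of the record above typically means a composite loss: a data term on observed $T_{mrt}$ plus a penalty tying predictions to a radiation-balance estimate. The balance formula below is a simplified stand-in (absorbed shortwave/longwave flux converted through the Stefan-Boltzmann law), not the paper's exact model; all coefficients are assumptions.

```python
# Sketch of a physics-informed loss with a simplified radiation balance;
# the physics term and coefficients are illustrative assumptions.
import torch

def pinn_loss(pred_tmrt, obs_tmrt, shortwave, longwave,
              lam: float = 0.5, sigma: float = 5.670e-8,
              absorb_k: float = 0.7, emissivity: float = 0.97):
    """All tensors in consistent units; pred_tmrt/obs_tmrt in Kelvin."""
    data = torch.mean((pred_tmrt - obs_tmrt) ** 2)
    # Physics residual: T_mrt^4 should track absorbed flux / (eps * sigma).
    flux = absorb_k * shortwave + emissivity * longwave
    t_phys = (flux / (emissivity * sigma)).clamp(min=1.0) ** 0.25
    physics = torch.mean((pred_tmrt - t_phys) ** 2)
    return data + lam * physics             # weighted data + physics terms
```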
2503.08483
Abhishek Saroha
Nhat Phuong Anh Vu, Abhishek Saroha, Or Litany, Daniel Cremers
GAS-NeRF: Geometry-Aware Stylization of Dynamic Radiance Fields
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Current 3D stylization techniques primarily focus on static scenes, while our world is inherently dynamic, filled with moving objects and changing environments. Existing style transfer methods primarily target appearance -- such as color and texture transformation -- but often neglect the geometric characteristics of the style image, which are crucial for achieving a complete and coherent stylization effect. To overcome these shortcomings, we propose GAS-NeRF, a novel approach for joint appearance and geometry stylization in dynamic Radiance Fields. Our method leverages depth maps to extract and transfer geometric details into the radiance field, followed by appearance transfer. Experimental results on synthetic and real-world datasets demonstrate that our approach significantly enhances the stylization quality while maintaining temporal coherence in dynamic scenes.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 14:37:06 GMT" } ]
2025-03-12T00:00:00
[ [ "Vu", "Nhat Phuong Anh", "" ], [ "Saroha", "Abhishek", "" ], [ "Litany", "Or", "" ], [ "Cremers", "Daniel", "" ] ]
TITLE: GAS-NeRF: Geometry-Aware Stylization of Dynamic Radiance Fields ABSTRACT: Current 3D stylization techniques primarily focus on static scenes, while our world is inherently dynamic, filled with moving objects and changing environments. Existing style transfer methods primarily target appearance -- such as color and texture transformation -- but often neglect the geometric characteristics of the style image, which are crucial for achieving a complete and coherent stylization effect. To overcome these shortcomings, we propose GAS-NeRF, a novel approach for joint appearance and geometry stylization in dynamic Radiance Fields. Our method leverages depth maps to extract and transfer geometric details into the radiance field, followed by appearance transfer. Experimental results on synthetic and real-world datasets demonstrate that our approach significantly enhances the stylization quality while maintaining temporal coherence in dynamic scenes.
no_new_dataset
0.954393
2503.08495
Han Cao
Han Cao, Lingwei Wei, Wei Zhou, Songlin Hu
Enhancing Multi-Hop Fact Verification with Structured Knowledge-Augmented Large Language Models
Accepted by AAAI 2025
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid development of social platforms exacerbates the dissemination of misinformation, which stimulates research in fact verification. Recent studies tend to leverage semantic features to solve this problem as a single-hop task. However, in real-world situations, verifying a claim requires several pieces of evidence with complicated inner logic and relations. Recent studies attempt to improve both understanding and reasoning abilities to enhance performance, but they overlook the crucial relations between entities that help models understand better and facilitate prediction. To emphasize the significance of relations, we resort to Large Language Models (LLMs), considering their excellent understanding ability. Unlike other methods that use LLMs as predictors, we employ them as relation extractors, since our experiments show they are better at understanding than at reasoning. Thus, to address the challenges above, we propose a novel Structured Knowledge-Augmented LLM-based Network (LLM-SKAN) for multi-hop fact verification. Specifically, we utilize an LLM-driven Knowledge Extractor to capture fine-grained information, including entities and their complicated relations. Besides, we leverage a Knowledge-Augmented Relation Graph Fusion module to interact with each node and learn better claim-evidence representations comprehensively. Experimental results on four commonly used datasets demonstrate the effectiveness and superiority of our model.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 14:47:24 GMT" } ]
2025-03-12T00:00:00
[ [ "Cao", "Han", "" ], [ "Wei", "Lingwei", "" ], [ "Zhou", "Wei", "" ], [ "Hu", "Songlin", "" ] ]
TITLE: Enhancing Multi-Hop Fact Verification with Structured Knowledge-Augmented Large Language Models ABSTRACT: The rapid development of social platforms exacerbates the dissemination of misinformation, which stimulates research in fact verification. Recent studies tend to leverage semantic features to solve this problem as a single-hop task. However, in real-world situations, verifying a claim requires several pieces of evidence with complicated inner logic and relations. Recent studies attempt to improve both understanding and reasoning abilities to enhance performance, but they overlook the crucial relations between entities that help models understand better and facilitate prediction. To emphasize the significance of relations, we resort to Large Language Models (LLMs), considering their excellent understanding ability. Unlike other methods that use LLMs as predictors, we employ them as relation extractors, since our experiments show they are better at understanding than at reasoning. Thus, to address the challenges above, we propose a novel Structured Knowledge-Augmented LLM-based Network (LLM-SKAN) for multi-hop fact verification. Specifically, we utilize an LLM-driven Knowledge Extractor to capture fine-grained information, including entities and their complicated relations. Besides, we leverage a Knowledge-Augmented Relation Graph Fusion module to interact with each node and learn better claim-evidence representations comprehensively. Experimental results on four commonly used datasets demonstrate the effectiveness and superiority of our model.
no_new_dataset
0.949716
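Using an LLM as a relation extractor rather than a predictor, as the record above describes, can be as simple as prompting for (head, relation, tail) triples and parsing them into graph edges. The prompt text, JSON parsing, and the `llm` callable below are all assumptions, not the paper's pipeline.

```python
# Illustrative sketch of LLM-driven relation extraction; prompt, parsing,
# and the `llm` callable are assumptions for illustration only.
import json

PROMPT = (
    "Extract entity relations from the text below as a JSON list of "
    '[head, relation, tail] triples. Text: "{text}"'
)

def extract_relations(llm, text: str):
    """llm: callable str -> str returning the model's completion."""
    raw = llm(PROMPT.format(text=text))
    try:
        triples = json.loads(raw)      # e.g. [["Paris", "capital_of", "France"]]
    except json.JSONDecodeError:
        triples = []                   # fall back to no edges on bad output
    return [tuple(t) for t in triples if len(t) == 3]

# The resulting triples would then populate the nodes and edges of the
# relation graph that a fusion module reasons over.
```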
2503.08496
Henry Senior
Henry Senior, Luca Rossi, Gregory Slabaugh, Shanxin Yuan
SuperCap: Multi-resolution Superpixel-based Image Captioning
12 pages, 4 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
It has been a longstanding goal within image captioning to move beyond a dependence on object detection. We investigate using superpixels coupled with Vision Language Models (VLMs) to bridge the gap between detector-based captioning architectures and those that solely pretrain on large datasets. Our novel superpixel approach ensures that the model receives object-like features whilst the use of VLMs provides our model with open set object understanding. Furthermore, we extend our architecture to make use of multi-resolution inputs, allowing our model to view images in different levels of detail, and use an attention mechanism to determine which parts are most relevant to the caption. We demonstrate our model's performance with multiple VLMs and through a range of ablations detailing the impact of different architectural choices. Our full model achieves a competitive CIDEr score of $136.9$ on the COCO Karpathy split.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 14:47:46 GMT" } ]
2025-03-12T00:00:00
[ [ "Senior", "Henry", "" ], [ "Rossi", "Luca", "" ], [ "Slabaugh", "Gregory", "" ], [ "Yuan", "Shanxin", "" ] ]
TITLE: SuperCap: Multi-resolution Superpixel-based Image Captioning ABSTRACT: It has been a longstanding goal within image captioning to move beyond a dependence on object detection. We investigate using superpixels coupled with Vision Language Models (VLMs) to bridge the gap between detector-based captioning architectures and those that solely pretrain on large datasets. Our novel superpixel approach ensures that the model receives object-like features whilst the use of VLMs provides our model with open set object understanding. Furthermore, we extend our architecture to make use of multi-resolution inputs, allowing our model to view images in different levels of detail, and use an attention mechanism to determine which parts are most relevant to the caption. We demonstrate our model's performance with multiple VLMs and through a range of ablations detailing the impact of different architectural choices. Our full model achieves a competitive CIDEr score of $136.9$ on the COCO Karpathy split.
no_new_dataset
0.945751
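The superpixel idea in the record above reduces to pooling dense features over each segment to obtain object-like region tokens. The sketch below uses scikit-image's SLIC for segmentation; the dense feature map and its shapes are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch of superpixel feature pooling: average dense features over each
# SLIC segment to get one token per region; shapes are assumptions.
import numpy as np
from skimage.segmentation import slic

def superpixel_tokens(image: np.ndarray, features: np.ndarray,
                      n_segments: int = 50) -> np.ndarray:
    """image: (H, W, 3) float in [0, 1]; features: (H, W, D) dense features."""
    labels = slic(image, n_segments=n_segments, start_label=0)
    n = labels.max() + 1
    d = features.shape[-1]
    tokens = np.zeros((n, d), dtype=features.dtype)
    for s in range(n):
        mask = labels == s
        tokens[s] = features[mask].mean(axis=0)  # one token per region
    return tokens                                 # (~n_segments, D)
```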
2503.08505
Fan Wu
Fan Wu, Sijun Dong, Xiaoliang Meng
CFNet: Optimizing Remote Sensing Change Detection through Content-Aware Enhancement
17 pages, 12 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Change detection is a crucial and widely applied task in remote sensing, aimed at identifying and analyzing changes occurring in the same geographical area over time. Due to variability in acquisition conditions, bi-temporal remote sensing images often exhibit significant differences in image style. Even with the powerful generalization capabilities of DNNs, these unpredictable style variations between bi-temporal images inevitably affect the model's ability to accurately detect changed areas. To address the issue above, we propose the Content Focuser Network (CFNet), which takes a content-aware strategy as its key insight. CFNet employs EfficientNet-B5 as the backbone for feature extraction. To enhance the model's focus on the content features of images while mitigating the misleading effects of style features, we develop a constraint strategy that prioritizes the content features of bi-temporal images, termed Content-Aware. Furthermore, to enable the model to flexibly focus on changed and unchanged areas according to the requirements of different stages, we design a reweighting module based on the cosine distance between bi-temporal image features, termed Focuser. CFNet achieves outstanding performance across three well-known change detection datasets: CLCD (F1: 81.41%, IoU: 68.65%), LEVIR-CD (F1: 92.18%, IoU: 85.49%), and SYSU-CD (F1: 82.89%, IoU: 70.78%). The code and pretrained models of CFNet are publicly released at https://github.com/wifiBlack/CFNet.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 14:56:11 GMT" } ]
2025-03-12T00:00:00
[ [ "Wu", "Fan", "" ], [ "Dong", "Sijun", "" ], [ "Meng", "Xiaoliang", "" ] ]
TITLE: CFNet: Optimizing Remote Sensing Change Detection through Content-Aware Enhancement ABSTRACT: Change detection is a crucial and widely applied task in remote sensing, aimed at identifying and analyzing changes occurring in the same geographical area over time. Due to variability in acquisition conditions, bi-temporal remote sensing images often exhibit significant differences in image style. Even with the powerful generalization capabilities of DNNs, these unpredictable style variations between bi-temporal images inevitably affect the model's ability to accurately detect changed areas. To address the issue above, we propose the Content Focuser Network (CFNet), which takes a content-aware strategy as its key insight. CFNet employs EfficientNet-B5 as the backbone for feature extraction. To enhance the model's focus on the content features of images while mitigating the misleading effects of style features, we develop a constraint strategy that prioritizes the content features of bi-temporal images, termed Content-Aware. Furthermore, to enable the model to flexibly focus on changed and unchanged areas according to the requirements of different stages, we design a reweighting module based on the cosine distance between bi-temporal image features, termed Focuser. CFNet achieves outstanding performance across three well-known change detection datasets: CLCD (F1: 81.41%, IoU: 68.65%), LEVIR-CD (F1: 92.18%, IoU: 85.49%), and SYSU-CD (F1: 82.89%, IoU: 70.78%). The code and pretrained models of CFNet are publicly released at https://github.com/wifiBlack/CFNet.
no_new_dataset
0.950641
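A cosine-distance reweighting module in the spirit of the Focuser described above is a few lines of PyTorch: regions where bi-temporal features disagree receive higher weight. CFNet's exact formulation may differ; this is a hedged sketch.

```python
# Hedged sketch of cosine-distance reweighting between bi-temporal features;
# CFNet's actual Focuser may differ in detail.
import torch
import torch.nn.functional as F

def focuser_weights(feat_t1: torch.Tensor, feat_t2: torch.Tensor,
                    focus_changed: bool = True) -> torch.Tensor:
    """feat_*: (B, C, H, W) features from the two acquisition dates."""
    cos = F.cosine_similarity(feat_t1, feat_t2, dim=1, eps=1e-6)  # (B, H, W)
    dist = (1.0 - cos) / 2.0          # cosine distance, normalized to [0, 1]
    w = dist if focus_changed else 1.0 - dist
    return w.unsqueeze(1)             # (B, 1, H, W), broadcastable over channels

# Usage: reweighted = feat_t2 * focuser_weights(feat_t1, feat_t2)
```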
2503.08506
Xian Gao
Xian Gao, Jiacheng Ruan, Jingsheng Gao, Ting Liu and Yuzhuo Fu
ReviewAgents: Bridging the Gap Between Human and AI-Generated Paper Reviews
Work in progress
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Academic paper review is a critical yet time-consuming task within the research community. With the increasing volume of academic publications, automating the review process has become a significant challenge. The primary issue lies in generating comprehensive, accurate, and reasoning-consistent review comments that align with human reviewers' judgments. In this paper, we address this challenge by proposing ReviewAgents, a framework that leverages large language models (LLMs) to generate academic paper reviews. We first introduce a novel dataset, Review-CoT, consisting of 142k review comments, designed for training LLM agents. This dataset emulates the structured reasoning process of human reviewers: summarizing the paper, referencing relevant works, identifying strengths and weaknesses, and generating a review conclusion. Building upon this, we train LLM reviewer agents capable of structured reasoning using a relevant-paper-aware training method. Furthermore, we construct ReviewAgents, a multi-role, multi-LLM agent review framework, to enhance the review comment generation process. Additionally, we propose ReviewBench, a benchmark for evaluating the review comments generated by LLMs. Our experimental results on ReviewBench demonstrate that while existing LLMs exhibit a certain degree of potential for automating the review process, there remains a gap when compared to human-generated reviews. Moreover, our ReviewAgents framework further narrows this gap, outperforming advanced LLMs in generating review comments.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 14:56:58 GMT" } ]
2025-03-12T00:00:00
[ [ "Gao", "Xian", "" ], [ "Ruan", "Jiacheng", "" ], [ "Gao", "Jingsheng", "" ], [ "Liu", "Ting", "" ], [ "Fu", "Yuzhuo", "" ] ]
TITLE: ReviewAgents: Bridging the Gap Between Human and AI-Generated Paper Reviews ABSTRACT: Academic paper review is a critical yet time-consuming task within the research community. With the increasing volume of academic publications, automating the review process has become a significant challenge. The primary issue lies in generating comprehensive, accurate, and reasoning-consistent review comments that align with human reviewers' judgments. In this paper, we address this challenge by proposing ReviewAgents, a framework that leverages large language models (LLMs) to generate academic paper reviews. We first introduce a novel dataset, Review-CoT, consisting of 142k review comments, designed for training LLM agents. This dataset emulates the structured reasoning process of human reviewers: summarizing the paper, referencing relevant works, identifying strengths and weaknesses, and generating a review conclusion. Building upon this, we train LLM reviewer agents capable of structured reasoning using a relevant-paper-aware training method. Furthermore, we construct ReviewAgents, a multi-role, multi-LLM agent review framework, to enhance the review comment generation process. Additionally, we propose ReviewBench, a benchmark for evaluating the review comments generated by LLMs. Our experimental results on ReviewBench demonstrate that while existing LLMs exhibit a certain degree of potential for automating the review process, there remains a gap when compared to human-generated reviews. Moreover, our ReviewAgents framework further narrows this gap, outperforming advanced LLMs in generating review comments.
new_dataset
0.962497
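A multi-role, multi-LLM review pipeline of the kind the record above describes can be sketched as one LLM call per reviewer role followed by a meta-reviewer that aggregates the notes. The role prompts and the `llm` callable are illustrative assumptions, not the ReviewAgents implementation.

```python
# Sketch of a multi-role review pipeline; roles, prompts, and the `llm`
# callable are assumptions standing in for the actual framework.
ROLES = {
    "strengths":  "List the main strengths of this paper:\n{paper}",
    "weaknesses": "List the main weaknesses of this paper:\n{paper}",
    "related":    "Assess novelty against related work:\n{paper}",
}

def review(llm, paper: str) -> str:
    """llm: callable str -> str. Runs each role, then aggregates."""
    notes = {r: llm(p.format(paper=paper)) for r, p in ROLES.items()}
    meta = ("Write a final review conclusion from these notes:\n"
            + "\n\n".join(f"[{r}]\n{t}" for r, t in notes.items()))
    return llm(meta)                  # meta-reviewer pass over role outputs
```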
2503.08507
Qing Jiang
Qing Jiang, Lin Wu, Zhaoyang Zeng, Tianhe Ren, Yuda Xiong, Yihao Chen, Qin Liu, Lei Zhang
Referring to Any Person
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Humans are undoubtedly the most important participants in computer vision, and the ability to detect any individual given a natural language description, a task we define as referring to any person, holds substantial practical value. However, we find that existing models generally fail to achieve real-world usability, and current benchmarks are limited by their focus on one-to-one referring, which hinders progress in this area. In this work, we revisit this task from three critical perspectives: task definition, dataset design, and model architecture. We first identify five aspects of referable entities and three distinctive characteristics of this task. Next, we introduce HumanRef, a novel dataset designed to tackle these challenges and better reflect real-world applications. From a model design perspective, we integrate a multimodal large language model with an object detection framework, constructing a robust referring model named RexSeek. Experimental results reveal that state-of-the-art models, which perform well on commonly used benchmarks like RefCOCO/+/g, struggle with HumanRef due to their inability to detect multiple individuals. In contrast, RexSeek not only excels in human referring but also generalizes effectively to common object referring, making it broadly applicable across various perception tasks. Code is available at https://github.com/IDEA-Research/RexSeek
[ { "version": "v1", "created": "Tue, 11 Mar 2025 14:57:14 GMT" } ]
2025-03-12T00:00:00
[ [ "Jiang", "Qing", "" ], [ "Wu", "Lin", "" ], [ "Zeng", "Zhaoyang", "" ], [ "Ren", "Tianhe", "" ], [ "Xiong", "Yuda", "" ], [ "Chen", "Yihao", "" ], [ "Liu", "Qin", "" ], [ "Zhang", "Lei", "" ] ]
TITLE: Referring to Any Person ABSTRACT: Humans are undoubtedly the most important participants in computer vision, and the ability to detect any individual given a natural language description, a task we define as referring to any person, holds substantial practical value. However, we find that existing models generally fail to achieve real-world usability, and current benchmarks are limited by their focus on one-to-one referring, which hinders progress in this area. In this work, we revisit this task from three critical perspectives: task definition, dataset design, and model architecture. We first identify five aspects of referable entities and three distinctive characteristics of this task. Next, we introduce HumanRef, a novel dataset designed to tackle these challenges and better reflect real-world applications. From a model design perspective, we integrate a multimodal large language model with an object detection framework, constructing a robust referring model named RexSeek. Experimental results reveal that state-of-the-art models, which perform well on commonly used benchmarks like RefCOCO/+/g, struggle with HumanRef due to their inability to detect multiple individuals. In contrast, RexSeek not only excels in human referring but also generalizes effectively to common object referring, making it broadly applicable across various perception tasks. Code is available at https://github.com/IDEA-Research/RexSeek
new_dataset
0.960473
2503.08508
Weijie Zhou
Weijie Zhou (1,2), Yi Peng (2), Manli Tao (2), Chaoyang Zhao (2,3), Honghui Dong (1), Ming Tang (2), Jinqiao Wang (2,3) ((1) School of Traffic and Transportation, Beijing Jiaotong University, (2) Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, (3) objecteye.Inc)
LightPlanner: Unleashing the Reasoning Capabilities of Lightweight Large Language Models in Task Planning
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
In recent years, lightweight large language models (LLMs) have garnered significant attention in the robotics field due to their low computational resource requirements and suitability for edge deployment. However, in task planning -- particularly for complex tasks that involve dynamic semantic logic reasoning -- lightweight LLMs have underperformed. To address this limitation, we propose a novel task planner, LightPlanner, which enhances the performance of lightweight LLMs in complex task planning by fully leveraging their reasoning capabilities. Unlike conventional planners that use fixed skill templates, LightPlanner controls robot actions via parameterized function calls, dynamically generating parameter values. This approach allows for fine-grained skill control and improves task planning success rates in complex scenarios. Furthermore, we introduce hierarchical deep reasoning. Before generating each action decision step, LightPlanner thoroughly considers three levels: action execution (feedback verification), semantic parsing (goal consistency verification), and parameter generation (parameter validity verification). This ensures the correctness of subsequent action controls. Additionally, we incorporate a memory module to store historical actions, thereby reducing context length and enhancing planning efficiency for long-term tasks. We train the LightPlanner-1.5B model on our LightPlan-40k dataset, which comprises 40,000 action controls across tasks with 2 to 13 action steps. Experiments demonstrate that our model achieves the highest task success rate despite having the smallest number of parameters. In tasks involving spatial semantic reasoning, the success rate exceeds that of ReAct by 14.9 percent. Moreover, we demonstrate LightPlanner's potential to operate on edge devices.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 14:57:53 GMT" } ]
2025-03-12T00:00:00
[ [ "Zhou", "Weijie", "" ], [ "Peng", "Yi", "" ], [ "Tao", "Manli", "" ], [ "Zhao", "Chaoyang", "" ], [ "Dong", "Honghui", "" ], [ "Tang", "Ming", "" ], [ "Wang", "Jinqiao", "" ] ]
TITLE: LightPlanner: Unleashing the Reasoning Capabilities of Lightweight Large Language Models in Task Planning ABSTRACT: In recent years, lightweight large language models (LLMs) have garnered significant attention in the robotics field due to their low computational resource requirements and suitability for edge deployment. However, in task planning -- particularly for complex tasks that involve dynamic semantic logic reasoning -- lightweight LLMs have underperformed. To address this limitation, we propose a novel task planner, LightPlanner, which enhances the performance of lightweight LLMs in complex task planning by fully leveraging their reasoning capabilities. Unlike conventional planners that use fixed skill templates, LightPlanner controls robot actions via parameterized function calls, dynamically generating parameter values. This approach allows for fine-grained skill control and improves task planning success rates in complex scenarios. Furthermore, we introduce hierarchical deep reasoning. Before generating each action decision step, LightPlanner thoroughly considers three levels: action execution (feedback verification), semantic parsing (goal consistency verification), and parameter generation (parameter validity verification). This ensures the correctness of subsequent action controls. Additionally, we incorporate a memory module to store historical actions, thereby reducing context length and enhancing planning efficiency for long-term tasks. We train the LightPlanner-1.5B model on our LightPlan-40k dataset, which comprises 40,000 action controls across tasks with 2 to 13 action steps. Experiments demonstrate that our model achieves the highest task success rate despite having the smallest number of parameters. In tasks involving spatial semantic reasoning, the success rate exceeds that of ReAct by 14.9 percent. Moreover, we demonstrate LightPlanner's potential to operate on edge devices.
new_dataset
0.958343
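The parameterized function calls in the record above imply a skill schema plus a parameter-validity check before execution (one of the three verification levels the abstract names). Skill names, schemas, and checks below are illustrative assumptions.

```python
# Sketch of parameterized skill calls with a parameter-validity check;
# skill schemas are assumptions, not LightPlanner's actual skill set.
from dataclasses import dataclass, field

SKILLS = {
    "move_to": {"x": float, "y": float},
    "grasp":   {"object_id": str, "force": float},
}

@dataclass
class SkillCall:
    name: str
    params: dict = field(default_factory=dict)

def validate_params(call: SkillCall) -> bool:
    """Parameter-generation check: known skill, right argument names and types."""
    schema = SKILLS.get(call.name)
    if schema is None or set(call.params) != set(schema):
        return False
    return all(isinstance(call.params[k], t) for k, t in schema.items())

call = SkillCall("grasp", {"object_id": "cup_3", "force": 8.5})
assert validate_params(call)      # passes before the robot would execute it
```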
2503.08510
Da-Wei Zhou
Da-Wei Zhou, Kai-Wen Li, Jingyi Ning, Han-Jia Ye, Lijun Zhang, De-Chuan Zhan
External Knowledge Injection for CLIP-Based Class-Incremental Learning
Code is available at: https://github.com/RenaissCode/ENGINE
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Class-Incremental Learning (CIL) enables learning systems to continuously adapt to evolving data streams. With the advancement of pre-training, leveraging pre-trained vision-language models (e.g., CLIP) offers a promising starting point for CIL. However, CLIP makes decisions by matching visual embeddings to class names, overlooking the rich contextual information conveyed through language. For instance, the concept of ``cat'' can be decomposed into features like tail, fur, and face for recognition. Besides, since the model is continually updated, these detailed features are overwritten in CIL, requiring external knowledge for compensation. In this paper, we introduce ExterNal knowledGe INjEction (ENGINE) for CLIP-based CIL. To enhance knowledge transfer from outside the dataset, we propose a dual-branch injection tuning framework that encodes informative knowledge from both visual and textual modalities. The visual branch is enhanced with data augmentation to enrich the visual features, while the textual branch leverages GPT-4 to rewrite discriminative descriptors. In addition to this on-the-fly knowledge injection, we also implement post-tuning knowledge by re-ranking the prediction results during inference. With the injected knowledge, the model can better capture informative features for downstream tasks as data evolves. Extensive experiments demonstrate the state-of-the-art performance of ENGINE. Code is available at: https://github.com/RenaissCode/ENGINE
[ { "version": "v1", "created": "Tue, 11 Mar 2025 15:00:22 GMT" } ]
2025-03-12T00:00:00
[ [ "Zhou", "Da-Wei", "" ], [ "Li", "Kai-Wen", "" ], [ "Ning", "Jingyi", "" ], [ "Ye", "Han-Jia", "" ], [ "Zhang", "Lijun", "" ], [ "Zhan", "De-Chuan", "" ] ]
TITLE: External Knowledge Injection for CLIP-Based Class-Incremental Learning ABSTRACT: Class-Incremental Learning (CIL) enables learning systems to continuously adapt to evolving data streams. With the advancement of pre-training, leveraging pre-trained vision-language models (e.g., CLIP) offers a promising starting point for CIL. However, CLIP makes decisions by matching visual embeddings to class names, overlooking the rich contextual information conveyed through language. For instance, the concept of ``cat'' can be decomposed into features like tail, fur, and face for recognition. Besides, since the model is continually updated, these detailed features are overwritten in CIL, requiring external knowledge for compensation. In this paper, we introduce ExterNal knowledGe INjEction (ENGINE) for CLIP-based CIL. To enhance knowledge transfer from outside the dataset, we propose a dual-branch injection tuning framework that encodes informative knowledge from both visual and textual modalities. The visual branch is enhanced with data augmentation to enrich the visual features, while the textual branch leverages GPT-4 to rewrite discriminative descriptors. In addition to this on-the-fly knowledge injection, we also implement post-tuning knowledge by re-ranking the prediction results during inference. With the injected knowledge, the model can better capture informative features for downstream tasks as data evolves. Extensive experiments demonstrate the state-of-the-art performance of ENGINE. Code is available at: https://github.com/RenaissCode/ENGINE
no_new_dataset
0.945551
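The post-tuning knowledge injection by re-ranking mentioned in the record above can be sketched as blending the model's class probabilities with a knowledge score, e.g. similarity to rewritten descriptors. The blending rule and weight below are assumptions; ENGINE's actual re-ranking may differ.

```python
# Illustrative sketch of prediction re-ranking with a knowledge score;
# the fusion rule and alpha are assumptions for illustration only.
import torch

def rerank(logits: torch.Tensor, knowledge_scores: torch.Tensor,
           alpha: float = 0.3) -> torch.Tensor:
    """logits, knowledge_scores: (B, n_classes); returns re-ranked logits."""
    z = torch.softmax(logits, dim=-1)
    k = torch.softmax(knowledge_scores, dim=-1)
    return torch.log((1 - alpha) * z + alpha * k + 1e-9)  # fused ranking
```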
2503.08512
Zhuoyuan Li
Zhuoyuan Li, Jiahao Lu, Jiacheng Deng, Hanzhi Chang, Lifan Wu, Yanzhe Liang, Tianzhu Zhang
SAS: Segment Any 3D Scene with Integrated 2D Priors
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The open vocabulary capability of 3D models is increasingly valued, as traditional methods with models trained with fixed categories fail to recognize unseen objects in complex dynamic 3D scenes. In this paper, we propose a simple yet effective approach, SAS, to integrate the open vocabulary capability of multiple 2D models and migrate it to the 3D domain. Specifically, we first propose Model Alignment via Text to map different 2D models into the same embedding space using text as a bridge. Then, we propose Annotation-Free Model Capability Construction to explicitly quantify each 2D model's capability of recognizing different categories using diffusion models. Following this, point cloud features from different 2D models are fused under the guidance of the constructed model capabilities. Finally, the integrated 2D open vocabulary capability is transferred to the 3D domain through feature distillation. SAS outperforms previous methods by a large margin across multiple datasets, including ScanNet v2, Matterport3D, and nuScenes, while its generalizability is further validated on downstream tasks, e.g., gaussian segmentation and instance segmentation.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 15:01:54 GMT" } ]
2025-03-12T00:00:00
[ [ "Li", "Zhuoyuan", "" ], [ "Lu", "Jiahao", "" ], [ "Deng", "Jiacheng", "" ], [ "Chang", "Hanzhi", "" ], [ "Wu", "Lifan", "" ], [ "Liang", "Yanzhe", "" ], [ "Zhang", "Tianzhu", "" ] ]
TITLE: SAS: Segment Any 3D Scene with Integrated 2D Priors ABSTRACT: The open vocabulary capability of 3D models is increasingly valued, as traditional methods with models trained on fixed categories fail to recognize unseen objects in complex dynamic 3D scenes. In this paper, we propose a simple yet effective approach, SAS, to integrate the open vocabulary capability of multiple 2D models and migrate it to the 3D domain. Specifically, we first propose Model Alignment via Text to map different 2D models into the same embedding space using text as a bridge. Then, we propose Annotation-Free Model Capability Construction to explicitly quantify each 2D model's capability of recognizing different categories using diffusion models. Following this, point cloud features from different 2D models are fused under the guidance of the constructed model capabilities. Finally, the integrated 2D open vocabulary capability is transferred to the 3D domain through feature distillation. SAS outperforms previous methods by a large margin across multiple datasets, including ScanNet v2, Matterport3D, and nuScenes, while its generalizability is further validated on downstream tasks, e.g., Gaussian segmentation and instance segmentation.
no_new_dataset
0.9463
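SAS's Model Alignment via Text maps different 2D models into a shared embedding space with text as the bridge. The abstract does not give the alignment objective, so the least-squares linear map below is only an illustrative stand-in: it fits a projection between two models' text-embedding spaces using paired category-name embeddings, which could then be applied to visual features.

```python
import numpy as np

def align_via_text(text_emb_b, text_emb_a):
    """Fit a linear map from model B's embedding space into model A's,
    using paired text embeddings of shared category names as the bridge."""
    W, *_ = np.linalg.lstsq(text_emb_b, text_emb_a, rcond=None)
    return W   # apply to model B's visual features: feat_b @ W

rng = np.random.default_rng(0)
tb = rng.normal(size=(100, 64))        # 100 category names, model B text space
ta = tb @ rng.normal(size=(64, 32))    # fake model A text space (linearly related)
W = align_via_text(tb, ta)
print(np.allclose(tb @ W, ta, atol=1e-6))   # True: the map is recovered
```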
2503.08515
David Vallmanya Poch
David Vallmanya Poch, Yorick Estievenart, Elnura Zhalieva, Sukanya Patra, Mohammad Yaqub, Souhaib Ben Taieb
Segmentation-Guided CT Synthesis with Pixel-Wise Conformal Uncertainty Bounds
MICCAI 2025 Conference Submission. Follows the required LNCS format. 12 pages including references. Contains 4 figures and 1 table
null
null
null
cs.CV physics.med-ph
http://creativecommons.org/licenses/by/4.0/
Accurate dose calculations in proton therapy rely on high-quality CT images. While planning CTs (pCTs) serve as a reference for dosimetric planning, Cone Beam CT (CBCT) is used throughout Adaptive Radiotherapy (ART) to generate synthetic CTs (sCTs) for improved dose calculations. Despite its advantages of lower cost and reduced radiation exposure, CBCT suffers from severe artefacts and poor image quality, making it unsuitable for precise dosimetry. Deep learning-based CBCT-to-CT translation has emerged as a promising approach. Still, existing methods often introduce anatomical inconsistencies and lack reliable uncertainty estimates, limiting their clinical adoption. To bridge this gap, we propose STF-RUE, a novel framework integrating two key components. First, STF, a segmentation-guided CBCT-to-CT translation method that enhances anatomical consistency by leveraging segmentation priors extracted from pCTs. Second, RUE, a conformal prediction method that augments predicted CTs with pixel-wise conformal prediction intervals, providing clinicians with a robust reliability indicator. Comprehensive experiments using UNet++ and Fast-DDPM on two benchmark datasets demonstrate that STF-RUE significantly improves translation accuracy, as measured by a novel soft-tissue-focused metric designed for precise dose computation. Additionally, STF-RUE provides better-calibrated uncertainty sets for synthetic CTs, reinforcing trust in their clinical use. By addressing both anatomical fidelity and uncertainty quantification, STF-RUE marks a crucial step toward safer and more effective adaptive proton therapy. Code is available at https://anonymous.4open.science/r/cbct2ct_translation-B2D9/.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 15:07:16 GMT" } ]
2025-03-12T00:00:00
[ [ "Poch", "David Vallmanya", "" ], [ "Estievenart", "Yorick", "" ], [ "Zhalieva", "Elnura", "" ], [ "Patra", "Sukanya", "" ], [ "Yaqub", "Mohammad", "" ], [ "Taieb", "Souhaib Ben", "" ] ]
TITLE: Segmentation-Guided CT Synthesis with Pixel-Wise Conformal Uncertainty Bounds ABSTRACT: Accurate dose calculations in proton therapy rely on high-quality CT images. While planning CTs (pCTs) serve as a reference for dosimetric planning, Cone Beam CT (CBCT) is used throughout Adaptive Radiotherapy (ART) to generate synthetic CTs (sCTs) for improved dose calculations. Despite its advantages of lower cost and reduced radiation exposure, CBCT suffers from severe artefacts and poor image quality, making it unsuitable for precise dosimetry. Deep learning-based CBCT-to-CT translation has emerged as a promising approach. Still, existing methods often introduce anatomical inconsistencies and lack reliable uncertainty estimates, limiting their clinical adoption. To bridge this gap, we propose STF-RUE, a novel framework integrating two key components. First, STF, a segmentation-guided CBCT-to-CT translation method that enhances anatomical consistency by leveraging segmentation priors extracted from pCTs. Second, RUE, a conformal prediction method that augments predicted CTs with pixel-wise conformal prediction intervals, providing clinicians with a robust reliability indicator. Comprehensive experiments using UNet++ and Fast-DDPM on two benchmark datasets demonstrate that STF-RUE significantly improves translation accuracy, as measured by a novel soft-tissue-focused metric designed for precise dose computation. Additionally, STF-RUE provides better-calibrated uncertainty sets for synthetic CTs, reinforcing trust in their clinical use. By addressing both anatomical fidelity and uncertainty quantification, STF-RUE marks a crucial step toward safer and more effective adaptive proton therapy. Code is available at https://anonymous.4open.science/r/cbct2ct_translation-B2D9/.
no_new_dataset
0.950732
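RUE augments predicted CTs with pixel-wise conformal prediction intervals. A minimal sketch of split conformal prediction applied per pixel follows, with random arrays standing in for calibration and test volumes; the absolute-residual nonconformity score and per-pixel quantiles are standard choices assumed here, not taken from the paper.

```python
import numpy as np

def pixelwise_conformal_intervals(pred_cal, true_cal, pred_test, alpha=0.1):
    """Split conformal prediction for per-pixel regression.

    pred_cal, true_cal: (n_cal, H, W) calibration predictions and ground truth.
    pred_test: (n_test, H, W) test predictions.
    Returns lower/upper bounds with ~(1 - alpha) marginal coverage per pixel.
    """
    scores = np.abs(pred_cal - true_cal)                   # nonconformity scores
    n = scores.shape[0]
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    q = np.quantile(scores, q_level, axis=0)               # (H, W) half-widths
    return pred_test - q, pred_test + q

# Toy usage with random data standing in for CBCT-to-CT residuals.
rng = np.random.default_rng(0)
true_cal = rng.normal(size=(200, 8, 8))
pred_cal = true_cal + rng.normal(scale=0.1, size=(200, 8, 8))
pred_test = rng.normal(size=(5, 8, 8))
lo, hi = pixelwise_conformal_intervals(pred_cal, true_cal, pred_test)
print(lo.shape, hi.shape)   # (5, 8, 8) each
```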
2503.08529
Ryan Wong
Ryan Wong, Necati Cihan Camgoz, Richard Bowden
SignRep: Enhancing Self-Supervised Sign Representations
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Sign language representation learning presents unique challenges due to the complex spatio-temporal nature of signs and the scarcity of labeled datasets. Existing methods often rely either on models pre-trained on general visual tasks, which lack sign-specific features, or on complex multimodal and multi-branch architectures. To bridge this gap, we introduce a scalable, self-supervised framework for sign representation learning. We leverage important inductive (sign) priors during the training of our RGB model. To do this, we leverage simple but important cues based on skeletons while pretraining a masked autoencoder. These sign-specific priors, alongside feature regularization and an adversarial style-agnostic loss, provide a powerful backbone. Notably, our model does not require skeletal keypoints during inference, avoiding the limitations of keypoint-based models in downstream tasks. When finetuned, we achieve state-of-the-art performance for sign recognition on the WLASL, ASL-Citizen and NMFs-CSL datasets, using a simpler architecture and only a single modality. Beyond recognition, our frozen model excels in sign dictionary retrieval and sign translation, surpassing standard MAE pretraining and skeletal-based representations in retrieval. It also reduces computational costs for training existing sign translation models while maintaining strong performance on Phoenix2014T, CSL-Daily and How2Sign.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 15:20:01 GMT" } ]
2025-03-12T00:00:00
[ [ "Wong", "Ryan", "" ], [ "Camgoz", "Necati Cihan", "" ], [ "Bowden", "Richard", "" ] ]
TITLE: SignRep: Enhancing Self-Supervised Sign Representations ABSTRACT: Sign language representation learning presents unique challenges due to the complex spatio-temporal nature of signs and the scarcity of labeled datasets. Existing methods often rely either on models pre-trained on general visual tasks, which lack sign-specific features, or on complex multimodal and multi-branch architectures. To bridge this gap, we introduce a scalable, self-supervised framework for sign representation learning. We leverage important inductive (sign) priors during the training of our RGB model. To do this, we leverage simple but important cues based on skeletons while pretraining a masked autoencoder. These sign-specific priors, alongside feature regularization and an adversarial style-agnostic loss, provide a powerful backbone. Notably, our model does not require skeletal keypoints during inference, avoiding the limitations of keypoint-based models in downstream tasks. When finetuned, we achieve state-of-the-art performance for sign recognition on the WLASL, ASL-Citizen and NMFs-CSL datasets, using a simpler architecture and only a single modality. Beyond recognition, our frozen model excels in sign dictionary retrieval and sign translation, surpassing standard MAE pretraining and skeletal-based representations in retrieval. It also reduces computational costs for training existing sign translation models while maintaining strong performance on Phoenix2014T, CSL-Daily and How2Sign.
no_new_dataset
0.946498
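SignRep pretrains a masked autoencoder on RGB sign video. A minimal sketch of the MAE-style random masking step, which keeps a random subset of patch tokens for the encoder; the 75% mask ratio is the common MAE default, assumed here rather than taken from the paper.

```python
import torch

def random_masking(tokens, mask_ratio=0.75):
    """MAE-style masking: keep a random subset of tokens for the encoder.

    tokens: (B, N, D). Returns kept tokens and the shuffle indices needed
    to restore the original token order for the decoder.
    """
    B, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                 # one random score per token
    ids_shuffle = noise.argsort(dim=1)       # random permutation per sample
    ids_keep = ids_shuffle[:, :n_keep]
    kept = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return kept, ids_shuffle

kept, _ = random_masking(torch.randn(2, 196, 768))
print(kept.shape)   # torch.Size([2, 49, 768]) -- 25% of 196 patches kept
```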
2503.08532
Julian Aron Prenner
Julian Aron Prenner and Romain Robbes
Bogus Bugs, Duplicates, and Revealing Comments: Data Quality Issues in NPR
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The performance of a machine learning system is determined not only by the model but also, to a substantial degree, by the data it is trained on. With the increasing use of machine learning, issues related to data quality have also become a concern in automated program repair (APR) research. In this position paper, we report some of the data-related issues we have come across when working with several large APR datasets and benchmarks, including, for instance, duplicates or "bogus bugs". We briefly discuss the potential impact of these problems on repair performance and propose possible remedies. We believe that more data-focused approaches could improve the performance and robustness of current and future APR systems.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 15:23:13 GMT" } ]
2025-03-12T00:00:00
[ [ "Prenner", "Julian Aron", "" ], [ "Robbes", "Romain", "" ] ]
TITLE: Bogus Bugs, Duplicates, and Revealing Comments: Data Quality Issues in NPR ABSTRACT: The performance of a machine learning system is determined not only by the model but also, to a substantial degree, by the data it is trained on. With the increasing use of machine learning, issues related to data quality have also become a concern in automated program repair (APR) research. In this position paper, we report some of the data-related issues we have come across when working with several large APR datasets and benchmarks, including, for instance, duplicates or "bogus bugs". We briefly discuss the potential impact of these problems on repair performance and propose possible remedies. We believe that more data-focused approaches could improve the performance and robustness of current and future APR systems.
no_new_dataset
0.956104
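One practical remedy for the duplicates the position paper describes is hash-based near-duplicate detection over normalized code. Here is a minimal sketch under simple assumptions: comment stripping and whitespace removal are the only normalization steps, whereas a real APR pipeline would likely need stronger canonicalization (identifier renaming, AST-level comparison).

```python
import hashlib
import re

def normalize(code: str) -> str:
    """Crude normalization: drop comments, then drop all whitespace."""
    code = re.sub(r"/\*.*?\*/", "", code, flags=re.S)   # block comments
    code = re.sub(r"//[^\n]*", "", code)                # line comments
    return re.sub(r"\s+", "", code)

def find_duplicates(samples):
    """Bucket dataset entries whose normalized code hashes identically."""
    buckets = {}
    for sample_id, code in samples:
        h = hashlib.sha256(normalize(code).encode()).hexdigest()
        buckets.setdefault(h, []).append(sample_id)
    return [ids for ids in buckets.values() if len(ids) > 1]

samples = [
    ("bug-1", "int add(int a,int b){ return a+b; } // fixed"),
    ("bug-2", "int add(int a, int b) { return a + b; }"),
    ("bug-3", "int sub(int a, int b) { return a - b; }"),
]
print(find_duplicates(samples))   # [['bug-1', 'bug-2']]
```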
2503.08533
Siddhant Arora
Siddhant Arora, Yifan Peng, Jiatong Shi, Jinchuan Tian, William Chen, Shikhar Bharadwaj, Hayato Futami, Yosuke Kashiwagi, Emiru Tsunoo, Shuichiro Shimizu, Vaibhav Srivastav, Shinji Watanabe
ESPnet-SDS: Unified Toolkit and Demo for Spoken Dialogue Systems
Accepted at NAACL 2025 Demo Track
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advancements in audio foundation models (FMs) have fueled interest in end-to-end (E2E) spoken dialogue systems, but the different web interfaces for each system make it challenging to compare and contrast them effectively. Motivated by this, we introduce an open-source, user-friendly toolkit designed to build unified web interfaces for various cascaded and E2E spoken dialogue systems. Our demo further provides users with the option to obtain on-the-fly automated evaluation metrics such as (1) latency, (2) ability to understand user input, (3) coherence, diversity, and relevance of the system response, and (4) intelligibility and audio quality of the system output. Using these evaluation metrics, we compare various cascaded and E2E spoken dialogue systems, with a human-human conversation dataset as a proxy. Our analysis demonstrates that the toolkit allows researchers to effortlessly compare and contrast different technologies, providing valuable insights, such as that current E2E systems have poorer audio quality and less diverse responses. An example demo produced using our toolkit is publicly available here: https://huggingface.co/spaces/Siddhant/Voice_Assistant_Demo.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 15:24:02 GMT" } ]
2025-03-12T00:00:00
[ [ "Arora", "Siddhant", "" ], [ "Peng", "Yifan", "" ], [ "Shi", "Jiatong", "" ], [ "Tian", "Jinchuan", "" ], [ "Chen", "William", "" ], [ "Bharadwaj", "Shikhar", "" ], [ "Futami", "Hayato", "" ], [ "Kashiwagi", "Yosuke", "" ], [ "Tsunoo", "Emiru", "" ], [ "Shimizu", "Shuichiro", "" ], [ "Srivastav", "Vaibhav", "" ], [ "Watanabe", "Shinji", "" ] ]
TITLE: ESPnet-SDS: Unified Toolkit and Demo for Spoken Dialogue Systems ABSTRACT: Advancements in audio foundation models (FMs) have fueled interest in end-to-end (E2E) spoken dialogue systems, but the different web interfaces for each system make it challenging to compare and contrast them effectively. Motivated by this, we introduce an open-source, user-friendly toolkit designed to build unified web interfaces for various cascaded and E2E spoken dialogue systems. Our demo further provides users with the option to obtain on-the-fly automated evaluation metrics such as (1) latency, (2) ability to understand user input, (3) coherence, diversity, and relevance of the system response, and (4) intelligibility and audio quality of the system output. Using these evaluation metrics, we compare various cascaded and E2E spoken dialogue systems, with a human-human conversation dataset as a proxy. Our analysis demonstrates that the toolkit allows researchers to effortlessly compare and contrast different technologies, providing valuable insights, such as that current E2E systems have poorer audio quality and less diverse responses. An example demo produced using our toolkit is publicly available here: https://huggingface.co/spaces/Siddhant/Voice_Assistant_Demo.
no_new_dataset
0.945901
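Among the toolkit's on-the-fly metrics, latency is the simplest to reproduce. A minimal sketch of wall-clock turn latency around an arbitrary dialogue-system callable; `system_respond` is a hypothetical stand-in, not an ESPnet-SDS API.

```python
import time

def measure_latency(system_respond, user_audio):
    """Wall-clock latency of one turn: input handed over -> reply ready."""
    start = time.perf_counter()
    reply = system_respond(user_audio)
    return reply, time.perf_counter() - start

# Toy usage with a dummy system that answers instantly.
reply, latency = measure_latency(lambda audio: "hello!", b"fake-waveform")
print(f"{latency * 1000:.2f} ms")
```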
2503.08534
Mingshi Li
Mingshi Li, Dusan Grujicic, Ben Somers, Stien Heremans, Steven De Saeger, Matthew B. Blaschko
ChromaFormer: A Scalable and Accurate Transformer Architecture for Land Cover Classification
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Remote sensing imagery from systems such as Sentinel provides full coverage of the Earth's surface at around 10-meter resolution. The remote sensing community has transitioned to extensive use of deep learning models due to their high performance on benchmarks such as the UCMerced and ISPRS Vaihingen datasets. Convolutional models such as UNet and ResNet variations are commonly employed for remote sensing but typically only accept three channels, as they were developed for RGB imagery, while satellite systems provide more than ten. Recently, several transformer architectures have been proposed for remote sensing, but they have not been extensively benchmarked and are typically used on small datasets such as Salinas Valley. Meanwhile, it is becoming feasible to obtain dense spatial land-use labels for entire first-level administrative divisions of some countries. Scaling law observations suggest that substantially larger multi-spectral transformer models could provide a significant leap in remote sensing performance in these settings. In this work, we propose ChromaFormer, a family of multi-spectral transformer models, which we evaluate across orders of magnitude differences in model parameters to assess their performance and scaling effectiveness on a densely labeled imagery dataset of Flanders, Belgium, covering more than 13,500 km^2 and containing 15 classes. We propose a novel multi-spectral attention strategy and demonstrate its effectiveness through ablations. Furthermore, we show that models many orders of magnitude larger than conventional architectures, such as UNet, lead to substantial accuracy improvements: a UNet++ model with 23M parameters achieves less than 65% accuracy, while a multi-spectral transformer with 655M parameters achieves over 95% accuracy on the Biological Valuation Map of Flanders.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 15:24:50 GMT" } ]
2025-03-12T00:00:00
[ [ "Li", "Mingshi", "" ], [ "Grujicic", "Dusan", "" ], [ "Somers", "Ben", "" ], [ "Heremans", "Stien", "" ], [ "De Saeger", "Steven", "" ], [ "Blaschko", "Matthew B.", "" ] ]
TITLE: ChromaFormer: A Scalable and Accurate Transformer Architecture for Land Cover Classification ABSTRACT: Remote sensing imagery from systems such as Sentinel provides full coverage of the Earth's surface at around 10-meter resolution. The remote sensing community has transitioned to extensive use of deep learning models due to their high performance on benchmarks such as the UCMerced and ISPRS Vaihingen datasets. Convolutional models such as UNet and ResNet variations are commonly employed for remote sensing but typically only accept three channels, as they were developed for RGB imagery, while satellite systems provide more than ten. Recently, several transformer architectures have been proposed for remote sensing, but they have not been extensively benchmarked and are typically used on small datasets such as Salinas Valley. Meanwhile, it is becoming feasible to obtain dense spatial land-use labels for entire first-level administrative divisions of some countries. Scaling law observations suggest that substantially larger multi-spectral transformer models could provide a significant leap in remote sensing performance in these settings. In this work, we propose ChromaFormer, a family of multi-spectral transformer models, which we evaluate across orders of magnitude differences in model parameters to assess their performance and scaling effectiveness on a densely labeled imagery dataset of Flanders, Belgium, covering more than 13,500 km^2 and containing 15 classes. We propose a novel multi-spectral attention strategy and demonstrate its effectiveness through ablations. Furthermore, we show that models many orders of magnitude larger than conventional architectures, such as UNet, lead to substantial accuracy improvements: a UNet++ model with 23M parameters achieves less than 65% accuracy, while a multi-spectral transformer with 655M parameters achieves over 95% accuracy on the Biological Valuation Map of Flanders.
no_new_dataset
0.950041
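The core architectural point of ChromaFormer is accepting more than three input channels. A minimal sketch of a ViT-style patch embedding generalized to an arbitrary number of spectral bands; the band count, token dimension, and patch size below are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiSpectralPatchEmbed(nn.Module):
    """Patch embedding that accepts all satellite bands, not just RGB.

    A strided convolution maps C-band patches to tokens, so a transformer
    backbone can consume e.g. 12 Sentinel-2 bands directly.
    """
    def __init__(self, in_bands=12, dim=256, patch=16):
        super().__init__()
        self.proj = nn.Conv2d(in_bands, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                    # x: (B, C, H, W)
        t = self.proj(x)                     # (B, dim, H/patch, W/patch)
        return t.flatten(2).transpose(1, 2)  # (B, tokens, dim)

tokens = MultiSpectralPatchEmbed()(torch.randn(2, 12, 64, 64))
print(tokens.shape)   # torch.Size([2, 16, 256])
```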
2503.08540
Soham Deshmukh
Soham Deshmukh, Satvik Dixit, Rita Singh, Bhiksha Raj
Mellow: a small audio language model for reasoning
Checkpoint and dataset available at: https://github.com/soham97/mellow
null
null
null
cs.SD cs.AI eess.AS
http://creativecommons.org/licenses/by/4.0/
Multimodal Audio-Language Models (ALMs) can understand and reason over both audio and text. Typically, reasoning performance correlates with model size, with the best results achieved by models exceeding 8 billion parameters. However, no prior work has explored enabling small audio-language models to perform reasoning tasks, despite the potential applications for edge devices. To address this gap, we introduce Mellow, a small Audio-Language Model specifically designed for reasoning. Mellow achieves state-of-the-art performance among existing small audio-language models and surpasses several larger models in reasoning capabilities. For instance, Mellow scores 52.11 on MMAU, comparable to SoTA Qwen2 Audio (which scores 52.5) while using 50 times fewer parameters and being trained on 60 times less data (audio hours). To train Mellow, we introduce ReasonAQA, a dataset designed to enhance audio-grounded reasoning in models. It consists of a mixture of existing datasets (30% of the data) and synthetically generated data (70%). The synthetic dataset is derived from audio captioning datasets, where Large Language Models (LLMs) generate detailed and multiple-choice questions focusing on audio events, objects, acoustic scenes, signal properties, semantics, and listener emotions. To evaluate Mellow's reasoning ability, we benchmark it on a diverse set of tasks, evaluating it on both in-distribution and out-of-distribution data, including audio understanding, deductive reasoning, and comparative reasoning. Finally, we conduct extensive ablation studies to explore the impact of projection layer choices, synthetic data generation methods, and language model pretraining on reasoning performance. Our training dataset, findings, and baseline pave the way for developing small ALMs capable of reasoning.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 15:29:00 GMT" } ]
2025-03-12T00:00:00
[ [ "Deshmukh", "Soham", "" ], [ "Dixit", "Satvik", "" ], [ "Singh", "Rita", "" ], [ "Raj", "Bhiksha", "" ] ]
TITLE: Mellow: a small audio language model for reasoning ABSTRACT: Multimodal Audio-Language Models (ALMs) can understand and reason over both audio and text. Typically, reasoning performance correlates with model size, with the best results achieved by models exceeding 8 billion parameters. However, no prior work has explored enabling small audio-language models to perform reasoning tasks, despite the potential applications for edge devices. To address this gap, we introduce Mellow, a small Audio-Language Model specifically designed for reasoning. Mellow achieves state-of-the-art performance among existing small audio-language models and surpasses several larger models in reasoning capabilities. For instance, Mellow scores 52.11 on MMAU, comparable to SoTA Qwen2 Audio (which scores 52.5) while using 50 times fewer parameters and being trained on 60 times less data (audio hours). To train Mellow, we introduce ReasonAQA, a dataset designed to enhance audio-grounded reasoning in models. It consists of a mixture of existing datasets (30% of the data) and synthetically generated data (70%). The synthetic dataset is derived from audio captioning datasets, where Large Language Models (LLMs) generate detailed and multiple-choice questions focusing on audio events, objects, acoustic scenes, signal properties, semantics, and listener emotions. To evaluate Mellow's reasoning ability, we benchmark it on a diverse set of tasks, evaluating it on both in-distribution and out-of-distribution data, including audio understanding, deductive reasoning, and comparative reasoning. Finally, we conduct extensive ablation studies to explore the impact of projection layer choices, synthetic data generation methods, and language model pretraining on reasoning performance. Our training dataset, findings, and baseline pave the way for developing small ALMs capable of reasoning.
new_dataset
0.968827
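ReasonAQA mixes existing datasets (30%) with synthetic data (70%). A minimal sketch of drawing such a fixed-proportion training mixture; only the 30/70 split comes from the abstract, and the sampling scheme itself is an assumption.

```python
import random

def mix_datasets(existing, synthetic, frac_existing=0.3, n=10, seed=0):
    """Draw a training mixture: ~30% existing QA data, ~70% synthetic QA."""
    rng = random.Random(seed)
    k_existing = round(n * frac_existing)
    return (rng.sample(existing, k_existing)
            + rng.sample(synthetic, n - k_existing))

existing = [f"exist-{i}" for i in range(100)]
synthetic = [f"synth-{i}" for i in range(100)]
print(mix_datasets(existing, synthetic))   # 3 existing + 7 synthetic items
```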
2503.08548
Peng Hao
Peng Hao, Chaofan Zhang, Dingzhe Li, Xiaoge Cao, Xiaoshuai Hao, Shaowei Cui, Shuo Wang
TLA: Tactile-Language-Action Model for Contact-Rich Manipulation
null
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Significant progress has been made in vision-language models. However, language-conditioned robotic manipulation for contact-rich tasks remains underexplored, particularly in terms of tactile sensing. To address this gap, we introduce the Tactile-Language-Action (TLA) model, which effectively processes sequential tactile feedback via cross-modal language grounding to enable robust policy generation in contact-intensive scenarios. In addition, we construct a comprehensive dataset that contains 24k pairs of tactile action instruction data, customized for fingertip peg-in-hole assembly, providing essential resources for TLA training and evaluation. Our results show that TLA significantly outperforms traditional imitation learning methods (e.g., diffusion policy) in terms of effective action generation and action accuracy, while demonstrating strong generalization capabilities by achieving a success rate of over 85\% on previously unseen assembly clearances and peg shapes. We publicly release all data and code in the hope of advancing research in language-conditioned tactile manipulation skill learning. Project website: https://sites.google.com/view/tactile-language-action/
[ { "version": "v1", "created": "Tue, 11 Mar 2025 15:36:28 GMT" } ]
2025-03-12T00:00:00
[ [ "Hao", "Peng", "" ], [ "Zhang", "Chaofan", "" ], [ "Li", "Dingzhe", "" ], [ "Cao", "Xiaoge", "" ], [ "Hao", "Xiaoshuai", "" ], [ "Cui", "Shaowei", "" ], [ "Wang", "Shuo", "" ] ]
TITLE: TLA: Tactile-Language-Action Model for Contact-Rich Manipulation ABSTRACT: Significant progress has been made in vision-language models. However, language-conditioned robotic manipulation for contact-rich tasks remains underexplored, particularly in terms of tactile sensing. To address this gap, we introduce the Tactile-Language-Action (TLA) model, which effectively processes sequential tactile feedback via cross-modal language grounding to enable robust policy generation in contact-intensive scenarios. In addition, we construct a comprehensive dataset that contains 24k pairs of tactile action instruction data, customized for fingertip peg-in-hole assembly, providing essential resources for TLA training and evaluation. Our results show that TLA significantly outperforms traditional imitation learning methods (e.g., diffusion policy) in terms of effective action generation and action accuracy, while demonstrating strong generalization capabilities by achieving a success rate of over 85\% on previously unseen assembly clearances and peg shapes. We publicly release all data and code in the hope of advancing research in language-conditioned tactile manipulation skill learning. Project website: https://sites.google.com/view/tactile-language-action/
new_dataset
0.960212
2503.08551
Wanyong Feng
Wanyong Feng, Peter Tran, Stephen Sireci, Andrew Lan
Reasoning and Sampling-Augmented MCQ Difficulty Prediction via LLMs
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The difficulty of multiple-choice questions (MCQs) is a crucial factor for educational assessments. Predicting MCQ difficulty is challenging since it requires understanding both the complexity of reaching the correct option and the plausibility of distractors, i.e., incorrect options. In this paper, we propose a novel, two-stage method to predict the difficulty of MCQs. First, to better estimate the complexity of each MCQ, we use large language models (LLMs) to augment the reasoning steps required to reach each option. We use not just the MCQ itself but also these reasoning steps as input to predict the difficulty. Second, to capture the plausibility of distractors, we sample knowledge levels from a distribution to account for variation among students responding to the MCQ. This setup, inspired by item response theory (IRT), enables us to estimate the likelihood of students selecting each (both correct and incorrect) option. We align these predictions with their ground truth values, using a Kullback-Leibler (KL) divergence-based regularization objective, and use the estimated likelihoods to predict MCQ difficulty. We evaluate our method on two real-world \emph{math} MCQ and response datasets with ground truth difficulty values estimated using IRT. Experimental results show that our method outperforms all baselines, up to a 28.3\% reduction in mean squared error and a 34.6\% improvement in the coefficient of determination. We also qualitatively discuss how our novel method results in higher accuracy in predicting MCQ difficulty.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 15:39:43 GMT" } ]
2025-03-12T00:00:00
[ [ "Feng", "Wanyong", "" ], [ "Tran", "Peter", "" ], [ "Sireci", "Stephen", "" ], [ "Lan", "Andrew", "" ] ]
TITLE: Reasoning and Sampling-Augmented MCQ Difficulty Prediction via LLMs ABSTRACT: The difficulty of multiple-choice questions (MCQs) is a crucial factor for educational assessments. Predicting MCQ difficulty is challenging since it requires understanding both the complexity of reaching the correct option and the plausibility of distractors, i.e., incorrect options. In this paper, we propose a novel, two-stage method to predict the difficulty of MCQs. First, to better estimate the complexity of each MCQ, we use large language models (LLMs) to augment the reasoning steps required to reach each option. We use not just the MCQ itself but also these reasoning steps as input to predict the difficulty. Second, to capture the plausibility of distractors, we sample knowledge levels from a distribution to account for variation among students responding to the MCQ. This setup, inspired by item response theory (IRT), enables us to estimate the likelihood of students selecting each (both correct and incorrect) option. We align these predictions with their ground truth values, using a Kullback-Leibler (KL) divergence-based regularization objective, and use the estimated likelihoods to predict MCQ difficulty. We evaluate our method on two real-world \emph{math} MCQ and response datasets with ground truth difficulty values estimated using IRT. Experimental results show that our method outperforms all baselines, up to a 28.3\% reduction in mean squared error and a 34.6\% improvement in the coefficient of determination. We also qualitatively discuss how our novel method results in higher accuracy in predicting MCQ difficulty.
no_new_dataset
0.948822
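The second stage samples knowledge levels and estimates per-option selection likelihoods, aligned to ground truth with a KL regularizer. The abstract does not give the exact parameterization, so the IRT-flavored softmax below (per-option difficulty and discrimination parameters) is only an illustrative stand-in.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def option_likelihoods(ability, difficulty, discrimination):
    """IRT-style option logits: higher ability raises the odds of options
    whose difficulty it exceeds. ability: (S,) sampled knowledge levels;
    difficulty, discrimination: (K,) per-option parameters. Returns (S, K)."""
    logits = discrimination[None, :] * (ability[:, None] - difficulty[None, :])
    return softmax(logits, axis=1)

def kl_regularizer(pred_probs, true_probs, eps=1e-9):
    """KL(true || pred), averaged over the sampled students."""
    kl = np.sum(true_probs * (np.log(true_probs + eps) - np.log(pred_probs + eps)), axis=1)
    return float(np.mean(kl))

abilities = np.random.default_rng(0).normal(size=(64,))   # sampled knowledge levels
diff = np.array([-0.5, 0.3, 0.8, 1.2])                    # per-option difficulty
disc = np.array([1.5, 1.0, 1.0, 1.0])                     # per-option discrimination
p = option_likelihoods(abilities, diff, disc)
true_p = np.tile(np.array([0.55, 0.2, 0.15, 0.1]), (64, 1))
print(kl_regularizer(p, true_p))
```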
2503.08569
Yixuan Weng
Minjun Zhu, Yixuan Weng, Linyi Yang, Yue Zhang
DeepReview: Improving LLM-based Paper Review with Human-like Deep Thinking Process
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) are increasingly utilized in scientific research assessment, particularly in automated paper review. However, existing LLM-based review systems face significant challenges, including limited domain expertise, hallucinated reasoning, and a lack of structured evaluation. To address these limitations, we introduce DeepReview, a multi-stage framework designed to emulate expert reviewers by incorporating structured analysis, literature retrieval, and evidence-based argumentation. Using DeepReview-13K, a curated dataset with structured annotations, we train DeepReviewer-14B, which outperforms CycleReviewer-70B with fewer tokens. In its best mode, DeepReviewer-14B achieves win rates of 88.21\% and 80.20\% against GPT-o1 and DeepSeek-R1 in evaluations. Our work sets a new benchmark for LLM-based paper review, with all resources publicly available. The code, model, dataset and demo have been released at http://ai-researcher.net.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 15:59:43 GMT" } ]
2025-03-12T00:00:00
[ [ "Zhu", "Minjun", "" ], [ "Weng", "Yixuan", "" ], [ "Yang", "Linyi", "" ], [ "Zhang", "Yue", "" ] ]
TITLE: DeepReview: Improving LLM-based Paper Review with Human-like Deep Thinking Process ABSTRACT: Large Language Models (LLMs) are increasingly utilized in scientific research assessment, particularly in automated paper review. However, existing LLM-based review systems face significant challenges, including limited domain expertise, hallucinated reasoning, and a lack of structured evaluation. To address these limitations, we introduce DeepReview, a multi-stage framework designed to emulate expert reviewers by incorporating structured analysis, literature retrieval, and evidence-based argumentation. Using DeepReview-13K, a curated dataset with structured annotations, we train DeepReviewer-14B, which outperforms CycleReviewer-70B with fewer tokens. In its best mode, DeepReviewer-14B achieves win rates of 88.21\% and 80.20\% against GPT-o1 and DeepSeek-R1 in evaluations. Our work sets a new benchmark for LLM-based paper review, with all resources publicly available. The code, model, dataset and demo have been released at http://ai-researcher.net.
new_dataset
0.951414
2503.08576
Xichen Tan
Xichen Tan, Yunfan Ye, Yuanjing Luo, Qian Wan, Fang Liu, Zhiping Cai
RAG-Adapter: A Plug-and-Play RAG-enhanced Framework for Long Video Understanding
37 pages, 36 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-modal Large Language Models (MLLMs) capable of video understanding are advancing rapidly. To effectively assess their video comprehension capabilities, long video understanding benchmarks, such as Video-MME and MLVU, have been proposed. However, these benchmarks directly use uniform frame sampling for testing, which results in significant information loss and prevents the evaluations from accurately reflecting the true abilities of MLLMs. To address this, we propose RAG-Adapter, a plug-and-play framework that reduces information loss during testing by sampling frames most relevant to the given question. Additionally, we introduce a Grouped-supervised Contrastive Learning (GCL) method to further enhance the sampling effectiveness of RAG-Adapter through fine-tuning on our constructed MMAT dataset. Finally, we test numerous baseline MLLMs on various video understanding benchmarks, finding that RAG-Adapter sampling consistently outperforms uniform sampling (e.g., the accuracy of GPT-4o increases by 9.3 percent on Video-MME), providing a more accurate testing method for long video benchmarks.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 16:10:43 GMT" } ]
2025-03-12T00:00:00
[ [ "Tan", "Xichen", "" ], [ "Ye", "Yunfan", "" ], [ "Luo", "Yuanjing", "" ], [ "Wan", "Qian", "" ], [ "Liu", "Fang", "" ], [ "Cai", "Zhiping", "" ] ]
TITLE: RAG-Adapter: A Plug-and-Play RAG-enhanced Framework for Long Video Understanding ABSTRACT: Multi-modal Large Language Models (MLLMs) capable of video understanding are advancing rapidly. To effectively assess their video comprehension capabilities, long video understanding benchmarks, such as Video-MME and MLVU, have been proposed. However, these benchmarks directly use uniform frame sampling for testing, which results in significant information loss and prevents the evaluations from accurately reflecting the true abilities of MLLMs. To address this, we propose RAG-Adapter, a plug-and-play framework that reduces information loss during testing by sampling frames most relevant to the given question. Additionally, we introduce a Grouped-supervised Contrastive Learning (GCL) method to further enhance the sampling effectiveness of RAG-Adapter through fine-tuning on our constructed MMAT dataset. Finally, we test numerous baseline MLLMs on various video understanding benchmarks, finding that RAG-Adapter sampling consistently outperforms uniform sampling (e.g., the accuracy of GPT-4o increases by 9.3 percent on Video-MME), providing a more accurate testing method for long video benchmarks.
new_dataset
0.946101
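The heart of RAG-Adapter is replacing uniform sampling with question-conditioned frame retrieval. A minimal sketch of top-k frame selection by cosine similarity, with random arrays standing in for the frame and question embeddings an encoder such as CLIP would produce.

```python
import numpy as np

def top_k_frames(frame_embs, question_emb, k=8):
    """Select the k frames most relevant to the question by cosine similarity.

    frame_embs: (N, D) embeddings of candidate frames.
    question_emb: (D,) embedding of the benchmark question.
    Returns indices of the selected frames, in temporal order.
    """
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    q = question_emb / np.linalg.norm(question_emb)
    sims = f @ q
    idx = np.argsort(-sims)[:k]
    return np.sort(idx)   # keep temporal order for the MLLM

rng = np.random.default_rng(0)
frames = rng.normal(size=(512, 256))   # stand-in for per-frame visual features
question = rng.normal(size=(256,))     # stand-in for the text embedding
print(top_k_frames(frames, question, k=8))
```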
2503.08585
Shehreen Azad
Shehreen Azad, Vibhav Vineet, Yogesh Singh Rawat
HierarQ: Task-Aware Hierarchical Q-Former for Enhanced Video Understanding
Accepted in CVPR 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Despite advancements in multimodal large language models (MLLMs), current approaches struggle in medium-to-long video understanding due to frame and context length limitations. As a result, these models often depend on frame sampling, which risks missing key information over time and lacks task-specific relevance. To address these challenges, we introduce HierarQ, a task-aware hierarchical Q-Former based framework that sequentially processes frames to bypass the need for frame sampling, while avoiding LLM's context length limitations. We introduce a lightweight two-stream language-guided feature modulator to incorporate task awareness in video understanding, with the entity stream capturing frame-level object information within a short context and the scene stream identifying their broader interactions over longer periods of time. Each stream is supported by dedicated memory banks, which enable our proposed Hierarchical Querying transformer (HierarQ) to effectively capture short- and long-term context. Extensive evaluations on 10 video benchmarks across video understanding, question answering, and captioning tasks demonstrate HierarQ's state-of-the-art performance across most datasets, proving its robustness and efficiency for comprehensive video analysis.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 16:21:23 GMT" } ]
2025-03-12T00:00:00
[ [ "Azad", "Shehreen", "" ], [ "Vineet", "Vibhav", "" ], [ "Rawat", "Yogesh Singh", "" ] ]
TITLE: HierarQ: Task-Aware Hierarchical Q-Former for Enhanced Video Understanding ABSTRACT: Despite advancements in multimodal large language models (MLLMs), current approaches struggle in medium-to-long video understanding due to frame and context length limitations. As a result, these models often depend on frame sampling, which risks missing key information over time and lacks task-specific relevance. To address these challenges, we introduce HierarQ, a task-aware hierarchical Q-Former based framework that sequentially processes frames to bypass the need for frame sampling, while avoiding LLM's context length limitations. We introduce a lightweight two-stream language-guided feature modulator to incorporate task awareness in video understanding, with the entity stream capturing frame-level object information within a short context and the scene stream identifying their broader interactions over longer periods of time. Each stream is supported by dedicated memory banks, which enable our proposed Hierarchical Querying transformer (HierarQ) to effectively capture short- and long-term context. Extensive evaluations on 10 video benchmarks across video understanding, question answering, and captioning tasks demonstrate HierarQ's state-of-the-art performance across most datasets, proving its robustness and efficiency for comprehensive video analysis.
no_new_dataset
0.943243
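HierarQ's entity and scene streams are each backed by dedicated memory banks over sequentially processed frames. A minimal sketch of a FIFO feature memory whose capacity sets the temporal horizon; the capacities and feature size are illustrative assumptions, not the paper's configuration.

```python
from collections import deque
import torch

class MemoryBank:
    """FIFO feature memory: keeps the most recent frame features so queries
    can attend over short- or long-term context (capacity sets the horizon)."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)

    def update(self, feat):          # feat: (D,) features of one processed frame
        self.buf.append(feat)

    def read(self):                  # (T, D) stacked context for attention
        return torch.stack(list(self.buf))

short_term = MemoryBank(capacity=8)     # entity stream: short context
long_term = MemoryBank(capacity=128)    # scene stream: long horizon
for t in range(10):
    f = torch.randn(256)
    short_term.update(f)
    long_term.update(f)
print(short_term.read().shape, long_term.read().shape)  # (8, 256) (10, 256)
```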
2503.08589
Paul Calle
Paul Calle, Averi Bates, Justin C. Reynolds, Yunlong Liu, Haoyang Cui, Sinaro Ly, Chen Wang, Qinghao Zhang, Alberto J. de Armendi, Shashank S. Shettar, Kar Ming Fung, Qinggong Tang, Chongle Pan
Integration of nested cross-validation, automated hyperparameter optimization, high-performance computing to reduce and quantify the variance of test performance estimation of deep learning models
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The variability and biases in the real-world performance benchmarking of deep learning models for medical imaging compromise their trustworthiness for real-world deployment. The common approach of holding out a single fixed test set fails to quantify the variance in the estimation of test performance metrics. This study introduces NACHOS (Nested and Automated Cross-validation and Hyperparameter Optimization using Supercomputing) to reduce and quantify the variance of test performance metrics of deep learning models. NACHOS integrates Nested Cross-Validation (NCV) and Automated Hyperparameter Optimization (AHPO) within a parallelized high-performance computing (HPC) framework. NACHOS was demonstrated on a chest X-ray repository and an Optical Coherence Tomography (OCT) dataset under multiple data partitioning schemes. Beyond performance estimation, DACHOS (Deployment with Automated Cross-validation and Hyperparameter Optimization using Supercomputing) is introduced to leverage AHPO and cross-validation to build the final model on the full dataset, improving expected deployment performance. The findings underscore the importance of NCV in quantifying and reducing estimation variance, AHPO in optimizing hyperparameters consistently across test folds, and HPC in ensuring computational feasibility. By integrating these methodologies, NACHOS and DACHOS provide a scalable, reproducible, and trustworthy framework for DL model evaluation and deployment in medical imaging.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 16:25:44 GMT" } ]
2025-03-12T00:00:00
[ [ "Calle", "Paul", "" ], [ "Bates", "Averi", "" ], [ "Reynolds", "Justin C.", "" ], [ "Liu", "Yunlong", "" ], [ "Cui", "Haoyang", "" ], [ "Ly", "Sinaro", "" ], [ "Wang", "Chen", "" ], [ "Zhang", "Qinghao", "" ], [ "de Armendi", "Alberto J.", "" ], [ "Shettar", "Shashank S.", "" ], [ "Fung", "Kar Ming", "" ], [ "Tang", "Qinggong", "" ], [ "Pan", "Chongle", "" ] ]
TITLE: Integration of nested cross-validation, automated hyperparameter optimization, high-performance computing to reduce and quantify the variance of test performance estimation of deep learning models ABSTRACT: The variability and biases in the real-world performance benchmarking of deep learning models for medical imaging compromise their trustworthiness for real-world deployment. The common approach of holding out a single fixed test set fails to quantify the variance in the estimation of test performance metrics. This study introduces NACHOS (Nested and Automated Cross-validation and Hyperparameter Optimization using Supercomputing) to reduce and quantify the variance of test performance metrics of deep learning models. NACHOS integrates Nested Cross-Validation (NCV) and Automated Hyperparameter Optimization (AHPO) within a parallelized high-performance computing (HPC) framework. NACHOS was demonstrated on a chest X-ray repository and an Optical Coherence Tomography (OCT) dataset under multiple data partitioning schemes. Beyond performance estimation, DACHOS (Deployment with Automated Cross-validation and Hyperparameter Optimization using Supercomputing) is introduced to leverage AHPO and cross-validation to build the final model on the full dataset, improving expected deployment performance. The findings underscore the importance of NCV in quantifying and reducing estimation variance, AHPO in optimizing hyperparameters consistently across test folds, and HPC in ensuring computational feasibility. By integrating these methodologies, NACHOS and DACHOS provide a scalable, reproducible, and trustworthy framework for DL model evaluation and deployment in medical imaging.
no_new_dataset
0.949106
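The NCV-plus-AHPO core of NACHOS is easy to illustrate with off-the-shelf tooling: an inner cross-validated hyperparameter search wrapped in an outer cross-validation loop, so each outer fold yields an independent test estimate whose spread quantifies variance. A minimal sketch on a toy tabular dataset (sklearn and an SVC stand in for the paper's deep models and HPC setup):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

inner = KFold(n_splits=3, shuffle=True, random_state=0)   # hyperparameter search
outer = KFold(n_splits=5, shuffle=True, random_state=0)   # unbiased test estimates

search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=inner)
scores = cross_val_score(search, X, y, cv=outer)   # one AHPO run per outer fold
print(scores.mean(), scores.std())                 # spread = estimation variance
```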
2503.08596
Feiran Wang
Feiran Wang, Jiachen Tao, Junyi Wu, Haoxuan Wang, Bin Duan, Kai Wang, Zongxin Yang, Yan Yan
X-Field: A Physically Grounded Representation for 3D X-ray Reconstruction
Project Page: https://brack-wang.github.io/XField/, Github Code: https://github.com/Brack-Wang/X-Field
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
X-ray imaging is indispensable in medical diagnostics, yet its use is tightly regulated due to potential health risks. To mitigate radiation exposure, recent research focuses on generating novel views from sparse inputs and reconstructing Computed Tomography (CT) volumes, borrowing representations from the 3D reconstruction area. However, these representations originally target visible light imaging that emphasizes reflection and scattering effects, while neglecting penetration and attenuation properties of X-ray imaging. In this paper, we introduce X-Field, the first 3D representation specifically designed for X-ray imaging, rooted in the energy absorption rates across different materials. To accurately model diverse materials within internal structures, we employ 3D ellipsoids with distinct attenuation coefficients. To estimate each material's energy absorption of X-rays, we devise an efficient path partitioning algorithm accounting for complex ellipsoid intersections. We further propose hybrid progressive initialization to refine the geometric accuracy of X-Field and incorporate material-based optimization to enhance model fitting along material boundaries. Experiments show that X-Field achieves superior visual fidelity on both real-world human organ and synthetic object datasets, outperforming state-of-the-art methods in X-ray Novel View Synthesis and CT Reconstruction.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 16:31:56 GMT" } ]
2025-03-12T00:00:00
[ [ "Wang", "Feiran", "" ], [ "Tao", "Jiachen", "" ], [ "Wu", "Junyi", "" ], [ "Wang", "Haoxuan", "" ], [ "Duan", "Bin", "" ], [ "Wang", "Kai", "" ], [ "Yang", "Zongxin", "" ], [ "Yan", "Yan", "" ] ]
TITLE: X-Field: A Physically Grounded Representation for 3D X-ray Reconstruction ABSTRACT: X-ray imaging is indispensable in medical diagnostics, yet its use is tightly regulated due to potential health risks. To mitigate radiation exposure, recent research focuses on generating novel views from sparse inputs and reconstructing Computed Tomography (CT) volumes, borrowing representations from the 3D reconstruction area. However, these representations originally target visible light imaging that emphasizes reflection and scattering effects, while neglecting penetration and attenuation properties of X-ray imaging. In this paper, we introduce X-Field, the first 3D representation specifically designed for X-ray imaging, rooted in the energy absorption rates across different materials. To accurately model diverse materials within internal structures, we employ 3D ellipsoids with distinct attenuation coefficients. To estimate each material's energy absorption of X-rays, we devise an efficient path partitioning algorithm accounting for complex ellipsoid intersections. We further propose hybrid progressive initialization to refine the geometric accuracy of X-Field and incorporate material-based optimization to enhance model fitting along material boundaries. Experiments show that X-Field achieves superior visual fidelity on both real-world human organ and synthetic object datasets, outperforming state-of-the-art methods in X-ray Novel View Synthesis and CT Reconstruction.
no_new_dataset
0.949201
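X-Field's energy-absorption view of X-ray imaging rests on Beer-Lambert attenuation: a ray's intensity decays as exp(-sum_i mu_i * l_i) over the material segments it crosses, which is exactly what the path partitioning through ellipsoids supplies. A minimal sketch follows; the coefficients and lengths are made up for the example.

```python
import numpy as np

def attenuated_intensity(i0, mu, lengths):
    """Beer-Lambert attenuation along one ray.

    mu: (M,) attenuation coefficients of the materials the ray crosses.
    lengths: (M,) path lengths through each material segment
             (e.g., from ray-ellipsoid intersection tests).
    """
    return i0 * np.exp(-np.sum(np.asarray(mu) * np.asarray(lengths)))

# A ray crossing soft tissue then bone (illustrative coefficients, 1/cm).
print(attenuated_intensity(1.0, mu=[0.2, 0.5], lengths=[8.0, 2.0]))
```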
2503.08601
Dušan Malić
Dušan Malić, Christian Fruhwirth-Reisinger, Samuel Schulter, Horst Possegger
LiSu: A Dataset and Method for LiDAR Surface Normal Estimation
Accepted at CVPR 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
While surface normals are widely used to analyse 3D scene geometry, surface normal estimation from LiDAR point clouds remains severely underexplored. This is caused by the lack of large-scale annotated datasets on the one hand, and the lack of methods that can robustly handle the sparse and often noisy LiDAR data in a reasonable time on the other hand. We address these limitations using a traffic simulation engine and present LiSu, the first large-scale, synthetic LiDAR point cloud dataset with ground truth surface normal annotations, eliminating the need for tedious manual labeling. Additionally, we propose a novel method that exploits the spatiotemporal characteristics of autonomous driving data to enhance surface normal estimation accuracy. By incorporating two regularization terms, we enforce spatial consistency among neighboring points and temporal smoothness across consecutive LiDAR frames. These regularizers are particularly effective in self-training settings, where they mitigate the impact of noisy pseudo-labels, enabling robust real-world deployment. We demonstrate the effectiveness of our method on LiSu, achieving state-of-the-art performance in LiDAR surface normal estimation. Moreover, we showcase its full potential in addressing the challenging task of synthetic-to-real domain adaptation, leading to improved neural surface reconstruction on real-world data.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 16:35:22 GMT" } ]
2025-03-12T00:00:00
[ [ "Malić", "Dušan", "" ], [ "Fruhwirth-Reisinger", "Christian", "" ], [ "Schulter", "Samuel", "" ], [ "Possegger", "Horst", "" ] ]
TITLE: LiSu: A Dataset and Method for LiDAR Surface Normal Estimation ABSTRACT: While surface normals are widely used to analyse 3D scene geometry, surface normal estimation from LiDAR point clouds remains severely underexplored. This is caused by the lack of large-scale annotated datasets on the one hand, and the lack of methods that can robustly handle the sparse and often noisy LiDAR data in a reasonable time on the other hand. We address these limitations using a traffic simulation engine and present LiSu, the first large-scale, synthetic LiDAR point cloud dataset with ground truth surface normal annotations, eliminating the need for tedious manual labeling. Additionally, we propose a novel method that exploits the spatiotemporal characteristics of autonomous driving data to enhance surface normal estimation accuracy. By incorporating two regularization terms, we enforce spatial consistency among neighboring points and temporal smoothness across consecutive LiDAR frames. These regularizers are particularly effective in self-training settings, where they mitigate the impact of noisy pseudo-labels, enabling robust real-world deployment. We demonstrate the effectiveness of our method on LiSu, achieving state-of-the-art performance in LiDAR surface normal estimation. Moreover, we showcase its full potential in addressing the challenging task of synthetic-to-real domain adaptation, leading to improved neural surface reconstruction on real-world data.
no_new_dataset
0.906031
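LiSu's spatial-consistency regularizer enforces agreement among neighboring points' normals. A minimal sketch of one plausible form, a mean (1 - cosine similarity) penalty over k-nearest-neighbor normals; the exact loss in the paper may differ, and the choice of k and the kNN construction are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def spatial_consistency_loss(points, normals, k=8):
    """Penalize disagreement between each point's normal and its neighbors'.

    points: (N, 3) coordinates; normals: (N, 3) predicted unit normals.
    Returns mean (1 - cosine similarity) over k nearest neighbors.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)      # first neighbor is the point itself
    neigh = normals[idx[:, 1:]]               # (N, k, 3)
    cos = np.einsum("nd,nkd->nk", normals, neigh)
    return float(np.mean(1.0 - cos))

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
nrm = rng.normal(size=(1000, 3))
nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)
print(spatial_consistency_loss(pts, nrm))
```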
2503.08603
R\"uveyda Yilmaz
R\"uveyda Yilmaz, Zhu Chen, Yuli Wu and Johannes Stegmaier
CellStyle: Improved Zero-Shot Cell Segmentation via Style Transfer
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cell microscopy data are abundant; however, corresponding segmentation annotations remain scarce. Moreover, variations in cell types, imaging devices, and staining techniques introduce significant domain gaps between datasets. As a result, even large, pretrained segmentation models trained on diverse datasets (source datasets) struggle to generalize to unseen datasets (target datasets). To overcome this generalization problem, we propose CellStyle, which improves the segmentation quality of such models without requiring labels for the target dataset, thereby enabling zero-shot adaptation. CellStyle transfers the attributes of an unannotated target dataset, such as texture, color, and noise, to the annotated source dataset. This transfer is performed while preserving the cell shapes of the source images, ensuring that the existing source annotations can still be used while maintaining the visual characteristics of the target dataset. The styled synthetic images with the existing annotations enable the finetuning of a generalist segmentation model for application to the unannotated target data. We demonstrate that CellStyle significantly improves zero-shot cell segmentation performance across diverse datasets by finetuning multiple segmentation models on the style-transferred data. The code will be made publicly available.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 16:39:09 GMT" } ]
2025-03-12T00:00:00
[ [ "Yilmaz", "Rüveyda", "" ], [ "Chen", "Zhu", "" ], [ "Wu", "Yuli", "" ], [ "Stegmaier", "Johannes", "" ] ]
TITLE: CellStyle: Improved Zero-Shot Cell Segmentation via Style Transfer ABSTRACT: Cell microscopy data are abundant; however, corresponding segmentation annotations remain scarce. Moreover, variations in cell types, imaging devices, and staining techniques introduce significant domain gaps between datasets. As a result, even large, pretrained segmentation models trained on diverse datasets (source datasets) struggle to generalize to unseen datasets (target datasets). To overcome this generalization problem, we propose CellStyle, which improves the segmentation quality of such models without requiring labels for the target dataset, thereby enabling zero-shot adaptation. CellStyle transfers the attributes of an unannotated target dataset, such as texture, color, and noise, to the annotated source dataset. This transfer is performed while preserving the cell shapes of the source images, ensuring that the existing source annotations can still be used while maintaining the visual characteristics of the target dataset. The styled synthetic images with the existing annotations enable the finetuning of a generalist segmentation model for application to the unannotated target data. We demonstrate that CellStyle significantly improves zero-shot cell segmentation performance across diverse datasets by finetuning multiple segmentation models on the style-transferred data. The code will be made publicly available.
no_new_dataset
0.9549
2503.08604
Dongping Li
Dongping Li, Tielong Cai, Tianci Tang, Wenhao Chai, Katherine Rose Driggs-Campbell, Gaoang Wang
EMMOE: A Comprehensive Benchmark for Embodied Mobile Manipulation in Open Environments
null
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Developing autonomous home robots controlled by natural language has long been a human pursuit. While advancements in large language models (LLMs) and embodied intelligence bring this goal closer, several challenges persist: the lack of a unified benchmark for more complex robot tasks, limited evaluation methods and metrics, and data incompatibility between LLMs and mobile manipulation trajectories. To address these issues, we introduce Embodied Mobile Manipulation in Open Environments (EMMOE), which requires agents to interpret user instructions and execute long-horizon everyday tasks in continuous space. EMMOE seamlessly integrates high-level and low-level embodied tasks into a unified framework, along with three new metrics for more diverse assessment. Additionally, we collect EMMOE-100, which features diverse task attributes, detailed process annotations, re-plans after failures, and two sub-datasets for LLM training. Furthermore, we design HomieBot, a sophisticated agent system consisting of an LLM with Direct Preference Optimization (DPO), lightweight navigation and manipulation models, and multiple error detection mechanisms. Finally, we demonstrate HomieBot's performance and the evaluation of different models and policies.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 16:42:36 GMT" } ]
2025-03-12T00:00:00
[ [ "Li", "Dongping", "" ], [ "Cai", "Tielong", "" ], [ "Tang", "Tianci", "" ], [ "Chai", "Wenhao", "" ], [ "Driggs-Campbell", "Katherine Rose", "" ], [ "Wang", "Gaoang", "" ] ]
TITLE: EMMOE: A Comprehensive Benchmark for Embodied Mobile Manipulation in Open Environments ABSTRACT: Developing autonomous home robots controlled by natural language has long been a human pursuit. While advancements in large language models (LLMs) and embodied intelligence bring this goal closer, several challenges persist: the lack of a unified benchmark for more complex robot tasks, limited evaluation methods and metrics, and data incompatibility between LLMs and mobile manipulation trajectories. To address these issues, we introduce Embodied Mobile Manipulation in Open Environments (EMMOE), which requires agents to interpret user instructions and execute long-horizon everyday tasks in continuous space. EMMOE seamlessly integrates high-level and low-level embodied tasks into a unified framework, along with three new metrics for more diverse assessment. Additionally, we collect EMMOE-100, which features diverse task attributes, detailed process annotations, re-plans after failures, and two sub-datasets for LLM training. Furthermore, we design HomieBot, a sophisticated agent system consisting of an LLM with Direct Preference Optimization (DPO), lightweight navigation and manipulation models, and multiple error detection mechanisms. Finally, we demonstrate HomieBot's performance and the evaluation of different models and policies.
new_dataset
0.95275
2503.08612
Erkang Cheng
Yingqi Tang, Zhuoran Xu, Zhaotie Meng, Erkang Cheng
HiP-AD: Hierarchical and Multi-Granularity Planning with Deformable Attention for Autonomous Driving in a Single Decoder
null
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although end-to-end autonomous driving (E2E-AD) technologies have made significant progress in recent years, their performance on closed-loop evaluation remains unsatisfactory. The potential of leveraging planning in query design and interaction has not yet been fully explored. In this paper, we introduce a multi-granularity planning query representation that integrates heterogeneous waypoints, including spatial, temporal, and driving-style waypoints across various sampling patterns. It provides additional supervision for trajectory prediction, enhancing precise closed-loop control for the ego vehicle. Additionally, we explicitly utilize the geometric properties of planning trajectories to effectively retrieve relevant image features based on physical locations using deformable attention. By combining these strategies, we propose a novel end-to-end autonomous driving framework, termed HiP-AD, which simultaneously performs perception, prediction, and planning within a unified decoder. HiP-AD enables comprehensive interaction by allowing planning queries to iteratively interact with perception queries in the BEV space while dynamically extracting image features from perspective views. Experiments demonstrate that HiP-AD outperforms all existing end-to-end autonomous driving methods on the closed-loop benchmark Bench2Drive and achieves competitive performance on the real-world dataset nuScenes.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 16:52:45 GMT" } ]
2025-03-12T00:00:00
[ [ "Tang", "Yingqi", "" ], [ "Xu", "Zhuoran", "" ], [ "Meng", "Zhaotie", "" ], [ "Cheng", "Erkang", "" ] ]
TITLE: HiP-AD: Hierarchical and Multi-Granularity Planning with Deformable Attention for Autonomous Driving in a Single Decoder ABSTRACT: Although end-to-end autonomous driving (E2E-AD) technologies have made significant progress in recent years, their performance on closed-loop evaluation remains unsatisfactory. The potential of leveraging planning in query design and interaction has not yet been fully explored. In this paper, we introduce a multi-granularity planning query representation that integrates heterogeneous waypoints, including spatial, temporal, and driving-style waypoints across various sampling patterns. It provides additional supervision for trajectory prediction, enhancing precise closed-loop control for the ego vehicle. Additionally, we explicitly utilize the geometric properties of planning trajectories to effectively retrieve relevant image features based on physical locations using deformable attention. By combining these strategies, we propose a novel end-to-end autonomous driving framework, termed HiP-AD, which simultaneously performs perception, prediction, and planning within a unified decoder. HiP-AD enables comprehensive interaction by allowing planning queries to iteratively interact with perception queries in the BEV space while dynamically extracting image features from perspective views. Experiments demonstrate that HiP-AD outperforms all existing end-to-end autonomous driving methods on the closed-loop benchmark Bench2Drive and achieves competitive performance on the real-world dataset nuScenes.
no_new_dataset
0.943815
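The HiP-AD abstract above describes retrieving image features at the physical locations of planning waypoints. A minimal single-camera sketch of that idea, assuming a pinhole projection and plain bilinear sampling; the real model uses learned deformable-attention offsets across multiple views, which are omitted here:

import torch
import torch.nn.functional as F

def sample_waypoint_features(feat, waypoints_xyz, K):
    """Bilinearly sample per-waypoint image features at the pixel
    locations of projected planning waypoints (deformable-attention
    style retrieval by physical location).
    feat: (1, C, H, W) image feature map; K: (3, 3) camera intrinsics."""
    _, C, H, W = feat.shape
    uvw = waypoints_xyz @ K.T                       # (N, 3) pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)   # perspective divide
    # normalize pixel coordinates to [-1, 1] for grid_sample
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1)
    grid = grid.view(1, 1, -1, 2)                   # (1, 1, N, 2)
    out = F.grid_sample(feat, grid, align_corners=True)
    return out.view(C, -1).T                        # (N, C)

feat = torch.randn(1, 64, 32, 96)
K = torch.tensor([[500., 0., 48.], [0., 500., 16.], [0., 0., 1.]])
wps = torch.tensor([[1.0, 0.2, 10.0], [-2.0, 0.1, 20.0]])
print(sample_waypoint_features(feat, wps, K).shape)  # torch.Size([2, 64])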
2503.08619
Xianfeng Wu
Xianfeng Wu, Yajing Bai, Haoze Zheng, Harold Haodong Chen, Yexin Liu, Zihao Wang, Xuran Ma, Wen-Jie Shu, Xianzu Wu, Harry Yang, Ser-Nam Lim
LightGen: Efficient Image Generation through Knowledge Distillation and Direct Preference Optimization
Code: https://github.com/XianfengWu01/LightGen
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in text-to-image generation have primarily relied on extensive datasets and parameter-heavy architectures. These requirements severely limit accessibility for researchers and practitioners who lack substantial computational resources. In this paper, we introduce LightGen, an efficient training paradigm for image generation models that uses knowledge distillation (KD) and Direct Preference Optimization (DPO). Drawing inspiration from the success of data KD techniques widely adopted in Multi-Modal Large Language Models (MLLMs), LightGen distills knowledge from state-of-the-art (SOTA) text-to-image models into a compact Masked Autoregressive (MAR) architecture with only $0.7B$ parameters. Using a compact synthetic dataset of just $2M$ high-quality images generated from varied captions, we demonstrate that data diversity significantly outweighs data volume in determining model performance. This strategy dramatically reduces computational demands and shortens pre-training time from potentially thousands of GPU-days to merely 88 GPU-days. Furthermore, to address the inherent shortcomings of synthetic data, particularly poor high-frequency details and spatial inaccuracies, we integrate the DPO technique that refines image fidelity and positional accuracy. Comprehensive experiments confirm that LightGen achieves image generation quality comparable to SOTA models while significantly reducing computational resources and expanding accessibility for resource-constrained environments. Code is available at https://github.com/XianfengWu01/LightGen
[ { "version": "v1", "created": "Tue, 11 Mar 2025 16:58:02 GMT" } ]
2025-03-12T00:00:00
[ [ "Wu", "Xianfeng", "" ], [ "Bai", "Yajing", "" ], [ "Zheng", "Haoze", "" ], [ "Chen", "Harold Haodong", "" ], [ "Liu", "Yexin", "" ], [ "Wang", "Zihao", "" ], [ "Ma", "Xuran", "" ], [ "Shu", "Wen-Jie", "" ], [ "Wu", "Xianzu", "" ], [ "Yang", "Harry", "" ], [ "Lim", "Ser-Nam", "" ] ]
TITLE: LightGen: Efficient Image Generation through Knowledge Distillation and Direct Preference Optimization ABSTRACT: Recent advances in text-to-image generation have primarily relied on extensive datasets and parameter-heavy architectures. These requirements severely limit accessibility for researchers and practitioners who lack substantial computational resources. In this paper, we introduce LightGen, an efficient training paradigm for image generation models that uses knowledge distillation (KD) and Direct Preference Optimization (DPO). Drawing inspiration from the success of data KD techniques widely adopted in Multi-Modal Large Language Models (MLLMs), LightGen distills knowledge from state-of-the-art (SOTA) text-to-image models into a compact Masked Autoregressive (MAR) architecture with only $0.7B$ parameters. Using a compact synthetic dataset of just $2M$ high-quality images generated from varied captions, we demonstrate that data diversity significantly outweighs data volume in determining model performance. This strategy dramatically reduces computational demands and shortens pre-training time from potentially thousands of GPU-days to merely 88 GPU-days. Furthermore, to address the inherent shortcomings of synthetic data, particularly poor high-frequency details and spatial inaccuracies, we integrate the DPO technique that refines image fidelity and positional accuracy. Comprehensive experiments confirm that LightGen achieves image generation quality comparable to SOTA models while significantly reducing computational resources and expanding accessibility for resource-constrained environments. Code is available at https://github.com/XianfengWu01/LightGen
no_new_dataset
0.940517
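The LightGen record above rests on data-level knowledge distillation: a compact student is supervised on synthetic images produced by a stronger teacher. A toy sketch of one such training step; `teacher_generate`, the stand-in networks, and the pixel MSE loss are all illustrative assumptions, since the actual pipeline trains a Masked Autoregressive model on teacher-generated caption-image pairs:

import torch
import torch.nn.functional as F

def distillation_step(student, optimizer, captions, teacher_generate):
    """One data-KD step: supervise the compact student on synthetic
    images produced by the (frozen) teacher for diverse captions."""
    with torch.no_grad():
        target_imgs = teacher_generate(captions)   # (B, 3, H, W)
    pred_imgs = student(captions)                  # same shape
    loss = F.mse_loss(pred_imgs, target_imgs)      # simple pixel loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# toy stand-ins for the student and the teacher
student = torch.nn.Sequential(torch.nn.Linear(16, 3 * 8 * 8),
                              torch.nn.Unflatten(1, (3, 8, 8)))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
toy_captions = torch.randn(4, 16)                  # stand-in caption embeddings
toy_teacher = lambda c: torch.tanh(c @ torch.randn(16, 3 * 8 * 8)).view(-1, 3, 8, 8)
print(distillation_step(student, opt, toy_captions, toy_teacher))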
2503.08622
Apan Dastider
Apan Dastider, Hao Fang and Mingjie Lin
Cross-Embodiment Robotic Manipulation Synthesis via Guided Demonstrations through CycleVAE and Human Behavior Transformer
Under Review in IROS 2025
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Cross-embodiment robotic manipulation synthesis for complicated tasks is challenging, partially due to the scarcity of paired cross-embodiment datasets and the impediment of designing intricate controllers. Inspired by robotic learning via guided human expert demonstration, we propose a novel cross-embodiment robotic manipulation algorithm via CycleVAE and a human behavior transformer. First, we utilize unsupervised CycleVAE together with a bidirectional subspace alignment algorithm to align latent motion sequences between cross-embodiments. Second, we propose a causal human behavior transformer design to learn the intrinsic motion dynamics of human expert demonstrations. At test time, we leverage the proposed transformer to generate human expert demonstrations, which are then aligned using CycleVAE for the final human-robot manipulation synthesis. We validated our proposed algorithm through extensive experiments using a dexterous robotic manipulator with a robotic hand. Our method successfully generates smooth trajectories across intricate tasks, outperforming prior learning-based robotic motion planning algorithms. These results have implications for performing unsupervised cross-embodiment alignment and future autonomous robotics design. Complete video demonstrations of our experiments can be found at https://sites.google.com/view/humanrobots/home.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 17:02:08 GMT" } ]
2025-03-12T00:00:00
[ [ "Dastider", "Apan", "" ], [ "Fang", "Hao", "" ], [ "Lin", "Mingjie", "" ] ]
TITLE: Cross-Embodiment Robotic Manipulation Synthesis via Guided Demonstrations through CycleVAE and Human Behavior Transformer ABSTRACT: Cross-embodiment robotic manipulation synthesis for complicated tasks is challenging, partially due to the scarcity of paired cross-embodiment datasets and the impediment of designing intricate controllers. Inspired by robotic learning via guided human expert demonstration, we propose a novel cross-embodiment robotic manipulation algorithm via CycleVAE and a human behavior transformer. First, we utilize unsupervised CycleVAE together with a bidirectional subspace alignment algorithm to align latent motion sequences between cross-embodiments. Second, we propose a causal human behavior transformer design to learn the intrinsic motion dynamics of human expert demonstrations. At test time, we leverage the proposed transformer to generate human expert demonstrations, which are then aligned using CycleVAE for the final human-robot manipulation synthesis. We validated our proposed algorithm through extensive experiments using a dexterous robotic manipulator with a robotic hand. Our method successfully generates smooth trajectories across intricate tasks, outperforming prior learning-based robotic motion planning algorithms. These results have implications for performing unsupervised cross-embodiment alignment and future autonomous robotics design. Complete video demonstrations of our experiments can be found at https://sites.google.com/view/humanrobots/home.
no_new_dataset
0.950273
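A compact sketch of the cycle-style latent alignment described in the record above, using deterministic autoencoders as stand-ins for the CycleVAE components; the bidirectional subspace alignment and the behavior transformer are omitted, and all dimensions are illustrative:

import torch
import torch.nn as nn

class MotionAE(nn.Module):
    """Toy encoder/decoder for one embodiment's motion sequences."""
    def __init__(self, dim, latent=8):
        super().__init__()
        self.enc = nn.Linear(dim, latent)
        self.dec = nn.Linear(latent, dim)
    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def cycle_alignment_loss(human_ae, robot_ae, x_h):
    """Cycle objective in the spirit of CycleVAE: human motion ->
    shared latent -> robot motion -> latent -> back to human motion."""
    z_h, _ = human_ae(x_h)
    x_r = robot_ae.dec(z_h)                 # translate to robot space
    z_r, _ = robot_ae(x_r)
    x_h_rec = human_ae.dec(z_r)             # cycle back to human space
    return ((x_h - x_h_rec) ** 2).mean() + ((z_h - z_r) ** 2).mean()

x_h = torch.randn(32, 24)                   # toy human motion features
print(cycle_alignment_loss(MotionAE(24), MotionAE(30), x_h))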
2503.08639
Du\v{s}an Mali\'c
Du\v{s}an Mali\'c, Christian Fruhwirth-Reisinger, Samuel Schulter, Horst Possegger
GBlobs: Explicit Local Structure via Gaussian Blobs for Improved Cross-Domain LiDAR-based 3D Object Detection
Accepted at CVPR 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
LiDAR-based 3D detectors need large datasets for training, yet they struggle to generalize to novel domains. Domain Generalization (DG) aims to mitigate this by training detectors that are invariant to such domain shifts. Current DG approaches exclusively rely on global geometric features (point cloud Cartesian coordinates) as input features. Over-reliance on these global geometric features can, however, cause 3D detectors to prioritize object location and absolute position, resulting in poor cross-domain performance. To mitigate this, we propose to exploit explicit local point cloud structure for DG, in particular by encoding point cloud neighborhoods with Gaussian blobs, GBlobs. Our proposed formulation is highly efficient and requires no additional parameters. Without any bells and whistles, simply by integrating GBlobs in existing detectors, we beat the current state-of-the-art in challenging single-source DG benchmarks by over 21 mAP (Waymo->KITTI), 13 mAP (KITTI->Waymo), and 12 mAP (nuScenes->KITTI), without sacrificing in-domain performance. Additionally, GBlobs demonstrate exceptional performance in multi-source DG, surpassing the current state-of-the-art by 17, 12, and 5 mAP on Waymo, KITTI, and ONCE, respectively.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 17:29:56 GMT" } ]
2025-03-12T00:00:00
[ [ "Malić", "Dušan", "" ], [ "Fruhwirth-Reisinger", "Christian", "" ], [ "Schulter", "Samuel", "" ], [ "Possegger", "Horst", "" ] ]
TITLE: GBlobs: Explicit Local Structure via Gaussian Blobs for Improved Cross-Domain LiDAR-based 3D Object Detection ABSTRACT: LiDAR-based 3D detectors need large datasets for training, yet they struggle to generalize to novel domains. Domain Generalization (DG) aims to mitigate this by training detectors that are invariant to such domain shifts. Current DG approaches exclusively rely on global geometric features (point cloud Cartesian coordinates) as input features. Over-reliance on these global geometric features can, however, cause 3D detectors to prioritize object location and absolute position, resulting in poor cross-domain performance. To mitigate this, we propose to exploit explicit local point cloud structure for DG, in particular by encoding point cloud neighborhoods with Gaussian blobs, GBlobs. Our proposed formulation is highly efficient and requires no additional parameters. Without any bells and whistles, simply by integrating GBlobs in existing detectors, we beat the current state-of-the-art in challenging single-source DG benchmarks by over 21 mAP (Waymo->KITTI), 13 mAP (KITTI->Waymo), and 12 mAP (nuScenes->KITTI), without sacrificing in-domain performance. Additionally, GBlobs demonstrate exceptional performance in multi-source DG, surpassing the current state-of-the-art by 17, 12, and 5 mAP on Waymo, KITTI, and ONCE, respectively.
no_new_dataset
0.947769
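One plausible reading of the GBlobs encoding above: summarize each point's k-nearest-neighbor neighborhood as a Gaussian blob (centered mean plus covariance) and feed these explicit local-structure features to the detector instead of raw global coordinates. A NumPy sketch under that assumption; the paper's exact feature layout may differ:

import numpy as np

def gblob_features(points, k=16):
    """Encode each point's k-NN neighborhood as a Gaussian blob
    (mean offset + flattened covariance), giving explicit local
    structure instead of raw global Cartesian coordinates."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]     # k nearest neighbors
    feats = []
    for i in range(n):
        nb = points[idx[i]] - points[i]          # center the blob
        mu = nb.mean(0)
        cov = np.cov(nb, rowvar=False)           # (3, 3) local shape
        feats.append(np.concatenate([mu, cov.reshape(-1)]))
    return np.stack(feats)                       # (n, 12)

pts = np.random.rand(100, 3)
print(gblob_features(pts).shape)                 # (100, 12)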
2503.08642
Zecheng Zhang
Zecheng Zhang, Hao Liu, Wenjing Liao, Guang Lin
Coefficient-to-Basis Network: A Fine-Tunable Operator Learning Framework for Inverse Problems with Adaptive Discretizations and Theoretical Guarantees
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
We propose a Coefficient-to-Basis Network (C2BNet), a novel framework for solving inverse problems within the operator learning paradigm. C2BNet efficiently adapts to different discretizations through fine-tuning, using a pre-trained model to significantly reduce computational cost while maintaining high accuracy. Unlike traditional approaches that require retraining from scratch for new discretizations, our method enables seamless adaptation without sacrificing predictive performance. Furthermore, we establish theoretical approximation and generalization error bounds for C2BNet by exploiting low-dimensional structures in the underlying datasets. Our analysis demonstrates that C2BNet adapts to low-dimensional structures without relying on explicit encoding mechanisms, highlighting its robustness and efficiency. To validate our theoretical findings, we conducted extensive numerical experiments that showcase the superior performance of C2BNet on several inverse problems. The results confirm that C2BNet effectively balances computational efficiency and accuracy, making it a promising tool to solve inverse problems in scientific computing and engineering applications.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 17:34:38 GMT" } ]
2025-03-12T00:00:00
[ [ "Zhang", "Zecheng", "" ], [ "Liu", "Hao", "" ], [ "Liao", "Wenjing", "" ], [ "Lin", "Guang", "" ] ]
TITLE: Coefficient-to-Basis Network: A Fine-Tunable Operator Learning Framework for Inverse Problems with Adaptive Discretizations and Theoretical Guarantees ABSTRACT: We propose a Coefficient-to-Basis Network (C2BNet), a novel framework for solving inverse problems within the operator learning paradigm. C2BNet efficiently adapts to different discretizations through fine-tuning, using a pre-trained model to significantly reduce computational cost while maintaining high accuracy. Unlike traditional approaches that require retraining from scratch for new discretizations, our method enables seamless adaptation without sacrificing predictive performance. Furthermore, we establish theoretical approximation and generalization error bounds for C2BNet by exploiting low-dimensional structures in the underlying datasets. Our analysis demonstrates that C2BNet adapts to low-dimensional structures without relying on explicit encoding mechanisms, highlighting its robustness and efficiency. To validate our theoretical findings, we conducted extensive numerical experiments that showcase the superior performance of C2BNet on several inverse problems. The results confirm that C2BNet effectively balances computational efficiency and accuracy, making it a promising tool to solve inverse problems in scientific computing and engineering applications.
no_new_dataset
0.95018
2503.08650
Zhenchen Wan
Zhenchen Wan, Yanwu xu, Dongting Hu, Weilun Cheng, Tianxi Chen, Zhaoqing Wang, Feng Liu, Tongliang Liu, Mingming Gong
MF-VITON: High-Fidelity Mask-Free Virtual Try-On with Minimal Input
The project page is available at: https://zhenchenwan.github.io/MF-VITON/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recent advancements in Virtual Try-On (VITON) have significantly improved image realism and garment detail preservation, driven by powerful text-to-image (T2I) diffusion models. However, existing methods often rely on user-provided masks, introducing complexity and performance degradation due to imperfect inputs, as shown in Fig.1(a). To address this, we propose a Mask-Free VITON (MF-VITON) framework that achieves realistic VITON using only a single person image and a target garment, eliminating the requirement for auxiliary masks. Our approach introduces a novel two-stage pipeline: (1) We leverage existing Mask-based VITON models to synthesize a high-quality dataset. This dataset contains diverse, realistic pairs of person images and corresponding garments, augmented with varied backgrounds to mimic real-world scenarios. (2) The pre-trained Mask-based model is fine-tuned on the generated dataset, enabling garment transfer without mask dependencies. This stage simplifies the input requirements while preserving garment texture and shape fidelity. Our framework achieves state-of-the-art (SOTA) performance regarding garment transfer accuracy and visual realism. Notably, the proposed Mask-Free model significantly outperforms existing Mask-based approaches, setting a new benchmark and demonstrating a substantial lead over previous approaches. For more details, visit our project page: https://zhenchenwan.github.io/MF-VITON/.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 17:40:59 GMT" } ]
2025-03-12T00:00:00
[ [ "Wan", "Zhenchen", "" ], [ "xu", "Yanwu", "" ], [ "Hu", "Dongting", "" ], [ "Cheng", "Weilun", "" ], [ "Chen", "Tianxi", "" ], [ "Wang", "Zhaoqing", "" ], [ "Liu", "Feng", "" ], [ "Liu", "Tongliang", "" ], [ "Gong", "Mingming", "" ] ]
TITLE: MF-VITON: High-Fidelity Mask-Free Virtual Try-On with Minimal Input ABSTRACT: Recent advancements in Virtual Try-On (VITON) have significantly improved image realism and garment detail preservation, driven by powerful text-to-image (T2I) diffusion models. However, existing methods often rely on user-provided masks, introducing complexity and performance degradation due to imperfect inputs, as shown in Fig.1(a). To address this, we propose a Mask-Free VITON (MF-VITON) framework that achieves realistic VITON using only a single person image and a target garment, eliminating the requirement for auxiliary masks. Our approach introduces a novel two-stage pipeline: (1) We leverage existing Mask-based VITON models to synthesize a high-quality dataset. This dataset contains diverse, realistic pairs of person images and corresponding garments, augmented with varied backgrounds to mimic real-world scenarios. (2) The pre-trained Mask-based model is fine-tuned on the generated dataset, enabling garment transfer without mask dependencies. This stage simplifies the input requirements while preserving garment texture and shape fidelity. Our framework achieves state-of-the-art (SOTA) performance regarding garment transfer accuracy and visual realism. Notably, the proposed Mask-Free model significantly outperforms existing Mask-based approaches, setting a new benchmark and demonstrating a substantial lead over previous approaches. For more details, visit our project page: https://zhenchenwan.github.io/MF-VITON/.
new_dataset
0.967472
2503.08652
Wei Chen
Wei Chen, Qiang Qiu
Extra Clients at No Extra Cost: Overcome Data Heterogeneity in Federated Learning with Filter Decomposition
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Data heterogeneity is one of the major challenges in federated learning (FL), which results in substantial client variance and slow convergence. In this study, we propose a novel solution: decomposing a convolutional filter in FL into a linear combination of filter subspace elements, i.e., filter atoms. This simple technique transforms global filter aggregation in FL into aggregating filter atoms and their atom coefficients. The key advantage is that expanding the product of two weighted sums of filter atoms and atom coefficients mathematically generates numerous cross-terms. These cross-terms effectively emulate many additional latent clients, significantly reducing model variance, which is validated by our theoretical analysis and empirical observation. Furthermore, our method permits different training schemes for filter atoms and atom coefficients for highly adaptive model personalization and communication efficiency. Empirical results on benchmark datasets demonstrate that our filter decomposition technique substantially improves the accuracy of FL methods, confirming its efficacy in addressing data heterogeneity.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 17:42:36 GMT" } ]
2025-03-12T00:00:00
[ [ "Chen", "Wei", "" ], [ "Qiu", "Qiang", "" ] ]
TITLE: Extra Clients at No Extra Cost: Overcome Data Heterogeneity in Federated Learning with Filter Decomposition ABSTRACT: Data heterogeneity is one of the major challenges in federated learning (FL), which results in substantial client variance and slow convergence. In this study, we propose a novel solution: decomposing a convolutional filter in FL into a linear combination of filter subspace elements, i.e., filter atoms. This simple technique transforms global filter aggregation in FL into aggregating filter atoms and their atom coefficients. The key advantage is that expanding the product of two weighted sums of filter atoms and atom coefficients mathematically generates numerous cross-terms. These cross-terms effectively emulate many additional latent clients, significantly reducing model variance, which is validated by our theoretical analysis and empirical observation. Furthermore, our method permits different training schemes for filter atoms and atom coefficients for highly adaptive model personalization and communication efficiency. Empirical results on benchmark datasets demonstrate that our filter decomposition technique substantially improves the accuracy of FL methods, confirming its efficacy in addressing data heterogeneity.
no_new_dataset
0.953101
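The cross-term argument in the record above can be checked numerically: averaging filter atoms and atom coefficients separately is identical to averaging over all K^2 coefficient-atom pairings, i.e. K^2 emulated latent clients. A small NumPy verification with illustrative sizes:

import numpy as np

K, M, S = 4, 6, 9                    # clients, filter atoms, atom size (3x3 flat)
atoms = np.random.randn(K, M, S)     # per-client filter atoms
coefs = np.random.randn(K, M)        # per-client atom coefficients

# FL aggregation now averages atoms and coefficients separately
avg_atoms = atoms.mean(0)            # (M, S)
avg_coefs = coefs.mean(0)            # (M,)
agg_filter = avg_coefs @ avg_atoms   # (S,) reconstructed global filter

# expanding the product of the two averages yields K*K cross-terms:
# every client's coefficients paired with every client's atoms,
# i.e. K^2 emulated latent clients
cross = np.mean([coefs[i] @ atoms[j] for i in range(K) for j in range(K)], axis=0)
print(np.allclose(agg_filter, cross))  # True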
2503.08663
Pierre Sermanet
Pierre Sermanet, Anirudha Majumdar, Alex Irpan, Dmitry Kalashnikov, Vikas Sindhwani
Generating Robot Constitutions & Benchmarks for Semantic Safety
null
null
null
null
cs.RO cs.AI cs.CV cs.CY cs.HC
http://creativecommons.org/licenses/by/4.0/
Until recently, robotics safety research was predominantly about collision avoidance and hazard reduction in the immediate vicinity of a robot. Since the advent of large vision and language models (VLMs), robots are now also capable of higher-level semantic scene understanding and natural language interactions with humans. Despite their known vulnerabilities (e.g. hallucinations or jail-breaking), VLMs are being handed control of robots capable of physical contact with the real world. This can lead to dangerous behaviors, making semantic safety for robots a matter of immediate concern. Our contributions in this paper are twofold: first, to address these emerging risks, we release the ASIMOV Benchmark, a large-scale and comprehensive collection of datasets for evaluating and improving semantic safety of foundation models serving as robot brains. Our data generation recipe is highly scalable: by leveraging text and image generation techniques, we generate undesirable situations from real-world visual scenes and human injury reports from hospitals. Second, we develop a framework to automatically generate robot constitutions from real-world data to steer a robot's behavior using Constitutional AI mechanisms. We propose a novel auto-amending process that is able to introduce nuances in written rules of behavior; this can lead to increased alignment with human preferences on behavior desirability and safety. We explore trade-offs between generality and specificity across a diverse set of constitutions of different lengths, and demonstrate that a robot is able to effectively reject unconstitutional actions. We measure a top alignment rate of 84.3% on the ASIMOV Benchmark using generated constitutions, outperforming no-constitution baselines and human-written constitutions. Data is available at asimov-benchmark.github.io
[ { "version": "v1", "created": "Tue, 11 Mar 2025 17:50:47 GMT" } ]
2025-03-12T00:00:00
[ [ "Sermanet", "Pierre", "" ], [ "Majumdar", "Anirudha", "" ], [ "Irpan", "Alex", "" ], [ "Kalashnikov", "Dmitry", "" ], [ "Sindhwani", "Vikas", "" ] ]
TITLE: Generating Robot Constitutions & Benchmarks for Semantic Safety ABSTRACT: Until recently, robotics safety research was predominantly about collision avoidance and hazard reduction in the immediate vicinity of a robot. Since the advent of large vision and language models (VLMs), robots are now also capable of higher-level semantic scene understanding and natural language interactions with humans. Despite their known vulnerabilities (e.g. hallucinations or jail-breaking), VLMs are being handed control of robots capable of physical contact with the real world. This can lead to dangerous behaviors, making semantic safety for robots a matter of immediate concern. Our contributions in this paper are twofold: first, to address these emerging risks, we release the ASIMOV Benchmark, a large-scale and comprehensive collection of datasets for evaluating and improving semantic safety of foundation models serving as robot brains. Our data generation recipe is highly scalable: by leveraging text and image generation techniques, we generate undesirable situations from real-world visual scenes and human injury reports from hospitals. Second, we develop a framework to automatically generate robot constitutions from real-world data to steer a robot's behavior using Constitutional AI mechanisms. We propose a novel auto-amending process that is able to introduce nuances in written rules of behavior; this can lead to increased alignment with human preferences on behavior desirability and safety. We explore trade-offs between generality and specificity across a diverse set of constitutions of different lengths, and demonstrate that a robot is able to effectively reject unconstitutional actions. We measure a top alignment rate of 84.3% on the ASIMOV Benchmark using generated constitutions, outperforming no-constitution baselines and human-written constitutions. Data is available at asimov-benchmark.github.io
new_dataset
0.767429
2503.08674
Tobias Kreiman
Tobias Kreiman and Aditi S. Krishnapriyan
Understanding and Mitigating Distribution Shifts For Machine Learning Force Fields
null
null
null
null
cs.LG cond-mat.mtrl-sci physics.chem-ph q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Machine Learning Force Fields (MLFFs) are a promising alternative to expensive ab initio quantum mechanical molecular simulations. Given the diversity of chemical spaces that are of interest and the cost of generating new data, it is important to understand how MLFFs generalize beyond their training distributions. In order to characterize and better understand distribution shifts in MLFFs, we conduct diagnostic experiments on chemical datasets, revealing common shifts that pose significant challenges, even for large foundation models trained on extensive data. Based on these observations, we hypothesize that current supervised training methods inadequately regularize MLFFs, resulting in overfitting and learning poor representations of out-of-distribution systems. We then propose two new methods as initial steps for mitigating distribution shifts for MLFFs. Our methods focus on test-time refinement strategies that incur minimal computational cost and do not use expensive ab initio reference labels. The first strategy, based on spectral graph theory, modifies the edges of test graphs to align with graph structures seen during training. Our second strategy improves representations for out-of-distribution systems at test-time by taking gradient steps using an auxiliary objective, such as a cheap physical prior. Our test-time refinement strategies significantly reduce errors on out-of-distribution systems, suggesting that MLFFs are capable of and can move towards modeling diverse chemical spaces, but are not being effectively trained to do so. Our experiments establish clear benchmarks for evaluating the generalization capabilities of the next generation of MLFFs. Our code is available at https://tkreiman.github.io/projects/mlff_distribution_shifts/.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 17:54:29 GMT" } ]
2025-03-12T00:00:00
[ [ "Kreiman", "Tobias", "" ], [ "Krishnapriyan", "Aditi S.", "" ] ]
TITLE: Understanding and Mitigating Distribution Shifts For Machine Learning Force Fields ABSTRACT: Machine Learning Force Fields (MLFFs) are a promising alternative to expensive ab initio quantum mechanical molecular simulations. Given the diversity of chemical spaces that are of interest and the cost of generating new data, it is important to understand how MLFFs generalize beyond their training distributions. In order to characterize and better understand distribution shifts in MLFFs, we conduct diagnostic experiments on chemical datasets, revealing common shifts that pose significant challenges, even for large foundation models trained on extensive data. Based on these observations, we hypothesize that current supervised training methods inadequately regularize MLFFs, resulting in overfitting and learning poor representations of out-of-distribution systems. We then propose two new methods as initial steps for mitigating distribution shifts for MLFFs. Our methods focus on test-time refinement strategies that incur minimal computational cost and do not use expensive ab initio reference labels. The first strategy, based on spectral graph theory, modifies the edges of test graphs to align with graph structures seen during training. Our second strategy improves representations for out-of-distribution systems at test-time by taking gradient steps using an auxiliary objective, such as a cheap physical prior. Our test-time refinement strategies significantly reduce errors on out-of-distribution systems, suggesting that MLFFs are capable of and can move towards modeling diverse chemical spaces, but are not being effectively trained to do so. Our experiments establish clear benchmarks for evaluating the generalization capabilities of the next generation of MLFFs. Our code is available at https://tkreiman.github.io/projects/mlff_distribution_shifts/.
no_new_dataset
0.951323
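A minimal sketch of the second test-time refinement strategy described in the record above: take a few gradient steps on a cheap auxiliary objective before predicting on an out-of-distribution system, with no ab initio labels. The linear model and quadratic prior below are placeholders, not the paper's spectral or physical objectives:

import torch

def test_time_refine(model, batch, prior_loss, steps=5, lr=1e-4):
    """Take a few gradient steps on a cheap auxiliary objective (e.g.
    a physical prior) before predicting on an out-of-distribution
    system; no expensive ab initio reference labels are needed."""
    model = model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = prior_loss(model, batch)
        loss.backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        return model(batch)

# toy stand-ins: a linear 'force field' and a hypothetical cheap prior
model = torch.nn.Linear(6, 3)
batch = torch.randn(10, 6)
prior = lambda m, x: m(x).pow(2).mean()
print(test_time_refine(model, batch, prior).shape)  # torch.Size([10, 3])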
2105.05717
Lunchen Xie
Lunchen Xie, Jiaqi Liu, Songtao Lu, Tsung-hui Chang, Qingjiang Shi
An Efficient Learning Framework For Federated XGBoost Using Secret Sharing And Distributed Optimization
24 pages, Special issue of ACM Transactions on Intelligent Systems and Technology
null
10.1145/3523061
null
cs.LG cs.AI cs.CR
http://creativecommons.org/licenses/by/4.0/
XGBoost is one of the most widely used machine learning models in the industry due to its superior learning accuracy and efficiency. To address data isolation issues in big data problems, it is crucial to deploy a secure and efficient federated XGBoost (FedXGB) model. Existing FedXGB models either have data leakage issues or are only applicable to the two-party setting with heavy communication and computation overheads. In this paper, a lossless multi-party federated XGB learning framework is proposed with a security guarantee, which reshapes XGBoost's split criterion calculation process under a secret sharing setting and solves the leaf weight calculation problem by leveraging distributed optimization. Remarkably, a thorough analysis of model security is provided as well, and multiple numerical results showcase the superiority of the proposed FedXGB compared with the state-of-the-art models on benchmark datasets.
[ { "version": "v1", "created": "Wed, 12 May 2021 15:04:18 GMT" } ]
2025-03-11T00:00:00
[ [ "Xie", "Lunchen", "" ], [ "Liu", "Jiaqi", "" ], [ "Lu", "Songtao", "" ], [ "Chang", "Tsung-hui", "" ], [ "Shi", "Qingjiang", "" ] ]
TITLE: An Efficient Learning Framework For Federated XGBoost Using Secret Sharing And Distributed Optimization ABSTRACT: XGBoost is one of the most widely used machine learning models in the industry due to its superior learning accuracy and efficiency. To address data isolation issues in big data problems, it is crucial to deploy a secure and efficient federated XGBoost (FedXGB) model. Existing FedXGB models either have data leakage issues or are only applicable to the two-party setting with heavy communication and computation overheads. In this paper, a lossless multi-party federated XGB learning framework is proposed with a security guarantee, which reshapes XGBoost's split criterion calculation process under a secret sharing setting and solves the leaf weight calculation problem by leveraging distributed optimization. Remarkably, a thorough analysis of model security is provided as well, and multiple numerical results showcase the superiority of the proposed FedXGB compared with the state-of-the-art models on benchmark datasets.
no_new_dataset
0.939637
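A toy sketch of the secret-sharing idea behind the FedXGB record above: parties additively share fixed-point-encoded gradient and hessian statistics, and the XGBoost split gain is evaluated on reconstructed aggregates only. Real protocols keep even the gain computation under MPC; here reconstruction happens in the clear purely for illustration:

import numpy as np

MOD = 2**31 - 1
SCALE = 10**4            # fixed-point encoding for real-valued statistics

def share(x, n=3):
    """Additive secret sharing over Z_MOD: no single party sees x."""
    s = [np.random.randint(0, MOD, size=x.shape, dtype=np.int64) for _ in range(n - 1)]
    s.append((x - sum(s)) % MOD)
    return s

def reconstruct(shares):
    v = sum(shares) % MOD
    return np.where(v > MOD // 2, v - MOD, v) / SCALE   # decode signed value

def split_gain(GL, HL, G, H, lam=1.0):
    """XGBoost split criterion, evaluated on reconstructed sums."""
    GR, HR = G - GL, H - HL
    return GL**2 / (HL + lam) + GR**2 / (HR + lam) - G**2 / (H + lam)

g = np.array([0.31, -0.12, 0.55, -0.08])     # per-sample gradients
h = np.array([0.25, 0.25, 0.25, 0.25])       # per-sample hessians
gs = share((g * SCALE).astype(np.int64) % MOD)
hs = share((h * SCALE).astype(np.int64) % MOD)
GL = reconstruct([p[:2].sum() for p in gs])  # left child: first two samples
HL = reconstruct([p[:2].sum() for p in hs])
G = reconstruct([p.sum() for p in gs])
H = reconstruct([p.sum() for p in hs])
print(split_gain(GL, HL, G, H))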
2108.08618
Martijn Pieter Anton Starmans
Martijn P. A. Starmans, Sebastian R. van der Voort, Thomas Phil, Milea J. M. Timbergen, Melissa Vos, Guillaume A. Padmos, Wouter Kessels, David Hanff, Dirk J. Grunhagen, Cornelis Verhoef, Stefan Sleijfer, Martin J. van den Bent, Marion Smits, Roy S. Dwarkasing, Christopher J. Els, Federico Fiduzi, Geert J. L. H. van Leenders, Anela Blazevic, Johannes Hofland, Tessa Brabander, Renza A. H. van Gils, Gaston J. H. Franssen, Richard A. Feelders, Wouter W. de Herder, Florian E. Buisman, Francois E. J. A. Willemssen, Bas Groot Koerkamp, Lindsay Angus, Astrid A. M. van der Veldt, Ana Rajicic, Arlette E. Odink, Mitchell Deen, Jose M. Castillo T., Jifke Veenland, Ivo Schoots, Michel Renckens, Michail Doukas, Rob A. de Man, Jan N. M. IJzermans, Razvan L. Miclea, Peter B. Vermeulen, Esther E. Bron, Maarten G. Thomeer, Jacob J. Visser, Wiro J. Niessen, Stefan Klein (for the Alzheimers Disease Neuroimaging Initiative)
An automated machine learning framework to optimize radiomics model construction validated on twelve clinical applications
22 pages, 3 figures, 2 tables, 1 algorithm, 3 supplementary figures, 4 supplementary tables, 1 supplementary algorithm
null
null
null
eess.IV cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Predicting clinical outcomes from medical images using quantitative features (``radiomics'') requires many method design choices. Currently, in new clinical applications, finding the optimal radiomics method out of the wide range of methods relies on a manual, heuristic trial-and-error process. We introduce a novel automated framework that optimizes radiomics workflow construction per application by standardizing the radiomics workflow in modular components, including a large collection of algorithms for each component, and formulating a combined algorithm selection and hyperparameter optimization problem. To solve it, we employ automated machine learning through two strategies (random search and Bayesian optimization) and three ensembling approaches. Results show that a medium-sized random search and straightforward ensembling perform similarly to more advanced methods while being more efficient. Validated across twelve clinical applications, our approach outperforms both a radiomics baseline and human experts. In conclusion, our framework improves and streamlines radiomics research by fully automatically optimizing radiomics workflow construction. To facilitate reproducibility, we publicly release six datasets, software of the method, and code to reproduce this study.
[ { "version": "v1", "created": "Thu, 19 Aug 2021 11:03:54 GMT" }, { "version": "v2", "created": "Fri, 29 Jul 2022 13:36:52 GMT" }, { "version": "v3", "created": "Mon, 10 Mar 2025 12:20:03 GMT" } ]
2025-03-11T00:00:00
[ [ "Starmans", "Martijn P. A.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "van der Voort", "Sebastian R.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Phil", "Thomas", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Timbergen", "Milea J. M.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Vos", "Melissa", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Padmos", "Guillaume A.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Kessels", "Wouter", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Hanff", "David", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Grunhagen", "Dirk J.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Verhoef", "Cornelis", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Sleijfer", "Stefan", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Bent", "Martin J. van den", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Smits", "Marion", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Dwarkasing", "Roy S.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Els", "Christopher J.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Fiduzi", "Federico", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "van Leenders", "Geert J. L. H.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Blazevic", "Anela", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Hofland", "Johannes", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Brabander", "Tessa", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "van Gils", "Renza A. H.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Franssen", "Gaston J. H.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Feelders", "Richard A.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "de Herder", "Wouter W.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Buisman", "Florian E.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Willemssen", "Francois E. J. A.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Koerkamp", "Bas Groot", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Angus", "Lindsay", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "van der Veldt", "Astrid A. M.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Rajicic", "Ana", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Odink", "Arlette E.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Deen", "Mitchell", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "T.", "Jose M. Castillo", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Veenland", "Jifke", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Schoots", "Ivo", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Renckens", "Michel", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Doukas", "Michail", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "de Man", "Rob A.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "IJzermans", "Jan N. 
M.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Miclea", "Razvan L.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Vermeulen", "Peter B.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Bron", "Esther E.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Thomeer", "Maarten G.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Visser", "Jacob J.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Niessen", "Wiro J.", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ], [ "Klein", "Stefan", "", "for the Alzheimers Disease\n Neuroimaging Initiative" ] ]
TITLE: An automated machine learning framework to optimize radiomics model construction validated on twelve clinical applications ABSTRACT: Predicting clinical outcomes from medical images using quantitative features (``radiomics'') requires many method design choices. Currently, in new clinical applications, finding the optimal radiomics method out of the wide range of methods relies on a manual, heuristic trial-and-error process. We introduce a novel automated framework that optimizes radiomics workflow construction per application by standardizing the radiomics workflow in modular components, including a large collection of algorithms for each component, and formulating a combined algorithm selection and hyperparameter optimization problem. To solve it, we employ automated machine learning through two strategies (random search and Bayesian optimization) and three ensembling approaches. Results show that a medium-sized random search and straightforward ensembling perform similarly to more advanced methods while being more efficient. Validated across twelve clinical applications, our approach outperforms both a radiomics baseline and human experts. In conclusion, our framework improves and streamlines radiomics research by fully automatically optimizing radiomics workflow construction. To facilitate reproducibility, we publicly release six datasets, software of the method, and code to reproduce this study.
new_dataset
0.585823
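A toy version of the combined algorithm selection and hyperparameter optimization the framework above automates, using random search and a straightforward top-k ensemble; the components and search space here are illustrative and far smaller than the paper's modular radiomics workflow:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=30, random_state=0)

def sample_workflow():
    """Randomly pick one algorithm per modular component, plus its
    hyperparameters (a toy CASH search in the spirit of the paper)."""
    if rng.random() < 0.5:
        clf = RandomForestClassifier(n_estimators=int(rng.integers(50, 300)))
    else:
        clf = LogisticRegression(C=float(10 ** rng.uniform(-2, 2)), max_iter=500)
    return make_pipeline(StandardScaler(), clf)

trials = [(cross_val_score(w, X, y, cv=3).mean(), w)
          for w in (sample_workflow() for _ in range(20))]
trials.sort(key=lambda t: t[0], reverse=True)
top_k = [w for _, w in trials[:5]]    # straightforward ensemble of the top 5
print(round(trials[0][0], 3))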
2112.11594
Haoran You
Haoran You, Tong Geng, Yongan Zhang, Ang Li, Yingyan Celine Lin
GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design
Published as a conference paper at HPCA 2022
null
null
null
cs.AR cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art graph learning model. However, it can be notoriously challenging to perform GCN inference over large graph datasets, limiting their application to large real-world graphs and hindering the exploration of deeper and more sophisticated GCN models. This is because real-world graphs can be extremely large and sparse. Furthermore, the node degrees of real-world graphs tend to follow the power-law distribution, yielding highly irregular adjacency matrices and resulting in prohibitive inefficiencies in both data processing and movement, thus substantially limiting the achievable GCN acceleration efficiency. To this end, this paper proposes a GCN algorithm and accelerator Co-Design framework dubbed GCoD which can largely alleviate the aforementioned GCN irregularity and boost GCNs' inference efficiency. Specifically, on the algorithm level, GCoD integrates a split and conquer GCN training strategy that polarizes the graphs to be either denser or sparser in local neighborhoods without compromising the model accuracy, resulting in graph adjacency matrices that (mostly) have merely two levels of workload and enjoy largely enhanced regularity and thus ease of acceleration. On the hardware level, we further develop a dedicated two-pronged accelerator with a separated engine to process each of the aforementioned denser and sparser workloads, further boosting the overall utilization and acceleration efficiency. Extensive experiments and ablation studies validate that our GCoD consistently reduces the number of off-chip accesses, leading to speedups of 15286x, 294x, 7.8x, and 2.5x as compared to CPUs, GPUs, and prior-art GCN accelerators including HyGCN and AWB-GCN, respectively, while maintaining or even improving the task accuracy. Codes are available at https://github.com/RICE-EIC/GCoD.
[ { "version": "v1", "created": "Wed, 22 Dec 2021 00:30:50 GMT" }, { "version": "v2", "created": "Wed, 30 Mar 2022 23:11:07 GMT" }, { "version": "v3", "created": "Sun, 9 Mar 2025 02:58:24 GMT" } ]
2025-03-11T00:00:00
[ [ "You", "Haoran", "" ], [ "Geng", "Tong", "" ], [ "Zhang", "Yongan", "" ], [ "Li", "Ang", "" ], [ "Lin", "Yingyan Celine", "" ] ]
TITLE: GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design ABSTRACT: Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art graph learning model. However, it can be notoriously challenging to perform GCN inference over large graph datasets, limiting their application to large real-world graphs and hindering the exploration of deeper and more sophisticated GCN models. This is because real-world graphs can be extremely large and sparse. Furthermore, the node degrees of real-world graphs tend to follow the power-law distribution, yielding highly irregular adjacency matrices and resulting in prohibitive inefficiencies in both data processing and movement, thus substantially limiting the achievable GCN acceleration efficiency. To this end, this paper proposes a GCN algorithm and accelerator Co-Design framework dubbed GCoD which can largely alleviate the aforementioned GCN irregularity and boost GCNs' inference efficiency. Specifically, on the algorithm level, GCoD integrates a split and conquer GCN training strategy that polarizes the graphs to be either denser or sparser in local neighborhoods without compromising the model accuracy, resulting in graph adjacency matrices that (mostly) have merely two levels of workload and enjoy largely enhanced regularity and thus ease of acceleration. On the hardware level, we further develop a dedicated two-pronged accelerator with a separated engine to process each of the aforementioned denser and sparser workloads, further boosting the overall utilization and acceleration efficiency. Extensive experiments and ablation studies validate that our GCoD consistently reduces the number of off-chip accesses, leading to speedups of 15286x, 294x, 7.8x, and 2.5x as compared to CPUs, GPUs, and prior-art GCN accelerators including HyGCN and AWB-GCN, respectively, while maintaining or even improving the task accuracy. Codes are available at https://github.com/RICE-EIC/GCoD.
no_new_dataset
0.94474
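A crude sketch of the polarization idea in the GCoD record above: reorder nodes by degree so the adjacency matrix splits into a small dense workload and a large sparse one, each suited to a dedicated hardware engine. The accuracy-preserving split-and-conquer retraining is not modeled here:

import numpy as np

def polarize(adj, dense_frac=0.1):
    """Group columns by degree so the adjacency splits into a small
    dense workload and a large sparse one (GCoD's two-level idea,
    without the accuracy-preserving retraining step)."""
    order = np.argsort(-adj.sum(0))          # high-degree nodes first
    adj = adj[np.ix_(order, order)]
    cut = max(1, int(dense_frac * adj.shape[0]))
    return adj[:, :cut], adj[:, cut:]        # dense block, sparse block

# toy graph with power-law-like column degrees
p = np.linspace(0.001, 0.2, 1000)
adj = (np.random.rand(1000, 1000) < p).astype(np.float32)
dense, sparse = polarize(adj)
print(dense.mean(), sparse.mean())           # dense block has higher density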
2302.11341
A. R. Sricharan
Monika Henzinger and A. R. Sricharan and Teresa Anna Steiner
Differentially Private Continual Release of Histograms and Related Queries
Accepted at AISTATS 2025
null
null
null
cs.DS cs.CR
http://creativecommons.org/licenses/by-sa/4.0/
We study privately releasing column sums of a $d$-dimensional table with entries from a universe $\chi$ undergoing $T$ row updates, called histogram under continual release. Our mechanisms give better additive $\ell_\infty$-error than existing mechanisms for a large class of queries and input streams. Our first contribution is an output-sensitive mechanism in the insertions-only model ($\chi = \{0,1\}$) for maintaining (i) the histogram or (ii) queries that do not require maintaining the entire histogram, such as the maximum or minimum column sum, the median, or any quantiles. The mechanism has an additive error of $O(d\log^2 (dq^*)+\log T)$ whp, where $q^*$ is the maximum output value over all time steps on this dataset. The mechanism does not require $q^*$ as input. This breaks the $\Omega(d \log T)$ bound of prior work when $q^* \ll T$. Our second contribution is a mechanism for the turnstile model that admits negative entry updates ($\chi = \{-1, 0,1\}$). This mechanism has an additive error of $O(d \log^2 (dK) + \log T)$ whp, where $K$ is the number of times two consecutive data rows differ, and the mechanism does not require $K$ as input. This is useful when monitoring inputs that only vary under unusual circumstances. For $d=1$ this gives the first private mechanism with error $O(\log^2 K + \log T)$ for continual counting in the turnstile model, improving on the $O(\log^2 n + \log T)$ error bound by Dwork et al. [ASIACRYPT 2015], where $n$ is the number of ones in the stream, as well as allowing negative entries, while Dwork et al. [ASIACRYPT 2015] can only handle nonnegative entries ($\chi=\{0,1\}$).
[ { "version": "v1", "created": "Wed, 22 Feb 2023 12:38:02 GMT" }, { "version": "v2", "created": "Mon, 10 Mar 2025 12:40:22 GMT" } ]
2025-03-11T00:00:00
[ [ "Henzinger", "Monika", "" ], [ "Sricharan", "A. R.", "" ], [ "Steiner", "Teresa Anna", "" ] ]
TITLE: Differentially Private Continual Release of Histograms and Related Queries ABSTRACT: We study privately releasing column sums of a $d$-dimensional table with entries from a universe $\chi$ undergoing $T$ row updates, called histogram under continual release. Our mechanisms give better additive $\ell_\infty$-error than existing mechanisms for a large class of queries and input streams. Our first contribution is an output-sensitive mechanism in the insertions-only model ($\chi = \{0,1\}$) for maintaining (i) the histogram or (ii) queries that do not require maintaining the entire histogram, such as the maximum or minimum column sum, the median, or any quantiles. The mechanism has an additive error of $O(d\log^2 (dq^*)+\log T)$ whp, where $q^*$ is the maximum output value over all time steps on this dataset. The mechanism does not require $q^*$ as input. This breaks the $\Omega(d \log T)$ bound of prior work when $q^* \ll T$. Our second contribution is a mechanism for the turnstile model that admits negative entry updates ($\chi = \{-1, 0,1\}$). This mechanism has an additive error of $O(d \log^2 (dK) + \log T)$ whp, where $K$ is the number of times two consecutive data rows differ, and the mechanism does not require $K$ as input. This is useful when monitoring inputs that only vary under unusual circumstances. For $d=1$ this gives the first private mechanism with error $O(\log^2 K + \log T)$ for continual counting in the turnstile model, improving on the $O(\log^2 n + \log T)$ error bound by Dwork et al. [ASIACRYPT 2015], where $n$ is the number of ones in the stream, as well as allowing negative entries, while Dwork et al. [ASIACRYPT 2015] can only handle nonnegative entries ($\chi=\{0,1\}$).
no_new_dataset
0.939137
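For context on the continual-release setting above (the $d=1$ case): the classic binary-tree mechanism assembles every prefix count from $O(\log T)$ noisy dyadic intervals, giving the polylogarithmic additive error that the paper's output-sensitive mechanisms improve on when $q^*$ or $K$ is small. A self-contained sketch of that baseline, not of the paper's mechanism:

import numpy as np

def tree_counter(stream, eps=1.0):
    """Binary-tree mechanism for continual counting: each prefix sum is
    assembled from O(log T) noisy dyadic-interval sums. Each item falls
    in at most L intervals, so Laplace(L/eps) noise per interval gives
    eps-differential privacy by composition."""
    T = len(stream)
    L = int(np.ceil(np.log2(max(T, 2)))) + 1
    noisy = {}                       # noisy sum per dyadic interval
    out, x = [], np.asarray(stream, dtype=float)
    for t in range(1, T + 1):
        # create noisy sums for every dyadic interval ending at t
        for l in range(L):
            if t % (2 ** l) == 0:
                lo = t - 2 ** l
                noisy[(lo, t)] = x[lo:t].sum() + np.random.laplace(0, L / eps)
        # greedily decompose [0, t) into dyadic intervals and add them up
        est, pos = 0.0, 0
        for l in range(L - 1, -1, -1):
            if pos + 2 ** l <= t and pos % (2 ** l) == 0 and (pos, pos + 2 ** l) in noisy:
                est += noisy[(pos, pos + 2 ** l)]
                pos += 2 ** l
        out.append(est)
    return np.array(out)

s = np.random.binomial(1, 0.3, size=128)
print(np.abs(tree_counter(s) - np.cumsum(s)).max())   # small additive error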
2306.17184
Vinoth Nandakumar
Vinoth Nandakumar, Qiang Qu, Peng Mi and Tongliang Liu
State space models can express n-gram languages
Published in "Transactions on Machine Learning Research", 2025
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent advancements in recurrent neural networks (RNNs) have reinvigorated interest in their application to natural language processing tasks, particularly with the development of more efficient and parallelizable variants known as state space models (SSMs), which have shown competitive performance against transformer models while maintaining a lower memory footprint. While RNNs and SSMs (e.g., Mamba) have been empirically more successful than rule-based systems based on n-gram models, a rigorous theoretical explanation for this success has not yet been developed, as it is unclear how these models encode the combinatorial rules that govern the next-word prediction task. In this paper, we construct state space language models that can solve the next-word prediction task for languages generated from n-gram rules, thereby showing that the former are more expressive. Our proof shows how SSMs can encode n-gram rules using new theoretical results on their memorization capacity, and demonstrates how their context window can be controlled by restricting the spectrum of the state transition matrix. We conduct experiments with a small dataset generated from n-gram rules to show how our framework can be applied to SSMs and RNNs obtained through gradient-based optimization.
[ { "version": "v1", "created": "Tue, 20 Jun 2023 10:41:23 GMT" }, { "version": "v2", "created": "Sun, 15 Dec 2024 00:24:59 GMT" }, { "version": "v3", "created": "Sun, 9 Mar 2025 06:40:39 GMT" } ]
2025-03-11T00:00:00
[ [ "Nandakumar", "Vinoth", "" ], [ "Qu", "Qiang", "" ], [ "Mi", "Peng", "" ], [ "Liu", "Tongliang", "" ] ]
TITLE: State space models can express n-gram languages ABSTRACT: Recent advancements in recurrent neural networks (RNNs) have reinvigorated interest in their application to natural language processing tasks, particularly with the development of more efficient and parallelizable variants known as state space models (SSMs), which have shown competitive performance against transformer models while maintaining a lower memory footprint. While RNNs and SSMs (e.g., Mamba) have been empirically more successful than rule-based systems based on n-gram models, a rigorous theoretical explanation for this success has not yet been developed, as it is unclear how these models encode the combinatorial rules that govern the next-word prediction task. In this paper, we construct state space language models that can solve the next-word prediction task for languages generated from n-gram rules, thereby showing that the former are more expressive. Our proof shows how SSMs can encode n-gram rules using new theoretical results on their memorization capacity, and demonstrates how their context window can be controlled by restricting the spectrum of the state transition matrix. We conduct experiments with a small dataset generated from n-gram rules to show how our framework can be applied to SSMs and RNNs obtained through gradient-based optimization.
no_new_dataset
0.94625
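The experiments described in the record above train SSMs and RNNs on data generated from n-gram rules. A minimal generator for such a language, where each next token is drawn from a fixed distribution conditioned on the previous n-1 tokens; the vocabulary and order are illustrative:

import numpy as np

def sample_ngram_language(vocab=8, n=3, length=64, seed=0):
    """Generate a sequence from random n-gram rules: the next token is
    drawn from a fixed distribution over the previous n-1 tokens."""
    rng = np.random.default_rng(seed)
    # one categorical distribution per (n-1)-token context
    table = rng.dirichlet(np.ones(vocab), size=(vocab,) * (n - 1))
    seq = list(rng.integers(0, vocab, size=n - 1))
    for _ in range(length - (n - 1)):
        ctx = tuple(seq[-(n - 1):])
        seq.append(rng.choice(vocab, p=table[ctx]))
    return np.array(seq)

print(sample_ngram_language()[:16])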
2307.03812
Iksung Kang
Iksung Kang, Qinrong Zhang, Stella X. Yu, Na Ji
Coordinate-based neural representations for computational adaptive optics in widefield microscopy
60 pages, 20 figures, 2 tables. Nat Mach Intell (2024)
null
10.1038/s42256-024-00853-3
null
eess.IV cs.SY eess.SY physics.optics
http://creativecommons.org/licenses/by-nc-nd/4.0/
Widefield microscopy is widely used for non-invasive imaging of biological structures at subcellular resolution. When applied to complex specimens, its image quality is degraded by sample-induced optical aberration. Adaptive optics can correct wavefront distortion and restore diffraction-limited resolution but requires wavefront sensing and corrective devices, increasing system complexity and cost. Here, we describe a self-supervised machine learning algorithm, CoCoA, that performs joint wavefront estimation and three-dimensional structural information extraction from a single input 3D image stack without the need for an external training dataset. We implemented CoCoA for widefield imaging of mouse brain tissues and validated its performance with direct-wavefront-sensing-based adaptive optics. Importantly, we systematically explored and quantitatively characterized the limiting factors of CoCoA's performance. Using CoCoA, we demonstrated the first in vivo widefield mouse brain imaging using machine-learning-based adaptive optics. Incorporating coordinate-based neural representations and a forward physics model, the self-supervised scheme of CoCoA should be applicable to microscopy modalities in general.
[ { "version": "v1", "created": "Fri, 7 Jul 2023 19:36:24 GMT" }, { "version": "v2", "created": "Thu, 25 Apr 2024 04:49:04 GMT" }, { "version": "v3", "created": "Wed, 1 May 2024 23:31:41 GMT" }, { "version": "v4", "created": "Mon, 24 Jun 2024 23:08:29 GMT" }, { "version": "v5", "created": "Fri, 7 Mar 2025 20:29:51 GMT" } ]
2025-03-11T00:00:00
[ [ "Kang", "Iksung", "" ], [ "Zhang", "Qinrong", "" ], [ "Yu", "Stella X.", "" ], [ "Ji", "Na", "" ] ]
TITLE: Coordinate-based neural representations for computational adaptive optics in widefield microscopy ABSTRACT: Widefield microscopy is widely used for non-invasive imaging of biological structures at subcellular resolution. When applied to complex specimens, its image quality is degraded by sample-induced optical aberration. Adaptive optics can correct wavefront distortion and restore diffraction-limited resolution but requires wavefront sensing and corrective devices, increasing system complexity and cost. Here, we describe a self-supervised machine learning algorithm, CoCoA, that performs joint wavefront estimation and three-dimensional structural information extraction from a single input 3D image stack without the need for an external training dataset. We implemented CoCoA for widefield imaging of mouse brain tissues and validated its performance with direct-wavefront-sensing-based adaptive optics. Importantly, we systematically explored and quantitatively characterized the limiting factors of CoCoA's performance. Using CoCoA, we demonstrated the first in vivo widefield mouse brain imaging using machine-learning-based adaptive optics. Incorporating coordinate-based neural representations and a forward physics model, the self-supervised scheme of CoCoA should be applicable to microscopy modalities in general.
no_new_dataset
0.948489
2310.08051
Yuzhe Tian
Jianchao Lu and Yuzhe Tian, Yang Zhang, Quan Z. Sheng, Xi Zheng
LGL-BCI: A Motor-Imagery-Based Brain-Computer Interface with Geometric Learning
Update the venue and copyright information
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brain--computer interfaces are a groundbreaking technology whereby brain signals are used to control external devices. Despite some advances in recent years, electroencephalogram (EEG)-based motor-imagery tasks face challenges such as amplitude and phase variability and complex spatial correlations, along with a need for smaller models and faster inference. In this study, we develop a prototype, called the Lightweight Geometric Learning Brain--Computer Interface (LGL-BCI), which uses our customized geometric deep learning architecture for swift model inference without sacrificing accuracy. LGL-BCI contains an EEG channel selection module via a feature decomposition algorithm to reduce the dimensionality of a symmetric positive definite matrix, providing adaptability to the continuously changing EEG signal. Meanwhile, a built-in lossless transformation helps boost the inference speed. The performance of our solution was evaluated using two real-world EEG devices and two public EEG datasets. LGL-BCI demonstrated significant improvements, achieving an accuracy of 82.54% compared to 62.22% for the state-of-the-art approach. Furthermore, LGL-BCI uses fewer parameters (64.9K vs. 183.7K), highlighting its computational efficiency. These findings underscore both the superior accuracy and computational efficiency of LGL-BCI, demonstrating the feasibility and robustness of geometric deep learning in motor-imagery brain--computer interface applications.
[ { "version": "v1", "created": "Thu, 12 Oct 2023 05:52:54 GMT" }, { "version": "v2", "created": "Wed, 8 Nov 2023 05:30:25 GMT" }, { "version": "v3", "created": "Tue, 21 Nov 2023 12:36:49 GMT" }, { "version": "v4", "created": "Wed, 26 Feb 2025 05:16:11 GMT" }, { "version": "v5", "created": "Sat, 8 Mar 2025 15:14:27 GMT" } ]
2025-03-11T00:00:00
[ [ "Lu", "Jianchao", "" ], [ "Tian", "Yuzhe", "" ], [ "Zhang", "Yang", "" ], [ "Sheng", "Quan Z.", "" ], [ "Zheng", "Xi", "" ] ]
TITLE: LGL-BCI: A Motor-Imagery-Based Brain-Computer Interface with Geometric Learning ABSTRACT: Brain--computer interfaces are a groundbreaking technology whereby brain signals are used to control external devices. Despite some advances in recent years, electroencephalogram (EEG)-based motor-imagery tasks face challenges, such as amplitude and phase variability and complex spatial correlations, with a need for smaller models and faster inference. In this study, we develop a prototype, called the Lightweight Geometric Learning Brain--Computer Interface (LGL-BCI), which uses our customized geometric deep learning architecture for swift model inference without sacrificing accuracy. LGL-BCI contains an EEG channel selection module via a feature decomposition algorithm to reduce the dimensionality of a symmetric positive definite matrix, providing adaptability to the continuously changing EEG signal. Meanwhile, a built-in lossless transformation helps boost the inference speed. The performance of our solution was evaluated using two real-world EEG devices and two public EEG datasets. LGL-BCI demonstrated significant improvements, achieving an accuracy of 82.54% compared to 62.22% for the state-of-the-art approach. Furthermore, LGL-BCI uses fewer parameters (64.9K vs. 183.7K), highlighting its computational efficiency. These findings underscore both the superior accuracy and computational efficiency of LGL-BCI, demonstrating the feasibility and robustness of geometric deep learning in motor-imagery brain--computer interface applications.
no_new_dataset
0.949201
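A hedged sketch of the geometry at the core of such BCIs: an EEG trial becomes a channel covariance matrix (a symmetric positive definite matrix), which a log-Euclidean map flattens into an ordinary feature vector. The channel-scoring rule below is a stand-in for the paper's feature-decomposition criterion, not its actual algorithm, and all sizes are invented.

import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(0)
trial = rng.standard_normal((22, 500))            # 22 channels x 500 samples (invented sizes)
cov = trial @ trial.T / trial.shape[1]            # SPD covariance of the trial
cov += 1e-6 * np.eye(22)                          # regularize to stay positive definite

log_cov = logm(cov).real                          # lossless log-Euclidean mapping
feat = log_cov[np.triu_indices(22)]               # vectorized SPD feature for a classifier

scores = np.linalg.norm(log_cov, axis=1)          # stand-in per-channel importance score
keep = np.argsort(scores)[-8:]                    # channel selection: keep the top 8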
2310.12214
Meng Tong
Meng Tong and Kejiang Chen and Jie Zhang and Yuang Qi and Weiming Zhang and Nenghai Yu and Tianwei Zhang and Zhikun Zhang
InferDPT: Privacy-Preserving Inference for Black-box Large Language Model
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs), like ChatGPT, have greatly simplified text generation tasks. However, they have also raised concerns about privacy risks such as data leakage and unauthorized data collection. Existing solutions for privacy-preserving inference face practical challenges related to computation time and communication costs. In this paper, we propose InferDPT, the first practical framework for the privacy-preserving Inference of black-box LLMs, implementing Differential Privacy in Text generation. InferDPT comprises two key modules: the "perturbation module" utilizes the exponential mechanism to generate a perturbed prompt, facilitating privacy-preserving inference with black-box LLMs, and the "extraction module", inspired by knowledge distillation and retrieval-augmented generation, extracts coherent and consistent text from the perturbed generation result, ensuring successful text generation completion. To address privacy concerns related to previous exponential mechanisms' susceptibility to embedding revision attacks, we introduce RANTEXT, a novel differential privacy mechanism integrated into the perturbation module of InferDPT, which introduces the concept of "RANdom adjacency" for TEXT perturbation within the prompt. Experimental results across three datasets demonstrate that the text generation quality of InferDPT is comparable to that of non-private GPT-4, and RANTEXT surpasses existing state-of-the-art mechanisms, namely, SANTEXT+ and CUSTEXT+ in the trade-off between privacy and utility. Even with a privacy parameter epsilon value of 6.0, RANTEXT achieves an average privacy protection rate exceeding 90% against embedding revision attacks, which is 0.58 times higher than that of SANTEXT+ and 3.35 times higher than that of CUSTEXT+.
[ { "version": "v1", "created": "Wed, 18 Oct 2023 18:00:11 GMT" }, { "version": "v2", "created": "Sun, 22 Oct 2023 07:34:36 GMT" }, { "version": "v3", "created": "Tue, 24 Oct 2023 03:25:14 GMT" }, { "version": "v4", "created": "Fri, 8 Dec 2023 05:14:40 GMT" }, { "version": "v5", "created": "Mon, 11 Dec 2023 09:59:09 GMT" }, { "version": "v6", "created": "Wed, 27 Mar 2024 09:19:01 GMT" }, { "version": "v7", "created": "Mon, 10 Mar 2025 06:52:58 GMT" } ]
2025-03-11T00:00:00
[ [ "Tong", "Meng", "" ], [ "Chen", "Kejiang", "" ], [ "Zhang", "Jie", "" ], [ "Qi", "Yuang", "" ], [ "Zhang", "Weiming", "" ], [ "Yu", "Nenghai", "" ], [ "Zhang", "Tianwei", "" ], [ "Zhang", "Zhikun", "" ] ]
TITLE: InferDPT: Privacy-Preserving Inference for Black-box Large Language Model ABSTRACT: Large language models (LLMs), like ChatGPT, have greatly simplified text generation tasks. However, they have also raised concerns about privacy risks such as data leakage and unauthorized data collection. Existing solutions for privacy-preserving inference face practical challenges related to computation time and communication costs. In this paper, we propose InferDPT, the first practical framework for the privacy-preserving Inference of black-box LLMs, implementing Differential Privacy in Text generation. InferDPT comprises two key modules: the "perturbation module" utilizes the exponential mechanism to generate a perturbed prompt, facilitating privacy-preserving inference with black-box LLMs, and the "extraction module", inspired by knowledge distillation and retrieval-augmented generation, extracts coherent and consistent text from the perturbed generation result, ensuring successful text generation completion. To address privacy concerns related to previous exponential mechanisms' susceptibility to embedding revision attacks, we introduce RANTEXT, a novel differential privacy mechanism integrated into the perturbation module of InferDPT, which introduces the concept of "RANdom adjacency" for TEXT perturbation within the prompt. Experimental results across three datasets demonstrate that the text generation quality of InferDPT is comparable to that of non-private GPT-4, and RANTEXT surpasses existing state-of-the-art mechanisms, namely, SANTEXT+ and CUSTEXT+ in the trade-off between privacy and utility. Even with a privacy parameter epsilon value of 6.0, RANTEXT achieves an average privacy protection rate exceeding 90% against embedding revision attacks, which is 0.58 times higher than that of SANTEXT+ and 3.35 times higher than that of CUSTEXT+.
no_new_dataset
0.948632
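The perturbation module rests on the exponential mechanism; here is a toy, self-contained version of that single step. The five-word vocabulary, random embeddings, and utility normalization are ours, and RANTEXT's random-adjacency candidate sets are not reproduced.

import numpy as np

rng = np.random.default_rng(0)
vocab = ["good", "great", "fine", "bad", "poor"]
emb = rng.standard_normal((len(vocab), 16))       # stand-in word embeddings

def perturb(token_idx, eps):
    d = np.linalg.norm(emb - emb[token_idx], axis=1)  # embedding distance = negative utility
    u = -d / (d.max() + 1e-9)                         # scale so sensitivity is 1
    p = np.exp(eps * u / 2)                           # exponential mechanism weights
    p /= p.sum()
    return vocab[rng.choice(len(vocab), p=p)]

print(perturb(0, eps=6.0))   # usually a near neighbor of "good", but randomized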
2311.17093
Evelyn Mannix
Evelyn Mannix and Howard Bondell
A Mixture of Exemplars Approach for Efficient Out-of-Distribution Detection with Foundation Models
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
One of the early weaknesses identified in deep neural networks trained for image classification tasks was their inability to provide low confidence predictions on out-of-distribution (OOD) data that was significantly different from the in-distribution (ID) data used to train them. Representation learning, where neural networks are trained in specific ways that improve their ability to detect OOD examples, has emerged as a promising solution. However, these approaches require long training times and can add additional overhead to detect OOD examples. Recent developments in Vision Transformer (ViT) foundation models – large networks trained on large and diverse datasets with self-supervised approaches – also show strong performance in OOD detection, and could address these challenges. This paper presents Mixture of Exemplars (MoLAR), an efficient approach to tackling OOD detection challenges that is designed to maximise the benefit of training a classifier with a high quality, frozen, pretrained foundation model backbone. MoLAR provides strong OOD performance when only comparing the similarity of OOD examples to the exemplars, a small set of images chosen to be representative of the dataset, leading to up to 30 times faster OOD detection inference over other methods that provide best performance when the full ID dataset is used. In some cases, only using these exemplars actually improves performance with MoLAR. Extensive experiments demonstrate the improved OOD detection performance of MoLAR in comparison to comparable approaches in both supervised and semi-supervised settings, and code is available at github.com/emannix/molar-mixture-of-exemplars.
[ { "version": "v1", "created": "Tue, 28 Nov 2023 06:12:28 GMT" }, { "version": "v2", "created": "Thu, 7 Mar 2024 00:30:52 GMT" }, { "version": "v3", "created": "Fri, 24 May 2024 06:06:34 GMT" }, { "version": "v4", "created": "Fri, 22 Nov 2024 01:20:29 GMT" }, { "version": "v5", "created": "Sat, 8 Mar 2025 00:58:33 GMT" } ]
2025-03-11T00:00:00
[ [ "Mannix", "Evelyn", "" ], [ "Bondell", "Howard", "" ] ]
TITLE: A Mixture of Exemplars Approach for Efficient Out-of-Distribution Detection with Foundation Models ABSTRACT: One of the early weaknesses identified in deep neural networks trained for image classification tasks was their inability to provide low confidence predictions on out-of-distribution (OOD) data that was significantly different from the in-distribution (ID) data used to train them. Representation learning, where neural networks are trained in specific ways that improve their ability to detect OOD examples, has emerged as a promising solution. However, these approaches require long training times and can add additional overhead to detect OOD examples. Recent developments in Vision Transformer (ViT) foundation models – large networks trained on large and diverse datasets with self-supervised approaches – also show strong performance in OOD detection, and could address these challenges. This paper presents Mixture of Exemplars (MoLAR), an efficient approach to tackling OOD detection challenges that is designed to maximise the benefit of training a classifier with a high quality, frozen, pretrained foundation model backbone. MoLAR provides strong OOD performance when only comparing the similarity of OOD examples to the exemplars, a small set of images chosen to be representative of the dataset, leading to up to 30 times faster OOD detection inference over other methods that provide best performance when the full ID dataset is used. In some cases, only using these exemplars actually improves performance with MoLAR. Extensive experiments demonstrate the improved OOD detection performance of MoLAR in comparison to comparable approaches in both supervised and semi-supervised settings, and code is available at github.com/emannix/molar-mixture-of-exemplars.
no_new_dataset
0.948822
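The exemplar-only scoring the abstract describes reduces to a similarity lookup; a minimal sketch follows, where the feature dimension, exemplar count, and threshold are invented and a frozen backbone is assumed to have already produced the embeddings.

import numpy as np

rng = np.random.default_rng(0)
exemplars = rng.standard_normal((50, 384))        # embeddings of the exemplar set
x = rng.standard_normal(384)                      # embedding of a test image

sims = exemplars @ x / (np.linalg.norm(exemplars, axis=1) * np.linalg.norm(x) + 1e-9)
ood_score = -sims.max()                           # far from every exemplar => likely OOD
is_ood = ood_score > -0.2                         # threshold tuned on validation data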
2311.17750
Gergely D\'aniel N\'emeth
Gergely D\'aniel N\'emeth, Miguel \'Angel Lozano, Novi Quadrianto, Nuria Oliver
Privacy and Accuracy Implications of Model Complexity and Integration in Heterogeneous Federated Learning
Code: https://github.com/ellisalicante/ma-fl-mia
IEEE Access 13 (2025) 40258-40274
10.1109/ACCESS.2025.3546478
null
cs.LG cs.AI cs.CR
http://creativecommons.org/licenses/by/4.0/
Federated Learning (FL) has been proposed as a privacy-preserving solution for distributed machine learning, particularly in heterogeneous FL settings where clients have varying computational capabilities and thus train models with different complexities compared to the server's model. However, FL is not without vulnerabilities: recent studies have shown that it is susceptible to membership inference attacks (MIA), which can compromise the privacy of client data. In this paper, we examine the intersection of these two aspects, heterogeneous FL and its privacy vulnerabilities, by focusing on the role of client model integration, the process through which the server integrates parameters from clients' smaller models into its larger model. To better understand this process, we first propose a taxonomy that categorizes existing heterogeneous FL methods and enables the design of seven novel heterogeneous FL model integration strategies. Using CIFAR-10, CIFAR-100, and FEMNIST vision datasets, we evaluate the privacy and accuracy trade-offs of these approaches under three types of MIAs. Our findings reveal significant differences in privacy leakage and performance depending on the integration method. Notably, introducing randomness in the model integration process enhances client privacy while maintaining competitive accuracy for both the clients and the server. This work sheds quantitative light on the privacy-accuracy implications of client model integration in heterogeneous FL settings, paving the way towards more secure and efficient FL systems.
[ { "version": "v1", "created": "Wed, 29 Nov 2023 15:54:15 GMT" }, { "version": "v2", "created": "Thu, 4 Jul 2024 08:33:33 GMT" }, { "version": "v3", "created": "Mon, 10 Mar 2025 11:10:50 GMT" } ]
2025-03-11T00:00:00
[ [ "Németh", "Gergely Dániel", "" ], [ "Lozano", "Miguel Ángel", "" ], [ "Quadrianto", "Novi", "" ], [ "Oliver", "Nuria", "" ] ]
TITLE: Privacy and Accuracy Implications of Model Complexity and Integration in Heterogeneous Federated Learning ABSTRACT: Federated Learning (FL) has been proposed as a privacy-preserving solution for distributed machine learning, particularly in heterogeneous FL settings where clients have varying computational capabilities and thus train models with different complexities compared to the server's model. However, FL is not without vulnerabilities: recent studies have shown that it is susceptible to membership inference attacks (MIA), which can compromise the privacy of client data. In this paper, we examine the intersection of these two aspects, heterogeneous FL and its privacy vulnerabilities, by focusing on the role of client model integration, the process through which the server integrates parameters from clients' smaller models into its larger model. To better understand this process, we first propose a taxonomy that categorizes existing heterogeneous FL methods and enables the design of seven novel heterogeneous FL model integration strategies. Using CIFAR-10, CIFAR-100, and FEMNIST vision datasets, we evaluate the privacy and accuracy trade-offs of these approaches under three types of MIAs. Our findings reveal significant differences in privacy leakage and performance depending on the integration method. Notably, introducing randomness in the model integration process enhances client privacy while maintaining competitive accuracy for both the clients and the server. This work sheds quantitative light on the privacy-accuracy implications of client model integration in heterogeneous FL settings, paving the way towards more secure and efficient FL systems.
no_new_dataset
0.945147
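One way to picture client model integration, and the role of randomness in it, is as placing a small client matrix into a larger server matrix at randomly chosen coordinates; this toy is our reading of the idea, not one of the paper's seven named strategies.

import numpy as np

rng = np.random.default_rng(0)
server_W = np.zeros((8, 8))                       # one layer of the larger server model
client_W = rng.standard_normal((4, 4))            # the smaller client's trained layer

rows = rng.choice(8, size=4, replace=False)       # randomized slice assignment
cols = rng.choice(8, size=4, replace=False)
server_W[np.ix_(rows, cols)] += client_W          # integrate the client update in place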
2312.05657
Nikos Kanakaris
Shukai Duan, Nikos Kanakaris, Xiongye Xiao, Heng Ping, Chenyu Zhou, Nesreen K. Ahmed, Guixiang Ma, Mihai Capota, Theodore L. Willke, Shahin Nazarian, Paul Bogdan
PerfRL: A Small Language Model Framework for Efficient Code Optimization
null
null
null
null
cs.LG cs.AI cs.PL cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Code optimization is a challenging task requiring a substantial level of expertise from developers. Nonetheless, this level of human capacity is not sufficient considering the rapid evolution of new hardware architectures and software environments. In light of this, recent research proposes adopting machine learning and artificial intelligence techniques to automate the code optimization process. In this paper, we introduce PerfRL, an innovative framework designed to tackle the problem of code optimization. Our framework leverages the capabilities of small language models (SLMs) and reinforcement learning (RL), facilitating a system where SLMs can assimilate feedback from their environment during the fine-tuning phase, notably through unit tests. When benchmarked against existing models, PerfRL demonstrates superior efficiency in terms of speed and computational resource usage, attributed to its reduced need for training steps and its compatibility with SLMs. Furthermore, it substantially diminishes the risk of logical and syntactical errors. To evaluate our framework, we conduct experiments on the PIE dataset using a lightweight large language model (i.e., CodeT5) and a new reinforcement learning algorithm, namely RRHF. For evaluation purposes, we use a list of evaluation metrics related to optimization quality and speedup. The evaluation results show that our approach achieves similar or better results compared to state-of-the-art models using shorter training times and smaller pre-trained models.
[ { "version": "v1", "created": "Sat, 9 Dec 2023 19:50:23 GMT" }, { "version": "v2", "created": "Sun, 9 Mar 2025 05:01:42 GMT" } ]
2025-03-11T00:00:00
[ [ "Duan", "Shukai", "" ], [ "Kanakaris", "Nikos", "" ], [ "Xiao", "Xiongye", "" ], [ "Ping", "Heng", "" ], [ "Zhou", "Chenyu", "" ], [ "Ahmed", "Nesreen K.", "" ], [ "Ma", "Guixiang", "" ], [ "Capota", "Mihai", "" ], [ "Willke", "Theodore L.", "" ], [ "Nazarian", "Shahin", "" ], [ "Bogdan", "Paul", "" ] ]
TITLE: PerfRL: A Small Language Model Framework for Efficient Code Optimization ABSTRACT: Code optimization is a challenging task requiring a substantial level of expertise from developers. Nonetheless, this level of human capacity is not sufficient considering the rapid evolution of new hardware architectures and software environments. In light of this, recent research proposes adopting machine learning and artificial intelligence techniques to automate the code optimization process. In this paper, we introduce PerfRL, an innovative framework designed to tackle the problem of code optimization. Our framework leverages the capabilities of small language models (SLMs) and reinforcement learning (RL), facilitating a system where SLMs can assimilate feedback from their environment during the fine-tuning phase, notably through unit tests. When benchmarked against existing models, PerfRL demonstrates superior efficiency in terms of speed and computational resource usage, attributed to its reduced need for training steps and its compatibility with SLMs. Furthermore, it substantially diminishes the risk of logical and syntactical errors. To evaluate our framework, we conduct experiments on the PIE dataset using a lightweight large language model (i.e., CodeT5) and a new reinforcement learning algorithm, namely RRHF. For evaluation purposes, we use a list of evaluation metrics related to optimization quality and speedup. The evaluation results show that our approach achieves similar or better results compared to state-of-the-art models using shorter training times and smaller pre-trained models.
no_new_dataset
0.942612
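The environment feedback described above can be made concrete as a reward that only pays out when unit tests pass, with a bounded bonus for speedup; this reward shape is our guess at the idea, not the paper's exact function, and the candidate program and tests are invented.

import time

def reward(candidate_src, tests, baseline_time):
    env = {}
    try:
        exec(candidate_src, env)                  # compile and load the candidate program
        for t in tests:
            t(env)                                # each test raises on failure
    except Exception:
        return -1.0                               # syntactic or logical failure
    start = time.perf_counter()
    env["solve"](10 ** 5)                         # time the candidate on a large input
    speedup = baseline_time / (time.perf_counter() - start + 1e-9)
    return 1.0 + min(speedup, 10.0)               # correctness plus a capped speed bonus

def test_small(env):
    assert env["solve"](3) == 6

cand = "def solve(n):\n    return n * (n + 1) // 2"
print(reward(cand, [test_small], baseline_time=0.01))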
2312.13440
Tonmoy Hossain
Tonmoy Hossain and Miaomiao Zhang
MGAug: Multimodal Geometric Augmentation in Latent Spaces of Image Deformations
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Geometric transformations have been widely used to augment the size of training images. Existing methods often assume a unimodal distribution of the underlying transformations between images, which limits their power when data with multimodal distributions occur. In this paper, we propose a novel model, Multimodal Geometric Augmentation (MGAug), that for the first time generates augmenting transformations in a multimodal latent space of geometric deformations. To achieve this, we first develop a deep network that embeds the learning of latent geometric spaces of diffeomorphic transformations (a.k.a. diffeomorphisms) in a variational autoencoder (VAE). A mixture of multivariate Gaussians is formulated in the tangent space of diffeomorphisms and serves as a prior to approximate the hidden distribution of image transformations. We then augment the original training dataset by deforming images using randomly sampled transformations from the learned multimodal latent space of VAE. To validate the efficiency of our model, we jointly learn the augmentation strategy with two distinct domain-specific tasks: multi-class classification on 2D synthetic datasets and segmentation on real 3D brain magnetic resonance images (MRIs). We also compare MGAug with state-of-the-art transformation-based image augmentation algorithms. Experimental results show that our proposed approach outperforms all baselines by significantly improved prediction accuracy. Our code is publicly available at https://github.com/tonmoy-hossain/MGAug.
[ { "version": "v1", "created": "Wed, 20 Dec 2023 21:30:55 GMT" }, { "version": "v2", "created": "Thu, 25 Jan 2024 18:31:49 GMT" }, { "version": "v3", "created": "Sun, 9 Mar 2025 07:55:41 GMT" } ]
2025-03-11T00:00:00
[ [ "Hossain", "Tonmoy", "" ], [ "Zhang", "Miaomiao", "" ] ]
TITLE: MGAug: Multimodal Geometric Augmentation in Latent Spaces of Image Deformations ABSTRACT: Geometric transformations have been widely used to augment the size of training images. Existing methods often assume a unimodal distribution of the underlying transformations between images, which limits their power when data with multimodal distributions occur. In this paper, we propose a novel model, Multimodal Geometric Augmentation (MGAug), that for the first time generates augmenting transformations in a multimodal latent space of geometric deformations. To achieve this, we first develop a deep network that embeds the learning of latent geometric spaces of diffeomorphic transformations (a.k.a. diffeomorphisms) in a variational autoencoder (VAE). A mixture of multivariate Gaussians is formulated in the tangent space of diffeomorphisms and serves as a prior to approximate the hidden distribution of image transformations. We then augment the original training dataset by deforming images using randomly sampled transformations from the learned multimodal latent space of VAE. To validate the efficiency of our model, we jointly learn the augmentation strategy with two distinct domain-specific tasks: multi-class classification on 2D synthetic datasets and segmentation on real 3D brain magnetic resonance images (MRIs). We also compare MGAug with state-of-the-art transformation-based image augmentation algorithms. Experimental results show that our proposed approach outperforms all baselines by significantly improved prediction accuracy. Our code is publicly available at https://github.com/tonmoy-hossain/MGAug.
no_new_dataset
0.948775
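The augmentation step reduces to: sample a latent from a mixture of Gaussians, decode it into a smooth deformation, and warp the image. Below, a hand-built two-mode latent and a fixed random "decoder" stand in for the learned VAE; all parameters are ours.

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

rng = np.random.default_rng(0)
means = np.array([[-2.0, 0.0], [2.0, 0.0]])       # two modes of transformations (invented)
k = rng.integers(2)                               # pick a mixture component
z = rng.normal(means[k], 0.5)                     # sample in the latent space

img = rng.random((64, 64))
dx = gaussian_filter(rng.standard_normal((64, 64)), 8) * z[0]  # stand-in decoder:
dy = gaussian_filter(rng.standard_normal((64, 64)), 8) * z[1]  # latent scales smooth fields
yy, xx = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
warped = map_coordinates(img, [yy + dy, xx + dx], order=1, mode="reflect")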
2312.15497
Corneliu Arsene Dr
Corneliu Arsene, Alessandra Parisio
Deep Convolutional Neural Networks for Short-Term Multi-Energy Demand Prediction of Integrated Energy Systems
29 pages, 40 figures
null
null
null
cs.LG cs.CE
http://creativecommons.org/licenses/by/4.0/
Forecasting power consumptions of integrated electrical, heat or gas network systems is essential in order to operate the whole energy network more efficiently. Multi-energy systems are increasingly seen as a key component of future energy systems, and a valuable source of flexibility, which can significantly contribute to a cleaner and more sustainable whole energy system. Therefore, there is a stringent need for developing novel and performant models for forecasting multi-energy demand of integrated energy systems, which account for the different types of interacting energy vectors and of the coupling between them. Previous efforts in demand forecasting focused mainly on the single electrical power consumption or, more recently, on the single heat or gas power consumptions. In order to address this gap, in this paper six novel prediction models based on Convolutional Neural Networks (CNNs) are developed, for either individual or joint prediction of multi-energy power consumptions: the single input/single output CNN model with determining the optimum number of epochs (CNN_1), the multiple input/single output CNN model (CNN_2), the single input/single output CNN model with training/validation/testing datasets (CNN_3), the joint prediction CNN model (CNN_4), the multiple-building input/output CNN model (CNN_5) and the federated learning CNN model (CNN_6). All six novel CNN models are applied in a comprehensive manner on a novel integrated electrical, heat and gas network system, which only recently has started to be used for forecasting. The forecast horizon is short-term (next half an hour) and all the prediction results are evaluated in terms of the Signal to Noise Ratio (SNR) and the Normalized Root Mean Square Error (NRMSE), while the Mean Absolute Percentage Error (MAPE) is used for comparison purposes with other existing results from the literature.
[ { "version": "v1", "created": "Sun, 24 Dec 2023 14:56:23 GMT" }, { "version": "v2", "created": "Mon, 10 Mar 2025 14:05:46 GMT" } ]
2025-03-11T00:00:00
[ [ "Arsene", "Corneliu", "" ], [ "Parisio", "Alessandra", "" ] ]
TITLE: Deep Convolutional Neural Networks for Short-Term Multi-Energy Demand Prediction of Integrated Energy Systems ABSTRACT: Forecasting power consumptions of integrated electrical, heat or gas network systems is essential in order to operate the whole energy network more efficiently. Multi-energy systems are increasingly seen as a key component of future energy systems, and a valuable source of flexibility, which can significantly contribute to a cleaner and more sustainable whole energy system. Therefore, there is a stringent need for developing novel and performant models for forecasting multi-energy demand of integrated energy systems, which account for the different types of interacting energy vectors and of the coupling between them. Previous efforts in demand forecasting focused mainly on the single electrical power consumption or, more recently, on the single heat or gas power consumptions. In order to address this gap, in this paper six novel prediction models based on Convolutional Neural Networks (CNNs) are developed, for either individual or joint prediction of multi-energy power consumptions: the single input/single output CNN model with determining the optimum number of epochs (CNN_1), the multiple input/single output CNN model (CNN_2), the single input/single output CNN model with training/validation/testing datasets (CNN_3), the joint prediction CNN model (CNN_4), the multiple-building input/output CNN model (CNN_5) and the federated learning CNN model (CNN_6). All six novel CNN models are applied in a comprehensive manner on a novel integrated electrical, heat and gas network system, which only recently has started to be used for forecasting. The forecast horizon is short-term (next half an hour) and all the prediction results are evaluated in terms of the Signal to Noise Ratio (SNR) and the Normalized Root Mean Square Error (NRMSE), while the Mean Absolute Percentage Error (MAPE) is used for comparison purposes with other existing results from the literature.
no_new_dataset
0.952662
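A minimal single-input/single-output forecaster in the spirit of CNN_1, with the NRMSE metric the paper evaluates; the layer sizes, the 48-step window, and the random stand-in data are our choices, not the paper's configuration.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 48, 1),
)
x = torch.randn(32, 1, 48)                        # 48 past half-hourly readings per sample
y = torch.randn(32, 1)                            # demand in the next half hour
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

def nrmse(pred, target):                          # normalized root mean square error
    return torch.sqrt(((pred - target) ** 2).mean()) / (target.max() - target.min())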
2312.16810
Hanqi Su
Hanqi Su, Jay Lee
Machine Learning Approaches for Diagnostics and Prognostics of Industrial Systems Using Open Source Data from PHM Data Challenges: A Review
The paper submitted to the International Journal of Prognostics and Health Management (IJPHM) has been accepted
International Journal of Prognostics and Health Management, Volume 15, Issue 2, 2024
10.36001/ijphm.2024.v15i2.3993
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
In the field of Prognostics and Health Management (PHM), recent years have witnessed a significant surge in the application of machine learning (ML). Despite this growth, the field grapples with a lack of unified guidelines and systematic approaches for effectively implementing these ML techniques and comprehensive analysis regarding industrial open-source data across varied scenarios. To address these gaps, this paper provides a comprehensive review of ML approaches for diagnostics and prognostics of industrial systems using open-source datasets from PHM Data Challenge Competitions held between 2018 and 2023 by PHM Society and IEEE Reliability Society and summarizes a unified ML framework. This review systematically categorizes and scrutinizes the problems, challenges, methodologies, and advancements demonstrated in these competitions, highlighting the evolving role of both conventional machine learning and deep learning in tackling complex industrial tasks related to detection, diagnosis, assessment, and prognosis. Moreover, this paper delves into the common challenges in PHM data challenge competitions by emphasizing data-related and model-related issues and evaluating the limitations of these competitions. The potential solutions to address these challenges are also summarized. Finally, we identify key themes and potential directions for future research, providing opportunities and prospects for next-generation ML-PHM development in the PHM domain.
[ { "version": "v1", "created": "Thu, 28 Dec 2023 04:00:25 GMT" }, { "version": "v2", "created": "Fri, 24 May 2024 21:09:34 GMT" }, { "version": "v3", "created": "Wed, 18 Sep 2024 17:45:20 GMT" } ]
2025-03-11T00:00:00
[ [ "Su", "Hanqi", "" ], [ "Lee", "Jay", "" ] ]
TITLE: Machine Learning Approaches for Diagnostics and Prognostics of Industrial Systems Using Open Source Data from PHM Data Challenges: A Review ABSTRACT: In the field of Prognostics and Health Management (PHM), recent years have witnessed a significant surge in the application of machine learning (ML). Despite this growth, the field grapples with a lack of unified guidelines and systematic approaches for effectively implementing these ML techniques and comprehensive analysis regarding industrial open-source data across varied scenarios. To address these gaps, this paper provides a comprehensive review of ML approaches for diagnostics and prognostics of industrial systems using open-source datasets from PHM Data Challenge Competitions held between 2018 and 2023 by PHM Society and IEEE Reliability Society and summarizes a unified ML framework. This review systematically categorizes and scrutinizes the problems, challenges, methodologies, and advancements demonstrated in these competitions, highlighting the evolving role of both conventional machine learning and deep learning in tackling complex industrial tasks related to detection, diagnosis, assessment, and prognosis. Moreover, this paper delves into the common challenges in PHM data challenge competitions by emphasizing data-related and model-related issues and evaluating the limitations of these competitions. The potential solutions to address these challenges are also summarized. Finally, we identify key themes and potential directions for future research, providing opportunities and prospects for next-generation ML-PHM development in the PHM domain.
no_new_dataset
0.943971
2401.00260
Hao Ruan
Jun Wang, Hao Ruan, Liangjian Wen, Yong Dai, Mingjie Wang
GazeCLIP: Enhancing Gaze Estimation Through Text-Guided Multimodal Learning
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Visual gaze estimation, with its wide-ranging application scenarios, has garnered increasing attention within the research community. Although existing approaches infer gaze solely from image signals, recent advances in visual-language collaboration have demonstrated that the integration of linguistic information can significantly enhance performance across various visual tasks. Leveraging the remarkable transferability of large-scale Contrastive Language-Image Pre-training (CLIP) models, we address the open and urgent question of how to effectively apply linguistic cues to gaze estimation. In this work, we propose GazeCLIP, a novel gaze estimation framework that deeply explores text-face collaboration. Specifically, we introduce a meticulously designed linguistic description generator to produce text signals enriched with coarse directional cues. Furthermore, we present a CLIP-based backbone adept at characterizing text-face pairs for gaze estimation, complemented by a fine-grained multimodal fusion module that models the intricate interrelationships between heterogeneous inputs. Extensive experiments on three challenging datasets demonstrate the superiority of GazeCLIP, which achieves state-of-the-art accuracy. Our findings underscore the potential of using visual-language collaboration to advance gaze estimation and open new avenues for future research in multimodal learning for visual tasks. The implementation code and the pre-trained model will be made publicly available.
[ { "version": "v1", "created": "Sat, 30 Dec 2023 15:24:50 GMT" }, { "version": "v2", "created": "Sun, 7 Jan 2024 04:17:20 GMT" }, { "version": "v3", "created": "Fri, 26 Apr 2024 03:59:41 GMT" }, { "version": "v4", "created": "Sat, 8 Mar 2025 13:37:22 GMT" } ]
2025-03-11T00:00:00
[ [ "Wang", "Jun", "" ], [ "Ruan", "Hao", "" ], [ "Wen", "Liangjian", "" ], [ "Dai", "Yong", "" ], [ "Wang", "Mingjie", "" ] ]
TITLE: GazeCLIP: Enhancing Gaze Estimation Through Text-Guided Multimodal Learning ABSTRACT: Visual gaze estimation, with its wide-ranging application scenarios, has garnered increasing attention within the research community. Although existing approaches infer gaze solely from image signals, recent advances in visual-language collaboration have demonstrated that the integration of linguistic information can significantly enhance performance across various visual tasks. Leveraging the remarkable transferability of large-scale Contrastive Language-Image Pre-training (CLIP) models, we address the open and urgent question of how to effectively apply linguistic cues to gaze estimation. In this work, we propose GazeCLIP, a novel gaze estimation framework that deeply explores text-face collaboration. Specifically, we introduce a meticulously designed linguistic description generator to produce text signals enriched with coarse directional cues. Furthermore, we present a CLIP-based backbone adept at characterizing text-face pairs for gaze estimation, complemented by a fine-grained multimodal fusion module that models the intricate interrelationships between heterogeneous inputs. Extensive experiments on three challenging datasets demonstrate the superiority of GazeCLIP, which achieves state-of-the-art accuracy. Our findings underscore the potential of using visual-language collaboration to advance gaze estimation and open new avenues for future research in multimodal learning for visual tasks. The implementation code and the pre-trained model will be made publicly available.
no_new_dataset
0.937669
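Structurally, the framework pairs a text tower and an image tower with a fusion module that regresses gaze; the sketch below keeps only that skeleton. The linear/embedding encoders are placeholders for CLIP's pretrained towers, and all sizes are invented.

import torch
import torch.nn as nn

class GazeFusion(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.img_enc = nn.Linear(3 * 224 * 224, dim)      # placeholder image tower
        self.txt_enc = nn.Embedding(1000, dim)            # placeholder text tower
        self.fuse = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.head = nn.Linear(dim, 2)                     # regress (pitch, yaw)

    def forward(self, face, tokens):
        f = self.img_enc(face.flatten(1)).unsqueeze(1)    # (B, 1, dim) face feature
        t = self.txt_enc(tokens)                          # (B, L, dim) directional text cues
        fused, _ = self.fuse(f, t, t)                     # face feature attends to text
        return self.head(fused.squeeze(1))

model = GazeFusion()
gaze = model(torch.randn(2, 3, 224, 224), torch.randint(0, 1000, (2, 8)))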
2401.10893
Deepak Banerjee
Deepak Banerjee, Anjali Ishaan
Location Sensitive Embedding for Knowledge Graph Reasoning
null
null
null
null
cs.IR cs.CL
http://creativecommons.org/licenses/by/4.0/
Embedding methods transform the knowledge graph into a continuous, low-dimensional space, facilitating inference and completion tasks. Existing methods are mainly divided into two types: translational distance models and semantic matching models. A key challenge in translational distance models is their inability to effectively differentiate between 'head' and 'tail' entities in graphs. To address this problem, a novel location-sensitive embedding (LSE) method has been developed. LSE innovatively modifies the head entity using relation-specific mappings, conceptualizing relations as linear transformations rather than mere translations. The theoretical foundations of LSE, including its representational capabilities and its connections to existing models, have been thoroughly examined. A more streamlined variant, LSEd, which employs a diagonal matrix for transformations to enhance practical efficiency, is also proposed. Experiments conducted on four large-scale KG datasets for link prediction show that LSEd either outperforms or is competitive with state-of-the-art related works.
[ { "version": "v1", "created": "Fri, 1 Dec 2023 22:35:19 GMT" }, { "version": "v2", "created": "Sat, 27 Jan 2024 22:25:09 GMT" }, { "version": "v3", "created": "Tue, 30 Jan 2024 03:14:11 GMT" }, { "version": "v4", "created": "Sat, 8 Mar 2025 00:43:01 GMT" } ]
2025-03-11T00:00:00
[ [ "Banerjee", "Deepak", "" ], [ "Ishaan", "Anjali", "" ] ]
TITLE: Location Sensitive Embedding for Knowledge Graph Reasoning ABSTRACT: Embedding methods transform the knowledge graph into a continuous, low-dimensional space, facilitating inference and completion tasks. Existing methods are mainly divided into two types: translational distance models and semantic matching models. A key challenge in translational distance models is their inability to effectively differentiate between 'head' and 'tail' entities in graphs. To address this problem, a novel location-sensitive embedding (LSE) method has been developed. LSE innovatively modifies the head entity using relation-specific mappings, conceptualizing relations as linear transformations rather than mere translations. The theoretical foundations of LSE, including its representational capabilities and its connections to existing models, have been thoroughly examined. A more streamlined variant, LSEd, which employs a diagonal matrix for transformations to enhance practical efficiency, is also proposed. Experiments conducted on four large-scale KG datasets for link prediction show that LSEd either outperforms or is competitive with state-of-the-art related works.
no_new_dataset
0.943138
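As we read the abstract, the streamlined LSEd variant scores a triple by transforming the head with a relation-specific diagonal matrix before the usual translation, which breaks head/tail symmetry; a few lines make that concrete (dimensions and vectors are random stand-ins).

import numpy as np

rng = np.random.default_rng(0)
d = 64
h, t = rng.standard_normal(d), rng.standard_normal(d)             # head and tail embeddings
r_diag, r_shift = rng.standard_normal(d), rng.standard_normal(d)  # relation parameters

def score(head, tail):
    return -np.linalg.norm(r_diag * head + r_shift - tail)  # higher = more plausible

print(score(h, t), score(t, h))                   # asymmetric, unlike a pure translation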
2401.15199
Zahra Kharazian
Zahra Kharazian, Tony Lindgren, Sindri Magn\'usson, Olof Steinert, Oskar Andersson Reyna
SCANIA Component X Dataset: A Real-World Multivariate Time Series Dataset for Predictive Maintenance
12 pages, 8 figures
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Predicting failures and maintenance time in predictive maintenance is challenging due to the scarcity of comprehensive real-world datasets, and among those available, few are in time series format. This paper introduces a real-world, multivariate time series dataset collected exclusively from a single anonymized engine component (Component X) across a fleet of SCANIA trucks. The dataset includes operational data, repair records, and specifications related to Component X, while maintaining confidentiality through anonymization. It is well-suited for a range of machine learning applications, including classification, regression, survival analysis, and anomaly detection, particularly in predictive maintenance scenarios. The dataset's large population size, diverse features (in the form of histograms and numerical counters), and temporal information make it a unique resource in the field. The objective of releasing this dataset is to give a broad range of researchers the possibility of working with real-world data from an internationally well-known company and introduce a standard benchmark to the predictive maintenance field, fostering reproducible research.
[ { "version": "v1", "created": "Fri, 26 Jan 2024 20:51:55 GMT" }, { "version": "v2", "created": "Mon, 10 Mar 2025 09:12:04 GMT" } ]
2025-03-11T00:00:00
[ [ "Kharazian", "Zahra", "" ], [ "Lindgren", "Tony", "" ], [ "Magnússon", "Sindri", "" ], [ "Steinert", "Olof", "" ], [ "Reyna", "Oskar Andersson", "" ] ]
TITLE: SCANIA Component X Dataset: A Real-World Multivariate Time Series Dataset for Predictive Maintenance ABSTRACT: Predicting failures and maintenance time in predictive maintenance is challenging due to the scarcity of comprehensive real-world datasets, and among those available, few are in time series format. This paper introduces a real-world, multivariate time series dataset collected exclusively from a single anonymized engine component (Component X) across a fleet of SCANIA trucks. The dataset includes operational data, repair records, and specifications related to Component X, while maintaining confidentiality through anonymization. It is well-suited for a range of machine learning applications, including classification, regression, survival analysis, and anomaly detection, particularly in predictive maintenance scenarios. The dataset's large population size, diverse features (in the form of histograms and numerical counters), and temporal information make it a unique resource in the field. The objective of releasing this dataset is to give a broad range of researchers the possibility of working with real-world data from an internationally well-known company and introduce a standard benchmark to the predictive maintenance field, fostering reproducible research.
new_dataset
0.962532
2402.00906
Hamed Poursiami
Hamed Poursiami, Ihsen Alouani, Maryam Parsa
BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks
7 pages, 4 figures, 4 tables
2024 International Conference on Machine Learning and Applications (ICMLA), 2024, pp. 705-712
10.1109/ICMLA61862.2024.00102
null
cs.CR cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the mainstream integration of machine learning into security-sensitive domains such as healthcare and finance, concerns about data privacy have intensified. Conventional artificial neural networks (ANNs) have been found vulnerable to several attacks that can leak sensitive data. Particularly, model inversion (MI) attacks enable the reconstruction of data samples that have been used to train the model. Neuromorphic architectures have emerged as a paradigm shift in neural computing, enabling asynchronous and energy-efficient computation. However, little to no existing work has investigated the privacy of neuromorphic architectures against model inversion. Our study is motivated by the intuition that the non-differentiable aspect of spiking neural networks (SNNs) might result in inherent privacy-preserving properties, especially against gradient-based attacks. To investigate this hypothesis, we propose a thorough exploration of SNNs' privacy-preserving capabilities. Specifically, we develop novel inversion attack strategies that are comprehensively designed to target SNNs, offering a comparative analysis with their conventional ANN counterparts. Our experiments, conducted on diverse event-based and static datasets, demonstrate the effectiveness of the proposed attack strategies and therefore question the assumption of inherent privacy preservation in neuromorphic architectures.
[ { "version": "v1", "created": "Thu, 1 Feb 2024 03:16:40 GMT" }, { "version": "v2", "created": "Tue, 7 May 2024 05:53:46 GMT" } ]
2025-03-11T00:00:00
[ [ "Poursiami", "Hamed", "" ], [ "Alouani", "Ihsen", "" ], [ "Parsa", "Maryam", "" ] ]
TITLE: BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks ABSTRACT: With the mainstream integration of machine learning into security-sensitive domains such as healthcare and finance, concerns about data privacy have intensified. Conventional artificial neural networks (ANNs) have been found vulnerable to several attacks that can leak sensitive data. Particularly, model inversion (MI) attacks enable the reconstruction of data samples that have been used to train the model. Neuromorphic architectures have emerged as a paradigm shift in neural computing, enabling asynchronous and energy-efficient computation. However, little to no existing work has investigated the privacy of neuromorphic architectures against model inversion. Our study is motivated by the intuition that the non-differentiable aspect of spiking neural networks (SNNs) might result in inherent privacy-preserving properties, especially against gradient-based attacks. To investigate this hypothesis, we propose a thorough exploration of SNNs' privacy-preserving capabilities. Specifically, we develop novel inversion attack strategies that are comprehensively designed to target SNNs, offering a comparative analysis with their conventional ANN counterparts. Our experiments, conducted on diverse event-based and static datasets, demonstrate the effectiveness of the proposed attack strategies and therefore question the assumption of inherent privacy preservation in neuromorphic architectures.
no_new_dataset
0.947478
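The baseline being adapted here is classic gradient-based model inversion; a generic loop against a stand-in (untrained) ANN looks as follows. The SNN-specific strategies the paper develops would replace the plain gradients with surrogate ones; the regularizer and all sizes are our choices.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in target classifier
model.eval()
x = torch.zeros(1, 1, 28, 28, requires_grad=True)  # the reconstruction being optimized
opt = torch.optim.Adam([x], lr=0.1)
target = torch.tensor([3])                         # class whose training data we probe
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), target) + 1e-3 * x.abs().sum()
    loss.backward()
    opt.step()
    x.data.clamp_(0, 1)                            # keep the reconstruction image-like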
2402.07818
Zhihao Liu
Z Liu, J Lou, W Bao, Y Hu, B Li, Z Qin, K Ren
Differentially Private Zeroth-Order Methods for Scalable Large Language Model Finetuning
null
null
null
null
cs.LG cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fine-tuning on task-specific datasets is a widely-embraced paradigm of harnessing the powerful capability of pretrained LLMs for various downstream tasks. Due to the popularity of LLMs fine-tuning and its accompanying privacy concerns, differentially private (DP) fine-tuning of pretrained LLMs has been widely used to safeguard the privacy of task-specific datasets. Lying at the design core of DP LLM fine-tuning methods is the satisfactory tradeoff among privacy, utility, and scalability. Most existing methods build upon the seminal work of DP-SGD. Despite pushing the scalability of DP-SGD to its limit, DP-SGD-based fine-tuning methods are unfortunately limited by the inherent inefficiency of SGD. In this paper, we investigate the potential of DP zeroth-order methods for LLM fine-tuning, which avoid the scalability bottleneck of SGD by approximating the gradient with the more efficient zeroth-order gradient. Rather than treating the zeroth-order method as a drop-in replacement for SGD, this paper presents a comprehensive study both theoretically and empirically. First, we propose the stagewise DP zeroth-order method (DP-ZOSO) that dynamically schedules key hyperparameters. This design is grounded on the synergy between DP random perturbation and the gradient approximation error of the zeroth-order method, and its effect on fine-tuning trajectory. We provide theoretical analysis for both proposed methods. We conduct extensive empirical analysis on both encoder-only masked language model and decoder-only autoregressive language model, achieving impressive results in terms of scalability and utility regardless of the class of tasks (compared with DPZero, DP-ZOPO improves $4.5\%$ on SST-5, $5.5\%$ on MNLI with RoBERTa-Large and 9.2\% on CB, 3.9\% on BoolQ with OPT-2.7b when $\epsilon=4$, and demonstrates more significant enhancement in performance on more complicated tasks).
[ { "version": "v1", "created": "Mon, 12 Feb 2024 17:24:15 GMT" }, { "version": "v2", "created": "Wed, 21 Feb 2024 06:11:02 GMT" }, { "version": "v3", "created": "Wed, 8 May 2024 07:14:42 GMT" }, { "version": "v4", "created": "Thu, 9 May 2024 09:41:23 GMT" }, { "version": "v5", "created": "Mon, 2 Dec 2024 12:29:47 GMT" }, { "version": "v6", "created": "Mon, 10 Mar 2025 06:52:03 GMT" } ]
2025-03-11T00:00:00
[ [ "Liu", "Z", "" ], [ "Lou", "J", "" ], [ "Bao", "W", "" ], [ "Hu", "Y", "" ], [ "Li", "B", "" ], [ "Qin", "Z", "" ], [ "Ren", "K", "" ] ]
TITLE: Differentially Private Zeroth-Order Methods for Scalable Large Language Model Finetuning ABSTRACT: Fine-tuning on task-specific datasets is a widely-embraced paradigm of harnessing the powerful capability of pretrained LLMs for various downstream tasks. Due to the popularity of LLMs fine-tuning and its accompanying privacy concerns, differentially private (DP) fine-tuning of pretrained LLMs has been widely used to safeguard the privacy of task-specific datasets. Lying at the design core of DP LLM fine-tuning methods is the satisfactory tradeoff among privacy, utility, and scalability. Most existing methods build upon the seminal work of DP-SGD. Despite pushing the scalability of DP-SGD to its limit, DP-SGD-based fine-tuning methods are unfortunately limited by the inherent inefficiency of SGD. In this paper, we investigate the potential of DP zeroth-order methods for LLM fine-tuning, which avoid the scalability bottleneck of SGD by approximating the gradient with the more efficient zeroth-order gradient. Rather than treating the zeroth-order method as a drop-in replacement for SGD, this paper presents a comprehensive study both theoretically and empirically. First, we propose the stagewise DP zeroth-order method (DP-ZOSO) that dynamically schedules key hyperparameters. This design is grounded on the synergy between DP random perturbation and the gradient approximation error of the zeroth-order method, and its effect on fine-tuning trajectory. We provide theoretical analysis for both proposed methods. We conduct extensive empirical analysis on both encoder-only masked language model and decoder-only autoregressive language model, achieving impressive results in terms of scalability and utility regardless of the class of tasks (compared with DPZero, DP-ZOPO improves $4.5\%$ on SST-5, $5.5\%$ on MNLI with RoBERTa-Large and 9.2\% on CB, 3.9\% on BoolQ with OPT-2.7b when $\epsilon=4$, and demonstrates more significant enhancement in performance on more complicated tasks).
no_new_dataset
0.951953
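The building block shared by these methods is a two-point zeroth-order gradient estimate with clipping and Gaussian noise; in the sketch below a toy quadratic replaces the LLM loss, and the stagewise hyperparameter schedule of DP-ZOSO is omitted.

import numpy as np

rng = np.random.default_rng(0)

def loss(w):                                       # stand-in for the fine-tuning loss
    return float(np.sum((w - 1.0) ** 2))

w, mu, lr, sigma, clip = np.zeros(10), 1e-3, 0.1, 0.5, 1.0
for _ in range(500):
    u = rng.standard_normal(10)                    # random perturbation direction
    g = (loss(w + mu * u) - loss(w - mu * u)) / (2 * mu)  # directional derivative estimate
    g = np.clip(g, -clip, clip)                    # bound per-step sensitivity
    g_priv = (g + sigma * rng.standard_normal()) * u      # Gaussian DP noise, then project
    w -= lr * g_priv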
2402.09469
Zhenmei Shi
Chenyang Li, Yingyu Liang, Zhenmei Shi, Zhao Song, Tianyi Zhou
Fourier Circuits in Neural Networks and Transformers: A Case Study of Modular Arithmetic with Multiple Inputs
AIStats 2025
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
In the evolving landscape of machine learning, a pivotal challenge lies in deciphering the internal representations harnessed by neural networks and Transformers. Building on recent progress toward comprehending how networks execute distinct target functions, our study embarks on an exploration of the underlying reasons behind networks adopting specific computational strategies. We direct our focus to the complex algebraic learning task of modular addition involving $k$ inputs. Our research presents a thorough analytical characterization of the features learned by stylized one-hidden layer neural networks and one-layer Transformers in addressing this task. A cornerstone of our theoretical framework is the elucidation of how the principle of margin maximization shapes the features adopted by one-hidden layer neural networks. Let $p$ denote the modulus, $D_p$ denote the dataset of modular arithmetic with $k$ inputs and $m$ denote the network width. We demonstrate that with a neuron count of $ m \geq 2^{2k-2} \cdot (p-1) $, these networks attain a maximum $ L_{2,k+1} $-margin on the dataset $ D_p $. Furthermore, we establish that each hidden-layer neuron aligns with a specific Fourier spectrum, integral to solving modular addition problems. By correlating our findings with the empirical observations of similar studies, we contribute to a deeper comprehension of the intrinsic computational mechanisms of neural networks. Furthermore, we observe similar computational mechanisms in attention matrices of one-layer Transformers. Our work stands as a significant stride in unraveling their operation complexities, particularly in the realm of complex algebraic tasks.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 05:52:06 GMT" }, { "version": "v2", "created": "Fri, 24 May 2024 07:28:24 GMT" }, { "version": "v3", "created": "Wed, 16 Oct 2024 06:48:42 GMT" }, { "version": "v4", "created": "Sun, 9 Mar 2025 07:14:46 GMT" } ]
2025-03-11T00:00:00
[ [ "Li", "Chenyang", "" ], [ "Liang", "Yingyu", "" ], [ "Shi", "Zhenmei", "" ], [ "Song", "Zhao", "" ], [ "Zhou", "Tianyi", "" ] ]
TITLE: Fourier Circuits in Neural Networks and Transformers: A Case Study of Modular Arithmetic with Multiple Inputs ABSTRACT: In the evolving landscape of machine learning, a pivotal challenge lies in deciphering the internal representations harnessed by neural networks and Transformers. Building on recent progress toward comprehending how networks execute distinct target functions, our study embarks on an exploration of the underlying reasons behind networks adopting specific computational strategies. We direct our focus to the complex algebraic learning task of modular addition involving $k$ inputs. Our research presents a thorough analytical characterization of the features learned by stylized one-hidden layer neural networks and one-layer Transformers in addressing this task. A cornerstone of our theoretical framework is the elucidation of how the principle of margin maximization shapes the features adopted by one-hidden layer neural networks. Let $p$ denote the modulus, $D_p$ denote the dataset of modular arithmetic with $k$ inputs and $m$ denote the network width. We demonstrate that with a neuron count of $ m \geq 2^{2k-2} \cdot (p-1) $, these networks attain a maximum $ L_{2,k+1} $-margin on the dataset $ D_p $. Furthermore, we establish that each hidden-layer neuron aligns with a specific Fourier spectrum, integral to solving modular addition problems. By correlating our findings with the empirical observations of similar studies, we contribute to a deeper comprehension of the intrinsic computational mechanisms of neural networks. Furthermore, we observe similar computational mechanisms in attention matrices of one-layer Transformers. Our work stands as a significant stride in unraveling their operation complexities, particularly in the realm of complex algebraic tasks.
no_new_dataset
0.94366
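The claimed Fourier structure is easy to check numerically for two inputs: a classifier whose class-c logit sums cosines of (a + b - c) over a handful of frequencies solves modular addition exactly, since the sum peaks only when a + b - c is 0 mod p. The modulus and frequency count below are our choices.

import numpy as np

p, F = 97, 10
a, b = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")

def logits(a, b):
    c = np.arange(p)
    s = a[..., None] + b[..., None] - c            # (p, p, p) grid of a + b - c
    return sum(np.cos(2 * np.pi * f * s / p) for f in range(1, F + 1))

pred = logits(a, b).argmax(-1)
print((pred == (a + b) % p).mean())                # 1.0: every input pair is correct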
2402.11345
Nuojin Cheng
Nuojin Cheng and Stephen Becker
Variational Entropy Search for Adjusting Expected Improvement
This is a preliminary technical report. For a more comprehensive study, please refer to arXiv:2501.18756
null
null
null
stat.ML cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian optimization is a widely used technique for optimizing black-box functions, with Expected Improvement (EI) being the most commonly utilized acquisition function in this domain. While EI is often viewed as distinct from other information-theoretic acquisition functions, such as entropy search (ES) and max-value entropy search (MES), our work reveals that EI can be considered a special case of MES when approached through variational inference (VI). In this context, we have developed the Variational Entropy Search (VES) methodology and the VES-Gamma algorithm, which adapts EI by incorporating principles from information-theoretic concepts. The efficacy of VES-Gamma is demonstrated across a variety of test functions and real datasets, highlighting its theoretical and practical utilities in Bayesian optimization scenarios.
[ { "version": "v1", "created": "Sat, 17 Feb 2024 17:37:53 GMT" }, { "version": "v2", "created": "Sun, 9 Mar 2025 15:29:40 GMT" } ]
2025-03-11T00:00:00
[ [ "Cheng", "Nuojin", "" ], [ "Becker", "Stephen", "" ] ]
TITLE: Variational Entropy Search for Adjusting Expected Improvement ABSTRACT: Bayesian optimization is a widely used technique for optimizing black-box functions, with Expected Improvement (EI) being the most commonly utilized acquisition function in this domain. While EI is often viewed as distinct from other information-theoretic acquisition functions, such as entropy search (ES) and max-value entropy search (MES), our work reveals that EI can be considered a special case of MES when approached through variational inference (VI). In this context, we have developed the Variational Entropy Search (VES) methodology and the VES-Gamma algorithm, which adapts EI by incorporating principles from information-theoretic concepts. The efficacy of VES-Gamma is demonstrated across a variety of test functions and real datasets, highlighting its theoretical and practical utilities in Bayesian optimization scenarios.
no_new_dataset
0.945349
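For reference, the acquisition function being recast is just a closed-form expectation under the Gaussian posterior; the candidate means and variances below are made up.

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    z = (mu - best) / np.maximum(sigma, 1e-12)
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

mu = np.array([0.2, 0.5, 0.9])                     # posterior means at candidate points
sigma = np.array([0.3, 0.1, 0.4])                  # posterior standard deviations
print(expected_improvement(mu, sigma, best=0.6))   # query the argmax next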
2402.12767
Zijian Li
Zijian Li, Ruichu Cai, Zhenhui Yang, Haiqin Huang, Guangyi Chen, Yifan Shen, Zhengming Chen, Xiangchen Song, Kun Zhang
Nonstationary Time Series Forecasting via Unknown Distribution Adaptation
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As environments evolve, temporal distribution shifts can degrade time series forecasting performance. A straightforward solution is to adapt to nonstationary changes while preserving stationary dependencies. Hence, some methods disentangle stationary and nonstationary components by assuming uniform distribution shifts, but this is impractical since it is unknown when the distribution changes. To address this challenge, we propose the \textbf{U}nknown \textbf{D}istribution \textbf{A}daptation (\textbf{UDA}) model for nonstationary time series forecasting, which detects when distribution shifts occur and disentangles stationary/nonstationary latent variables, thus enabling adaptation to unknown distribution shifts without assuming they are uniform. Specifically, under a Hidden Markov assumption of latent environments, we demonstrate that the latent environments are identifiable. Subsequently, we further disentangle stationary/nonstationary latent variables by leveraging the variability of historical information. Based on these theoretical results, we propose a variational autoencoder-based model, which incorporates an autoregressive hidden Markov model to estimate latent environments. Additionally, we further devise the modular prior networks to disentangle stationary/nonstationary latent variables. These two modules realize automatic adaptation and enhance nonstationary forecasting performance. Experimental results on several datasets validate the effectiveness of our approach.
[ { "version": "v1", "created": "Tue, 20 Feb 2024 07:16:12 GMT" }, { "version": "v2", "created": "Sat, 13 Apr 2024 20:03:26 GMT" }, { "version": "v3", "created": "Fri, 7 Jun 2024 11:11:31 GMT" }, { "version": "v4", "created": "Mon, 10 Mar 2025 02:11:51 GMT" } ]
2025-03-11T00:00:00
[ [ "Li", "Zijian", "" ], [ "Cai", "Ruichu", "" ], [ "Yang", "Zhenhui", "" ], [ "Huang", "Haiqin", "" ], [ "Chen", "Guangyi", "" ], [ "Shen", "Yifan", "" ], [ "Chen", "Zhengming", "" ], [ "Song", "Xiangchen", "" ], [ "Zhang", "Kun", "" ] ]
TITLE: Nonstationary Time Series Forecasting via Unknown Distribution Adaptation ABSTRACT: As environments evolve, temporal distribution shifts can degrade time series forecasting performance. A straightforward solution is to adapt to nonstationary changes while preserving stationary dependencies. Hence, some methods disentangle stationary and nonstationary components by assuming uniform distribution shifts, but this assumption is impractical because when distribution shifts occur is unknown. To address this challenge, we propose the \textbf{U}nknown \textbf{D}istribution \textbf{A}daptation (\textbf{UDA}) model for nonstationary time series forecasting, which detects when distribution shifts occur and disentangles stationary/nonstationary latent variables, thus enabling adaptation to unknown distribution shifts without assuming a uniform shift. Specifically, under a Hidden Markov assumption on latent environments, we demonstrate that the latent environments are identifiable. Subsequently, we disentangle stationary/nonstationary latent variables by leveraging the variability of historical information. Based on these theoretical results, we propose a variational autoencoder-based model that incorporates an autoregressive hidden Markov model to estimate latent environments. Additionally, we devise modular prior networks to disentangle stationary/nonstationary latent variables. These two modules realize automatic adaptation and enhance nonstationary forecasting performance. Experimental results on several datasets validate the effectiveness of our approach.
no_new_dataset
0.942507
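As a hedged illustration of the Hidden Markov machinery the UDA abstract invokes, the sketch below runs the standard forward algorithm to track posteriors over discrete latent environments. The transition and emission matrices are toy values, not the paper's learned autoregressive HMM, which is trained jointly with a variational autoencoder.

```python
# Toy sketch of the HMM forward algorithm, illustrating how posteriors over
# latent "environments" can be tracked across time steps under a Hidden
# Markov assumption. The matrices are illustrative, not the paper's model.
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Filtered state posteriors p(z_t | x_{1:t}).

    pi  : (K,) initial state distribution
    A   : (K, K) transition matrix, A[i, j] = p(z_t=j | z_{t-1}=i)
    B   : (K, M) emission matrix, B[k, m] = p(x_t=m | z_t=k)
    obs : sequence of observation indices
    """
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    posteriors = [alpha]
    for x in obs[1:]:
        alpha = (alpha @ A) * B[:, x]
        alpha /= alpha.sum()                  # normalize for stability
        posteriors.append(alpha)
    return np.stack(posteriors)

# Two latent environments, three discrete observation symbols.
pi = np.array([0.5, 0.5])
A = np.array([[0.95, 0.05], [0.10, 0.90]])   # environments are "sticky"
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
print(hmm_forward(pi, A, B, obs=[0, 0, 2, 2, 2]))
```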
2402.15183
Zirui Guo
Zirui Guo, Lianghao Xia, Yanhua Yu, Yuling Wang, Kangkang Lu, Zhiyong Huang, Chao Huang
GraphEdit: Large Language Models for Graph Structure Learning
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Structure Learning (GSL) focuses on capturing intrinsic dependencies and interactions among nodes in graph-structured data by generating novel graph structures. Graph Neural Networks (GNNs) have emerged as promising GSL solutions, utilizing recursive message passing to encode node-wise inter-dependencies. However, many existing GSL methods heavily depend on explicit graph structural information as supervision signals, leaving them susceptible to challenges such as data noise and sparsity. In this work, we propose GraphEdit, an approach that leverages large language models (LLMs) to learn complex node relationships in graph-structured data. By enhancing the reasoning capabilities of LLMs through instruction-tuning over graph structures, we aim to overcome the limitations associated with explicit graph structural information and enhance the reliability of graph structure learning. Our approach not only effectively denoises noisy connections but also identifies node-wise dependencies from a global perspective, providing a comprehensive understanding of the graph structure. We conduct extensive experiments on multiple benchmark datasets to demonstrate the effectiveness and robustness of GraphEdit across various settings. We have made our model implementation available at: https://github.com/HKUDS/GraphEdit.
[ { "version": "v1", "created": "Fri, 23 Feb 2024 08:29:42 GMT" }, { "version": "v2", "created": "Tue, 27 Feb 2024 08:22:11 GMT" }, { "version": "v3", "created": "Thu, 29 Feb 2024 04:15:44 GMT" }, { "version": "v4", "created": "Tue, 5 Mar 2024 05:22:00 GMT" }, { "version": "v5", "created": "Mon, 10 Mar 2025 14:04:39 GMT" } ]
2025-03-11T00:00:00
[ [ "Guo", "Zirui", "" ], [ "Xia", "Lianghao", "" ], [ "Yu", "Yanhua", "" ], [ "Wang", "Yuling", "" ], [ "Lu", "Kangkang", "" ], [ "Huang", "Zhiyong", "" ], [ "Huang", "Chao", "" ] ]
TITLE: GraphEdit: Large Language Models for Graph Structure Learning ABSTRACT: Graph Structure Learning (GSL) focuses on capturing intrinsic dependencies and interactions among nodes in graph-structured data by generating novel graph structures. Graph Neural Networks (GNNs) have emerged as promising GSL solutions, utilizing recursive message passing to encode node-wise inter-dependencies. However, many existing GSL methods heavily depend on explicit graph structural information as supervision signals, leaving them susceptible to challenges such as data noise and sparsity. In this work, we propose GraphEdit, an approach that leverages large language models (LLMs) to learn complex node relationships in graph-structured data. By enhancing the reasoning capabilities of LLMs through instruction-tuning over graph structures, we aim to overcome the limitations associated with explicit graph structural information and enhance the reliability of graph structure learning. Our approach not only effectively denoises noisy connections but also identifies node-wise dependencies from a global perspective, providing a comprehensive understanding of the graph structure. We conduct extensive experiments on multiple benchmark datasets to demonstrate the effectiveness and robustness of GraphEdit across various settings. We have made our model implementation available at: https://github.com/HKUDS/GraphEdit.
no_new_dataset
0.951369
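A hypothetical sketch of the edge-vetting loop the GraphEdit abstract describes: an instruction-tuned LLM is asked whether a candidate edge between two text-attributed nodes should exist. The prompt template and the `query_llm` placeholder are assumptions for illustration only; the authors' actual implementation is at the linked repository.

```python
# Hypothetical sketch of LLM-based edge denoising in the spirit of GraphEdit.
# The prompt template and query_llm() placeholder are assumptions, not the
# released code (see https://github.com/HKUDS/GraphEdit for the real thing).

def build_edge_prompt(text_a: str, text_b: str) -> str:
    # Assumed instruction format; the paper instruction-tunes the LLM on
    # graph structures rather than using a fixed zero-shot prompt.
    return (
        "You are refining a graph structure.\n"
        f"Node A: {text_a}\nNode B: {text_b}\n"
        "Should these two nodes be connected? Answer 'yes' or 'no'."
    )

def query_llm(prompt: str) -> str:
    # Placeholder: route to any instruction-tuned LLM endpoint.
    raise NotImplementedError

def denoise_edges(edges, node_texts):
    """Keep only the candidate edges the LLM judges plausible."""
    kept = []
    for u, v in edges:
        answer = query_llm(build_edge_prompt(node_texts[u], node_texts[v]))
        if answer.strip().lower().startswith("yes"):
            kept.append((u, v))
    return kept
```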