Dataset schema (field: type)
id: string
submitter: string
authors: string
title: string
comments: string
journal-ref: string
doi: string
report-no: string
categories: string
license: string
abstract: string
versions: list
update_date: timestamp[s]
authors_parsed: sequence
prompt: string
2503.13053
Nassim Ali Ousalah
Nassim Ali Ousalah, Anis Kacem, Enjie Ghorbel, Emmanuel Koumandakis and Djamila Aouada
Uncertainty-Aware Knowledge Distillation for Compact and Efficient 6DoF Pose Estimation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Compact and efficient 6DoF object pose estimation is crucial in applications such as robotics, augmented reality, and autonomous space navigation systems, where lightweight models are critical for accurate real-time performance. This paper introduces a novel uncertainty-aware end-to-end Knowledge Distillation (KD) framework focused on keypoint-based 6DoF pose estimation. Keypoints predicted by a large teacher model exhibit varying levels of uncertainty that can be exploited within the distillation process to enhance the accuracy of the student model while ensuring its compactness. To this end, we propose a distillation strategy that aligns the student and teacher predictions by adjusting the knowledge transfer based on the uncertainty associated with each teacher keypoint prediction. Additionally, the proposed KD leverages this uncertainty-aware alignment of keypoints to transfer the knowledge at key locations of their respective feature maps. Experiments on the widely used LINEMOD benchmark demonstrate the effectiveness of our method, achieving superior 6DoF object pose estimation with lightweight models compared to state-of-the-art approaches. Further validation on the SPEED+ dataset for spacecraft pose estimation highlights the robustness of our approach under diverse 6DoF pose estimation scenarios.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 10:56:30 GMT" } ]
2025-03-18T00:00:00
[ [ "Ousalah", "Nassim Ali", "" ], [ "Kacem", "Anis", "" ], [ "Ghorbel", "Enjie", "" ], [ "Koumandakis", "Emmanuel", "" ], [ "Aouada", "Djamila", "" ] ]
TITLE: Uncertainty-Aware Knowledge Distillation for Compact and Efficient 6DoF Pose Estimation ABSTRACT: Compact and efficient 6DoF object pose estimation is crucial in applications such as robotics, augmented reality, and autonomous space navigation systems, where lightweight models are critical for accurate real-time performance. This paper introduces a novel uncertainty-aware end-to-end Knowledge Distillation (KD) framework focused on keypoint-based 6DoF pose estimation. Keypoints predicted by a large teacher model exhibit varying levels of uncertainty that can be exploited within the distillation process to enhance the accuracy of the student model while ensuring its compactness. To this end, we propose a distillation strategy that aligns the student and teacher predictions by adjusting the knowledge transfer based on the uncertainty associated with each teacher keypoint prediction. Additionally, the proposed KD leverages this uncertainty-aware alignment of keypoints to transfer the knowledge at key locations of their respective feature maps. Experiments on the widely used LINEMOD benchmark demonstrate the effectiveness of our method, achieving superior 6DoF object pose estimation with lightweight models compared to state-of-the-art approaches. Further validation on the SPEED+ dataset for spacecraft pose estimation highlights the robustness of our approach under diverse 6DoF pose estimation scenarios.
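The uncertainty-weighted alignment described above can be pictured with a short sketch. Below is a minimal PyTorch illustration of a distillation loss that down-weights keypoints the teacher is unsure about; the tensor shapes, the inverse-variance weighting, and all names are assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch of uncertainty-weighted keypoint distillation (assumed interface).
import torch

def uncertainty_weighted_kd_loss(student_kpts, teacher_kpts, teacher_sigma, eps=1e-6):
    """student_kpts, teacher_kpts: (B, K, 2) keypoint coordinates;
    teacher_sigma: (B, K) per-keypoint predicted standard deviations (assumption)."""
    per_kpt = ((student_kpts - teacher_kpts) ** 2).sum(dim=-1)  # (B, K) squared error
    weights = 1.0 / (teacher_sigma ** 2 + eps)                  # trust confident keypoints more
    weights = weights / weights.sum(dim=-1, keepdim=True)       # normalize weights per sample
    return (weights * per_kpt).sum(dim=-1).mean()               # scalar distillation loss
```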
2503.13055
Yu-Hong Shen
Yu-Hong Shen, Chuan-Yu Wu, Yi-Ru Yang, Yen-Ling Tai, Yi-Ting Chen
Mitigating Cross-Modal Distraction and Ensuring Geometric Feasibility via Affordance-Guided, Self-Consistent MLLMs for Food Preparation Task Planning
null
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
We study Multimodal Large Language Models (MLLMs) with in-context learning for food preparation task planning. In this context, we identify two key challenges: cross-modal distraction and geometric feasibility. Cross-modal distraction occurs when the inclusion of visual input degrades the reasoning performance of an MLLM. Geometric feasibility refers to the ability of MLLMs to ensure that the selected skills are physically executable in the environment. To address these issues, we adapt Chain of Thought (CoT) with Self-Consistency to mitigate reasoning loss from cross-modal distractions and use an affordance predictor as skill preconditions to guide the MLLM on geometric feasibility. We construct a dataset to evaluate the ability of MLLMs on quantity estimation, reachability analysis, relative positioning, and collision avoidance. We conduct a detailed evaluation to identify issues among different baselines and analyze the reasons for improvement, providing insights into each approach. Our method reaches a success rate of 76.7% on the entire dataset, a substantial improvement over the CoT baseline at 36.7%.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 11:01:02 GMT" } ]
2025-03-18T00:00:00
[ [ "Shen", "Yu-Hong", "" ], [ "Wu", "Chuan-Yu", "" ], [ "Yang", "Yi-Ru", "" ], [ "Tai", "Yen-Ling", "" ], [ "Chen", "Yi-Ting", "" ] ]
TITLE: Mitigating Cross-Modal Distraction and Ensuring Geometric Feasibility via Affordance-Guided, Self-Consistent MLLMs for Food Preparation Task Planning ABSTRACT: We study Multimodal Large Language Models (MLLMs) with in-context learning for food preparation task planning. In this context, we identify two key challenges: cross-modal distraction and geometric feasibility. Cross-modal distraction occurs when the inclusion of visual input degrades the reasoning performance of an MLLM. Geometric feasibility refers to the ability of MLLMs to ensure that the selected skills are physically executable in the environment. To address these issues, we adapt Chain of Thought (CoT) with Self-Consistency to mitigate reasoning loss from cross-modal distractions and use an affordance predictor as skill preconditions to guide the MLLM on geometric feasibility. We construct a dataset to evaluate the ability of MLLMs on quantity estimation, reachability analysis, relative positioning, and collision avoidance. We conduct a detailed evaluation to identify issues among different baselines and analyze the reasons for improvement, providing insights into each approach. Our method reaches a success rate of 76.7% on the entire dataset, a substantial improvement over the CoT baseline at 36.7%.
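A quick sketch of the Self-Consistency idea combined with affordance preconditions: sample several reasoning paths, discard skills that fail the feasibility check, and majority-vote over the survivors. The `query_fn`/`affordance_ok` interface below is hypothetical, not the paper's API.

```python
from collections import Counter

def self_consistent_plan(query_fn, prompt, affordance_ok, n_samples=5):
    """query_fn(prompt) -> candidate skill string, e.g. an MLLM sampled with
    temperature > 0; affordance_ok(skill) -> bool gates geometric feasibility."""
    votes = [query_fn(prompt) for _ in range(n_samples)]
    feasible = [s for s in votes if affordance_ok(s)]   # drop physically infeasible skills
    if not feasible:
        return None                                     # no executable skill survived the check
    return Counter(feasible).most_common(1)[0][0]       # majority vote over feasible samples
```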
2503.13057
Robin Zbinden
Robin Zbinden, Nina van Tiel, Gencer Sumbul, Chiara Vanalli, Benjamin Kellenberger, Devis Tuia
MaskSDM with Shapley values to improve flexibility, robustness, and explainability in species distribution modeling
null
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Species Distribution Models (SDMs) play a vital role in biodiversity research, conservation planning, and ecological niche modeling by predicting species distributions based on environmental conditions. The selection of predictors is crucial, strongly impacting both model accuracy and how well the predictions reflect ecological patterns. To ensure meaningful insights, input variables must be carefully chosen to match the study objectives and the ecological requirements of the target species. However, existing SDMs, including both traditional and deep learning-based approaches, often lack key capabilities for variable selection: (i) flexibility to choose relevant predictors at inference without retraining; (ii) robustness to handle missing predictor values without compromising accuracy; and (iii) explainability to interpret and accurately quantify each predictor's contribution. To overcome these limitations, we introduce MaskSDM, a novel deep learning-based SDM that enables flexible predictor selection by employing a masked training strategy. This approach allows the model to make predictions with arbitrary subsets of input variables while remaining robust to missing data. It also provides a clearer understanding of how adding or removing a given predictor affects model performance and predictions. Additionally, MaskSDM leverages Shapley values for precise predictor contribution assessments, improving upon traditional approximations. We evaluate MaskSDM on the global sPlotOpen dataset, modeling the distributions of 12,738 plant species. Our results show that MaskSDM outperforms imputation-based methods and approximates models trained on specific subsets of variables. These findings underscore MaskSDM's potential to increase the applicability and adoption of SDMs, laying the groundwork for developing foundation models in SDMs that can be readily applied to diverse ecological applications.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 11:02:28 GMT" } ]
2025-03-18T00:00:00
[ [ "Zbinden", "Robin", "" ], [ "van Tiel", "Nina", "" ], [ "Sumbul", "Gencer", "" ], [ "Vanalli", "Chiara", "" ], [ "Kellenberger", "Benjamin", "" ], [ "Tuia", "Devis", "" ] ]
TITLE: MaskSDM with Shapley values to improve flexibility, robustness, and explainability in species distribution modeling ABSTRACT: Species Distribution Models (SDMs) play a vital role in biodiversity research, conservation planning, and ecological niche modeling by predicting species distributions based on environmental conditions. The selection of predictors is crucial, strongly impacting both model accuracy and how well the predictions reflect ecological patterns. To ensure meaningful insights, input variables must be carefully chosen to match the study objectives and the ecological requirements of the target species. However, existing SDMs, including both traditional and deep learning-based approaches, often lack key capabilities for variable selection: (i) flexibility to choose relevant predictors at inference without retraining; (ii) robustness to handle missing predictor values without compromising accuracy; and (iii) explainability to interpret and accurately quantify each predictor's contribution. To overcome these limitations, we introduce MaskSDM, a novel deep learning-based SDM that enables flexible predictor selection by employing a masked training strategy. This approach allows the model to make predictions with arbitrary subsets of input variables while remaining robust to missing data. It also provides a clearer understanding of how adding or removing a given predictor affects model performance and predictions. Additionally, MaskSDM leverages Shapley values for precise predictor contribution assessments, improving upon traditional approximations. We evaluate MaskSDM on the global sPlotOpen dataset, modeling the distributions of 12,738 plant species. Our results show that MaskSDM outperforms imputation-based methods and approximates models trained on specific subsets of variables. These findings underscore MaskSDM's potential to increase the applicability and adoption of SDMs, laying the groundwork for developing foundation models in SDMs that can be readily applied to diverse ecological applications.
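The masked training strategy at the core of MaskSDM can be sketched in a few lines: randomly replace a subset of input predictors with a learned mask token so the model remains usable with arbitrary predictor subsets (or missing values) at inference. Shapes, the masking rate, and the mask-token mechanism below are illustrative assumptions.

```python
import torch

def mask_predictors(x, mask_token, p_mask=0.5):
    """x: (B, D) environmental predictors; mask_token: (D,) learned embedding.
    Randomly hides predictors during training so the model learns to predict
    from arbitrary variable subsets (a sketch, not the MaskSDM implementation)."""
    mask = torch.rand_like(x) < p_mask                       # which entries to hide
    x_masked = torch.where(mask, mask_token.expand_as(x), x) # swap in the mask token
    return x_masked, mask
```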
2503.13058
Zeyi Huang
Zeyi Huang, Utkarsh Ojha, Yuyang Ji, Donghyun Lee, Yong Jae Lee
Do Vision Models Develop Human-Like Progressive Difficulty Understanding?
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When a human undertakes a test, their responses likely follow a pattern: if they answered an easy question $(2 \times 3)$ incorrectly, they would likely answer a more difficult one $(2 \times 3 \times 4)$ incorrectly; and if they answered a difficult question correctly, they would likely answer the easy one correctly. Anything else hints at memorization. Do current visual recognition models exhibit a similarly structured learning capacity? In this work, we consider the task of image classification and study whether those models' responses follow that pattern. Since real images aren't labeled with difficulty, we first create a dataset of 100 categories, 10 attributes, and 3 difficulty levels using recent generative models: for each category (e.g., dog) and attribute (e.g., occlusion), we generate images of increasing difficulty (e.g., a dog without occlusion, a dog only partly visible). We find that most of the models do in fact behave in line with the aforementioned pattern around 80-90% of the time. Using this property, we then explore a new way to evaluate those models. Instead of testing the model on every possible test image, we create an adaptive test akin to the GRE, in which the model's performance on the current round of images determines the test images in the next round. This allows the model to skip over questions too easy or too hard for itself, and helps us estimate its overall performance in fewer steps.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 11:02:53 GMT" } ]
2025-03-18T00:00:00
[ [ "Huang", "Zeyi", "" ], [ "Ojha", "Utkarsh", "" ], [ "Ji", "Yuyang", "" ], [ "Lee", "Donghyun", "" ], [ "Lee", "Yong Jae", "" ] ]
TITLE: Do Vision Models Develop Human-Like Progressive Difficulty Understanding? ABSTRACT: When a human undertakes a test, their responses likely follow a pattern: if they answered an easy question $(2 \times 3)$ incorrectly, they would likely answer a more difficult one $(2 \times 3 \times 4)$ incorrectly; and if they answered a difficult question correctly, they would likely answer the easy one correctly. Anything else hints at memorization. Do current visual recognition models exhibit a similarly structured learning capacity? In this work, we consider the task of image classification and study whether those models' responses follow that pattern. Since real images aren't labeled with difficulty, we first create a dataset of 100 categories, 10 attributes, and 3 difficulty levels using recent generative models: for each category (e.g., dog) and attribute (e.g., occlusion), we generate images of increasing difficulty (e.g., a dog without occlusion, a dog only partly visible). We find that most of the models do in fact behave in line with the aforementioned pattern around 80-90% of the time. Using this property, we then explore a new way to evaluate those models. Instead of testing the model on every possible test image, we create an adaptive test akin to the GRE, in which the model's performance on the current round of images determines the test images in the next round. This allows the model to skip over questions too easy or too hard for itself, and helps us estimate its overall performance in fewer steps.
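The GRE-like adaptive test can be illustrated with a simple staircase procedure: a correct answer moves the model to harder images, an incorrect one to easier images. This is a generic sketch of adaptive testing under assumed difficulty levels, not the authors' exact protocol.

```python
def adaptive_test(model_correct, levels=(0, 1, 2), rounds=10, start=1):
    """model_correct(level) -> bool: whether the model classifies an image sampled
    at that difficulty level correctly (hypothetical callable)."""
    level, history = start, []
    for _ in range(rounds):
        ok = model_correct(level)
        history.append((level, ok))
        # move to harder images after a success, easier after a failure
        level = min(level + 1, max(levels)) if ok else max(level - 1, min(levels))
    return history  # the trajectory summarizes performance in few steps
```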
2503.13063
Zheng Wang
Zheng Wang, Zihui Wang, Zheng Wang, Xiaoliang Fan, Cheng Wang
Federated Learning with Domain Shift Eraser
Accepted by CVPR2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Federated learning (FL) is emerging as a promising technique for collaborative learning without local data leaving their devices. However, clients' data originating from diverse domains may degrade model performance due to domain shifts, preventing the model from learning a consistent representation space. In this paper, we propose a novel FL framework, Federated Domain Shift Eraser (FDSE), to improve model performance by differently erasing each client's domain skew and enhancing their consensus. First, we formulate the model's forward pass as an iterative deskewing process that alternately extracts and then deskews features. This is efficiently achieved by decomposing each original layer in the neural network into a Domain-agnostic Feature Extractor (DFE) and a Domain-specific Skew Eraser (DSE). Then, a regularization term is applied to ensure the effectiveness of feature deskewing by pulling local statistics of DSE's outputs close to the globally consistent ones. Finally, DFE modules are fairly aggregated and broadcast to all the clients to maximize their consensus, and DSE modules are personalized for each client via similarity-aware aggregation to erase their domain skew differently. Comprehensive experiments were conducted on three datasets to confirm the advantages of our method in terms of accuracy, efficiency, and generalizability.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 11:10:31 GMT" } ]
2025-03-18T00:00:00
[ [ "Wang", "Zheng", "" ], [ "Wang", "Zihui", "" ], [ "Wang", "Zheng", "" ], [ "Fan", "Xiaoliang", "" ], [ "Wang", "Cheng", "" ] ]
TITLE: Federated Learning with Domain Shift Eraser ABSTRACT: Federated learning (FL) is emerging as a promising technique for collaborative learning without local data leaving their devices. However, clients' data originating from diverse domains may degrade model performance due to domain shifts, preventing the model from learning a consistent representation space. In this paper, we propose a novel FL framework, Federated Domain Shift Eraser (FDSE), to improve model performance by differently erasing each client's domain skew and enhancing their consensus. First, we formulate the model's forward pass as an iterative deskewing process that alternately extracts and then deskews features. This is efficiently achieved by decomposing each original layer in the neural network into a Domain-agnostic Feature Extractor (DFE) and a Domain-specific Skew Eraser (DSE). Then, a regularization term is applied to ensure the effectiveness of feature deskewing by pulling local statistics of DSE's outputs close to the globally consistent ones. Finally, DFE modules are fairly aggregated and broadcast to all the clients to maximize their consensus, and DSE modules are personalized for each client via similarity-aware aggregation to erase their domain skew differently. Comprehensive experiments were conducted on three datasets to confirm the advantages of our method in terms of accuracy, efficiency, and generalizability.
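The layer decomposition into a DFE and a DSE can be sketched in PyTorch as follows; the choice of a 3x3 convolution for the DFE and a 1x1 convolution with normalization for the DSE is an assumption for illustration, not the FDSE architecture.

```python
import torch.nn as nn

class DeskewedLayer(nn.Module):
    """Illustrative split of one layer into a shared Domain-agnostic Feature
    Extractor (aggregated across clients) and a personalized Domain-specific
    Skew Eraser (kept per client via similarity-aware aggregation)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.dfe = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)  # shared: extract features
        self.dse = nn.Sequential(                                    # personalized: erase skew
            nn.Conv2d(c_out, c_out, kernel_size=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.dse(self.dfe(x))  # extract, then deskew, alternately per layer
```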
2503.13068
Henghui Du
Henghui Du, Guangyao Li, Chang Zhou, Chunjie Zhang, Alan Zhao, Di Hu
Crab: A Unified Audio-Visual Scene Understanding Model with Explicit Cooperation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In recent years, numerous tasks have been proposed to encourage models to develop specific capabilities in audio-visual scene understanding, primarily categorized into temporal localization, spatial localization, spatio-temporal reasoning, and pixel-level understanding. In contrast, humans possess a unified understanding ability across diverse tasks. Therefore, designing an audio-visual model with the general capability to unify these tasks is of great value. However, simply training jointly on all tasks can lead to interference due to the heterogeneity of audio-visual data and the complex relationships among tasks. We argue that this problem can be solved through explicit cooperation among tasks. To achieve this goal, we propose a unified learning method which achieves explicit inter-task cooperation from the perspectives of both data and model. Specifically, considering that the labels of existing datasets are simple words, we carefully refine these datasets and construct an Audio-Visual Unified Instruction-tuning dataset with Explicit reasoning process (AV-UIE), which clarifies the cooperative relationship among tasks. Subsequently, to facilitate concrete cooperation in the learning stage, an interaction-aware LoRA structure with multiple LoRA heads is designed to learn different aspects of audio-visual data interaction. By unifying the explicit cooperation across the data and model aspects, our method not only surpasses existing unified audio-visual models on multiple tasks, but also outperforms most specialized models for certain tasks. Furthermore, we visualize the process of explicit cooperation and, surprisingly, find that each LoRA head has a certain audio-visual understanding ability. Code and dataset: https://github.com/GeWu-Lab/Crab
[ { "version": "v1", "created": "Mon, 17 Mar 2025 11:19:03 GMT" } ]
2025-03-18T00:00:00
[ [ "Du", "Henghui", "" ], [ "Li", "Guangyao", "" ], [ "Zhou", "Chang", "" ], [ "Zhang", "Chunjie", "" ], [ "Zhao", "Alan", "" ], [ "Hu", "Di", "" ] ]
TITLE: Crab: A Unified Audio-Visual Scene Understanding Model with Explicit Cooperation ABSTRACT: In recent years, numerous tasks have been proposed to encourage models to develop specific capabilities in audio-visual scene understanding, primarily categorized into temporal localization, spatial localization, spatio-temporal reasoning, and pixel-level understanding. In contrast, humans possess a unified understanding ability across diverse tasks. Therefore, designing an audio-visual model with the general capability to unify these tasks is of great value. However, simply training jointly on all tasks can lead to interference due to the heterogeneity of audio-visual data and the complex relationships among tasks. We argue that this problem can be solved through explicit cooperation among tasks. To achieve this goal, we propose a unified learning method which achieves explicit inter-task cooperation from the perspectives of both data and model. Specifically, considering that the labels of existing datasets are simple words, we carefully refine these datasets and construct an Audio-Visual Unified Instruction-tuning dataset with Explicit reasoning process (AV-UIE), which clarifies the cooperative relationship among tasks. Subsequently, to facilitate concrete cooperation in the learning stage, an interaction-aware LoRA structure with multiple LoRA heads is designed to learn different aspects of audio-visual data interaction. By unifying the explicit cooperation across the data and model aspects, our method not only surpasses existing unified audio-visual models on multiple tasks, but also outperforms most specialized models for certain tasks. Furthermore, we visualize the process of explicit cooperation and, surprisingly, find that each LoRA head has a certain audio-visual understanding ability. Code and dataset: https://github.com/GeWu-Lab/Crab
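The interaction-aware LoRA structure can be approximated by attaching several low-rank adapter heads to a frozen linear layer, each head free to specialize in one aspect of audio-visual interaction. The sketch below is a generic multi-head LoRA; the rank, scaling, and initialization are assumptions, not the Crab implementation.

```python
import torch
import torch.nn as nn

class MultiHeadLoRALinear(nn.Module):
    """A frozen base linear layer plus several low-rank heads (generic sketch)."""
    def __init__(self, base: nn.Linear, n_heads=4, rank=8, alpha=16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # keep pretrained weights frozen
        d_in, d_out = base.in_features, base.out_features
        self.A = nn.ParameterList(nn.Parameter(torch.randn(rank, d_in) * 0.01)
                                  for _ in range(n_heads))
        self.B = nn.ParameterList(nn.Parameter(torch.zeros(d_out, rank))
                                  for _ in range(n_heads))  # zero-init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        out = self.base(x)
        for A, B in zip(self.A, self.B):            # each head learns one interaction aspect
            out = out + self.scale * (x @ A.T @ B.T)
        return out
```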
2503.13073
Zhicheng Zhao
Zhicheng Zhao and Jinquan Yan and Chenglong Li and Xiao Wang and Jin Tang
DehazeMamba: SAR-guided Optical Remote Sensing Image Dehazing with Adaptive State Space Model
null
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optical remote sensing image dehazing presents significant challenges due to its extensive spatial scale and highly non-uniform haze distribution, which traditional single-image dehazing methods struggle to address effectively. While Synthetic Aperture Radar (SAR) imagery offers inherently haze-free reference information for large-scale scenes, existing SAR-guided dehazing approaches face two critical limitations: the integration of SAR information often diminishes the quality of haze-free regions, and the instability of feature quality further exacerbates cross-modal domain shift. To overcome these challenges, we introduce DehazeMamba, a novel SAR-guided dehazing network built on a progressive haze decoupling fusion strategy. Our approach incorporates two key innovations: a Haze Perception and Decoupling Module (HPDM) that dynamically identifies haze-affected regions through optical-SAR difference analysis, and a Progressive Fusion Module (PFM) that mitigates domain shift through a two-stage fusion process based on feature quality assessment. To facilitate research in this domain, we present MRSHaze, a large-scale benchmark dataset comprising 8,000 pairs of temporally synchronized, precisely geo-registered SAR-optical images with high resolution and diverse haze conditions. Extensive experiments demonstrate that DehazeMamba significantly outperforms state-of-the-art methods, achieving a 0.73 dB improvement in PSNR and substantial enhancements in downstream tasks such as semantic segmentation. The dataset is available at https://github.com/mmic-lcl/Datasets-and-benchmark-code.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 11:25:05 GMT" } ]
2025-03-18T00:00:00
[ [ "Zhao", "Zhicheng", "" ], [ "Yan", "Jinquan", "" ], [ "Li", "Chenglong", "" ], [ "Wang", "Xiao", "" ], [ "Tang", "Jin", "" ] ]
TITLE: DehazeMamba: SAR-guided Optical Remote Sensing Image Dehazing with Adaptive State Space Model ABSTRACT: Optical remote sensing image dehazing presents significant challenges due to its extensive spatial scale and highly non-uniform haze distribution, which traditional single-image dehazing methods struggle to address effectively. While Synthetic Aperture Radar (SAR) imagery offers inherently haze-free reference information for large-scale scenes, existing SAR-guided dehazing approaches face two critical limitations: the integration of SAR information often diminishes the quality of haze-free regions, and the instability of feature quality further exacerbates cross-modal domain shift. To overcome these challenges, we introduce DehazeMamba, a novel SAR-guided dehazing network built on a progressive haze decoupling fusion strategy. Our approach incorporates two key innovations: a Haze Perception and Decoupling Module (HPDM) that dynamically identifies haze-affected regions through optical-SAR difference analysis, and a Progressive Fusion Module (PFM) that mitigates domain shift through a two-stage fusion process based on feature quality assessment. To facilitate research in this domain, we present MRSHaze, a large-scale benchmark dataset comprising 8,000 pairs of temporally synchronized, precisely geo-registered SAR-optical images with high resolution and diverse haze conditions. Extensive experiments demonstrate that DehazeMamba significantly outperforms state-of-the-art methods, achieving a 0.73 dB improvement in PSNR and substantial enhancements in downstream tasks such as semantic segmentation. The dataset is available at https://github.com/mmic-lcl/Datasets-and-benchmark-code.
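A toy version of the core signal behind the Haze Perception and Decoupling Module, the optical-SAR discrepancy, might look like the following; the normalization and thresholding choices are assumptions for illustration, not the HPDM design.

```python
import torch

def haze_mask(optical_gray, sar, thresh=0.3):
    """Flag haze-affected regions where the (normalized) optical image diverges
    from the haze-free SAR reference. Both inputs: (H, W) float tensors."""
    norm = lambda t: (t - t.mean()) / (t.std() + 1e-6)   # zero-mean, unit-variance
    diff = (norm(optical_gray) - norm(sar)).abs()        # cross-modal discrepancy
    return (diff > thresh * diff.max()).float()          # 1 = likely haze-affected
```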
2503.13082
Runyu Jiao
Runyu Jiao, Alice Fasoli, Francesco Giuliari, Matteo Bortolon, Sergio Povoli, Guofeng Mei, Yiming Wang, Fabio Poiesi
Free-form language-based robotic reasoning and grasping
Project website: https://tev-fbk.github.io/FreeGrasp/
null
null
null
cs.RO cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Performing robotic grasping from a cluttered bin based on human instructions is a challenging task, as it requires understanding both the nuances of free-form language and the spatial relationships between objects. Vision-Language Models (VLMs) trained on web-scale data, such as GPT-4o, have demonstrated remarkable reasoning capabilities across both text and images. But can they truly be used for this task in a zero-shot setting? And what are their limitations? In this paper, we explore these research questions via the free-form language-based robotic grasping task, and propose a novel method, FreeGrasp, leveraging the pre-trained VLMs' world knowledge to reason about human instructions and object spatial arrangements. Our method detects all objects as keypoints and uses these keypoints to annotate marks on images, aiming to facilitate GPT-4o's zero-shot spatial reasoning. This allows our method to determine whether a requested object is directly graspable or if other objects must be grasped and removed first. Since no existing dataset is specifically designed for this task, we introduce a synthetic dataset FreeGraspData by extending the MetaGraspNetV2 dataset with human-annotated instructions and ground-truth grasping sequences. We conduct extensive analyses with both FreeGraspData and real-world validation with a gripper-equipped robotic arm, demonstrating state-of-the-art performance in grasp reasoning and execution. Project website: https://tev-fbk.github.io/FreeGrasp/.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 11:41:16 GMT" } ]
2025-03-18T00:00:00
[ [ "Jiao", "Runyu", "" ], [ "Fasoli", "Alice", "" ], [ "Giuliari", "Francesco", "" ], [ "Bortolon", "Matteo", "" ], [ "Povoli", "Sergio", "" ], [ "Mei", "Guofeng", "" ], [ "Wang", "Yiming", "" ], [ "Poiesi", "Fabio", "" ] ]
TITLE: Free-form language-based robotic reasoning and grasping ABSTRACT: Performing robotic grasping from a cluttered bin based on human instructions is a challenging task, as it requires understanding both the nuances of free-form language and the spatial relationships between objects. Vision-Language Models (VLMs) trained on web-scale data, such as GPT-4o, have demonstrated remarkable reasoning capabilities across both text and images. But can they truly be used for this task in a zero-shot setting? And what are their limitations? In this paper, we explore these research questions via the free-form language-based robotic grasping task, and propose a novel method, FreeGrasp, leveraging the pre-trained VLMs' world knowledge to reason about human instructions and object spatial arrangements. Our method detects all objects as keypoints and uses these keypoints to annotate marks on images, aiming to facilitate GPT-4o's zero-shot spatial reasoning. This allows our method to determine whether a requested object is directly graspable or if other objects must be grasped and removed first. Since no existing dataset is specifically designed for this task, we introduce a synthetic dataset FreeGraspData by extending the MetaGraspNetV2 dataset with human-annotated instructions and ground-truth grasping sequences. We conduct extensive analyses with both FreeGraspData and real-world validation with a gripper-equipped robotic arm, demonstrating state-of-the-art performance in grasp reasoning and execution. Project website: https://tev-fbk.github.io/FreeGrasp/.
2503.13086
Yiwei Xu
Yiwei Xu, Yifei Yu, Wentian Gan, Tengfei Wang, Zongqian Zhan, Hao Cheng and Xin Wang
Gaussian On-the-Fly Splatting: A Progressive Framework for Robust Near Real-Time 3DGS Optimization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D Gaussian Splatting (3DGS) achieves high-fidelity rendering with fast real-time performance, but existing methods rely on offline training after full Structure-from-Motion (SfM) processing. In contrast, this work introduces On-the-Fly GS, a progressive framework enabling near real-time 3DGS optimization during image capture. As each image arrives, its pose and sparse points are updated via on-the-fly SfM, and newly optimized Gaussians are immediately integrated into the 3DGS field. We propose a progressive local optimization strategy that prioritizes new images and their neighbors according to their overlapping relationships, allowing the new image and its overlapping images to receive more training. To further stabilize training across old and new images, an adaptive learning rate schedule balances the iterations and the learning rate. Moreover, to maintain the overall quality of the 3DGS field, an efficient global optimization scheme prevents overfitting to the newly added images. Experiments on multiple benchmark datasets show that our On-the-Fly GS reduces training time significantly, optimizing each new image in seconds with minimal rendering loss and offering the first practical step toward rapid, progressive 3DGS reconstruction.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 11:47:58 GMT" } ]
2025-03-18T00:00:00
[ [ "Xu", "Yiwei", "" ], [ "Yu", "Yifei", "" ], [ "Gan", "Wentian", "" ], [ "Wang", "Tengfei", "" ], [ "Zhan", "Zongqian", "" ], [ "Cheng", "Hao", "" ], [ "Wang", "Xin", "" ] ]
TITLE: Gaussian On-the-Fly Splatting: A Progressive Framework for Robust Near Real-Time 3DGS Optimization ABSTRACT: 3D Gaussian Splatting (3DGS) achieves high-fidelity rendering with fast real-time performance, but existing methods rely on offline training after full Structure-from-Motion (SfM) processing. In contrast, this work introduces On-the-Fly GS, a progressive framework enabling near real-time 3DGS optimization during image capture. As each image arrives, its pose and sparse points are updated via on-the-fly SfM, and newly optimized Gaussians are immediately integrated into the 3DGS field. We propose a progressive local optimization strategy that prioritizes new images and their neighbors according to their overlapping relationships, allowing the new image and its overlapping images to receive more training. To further stabilize training across old and new images, an adaptive learning rate schedule balances the iterations and the learning rate. Moreover, to maintain the overall quality of the 3DGS field, an efficient global optimization scheme prevents overfitting to the newly added images. Experiments on multiple benchmark datasets show that our On-the-Fly GS reduces training time significantly, optimizing each new image in seconds with minimal rendering loss and offering the first practical step toward rapid, progressive 3DGS reconstruction.
2503.13090
Tomáš Pivoňka
Václav Truhlařík, Tomáš Pivoňka, Michal Kasarda, Libor Přeučil
Multi-Platform Teach-and-Repeat Navigation by Visual Place Recognition Based on Deep-Learned Local Features
6 pages, 5 figures
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
Uniform and variable environments remain a challenge for stable visual localization and mapping in mobile robot navigation. One possible approach suited to such environments is appearance-based teach-and-repeat navigation, which relies on simplified localization and reactive robot motion control, all without the need for standard mapping. This work brings an innovative solution to such a system based on visual place recognition techniques. The major contributions lie in the employment of a new visual place recognition technique, a novel horizontal shift computation approach, and a multi-platform system design for applications across various types of mobile robots. In addition, a new public dataset for experimental testing of appearance-based navigation methods is introduced. The work also provides real-world experimental testing and a performance comparison of the introduced navigation system against other state-of-the-art methods. The results confirm that the new system outperforms existing methods in several testing scenarios, is capable of operation indoors and outdoors, and exhibits robustness to day and night scene variations.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 11:57:41 GMT" } ]
2025-03-18T00:00:00
[ [ "Truhlařík", "Václav", "" ], [ "Pivoňka", "Tomáš", "" ], [ "Kasarda", "Michal", "" ], [ "Přeučil", "Libor", "" ] ]
TITLE: Multi-Platform Teach-and-Repeat Navigation by Visual Place Recognition Based on Deep-Learned Local Features ABSTRACT: Uniform and variable environments remain a challenge for stable visual localization and mapping in mobile robot navigation. One possible approach suited to such environments is appearance-based teach-and-repeat navigation, which relies on simplified localization and reactive robot motion control, all without the need for standard mapping. This work brings an innovative solution to such a system based on visual place recognition techniques. The major contributions lie in the employment of a new visual place recognition technique, a novel horizontal shift computation approach, and a multi-platform system design for applications across various types of mobile robots. In addition, a new public dataset for experimental testing of appearance-based navigation methods is introduced. The work also provides real-world experimental testing and a performance comparison of the introduced navigation system against other state-of-the-art methods. The results confirm that the new system outperforms existing methods in several testing scenarios, is capable of operation indoors and outdoors, and exhibits robustness to day and night scene variations.
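A simplified take on the horizontal shift computation central to appearance-based teach-and-repeat: match local features between the taught and current images and take the robust median of their horizontal displacements as the steering correction. The interface below is hypothetical, not the paper's method.

```python
import numpy as np

def horizontal_shift(kpts_teach, kpts_repeat, matches):
    """kpts_*: (N, 2) arrays of keypoint (x, y) positions; matches: list of
    (i, j) index pairs from descriptor matching between teach and repeat images."""
    dx = [kpts_repeat[j][0] - kpts_teach[i][0] for i, j in matches]
    return float(np.median(dx)) if dx else 0.0  # median is robust to outlier matches
```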
2503.13101
Babangida Sani
Babangida Sani, Aakansha Soy, Sukairaj Hafiz Imam, Ahmad Mustapha, Lukman Jibril Aliyu, Idris Abdulmumin, Ibrahim Said Ahmad, Shamsuddeen Hassan Muhammad
Who Wrote This? Identifying Machine vs Human-Generated Text in Hausa
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The advancement of large language models (LLMs) has allowed them to be proficient in various tasks, including content generation. However, their unregulated usage can lead to malicious activities such as plagiarism and generating and spreading fake news, especially for low-resource languages. Most existing machine-generated text detectors are trained on high-resource languages like English and French. In this study, we developed the first large-scale detector that can distinguish between human- and machine-generated content in Hausa. We scraped seven Hausa-language media outlets for the human-generated text and used the Gemini-2.0 Flash model to automatically generate corresponding Hausa-language articles based on the human-generated article headlines. We fine-tuned four pre-trained Afri-centric models (AfriTeVa, AfriBERTa, AfroXLMR, and AfroXLMR-76L) on the resulting dataset and assessed their performance using accuracy and F1-score metrics. AfroXLMR achieved the highest performance with an accuracy of 99.23% and an F1 score of 99.21%, demonstrating its effectiveness for Hausa text detection. Our dataset is made publicly available to enable further research.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 12:13:37 GMT" } ]
2025-03-18T00:00:00
[ [ "Sani", "Babangida", "" ], [ "Soy", "Aakansha", "" ], [ "Imam", "Sukairaj Hafiz", "" ], [ "Mustapha", "Ahmad", "" ], [ "Aliyu", "Lukman Jibril", "" ], [ "Abdulmumin", "Idris", "" ], [ "Ahmad", "Ibrahim Said", "" ], [ "Muhammad", "Shamsuddeen Hassan", "" ] ]
TITLE: Who Wrote This? Identifying Machine vs Human-Generated Text in Hausa ABSTRACT: The advancement of large language models (LLMs) has allowed them to be proficient in various tasks, including content generation. However, their unregulated usage can lead to malicious activities such as plagiarism and generating and spreading fake news, especially for low-resource languages. Most existing machine-generated text detectors are trained on high-resource languages like English and French. In this study, we developed the first large-scale detector that can distinguish between human- and machine-generated content in Hausa. We scraped seven Hausa-language media outlets for the human-generated text and used the Gemini-2.0 Flash model to automatically generate corresponding Hausa-language articles based on the human-generated article headlines. We fine-tuned four pre-trained Afri-centric models (AfriTeVa, AfriBERTa, AfroXLMR, and AfroXLMR-76L) on the resulting dataset and assessed their performance using accuracy and F1-score metrics. AfroXLMR achieved the highest performance with an accuracy of 99.23% and an F1 score of 99.21%, demonstrating its effectiveness for Hausa text detection. Our dataset is made publicly available to enable further research.
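A minimal fine-tuning sketch in the spirit of the paper's setup, using Hugging Face Transformers. It assumes `datasets.Dataset` objects with `text` and `label` columns (1 = machine-generated); the AfroXLMR checkpoint ID and all hyperparameters are assumptions, not the paper's configuration.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

def build_trainer(train_ds, eval_ds, checkpoint="Davlan/afro-xlmr-base"):
    tok = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

    def encode(batch):  # tokenize article text into fixed-length inputs
        return tok(batch["text"], truncation=True, padding="max_length", max_length=256)

    train_ds = train_ds.map(encode, batched=True)
    eval_ds = eval_ds.map(encode, batched=True)
    args = TrainingArguments(
        output_dir="hausa-detector",
        num_train_epochs=3,
        per_device_train_batch_size=16,
        evaluation_strategy="epoch",  # renamed to eval_strategy in newer transformers
    )
    return Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)

# Usage sketch: build_trainer(train_ds, eval_ds).train()
```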
2503.13102
Ekaterina Artemova
Alexander Pugachev, Alena Fenogenova, Vladislav Mikhailov, Ekaterina Artemova
REPA: Russian Error Types Annotation for Evaluating Text Generation and Judgment Capabilities
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent advances in large language models (LLMs) have introduced the novel paradigm of using LLMs as judges, where an LLM evaluates and scores the outputs of another LLM, which often correlates highly with human preferences. However, the use of LLM-as-a-judge has been primarily studied in English. In this paper, we evaluate this framework in Russian by introducing the Russian Error tyPes Annotation dataset (REPA), a dataset of 1k user queries and 2k LLM-generated responses. Human annotators labeled each response pair, expressing their preferences across ten specific error types as well as selecting an overall preference. We rank six generative LLMs across the error types using three rating systems based on human preferences. We also evaluate responses using eight LLM judges in zero-shot and few-shot settings, and we describe the results of analyzing the judges, including their position and length biases. Our findings reveal a notable gap between LLM judge performance in Russian and English. However, rankings based on human and LLM preferences show partial alignment, suggesting that while current LLM judges struggle with fine-grained evaluation in Russian, there is potential for improvement.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 12:15:16 GMT" } ]
2025-03-18T00:00:00
[ [ "Pugachev", "Alexander", "" ], [ "Fenogenova", "Alena", "" ], [ "Mikhailov", "Vladislav", "" ], [ "Artemova", "Ekaterina", "" ] ]
TITLE: REPA: Russian Error Types Annotation for Evaluating Text Generation and Judgment Capabilities ABSTRACT: Recent advances in large language models (LLMs) have introduced the novel paradigm of using LLMs as judges, where an LLM evaluates and scores the outputs of another LLM, which often correlates highly with human preferences. However, the use of LLM-as-a-judge has been primarily studied in English. In this paper, we evaluate this framework in Russian by introducing the Russian Error tyPes Annotation dataset (REPA), a dataset of 1k user queries and 2k LLM-generated responses. Human annotators labeled each response pair, expressing their preferences across ten specific error types as well as selecting an overall preference. We rank six generative LLMs across the error types using three rating systems based on human preferences. We also evaluate responses using eight LLM judges in zero-shot and few-shot settings, and we describe the results of analyzing the judges, including their position and length biases. Our findings reveal a notable gap between LLM judge performance in Russian and English. However, rankings based on human and LLM preferences show partial alignment, suggesting that while current LLM judges struggle with fine-grained evaluation in Russian, there is potential for improvement.
2503.13109
Kedi Chen
Kedi Chen, Zhikai Lei, Fan Zhang, Yinqi Zhang, Qin Chen, Jie Zhou, Liang He, Qipeng Guo, Kai Chen, Wei Zhang
Code-Driven Inductive Synthesis: Enhancing Reasoning Abilities of Large Language Models with Sequences
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Large language models have made remarkable progress in reasoning capabilities. Existing works focus mainly on deductive reasoning tasks (e.g., code and math), while inductive reasoning, a reasoning mode that better aligns with human learning, is not well studied. We attribute this to the fact that obtaining high-quality process supervision data is challenging for inductive reasoning. To this end, we employ number sequences as a novel source of inductive reasoning data. We package sequences into algorithmic problems to find the general term of each sequence through a code solution. In this way, we can verify whether the code solution holds for any term in the current sequence, and inject case-based supervision signals by using code unit tests. We build a sequence synthetic data pipeline and form a training dataset, CodeSeq. Experimental results show that models tuned with CodeSeq improve on both code and comprehensive reasoning benchmarks.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 12:33:26 GMT" } ]
2025-03-18T00:00:00
[ [ "Chen", "Kedi", "" ], [ "Lei", "Zhikai", "" ], [ "Zhang", "Fan", "" ], [ "Zhang", "Yinqi", "" ], [ "Chen", "Qin", "" ], [ "Zhou", "Jie", "" ], [ "He", "Liang", "" ], [ "Guo", "Qipeng", "" ], [ "Chen", "Kai", "" ], [ "Zhang", "Wei", "" ] ]
TITLE: Code-Driven Inductive Synthesis: Enhancing Reasoning Abilities of Large Language Models with Sequences ABSTRACT: Large language models have made remarkable progress in reasoning capabilities. Existing works focus mainly on deductive reasoning tasks (e.g., code and math), while inductive reasoning, a reasoning mode that better aligns with human learning, is not well studied. We attribute this to the fact that obtaining high-quality process supervision data is challenging for inductive reasoning. To this end, we employ number sequences as a novel source of inductive reasoning data. We package sequences into algorithmic problems to find the general term of each sequence through a code solution. In this way, we can verify whether the code solution holds for any term in the current sequence, and inject case-based supervision signals by using code unit tests. We build a sequence synthetic data pipeline and form a training dataset, CodeSeq. Experimental results show that models tuned with CodeSeq improve on both code and comprehensive reasoning benchmarks.
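The case-based verification signal is straightforward to sketch: a candidate general-term function passes only if it reproduces every known term of the sequence, exactly as a unit test would check. A self-contained illustration (the 1-indexed convention is an assumption):

```python
def verify_general_term(term_fn, sequence):
    """Check a candidate general-term function against every known term of the
    sequence (1-indexed), mirroring case-based unit-test supervision."""
    return all(term_fn(n) == target for n, target in enumerate(sequence, start=1))

# Usage: a correct general term for 2, 4, 8, 16, ... passes; an off-by-one fails.
assert verify_general_term(lambda n: 2 ** n, [2, 4, 8, 16])
assert not verify_general_term(lambda n: 2 ** (n - 1), [2, 4, 8, 16])
```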
2503.13110
Jing Li
Jing Li, Yihang Fu, Falai Chen
DTGBrepGen: A Novel B-rep Generative Model through Decoupling Topology and Geometry
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Boundary representation (B-rep) of geometric models is a fundamental format in Computer-Aided Design (CAD). However, automatically generating valid and high-quality B-rep models remains challenging due to the complex interdependence between the topology and geometry of the models. Existing methods tend to prioritize geometric representation while giving insufficient attention to topological constraints, making it difficult to maintain structural validity and geometric accuracy. In this paper, we propose DTGBrepGen, a novel topology-geometry decoupled framework for B-rep generation that explicitly addresses both aspects. Our approach first generates valid topological structures through a two-stage process that independently models edge-face and edge-vertex adjacency relationships. Subsequently, we employ Transformer-based diffusion models for sequential geometry generation, progressively generating vertex coordinates, followed by edge geometries and face geometries which are represented as B-splines. Extensive experiments on diverse CAD datasets show that DTGBrepGen significantly outperforms existing methods in both topological validity and geometric accuracy, achieving higher validity rates and producing more diverse and realistic B-reps. Our code is publicly available at https://github.com/jinli99/DTGBrepGen.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 12:34:14 GMT" } ]
2025-03-18T00:00:00
[ [ "Li", "Jing", "" ], [ "Fu", "Yihang", "" ], [ "Chen", "Falai", "" ] ]
TITLE: DTGBrepGen: A Novel B-rep Generative Model through Decoupling Topology and Geometry ABSTRACT: Boundary representation (B-rep) of geometric models is a fundamental format in Computer-Aided Design (CAD). However, automatically generating valid and high-quality B-rep models remains challenging due to the complex interdependence between the topology and geometry of the models. Existing methods tend to prioritize geometric representation while giving insufficient attention to topological constraints, making it difficult to maintain structural validity and geometric accuracy. In this paper, we propose DTGBrepGen, a novel topology-geometry decoupled framework for B-rep generation that explicitly addresses both aspects. Our approach first generates valid topological structures through a two-stage process that independently models edge-face and edge-vertex adjacency relationships. Subsequently, we employ Transformer-based diffusion models for sequential geometry generation, progressively generating vertex coordinates, followed by edge geometries and face geometries which are represented as B-splines. Extensive experiments on diverse CAD datasets show that DTGBrepGen significantly outperforms existing methods in both topological validity and geometric accuracy, achieving higher validity rates and producing more diverse and realistic B-reps. Our code is publicly available at https://github.com/jinli99/DTGBrepGen.
2503.13111
Erik Daxberger
Erik Daxberger, Nina Wenzel, David Griffiths, Haiming Gang, Justin Lazarow, Gefen Kohavi, Kai Kang, Marcin Eichner, Yinfei Yang, Afshin Dehghan, Peter Grasch
MM-Spatial: Exploring 3D Spatial Understanding in Multimodal LLMs
null
null
null
null
cs.CV cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal large language models (MLLMs) excel at 2D visual understanding but remain limited in their ability to reason about 3D space. In this work, we leverage large-scale high-quality 3D scene data with open-set annotations to introduce 1) a novel supervised fine-tuning dataset and 2) a new evaluation benchmark, focused on indoor scenes. Our Cubify Anything VQA (CA-VQA) data covers diverse spatial tasks including spatial relationship prediction, metric size and distance estimation, and 3D grounding. We show that CA-VQA enables us to train MM-Spatial, a strong generalist MLLM that also achieves state-of-the-art performance on 3D spatial understanding benchmarks, including our own. We show how incorporating metric depth and multi-view inputs (provided in CA-VQA) can further improve 3D understanding, and demonstrate that data alone allows our model to achieve depth perception capabilities comparable to dedicated monocular depth estimation models. We will publish our SFT dataset and benchmark.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 12:34:22 GMT" } ]
2025-03-18T00:00:00
[ [ "Daxberger", "Erik", "" ], [ "Wenzel", "Nina", "" ], [ "Griffiths", "David", "" ], [ "Gang", "Haiming", "" ], [ "Lazarow", "Justin", "" ], [ "Kohavi", "Gefen", "" ], [ "Kang", "Kai", "" ], [ "Eichner", "Marcin", "" ], [ "Yang", "Yinfei", "" ], [ "Dehghan", "Afshin", "" ], [ "Grasch", "Peter", "" ] ]
TITLE: MM-Spatial: Exploring 3D Spatial Understanding in Multimodal LLMs ABSTRACT: Multimodal large language models (MLLMs) excel at 2D visual understanding but remain limited in their ability to reason about 3D space. In this work, we leverage large-scale high-quality 3D scene data with open-set annotations to introduce 1) a novel supervised fine-tuning dataset and 2) a new evaluation benchmark, focused on indoor scenes. Our Cubify Anything VQA (CA-VQA) data covers diverse spatial tasks including spatial relationship prediction, metric size and distance estimation, and 3D grounding. We show that CA-VQA enables us to train MM-Spatial, a strong generalist MLLM that also achieves state-of-the-art performance on 3D spatial understanding benchmarks, including our own. We show how incorporating metric depth and multi-view inputs (provided in CA-VQA) can further improve 3D understanding, and demonstrate that data alone allows our model to achieve depth perception capabilities comparable to dedicated monocular depth estimation models. We will publish our SFT dataset and benchmark.
2503.13113
Arjun Pakrashi
Gabriele Sanguin, Arjun Pakrashi, Marco Viola, Francesco Rinaldi
Exploring the Potential of Bilevel Optimization for Calibrating Neural Networks
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Handling uncertainty is critical for ensuring reliable decision-making in intelligent systems. Modern neural networks are known to be poorly calibrated, resulting in predicted confidence scores that are difficult to use. This article explores improving confidence estimation and calibration through the application of bilevel optimization, a framework designed to solve hierarchical problems with interdependent optimization levels. A self-calibrating bilevel neural-network training approach is introduced to improve a model's predicted confidence scores. The effectiveness of the proposed framework is analyzed using toy datasets, such as Blobs and Spirals, as well as more practical simulated datasets, such as Blood Alcohol Concentration (BAC). It is compared with a well-known and widely used calibration strategy, isotonic regression. The reported experimental results reveal that the proposed bilevel optimization approach reduces the calibration error while preserving accuracy.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 12:34:55 GMT" } ]
2025-03-18T00:00:00
[ [ "Sanguin", "Gabriele", "" ], [ "Pakrashi", "Arjun", "" ], [ "Viola", "Marco", "" ], [ "Rinaldi", "Francesco", "" ] ]
TITLE: Exploring the Potential of Bilevel Optimization for Calibrating Neural Networks ABSTRACT: Handling uncertainty is critical for ensuring reliable decision-making in intelligent systems. Modern neural networks are known to be poorly calibrated, resulting in predicted confidence scores that are difficult to use. This article explores improving confidence estimation and calibration through the application of bilevel optimization, a framework designed to solve hierarchical problems with interdependent optimization levels. A self-calibrating bilevel neural-network training approach is introduced to improve a model's predicted confidence scores. The effectiveness of the proposed framework is analyzed using toy datasets, such as Blobs and Spirals, as well as more practical simulated datasets, such as Blood Alcohol Concentration (BAC). It is compared with a well-known and widely used calibration strategy, isotonic regression. The reported experimental results reveal that the proposed bilevel optimization approach reduces the calibration error while preserving accuracy.
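For context, the calibration error such a method aims to reduce is commonly measured with the Expected Calibration Error (ECE): bin predictions by confidence and average the per-bin gap between accuracy and mean confidence. A standard implementation, with the equal-width binning scheme as an assumption:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted confidence per sample in [0, 1];
    correct: 1 if the prediction was right, else 0."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its share of samples
    return ece
```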
2503.13116
Zeng Wang
Zeng Wang, Minghao Shao, Mohammed Nabeel, Prithwish Basu Roy, Likhitha Mankali, Jitendra Bhandari, Ramesh Karri, Ozgur Sinanoglu, Muhammad Shafique, Johann Knechtel
VeriLeaky: Navigating IP Protection vs Utility in Fine-Tuning for LLM-Driven Verilog Coding
null
null
null
null
cs.CR cs.AR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) offer significant potential for coding, yet fine-tuning (FT) with curated data is essential for niche languages like Verilog. Using proprietary intellectual property (IP) for FT presents a serious risk, as FT data can be leaked through LLM inference. This leads to a critical dilemma for design houses: seeking to build externally accessible LLMs offering competitive Verilog coding, how can they leverage in-house IP to enhance FT utility while ensuring IP protection? For the first time in the literature, we study this dilemma. Using LLaMA 3.1-8B, we conduct in-house FT on a baseline Verilog dataset (RTLCoder) supplemented with our own in-house IP, which is validated through multiple tape-outs. To rigorously assess IP leakage, we quantify structural similarity (AST/Dolos) and functional equivalence (Synopsys Formality) between generated codes and our in-house IP. We show that our IP can indeed be leaked, confirming the threat. As a defense, we evaluate logic locking of Verilog codes (ASSURE). This offers some level of protection, yet reduces the IP's utility for FT and degrades the LLM's performance. Our study shows the need for novel strategies that are both effective and minimally disruptive to FT, an essential effort for enabling design houses to fully utilize their proprietary IP toward LLM-driven Verilog coding.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 12:38:03 GMT" } ]
2025-03-18T00:00:00
[ [ "Wang", "Zeng", "" ], [ "Shao", "Minghao", "" ], [ "Nabeel", "Mohammed", "" ], [ "Roy", "Prithwish Basu", "" ], [ "Mankali", "Likhitha", "" ], [ "Bhandari", "Jitendra", "" ], [ "Karri", "Ramesh", "" ], [ "Sinanoglu", "Ozgur", "" ], [ "Shafique", "Muhammad", "" ], [ "Knechtel", "Johann", "" ] ]
TITLE: VeriLeaky: Navigating IP Protection vs Utility in Fine-Tuning for LLM-Driven Verilog Coding ABSTRACT: Large language models (LLMs) offer significant potential for coding, yet fine-tuning (FT) with curated data is essential for niche languages like Verilog. Using proprietary intellectual property (IP) for FT presents a serious risk, as FT data can be leaked through LLM inference. This leads to a critical dilemma for design houses: seeking to build externally accessible LLMs offering competitive Verilog coding, how can they leverage in-house IP to enhance FT utility while ensuring IP protection? For the first time in the literature, we study this dilemma. Using LLaMA 3.1-8B, we conduct in-house FT on a baseline Verilog dataset (RTLCoder) supplemented with our own in-house IP, which is validated through multiple tape-outs. To rigorously assess IP leakage, we quantify structural similarity (AST/Dolos) and functional equivalence (Synopsys Formality) between generated codes and our in-house IP. We show that our IP can indeed be leaked, confirming the threat. As a defense, we evaluate logic locking of Verilog codes (ASSURE). This offers some level of protection, yet reduces the IP's utility for FT and degrades the LLM's performance. Our study shows the need for novel strategies that are both effective and minimally disruptive to FT, an essential effort for enabling design houses to fully utilize their proprietary IP toward LLM-driven Verilog coding.
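Structural similarity between generated code and proprietary IP is measured in the paper with AST-based tooling (Dolos); as a lightweight stand-in for illustration only, a token-level sequence matcher conveys the idea:

```python
import difflib
import re

def token_similarity(code_a, code_b):
    """Crude structural-similarity proxy: tokenize both sources and compare
    token sequences. The paper uses AST-based tools and formal equivalence
    checking; this is only a simplified illustration of the concept."""
    tokenize = lambda s: re.findall(r"[A-Za-z_]\w*|\S", s)
    return difflib.SequenceMatcher(None, tokenize(code_a), tokenize(code_b)).ratio()

# Usage: a ratio near 1.0 suggests the generated code closely mirrors the IP.
print(token_similarity("assign y = a & b;", "assign y = a & b;"))  # 1.0
```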
2503.13120
Siyuan Fan
Siyuan Fan, Wenke Huang, Xiantao Cai, Bo Du
3D Human Interaction Generation: A Survey
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D human interaction generation has emerged as a key research area, focusing on producing dynamic and contextually relevant interactions between humans and various interactive entities. Recent rapid advancements in 3D model representation methods, motion capture technologies, and generative models have laid a solid foundation for the growing interest in this domain. Existing research in this field can be broadly categorized into three areas: human-scene interaction, human-object interaction, and human-human interaction. Despite the rapid advancements in this area, challenges remain due to the need for naturalness in human motion generation and the accurate interaction between humans and interactive entities. In this survey, we present a comprehensive literature review of human interaction generation, which, to the best of our knowledge, is the first of its kind. We begin by introducing the foundational technologies, including model representations, motion capture methods, and generative models. Subsequently, we introduce the approaches proposed for the three sub-tasks, along with their corresponding datasets and evaluation metrics. Finally, we discuss potential future research directions in this area and conclude the survey. Through this survey, we aim to offer a comprehensive overview of the current advancements in the field, highlight key challenges, and inspire future research works.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 12:47:33 GMT" } ]
2025-03-18T00:00:00
[ [ "Fan", "Siyuan", "" ], [ "Huang", "Wenke", "" ], [ "Cai", "Xiantao", "" ], [ "Du", "Bo", "" ] ]
TITLE: 3D Human Interaction Generation: A Survey ABSTRACT: 3D human interaction generation has emerged as a key research area, focusing on producing dynamic and contextually relevant interactions between humans and various interactive entities. Recent rapid advancements in 3D model representation methods, motion capture technologies, and generative models have laid a solid foundation for the growing interest in this domain. Existing research in this field can be broadly categorized into three areas: human-scene interaction, human-object interaction, and human-human interaction. Despite the rapid advancements in this area, challenges remain due to the need for naturalness in human motion generation and the accurate interaction between humans and interactive entities. In this survey, we present a comprehensive literature review of human interaction generation, which, to the best of our knowledge, is the first of its kind. We begin by introducing the foundational technologies, including model representations, motion capture methods, and generative models. Subsequently, we introduce the approaches proposed for the three sub-tasks, along with their corresponding datasets and evaluation metrics. Finally, we discuss potential future research directions in this area and conclude the survey. Through this survey, we aim to offer a comprehensive overview of the current advancements in the field, highlight key challenges, and inspire future research.
2503.13125
Pan Liu
Pan Liu
Non-Destructive Detection of Sub-Micron Imperceptible Scratches On Laser Chips Based On Consistent Texture Entropy Recursive Optimization Semi-Supervised Network
11 pages
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Laser chips, the core components of semiconductor lasers, are extensively utilized in various industries, showing great potential for future applications. Smooth emitting surfaces are crucial in chip production, as even imperceptible scratches can significantly degrade performance and lifespan, thus impeding production efficiency and yield. Therefore, non-destructively detecting these imperceptible scratches on the emitting surfaces is essential for enhancing yield and reducing costs. These sub-micron-level scratches, barely visible against the background, are extremely difficult to detect with conventional methods, a difficulty compounded by a lack of labeled datasets. To address this challenge, this paper introduces TexRecNet, a consistent texture entropy recursive optimization semi-supervised network. The network, based on a recursive optimization architecture, iteratively improves the detection accuracy of imperceptible scratch edges, using outputs from previous cycles to inform subsequent inputs and guide the network's positional encoding. It also introduces image texture entropy, utilizing a substantial amount of unlabeled data to expand the training set while maintaining training signal reliability. Ultimately, by analyzing the inconsistency of the network output sequences obtained during the recursive process, a semi-supervised training strategy with recursive consistency constraints is proposed, using outputs from the recursive process for non-destructive signal augmentation and consistently optimizing the loss function for efficient end-to-end training. Experimental results show that this method, utilizing a substantial amount of unsupervised data, achieves 75.6% accuracy and 74.8% recall in detecting imperceptible scratches, an 8.5% and 33.6% improvement over the conventional U-Net, enhancing quality control in laser chip production.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 12:48:48 GMT" } ]
2025-03-18T00:00:00
[ [ "Liu", "Pan", "" ] ]
TITLE: Non-Destructive Detection of Sub-Micron Imperceptible Scratches On Laser Chips Based On Consistent Texture Entropy Recursive Optimization Semi-Supervised Network ABSTRACT: Laser chips, the core components of semiconductor lasers, are extensively utilized in various industries, showing great potential for future applications. Smooth emitting surfaces are crucial in chip production, as even imperceptible scratches can significantly degrade performance and lifespan, thus impeding production efficiency and yield. Therefore, non-destructively detecting these imperceptible scratches on the emitting surfaces is essential for enhancing yield and reducing costs. These sub-micron-level scratches, barely visible against the background, are extremely difficult to detect with conventional methods, a difficulty compounded by a lack of labeled datasets. To address this challenge, this paper introduces TexRecNet, a consistent texture entropy recursive optimization semi-supervised network. The network, based on a recursive optimization architecture, iteratively improves the detection accuracy of imperceptible scratch edges, using outputs from previous cycles to inform subsequent inputs and guide the network's positional encoding. It also introduces image texture entropy, utilizing a substantial amount of unlabeled data to expand the training set while maintaining training signal reliability. Ultimately, by analyzing the inconsistency of the network output sequences obtained during the recursive process, a semi-supervised training strategy with recursive consistency constraints is proposed, using outputs from the recursive process for non-destructive signal augmentation and consistently optimizing the loss function for efficient end-to-end training. Experimental results show that this method, utilizing a substantial amount of unsupervised data, achieves 75.6% accuracy and 74.8% recall in detecting imperceptible scratches, an 8.5% and 33.6% improvement over the conventional U-Net, enhancing quality control in laser chip production.
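The image texture entropy used above to select reliable unlabeled samples can be illustrated with plain Shannon entropy over a grayscale patch histogram; whether TexRecNet uses exactly this definition is an assumption, and the patch size and bin count below are illustrative:

```python
# Minimal sketch: Shannon entropy of a grayscale patch histogram, a common way
# to quantify texture richness. Whether TexRecNet uses exactly this definition
# is an assumption; the 64x64 patch size and 32-bin histogram are illustrative.
import numpy as np

def patch_entropy(patch: np.ndarray, bins: int = 32) -> float:
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
flat = np.full((64, 64), 0.5)        # featureless region -> entropy 0
textured = rng.random((64, 64))      # noisy region -> entropy near log2(bins)
print(patch_entropy(flat), patch_entropy(textured))
```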
2503.13130
Ling-An Zeng
Ling-An Zeng, Guohong Huang, Yi-Lin Wei, Shengbo Gu, Yu-Ming Tang, Jingke Meng, Wei-Shi Zheng
ChainHOI: Joint-based Kinematic Chain Modeling for Human-Object Interaction Generation
Accepted to CVPR 2025
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose ChainHOI, a novel approach for text-driven human-object interaction (HOI) generation that explicitly models interactions at both the joint and kinetic chain levels. Unlike existing methods that implicitly model interactions using full-body poses as tokens, we argue that explicitly modeling joint-level interactions is more natural and effective for generating realistic HOIs, as it directly captures the geometric and semantic relationships between joints, rather than modeling interactions in the latent pose space. To this end, ChainHOI introduces a novel joint graph to capture potential interactions with objects, and a Generative Spatiotemporal Graph Convolution Network to explicitly model interactions at the joint level. Furthermore, we propose a Kinematics-based Interaction Module that explicitly models interactions at the kinetic chain level, ensuring more realistic and biomechanically coherent motions. Evaluations on two public datasets demonstrate that ChainHOI significantly outperforms previous methods, generating more realistic and semantically consistent HOIs. Code is available \href{https://github.com/qinghuannn/ChainHOI}{here}.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 12:55:34 GMT" } ]
2025-03-18T00:00:00
[ [ "Zeng", "Ling-An", "" ], [ "Huang", "Guohong", "" ], [ "Wei", "Yi-Lin", "" ], [ "Gu", "Shengbo", "" ], [ "Tang", "Yu-Ming", "" ], [ "Meng", "Jingke", "" ], [ "Zheng", "Wei-Shi", "" ] ]
TITLE: ChainHOI: Joint-based Kinematic Chain Modeling for Human-Object Interaction Generation ABSTRACT: We propose ChainHOI, a novel approach for text-driven human-object interaction (HOI) generation that explicitly models interactions at both the joint and kinetic chain levels. Unlike existing methods that implicitly model interactions using full-body poses as tokens, we argue that explicitly modeling joint-level interactions is more natural and effective for generating realistic HOIs, as it directly captures the geometric and semantic relationships between joints, rather than modeling interactions in the latent pose space. To this end, ChainHOI introduces a novel joint graph to capture potential interactions with objects, and a Generative Spatiotemporal Graph Convolution Network to explicitly model interactions at the joint level. Furthermore, we propose a Kinematics-based Interaction Module that explicitly models interactions at the kinetic chain level, ensuring more realistic and biomechanically coherent motions. Evaluations on two public datasets demonstrate that ChainHOI significantly outperforms previous methods, generating more realistic and semantically consistent HOIs. Code is available \href{https://github.com/qinghuannn/ChainHOI}{here}.
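A minimal sketch of the joint-graph idea: joints along the same chain are connected so information propagates along limbs. The toy 9-joint skeleton below is an assumption; real skeletons have more joints, and ChainHOI additionally links joints to object nodes:

```python
# Minimal sketch of building a joint-graph adjacency from kinematic chains:
# consecutive joints on each chain are linked (plus self-loops). The 5-chain,
# 9-joint skeleton is a toy assumption, not the paper's actual graph.
import numpy as np

chains = [
    [0, 1, 2],   # pelvis -> spine -> head
    [1, 3, 4],   # spine -> left shoulder -> left hand
    [1, 5, 6],   # spine -> right shoulder -> right hand
    [0, 7],      # pelvis -> left foot (legs collapsed for brevity)
    [0, 8],      # pelvis -> right foot
]
n_joints = 9
A = np.eye(n_joints)
for chain in chains:
    for a, b in zip(chain[:-1], chain[1:]):
        A[a, b] = A[b, a] = 1.0
print(int(A.sum()))  # 9 self-loops + 2 * 8 chain edges = 25 nonzeros
```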
2503.13134
Prakhar Bhardwaj
Prakhar Bhardwaj, Sheethal Bhat, Andreas Maier
Enhancing zero-shot learning in medical imaging: integrating CLIP with advanced techniques for improved chest X-ray analysis
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Due to the large volume of medical imaging data, advanced AI methodologies are needed to assist radiologists in diagnosing thoracic diseases from chest X-rays (CXRs). Existing deep learning models often require large, labeled datasets, which are scarce in medical imaging due to the time-consuming and expert-driven annotation process. In this paper, we extend the existing approach to enhance zero-shot learning in medical imaging by integrating Contrastive Language-Image Pre-training (CLIP) with Momentum Contrast (MoCo), resulting in our proposed model, MoCoCLIP. Our method addresses challenges posed by class-imbalanced and unlabeled datasets, enabling improved detection of pulmonary pathologies. Experimental results on the NIH ChestXray14 dataset demonstrate that MoCoCLIP outperforms the state-of-the-art CheXZero model, achieving a relative improvement of approximately 6.5%. Furthermore, on the CheXpert dataset, MoCoCLIP demonstrates superior zero-shot performance, achieving an average AUC of 0.750 compared to CheXZero with 0.746 AUC, highlighting its enhanced generalization capabilities on unseen data.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 12:59:34 GMT" } ]
2025-03-18T00:00:00
[ [ "Bhardwaj", "Prakhar", "" ], [ "Bhat", "Sheethal", "" ], [ "Maier", "Andreas", "" ] ]
TITLE: Enhancing zero-shot learning in medical imaging: integrating CLIP with advanced techniques for improved chest X-ray analysis ABSTRACT: Due to the large volume of medical imaging data, advanced AI methodologies are needed to assist radiologists in diagnosing thoracic diseases from chest X-rays (CXRs). Existing deep learning models often require large, labeled datasets, which are scarce in medical imaging due to the time-consuming and expert-driven annotation process. In this paper, we extend the existing approach to enhance zero-shot learning in medical imaging by integrating Contrastive Language-Image Pre-training (CLIP) with Momentum Contrast (MoCo), resulting in our proposed model, MoCoCLIP. Our method addresses challenges posed by class-imbalanced and unlabeled datasets, enabling improved detection of pulmonary pathologies. Experimental results on the NIH ChestXray14 dataset demonstrate that MoCoCLIP outperforms the state-of-the-art CheXZero model, achieving a relative improvement of approximately 6.5%. Furthermore, on the CheXpert dataset, MoCoCLIP demonstrates superior zero-shot performance, achieving an average AUC of 0.750 compared to CheXZero with 0.746 AUC, highlighting its enhanced generalization capabilities on unseen data.
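The zero-shot protocol behind CLIP-style chest X-ray models scores a pathology by comparing an image embedding with paired "present"/"absent" text-prompt embeddings; a minimal sketch with random vectors standing in for the trained encoders:

```python
# Minimal sketch of CLIP-style zero-shot scoring for one pathology: softmax
# over the cosine similarities between the image embedding and the
# "present"/"absent" prompt embeddings. The encoders are mocked with random
# vectors; in practice MoCoCLIP/CheXZero would supply them.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
img_emb = rng.normal(size=512)   # stand-in for the image encoder output
pos_emb = rng.normal(size=512)   # e.g. "consolidation" prompt embedding
neg_emb = rng.normal(size=512)   # e.g. "no consolidation" prompt embedding

logits = np.array([cosine(img_emb, pos_emb), cosine(img_emb, neg_emb)])
probs = np.exp(logits) / np.exp(logits).sum()
print(f"P(pathology present) = {probs[0]:.3f}")
```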
2503.13156
Youssef Mourchid
Zakariae Zrimek, Youssef Mourchid, Mohammed El Hassouni
DynSTG-Mamba: Dynamic Spatio-Temporal Graph Mamba with Cross-Graph Knowledge Distillation for Gait Disorders Recognition
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Gait disorder recognition plays a crucial role in the early diagnosis and monitoring of movement disorders. Existing approaches, including spatio-temporal graph convolutional networks (ST-GCNs), often face high memory demands and struggle to capture complex spatio-temporal dependencies, limiting their efficiency in clinical applications. To address these challenges, we introduce DynSTG-Mamba (Dynamic Spatio-Temporal Graph Mamba), a novel framework that combines DF-STGNN and STG-Mamba to enhance motion sequence modeling. The DF-STGNN incorporates a dynamic spatio-temporal filter that adaptively adjusts spatial connections between skeletal joints and temporal interactions across different movement phases. This approach ensures better feature propagation through dynamic graph structures by considering the hierarchical nature and dynamics of skeletal gait data. Meanwhile, STG-Mamba, an extension of Mamba adapted for skeletal motion data, ensures a continuous propagation of states, facilitating the capture of long-term dependencies while reducing computational complexity. To reduce the number of model parameters and computational costs while maintaining consistency, we propose Cross-Graph Relational Knowledge Distillation, a novel knowledge transfer mechanism that aligns relational information between teacher (large architecture) and student (small architecture) models while using shared memory. This ensures that the interactions and movement patterns of the joints are accurately preserved in the motion sequences. We validate our DynSTG-Mamba on the KOA-NM, PD-WALK, and ATAXIA datasets, where it outperforms state-of-the-art approaches in terms of Accuracy, F1-score, and Recall. Our results highlight the efficiency and robustness of our approach, offering a lightweight yet highly accurate solution for automated gait analysis and movement disorder assessment.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 13:26:47 GMT" } ]
2025-03-18T00:00:00
[ [ "Zrimek", "Zakariae", "" ], [ "Mourchid", "Youssef", "" ], [ "Hassouni", "Mohammed El", "" ] ]
TITLE: DynSTG-Mamba: Dynamic Spatio-Temporal Graph Mamba with Cross-Graph Knowledge Distillation for Gait Disorders Recognition ABSTRACT: Gait disorder recognition plays a crucial role in the early diagnosis and monitoring of movement disorders. Existing approaches, including spatio-temporal graph convolutional networks (ST-GCNs), often face high memory demands and struggle to capture complex spatio-temporal dependencies, limiting their efficiency in clinical applications. To address these challenges, we introduce DynSTG-Mamba (Dynamic Spatio-Temporal Graph Mamba), a novel framework that combines DF-STGNN and STG-Mamba to enhance motion sequence modeling. The DF-STGNN incorporates a dynamic spatio-temporal filter that adaptively adjusts spatial connections between skeletal joints and temporal interactions across different movement phases. This approach ensures better feature propagation through dynamic graph structures by considering the hierarchical nature and dynamics of skeletal gait data. Meanwhile, STG-Mamba, an extension of Mamba adapted for skeletal motion data, ensures a continuous propagation of states, facilitating the capture of long-term dependencies while reducing computational complexity. To reduce the number of model parameters and computational costs while maintaining consistency, we propose Cross-Graph Relational Knowledge Distillation, a novel knowledge transfer mechanism that aligns relational information between teacher (large architecture) and student (small architecture) models while using shared memory. This ensures that the interactions and movement patterns of the joints are accurately preserved in the motion sequences. We validate our DynSTG-Mamba on the KOA-NM, PD-WALK, and ATAXIA datasets, where it outperforms state-of-the-art approaches in terms of Accuracy, F1-score, and Recall. Our results highlight the efficiency and robustness of our approach, offering a lightweight yet highly accurate solution for automated gait analysis and movement disorder assessment.
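The cross-graph relational distillation described above can be illustrated, in spirit, by matching the pairwise-distance structure of teacher and student joint features; the loss below is a generic relational-KD sketch, not the paper's exact formulation:

```python
# Minimal sketch of a relational distillation loss: match the normalized
# pairwise-distance matrices of teacher and student joint features. This is
# the generic RKD idea, assumed here as a stand-in for the paper's
# cross-graph mechanism; feature dimensions need not match across models.
import torch

def pairwise_dist(feats: torch.Tensor) -> torch.Tensor:
    # feats: (N, D), one feature per joint; returns a normalized (N, N) matrix.
    d = torch.cdist(feats, feats, p=2)
    return d / (d.mean() + 1e-8)

def relational_kd_loss(student: torch.Tensor, teacher: torch.Tensor) -> torch.Tensor:
    return torch.nn.functional.smooth_l1_loss(
        pairwise_dist(student), pairwise_dist(teacher))

s = torch.randn(25, 64)    # student joint features (smaller width)
t = torch.randn(25, 256)   # teacher joint features (larger width)
print(relational_kd_loss(s, t).item())
```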
2503.13158
Bernd Zimmering
Bernd Zimmering, Cec\'ilia Coelho, Vaibhav Gupta, Maria Maleshkova, Oliver Niggemann
Laplace-Net: Learning Dynamical Systems with External Forcing
Preprint - under review
null
null
null
cs.LG cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modelling forced dynamical systems - where an external input drives the system state - is critical across diverse domains such as engineering, finance, and the natural sciences. In this work, we propose Laplace-Net, a decoupled, solver-free neural framework for learning forced and delay-aware systems. It leverages a Laplace transform-based approach to decompose internal dynamics, external inputs, and initial values into established theoretical concepts, enhancing interpretability. Laplace-Net promotes transferability since the system can be rapidly re-trained or fine-tuned for new forcing signals, providing flexibility in applications ranging from controller adaptation to long-horizon forecasting. Experimental results on eight benchmark datasets - including linear, non-linear, and delayed systems - demonstrate the method's improved accuracy and robustness compared to state-of-the-art approaches, particularly in handling complex and previously unseen inputs.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 13:31:12 GMT" } ]
2025-03-18T00:00:00
[ [ "Zimmering", "Bernd", "" ], [ "Coelho", "Cecília", "" ], [ "Gupta", "Vaibhav", "" ], [ "Maleshkova", "Maria", "" ], [ "Niggemann", "Oliver", "" ] ]
TITLE: Laplace-Net: Learning Dynamical Systems with External Forcing ABSTRACT: Modelling forced dynamical systems - where an external input drives the system state - is critical across diverse domains such as engineering, finance, and the natural sciences. In this work, we propose Laplace-Net, a decoupled, solver-free neural framework for learning forced and delay-aware systems. It leverages a Laplace transform-based approach to decompose internal dynamics, external inputs, and initial values into established theoretical concepts, enhancing interpretability. Laplace-Net promotes transferability since the system can be rapidly re-trained or fine-tuned for new forcing signals, providing flexibility in applications ranging from controller adaptation to long-horizon forecasting. Experimental results on eight benchmark datasets - including linear, non-linear, and delayed systems - demonstrate the method's improved accuracy and robustness compared to state-of-the-art approaches, particularly in handling complex and previously unseen inputs.
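For a linear time-invariant system, the decomposition into internal dynamics, external forcing, and initial values that the abstract alludes to is the textbook Laplace-domain relation below; whether Laplace-Net parameterizes exactly these terms is an assumption:

```latex
% Laplace-domain response of \dot{x} = A x + B u, y = C x with initial
% state x(0): the output splits into a forced term (transfer function times
% the input transform) and a free term driven by the initial condition.
\[
  Y(s) = \underbrace{C (sI - A)^{-1} B \, U(s)}_{\text{forced response}}
       + \underbrace{C (sI - A)^{-1} x(0)}_{\text{initial-value response}}
\]
```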
2503.13160
Zihao Liu
Zihao Liu, Xiaoyu Wu, Jianqin Wu, Xuxu Wang, Linlin Yang
Language-guided Open-world Video Anomaly Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video anomaly detection (VAD) models aim to detect anomalies that deviate from what is expected. In open-world scenarios, the expected events may change as requirements change. For example, not wearing a mask is considered abnormal during a flu outbreak but normal otherwise. However, existing methods assume that the definition of anomalies is invariable, and thus are not applicable to the open world. To address this, we propose a novel open-world VAD paradigm with variable definitions, allowing guided detection through user-provided natural language at inference time. This paradigm necessitates establishing a robust mapping from video and textual definition to anomaly score. Therefore, we propose LaGoVAD (Language-guided Open-world VAD), a model that dynamically adapts anomaly definitions through two regularization strategies: diversifying the relative durations of anomalies via dynamic video synthesis, and enhancing feature robustness through contrastive learning with negative mining. Training such adaptable models requires diverse anomaly definitions, but existing datasets typically provide predefined labels without semantic descriptions. To bridge this gap, we collect PreVAD (Pre-training Video Anomaly Dataset), the largest and most diverse video anomaly dataset to date, featuring 35,279 annotated videos with multi-level category labels and descriptions that explicitly define anomalies. Zero-shot experiments on seven datasets demonstrate SOTA performance. Data and code will be released.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 13:31:19 GMT" } ]
2025-03-18T00:00:00
[ [ "Liu", "Zihao", "" ], [ "Wu", "Xiaoyu", "" ], [ "Wu", "Jianqin", "" ], [ "Wang", "Xuxu", "" ], [ "Yang", "Linlin", "" ] ]
TITLE: Language-guided Open-world Video Anomaly Detection ABSTRACT: Video anomaly detection (VAD) models aim to detect anomalies that deviate from what is expected. In open-world scenarios, the expected events may change as requirements change. For example, not wearing a mask is considered abnormal during a flu outbreak but normal otherwise. However, existing methods assume that the definition of anomalies is invariable, and thus are not applicable to the open world. To address this, we propose a novel open-world VAD paradigm with variable definitions, allowing guided detection through user-provided natural language at inference time. This paradigm necessitates establishing a robust mapping from video and textual definition to anomaly score. Therefore, we propose LaGoVAD (Language-guided Open-world VAD), a model that dynamically adapts anomaly definitions through two regularization strategies: diversifying the relative durations of anomalies via dynamic video synthesis, and enhancing feature robustness through contrastive learning with negative mining. Training such adaptable models requires diverse anomaly definitions, but existing datasets typically provide predefined labels without semantic descriptions. To bridge this gap, we collect PreVAD (Pre-training Video Anomaly Dataset), the largest and most diverse video anomaly dataset to date, featuring 35,279 annotated videos with multi-level category labels and descriptions that explicitly define anomalies. Zero-shot experiments on seven datasets demonstrate SOTA performance. Data and code will be released.
2503.13163
Shani Gamrian
Shani Gamrian, Hila Barel, Feiran Li, Masakazu Yoshimura, Daisuke Iso
Beyond RGB: Adaptive Parallel Processing for RAW Object Detection
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Object detection models are typically applied to standard RGB images processed through Image Signal Processing (ISP) pipelines, which are designed to enhance sensor-captured RAW images for human vision. However, these ISP functions can lead to a loss of critical information that may be essential in optimizing for computer vision tasks, such as object detection. In this work, we introduce Raw Adaptation Module (RAM), a module designed to replace the traditional ISP, with parameters optimized specifically for RAW object detection. Inspired by the parallel processing mechanisms of the human visual system, RAM departs from existing learned ISP methods by applying multiple ISP functions in parallel rather than sequentially, allowing for a more comprehensive capture of image features. These processed representations are then fused in a specialized module, which dynamically integrates and optimizes the information for the target task. This novel approach not only leverages the full potential of RAW sensor data but also enables task-specific pre-processing, resulting in superior object detection performance. Our approach outperforms RGB-based methods and achieves state-of-the-art results across diverse RAW image datasets under varying lighting conditions and dynamic ranges.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 13:36:49 GMT" } ]
2025-03-18T00:00:00
[ [ "Gamrian", "Shani", "" ], [ "Barel", "Hila", "" ], [ "Li", "Feiran", "" ], [ "Yoshimura", "Masakazu", "" ], [ "Iso", "Daisuke", "" ] ]
TITLE: Beyond RGB: Adaptive Parallel Processing for RAW Object Detection ABSTRACT: Object detection models are typically applied to standard RGB images processed through Image Signal Processing (ISP) pipelines, which are designed to enhance sensor-captured RAW images for human vision. However, these ISP functions can lead to a loss of critical information that may be essential in optimizing for computer vision tasks, such as object detection. In this work, we introduce Raw Adaptation Module (RAM), a module designed to replace the traditional ISP, with parameters optimized specifically for RAW object detection. Inspired by the parallel processing mechanisms of the human visual system, RAM departs from existing learned ISP methods by applying multiple ISP functions in parallel rather than sequentially, allowing for a more comprehensive capture of image features. These processed representations are then fused in a specialized module, which dynamically integrates and optimizes the information for the target task. This novel approach not only leverages the full potential of RAW sensor data but also enables task-specific pre-processing, resulting in superior object detection performance. Our approach outperforms RGB-based methods and achieves state-of-the-art results across diverse RAW image datasets under varying lighting conditions and dynamic ranges.
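A minimal PyTorch sketch of the parallel-ISP-then-fuse idea: several simple differentiable ISP-like branches applied in parallel and combined with learned weights. The branch set and fusion rule are illustrative assumptions, not the paper's RAM design:

```python
# Minimal sketch of the parallel-ISP idea: apply simple, differentiable
# ISP-like transforms to a RAW-like tensor in parallel, then fuse them with
# learned softmax weights. The three branches and the fusion rule are
# illustrative assumptions, not the actual RAM architecture.
import torch
import torch.nn as nn

class ParallelISP(nn.Module):
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.tensor(0.45))   # tone/gamma exponent
        self.gains = nn.Parameter(torch.ones(3))        # per-channel white balance
        self.fuse = nn.Parameter(torch.zeros(3))        # fusion logits

    def forward(self, raw: torch.Tensor) -> torch.Tensor:
        # raw: (B, 3, H, W) demosaiced RAW-like values in [0, 1].
        b1 = raw.clamp(min=1e-6) ** self.gamma                  # gamma branch
        b2 = raw * self.gains.view(1, 3, 1, 1)                  # white-balance branch
        b3 = (raw - raw.mean(dim=(2, 3), keepdim=True)) + 0.5   # crude contrast branch
        w = torch.softmax(self.fuse, dim=0)
        return w[0] * b1 + w[1] * b2 + w[2] * b3

x = torch.rand(2, 3, 64, 64)
print(ParallelISP()(x).shape)  # torch.Size([2, 3, 64, 64])
```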
2503.13184
Shihao Yuan
Yuanze Li, Shihao Yuan, Haolin Wang, Qizhang Li, Ming Liu, Chen Xu, Guangming Shi, Wangmeng Zuo
Triad: Empowering LMM-based Anomaly Detection with Vision Expert-guided Visual Tokenizer and Manufacturing Process
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although recent methods have tried to introduce large multimodal models (LMMs) into industrial anomaly detection (IAD), their generalization in the IAD field is far inferior to their performance on general-purpose tasks. We summarize the main reasons for this gap into two aspects. On one hand, general-purpose LMMs lack cognition of defects in the visual modality, thereby failing to sufficiently focus on defect areas. Therefore, we propose to modify the AnyRes structure of the LLaVA model, providing the potential anomalous areas identified by existing IAD models to the LMMs. On the other hand, existing methods mainly focus on identifying defects by learning defect patterns or comparing with normal samples, yet they fall short of understanding the causes of these defects. Considering that the generation of defects is closely related to the manufacturing process, we propose a manufacturing-driven IAD paradigm. An instruction-tuning dataset for IAD (InstructIAD) and a data organization approach for Chain-of-Thought with manufacturing (CoT-M) are designed to leverage the manufacturing process for IAD. Based on the above two modifications, we present Triad, a novel LMM-based method incorporating an expert-guided region-of-interest tokenizer and manufacturing process for industrial anomaly detection. Extensive experiments show that our Triad not only demonstrates competitive performance against current LMMs but also achieves further improved accuracy when equipped with manufacturing processes. Source code, training data, and pre-trained models will be publicly available at https://github.com/tzjtatata/Triad.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 13:56:57 GMT" } ]
2025-03-18T00:00:00
[ [ "Li", "Yuanze", "" ], [ "Yuan", "Shihao", "" ], [ "Wang", "Haolin", "" ], [ "Li", "Qizhang", "" ], [ "Liu", "Ming", "" ], [ "Xu", "Chen", "" ], [ "Shi", "Guangming", "" ], [ "Zuo", "Wangmeng", "" ] ]
TITLE: Triad: Empowering LMM-based Anomaly Detection with Vision Expert-guided Visual Tokenizer and Manufacturing Process ABSTRACT: Although recent methods have tried to introduce large multimodal models (LMMs) into industrial anomaly detection (IAD), their generalization in the IAD field is far inferior to their performance on general-purpose tasks. We summarize the main reasons for this gap into two aspects. On one hand, general-purpose LMMs lack cognition of defects in the visual modality, thereby failing to sufficiently focus on defect areas. Therefore, we propose to modify the AnyRes structure of the LLaVA model, providing the potential anomalous areas identified by existing IAD models to the LMMs. On the other hand, existing methods mainly focus on identifying defects by learning defect patterns or comparing with normal samples, yet they fall short of understanding the causes of these defects. Considering that the generation of defects is closely related to the manufacturing process, we propose a manufacturing-driven IAD paradigm. An instruction-tuning dataset for IAD (InstructIAD) and a data organization approach for Chain-of-Thought with manufacturing (CoT-M) are designed to leverage the manufacturing process for IAD. Based on the above two modifications, we present Triad, a novel LMM-based method incorporating an expert-guided region-of-interest tokenizer and manufacturing process for industrial anomaly detection. Extensive experiments show that our Triad not only demonstrates competitive performance against current LMMs but also achieves further improved accuracy when equipped with manufacturing processes. Source code, training data, and pre-trained models will be publicly available at https://github.com/tzjtatata/Triad.
2503.13185
Cheng Wang
Dingning Liu, Cheng Wang, Peng Gao, Renrui Zhang, Xinzhu Ma, Yuan Meng, Zhihui Wang
3DAxisPrompt: Promoting the 3D Grounding and Reasoning in GPT-4o
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Multimodal Large Language Models (MLLMs) exhibit impressive capabilities across a variety of tasks, especially when equipped with carefully designed visual prompts. However, existing studies primarily focus on logical reasoning and visual understanding, while the capability of MLLMs to operate effectively in 3D vision remains an ongoing area of exploration. In this paper, we introduce a novel visual prompting method, called 3DAxisPrompt, to elicit the 3D understanding capabilities of MLLMs in real-world scenes. More specifically, our method leverages the 3D coordinate axis and masks generated from the Segment Anything Model (SAM) to provide explicit geometric priors to MLLMs and then extends their impressive 2D grounding and reasoning ability to real-world 3D scenarios. In addition, we first provide a thorough investigation of the potential visual prompting formats and summarize our findings to reveal the potential and limits of 3D understanding capabilities in GPT-4o, as a representative of MLLMs. Finally, we build evaluation environments with four datasets, i.e., the ScanRefer, ScanNet, FMB, and nuScenes datasets, covering various 3D tasks. Based on this, we conduct extensive quantitative and qualitative experiments, which demonstrate the effectiveness of the proposed method. Overall, our study reveals that MLLMs, with the help of 3DAxisPrompt, can effectively perceive an object's 3D position in real-world scenarios. Nevertheless, a single prompt engineering approach does not consistently achieve the best outcomes for all 3D tasks. This study highlights the feasibility of leveraging MLLMs for 3D vision grounding/reasoning with prompt engineering techniques.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 13:57:05 GMT" } ]
2025-03-18T00:00:00
[ [ "Liu", "Dingning", "" ], [ "Wang", "Cheng", "" ], [ "Gao", "Peng", "" ], [ "Zhang", "Renrui", "" ], [ "Ma", "Xinzhu", "" ], [ "Meng", "Yuan", "" ], [ "Wang", "Zhihui", "" ] ]
TITLE: 3DAxisPrompt: Promoting the 3D Grounding and Reasoning in GPT-4o ABSTRACT: Multimodal Large Language Models (MLLMs) exhibit impressive capabilities across a variety of tasks, especially when equipped with carefully designed visual prompts. However, existing studies primarily focus on logical reasoning and visual understanding, while the capability of MLLMs to operate effectively in 3D vision remains an ongoing area of exploration. In this paper, we introduce a novel visual prompting method, called 3DAxisPrompt, to elicit the 3D understanding capabilities of MLLMs in real-world scenes. More specifically, our method leverages the 3D coordinate axis and masks generated from the Segment Anything Model (SAM) to provide explicit geometric priors to MLLMs and then extends their impressive 2D grounding and reasoning ability to real-world 3D scenarios. In addition, we first provide a thorough investigation of the potential visual prompting formats and summarize our findings to reveal the potential and limits of 3D understanding capabilities in GPT-4o, as a representative of MLLMs. Finally, we build evaluation environments with four datasets, i.e., the ScanRefer, ScanNet, FMB, and nuScenes datasets, covering various 3D tasks. Based on this, we conduct extensive quantitative and qualitative experiments, which demonstrate the effectiveness of the proposed method. Overall, our study reveals that MLLMs, with the help of 3DAxisPrompt, can effectively perceive an object's 3D position in real-world scenarios. Nevertheless, a single prompt engineering approach does not consistently achieve the best outcomes for all 3D tasks. This study highlights the feasibility of leveraging MLLMs for 3D vision grounding/reasoning with prompt engineering techniques.
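The geometric core of such a prompt is projecting a 3D coordinate axis into the image with the camera intrinsics; a minimal pinhole-projection sketch (the intrinsics, axis origin, and axis length are made-up values):

```python
# Minimal sketch: project a 3D coordinate axis into image space with a
# pinhole camera, the geometric core of drawing a 3DAxisPrompt-style overlay.
# The intrinsics, axis origin, and 0.3 m axis length are invented values.
import numpy as np

K = np.array([[500.0,   0.0, 320.0],   # fx,  0, cx
              [  0.0, 500.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])

def project(points_3d: np.ndarray) -> np.ndarray:
    # points_3d: (N, 3) in camera coordinates with Z > 0.
    uvw = (K @ points_3d.T).T
    return uvw[:, :2] / uvw[:, 2:3]

origin = np.array([0.0, 0.0, 2.0])
axes = np.stack([origin,
                 origin + [0.3, 0.0, 0.0],   # X-axis endpoint
                 origin + [0.0, 0.3, 0.0],   # Y-axis endpoint
                 origin + [0.0, 0.0, 0.3]])  # Z-axis endpoint
print(project(axes))  # pixel coordinates to draw the three axis lines between
```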
2503.13188
Matteo Sodano
Matteo Sodano, Federico Magistri, Elias Marks, Fares Hosn, Aibek Zurbayev, Rodrigo Marcuzzi, Meher V. R. Malladi, Jens Behley, Cyrill Stachniss
3D Hierarchical Panoptic Segmentation in Real Orchard Environments Across Different Sensors
Submitted to IROS
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Crop yield estimation is a relevant problem in agriculture, because an accurate crop yield estimate can support farmers' decisions on harvesting or precision intervention. Robots can help to automate this process. To do so, they need to be able to perceive the surrounding environment to identify target objects. In this paper, we introduce a novel approach to address the problem of hierarchical panoptic segmentation of apple orchards on 3D data from different sensors. Our approach is able to simultaneously provide semantic segmentation, instance segmentation of trunks and fruits, and instance segmentation of plants (a single trunk with its fruits). This allows us to identify relevant information such as individual plants, fruits, and trunks, and capture the relationship among them, such as precisely estimating the number of fruits associated with each tree in an orchard. Additionally, to efficiently evaluate our approach for hierarchical panoptic segmentation, we provide a dataset designed specifically for this task. Our dataset is recorded in Bonn in a real apple orchard with a variety of sensors, spanning from a terrestrial laser scanner to an RGB-D camera mounted on different robotic platforms. The experiments show that our approach surpasses state-of-the-art approaches in 3D panoptic segmentation in the agricultural domain, while also providing full hierarchical panoptic segmentation. Our dataset has been made publicly available at https://www.ipb.uni-bonn.de/data/hops/. We will provide the open-source implementation of our approach and a public competition for hierarchical panoptic segmentation on the hidden test sets upon paper acceptance.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 13:59:20 GMT" } ]
2025-03-18T00:00:00
[ [ "Sodano", "Matteo", "" ], [ "Magistri", "Federico", "" ], [ "Marks", "Elias", "" ], [ "Hosn", "Fares", "" ], [ "Zurbayev", "Aibek", "" ], [ "Marcuzzi", "Rodrigo", "" ], [ "Malladi", "Meher V. R.", "" ], [ "Behley", "Jens", "" ], [ "Stachniss", "Cyrill", "" ] ]
TITLE: 3D Hierarchical Panoptic Segmentation in Real Orchard Environments Across Different Sensors ABSTRACT: Crop yield estimation is a relevant problem in agriculture, because an accurate crop yield estimate can support farmers' decisions on harvesting or precision intervention. Robots can help to automate this process. To do so, they need to be able to perceive the surrounding environment to identify target objects. In this paper, we introduce a novel approach to address the problem of hierarchical panoptic segmentation of apple orchards on 3D data from different sensors. Our approach is able to simultaneously provide semantic segmentation, instance segmentation of trunks and fruits, and instance segmentation of plants (a single trunk with its fruits). This allows us to identify relevant information such as individual plants, fruits, and trunks, and capture the relationship among them, such as precisely estimating the number of fruits associated with each tree in an orchard. Additionally, to efficiently evaluate our approach for hierarchical panoptic segmentation, we provide a dataset designed specifically for this task. Our dataset is recorded in Bonn in a real apple orchard with a variety of sensors, spanning from a terrestrial laser scanner to an RGB-D camera mounted on different robotic platforms. The experiments show that our approach surpasses state-of-the-art approaches in 3D panoptic segmentation in the agricultural domain, while also providing full hierarchical panoptic segmentation. Our dataset has been made publicly available at https://www.ipb.uni-bonn.de/data/hops/. We will provide the open-source implementation of our approach and a public competition for hierarchical panoptic segmentation on the hidden test sets upon paper acceptance.
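The plant-level grouping described above amounts to associating each fruit instance with a trunk instance; a minimal nearest-trunk sketch with invented centroids (the paper's learned association is more sophisticated):

```python
# Minimal sketch of the hierarchical grouping step: assign each fruit
# instance to its nearest trunk centroid so that a "plant" = one trunk plus
# its fruits. The centroids are invented; the actual approach learns this
# association from 3D data.
import numpy as np

trunks = np.array([[0.0, 0.0, 1.0],    # trunk 0 centroid (x, y, z)
                   [3.0, 0.5, 1.0]])   # trunk 1 centroid
fruits = np.array([[0.2, 0.1, 1.8],
                   [2.8, 0.4, 1.6],
                   [3.1, 0.7, 1.9]])

# Distance of every fruit to every trunk, then argmin over trunks.
d = np.linalg.norm(fruits[:, None, :] - trunks[None, :, :], axis=-1)
plant_id = d.argmin(axis=1)
print(plant_id)                                        # [0 1 1]
print(np.bincount(plant_id, minlength=len(trunks)))    # fruits per tree: [1 2]
```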
2503.13195
Jianhua Pei
Haoqi Huang, Ping Wang, Jianhua Pei, Jiacheng Wang, Shahen Alexanian, Dusit Niyato
Deep Learning Advancements in Anomaly Detection: A Comprehensive Survey
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid expansion of data from diverse sources has made anomaly detection (AD) increasingly essential for identifying unexpected observations that may signal system failures, security breaches, or fraud. As datasets become more complex and high-dimensional, traditional detection methods struggle to effectively capture intricate patterns. Advances in deep learning have made AD methods more powerful and adaptable, improving their ability to handle high-dimensional and unstructured data. This survey provides a comprehensive review of over 180 recent studies, focusing on deep learning-based AD techniques. We categorize these methods into reconstruction-based and prediction-based approaches and analyze them, highlighting their effectiveness in modeling complex data distributions. Additionally, we explore the integration of traditional and deep learning methods, highlighting how hybrid approaches combine the interpretability of traditional techniques with the flexibility of deep learning to enhance detection accuracy and model transparency. Finally, we identify open issues and propose future research directions to advance the field of AD. This review bridges gaps in existing literature and serves as a valuable resource for researchers and practitioners seeking to enhance AD techniques using deep learning.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 14:04:48 GMT" } ]
2025-03-18T00:00:00
[ [ "Huang", "Haoqi", "" ], [ "Wang", "Ping", "" ], [ "Pei", "Jianhua", "" ], [ "Wang", "Jiacheng", "" ], [ "Alexanian", "Shahen", "" ], [ "Niyato", "Dusit", "" ] ]
TITLE: Deep Learning Advancements in Anomaly Detection: A Comprehensive Survey ABSTRACT: The rapid expansion of data from diverse sources has made anomaly detection (AD) increasingly essential for identifying unexpected observations that may signal system failures, security breaches, or fraud. As datasets become more complex and high-dimensional, traditional detection methods struggle to effectively capture intricate patterns. Advances in deep learning have made AD methods more powerful and adaptable, improving their ability to handle high-dimensional and unstructured data. This survey provides a comprehensive review of over 180 recent studies, focusing on deep learning-based AD techniques. We categorize these methods into reconstruction-based and prediction-based approaches and analyze them, highlighting their effectiveness in modeling complex data distributions. Additionally, we explore the integration of traditional and deep learning methods, highlighting how hybrid approaches combine the interpretability of traditional techniques with the flexibility of deep learning to enhance detection accuracy and model transparency. Finally, we identify open issues and propose future research directions to advance the field of AD. This review bridges gaps in existing literature and serves as a valuable resource for researchers and practitioners seeking to enhance AD techniques using deep learning.
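The reconstruction-based family surveyed above scores a sample by how poorly a model trained on normal data reconstructs it; a minimal sketch using PCA as a linear autoencoder, with an illustrative 95th-percentile threshold:

```python
# Minimal sketch of reconstruction-based anomaly detection: fit a linear
# "autoencoder" (PCA) on normal data and flag samples with high
# reconstruction error. The rank-3 synthetic data and the 95th-percentile
# threshold are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 10))  # rank-3 normal data
anomaly = rng.normal(loc=3.0, size=(5, 10))                    # shifted outliers

mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
W = Vt[:3]                                     # keep 3 principal components

def recon_error(x: np.ndarray) -> np.ndarray:
    z = (x - mean) @ W.T                       # encode
    x_hat = z @ W + mean                       # decode
    return np.linalg.norm(x - x_hat, axis=1)

threshold = np.percentile(recon_error(normal), 95)
print(recon_error(anomaly) > threshold)        # ideally all True
```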
2503.13203
Corentin Sautier
Corentin Sautier, Gilles Puy, Alexandre Boulch, Renaud Marlet, Vincent Lepetit
Clustering is back: Reaching state-of-the-art LiDAR instance segmentation without training
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Panoptic segmentation of LiDAR point clouds is fundamental to outdoor scene understanding, with autonomous driving being a primary application. While state-of-the-art approaches typically rely on end-to-end deep learning architectures and extensive manual annotations of instances, the significant cost and time investment required for labeling large-scale point cloud datasets remains a major bottleneck in this field. In this work, we demonstrate that competitive panoptic segmentation can be achieved using only semantic labels, with instances predicted without any training or annotations. Our method achieves performance comparable to current state-of-the-art supervised methods on standard benchmarks including SemanticKITTI and nuScenes, and outperforms every publicly available method on SemanticKITTI as a drop-in instance head replacement, while running in real-time on a single-threaded CPU and requiring no instance labels. Our method is fully explainable, and requires no learning or parameter tuning. Code is available at https://github.com/valeoai/Alpine/
[ { "version": "v1", "created": "Mon, 17 Mar 2025 14:12:08 GMT" } ]
2025-03-18T00:00:00
[ [ "Sautier", "Corentin", "" ], [ "Puy", "Gilles", "" ], [ "Boulch", "Alexandre", "" ], [ "Marlet", "Renaud", "" ], [ "Lepetit", "Vincent", "" ] ]
TITLE: Clustering is back: Reaching state-of-the-art LiDAR instance segmentation without training ABSTRACT: Panoptic segmentation of LiDAR point clouds is fundamental to outdoor scene understanding, with autonomous driving being a primary application. While state-of-the-art approaches typically rely on end-to-end deep learning architectures and extensive manual annotations of instances, the significant cost and time investment required for labeling large-scale point cloud datasets remains a major bottleneck in this field. In this work, we demonstrate that competitive panoptic segmentation can be achieved using only semantic labels, with instances predicted without any training or annotations. Our method achieves performance comparable to current state-of-the-art supervised methods on standard benchmarks including SemanticKITTI and nuScenes, and outperforms every publicly available method on SemanticKITTI as a drop-in instance head replacement, while running in real-time on a single-threaded CPU and requiring no instance labels. Our method is fully explainable, and requires no learning or parameter tuning. Code is available at https://github.com/valeoai/Alpine/
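The training-free instance step above is, at its core, spatial clustering of points that share a "thing" semantic label; a minimal DBSCAN sketch on synthetic bird's-eye-view points (the eps/min_samples values are illustrative, not the paper's):

```python
# Minimal sketch of training-free instance segmentation: given points already
# labeled with a "thing" class (e.g. car), cluster them spatially so each
# cluster becomes one instance. DBSCAN and its parameters are illustrative
# stand-ins for the paper's clustering.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
car_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(40, 2))  # one car (BEV)
car_b = rng.normal(loc=[8.0, 2.0], scale=0.3, size=(40, 2))  # another car
points = np.vstack([car_a, car_b])

labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(points)
print(np.unique(labels))  # two instance ids, e.g. [0 1]; -1 would mean noise
```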
2503.13205
Zhen Chen
Zhen Chen, Zhihao Peng, Xusheng Liang, Cheng Wang, Peigan Liang, Linsheng Zeng, Minjie Ju, Yixuan Yuan
MAP: Evaluation and Multi-Agent Enhancement of Large Language Models for Inpatient Pathways
null
null
null
null
cs.AI cs.CL cs.CV cs.HC cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inpatient pathways demand complex clinical decision-making based on comprehensive patient information, posing critical challenges for clinicians. Despite advancements in large language models (LLMs) in medical applications, limited research has focused on artificial intelligence (AI) systems for inpatient pathways, due to the lack of large-scale inpatient datasets. Moreover, existing medical benchmarks have typically concentrated on medical question-answering and examinations, ignoring the multifaceted nature of clinical decision-making in inpatient settings. To address these gaps, we first developed the Inpatient Pathway Decision Support (IPDS) benchmark from the MIMIC-IV database, encompassing 51,274 cases across nine triage departments and 17 major disease categories alongside 16 standardized treatment options. Then, we proposed the Multi-Agent Inpatient Pathways (MAP) framework to accomplish inpatient pathways with three clinical agents, including a triage agent managing the patient admission, a diagnosis agent serving as the primary decision maker at the department, and a treatment agent providing treatment plans. Additionally, our MAP framework includes a chief agent overseeing the inpatient pathways to guide and promote these three clinical agents. Extensive experiments showed our MAP improved the diagnosis accuracy by 25.10% compared to the state-of-the-art LLM HuatuoGPT2-13B. It is worth noting that our MAP demonstrated significant clinical compliance, outperforming three board-certified clinicians by 10%-12%, establishing a foundation for inpatient pathways systems.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 14:14:28 GMT" } ]
2025-03-18T00:00:00
[ [ "Chen", "Zhen", "" ], [ "Peng", "Zhihao", "" ], [ "Liang", "Xusheng", "" ], [ "Wang", "Cheng", "" ], [ "Liang", "Peigan", "" ], [ "Zeng", "Linsheng", "" ], [ "Ju", "Minjie", "" ], [ "Yuan", "Yixuan", "" ] ]
TITLE: MAP: Evaluation and Multi-Agent Enhancement of Large Language Models for Inpatient Pathways ABSTRACT: Inpatient pathways demand complex clinical decision-making based on comprehensive patient information, posing critical challenges for clinicians. Despite advancements in large language models (LLMs) in medical applications, limited research has focused on artificial intelligence (AI) systems for inpatient pathways, due to the lack of large-scale inpatient datasets. Moreover, existing medical benchmarks have typically concentrated on medical question-answering and examinations, ignoring the multifaceted nature of clinical decision-making in inpatient settings. To address these gaps, we first developed the Inpatient Pathway Decision Support (IPDS) benchmark from the MIMIC-IV database, encompassing 51,274 cases across nine triage departments and 17 major disease categories alongside 16 standardized treatment options. Then, we proposed the Multi-Agent Inpatient Pathways (MAP) framework to accomplish inpatient pathways with three clinical agents, including a triage agent managing the patient admission, a diagnosis agent serving as the primary decision maker at the department, and a treatment agent providing treatment plans. Additionally, our MAP framework includes a chief agent overseeing the inpatient pathways to guide and promote these three clinical agents. Extensive experiments showed our MAP improved the diagnosis accuracy by 25.10% compared to the state-of-the-art LLM HuatuoGPT2-13B. It is worth noting that our MAP demonstrated significant clinical compliance, outperforming three board-certified clinicians by 10%-12%, establishing a foundation for inpatient pathways systems.
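A minimal sketch of the three-stage pipeline described above, with plain rule-based functions standing in for the LLM agents; in MAP each step is an LLM agent and a chief agent reviews the outputs, so everything below is a placeholder:

```python
# Minimal sketch of a triage -> diagnosis -> treatment pipeline. The
# rule-based bodies are hypothetical placeholders; MAP implements each stage
# as an LLM agent with a chief agent overseeing the whole pathway.
def triage_agent(record: dict) -> str:
    return "cardiology" if "chest pain" in record["complaint"] else "general"

def diagnosis_agent(department: str, record: dict) -> str:
    return "acute coronary syndrome" if department == "cardiology" else "unspecified"

def treatment_agent(diagnosis: str) -> str:
    return "admit + antiplatelet therapy" if "coronary" in diagnosis else "observe"

patient = {"complaint": "chest pain radiating to left arm"}
dept = triage_agent(patient)
dx = diagnosis_agent(dept, patient)
plan = treatment_agent(dx)
print(dept, "->", dx, "->", plan)
```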
2503.13211
Marvin Seyfarth
Marvin Seyfarth, Salman Ul Hassan Dar, Isabelle Ayx, Matthias Alexander Fink, Stefan O. Schoenberg, Hans-Ulrich Kauczor, Sandy Engelhardt
MedLoRD: A Medical Low-Resource Diffusion Model for High-Resolution 3D CT Image Synthesis
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advancements in AI for medical imaging offer significant potential. However, their applications are constrained by the limited availability of data and the reluctance of medical centers to share it due to patient privacy concerns. Generative models present a promising solution by creating synthetic data as a substitute for real patient data. However, medical images are typically high-dimensional, and current state-of-the-art methods are often impractical for computational resource-constrained healthcare environments. These models rely on data sub-sampling, raising doubts about their feasibility and real-world applicability. Furthermore, many of these models are evaluated on quantitative metrics that alone can be misleading in assessing the image quality and clinical meaningfulness of the generated images. To address this, we introduce MedLoRD, a generative diffusion model designed for computational resource-constrained environments. MedLoRD is capable of generating high-dimensional medical volumes with resolutions up to 512$\times$512$\times$256, utilizing GPUs with only 24GB VRAM, which are commonly found in standard desktop workstations. MedLoRD is evaluated across multiple modalities, including Coronary Computed Tomography Angiography and Lung Computed Tomography datasets. Extensive evaluations through radiological assessment, relative regional volume analysis, adherence to conditional masks, and downstream tasks show that MedLoRD generates high-fidelity images closely adhering to segmentation mask conditions, surpassing the capabilities of current state-of-the-art generative models for medical image synthesis in computational resource-constrained environments.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 14:22:49 GMT" } ]
2025-03-18T00:00:00
[ [ "Seyfarth", "Marvin", "" ], [ "Dar", "Salman Ul Hassan", "" ], [ "Ayx", "Isabelle", "" ], [ "Fink", "Matthias Alexander", "" ], [ "Schoenberg", "Stefan O.", "" ], [ "Kauczor", "Hans-Ulrich", "" ], [ "Engelhardt", "Sandy", "" ] ]
TITLE: MedLoRD: A Medical Low-Resource Diffusion Model for High-Resolution 3D CT Image Synthesis ABSTRACT: Advancements in AI for medical imaging offer significant potential. However, their applications are constrained by the limited availability of data and the reluctance of medical centers to share it due to patient privacy concerns. Generative models present a promising solution by creating synthetic data as a substitute for real patient data. However, medical images are typically high-dimensional, and current state-of-the-art methods are often impractical for computational resource-constrained healthcare environments. These models rely on data sub-sampling, raising doubts about their feasibility and real-world applicability. Furthermore, many of these models are evaluated on quantitative metrics that alone can be misleading in assessing the image quality and clinical meaningfulness of the generated images. To address this, we introduce MedLoRD, a generative diffusion model designed for computational resource-constrained environments. MedLoRD is capable of generating high-dimensional medical volumes with resolutions up to 512$\times$512$\times$256, utilizing GPUs with only 24GB VRAM, which are commonly found in standard desktop workstations. MedLoRD is evaluated across multiple modalities, including Coronary Computed Tomography Angiography and Lung Computed Tomography datasets. Extensive evaluations through radiological assessment, relative regional volume analysis, adherence to conditional masks, and downstream tasks show that MedLoRD generates high-fidelity images closely adhering to segmentation mask conditions, surpassing the capabilities of current state-of-the-art generative models for medical image synthesis in computational resource-constrained environments.
2503.13226
Konstantinos Nikoletos
Konstantinos Nikoletos, Vasilis Efthymiou, George Papadakis, Kostas Stefanidis
Auto-Configuring Entity Resolution Pipelines
null
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
The same real-world entity (e.g., a movie, a restaurant, a person) may be described in various ways on different datasets. Entity Resolution (ER) aims to find such different descriptions of the same entity, thereby improving data quality and, in turn, data value. However, an ER pipeline typically involves several steps (e.g., blocking, similarity estimation, clustering), with each step requiring its own configurations and tuning. The choice of the best configuration, among a vast number of possible combinations, is a dataset-specific and labor-intensive task both for novice and expert users, while it often requires some ground truth knowledge of real matches. In this work, we examine ways of automatically configuring a state-of-the-art end-to-end ER pipeline based on pre-trained language models under two settings: (i) When ground truth is available. In this case, sampling strategies that are typically used for hyperparameter optimization can significantly restrict the search of the configuration space. We experimentally compare their relative effectiveness and time efficiency, applying them to ER pipelines for the first time. (ii) When no ground truth is available. In this case, labelled data extracted from other datasets with available ground truth can be used to train a regression model that predicts the relative effectiveness of parameter configurations. Experimenting with 11 ER benchmark datasets, we evaluate the relative performance of existing techniques that address each problem, but have not been applied to ER before.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 14:41:37 GMT" } ]
2025-03-18T00:00:00
[ [ "Nikoletos", "Konstantinos", "" ], [ "Efthymiou", "Vasilis", "" ], [ "Papadakis", "George", "" ], [ "Stefanidis", "Kostas", "" ] ]
TITLE: Auto-Configuring Entity Resolution Pipelines ABSTRACT: The same real-world entity (e.g., a movie, a restaurant, a person) may be described in various ways on different datasets. Entity Resolution (ER) aims to find such different descriptions of the same entity, thereby improving data quality and, in turn, data value. However, an ER pipeline typically involves several steps (e.g., blocking, similarity estimation, clustering), with each step requiring its own configurations and tuning. The choice of the best configuration, among a vast number of possible combinations, is a dataset-specific and labor-intensive task both for novice and expert users, while it often requires some ground truth knowledge of real matches. In this work, we examine ways of automatically configuring a state-of-the-art end-to-end ER pipeline based on pre-trained language models under two settings: (i) When ground truth is available. In this case, sampling strategies that are typically used for hyperparameter optimization can significantly restrict the search of the configuration space. We experimentally compare their relative effectiveness and time efficiency, applying them to ER pipelines for the first time. (ii) When no ground truth is available. In this case, labelled data extracted from other datasets with available ground truth can be used to train a regression model that predicts the relative effectiveness of parameter configurations. Experimenting with 11 ER benchmark datasets, we evaluate the relative performance of existing techniques that address each problem, but have not been applied to ER before.
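Setting (ii) above can be sketched as plain supervised regression over (dataset profile, configuration) pairs collected from benchmarks that do have ground truth; the features, configurations, and scores below are invented for illustration:

```python
# Minimal sketch of setting (ii): train a regressor on (dataset profile,
# configuration) -> F1 pairs gathered from benchmarks WITH ground truth, then
# rank candidate configurations for a new, unlabeled dataset. All features,
# configurations, and scores are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Each row: [dataset size (norm.), avg tokens per record, block size, sim threshold]
X_train = rng.random((200, 4))
y_train = rng.random(200)        # observed pipeline F1 on labeled benchmarks

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

new_profile = np.array([0.7, 0.4])          # computable without any labels
candidates = rng.random((50, 2))            # 50 (block size, threshold) configs
X_new = np.hstack([np.tile(new_profile, (50, 1)), candidates])
best = candidates[model.predict(X_new).argmax()]
print("predicted-best config (block size, threshold):", best)
```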
2503.13229
Yongkang Cheng
Yongkang Cheng, Shaoli Huang
HoloGest: Decoupled Diffusion and Motion Priors for Generating Holisticly Expressive Co-speech Gestures
Accepted by 3DV 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Animating virtual characters with holistic co-speech gestures is a challenging but critical task. Previous systems have primarily focused on the weak correlation between audio and gestures, leading to physically unnatural outcomes that degrade the user experience. To address this problem, we introduce HoloGest, a novel neural network framework based on decoupled diffusion and motion priors for the automatic generation of high-quality, expressive co-speech gestures. Our system leverages large-scale human motion datasets to learn a robust prior with low audio dependency and high motion reliance, enabling stable global motion and detailed finger movements. To improve the generation efficiency of diffusion-based models, we integrate implicit joint constraints with explicit geometric and conditional constraints, capturing complex motion distributions between large strides. This integration significantly enhances generation speed while maintaining high-quality motion. Furthermore, we design a shared embedding space for gesture-transcription text alignment, enabling the generation of semantically correct gesture actions. Extensive experiments and user feedback demonstrate the effectiveness and potential applications of our model, with our method achieving a level of realism close to the ground truth, providing an immersive user experience. Our code, model, and demo are available at https://cyk990422.github.io/HoloGest.github.io/.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 14:42:31 GMT" } ]
2025-03-18T00:00:00
[ [ "Cheng", "Yongkang", "" ], [ "Huang", "Shaoli", "" ] ]
TITLE: HoloGest: Decoupled Diffusion and Motion Priors for Generating Holistically Expressive Co-speech Gestures ABSTRACT: Animating virtual characters with holistic co-speech gestures is a challenging but critical task. Previous systems have primarily focused on the weak correlation between audio and gestures, leading to physically unnatural outcomes that degrade the user experience. To address this problem, we introduce HoloGest, a novel neural network framework based on decoupled diffusion and motion priors for the automatic generation of high-quality, expressive co-speech gestures. Our system leverages large-scale human motion datasets to learn a robust prior with low audio dependency and high motion reliance, enabling stable global motion and detailed finger movements. To improve the generation efficiency of diffusion-based models, we integrate implicit joint constraints with explicit geometric and conditional constraints, capturing complex motion distributions between large strides. This integration significantly enhances generation speed while maintaining high-quality motion. Furthermore, we design a shared embedding space for gesture-transcription text alignment, enabling the generation of semantically correct gesture actions. Extensive experiments and user feedback demonstrate the effectiveness and potential applications of our model, with our method achieving a level of realism close to the ground truth, providing an immersive user experience. Our code, model, and demo are available at https://cyk990422.github.io/HoloGest.github.io/.
2503.13244
Weiwei Su
Weiwei Su, Shigefumi Hata, Hiroshi Kori, Hiroya Nakao, and Ryota Kobayashi
Pairwise vs Higher-order interactions: Can we identify the interaction type in coupled oscillators from time series?
16 pages, 6 figures
null
null
null
nlin.CD physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rhythmic phenomena, which are ubiquitous in biological systems, are typically modelled as systems of coupled limit cycle oscillators. Recently, there has been an increased interest in understanding the impact of higher-order interactions on the population dynamics of coupled oscillators. Meanwhile, estimating a mathematical model from experimental data is a vital step in understanding the dynamics of real-world complex systems. In coupled oscillator systems, identifying the type of interaction (e.g. pairwise or three-body) of a network is challenging, because different interactions can induce similar dynamical states and bifurcations. In this study, we have developed a method based on the adaptive LASSO (Least Absolute Shrinkage and Selection Operator) to infer the interactions between the oscillators from time series data. The proposed method can successfully classify the type of interaction and infer the probabilities of the existence of pairwise and three-body couplings. Through systematic analysis of synthetic datasets, we have demonstrated that our method outperforms two baseline methods, LASSO and OLS (Ordinary Least Squares), in accurately inferring the topology and strength of couplings between oscillators. Finally, we demonstrate the effectiveness of the proposed method by applying it to the synthetic data of 100 oscillators. These results imply that the proposed method is promising for identifying interactions from rhythmic activities in real-world systems.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 14:57:59 GMT" } ]
2025-03-18T00:00:00
[ [ "Su", "Weiwei", "" ], [ "Hata", "Shigefumi", "" ], [ "Kori", "Hiroshi", "" ], [ "Nakao", "Hiroya", "" ], [ "Kobayashi", "Ryota", "" ] ]
TITLE: Pairwise vs Higher-order interactions: Can we identify the interaction type in coupled oscillators from time series? ABSTRACT: Rhythmic phenomena, which are ubiquitous in biological systems, are typically modelled as systems of coupled limit cycle oscillators. Recently, there has been an increased interest in understanding the impact of higher-order interactions on the population dynamics of coupled oscillators. Meanwhile, estimating a mathematical model from experimental data is a vital step in understanding the dynamics of real-world complex systems. In coupled oscillator systems, identifying the type of interaction (e.g. pairwise or three-body) of a network is challenging, because different interactions can induce similar dynamical states and bifurcations. In this study, we have developed a method based on the adaptive LASSO (Least Absolute Shrinkage and Selection Operator) to infer the interactions between the oscillators from time series data. The proposed method can successfully classify the type of interaction and infer the probabilities of the existence of pairwise and three-body couplings. Through systematic analysis of synthetic datasets, we have demonstrated that our method outperforms two baseline methods, LASSO and OLS (Ordinary Least Squares), in accurately inferring the topology and strength of couplings between oscillators. Finally, we demonstrate the effectiveness of the proposed method by applying it to the synthetic data of 100 oscillators. These results imply that the proposed method is promising for identifying interactions from rhythmic activities in real-world systems.
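The core estimator the abstract names, the adaptive LASSO, can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions: phase-velocity targets `y`, a design matrix `X` whose columns are candidate pairwise and three-body coupling terms, and the standard two-stage adaptive weighting. It is not the authors' full inference pipeline.

```python
# Hedged sketch: two-stage adaptive LASSO for sparse coupling inference.
# Column construction (sin of phase differences) follows common phase-model
# conventions and is an assumption, not the paper's exact basis.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

def adaptive_lasso(X, y, alpha=0.01, gamma=1.0):
    """OLS pilot fit, then an L1 fit with weights 1/|beta_ols|^gamma."""
    pilot = LinearRegression().fit(X, y)
    w = 1.0 / (np.abs(pilot.coef_) ** gamma + 1e-8)  # adaptive weights
    X_scaled = X / w           # rescaling turns the weighted L1 into plain L1
    lasso = Lasso(alpha=alpha, max_iter=50_000).fit(X_scaled, y)
    return lasso.coef_ / w     # undo the rescaling to recover coefficients

# Toy use: one active pairwise term, one spurious three-body column.
rng = np.random.default_rng(0)
n = 2000
theta = rng.uniform(0, 2 * np.pi, size=(n, 3))
X = np.column_stack([
    np.sin(theta[:, 1] - theta[:, 0]),                    # pairwise term
    np.sin(theta[:, 1] + theta[:, 2] - 2 * theta[:, 0]),  # three-body term
])
y = 0.8 * X[:, 0] + 0.05 * rng.standard_normal(n)  # only pairwise is active
print(adaptive_lasso(X, y))  # three-body coefficient should shrink to ~0
```

The adaptive reweighting is what lets the estimator zero out the interaction type that is absent, which is exactly the classification task the abstract describes.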
2503.13252
Jingqi Jiang
Jingqi Jiang, Shida Xu, Kaicheng Zhang, Jiyuan Wei, Jingyang Wang and Sen Wang
Digital Beamforming Enhanced Radar Odometry
null
null
null
null
cs.RO eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Radar has become an essential sensor for autonomous navigation, especially in challenging environments where camera and LiDAR sensors fail. 4D single-chip millimeter-wave radar systems, in particular, have drawn increasing attention thanks to their ability to provide spatial and Doppler information with low hardware cost and power consumption. However, most single-chip radar systems using traditional signal processing, such as Fast Fourier Transform, suffer from limited spatial resolution in radar detection, significantly limiting the performance of radar-based odometry and Simultaneous Localization and Mapping (SLAM) systems. In this paper, we develop a novel radar signal processing pipeline that integrates spatial domain beamforming techniques, and extend it to 3D Direction of Arrival estimation. Experiments using public datasets are conducted to evaluate and compare the performance of our proposed signal processing pipeline against traditional methodologies. These tests specifically focus on assessing structural precision across diverse scenes and measuring odometry accuracy in different radar odometry systems. This research demonstrates the feasibility of achieving more accurate radar odometry by simply replacing the standard FFT-based processing with the proposed pipeline. The code is available on GitHub*.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 15:08:21 GMT" } ]
2025-03-18T00:00:00
[ [ "Jiang", "Jingqi", "" ], [ "Xu", "Shida", "" ], [ "Zhang", "Kaicheng", "" ], [ "Wei", "Jiyuan", "" ], [ "Wang", "Jingyang", "" ], [ "Wang", "Sen", "" ] ]
TITLE: Digital Beamforming Enhanced Radar Odometry ABSTRACT: Radar has become an essential sensor for autonomous navigation, especially in challenging environments where camera and LiDAR sensors fail. 4D single-chip millimeter-wave radar systems, in particular, have drawn increasing attention thanks to their ability to provide spatial and Doppler information with low hardware cost and power consumption. However, most single-chip radar systems using traditional signal processing, such as Fast Fourier Transform, suffer from limited spatial resolution in radar detection, significantly limiting the performance of radar-based odometry and Simultaneous Localization and Mapping (SLAM) systems. In this paper, we develop a novel radar signal processing pipeline that integrates spatial domain beamforming techniques, and extend it to 3D Direction of Arrival estimation. Experiments using public datasets are conducted to evaluate and compare the performance of our proposed signal processing pipeline against traditional methodologies. These tests specifically focus on assessing structural precision across diverse scenes and measuring odometry accuracy in different radar odometry systems. This research demonstrates the feasibility of achieving more accurate radar odometry by simply replacing the standard FFT-based processing with the proposed pipeline. The code is available on GitHub*.
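As a rough illustration of the spatial-domain beamforming the abstract contrasts with plain FFT processing, here is a minimal conventional (Bartlett) beamformer for 1D direction-of-arrival estimation on a uniform linear array. The array geometry, noise level, and signal model are assumptions for the sketch, not the paper's radar configuration.

```python
# Hedged sketch: Bartlett beamforming DoA scan on a uniform linear array.
import numpy as np

n_ant, d, wavelength = 8, 0.5, 1.0   # antennas, spacing (in wavelengths), lambda
true_doa_deg = 20.0
snapshots = 256

def steering(theta_deg):
    """Array response vector for a plane wave from angle theta."""
    k = 2 * np.pi / wavelength
    return np.exp(1j * k * d * np.arange(n_ant) * np.sin(np.deg2rad(theta_deg)))

rng = np.random.default_rng(1)
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((n_ant, snapshots))
               + 1j * rng.standard_normal((n_ant, snapshots)))
X = np.outer(steering(true_doa_deg), s) + noise  # array snapshots
R = X @ X.conj().T / snapshots                   # sample spatial covariance

grid = np.linspace(-90, 90, 721)
power = [np.real(steering(t).conj() @ R @ steering(t)) for t in grid]
print("estimated DoA:", grid[int(np.argmax(power))], "deg")
```

Scanning a steering vector against the spatial covariance is the simplest member of the beamforming family; higher-resolution variants (e.g., Capon) replace the scan statistic but keep the same covariance machinery.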
2503.13260
Navve Wasserman
Amit Zalcher, Navve Wasserman, Roman Beliy, Oliver Heinimann, Michal Irani
Don't Judge Before You CLIP: A Unified Approach for Perceptual Tasks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual perceptual tasks aim to predict human judgment of images (e.g., emotions invoked by images, image quality assessment). Unlike objective tasks such as object/scene recognition, perceptual tasks rely on subjective human assessments, making their data labeling difficult. The scarcity of such human-annotated data results in small datasets, leading to poor generalization. Typically, specialized models have been designed for each perceptual task, tailored to its unique characteristics and its own training dataset. We propose a unified architectural framework for solving multiple different perceptual tasks leveraging CLIP as a prior. Our approach is based on recent cognitive findings which indicate that CLIP correlates well with human judgment. While CLIP was explicitly trained to align images and text, it implicitly also learned human inclinations. We attribute this to the inclusion of human-written image captions in CLIP's training data, which contain not only factual image descriptions, but inevitably also human sentiments and emotions. This makes CLIP a particularly strong prior for perceptual tasks. Accordingly, we suggest that minimal adaptation of CLIP suffices for solving a variety of perceptual tasks. Our simple unified framework employs a lightweight adaptation to fine-tune CLIP to each task, without requiring any task-specific architectural changes. We evaluate our approach on three tasks: (i) Image Memorability Prediction, (ii) No-reference Image Quality Assessment, and (iii) Visual Emotion Analysis. Our model achieves state-of-the-art results on all three tasks, while demonstrating improved generalization across different datasets.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 15:15:31 GMT" } ]
2025-03-18T00:00:00
[ [ "Zalcher", "Amit", "" ], [ "Wasserman", "Navve", "" ], [ "Beliy", "Roman", "" ], [ "Heinimann", "Oliver", "" ], [ "Irani", "Michal", "" ] ]
TITLE: Don't Judge Before You CLIP: A Unified Approach for Perceptual Tasks ABSTRACT: Visual perceptual tasks aim to predict human judgment of images (e.g., emotions invoked by images, image quality assessment). Unlike objective tasks such as object/scene recognition, perceptual tasks rely on subjective human assessments, making their data labeling difficult. The scarcity of such human-annotated data results in small datasets, leading to poor generalization. Typically, specialized models have been designed for each perceptual task, tailored to its unique characteristics and its own training dataset. We propose a unified architectural framework for solving multiple different perceptual tasks leveraging CLIP as a prior. Our approach is based on recent cognitive findings which indicate that CLIP correlates well with human judgment. While CLIP was explicitly trained to align images and text, it implicitly also learned human inclinations. We attribute this to the inclusion of human-written image captions in CLIP's training data, which contain not only factual image descriptions, but inevitably also human sentiments and emotions. This makes CLIP a particularly strong prior for perceptual tasks. Accordingly, we suggest that minimal adaptation of CLIP suffices for solving a variety of perceptual tasks. Our simple unified framework employs a lightweight adaptation to fine-tune CLIP to each task, without requiring any task-specific architectural changes. We evaluate our approach on three tasks: (i) Image Memorability Prediction, (ii) No-reference Image Quality Assessment, and (iii) Visual Emotion Analysis. Our model achieves state-of-the-art results on all three tasks, while demonstrating improved generalization across different datasets.
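The general recipe the abstract describes, a frozen CLIP encoder with a lightweight task head, looks roughly like the sketch below. It assumes the OpenAI `clip` package and a scalar regression target (e.g., a memorability score); the head size and training step are placeholder assumptions, not the paper's exact adaptation scheme.

```python
# Hedged sketch: frozen CLIP backbone + small trainable head for a perceptual
# regression task. Assumes `pip install git+https://github.com/openai/CLIP`.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
for p in model.parameters():          # freeze the CLIP backbone
    p.requires_grad = False

head = torch.nn.Linear(512, 1).to(device)   # ViT-B/32 image features are 512-d
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

def train_step(images, targets):
    """One regression step; `images` are batches already run through `preprocess`."""
    with torch.no_grad():
        feats = model.encode_image(images.to(device)).float()
    loss = torch.nn.functional.mse_loss(head(feats).squeeze(-1), targets.to(device))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Only the head's few thousand parameters are updated, which is what makes this kind of adaptation viable on the small human-annotated datasets the abstract highlights.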
2503.13271
Chengen Wang
Chengen Wang, Murat Kantarcioglu
Graph Generative Models Evaluation with Masked Autoencoder
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, numerous graph generative models (GGMs) have been proposed. However, evaluating these models remains a considerable challenge, primarily due to the difficulty in extracting meaningful graph features that accurately represent real-world graphs. The traditional evaluation techniques, which rely on graph statistical properties like node degree distribution, clustering coefficients, or Laplacian spectrum, overlook node features and lack scalability. There are newly proposed deep learning-based methods employing graph random neural networks or contrastive learning to extract graph features, demonstrating superior performance compared to traditional statistical methods, but their experimental results also show that these methods do not always work well across different metrics. Although there are overlaps among these metrics, they are generally not interchangeable, each evaluating generative models from a different perspective. In this paper, we propose a novel method that leverages graph masked autoencoders to effectively extract graph features for GGM evaluations. We conduct extensive experiments on graphs and empirically demonstrate that our method can be more reliable and effective than previously proposed methods across a number of GGM evaluation metrics, such as "Fr\'echet Distance (FD)" and "MMD Linear". However, no single method stands out consistently across all metrics and datasets. Therefore, this study also aims to raise awareness of the significance and challenges associated with GGM evaluation techniques, especially in light of recent advances in generative models.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 15:23:21 GMT" } ]
2025-03-18T00:00:00
[ [ "Wang", "Chengen", "" ], [ "Kantarcioglu", "Murat", "" ] ]
TITLE: Graph Generative Models Evaluation with Masked Autoencoder ABSTRACT: In recent years, numerous graph generative models (GGMs) have been proposed. However, evaluating these models remains a considerable challenge, primarily due to the difficulty in extracting meaningful graph features that accurately represent real-world graphs. The traditional evaluation techniques, which rely on graph statistical properties like node degree distribution, clustering coefficients, or Laplacian spectrum, overlook node features and lack scalability. There are newly proposed deep learning-based methods employing graph random neural networks or contrastive learning to extract graph features, demonstrating superior performance compared to traditional statistical methods, but their experimental results also show that these methods do not always work well across different metrics. Although there are overlaps among these metrics, they are generally not interchangeable, each evaluating generative models from a different perspective. In this paper, we propose a novel method that leverages graph masked autoencoders to effectively extract graph features for GGM evaluations. We conduct extensive experiments on graphs and empirically demonstrate that our method can be more reliable and effective than previously proposed methods across a number of GGM evaluation metrics, such as "Fr\'echet Distance (FD)" and "MMD Linear". However, no single method stands out consistently across all metrics and datasets. Therefore, this study also aims to raise awareness of the significance and challenges associated with GGM evaluation techniques, especially in light of recent advances in generative models.
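One of the metrics the abstract names, the Fréchet Distance between feature sets, is straightforward to compute once graph embeddings are in hand. A minimal sketch follows; the stand-in arrays replace the masked-autoencoder features the paper would actually extract.

```python
# Hedged sketch: Frechet Distance between two sets of graph embeddings,
# fitting Gaussians to real and generated feature clouds.
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """FD = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * sqrtm(S1 @ S2))."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):       # discard tiny imaginary numerical residue
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(s1 + s2 - 2.0 * covmean)

rng = np.random.default_rng(0)
real = rng.standard_normal((500, 64))        # placeholder "real graph" features
gen = rng.standard_normal((500, 64)) + 0.1   # slightly shifted "generated" features
print(frechet_distance(real, gen))
```

The metric's sensitivity to the feature extractor is exactly why the choice of extractor (statistics, contrastive networks, or masked autoencoders) matters so much in GGM evaluation.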
2503.13272
Katja Schwarz
Katja Schwarz and Norman Mueller and Peter Kontschieder
Generative Gaussian Splatting: Generating 3D Scenes with Video Diffusion Priors
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Synthesizing consistent and photorealistic 3D scenes is an open problem in computer vision. Video diffusion models generate impressive videos but cannot directly synthesize 3D representations, i.e., lack 3D consistency in the generated sequences. In addition, directly training generative 3D models is challenging due to a lack of 3D training data at scale. In this work, we present Generative Gaussian Splatting (GGS) -- a novel approach that integrates a 3D representation with a pre-trained latent video diffusion model. Specifically, our model synthesizes a feature field parameterized via 3D Gaussian primitives. The feature field is then either rendered to feature maps and decoded into multi-view images, or directly upsampled into a 3D radiance field. We evaluate our approach on two common benchmark datasets for scene synthesis, RealEstate10K and ScanNet+, and find that our proposed GGS model significantly improves both the 3D consistency of the generated multi-view images, and the quality of the generated 3D scenes over all relevant baselines. Compared to a similar model without 3D representation, GGS improves FID on the generated 3D scenes by ~20% on both RealEstate10K and ScanNet+. Project page: https://katjaschwarz.github.io/ggs/
[ { "version": "v1", "created": "Mon, 17 Mar 2025 15:24:04 GMT" } ]
2025-03-18T00:00:00
[ [ "Schwarz", "Katja", "" ], [ "Mueller", "Norman", "" ], [ "Kontschieder", "Peter", "" ] ]
TITLE: Generative Gaussian Splatting: Generating 3D Scenes with Video Diffusion Priors ABSTRACT: Synthesizing consistent and photorealistic 3D scenes is an open problem in computer vision. Video diffusion models generate impressive videos but cannot directly synthesize 3D representations, i.e., lack 3D consistency in the generated sequences. In addition, directly training generative 3D models is challenging due to a lack of 3D training data at scale. In this work, we present Generative Gaussian Splatting (GGS) -- a novel approach that integrates a 3D representation with a pre-trained latent video diffusion model. Specifically, our model synthesizes a feature field parameterized via 3D Gaussian primitives. The feature field is then either rendered to feature maps and decoded into multi-view images, or directly upsampled into a 3D radiance field. We evaluate our approach on two common benchmark datasets for scene synthesis, RealEstate10K and ScanNet+, and find that our proposed GGS model significantly improves both the 3D consistency of the generated multi-view images, and the quality of the generated 3D scenes over all relevant baselines. Compared to a similar model without 3D representation, GGS improves FID on the generated 3D scenes by ~20% on both RealEstate10K and ScanNet+. Project page: https://katjaschwarz.github.io/ggs/
2503.13277
Suresh Kumar
Alfred Simbun, Suresh Kumar
Artificial Intelligence-Driven Prognostic Classification of COVID-19 Using Chest X-rays: A Deep Learning Approach
27 pages, 6 figures, 10 tables
null
null
null
eess.IV cs.AI cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Background: The COVID-19 pandemic has overwhelmed healthcare systems, emphasizing the need for AI-driven tools to assist in rapid and accurate patient prognosis. Chest X-ray imaging is a widely available diagnostic tool, but existing methods for prognosis classification lack scalability and efficiency. Objective: This study presents a high-accuracy deep learning model for classifying COVID-19 severity (Mild, Moderate, and Severe) using Chest X-ray images, developed on Microsoft Azure Custom Vision. Methods: Using a dataset of 1,103 confirmed COVID-19 X-ray images from AIforCOVID, we trained and validated a deep learning model leveraging Convolutional Neural Networks (CNNs). The model was evaluated on an unseen dataset to measure accuracy, precision, and recall. Results: Our model achieved an average accuracy of 97%, with specificity of 99%, sensitivity of 87%, and an F1-score of 93.11%. When classifying COVID-19 severity, the model achieved accuracies of 89.03% (Mild), 95.77% (Moderate), and 81.16% (Severe). These results demonstrate the model's potential for real-world clinical applications, aiding in faster decision-making and improved resource allocation. Conclusion: AI-driven prognosis classification using deep learning can significantly enhance COVID-19 patient management, enabling early intervention and efficient triaging. Our study provides a scalable, high-accuracy AI framework for integrating deep learning into routine clinical workflows. Future work should focus on expanding datasets, external validation, and regulatory compliance to facilitate clinical adoption.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 15:27:21 GMT" } ]
2025-03-18T00:00:00
[ [ "Simbun", "Alfred", "" ], [ "Kumar", "Suresh", "" ] ]
TITLE: Artificial Intelligence-Driven Prognostic Classification of COVID-19 Using Chest X-rays: A Deep Learning Approach ABSTRACT: Background: The COVID-19 pandemic has overwhelmed healthcare systems, emphasizing the need for AI-driven tools to assist in rapid and accurate patient prognosis. Chest X-ray imaging is a widely available diagnostic tool, but existing methods for prognosis classification lack scalability and efficiency. Objective: This study presents a high-accuracy deep learning model for classifying COVID-19 severity (Mild, Moderate, and Severe) using Chest X-ray images, developed on Microsoft Azure Custom Vision. Methods: Using a dataset of 1,103 confirmed COVID-19 X-ray images from AIforCOVID, we trained and validated a deep learning model leveraging Convolutional Neural Networks (CNNs). The model was evaluated on an unseen dataset to measure accuracy, precision, and recall. Results: Our model achieved an average accuracy of 97%, with specificity of 99%, sensitivity of 87%, and an F1-score of 93.11%. When classifying COVID-19 severity, the model achieved accuracies of 89.03% (Mild), 95.77% (Moderate), and 81.16% (Severe). These results demonstrate the model's potential for real-world clinical applications, aiding in faster decision-making and improved resource allocation. Conclusion: AI-driven prognosis classification using deep learning can significantly enhance COVID-19 patient management, enabling early intervention and efficient triaging. Our study provides a scalable, high-accuracy AI framework for integrating deep learning into routine clinical workflows. Future work should focus on expanding datasets, external validation, and regulatory compliance to facilitate clinical adoption.
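The reported figures (accuracy, sensitivity, specificity, F1) are standard confusion-matrix quantities. For reference, here is how they are computed from binary predictions with scikit-learn; the labels below are toy stand-ins, not study data, and the Azure Custom Vision training itself is not reproduced here.

```python
# Hedged sketch: confusion-matrix metrics for a binary severity decision.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, accuracy_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # 1 = severe, 0 = not severe (toy)
y_pred = np.array([0, 0, 1, 0, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:   ", accuracy_score(y_true, y_pred))
print("sensitivity:", tp / (tp + fn))   # recall on the positive class
print("specificity:", tn / (tn + fp))
print("F1 score:   ", f1_score(y_true, y_pred))
```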
2503.13279
Xinkai Zou
Xinkai Zou, Yan Liu, Xiongbo Shi, Chen Yang
Goal2Story: A Multi-Agent Fleet based on Privately Enabled sLLMs for Impact Mapping on Requirements Elicitation
null
null
null
null
cs.SE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As requirements drift with rapid iterations, agile development becomes the dominant paradigm. Goal-driven Requirements Elicitation (RE) is a pivotal yet challenging task in agile project development due to its heavy entanglement with adaptive planning and efficient collaboration. Recently, AI agents have shown promising ability in supporting requirements analysis by saving significant time and effort for stakeholders. However, current research mainly focuses on functional RE, and no work has been reported that bridges the long journey from goal to user stories. Moreover, considering the cost of LLM facilities and the need for data and idea protection, privately hosted small-sized LLMs should be further utilized in RE. To address these challenges, we propose Goal2Story, a multi-agent fleet that adopts the Impact Mapping (IM) framework while merely using cost-effective sLLMs for goal-driven RE. We also introduce the StorySeek dataset, which contains over 1,000 user stories (USs) with corresponding goals and project context information, as well as the semi-automatic dataset construction method. For evaluation, we propose two metrics: Factuality Hit Rate (FHR) to measure consistency between the generated USs and the dataset, and Quality And Consistency Evaluation (QuACE) to evaluate the quality of the generated USs. Experimental results demonstrate that Goal2Story outperforms the baseline performance of the Super-Agent adopting powerful LLMs, while also showcasing the performance improvements in key metrics brought by CoT and Agent Profile to Goal2Story, as well as its exploration in identifying latent needs.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 15:31:20 GMT" } ]
2025-03-18T00:00:00
[ [ "Zou", "Xinkai", "" ], [ "Liu", "Yan", "" ], [ "Shi", "Xiongbo", "" ], [ "Yang", "Chen", "" ] ]
TITLE: Goal2Story: A Multi-Agent Fleet based on Privately Enabled sLLMs for Impact Mapping on Requirements Elicitation ABSTRACT: As requirements drift with rapid iterations, agile development becomes the dominant paradigm. Goal-driven Requirements Elicitation (RE) is a pivotal yet challenging task in agile project development due to its heavy entanglement with adaptive planning and efficient collaboration. Recently, AI agents have shown promising ability in supporting requirements analysis by saving significant time and effort for stakeholders. However, current research mainly focuses on functional RE, and no work has been reported that bridges the long journey from goal to user stories. Moreover, considering the cost of LLM facilities and the need for data and idea protection, privately hosted small-sized LLMs should be further utilized in RE. To address these challenges, we propose Goal2Story, a multi-agent fleet that adopts the Impact Mapping (IM) framework while merely using cost-effective sLLMs for goal-driven RE. We also introduce the StorySeek dataset, which contains over 1,000 user stories (USs) with corresponding goals and project context information, as well as the semi-automatic dataset construction method. For evaluation, we propose two metrics: Factuality Hit Rate (FHR) to measure consistency between the generated USs and the dataset, and Quality And Consistency Evaluation (QuACE) to evaluate the quality of the generated USs. Experimental results demonstrate that Goal2Story outperforms the baseline performance of the Super-Agent adopting powerful LLMs, while also showcasing the performance improvements in key metrics brought by CoT and Agent Profile to Goal2Story, as well as its exploration in identifying latent needs.
2503.13296
Mikkel Jordahn
Mikkel Jordahn, Jonas Vestergaard Jensen, Mikkel N. Schmidt and Michael Riis Andersen
On Local Posterior Structure in Deep Ensembles
Code and models available at https://github.com/jonasvj/OnLocalPosteriorStructureInDeepEnsembles
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by-sa/4.0/
Bayesian Neural Networks (BNNs) often improve model calibration and predictive uncertainty quantification compared to point estimators such as maximum-a-posteriori (MAP). Similarly, deep ensembles (DEs) are also known to improve calibration, and therefore, it is natural to hypothesize that deep ensembles of BNNs (DE-BNNs) should provide even further improvements. In this work, we systematically investigate this across a number of datasets, neural network architectures, and BNN approximation methods and surprisingly find that when the ensembles grow large enough, DEs consistently outperform DE-BNNs on in-distribution data. To shed light on this observation, we conduct several sensitivity and ablation studies. Moreover, we show that even though DE-BNNs outperform DEs on out-of-distribution metrics, this comes at the cost of decreased in-distribution performance. As a final contribution, we open-source the large pool of trained models to facilitate further research on this topic.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 15:41:39 GMT" } ]
2025-03-18T00:00:00
[ [ "Jordahn", "Mikkel", "" ], [ "Jensen", "Jonas Vestergaard", "" ], [ "Schmidt", "Mikkel N.", "" ], [ "Andersen", "Michael Riis", "" ] ]
TITLE: On Local Posterior Structure in Deep Ensembles ABSTRACT: Bayesian Neural Networks (BNNs) often improve model calibration and predictive uncertainty quantification compared to point estimators such as maximum-a-posteriori (MAP). Similarly, deep ensembles (DEs) are also known to improve calibration, and therefore, it is natural to hypothesize that deep ensembles of BNNs (DE-BNNs) should provide even further improvements. In this work, we systematically investigate this across a number of datasets, neural network architectures, and BNN approximation methods and surprisingly find that when the ensembles grow large enough, DEs consistently outperform DE-BNNs on in-distribution data. To shed light on this observation, we conduct several sensitivity and ablation studies. Moreover, we show that even though DE-BNNs outperform DEs on out-of-distribution metrics, this comes at the cost of decreased in-distribution performance. As a final contribution, we open-source the large pool of trained models to facilitate further research on this topic.
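The deep-ensemble baseline at the center of the comparison is simple to state in code: average the predictive distributions of M independently trained networks. The sketch below uses a placeholder architecture and untrained members purely to show the mechanics, not the paper's training setup.

```python
# Hedged sketch: deep ensemble prediction = mean of member softmax outputs.
import torch
import torch.nn as nn

def make_net():
    """Placeholder classifier; the paper's architectures differ."""
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))

ensemble = [make_net() for _ in range(5)]  # in practice: 5 independent trainings

@torch.no_grad()
def ensemble_predict(x):
    """Average member predictive distributions (the DE predictive)."""
    probs = torch.stack([torch.softmax(net(x), dim=-1) for net in ensemble])
    return probs.mean(dim=0)

x = torch.randn(8, 20)
print(ensemble_predict(x).sum(dim=-1))  # each row sums to 1
```

A DE-BNN would additionally average over posterior samples within each member, which is the extra layer of marginalization whose in-distribution value the study questions.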
2503.13301
Deepak Vungarala
Deepak Vungarala, Md Hasibul Amin, Pietro Mercati, Arnob Ghosh, Arman Roohi, Ramtin Zand, Shaahin Angizi
LIMCA: LLM for Automating Analog In-Memory Computing Architecture Design Exploration
4 Figures, 5 Tables
null
null
null
cs.AR
http://creativecommons.org/licenses/by/4.0/
Resistive crossbars enabling analog In-Memory Computing (IMC) have emerged as a promising architecture for Deep Neural Network (DNN) acceleration, offering high memory bandwidth and in-situ computation. However, the manual, knowledge-intensive design process and the lack of high-quality circuit netlists have significantly constrained design space exploration and optimization to behavioral system-level tools. In this work, we introduce LIMCA, a novel fine-tune-free Large Language Model (LLM)-driven framework for automating the design and evaluation of IMC crossbar architectures. Unlike traditional approaches, LIMCA employs a No-Human-In-Loop (NHIL) automated pipeline to generate and validate circuit netlists for SPICE simulations, eliminating manual intervention. LIMCA systematically explores the IMC design space by leveraging a structured dataset and LLM-based performance evaluation. Our experimental results on MNIST classification demonstrate that LIMCA successfully generates crossbar designs achieving $\geq$96% accuracy while maintaining a power consumption $\leq$3W, making this the first work in LLM-assisted IMC design space exploration. Compared to existing frameworks, LIMCA provides an automated, scalable, and hardware-aware solution, reducing design exploration time while ensuring user-constrained performance trade-offs.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 15:45:17 GMT" } ]
2025-03-18T00:00:00
[ [ "Vungarala", "Deepak", "" ], [ "Amin", "Md Hasibul", "" ], [ "Mercati", "Pietro", "" ], [ "Ghosh", "Arnob", "" ], [ "Roohi", "Arman", "" ], [ "Zand", "Ramtin", "" ], [ "Angizi", "Shaahin", "" ] ]
TITLE: LIMCA: LLM for Automating Analog In-Memory Computing Architecture Design Exploration ABSTRACT: Resistive crossbars enabling analog In-Memory Computing (IMC) have emerged as a promising architecture for Deep Neural Network (DNN) acceleration, offering high memory bandwidth and in-situ computation. However, the manual, knowledge-intensive design process and the lack of high-quality circuit netlists have significantly constrained design space exploration and optimization to behavioral system-level tools. In this work, we introduce LIMCA, a novel fine-tune-free Large Language Model (LLM)-driven framework for automating the design and evaluation of IMC crossbar architectures. Unlike traditional approaches, LIMCA employs a No-Human-In-Loop (NHIL) automated pipeline to generate and validate circuit netlists for SPICE simulations, eliminating manual intervention. LIMCA systematically explores the IMC design space by leveraging a structured dataset and LLM-based performance evaluation. Our experimental results on MNIST classification demonstrate that LIMCA successfully generates crossbar designs achieving $\geq$96% accuracy while maintaining a power consumption $\leq$3W, making this the first work in LLM-assisted IMC design space exploration. Compared to existing frameworks, LIMCA provides an automated, scalable, and hardware-aware solution, reducing design exploration time while ensuring user-constrained performance trade-offs.
2503.13304
Witold Wydma\'nski
Witold Wydma\'nski, Marek \'Smieja
GFSNetwork: Differentiable Feature Selection via Gumbel-Sigmoid Relaxation
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Feature selection in deep learning remains a critical challenge, particularly for high-dimensional tabular data where interpretability and computational efficiency are paramount. We present GFSNetwork, a novel neural architecture that performs differentiable feature selection through temperature-controlled Gumbel-Sigmoid sampling. Unlike traditional methods, where the user has to define the requested number of features, GFSNetwork selects it automatically during an end-to-end process. Moreover, GFSNetwork maintains constant computational overhead regardless of the number of input features. We evaluate GFSNetwork on a series of classification and regression benchmarks, where it consistently outperforms recent methods including DeepLasso, attention maps, as well as traditional feature selectors, while using significantly fewer features. Furthermore, we validate our approach on real-world metagenomic datasets, demonstrating its effectiveness in high-dimensional biological data. In conclusion, our method provides a scalable solution that bridges the gap between neural network flexibility and traditional feature selection interpretability. We share our Python implementation of GFSNetwork at https://github.com/wwydmanski/GFSNetwork, as well as a PyPI package (gfs_network).
[ { "version": "v1", "created": "Mon, 17 Mar 2025 15:47:26 GMT" } ]
2025-03-18T00:00:00
[ [ "Wydmański", "Witold", "" ], [ "Śmieja", "Marek", "" ] ]
TITLE: GFSNetwork: Differentiable Feature Selection via Gumbel-Sigmoid Relaxation ABSTRACT: Feature selection in deep learning remains a critical challenge, particularly for high-dimensional tabular data where interpretability and computational efficiency are paramount. We present GFSNetwork, a novel neural architecture that performs differentiable feature selection through temperature-controlled Gumbel-Sigmoid sampling. Unlike traditional methods, where the user has to define the requested number of features, GFSNetwork selects it automatically during an end-to-end process. Moreover, GFSNetwork maintains constant computational overhead regardless of the number of input features. We evaluate GFSNetwork on a series of classification and regression benchmarks, where it consistently outperforms recent methods including DeepLasso, attention maps, as well as traditional feature selectors, while using significantly fewer features. Furthermore, we validate our approach on real-world metagenomic datasets, demonstrating its effectiveness in high-dimensional biological data. In conclusion, our method provides a scalable solution that bridges the gap between neural network flexibility and traditional feature selection interpretability. We share our Python implementation of GFSNetwork at https://github.com/wwydmanski/GFSNetwork, as well as a PyPI package (gfs_network).
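The central mechanism, a temperature-controlled Gumbel-Sigmoid gate over input features, can be sketched as a small PyTorch module. The eval-time thresholding rule and sizes below are assumptions for illustration rather than the GFSNetwork architecture.

```python
# Hedged sketch: a relaxed Bernoulli feature gate via Gumbel-Sigmoid sampling.
import torch
import torch.nn as nn

class GumbelSigmoidGate(nn.Module):
    def __init__(self, n_features, tau=1.0):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_features))  # one gate per feature
        self.tau = tau

    def forward(self, x):
        if self.training:  # relaxed Bernoulli sample via additive logistic noise
            u = torch.rand_like(self.logits).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log1p(-u)
            gate = torch.sigmoid((self.logits + noise) / self.tau)
        else:              # deterministic hard selection at eval time
            gate = (self.logits > 0).float()
        return x * gate

gate = GumbelSigmoidGate(n_features=100, tau=0.5)
x = torch.randn(32, 100)
print(gate(x).shape)  # torch.Size([32, 100])
```

In practice the temperature `tau` would typically be annealed toward zero during training, so the relaxed gates gradually approach hard binary selections while gradients still flow through the logits.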
2503.13310
Matteo Esposito
Matteo Esposito and Xiaozhou Li and Sergio Moreschini and Noman Ahmad and Tomas Cerny and Karthik Vaidhyanathan and Valentina Lenarduzzi and Davide Taibi
Generative AI for Software Architecture. Applications, Trends, Challenges, and Future Directions
null
null
null
null
cs.SE cs.AI cs.DC cs.ET
http://creativecommons.org/licenses/by/4.0/
Context: Generative Artificial Intelligence (GenAI) is transforming much of software development, yet its application in software architecture is still in its infancy, and no prior study has systematically addressed the topic. Aim: We aim to systematically synthesize the use, rationale, contexts, usability, and future challenges of GenAI in software architecture. Method: We performed a multivocal literature review (MLR), analyzing peer-reviewed and gray literature, identifying current practices, models, adoption contexts, and reported challenges, extracting themes via open coding. Results: Our review identified significant adoption of GenAI for architectural decision support and architectural reconstruction. OpenAI GPT models are predominantly applied, and there is consistent use of techniques such as few-shot prompting and retrieval-augmented generation (RAG). GenAI has been applied mostly to initial stages of the Software Development Life Cycle (SDLC), such as Requirements-to-Architecture and Architecture-to-Code. Monolithic and microservice architectures were the dominant targets. However, rigorous testing of GenAI outputs was typically missing from the studies. Among the most frequent challenges are model precision, hallucinations, ethical aspects, privacy issues, lack of architecture-specific datasets, and the absence of sound evaluation frameworks. Conclusions: GenAI shows significant potential in software design, but several challenges remain on its path to greater adoption. Research efforts should target designing general evaluation methodologies, handling ethics and precision, increasing transparency and explainability, and promoting architecture-specific datasets and benchmarks to bridge the gap between theoretical possibilities and practical use.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 15:49:30 GMT" } ]
2025-03-18T00:00:00
[ [ "Esposito", "Matteo", "" ], [ "Li", "Xiaozhou", "" ], [ "Moreschini", "Sergio", "" ], [ "Ahmad", "Noman", "" ], [ "Cerny", "Tomas", "" ], [ "Vaidhyanathan", "Karthik", "" ], [ "Lenarduzzi", "Valentina", "" ], [ "Taibi", "Davide", "" ] ]
TITLE: Generative AI for Software Architecture. Applications, Trends, Challenges, and Future Directions ABSTRACT: Context: Generative Artificial Intelligence (GenAI) is transforming much of software development, yet its application in software architecture is still in its infancy, and no prior study has systematically addressed the topic. Aim: We aim to systematically synthesize the use, rationale, contexts, usability, and future challenges of GenAI in software architecture. Method: We performed a multivocal literature review (MLR), analyzing peer-reviewed and gray literature, identifying current practices, models, adoption contexts, and reported challenges, extracting themes via open coding. Results: Our review identified significant adoption of GenAI for architectural decision support and architectural reconstruction. OpenAI GPT models are predominantly applied, and there is consistent use of techniques such as few-shot prompting and retrieval-augmented generation (RAG). GenAI has been applied mostly to initial stages of the Software Development Life Cycle (SDLC), such as Requirements-to-Architecture and Architecture-to-Code. Monolithic and microservice architectures were the dominant targets. However, rigorous testing of GenAI outputs was typically missing from the studies. Among the most frequent challenges are model precision, hallucinations, ethical aspects, privacy issues, lack of architecture-specific datasets, and the absence of sound evaluation frameworks. Conclusions: GenAI shows significant potential in software design, but several challenges remain on its path to greater adoption. Research efforts should target designing general evaluation methodologies, handling ethics and precision, increasing transparency and explainability, and promoting architecture-specific datasets and benchmarks to bridge the gap between theoretical possibilities and practical use.
2503.13316
Marcello Iotti
Marcello Iotti, Paolo Davini, Jost von Hardenberg, Giuseppe Zappa
RainScaleGAN: a Conditional Generative Adversarial Network for Rainfall Downscaling
38 pages, 16 figures
null
null
null
physics.ao-ph cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To this day, accurately simulating local-scale precipitation and reliably reproducing its distribution remains a challenging task. The limited horizontal resolution of Global Climate Models is among the primary factors undermining their skill in this context. The physical mechanisms driving the onset and development of precipitation, especially in extreme events, operate at spatio-temporal scales smaller than those numerically resolved, and are therefore difficult to capture accurately. In order to circumvent this limitation, several downscaling approaches have been developed over the last decades to address the discrepancy between the spatial resolution of model output and the resolution required by local-scale applications. In this paper, we introduce RainScaleGAN, a conditional deep convolutional Generative Adversarial Network (GAN) for precipitation downscaling. GANs have been effectively used in image super-resolution, an approach highly relevant for downscaling tasks. RainScaleGAN's capabilities are tested in a perfect-model setup, where the spatial resolution of a precipitation dataset is artificially degraded from 0.25$^{\circ}\times$0.25$^{\circ}$ to 2$^{\circ}\times$2$^\circ$, and RainScaleGAN is used to restore it. The developed model outperforms one of the leading precipitation downscaling methods found in the literature. RainScaleGAN not only generates a synthetic dataset featuring plausible high-resolution spatial patterns and intensities, but also produces a precipitation distribution with statistics closely mirroring those of the ground-truth dataset. Given that RainScaleGAN's approach is agnostic with respect to the underlying physics, the method has the potential to be applied to other physical variables such as surface winds or temperature.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 15:54:20 GMT" } ]
2025-03-18T00:00:00
[ [ "Iotti", "Marcello", "" ], [ "Davini", "Paolo", "" ], [ "von Hardenberg", "Jost", "" ], [ "Zappa", "Giuseppe", "" ] ]
TITLE: RainScaleGAN: a Conditional Generative Adversarial Network for Rainfall Downscaling ABSTRACT: To this day, accurately simulating local-scale precipitation and reliably reproducing its distribution remains a challenging task. The limited horizontal resolution of Global Climate Models is among the primary factors undermining their skill in this context. The physical mechanisms driving the onset and development of precipitation, especially in extreme events, operate at spatio-temporal scales smaller than those numerically resolved, and are therefore difficult to capture accurately. In order to circumvent this limitation, several downscaling approaches have been developed over the last decades to address the discrepancy between the spatial resolution of model output and the resolution required by local-scale applications. In this paper, we introduce RainScaleGAN, a conditional deep convolutional Generative Adversarial Network (GAN) for precipitation downscaling. GANs have been effectively used in image super-resolution, an approach highly relevant for downscaling tasks. RainScaleGAN's capabilities are tested in a perfect-model setup, where the spatial resolution of a precipitation dataset is artificially degraded from 0.25$^{\circ}\times$0.25$^{\circ}$ to 2$^{\circ}\times$2$^\circ$, and RainScaleGAN is used to restore it. The developed model outperforms one of the leading precipitation downscaling methods found in the literature. RainScaleGAN not only generates a synthetic dataset featuring plausible high-resolution spatial patterns and intensities, but also produces a precipitation distribution with statistics closely mirroring those of the ground-truth dataset. Given that RainScaleGAN's approach is agnostic with respect to the underlying physics, the method has the potential to be applied to other physical variables such as surface winds or temperature.
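The perfect-model setup's degradation step, going from a 0.25-degree to a 2-degree grid, amounts to 8x8 block averaging. A minimal sketch follows; the toy gamma-distributed field stands in for the real precipitation data, and the GAN itself is omitted.

```python
# Hedged sketch: the coarsening that produces the low-resolution input the
# GAN is trained to restore. Grid sizes here are illustrative assumptions.
import numpy as np

def coarsen(field, factor=8):
    """Block-average a 2D field; grid size must be divisible by `factor`."""
    h, w = field.shape
    return field.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

hi_res = np.random.gamma(shape=0.5, scale=2.0, size=(128, 256))  # toy rain field
lo_res = coarsen(hi_res)  # (16, 32): the degraded conditioning input
print(hi_res.shape, "->", lo_res.shape)
```

Because the high-resolution truth is known by construction, the perfect-model framing lets the downscaler be scored directly against it, both pointwise and in distribution.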
2503.13329
Tom Burnley
Beatriz Costa-Gomes, Joel Greer, Nikolai Juraschko, James Parkhurst, Jola Mirecka, Marjan Famili, Camila Rangel-Smith, Oliver Strickson, Alan Lowe, Mark Basham and Tom Burnley
PERC: a suite of software tools for the curation of cryoEM data with application to simulation, modelling and machine learning
22 pages, 4 figures
null
null
null
cs.LG cs.CE q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Ease of access to data, tools and models expedites scientific research. In structural biology there are now numerous open repositories of experimental and simulated datasets. Being able to easily access and utilise these is crucial for allowing researchers to make optimal use of their research effort. The tools presented here are useful for collating existing public cryoEM datasets and/or creating new synthetic cryoEM datasets to aid the development of novel data processing and interpretation algorithms. In recent years, structural biology has seen the development of a multitude of machine-learning based algorithms for aiding numerous steps in the processing and reconstruction of experimental datasets, and the use of these approaches has become widespread. Developing such techniques in structural biology requires access to large datasets which can be cumbersome to curate and unwieldy to make use of. In this paper we present a suite of Python software packages which we collectively refer to as PERC (profet, EMPIARreader and CAKED). These are designed to reduce the burden which data curation places upon structural biology research. The protein structure fetcher (profet) package allows users to conveniently download and cleave sequences or structures from the Protein Data Bank or AlphaFold databases. EMPIARreader allows lazy loading of Electron Microscopy Public Image Archive datasets in a machine-learning compatible structure. The Class Aggregator for Key Electron-microscopy Data (CAKED) package is designed to seamlessly facilitate the training of machine learning models on electron microscopy data, including electron-cryo-microscopy-specific data augmentation and labelling. These packages may be utilised independently or as building blocks in workflows. All are available in open source repositories and designed to be easily extensible to facilitate more advanced workflows if required.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 16:07:56 GMT" } ]
2025-03-18T00:00:00
[ [ "Costa-Gomes", "Beatriz", "" ], [ "Greer", "Joel", "" ], [ "Juraschko", "Nikolai", "" ], [ "Parkhurst", "James", "" ], [ "Mirecka", "Jola", "" ], [ "Famili", "Marjan", "" ], [ "Rangel-Smith", "Camila", "" ], [ "Strickson", "Oliver", "" ], [ "Lowe", "Alan", "" ], [ "Basham", "Mark", "" ], [ "Burnley", "Tom", "" ] ]
TITLE: PERC: a suite of software tools for the curation of cryoEM data with application to simulation, modelling and machine learning ABSTRACT: Ease of access to data, tools and models expedites scientific research. In structural biology there are now numerous open repositories of experimental and simulated datasets. Being able to easily access and utilise these is crucial for allowing researchers to make optimal use of their research effort. The tools presented here are useful for collating existing public cryoEM datasets and/or creating new synthetic cryoEM datasets to aid the development of novel data processing and interpretation algorithms. In recent years, structural biology has seen the development of a multitude of machine-learning based algorithms for aiding numerous steps in the processing and reconstruction of experimental datasets, and the use of these approaches has become widespread. Developing such techniques in structural biology requires access to large datasets which can be cumbersome to curate and unwieldy to make use of. In this paper we present a suite of Python software packages which we collectively refer to as PERC (profet, EMPIARreader and CAKED). These are designed to reduce the burden which data curation places upon structural biology research. The protein structure fetcher (profet) package allows users to conveniently download and cleave sequences or structures from the Protein Data Bank or AlphaFold databases. EMPIARreader allows lazy loading of Electron Microscopy Public Image Archive datasets in a machine-learning compatible structure. The Class Aggregator for Key Electron-microscopy Data (CAKED) package is designed to seamlessly facilitate the training of machine learning models on electron microscopy data, including electron-cryo-microscopy-specific data augmentation and labelling. These packages may be utilised independently or as building blocks in workflows. All are available in open source repositories and designed to be easily extensible to facilitate more advanced workflows if required.
2503.13330
Ricardo Bigolin Lanfredi
Ricardo Bigolin Lanfredi, Yan Zhuang, Mark Finkelstein, Praveen Thoppey Srinivasan Balamuralikrishna, Luke Krembs, Brandon Khoury, Arthi Reddy, Pritam Mukherjee, Neil M. Rofsky, Ronald M. Summers
LEAVS: An LLM-based Labeler for Abdominal CT Supervision
null
null
null
null
eess.IV cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extracting structured labels from radiology reports has been employed to create vision models to simultaneously detect several types of abnormalities. However, existing works focus mainly on the chest region. Few works have investigated abdominal radiology reports, due to the more complex anatomy and the wider range of pathologies in the abdomen. We propose LEAVS (Large language model Extractor for Abdominal Vision Supervision). This labeler can annotate the certainty of presence and the urgency of seven types of abnormalities for nine abdominal organs on CT radiology reports. To ensure broad coverage, we chose abnormalities that encompass most of the finding types from CT reports. Our approach employs a specialized chain-of-thought prompting strategy for a locally-run LLM using sentence extraction and multiple-choice questions in a tree-based decision system. We demonstrate that the LLM can extract several abnormality types across abdominal organs with an average F1 score of 0.89, significantly outperforming competing labelers and humans. Additionally, we show that extraction of urgency labels achieved performance comparable to human annotations. Finally, we demonstrate that the abnormality labels contain valuable information for training a single vision model that classifies several organs as normal or abnormal. We release our code and structured annotations for a public CT dataset containing over 1,000 CT volumes.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 16:09:22 GMT" } ]
2025-03-18T00:00:00
[ [ "Lanfredi", "Ricardo Bigolin", "" ], [ "Zhuang", "Yan", "" ], [ "Finkelstein", "Mark", "" ], [ "Balamuralikrishna", "Praveen Thoppey Srinivasan", "" ], [ "Krembs", "Luke", "" ], [ "Khoury", "Brandon", "" ], [ "Reddy", "Arthi", "" ], [ "Mukherjee", "Pritam", "" ], [ "Rofsky", "Neil M.", "" ], [ "Summers", "Ronald M.", "" ] ]
TITLE: LEAVS: An LLM-based Labeler for Abdominal CT Supervision ABSTRACT: Extracting structured labels from radiology reports has been employed to create vision models to simultaneously detect several types of abnormalities. However, existing works focus mainly on the chest region. Few works have investigated abdominal radiology reports, due to the more complex anatomy and the wider range of pathologies in the abdomen. We propose LEAVS (Large language model Extractor for Abdominal Vision Supervision). This labeler can annotate the certainty of presence and the urgency of seven types of abnormalities for nine abdominal organs on CT radiology reports. To ensure broad coverage, we chose abnormalities that encompass most of the finding types from CT reports. Our approach employs a specialized chain-of-thought prompting strategy for a locally-run LLM using sentence extraction and multiple-choice questions in a tree-based decision system. We demonstrate that the LLM can extract several abnormality types across abdominal organs with an average F1 score of 0.89, significantly outperforming competing labelers and humans. Additionally, we show that extraction of urgency labels achieved performance comparable to human annotations. Finally, we demonstrate that the abnormality labels contain valuable information for training a single vision model that classifies several organs as normal or abnormal. We release our code and structured annotations for a public CT dataset containing over 1,000 CT volumes.
2503.13358
Daniil Selikhanovych
Daniil Selikhanovych, David Li, Aleksei Leonov, Nikita Gushchin, Sergei Kushneriuk, Alexander Filippov, Evgeny Burnaev, Iaroslav Koshelev, Alexander Korotin
One-Step Residual Shifting Diffusion for Image Super-Resolution via Distillation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion models for super-resolution (SR) produce high-quality visual results but incur high computational costs. Despite the development of several methods to accelerate diffusion-based SR models, some (e.g., SinSR) fail to produce realistic perceptual details, while others (e.g., OSEDiff) may hallucinate non-existent structures. To overcome these issues, we present RSD, a new distillation method for ResShift, one of the top diffusion-based SR models. Our method trains the student network to produce images such that a new fake ResShift model trained on them coincides with the teacher model. RSD achieves single-step restoration and outperforms the teacher by a large margin. We show that our distillation method can surpass the other distillation-based method for ResShift - SinSR - making it on par with state-of-the-art diffusion-based SR distillation methods. Compared to SR methods based on pre-trained text-to-image models, RSD produces competitive perceptual quality, provides images with better alignment to degraded input images, and requires fewer parameters and GPU memory. We provide experimental results on various real-world and synthetic datasets, including RealSR, RealSet65, DRealSR, ImageNet, and DIV2K.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 16:44:08 GMT" } ]
2025-03-18T00:00:00
[ [ "Selikhanovych", "Daniil", "" ], [ "Li", "David", "" ], [ "Leonov", "Aleksei", "" ], [ "Gushchin", "Nikita", "" ], [ "Kushneriuk", "Sergei", "" ], [ "Filippov", "Alexander", "" ], [ "Burnaev", "Evgeny", "" ], [ "Koshelev", "Iaroslav", "" ], [ "Korotin", "Alexander", "" ] ]
TITLE: One-Step Residual Shifting Diffusion for Image Super-Resolution via Distillation ABSTRACT: Diffusion models for super-resolution (SR) produce high-quality visual results but incur high computational costs. Despite the development of several methods to accelerate diffusion-based SR models, some (e.g., SinSR) fail to produce realistic perceptual details, while others (e.g., OSEDiff) may hallucinate non-existent structures. To overcome these issues, we present RSD, a new distillation method for ResShift, one of the top diffusion-based SR models. Our method trains the student network to produce images such that a new fake ResShift model trained on them coincides with the teacher model. RSD achieves single-step restoration and outperforms the teacher by a large margin. We show that our distillation method can surpass the other distillation-based method for ResShift - SinSR - making it on par with state-of-the-art diffusion-based SR distillation methods. Compared to SR methods based on pre-trained text-to-image models, RSD produces competitive perceptual quality, provides images with better alignment to degraded input images, and requires fewer parameters and GPU memory. We provide experimental results on various real-world and synthetic datasets, including RealSR, RealSet65, DRealSR, ImageNet, and DIV2K.
2503.13369
Wan Ju Kang
Wan Ju Kang, Eunki Kim, Na Min An, Sangryul Kim, Haemin Choi, Ki Hoon Kwak, and James Thorne
Sightation Counts: Leveraging Sighted User Feedback in Building a BLV-aligned Dataset of Diagram Descriptions
37 pages, 10 figures, 21 tables
null
null
null
cs.AI cs.CV cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Often, the needs and visual abilities differ between the annotator group and the end user group. Generating detailed diagram descriptions for blind and low-vision (BLV) users is one such challenging domain. Sighted annotators could describe visuals with ease, but existing studies have shown that direct generations by them are costly, bias-prone, and somewhat lacking by BLV standards. In this study, we ask sighted individuals to assess -- rather than produce -- diagram descriptions generated by vision-language models (VLM) that have been guided with latent supervision via a multi-pass inference. The sighted assessments prove effective and useful to professional educators who are themselves BLV and teach visually impaired learners. We release Sightation, a collection of diagram description datasets spanning 5k diagrams and 137k samples for completion, preference, retrieval, question answering, and reasoning training purposes and demonstrate their fine-tuning potential in various downstream tasks.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 16:52:46 GMT" } ]
2025-03-18T00:00:00
[ [ "Kang", "Wan Ju", "" ], [ "Kim", "Eunki", "" ], [ "An", "Na Min", "" ], [ "Kim", "Sangryul", "" ], [ "Choi", "Haemin", "" ], [ "Kwak", "Ki Hoon", "" ], [ "Thorne", "James", "" ] ]
TITLE: Sightation Counts: Leveraging Sighted User Feedback in Building a BLV-aligned Dataset of Diagram Descriptions ABSTRACT: Often, the needs and visual abilities differ between the annotator group and the end user group. Generating detailed diagram descriptions for blind and low-vision (BLV) users is one such challenging domain. Sighted annotators can describe visuals with ease, but existing studies have shown that their direct generations are costly, bias-prone, and somewhat lacking by BLV standards. In this study, we ask sighted individuals to assess -- rather than produce -- diagram descriptions generated by vision-language models (VLMs) that have been guided with latent supervision via multi-pass inference. The sighted assessments prove effective and useful to professional educators who are themselves BLV and teach visually impaired learners. We release Sightation, a collection of diagram description datasets spanning 5k diagrams and 137k samples for completion, preference, retrieval, question answering, and reasoning training purposes, and demonstrate their fine-tuning potential in various downstream tasks.
2503.13371
Xulin Fan
Xulin Fan, Heting Gao, Ziyi Chen, Peng Chang, Mei Han, Mark Hasegawa-Johnson
SyncDiff: Diffusion-based Talking Head Synthesis with Bottlenecked Temporal Visual Prior for Improved Synchronization
Accepted to WACV 2025
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Talking head synthesis, also known as speech-to-lip synthesis, reconstructs the facial motions that align with the given audio tracks. The synthesized videos are evaluated mainly on two aspects: lip-speech synchronization and image fidelity. Recent studies demonstrate that GAN-based and diffusion-based models achieve state-of-the-art (SOTA) performance on this task, with diffusion-based models achieving superior image fidelity but lower synchronization compared to their GAN-based counterparts. To this end, we propose SyncDiff, a simple yet effective approach to improve diffusion-based models using a temporal pose frame with an information bottleneck and facial-informative audio features extracted from AVHuBERT as conditioning inputs to the diffusion process. We evaluate SyncDiff on two canonical talking head datasets, LRS2 and LRS3, for direct comparison with other SOTA models. Experiments on the LRS2/LRS3 datasets show that SyncDiff achieves a synchronization score 27.7%/62.3% relatively higher than previous diffusion-based methods, while preserving their high-fidelity characteristics.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 16:58:53 GMT" } ]
2025-03-18T00:00:00
[ [ "Fan", "Xulin", "" ], [ "Gao", "Heting", "" ], [ "Chen", "Ziyi", "" ], [ "Chang", "Peng", "" ], [ "Han", "Mei", "" ], [ "Hasegawa-Johnson", "Mark", "" ] ]
TITLE: SyncDiff: Diffusion-based Talking Head Synthesis with Bottlenecked Temporal Visual Prior for Improved Synchronization ABSTRACT: Talking head synthesis, also known as speech-to-lip synthesis, reconstructs the facial motions that align with the given audio tracks. The synthesized videos are evaluated mainly on two aspects: lip-speech synchronization and image fidelity. Recent studies demonstrate that GAN-based and diffusion-based models achieve state-of-the-art (SOTA) performance on this task, with diffusion-based models achieving superior image fidelity but lower synchronization compared to their GAN-based counterparts. To this end, we propose SyncDiff, a simple yet effective approach to improve diffusion-based models using a temporal pose frame with an information bottleneck and facial-informative audio features extracted from AVHuBERT as conditioning inputs to the diffusion process. We evaluate SyncDiff on two canonical talking head datasets, LRS2 and LRS3, for direct comparison with other SOTA models. Experiments on the LRS2/LRS3 datasets show that SyncDiff achieves a synchronization score 27.7%/62.3% relatively higher than previous diffusion-based methods, while preserving their high-fidelity characteristics.
2503.13385
Qing Zhou
Qing Zhou, Junyu Gao and Qi Wang
Scale Efficient Training for Large Datasets
Accepted by CVPR2025
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
The rapid growth of dataset scales has been a key driver in advancing deep learning research. However, as dataset scale increases, the training process becomes increasingly inefficient due to the presence of low-value samples, including excessive redundant samples, overly challenging samples, and inefficient easy samples that contribute little to model improvement. To address this challenge, we propose Scale Efficient Training (SeTa) for large datasets, a dynamic sample pruning approach that losslessly reduces training time. To remove low-value samples, SeTa first performs random pruning to eliminate redundant samples, then clusters the remaining samples according to their learning difficulty measured by loss. Building upon this clustering, a sliding window strategy is employed to progressively remove both overly challenging and inefficient easy clusters following an easy-to-hard curriculum. We conduct extensive experiments on large-scale synthetic datasets, including ToCa, SS1M, and ST+MJ, each containing over 3 million samples. SeTa reduces training costs by up to 50\% while maintaining or improving performance, with minimal degradation even at 70\% cost reduction. Furthermore, experiments on real datasets of various scales across diverse backbones (CNNs, Transformers, and Mambas) and tasks (instruction tuning, multi-view stereo, geo-localization, composed image retrieval, referring image segmentation) demonstrate the effectiveness and universality of our approach. Code is available at https://github.com/mrazhou/SeTa.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 17:13:43 GMT" } ]
2025-03-18T00:00:00
[ [ "Zhou", "Qing", "" ], [ "Gao", "Junyu", "" ], [ "Wang", "Qi", "" ] ]
TITLE: Scale Efficient Training for Large Datasets ABSTRACT: The rapid growth of dataset scales has been a key driver in advancing deep learning research. However, as dataset scale increases, the training process becomes increasingly inefficient due to the presence of low-value samples, including excessive redundant samples, overly challenging samples, and inefficient easy samples that contribute little to model improvement. To address this challenge, we propose Scale Efficient Training (SeTa) for large datasets, a dynamic sample pruning approach that losslessly reduces training time. To remove low-value samples, SeTa first performs random pruning to eliminate redundant samples, then clusters the remaining samples according to their learning difficulty measured by loss. Building upon this clustering, a sliding window strategy is employed to progressively remove both overly challenging and inefficient easy clusters following an easy-to-hard curriculum. We conduct extensive experiments on large-scale synthetic datasets, including ToCa, SS1M, and ST+MJ, each containing over 3 million samples. SeTa reduces training costs by up to 50\% while maintaining or improving performance, with minimal degradation even at 70\% cost reduction. Furthermore, experiments on real datasets of various scales across diverse backbones (CNNs, Transformers, and Mambas) and tasks (instruction tuning, multi-view stereo, geo-localization, composed image retrieval, referring image segmentation) demonstrate the effectiveness and universality of our approach. Code is available at https://github.com/mrazhou/SeTa.
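A small sketch of the pruning loop described above, assuming a NumPy/scikit-learn setting; the keep ratio, cluster count, and window schedule are illustrative guesses, not the released SeTa hyperparameters.

```python
# Minimal sketch (assumptions, not the released SeTa code) of dynamic sample
# pruning: random pruning, loss-based difficulty clustering, then a sliding
# window over clusters ordered easy-to-hard.
import numpy as np
from sklearn.cluster import KMeans

def seta_select(sample_losses, keep_ratio=0.7, n_clusters=10,
                window_size=4, window_start=0, rng=None):
    """Return indices of samples to train on this round."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(sample_losses)
    # 1) random pruning removes redundant samples
    kept = rng.choice(n, size=int(keep_ratio * n), replace=False)
    # 2) cluster the kept samples by their current loss (difficulty proxy)
    losses = np.asarray(sample_losses)[kept].reshape(-1, 1)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(losses)
    # order clusters from easy (low mean loss) to hard (high mean loss)
    order = np.argsort([losses[labels == c].mean() for c in range(n_clusters)])
    # 3) sliding window keeps mid-difficulty clusters, dropping the easiest
    #    and hardest ones; the window advances as training progresses
    active = list(order[window_start:window_start + window_size])
    return kept[np.isin(labels, active)]

# Hypothetical usage: advance window_start every few epochs (easy-to-hard).
idx = seta_select(np.random.rand(10000), window_start=2)
```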
2503.13399
James Burgess
James Burgess, Jeffrey J Nirschl, Laura Bravo-S\'anchez, Alejandro Lozano, Sanket Rajan Gupte, Jesus G. Galaz-Montoya, Yuhui Zhang, Yuchang Su, Disha Bhowmik, Zachary Coman, Sarina M. Hasan, Alexandra Johannesson, William D. Leineweber, Malvika G Nair, Ridhi Yarlagadda, Connor Zuraski, Wah Chiu, Sarah Cohen, Jan N. Hansen, Manuel D Leonetti, Chad Liu, Emma Lundberg, Serena Yeung-Levy
MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research
CVPR 2025 (Conference on Computer Vision and Pattern Recognition) Project page at https://jmhb0.github.io/microvqa Benchmark at https://huggingface.co/datasets/jmhb/microvqa
null
null
null
cs.CV cs.AI cs.CL cs.LG q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Scientific research demands sophisticated reasoning over multimodal data, a challenge especially prevalent in biology. Despite recent advances in multimodal large language models (MLLMs) for AI-assisted research, existing multimodal reasoning benchmarks only target up to college-level difficulty, while research-level benchmarks emphasize lower-level perception, falling short of the complex multimodal reasoning needed for scientific discovery. To bridge this gap, we introduce MicroVQA, a visual-question answering (VQA) benchmark designed to assess three reasoning capabilities vital in research workflows: expert image understanding, hypothesis generation, and experiment proposal. MicroVQA consists of 1,042 multiple-choice questions (MCQs) curated by biology experts across diverse microscopy modalities, ensuring VQA samples represent real scientific practice. In constructing the benchmark, we find that standard MCQ generation methods induce language shortcuts, motivating a new two-stage pipeline: an optimized LLM prompt structures question-answer pairs into MCQs; then, an agent-based `RefineBot' updates them to remove shortcuts. Benchmarking state-of-the-art MLLMs reveals a peak performance of 53\%; models with smaller LLMs only slightly underperform top models, suggesting that language-based reasoning is less challenging than multimodal reasoning; and tuning with scientific articles enhances performance. Expert analysis of chain-of-thought responses shows that perception errors are the most frequent, followed by knowledge errors and then overgeneralization errors. These insights highlight the challenges in multimodal scientific reasoning, showing MicroVQA is a valuable resource advancing AI-driven biomedical research. MicroVQA is available at https://huggingface.co/datasets/jmhb/microvqa, and project page at https://jmhb0.github.io/microvqa.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 17:33:10 GMT" } ]
2025-03-18T00:00:00
[ [ "Burgess", "James", "" ], [ "Nirschl", "Jeffrey J", "" ], [ "Bravo-Sánchez", "Laura", "" ], [ "Lozano", "Alejandro", "" ], [ "Gupte", "Sanket Rajan", "" ], [ "Galaz-Montoya", "Jesus G.", "" ], [ "Zhang", "Yuhui", "" ], [ "Su", "Yuchang", "" ], [ "Bhowmik", "Disha", "" ], [ "Coman", "Zachary", "" ], [ "Hasan", "Sarina M.", "" ], [ "Johannesson", "Alexandra", "" ], [ "Leineweber", "William D.", "" ], [ "Nair", "Malvika G", "" ], [ "Yarlagadda", "Ridhi", "" ], [ "Zuraski", "Connor", "" ], [ "Chiu", "Wah", "" ], [ "Cohen", "Sarah", "" ], [ "Hansen", "Jan N.", "" ], [ "Leonetti", "Manuel D", "" ], [ "Liu", "Chad", "" ], [ "Lundberg", "Emma", "" ], [ "Yeung-Levy", "Serena", "" ] ]
TITLE: MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research ABSTRACT: Scientific research demands sophisticated reasoning over multimodal data, a challenge especially prevalent in biology. Despite recent advances in multimodal large language models (MLLMs) for AI-assisted research, existing multimodal reasoning benchmarks only target up to college-level difficulty, while research-level benchmarks emphasize lower-level perception, falling short of the complex multimodal reasoning needed for scientific discovery. To bridge this gap, we introduce MicroVQA, a visual-question answering (VQA) benchmark designed to assess three reasoning capabilities vital in research workflows: expert image understanding, hypothesis generation, and experiment proposal. MicroVQA consists of 1,042 multiple-choice questions (MCQs) curated by biology experts across diverse microscopy modalities, ensuring VQA samples represent real scientific practice. In constructing the benchmark, we find that standard MCQ generation methods induce language shortcuts, motivating a new two-stage pipeline: an optimized LLM prompt structures question-answer pairs into MCQs; then, an agent-based `RefineBot' updates them to remove shortcuts. Benchmarking state-of-the-art MLLMs reveals a peak performance of 53\%; models with smaller LLMs only slightly underperform top models, suggesting that language-based reasoning is less challenging than multimodal reasoning; and tuning with scientific articles enhances performance. Expert analysis of chain-of-thought responses shows that perception errors are the most frequent, followed by knowledge errors and then overgeneralization errors. These insights highlight the challenges in multimodal scientific reasoning, showing MicroVQA is a valuable resource advancing AI-driven biomedical research. MicroVQA is available at https://huggingface.co/datasets/jmhb/microvqa, and project page at https://jmhb0.github.io/microvqa.
2503.13400
Qi Zhang
Qi Zhang, Xiuyuan Chen, Ziyi He, Kun Wang, Lianming Wu, Hongxing Shen, and Jianqi Sun
U2AD: Uncertainty-based Unsupervised Anomaly Detection Framework for Detecting T2 Hyperintensity in MRI Spinal Cord
null
null
null
null
eess.IV cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
T2 hyperintensities in spinal cord MR images are crucial biomarkers for conditions such as degenerative cervical myelopathy. However, current clinical diagnoses primarily rely on manual evaluation. Deep learning methods have shown promise in lesion detection, but most supervised approaches are heavily dependent on large, annotated datasets. Unsupervised anomaly detection (UAD) offers a compelling alternative by eliminating the need for abnormal data annotations. However, existing UAD methods rely on curated normal datasets and their performance frequently deteriorates when applied to clinical datasets due to domain shifts. We propose an Uncertainty-based Unsupervised Anomaly Detection framework, termed U2AD, to address these limitations. Unlike traditional methods, U2AD is designed to be trained and tested within the same clinical dataset, following a "mask-and-reconstruction" paradigm built on a Vision Transformer-based architecture. We introduce an uncertainty-guided masking strategy to resolve task conflicts between normal reconstruction and anomaly detection to achieve an optimal balance. Specifically, we employ a Monte-Carlo sampling technique to estimate reconstruction uncertainty maps during training. By iteratively optimizing reconstruction training under the guidance of both epistemic and aleatoric uncertainty, U2AD reduces overall reconstruction variance while emphasizing high-uncertainty regions. Experimental results demonstrate that U2AD outperforms existing supervised and unsupervised methods in patient-level identification and segment-level localization tasks. This framework establishes a new benchmark for incorporating uncertainty guidance into UAD, highlighting its clinical utility in addressing domain shifts and task conflicts in medical image anomaly detection. Our code is available: https://github.com/zhibaishouheilab/U2AD
[ { "version": "v1", "created": "Mon, 17 Mar 2025 17:33:32 GMT" } ]
2025-03-18T00:00:00
[ [ "Zhang", "Qi", "" ], [ "Chen", "Xiuyuan", "" ], [ "He", "Ziyi", "" ], [ "Wang", "Kun", "" ], [ "Wu", "Lianming", "" ], [ "Shen", "Hongxing", "" ], [ "Sun", "Jianqi", "" ] ]
TITLE: U2AD: Uncertainty-based Unsupervised Anomaly Detection Framework for Detecting T2 Hyperintensity in MRI Spinal Cord ABSTRACT: T2 hyperintensities in spinal cord MR images are crucial biomarkers for conditions such as degenerative cervical myelopathy. However, current clinical diagnoses primarily rely on manual evaluation. Deep learning methods have shown promise in lesion detection, but most supervised approaches are heavily dependent on large, annotated datasets. Unsupervised anomaly detection (UAD) offers a compelling alternative by eliminating the need for abnormal data annotations. However, existing UAD methods rely on curated normal datasets and their performance frequently deteriorates when applied to clinical datasets due to domain shifts. We propose an Uncertainty-based Unsupervised Anomaly Detection framework, termed U2AD, to address these limitations. Unlike traditional methods, U2AD is designed to be trained and tested within the same clinical dataset, following a "mask-and-reconstruction" paradigm built on a Vision Transformer-based architecture. We introduce an uncertainty-guided masking strategy to resolve task conflicts between normal reconstruction and anomaly detection to achieve an optimal balance. Specifically, we employ a Monte-Carlo sampling technique to estimate reconstruction uncertainty maps during training. By iteratively optimizing reconstruction training under the guidance of both epistemic and aleatoric uncertainty, U2AD reduces overall reconstruction variance while emphasizing high-uncertainty regions. Experimental results demonstrate that U2AD outperforms existing supervised and unsupervised methods in patient-level identification and segment-level localization tasks. This framework establishes a new benchmark for incorporating uncertainty guidance into UAD, highlighting its clinical utility in addressing domain shifts and task conflicts in medical image anomaly detection. Our code is available: https://github.com/zhibaishouheilab/U2AD
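An illustrative sketch of Monte-Carlo reconstruction uncertainty guiding the mask, assuming a `model(image, mask_ratio)` interface that re-samples its token mask on every call; this is our reading of the abstract, not the authors' implementation.

```python
# Sketch: run a masked ViT autoencoder several times with stochastic masks,
# estimate per-pixel reconstruction mean/variance, then mask the
# highest-variance patches more aggressively in the next training round.
import torch

@torch.no_grad()
def mc_uncertainty(model, image, n_samples=8, mask_ratio=0.5):
    """Per-pixel mean and variance of reconstructions under random masks."""
    recons = torch.stack([model(image, mask_ratio) for _ in range(n_samples)])
    return recons.mean(0), recons.var(0)

def uncertainty_guided_mask(variance, patch=16, mask_ratio=0.5):
    """Select the highest-variance patch tokens to mask next."""
    # average variance over channels, then within each patch token
    v = variance.mean(1)                                  # (B, H, W)
    v = v.unfold(1, patch, patch).unfold(2, patch, patch) # (B, H/p, W/p, p, p)
    v = v.mean((-1, -2)).flatten(1)                       # (B, num_patches)
    k = int(mask_ratio * v.shape[1])
    idx = v.topk(k, dim=1).indices                        # patches to mask
    return torch.zeros_like(v, dtype=torch.bool).scatter_(1, idx, True)
```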
2503.13404
Xubo Yue
Cheoljoon Jeong, Xubo Yue, Seokhyun Chung
Fed-Joint: Joint Modeling of Nonlinear Degradation Signals and Failure Events for Remaining Useful Life Prediction using Federated Learning
null
null
null
null
cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many failure mechanisms of machinery are closely related to the behavior of condition monitoring (CM) signals. To achieve a cost-effective preventive maintenance strategy, accurate remaining useful life (RUL) prediction based on the signals is of paramount importance. However, the CM signals are often recorded at different factories and production lines, with limited amounts of data. Unfortunately, these datasets have rarely been shared between the sites due to data confidentiality and ownership issues, a lack of computing and storage power, and high communication costs associated with data transfer between sites and a data center. Another challenge in real applications is that the CM signals are often not explicitly specified \textit{a priori}, meaning that existing methods, which usually assume a parametric form, may not be applicable. To address these challenges, we propose a new prognostic framework for RUL prediction using the joint modeling of nonlinear degradation signals and time-to-failure data within a federated learning scheme. The proposed method constructs a nonparametric degradation model using a federated multi-output Gaussian process and then employs a federated survival model to predict failure times and probabilities for in-service machinery. The superiority of the proposed method over other alternatives is demonstrated through comprehensive simulation studies and a case study using turbofan engine degradation signal data that include run-to-failure events.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 17:34:34 GMT" } ]
2025-03-18T00:00:00
[ [ "Jeong", "Cheoljoon", "" ], [ "Yue", "Xubo", "" ], [ "Chung", "Seokhyun", "" ] ]
TITLE: Fed-Joint: Joint Modeling of Nonlinear Degradation Signals and Failure Events for Remaining Useful Life Prediction using Federated Learning ABSTRACT: Many failure mechanisms of machinery are closely related to the behavior of condition monitoring (CM) signals. To achieve a cost-effective preventive maintenance strategy, accurate remaining useful life (RUL) prediction based on the signals is of paramount importance. However, the CM signals are often recorded at different factories and production lines, with limited amounts of data. Unfortunately, these datasets have rarely been shared between the sites due to data confidentiality and ownership issues, a lack of computing and storage power, and high communication costs associated with data transfer between sites and a data center. Another challenge in real applications is that the CM signals are often not explicitly specified \textit{a priori}, meaning that existing methods, which usually assume a parametric form, may not be applicable. To address these challenges, we propose a new prognostic framework for RUL prediction using the joint modeling of nonlinear degradation signals and time-to-failure data within a federated learning scheme. The proposed method constructs a nonparametric degradation model using a federated multi-output Gaussian process and then employs a federated survival model to predict failure times and probabilities for in-service machinery. The superiority of the proposed method over other alternatives is demonstrated through comprehensive simulation studies and a case study using turbofan engine degradation signal data that include run-to-failure events.
2503.13409
Guillaume Lagarde
Gabriel Bathie and Guillaume Lagarde
A $(1+\epsilon)$-Approximation for Ultrametric Embedding in Subquadratic Time
Extended version of AAAI 2025
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
Efficiently computing accurate representations of high-dimensional data is essential for data analysis and unsupervised learning. Dendrograms, also known as ultrametrics, are widely used representations that preserve hierarchical relationships within the data. However, popular methods for computing them, such as linkage algorithms, suffer from quadratic time and space complexity, making them impractical for large datasets. The "best ultrametric embedding" (a.k.a. "best ultrametric fit") problem, which aims to find the ultrametric that best preserves the distances between points in the original data, is known to require at least quadratic time for an exact solution. Recent work has focused on improving scalability by approximating optimal solutions in subquadratic time, resulting in a $(\sqrt{2} + \epsilon)$-approximation (Cohen-Addad, de Joannis de Verclos and Lagarde, 2021). In this paper, we present the first subquadratic algorithm that achieves arbitrarily precise approximations of the optimal ultrametric embedding. Specifically, we provide an algorithm that, for any $c \geq 1$, outputs a $c$-approximation of the best ultrametric in time $\tilde{O}(n^{1 + 1/c})$. In particular, for any fixed $\epsilon > 0$, the algorithm computes a $(1+\epsilon)$-approximation in time $\tilde{O}(n^{2 - \epsilon + o(\epsilon^2)})$. Experimental results show that our algorithm improves upon previous methods in terms of approximation quality while maintaining comparable running times.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 17:38:37 GMT" } ]
2025-03-18T00:00:00
[ [ "Bathie", "Gabriel", "" ], [ "Lagarde", "Guillaume", "" ] ]
TITLE: A $(1+\epsilon)$-Approximation for Ultrametric Embedding in Subquadratic Time ABSTRACT: Efficiently computing accurate representations of high-dimensional data is essential for data analysis and unsupervised learning. Dendrograms, also known as ultrametrics, are widely used representations that preserve hierarchical relationships within the data. However, popular methods for computing them, such as linkage algorithms, suffer from quadratic time and space complexity, making them impractical for large datasets. The "best ultrametric embedding" (a.k.a. "best ultrametric fit") problem, which aims to find the ultrametric that best preserves the distances between points in the original data, is known to require at least quadratic time for an exact solution. Recent work has focused on improving scalability by approximating optimal solutions in subquadratic time, resulting in a $(\sqrt{2} + \epsilon)$-approximation (Cohen-Addad, de Joannis de Verclos and Lagarde, 2021). In this paper, we present the first subquadratic algorithm that achieves arbitrarily precise approximations of the optimal ultrametric embedding. Specifically, we provide an algorithm that, for any $c \geq 1$, outputs a $c$-approximation of the best ultrametric in time $\tilde{O}(n^{1 + 1/c})$. In particular, for any fixed $\epsilon > 0$, the algorithm computes a $(1+\epsilon)$-approximation in time $\tilde{O}(n^{2 - \epsilon + o(\epsilon^2)})$. Experimental results show that our algorithm improves upon previous methods in terms of approximation quality while maintaining comparable running times.
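For context, the classical quadratic-time baseline is easy to state: single-linkage cophenetic distances give the subdominant ultrametric fit, which scipy computes directly. The sketch below is that baseline (useful for checking approximation quality on small inputs), not the paper's subquadratic algorithm.

```python
# Quadratic-time reference: single-linkage cophenetic distances form an
# ultrametric that lower-bounds the input metric (the subdominant ultrametric).
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # 200 points in 8 dimensions

d = pdist(X)                           # condensed pairwise distances
Z = linkage(d, method="single")        # single-linkage dendrogram
u = cophenet(Z)                        # ultrametric (cophenetic) distances

# An ultrametric satisfies d(x, z) <= max(d(x, y), d(y, z)); spot-check it.
U = squareform(u)
i, j, k = 3, 17, 42
assert U[i, k] <= max(U[i, j], U[j, k]) + 1e-9

# Distortion of this fit relative to the input metric (u <= d pointwise here).
print("min ratio u/d:", (u / d).min(), "max ratio u/d:", (u / d).max())
```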
2503.13419
Khaza Anuarul Hoque
Ripan Kumar Kundu, Matthew Denton, Genova Mongalo, Prasad Calyam, Khaza Anuarul Hoque
Securing Virtual Reality Experiences: Unveiling and Tackling Cybersickness Attacks with Explainable AI
This work has been submitted to the IEEE for possible publication
null
null
null
cs.CR cs.AI cs.ET cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The synergy between virtual reality (VR) and artificial intelligence (AI), specifically deep learning (DL)-based cybersickness detection models, has ushered in unprecedented advancements in immersive experiences by automatically detecting cybersickness severity and adaptively applying various mitigation techniques, offering a smooth and comfortable VR experience. While DL-enabled cybersickness detection provides promising solutions for enhancing user experiences, it also introduces new risks, since these models are vulnerable to adversarial attacks; a small perturbation of the input data that is visually undetectable to human observers can fool the cybersickness detection model and trigger unexpected mitigation, thus disrupting user immersive experiences (UIX) and even posing safety risks. In this paper, we present a new type of VR attack, i.e., a cybersickness attack, which successfully stops the triggering of cybersickness mitigation by fooling DL-based cybersickness detection models and dramatically hinders the UIX. Next, we propose a novel explainable artificial intelligence (XAI)-guided cybersickness attack detection framework to detect such attacks in VR to ensure UIX and a comfortable VR experience. We evaluate the proposed attack and the detection framework using two state-of-the-art open-source VR cybersickness datasets: the Simulation 2021 and Gameplay datasets. Finally, to verify the effectiveness of our proposed method, we implement the attack and the XAI-based detection using a testbed with a custom-built VR roller coaster simulation with an HTC Vive Pro Eye headset and perform a user study. Our study shows that such an attack can dramatically hinder the UIX. However, our proposed XAI-guided cybersickness attack detection can successfully detect cybersickness attacks and trigger the proper mitigation, effectively reducing VR cybersickness.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 17:49:51 GMT" } ]
2025-03-18T00:00:00
[ [ "Kundu", "Ripan Kumar", "" ], [ "Denton", "Matthew", "" ], [ "Mongalo", "Genova", "" ], [ "Calyam", "Prasad", "" ], [ "Hoque", "Khaza Anuarul", "" ] ]
TITLE: Securing Virtual Reality Experiences: Unveiling and Tackling Cybersickness Attacks with Explainable AI ABSTRACT: The synergy between virtual reality (VR) and artificial intelligence (AI), specifically deep learning (DL)-based cybersickness detection models, has ushered in unprecedented advancements in immersive experiences by automatically detecting cybersickness severity and adaptively applying various mitigation techniques, offering a smooth and comfortable VR experience. While DL-enabled cybersickness detection provides promising solutions for enhancing user experiences, it also introduces new risks, since these models are vulnerable to adversarial attacks; a small perturbation of the input data that is visually undetectable to human observers can fool the cybersickness detection model and trigger unexpected mitigation, thus disrupting user immersive experiences (UIX) and even posing safety risks. In this paper, we present a new type of VR attack, i.e., a cybersickness attack, which successfully stops the triggering of cybersickness mitigation by fooling DL-based cybersickness detection models and dramatically hinders the UIX. Next, we propose a novel explainable artificial intelligence (XAI)-guided cybersickness attack detection framework to detect such attacks in VR to ensure UIX and a comfortable VR experience. We evaluate the proposed attack and the detection framework using two state-of-the-art open-source VR cybersickness datasets: the Simulation 2021 and Gameplay datasets. Finally, to verify the effectiveness of our proposed method, we implement the attack and the XAI-based detection using a testbed with a custom-built VR roller coaster simulation with an HTC Vive Pro Eye headset and perform a user study. Our study shows that such an attack can dramatically hinder the UIX. However, our proposed XAI-guided cybersickness attack detection can successfully detect cybersickness attacks and trigger the proper mitigation, effectively reducing VR cybersickness.
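A hedged sketch of the threat model using a generic FGSM-style targeted perturbation (not the paper's exact attack); the `detector`, input shapes, and the "no sickness" class index are assumptions.

```python
# Sketch: a small perturbation of the physiological/motion input stream pushes
# a DL cybersickness detector toward the "none" class so mitigation never fires.
import torch
import torch.nn.functional as F

def cybersickness_attack(detector, x, epsilon=0.01, target_class=0):
    """Craft x_adv so `detector` predicts `target_class` (e.g. 'no sickness')."""
    x_adv = x.clone().detach().requires_grad_(True)
    logits = detector(x_adv)
    target = torch.full((x.shape[0],), target_class, dtype=torch.long)
    loss = F.cross_entropy(logits, target)
    loss.backward()
    # step *against* the loss gradient to move predictions toward the target;
    # detector parameters are never updated, only the input is perturbed
    return (x_adv - epsilon * x_adv.grad.sign()).detach()
```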
2503.13424
Xinyu Lian
Xinyu Lian, Zichao Yu, Ruiming Liang, Yitong Wang, Li Ray Luo, Kaixu Chen, Yuanzhen Zhou, Qihong Tang, Xudong Xu, Zhaoyang Lyu, Bo Dai, Jiangmiao Pang
Infinite Mobility: Scalable High-Fidelity Synthesis of Articulated Objects via Procedural Generation
Project page: https://infinite-mobility.github.io 10 pages,12 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Large-scale articulated objects with high quality are desperately needed for multiple tasks related to embodied AI. Most existing methods for creating articulated objects are either data-driven or simulation-based, which are limited by the scale and quality of the training data or by the fidelity and heavy labour of the simulation. In this paper, we propose Infinite Mobility, a novel method for synthesizing high-fidelity articulated objects through procedural generation. User study and quantitative evaluation demonstrate that our method can produce results that surpass current state-of-the-art methods and are comparable to human-annotated datasets in both physics properties and mesh quality. Furthermore, we show that our synthetic data can be used as training data for generative models, enabling further scaling up. Code is available at https://github.com/Intern-Nexus/Infinite-Mobility
[ { "version": "v1", "created": "Mon, 17 Mar 2025 17:53:56 GMT" } ]
2025-03-18T00:00:00
[ [ "Lian", "Xinyu", "" ], [ "Yu", "Zichao", "" ], [ "Liang", "Ruiming", "" ], [ "Wang", "Yitong", "" ], [ "Luo", "Li Ray", "" ], [ "Chen", "Kaixu", "" ], [ "Zhou", "Yuanzhen", "" ], [ "Tang", "Qihong", "" ], [ "Xu", "Xudong", "" ], [ "Lyu", "Zhaoyang", "" ], [ "Dai", "Bo", "" ], [ "Pang", "Jiangmiao", "" ] ]
TITLE: Infinite Mobility: Scalable High-Fidelity Synthesis of Articulated Objects via Procedural Generation ABSTRACT: Large-scale articulated objects with high quality are desperately needed for multiple tasks related to embodied AI. Most existing methods for creating articulated objects are either data-driven or simulation-based, which are limited by the scale and quality of the training data or by the fidelity and heavy labour of the simulation. In this paper, we propose Infinite Mobility, a novel method for synthesizing high-fidelity articulated objects through procedural generation. User study and quantitative evaluation demonstrate that our method can produce results that surpass current state-of-the-art methods and are comparable to human-annotated datasets in both physics properties and mesh quality. Furthermore, we show that our synthetic data can be used as training data for generative models, enabling further scaling up. Code is available at https://github.com/Intern-Nexus/Infinite-Mobility
2503.13430
Thomas Monninger
Thomas Monninger, Md Zafar Anwar, Stanislaw Antol, Steffen Staab, Sihao Ding
AugMapNet: Improving Spatial Latent Structure via BEV Grid Augmentation for Enhanced Vectorized Online HD Map Construction
null
null
null
null
cs.CV cs.AI cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous driving requires an understanding of the infrastructure elements, such as lanes and crosswalks. To navigate safely, this understanding must be derived from sensor data in real-time and needs to be represented in vectorized form. Learned Bird's-Eye View (BEV) encoders are commonly used to combine a set of camera images from multiple views into one joint latent BEV grid. Traditionally, from this latent space, an intermediate raster map is predicted, providing dense spatial supervision but requiring post-processing into the desired vectorized form. More recent models directly derive infrastructure elements as polylines using vectorized map decoders, providing instance-level information. Our approach, Augmentation Map Network (AugMapNet), proposes latent BEV grid augmentation, a novel technique that significantly enhances the latent BEV representation. AugMapNet combines vector decoding and dense spatial supervision more effectively than existing architectures while remaining as straightforward to integrate and as generic as auxiliary supervision. Experiments on the nuScenes and Argoverse2 datasets demonstrate significant improvements in vectorized map prediction performance of up to 13.3% over the StreamMapNet baseline at the 60 m range, and greater improvements at larger ranges. We confirm transferability by applying our method to another baseline and find similar improvements. A detailed analysis of the latent BEV grid confirms a more structured latent space of AugMapNet and shows the value of our novel concept beyond pure performance improvement. The code will be released soon.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 17:55:32 GMT" } ]
2025-03-18T00:00:00
[ [ "Monninger", "Thomas", "" ], [ "Anwar", "Md Zafar", "" ], [ "Antol", "Stanislaw", "" ], [ "Staab", "Steffen", "" ], [ "Ding", "Sihao", "" ] ]
TITLE: AugMapNet: Improving Spatial Latent Structure via BEV Grid Augmentation for Enhanced Vectorized Online HD Map Construction ABSTRACT: Autonomous driving requires an understanding of the infrastructure elements, such as lanes and crosswalks. To navigate safely, this understanding must be derived from sensor data in real-time and needs to be represented in vectorized form. Learned Bird's-Eye View (BEV) encoders are commonly used to combine a set of camera images from multiple views into one joint latent BEV grid. Traditionally, from this latent space, an intermediate raster map is predicted, providing dense spatial supervision but requiring post-processing into the desired vectorized form. More recent models directly derive infrastructure elements as polylines using vectorized map decoders, providing instance-level information. Our approach, Augmentation Map Network (AugMapNet), proposes latent BEV grid augmentation, a novel technique that significantly enhances the latent BEV representation. AugMapNet combines vector decoding and dense spatial supervision more effectively than existing architectures while remaining as straightforward to integrate and as generic as auxiliary supervision. Experiments on the nuScenes and Argoverse2 datasets demonstrate significant improvements in vectorized map prediction performance of up to 13.3% over the StreamMapNet baseline at the 60 m range, and greater improvements at larger ranges. We confirm transferability by applying our method to another baseline and find similar improvements. A detailed analysis of the latent BEV grid confirms a more structured latent space of AugMapNet and shows the value of our novel concept beyond pure performance improvement. The code will be released soon.
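A conceptual sketch of combining dense raster supervision with a vector decoder on a shared latent BEV grid, with assumed shapes and names; the real AugMapNet head and loss weighting may differ.

```python
# Sketch: a light rasterization head trained alongside the vector map decoder
# keeps the shared BEV features spatially structured.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RasterAuxHead(nn.Module):
    def __init__(self, bev_channels=256, n_classes=3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(bev_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_classes, 1))

    def forward(self, bev):          # bev: (B, C, H, W) latent BEV grid
        return self.head(bev)        # per-class raster logits (B, n_classes, H, W)

def total_loss(vector_loss, bev_feats, raster_gt, aux_head, weight=0.5):
    """Combine the vector-decoder loss with auxiliary dense BEV supervision.

    raster_gt: float tensor of rasterized map labels, same shape as the logits.
    """
    aux = F.binary_cross_entropy_with_logits(aux_head(bev_feats), raster_gt)
    return vector_loss + weight * aux
```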
2503.13435
Ling Yang
Ling Yang, Kaixin Zhu, Juanxi Tian, Bohan Zeng, Mingbao Lin, Hongjuan Pei, Wentao Zhang, Shuicheng Yan
WideRange4D: Enabling High-Quality 4D Reconstruction with Wide-Range Movements and Scenes
Project: https://github.com/Gen-Verse/WideRange4D
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid development of 3D reconstruction technology, research in 4D reconstruction is also advancing, and existing 4D reconstruction methods can generate high-quality 4D scenes. However, due to the challenges in acquiring multi-view video data, the current 4D reconstruction benchmarks mainly display actions performed in place, such as dancing, within limited scenarios. In practical scenarios, many scenes involve wide-range spatial movements, highlighting the limitations of existing 4D reconstruction datasets. Additionally, existing 4D reconstruction methods rely on deformation fields to estimate the dynamics of 3D objects, but deformation fields struggle with wide-range spatial movements, which limits high-quality 4D scene reconstruction in such settings. In this paper, we focus on 4D scene reconstruction with significant object spatial movements and propose a novel 4D reconstruction benchmark, WideRange4D. This benchmark includes rich 4D scene data with large spatial variations, allowing for a more comprehensive evaluation of the generation capabilities of 4D generation methods. Furthermore, we introduce a new 4D reconstruction method, Progress4D, which generates stable and high-quality 4D results across various complex 4D scene reconstruction tasks. We conduct both quantitative and qualitative comparison experiments on WideRange4D, showing that our Progress4D outperforms existing state-of-the-art 4D reconstruction methods. Project: https://github.com/Gen-Verse/WideRange4D
[ { "version": "v1", "created": "Mon, 17 Mar 2025 17:58:18 GMT" } ]
2025-03-18T00:00:00
[ [ "Yang", "Ling", "" ], [ "Zhu", "Kaixin", "" ], [ "Tian", "Juanxi", "" ], [ "Zeng", "Bohan", "" ], [ "Lin", "Mingbao", "" ], [ "Pei", "Hongjuan", "" ], [ "Zhang", "Wentao", "" ], [ "Yan", "Shuicheng", "" ] ]
TITLE: WideRange4D: Enabling High-Quality 4D Reconstruction with Wide-Range Movements and Scenes ABSTRACT: With the rapid development of 3D reconstruction technology, research in 4D reconstruction is also advancing, and existing 4D reconstruction methods can generate high-quality 4D scenes. However, due to the challenges in acquiring multi-view video data, the current 4D reconstruction benchmarks mainly display actions performed in place, such as dancing, within limited scenarios. In practical scenarios, many scenes involve wide-range spatial movements, highlighting the limitations of existing 4D reconstruction datasets. Additionally, existing 4D reconstruction methods rely on deformation fields to estimate the dynamics of 3D objects, but deformation fields struggle with wide-range spatial movements, which limits high-quality 4D scene reconstruction in such settings. In this paper, we focus on 4D scene reconstruction with significant object spatial movements and propose a novel 4D reconstruction benchmark, WideRange4D. This benchmark includes rich 4D scene data with large spatial variations, allowing for a more comprehensive evaluation of the generation capabilities of 4D generation methods. Furthermore, we introduce a new 4D reconstruction method, Progress4D, which generates stable and high-quality 4D results across various complex 4D scene reconstruction tasks. We conduct both quantitative and qualitative comparison experiments on WideRange4D, showing that our Progress4D outperforms existing state-of-the-art 4D reconstruction methods. Project: https://github.com/Gen-Verse/WideRange4D
2209.04517
Jola Mirecka
Marjan Famili, Jola Mirecka, Camila Rangel Smith, Anna Kota\'nska, Nikolai Juraschko, Beatriz Costa-Gomes, Colin M. Palmer, Jeyan Thiyagalingam, Tom Burnley, Mark Basham, Alan R. Lowe
Affinity-VAE: incorporating prior knowledge in representation learning from scientific images
null
null
null
null
cs.CV cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning compact and interpretable representations of data is a critical challenge in scientific image analysis. Here, we introduce Affinity-VAE, a generative model that enables us to impose our scientific intuition about the similarity of instances in the dataset on the learned representation during training. We demonstrate the utility of the approach in the scientific domain of cryo-electron tomography (cryo-ET) where a significant current challenge is to identify similar molecules within a noisy and low contrast tomographic image volume. This task is distinct from classification in that, at inference time, it is unknown whether an instance is part of the training set or not. We trained Affinity-VAE using prior knowledge of protein structure to inform the latent space. Our model is able to create rotationally-invariant, morphologically homogeneous clusters in the latent representation, with improved cluster separation compared to other approaches. It achieves competitive performance on protein classification with the added benefit of disentangling object pose, structural similarity and an interpretable latent representation. In the context of cryo-ET data, Affinity-VAE captures the orientation of identified proteins in 3D which can be used as a prior for subsequent scientific experiments. Extracting physical principles from a trained network is of significant importance in scientific imaging where a ground truth training set is not always feasible.
[ { "version": "v1", "created": "Fri, 9 Sep 2022 20:39:22 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 16:34:24 GMT" } ]
2025-03-17T00:00:00
[ [ "Famili", "Marjan", "" ], [ "Mirecka", "Jola", "" ], [ "Smith", "Camila Rangel", "" ], [ "Kotańska", "Anna", "" ], [ "Juraschko", "Nikolai", "" ], [ "Costa-Gomes", "Beatriz", "" ], [ "Palmer", "Colin M.", "" ], [ "Thiyagalingam", "Jeyan", "" ], [ "Burnley", "Tom", "" ], [ "Basham", "Mark", "" ], [ "Lowe", "Alan R.", "" ] ]
TITLE: Affinity-VAE: incorporating prior knowledge in representation learning from scientific images ABSTRACT: Learning compact and interpretable representations of data is a critical challenge in scientific image analysis. Here, we introduce Affinity-VAE, a generative model that enables us to impose our scientific intuition about the similarity of instances in the dataset on the learned representation during training. We demonstrate the utility of the approach in the scientific domain of cryo-electron tomography (cryo-ET) where a significant current challenge is to identify similar molecules within a noisy and low contrast tomographic image volume. This task is distinct from classification in that, at inference time, it is unknown whether an instance is part of the training set or not. We trained Affinity-VAE using prior knowledge of protein structure to inform the latent space. Our model is able to create rotationally-invariant, morphologically homogeneous clusters in the latent representation, with improved cluster separation compared to other approaches. It achieves competitive performance on protein classification with the added benefit of disentangling object pose, structural similarity and an interpretable latent representation. In the context of cryo-ET data, Affinity-VAE captures the orientation of identified proteins in 3D which can be used as a prior for subsequent scientific experiments. Extracting physical principles from a trained network is of significant importance in scientific imaging where a ground truth training set is not always feasible.
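A minimal sketch of one way the affinity prior could enter the objective, assuming a batch-level affinity matrix in [-1, 1]; the similarity measure and weighting are our assumptions, not the authors' exact formulation.

```python
# Sketch: alongside the usual VAE objective, latent similarities are pushed
# toward a user-supplied affinity matrix encoding prior knowledge about which
# training instances are structurally similar.
import torch
import torch.nn.functional as F

def affinity_vae_loss(x, x_hat, mu, logvar, affinity, beta=1.0, gamma=1.0):
    """x, x_hat: inputs/reconstructions; mu, logvar: (B, d) latent parameters;
    affinity: (B, B) prior similarity of instances in the batch, in [-1, 1]."""
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # cosine similarity between latent means within the batch
    z = F.normalize(mu, dim=1)
    sim = z @ z.t()
    aff = F.mse_loss(sim, affinity)     # match latent geometry to the prior
    return recon + beta * kld + gamma * aff
```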
2210.14485
Tongkun Liu
Tongkun Liu, Bing Li, Zhuo Zhao, Xiao Du, Bingke Jiang, Leqi Geng
Reconstruction from edge image combined with color and gradient difference for industrial surface anomaly detection
11 pages, 8 figures
null
10.1016/j.aei.2024.103064
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstruction-based methods are widely explored in industrial visual anomaly detection. Such methods commonly require the model to reconstruct normal patterns well while failing on anomalies, so that anomalies can be detected by evaluating the reconstruction errors. However, in practice, it is usually difficult to control the generalization boundary of the model. A model with an overly strong generalization capability can even well reconstruct the abnormal regions, making them less distinguishable, while a model with a poor generalization capability cannot reconstruct those changeable high-frequency components in the normal regions, which ultimately leads to false positives. To tackle the above issue, we propose a new reconstruction network where we reconstruct the original RGB image from its gray value edges (EdgRec). Specifically, this is achieved by a UNet-type denoising autoencoder with skip connections. The input edge and skip connections can well preserve the high-frequency information in the original image. Meanwhile, the proposed restoration task can force the network to memorize the normal low-frequency and color information. Besides, the denoising design can prevent the model from directly copying the original high-frequency components. To evaluate the anomalies, we further propose a new interpretable hand-crafted evaluation function that considers both the color and gradient differences. Our method achieves competitive results on the challenging benchmark MVTec AD (97.8\% for detection and 97.7\% for localization, AUROC). In addition, we conduct experiments on the MVTec 3D-AD dataset and show convincing results using RGB images only. Our code will be available at https://github.com/liutongkun/EdgRec.
[ { "version": "v1", "created": "Wed, 26 Oct 2022 05:21:43 GMT" } ]
2025-03-17T00:00:00
[ [ "Liu", "Tongkun", "" ], [ "Li", "Bing", "" ], [ "Zhao", "Zhuo", "" ], [ "Du", "Xiao", "" ], [ "Jiang", "Bingke", "" ], [ "Geng", "Leqi", "" ] ]
TITLE: Reconstruction from edge image combined with color and gradient difference for industrial surface anomaly detection ABSTRACT: Reconstruction-based methods are widely explored in industrial visual anomaly detection. Such methods commonly require the model to reconstruct normal patterns well while failing on anomalies, so that anomalies can be detected by evaluating the reconstruction errors. However, in practice, it is usually difficult to control the generalization boundary of the model. A model with an overly strong generalization capability can even well reconstruct the abnormal regions, making them less distinguishable, while a model with a poor generalization capability cannot reconstruct those changeable high-frequency components in the normal regions, which ultimately leads to false positives. To tackle the above issue, we propose a new reconstruction network where we reconstruct the original RGB image from its gray value edges (EdgRec). Specifically, this is achieved by a UNet-type denoising autoencoder with skip connections. The input edge and skip connections can well preserve the high-frequency information in the original image. Meanwhile, the proposed restoration task can force the network to memorize the normal low-frequency and color information. Besides, the denoising design can prevent the model from directly copying the original high-frequency components. To evaluate the anomalies, we further propose a new interpretable hand-crafted evaluation function that considers both the color and gradient differences. Our method achieves competitive results on the challenging benchmark MVTec AD (97.8\% for detection and 97.7\% for localization, AUROC). In addition, we conduct experiments on the MVTec 3D-AD dataset and show convincing results using RGB images only. Our code will be available at https://github.com/liutongkun/EdgRec.
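A sketch of a hand-crafted score in the spirit described above: per-pixel color difference plus gradient-magnitude difference between the input and its reconstruction; the weights and the absence of smoothing are assumptions, not the paper's exact function.

```python
# Sketch: anomaly map combining color and gradient differences between the
# original image and its reconstruction.
import numpy as np

def grad_mag(img):
    """Gradient magnitude per channel; img: (H, W, 3) float array in [0, 1]."""
    gy, gx = np.gradient(img, axis=(0, 1))
    return np.sqrt(gx ** 2 + gy ** 2)

def anomaly_map(original, reconstructed, w_color=1.0, w_grad=1.0):
    """Return an (H, W) anomaly score map; higher means more anomalous."""
    color_diff = np.abs(original - reconstructed).mean(axis=2)
    grad_diff = np.abs(grad_mag(original) - grad_mag(reconstructed)).mean(axis=2)
    return w_color * color_diff + w_grad * grad_diff
```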
2303.03470
R. Spencer Hallyburton
R. Spencer Hallyburton, Qingzhao Zhang, Z. Morley Mao, Michael Reiter, Miroslav Pajic
What Would Trojans Do? Exploiting Partial-Information Vulnerabilities in Autonomous Vehicle Sensing
null
null
null
null
cs.CR cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Safety-critical sensors in autonomous vehicles (AVs) form an essential part of the vehicle's trusted computing base (TCB), yet they are highly susceptible to attacks. Alarmingly, Tier 1 manufacturers have already exposed vulnerabilities to attacks introducing Trojans that can stealthily alter sensor outputs. We analyze the feasible capability and safety-critical outcomes of an attack on sensing at a cyber level. To further address these threats, we design realistic attacks in AV simulators and on real-world datasets under two practical constraints: attackers (1) possess only partial information and (2) are constrained by data structures that maintain sensor integrity. Examining the role of camera and LiDAR in multi-sensor AVs, we find that attacks targeting only the camera have minimal safety impact due to the sensor fusion system's strong reliance on 3D data from LiDAR. This reliance makes LiDAR-based attacks especially detrimental to safety. To mitigate the vulnerabilities, we introduce security-aware sensor fusion incorporating (1) a probabilistic data-asymmetry monitor and (2) a scalable track-to-track fusion of 3D LiDAR and monocular detections (T2T-3DLM). We demonstrate that these methods significantly diminish attack success rate.
[ { "version": "v1", "created": "Mon, 6 Mar 2023 19:52:41 GMT" }, { "version": "v2", "created": "Sun, 23 Apr 2023 11:41:26 GMT" }, { "version": "v3", "created": "Fri, 8 Dec 2023 10:58:13 GMT" }, { "version": "v4", "created": "Thu, 13 Mar 2025 20:57:41 GMT" } ]
2025-03-17T00:00:00
[ [ "Hallyburton", "R. Spencer", "" ], [ "Zhang", "Qingzhao", "" ], [ "Mao", "Z. Morley", "" ], [ "Reiter", "Michael", "" ], [ "Pajic", "Miroslav", "" ] ]
TITLE: What Would Trojans Do? Exploiting Partial-Information Vulnerabilities in Autonomous Vehicle Sensing ABSTRACT: Safety-critical sensors in autonomous vehicles (AVs) form an essential part of the vehicle's trusted computing base (TCB), yet they are highly susceptible to attacks. Alarmingly, Tier 1 manufacturers have already exposed vulnerabilities to attacks introducing Trojans that can stealthily alter sensor outputs. We analyze the feasible capability and safety-critical outcomes of an attack on sensing at a cyber level. To further address these threats, we design realistic attacks in AV simulators and on real-world datasets under two practical constraints: attackers (1) possess only partial information and (2) are constrained by data structures that maintain sensor integrity. Examining the role of camera and LiDAR in multi-sensor AVs, we find that attacks targeting only the camera have minimal safety impact due to the sensor fusion system's strong reliance on 3D data from LiDAR. This reliance makes LiDAR-based attacks especially detrimental to safety. To mitigate the vulnerabilities, we introduce security-aware sensor fusion incorporating (1) a probabilistic data-asymmetry monitor and (2) a scalable track-to-track fusion of 3D LiDAR and monocular detections (T2T-3DLM). We demonstrate that these methods significantly diminish attack success rate.
2303.17448
Weiming Li
Weiming Li, Xueqian Wang, Gang Li, Baocheng Geng, Pramod K. Varshney
NN-Copula-CD: A Copula-Guided Interpretable Neural Network for Change Detection in Heterogeneous Remote Sensing Images
The full version of this work is submitted to IEEE TGRS
IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1-17, 2025, Art no. 4700817
10.1109/TGRS.2024.3524639
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Change detection (CD) in heterogeneous remote sensing images has been widely used for disaster monitoring and land-use management. In the past decade, the heterogeneous CD problem has significantly benefited from the development of deep neural networks (DNNs). However, the purely data-driven DNNs perform like a black box where the lack of interpretability limits the trustworthiness and controllability of DNNs in most practical CD applications. As a powerful knowledge-driven tool, copula theory performs well in modeling relationships among random variables. To enhance the interpretability of existing neural networks for CD, we propose a knowledge-data-driven heterogeneous CD method based on a copula-guided neural network, named NN-Copula-CD. In our NN-Copula-CD, the mathematical characteristics of copula are employed as the loss functions to supervise a neural network to learn the dependence between bi-temporal heterogeneous superpixel pairs, and then the changed regions are identified via binary classification based on the degrees of dependence of all the superpixel pairs in the bi-temporal images. We conduct in-depth experiments on three datasets with heterogeneous images, where both quantitative and visual results demonstrate the effectiveness of our proposed NN-Copula-CD method.
[ { "version": "v1", "created": "Thu, 30 Mar 2023 15:20:21 GMT" }, { "version": "v2", "created": "Wed, 18 Sep 2024 14:51:30 GMT" }, { "version": "v3", "created": "Thu, 19 Sep 2024 14:32:45 GMT" } ]
2025-03-17T00:00:00
[ [ "Li", "Weiming", "" ], [ "Wang", "Xueqian", "" ], [ "Li", "Gang", "" ], [ "Geng", "Baocheng", "" ], [ "Varshney", "Pramod K.", "" ] ]
TITLE: NN-Copula-CD: A Copula-Guided Interpretable Neural Network for Change Detection in Heterogeneous Remote Sensing Images ABSTRACT: Change detection (CD) in heterogeneous remote sensing images has been widely used for disaster monitoring and land-use management. In the past decade, the heterogeneous CD problem has significantly benefited from the development of deep neural networks (DNNs). However, the purely data-driven DNNs perform like a black box where the lack of interpretability limits the trustworthiness and controllability of DNNs in most practical CD applications. As a powerful knowledge-driven tool, copula theory performs well in modeling relationships among random variables. To enhance the interpretability of existing neural networks for CD, we propose a knowledge-data-driven heterogeneous CD method based on a copula-guided neural network, named NN-Copula-CD. In our NN-Copula-CD, the mathematical characteristics of copula are employed as the loss functions to supervise a neural network to learn the dependence between bi-temporal heterogeneous superpixel pairs, and then the changed regions are identified via binary classification based on the degrees of dependence of all the superpixel pairs in the bi-temporal images. We conduct in-depth experiments on three datasets with heterogeneous images, where both quantitative and visual results demonstrate the effectiveness of our proposed NN-Copula-CD method.
2305.17063
Felix Jimenez
Felix Jimenez, Matthias Katzfuss
Vecchia Gaussian Process Ensembles on Internal Representations of Deep Neural Networks
22 pages, 9 figures
null
null
null
stat.ML cs.LG
http://creativecommons.org/licenses/by/4.0/
For regression tasks, standard Gaussian processes (GPs) provide natural uncertainty quantification (UQ), while deep neural networks (DNNs) excel at representation learning. Deterministic UQ methods for neural networks have successfully combined the two and require only a single pass through the neural network. However, current methods necessitate changes to network training to address feature collapse, where unique inputs map to identical feature vectors. We propose an alternative solution, the deep Vecchia ensemble (DVE), which allows deterministic UQ to work in the presence of feature collapse, negating the need for network retraining. DVE comprises an ensemble of GPs built on hidden-layer outputs of a DNN, achieving scalability via Vecchia approximations that leverage nearest-neighbor conditional independence. DVE is compatible with pretrained networks and incurs low computational overhead. We demonstrate DVE's utility on several datasets and carry out experiments to understand the inner workings of the proposed method.
[ { "version": "v1", "created": "Fri, 26 May 2023 16:19:26 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 16:50:47 GMT" } ]
2025-03-17T00:00:00
[ [ "Jimenez", "Felix", "" ], [ "Katzfuss", "Matthias", "" ] ]
TITLE: Vecchia Gaussian Process Ensembles on Internal Representations of Deep Neural Networks ABSTRACT: For regression tasks, standard Gaussian processes (GPs) provide natural uncertainty quantification (UQ), while deep neural networks (DNNs) excel at representation learning. Deterministic UQ methods for neural networks have successfully combined the two and require only a single pass through the neural network. However, current methods necessitate changes to network training to address feature collapse, where unique inputs map to identical feature vectors. We propose an alternative solution, the deep Vecchia ensemble (DVE), which allows deterministic UQ to work in the presence of feature collapse, negating the need for network retraining. DVE comprises an ensemble of GPs built on hidden-layer outputs of a DNN, achieving scalability via Vecchia approximations that leverage nearest-neighbor conditional independence. DVE is compatible with pretrained networks and incurs low computational overhead. We demonstrate DVE's utility on several datasets and carry out experiments to understand the inner workings of the proposed method.
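A toy sketch of the ensemble idea, simplifying the Vecchia approximation to an exact GP on the k nearest training features per layer; the names and the mixture-style combination rule are ours, not the authors' API.

```python
# Sketch: per hidden layer, predict at a test point by conditioning a GP only
# on the k nearest training features (nearest-neighbor conditional
# independence, as in Vecchia approximations), then ensemble across layers.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nn_gp_predict(F_train, y_train, F_test, k=32, ls=1.0, noise=1e-2):
    """GP mean/variance at F_test conditioning only on k nearest neighbors."""
    nbrs_index = NearestNeighbors(n_neighbors=k).fit(F_train)
    idx = nbrs_index.kneighbors(F_test, return_distance=False)
    mean, var = np.empty(len(F_test)), np.empty(len(F_test))
    for i, nbrs in enumerate(idx):
        Xn, yn = F_train[nbrs], y_train[nbrs]
        # RBF kernel among neighbors and between neighbors and the test point
        K = np.exp(-((Xn[:, None] - Xn[None]) ** 2).sum(-1) / (2 * ls ** 2))
        ks = np.exp(-((Xn - F_test[i]) ** 2).sum(-1) / (2 * ls ** 2))
        A = np.linalg.solve(K + noise * np.eye(k), np.column_stack([yn, ks]))
        mean[i] = ks @ A[:, 0]
        var[i] = 1.0 + noise - ks @ A[:, 1]
    return mean, var

def dve_predict(layer_feats_train, y, layer_feats_test):
    """Ensemble over hidden layers via a simple Gaussian-mixture combination."""
    preds = [nn_gp_predict(Ftr, y, Fte)
             for Ftr, Fte in zip(layer_feats_train, layer_feats_test)]
    means = np.stack([m for m, _ in preds])
    vars_ = np.stack([v for _, v in preds])
    return means.mean(0), (vars_ + means ** 2).mean(0) - means.mean(0) ** 2
```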
2307.09210
Nathaniel Josephs
Nathaniel Josephs, Arash A. Amini, Marina Paez, and Lizhen Lin
Nested stochastic block model for simultaneously clustering networks and nodes
null
null
null
null
stat.ME cs.SI stat.ML
http://creativecommons.org/licenses/by/4.0/
We introduce the nested stochastic block model (NSBM) to cluster a collection of networks while simultaneously detecting communities within each network. NSBM has several appealing features including the ability to work on unlabeled networks with potentially different node sets, the flexibility to model heterogeneous communities, and the means to automatically select the number of classes for the networks and the number of communities within each network. This is accomplished via a Bayesian model, with a novel application of the nested Dirichlet process (NDP) as a prior to jointly model the between-network and within-network clusters. The dependency introduced by the network data creates nontrivial challenges for the NDP, especially in the development of efficient samplers. For posterior inference, we propose several Markov chain Monte Carlo algorithms including a standard Gibbs sampler, a collapsed Gibbs sampler, and two blocked Gibbs samplers that ultimately return two levels of clustering labels from both within and across the networks. Extensive simulation studies are carried out which demonstrate that the model provides very accurate estimates of both levels of the clustering structure. We also apply our model to two social network datasets that cannot be analyzed using any previous method in the literature due to the anonymity of the nodes and the varying number of nodes in each network.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 12:46:34 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 13:40:37 GMT" } ]
2025-03-17T00:00:00
[ [ "Josephs", "Nathaniel", "" ], [ "Amini", "Arash A.", "" ], [ "Paez", "Marina", "" ], [ "Lin", "Lizhen", "" ] ]
TITLE: Nested stochastic block model for simultaneously clustering networks and nodes ABSTRACT: We introduce the nested stochastic block model (NSBM) to cluster a collection of networks while simultaneously detecting communities within each network. NSBM has several appealing features including the ability to work on unlabeled networks with potentially different node sets, the flexibility to model heterogeneous communities, and the means to automatically select the number of classes for the networks and the number of communities within each network. This is accomplished via a Bayesian model, with a novel application of the nested Dirichlet process (NDP) as a prior to jointly model the between-network and within-network clusters. The dependency introduced by the network data creates nontrivial challenges for the NDP, especially in the development of efficient samplers. For posterior inference, we propose several Markov chain Monte Carlo algorithms including a standard Gibbs sampler, a collapsed Gibbs sampler, and two blocked Gibbs samplers that ultimately return two levels of clustering labels from both within and across the networks. Extensive simulation studies are carried out which demonstrate that the model provides very accurate estimates of both levels of the clustering structure. We also apply our model to two social network datasets that cannot be analyzed using any previous method in the literature due to the anonymity of the nodes and the varying number of nodes in each network.
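The nested Dirichlet process prior at the heart of NSBM can be illustrated with truncated stick-breaking: a top-level measure clusters networks into classes, and each class carries its own measure over node communities. The sketch below is a toy prior draw, not the posterior samplers the paper develops; the truncation level and concentration parameters are arbitrary.

```python
# Minimal sketch: a truncated nested Dirichlet process prior draw.
import numpy as np

def stick_breaking(alpha, K, rng):
    """Truncated stick-breaking weights for a DP with concentration alpha."""
    betas = rng.beta(1, alpha, size=K)
    w = betas * np.concatenate([[1.0], np.cumprod(1 - betas[:-1])])
    return w / w.sum()

rng = np.random.default_rng(1)
network_class_w = stick_breaking(1.0, K=10, rng=rng)             # between networks
community_w = [stick_breaking(1.0, 10, rng) for _ in range(10)]  # within each class

net_class = rng.choice(10, p=network_class_w)        # cluster a whole network
node_comm = rng.choice(10, p=community_w[net_class]) # cluster a node inside it
print(net_class, node_comm)
```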
2308.07462
Javier Conde
Pedro Reviriego, Javier Conde, Elena Merino-G\'omez, Gonzalo Mart\'inez, Jos\'e Alberto Hern\'andez
Playing with words: Comparing the vocabulary and lexical diversity of ChatGPT and humans
null
Machine Learning with Applications Volume 18, December 2024, 100602
10.1016/j.mlwa.2024.100602
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The introduction of Artificial Intelligence (AI) generative language models such as GPT (Generative Pre-trained Transformer) and tools such as ChatGPT has triggered a revolution that can transform how text is generated. This has many implications: for example, as AI-generated text becomes a significant fraction of all text, would this have an effect on the language capabilities of readers and also on the training of newer AI tools? Would it affect the evolution of languages? Focusing on one specific aspect of language, namely words: will the use of tools such as ChatGPT increase or reduce the vocabulary used or the lexical richness? This has implications for words, as those not included in AI-generated content will tend to become less and less popular and may eventually be lost. In this work, we perform an initial comparison of the vocabulary and lexical richness of ChatGPT and humans when performing the same tasks. In more detail, we use two datasets containing the answers to different types of questions answered by ChatGPT and humans, and a third dataset in which ChatGPT paraphrases sentences and questions. The analysis shows that ChatGPT tends to use fewer distinct words and lower lexical richness than humans. These results are very preliminary, and additional datasets and ChatGPT configurations have to be evaluated to extract more general conclusions. Therefore, further research is needed to understand how the use of ChatGPT and, more broadly, generative AI tools will affect the vocabulary and lexical richness of different types of text and languages.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 21:19:44 GMT" }, { "version": "v2", "created": "Thu, 31 Aug 2023 11:09:16 GMT" }, { "version": "v3", "created": "Fri, 14 Mar 2025 16:19:46 GMT" } ]
2025-03-17T00:00:00
[ [ "Reviriego", "Pedro", "" ], [ "Conde", "Javier", "" ], [ "Merino-Gómez", "Elena", "" ], [ "Martínez", "Gonzalo", "" ], [ "Hernández", "José Alberto", "" ] ]
TITLE: Playing with words: Comparing the vocabulary and lexical diversity of ChatGPT and humans ABSTRACT: The introduction of Artificial Intelligence (AI) generative language models such as GPT (Generative Pre-trained Transformer) and tools such as ChatGPT has triggered a revolution that can transform how text is generated. This has many implications: for example, as AI-generated text becomes a significant fraction of all text, would this have an effect on the language capabilities of readers and also on the training of newer AI tools? Would it affect the evolution of languages? Focusing on one specific aspect of language, namely words: will the use of tools such as ChatGPT increase or reduce the vocabulary used or the lexical richness? This has implications for words, as those not included in AI-generated content will tend to become less and less popular and may eventually be lost. In this work, we perform an initial comparison of the vocabulary and lexical richness of ChatGPT and humans when performing the same tasks. In more detail, we use two datasets containing the answers to different types of questions answered by ChatGPT and humans, and a third dataset in which ChatGPT paraphrases sentences and questions. The analysis shows that ChatGPT tends to use fewer distinct words and lower lexical richness than humans. These results are very preliminary, and additional datasets and ChatGPT configurations have to be evaluated to extract more general conclusions. Therefore, further research is needed to understand how the use of ChatGPT and, more broadly, generative AI tools will affect the vocabulary and lexical richness of different types of text and languages.
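A minimal version of the lexical-richness comparison described above can be computed with distinct-word counts and the type-token ratio; the study's exact metrics may differ, and TTR is shown purely for illustration.

```python
# Minimal sketch: distinct words and type-token ratio for two texts.
def lexical_stats(text):
    tokens = text.lower().split()
    types = set(tokens)
    return {"tokens": len(tokens),
            "types": len(types),
            "ttr": len(types) / max(len(tokens), 1)}

human = "the quick brown fox jumps over the lazy dog near the riverbank"
chatgpt = "the fox jumps over the dog and the fox jumps over the dog again"
print(lexical_stats(human))    # higher TTR -> richer vocabulary
print(lexical_stats(chatgpt))  # more repetition -> lower TTR
```

Note that plain TTR is length-sensitive, which is why such studies typically also report length-robust measures when comparing corpora of different sizes.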
2309.07068
Tongkun Liu
Tongkun Liu, Bing Li, Xiao Du, Bingke Jiang, Leqi Geng, Feiyang Wang, Zhuo Zhao
FAIR: Frequency-aware Image Restoration for Industrial Visual Anomaly Detection
12 pages, 10 figures
null
10.1016/j.aei.2024.103064
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image reconstruction-based anomaly detection models are widely explored in industrial visual inspection. However, existing models usually suffer from a trade-off between normal reconstruction fidelity and abnormal reconstruction distinguishability, which degrades performance. In this paper, we find that this trade-off can be better mitigated by leveraging the distinct frequency biases between normal and abnormal reconstruction errors. To this end, we propose Frequency-aware Image Restoration (FAIR), a novel self-supervised image restoration task that restores images from their high-frequency components. It enables precise reconstruction of normal patterns while mitigating unfavorable generalization to anomalies. Using only a simple vanilla UNet, FAIR achieves state-of-the-art performance with higher efficiency on various defect detection datasets. Code: https://github.com/liutongkun/FAIR.
[ { "version": "v1", "created": "Wed, 13 Sep 2023 16:28:43 GMT" } ]
2025-03-17T00:00:00
[ [ "Liu", "Tongkun", "" ], [ "Li", "Bing", "" ], [ "Du", "Xiao", "" ], [ "Jiang", "Bingke", "" ], [ "Geng", "Leqi", "" ], [ "Wang", "Feiyang", "" ], [ "Zhao", "Zhuo", "" ] ]
TITLE: FAIR: Frequency-aware Image Restoration for Industrial Visual Anomaly Detection ABSTRACT: Image reconstruction-based anomaly detection models are widely explored in industrial visual inspection. However, existing models usually suffer from a trade-off between normal reconstruction fidelity and abnormal reconstruction distinguishability, which degrades performance. In this paper, we find that this trade-off can be better mitigated by leveraging the distinct frequency biases between normal and abnormal reconstruction errors. To this end, we propose Frequency-aware Image Restoration (FAIR), a novel self-supervised image restoration task that restores images from their high-frequency components. It enables precise reconstruction of normal patterns while mitigating unfavorable generalization to anomalies. Using only a simple vanilla UNet, FAIR achieves state-of-the-art performance with higher efficiency on various defect detection datasets. Code: https://github.com/liutongkun/FAIR.
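The frequency decomposition underlying the restoration task can be sketched as an image minus its Gaussian low-pass version, the residual from which the network must restore the original; the sigma value and toy data below are assumptions, and the trained UNet is replaced by a comment.

```python
# Minimal sketch: high-frequency input for a restoration-based anomaly score.
import numpy as np
from scipy.ndimage import gaussian_filter

def high_frequency(img, sigma=2.0):
    """High-frequency residual: image minus its low-pass (Gaussian) version."""
    return img - gaussian_filter(img, sigma)

rng = np.random.default_rng(0)
normal = gaussian_filter(rng.normal(size=(64, 64)), 3)   # smooth "texture"
defect = normal.copy()
defect[30:34, 30:34] += 2.0                              # small synthetic anomaly

hf = high_frequency(defect)
# A restoration network trained on normal data would reconstruct normal content
# from hf but fail on unseen defects, making per-pixel error the anomaly score.
print(np.abs(hf)[30:34, 30:34].mean(), np.abs(hf).mean())
```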
2310.01035
Hu Wang
Hu Wang, Congbo Ma, Jianpeng Zhang, Yuan Zhang, Jodie Avery, Louise Hull, Gustavo Carneiro
Learnable Cross-modal Knowledge Distillation for Multi-modal Learning with Missing Modality
null
Medical Image Computing and Computer-Assisted Intervention 2023 (MICCAI 2023)
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
The problem of missing modalities is both critical and non-trivial to handle in multi-modal models. It is common in multi-modal tasks that certain modalities contribute more than others, and if those important modalities are missing, model performance drops significantly. This fact remains unexplored by current multi-modal approaches, which recover the representation of missing modalities by feature reconstruction or blind feature aggregation from other modalities, instead of extracting useful information from the best-performing modalities. In this paper, we propose a Learnable Cross-modal Knowledge Distillation (LCKD) model to adaptively identify important modalities and distil knowledge from them to help other modalities from the cross-modal perspective, thereby addressing the missing modality issue. Our approach introduces a teacher election procedure to select the most ``qualified'' teachers based on their single-modality performance on certain tasks. Then, cross-modal knowledge distillation is performed between teacher and student modalities for each task to push the model parameters to a point that is beneficial for all tasks. Hence, even if the teacher modalities for certain tasks are missing during testing, the available student modalities can accomplish the task well enough based on the knowledge learned from their automatically elected teacher modalities. Experiments on the Brain Tumour Segmentation Dataset 2018 (BraTS2018) show that LCKD outperforms other methods by a considerable margin, improving the state-of-the-art performance by 3.61% for enhancing tumour, 5.99% for tumour core, and 3.76% for whole tumour in terms of segmentation Dice score.
[ { "version": "v1", "created": "Mon, 2 Oct 2023 09:24:54 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 07:53:09 GMT" } ]
2025-03-17T00:00:00
[ [ "Wang", "Hu", "" ], [ "Ma", "Congbo", "" ], [ "Zhang", "Jianpeng", "" ], [ "Zhang", "Yuan", "" ], [ "Avery", "Jodie", "" ], [ "Hull", "Louise", "" ], [ "Carneiro", "Gustavo", "" ] ]
TITLE: Learnable Cross-modal Knowledge Distillation for Multi-modal Learning with Missing Modality ABSTRACT: The problem of missing modalities is both critical and non-trivial to handle in multi-modal models. It is common in multi-modal tasks that certain modalities contribute more than others, and if those important modalities are missing, model performance drops significantly. This fact remains unexplored by current multi-modal approaches, which recover the representation of missing modalities by feature reconstruction or blind feature aggregation from other modalities, instead of extracting useful information from the best-performing modalities. In this paper, we propose a Learnable Cross-modal Knowledge Distillation (LCKD) model to adaptively identify important modalities and distil knowledge from them to help other modalities from the cross-modal perspective, thereby addressing the missing modality issue. Our approach introduces a teacher election procedure to select the most ``qualified'' teachers based on their single-modality performance on certain tasks. Then, cross-modal knowledge distillation is performed between teacher and student modalities for each task to push the model parameters to a point that is beneficial for all tasks. Hence, even if the teacher modalities for certain tasks are missing during testing, the available student modalities can accomplish the task well enough based on the knowledge learned from their automatically elected teacher modalities. Experiments on the Brain Tumour Segmentation Dataset 2018 (BraTS2018) show that LCKD outperforms other methods by a considerable margin, improving the state-of-the-art performance by 3.61% for enhancing tumour, 5.99% for tumour core, and 3.76% for whole tumour in terms of segmentation Dice score.
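Two ingredients of LCKD as the abstract describes them, teacher election and cross-modal distillation, can be sketched as follows. The modality names, validation scores, and temperature are hypothetical, and the distillation term shown is a standard temperature-scaled KL rather than the paper's exact formulation.

```python
# Minimal sketch: elect the best single-modality teacher, then distil logits.
import torch
import torch.nn.functional as F

def elect_teacher(single_modality_scores):
    """Pick the modality with the best validation score for a task."""
    return max(single_modality_scores, key=single_modality_scores.get)

def cross_modal_kd_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled KL distillation between two modalities' logits."""
    p_t = F.softmax(teacher_logits / T, dim=-1)
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T

scores = {"t1ce": 0.82, "flair": 0.78, "t2": 0.74}   # hypothetical Dice scores
teacher = elect_teacher(scores)                       # -> "t1ce"
loss = cross_modal_kd_loss(torch.randn(4, 3), torch.randn(4, 3))
print(teacher, float(loss))
```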
2310.14560
Hui Tian
Hui Tian, Kai Xu
Polyhedral Surface: Self-supervised Point Cloud Reconstruction Based on Polyhedral Surface
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Point cloud reconstruction from raw point clouds has been an important topic in computer graphics for decades, especially due to its high demand in modeling and rendering applications. An important way to solve this problem is to establish a local geometry that fits the local curve. However, previous methods build either a local plane or a polynomial curve. Local planes lose sharp features and produce boundary artefacts on open surfaces, while polynomial curves are hard to combine with neural networks due to the local-coordinate consistency problem. To address this, we propose a novel polyhedral surface to represent the local surface. This representation is more flexible for capturing sharp features and surface boundaries on open surfaces, and it does not require any local coordinate system, which is important when introducing neural networks. Specifically, we use normals to construct the polyhedral surface, including both dihedral and trihedral surfaces using 2 and 3 normals, respectively. Our method achieves state-of-the-art results on three commonly used datasets (ShapeNetCore, ABC, and ScanNet). Code will be released upon acceptance.
[ { "version": "v1", "created": "Mon, 23 Oct 2023 04:24:31 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 12:25:32 GMT" } ]
2025-03-17T00:00:00
[ [ "Tian", "Hui", "" ], [ "Xu", "Kai", "" ] ]
TITLE: Polyhedral Surface: Self-supervised Point Cloud Reconstruction Based on Polyhedral Surface ABSTRACT: Point cloud reconstruction from raw point clouds has been an important topic in computer graphics for decades, especially due to its high demand in modeling and rendering applications. An important way to solve this problem is to establish a local geometry that fits the local curve. However, previous methods build either a local plane or a polynomial curve. Local planes lose sharp features and produce boundary artefacts on open surfaces, while polynomial curves are hard to combine with neural networks due to the local-coordinate consistency problem. To address this, we propose a novel polyhedral surface to represent the local surface. This representation is more flexible for capturing sharp features and surface boundaries on open surfaces, and it does not require any local coordinate system, which is important when introducing neural networks. Specifically, we use normals to construct the polyhedral surface, including both dihedral and trihedral surfaces using 2 and 3 normals, respectively. Our method achieves state-of-the-art results on three commonly used datasets (ShapeNetCore, ABC, and ScanNet). Code will be released upon acceptance.
2310.20335
Gonzalo Contreras-Aso
Gonzalo Contreras-Aso, Cristian P\'erez-Corral, Miguel Romance
Uplifting edges in higher order networks: spectral centralities for non-uniform hypergraphs
29 pages, 8 figures
null
10.3934/math.20241539
null
math.SP math-ph math.MP physics.comp-ph physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Spectral analysis of networks states that many structural properties of graphs, such as centrality of their nodes, are given in terms of their adjacency matrices. The natural extension of such spectral analysis to higher order networks is strongly limited by the fact that a given hypergraph could have several different adjacency hypermatrices, hence the results obtained so far are mainly restricted to the class of uniform hypergraphs, which leaves many real systems unattended. A new method for analysing non-linear eigenvector-like centrality measures of non-uniform hypergraphs is presented in this paper that could be useful for studying properties of $\mathcal{H}$-eigenvectors and $\mathcal{Z}$-eigenvectors in the non-uniform case. In order to do so, a new operation - the $\textit{uplift}$ - is introduced, incorporating auxiliary nodes in the hypergraph to allow for a uniform-like analysis. We later argue why this is a mathematically sound operation, and we furthermore use it to classify a whole family of hypergraphs with unique Perron-like $\mathcal{Z}$-eigenvectors. We supplement the theoretical analysis with several examples and numerical simulations on synthetic and real datasets.
[ { "version": "v1", "created": "Tue, 31 Oct 2023 10:21:58 GMT" }, { "version": "v2", "created": "Mon, 15 Jul 2024 10:15:46 GMT" } ]
2025-03-17T00:00:00
[ [ "Contreras-Aso", "Gonzalo", "" ], [ "Pérez-Corral", "Cristian", "" ], [ "Romance", "Miguel", "" ] ]
TITLE: Uplifting edges in higher order networks: spectral centralities for non-uniform hypergraphs ABSTRACT: Spectral analysis of networks states that many structural properties of graphs, such as centrality of their nodes, are given in terms of their adjacency matrices. The natural extension of such spectral analysis to higher order networks is strongly limited by the fact that a given hypergraph could have several different adjacency hypermatrices, hence the results obtained so far are mainly restricted to the class of uniform hypergraphs, which leaves many real systems unattended. A new method for analysing non-linear eigenvector-like centrality measures of non-uniform hypergraphs is presented in this paper that could be useful for studying properties of $\mathcal{H}$-eigenvectors and $\mathcal{Z}$-eigenvectors in the non-uniform case. In order to do so, a new operation - the $\textit{uplift}$ - is introduced, incorporating auxiliary nodes in the hypergraph to allow for a uniform-like analysis. We later argue why this is a mathematically sound operation, and we furthermore use it to classify a whole family of hypergraphs with unique Perron-like $\mathcal{Z}$-eigenvectors. We supplement the theoretical analysis with several examples and numerical simulations on synthetic and real datasets.
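One plausible reading of the uplift operation is padding every hyperedge with fresh auxiliary nodes until all edges reach the maximum cardinality, which yields a uniform hypergraph with a well-defined adjacency hypermatrix. Whether auxiliary nodes are shared or fresh per slot is an assumption in this sketch, not a claim about the paper's exact construction.

```python
# Minimal sketch: uplift a non-uniform hypergraph to a uniform one by padding
# each hyperedge with auxiliary nodes up to the maximum edge cardinality m.
def uplift(hyperedges):
    m = max(len(e) for e in hyperedges)
    uplifted, aux = [], 0
    for e in hyperedges:
        e = list(e)
        while len(e) < m:
            e.append(f"aux_{aux}")   # fresh auxiliary node per missing slot
            aux += 1
        uplifted.append(tuple(e))
    return uplifted

H = [("a", "b"), ("a", "b", "c"), ("b", "c", "d", "e")]
print(uplift(H))
# [('a','b','aux_0','aux_1'), ('a','b','c','aux_2'), ('b','c','d','e')]
```

On the resulting m-uniform hypergraph, eigenvector-like centralities can be computed with the usual adjacency hypermatrix machinery.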
2311.05029
Christian Wilms
Robert Johanson and Christian Wilms and Ole Johannsen and Simone Frintrop
S$^3$AD: Semi-supervised Small Apple Detection in Orchard Environments
Accepted at WACV 2024. The code and the dataset MAD are available at http://www.inf.uni-hamburg.de/mad
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Crop detection is integral for precision agriculture applications such as automated yield estimation or fruit picking. However, crop detection, e.g., apple detection in orchard environments, remains challenging due to a lack of large-scale datasets and the small relative size of the crops in the image. In this work, we address these challenges by reformulating the apple detection task in a semi-supervised manner. To this end, we provide the large, high-resolution dataset MAD comprising 105 labeled images with 14,667 annotated apple instances and 4,440 unlabeled images. Utilizing this dataset, we also propose a novel Semi-Supervised Small Apple Detection system S$^3$AD based on contextual attention and selective tiling to improve the challenging detection of small apples, while limiting the computational overhead. We conduct an extensive evaluation on MAD and the MSU dataset, showing that S$^3$AD substantially outperforms strong fully-supervised baselines, including several small object detection systems, by up to $14.9\%$. Additionally, we exploit the detailed annotations of our dataset w.r.t. apple properties to analyze the influence of relative size or level of occlusion on the results of various systems, quantifying current challenges.
[ { "version": "v1", "created": "Wed, 8 Nov 2023 21:25:27 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 09:10:28 GMT" } ]
2025-03-17T00:00:00
[ [ "Johanson", "Robert", "" ], [ "Wilms", "Christian", "" ], [ "Johannsen", "Ole", "" ], [ "Frintrop", "Simone", "" ] ]
TITLE: S$^3$AD: Semi-supervised Small Apple Detection in Orchard Environments ABSTRACT: Crop detection is integral for precision agriculture applications such as automated yield estimation or fruit picking. However, crop detection, e.g., apple detection in orchard environments, remains challenging due to a lack of large-scale datasets and the small relative size of the crops in the image. In this work, we address these challenges by reformulating the apple detection task in a semi-supervised manner. To this end, we provide the large, high-resolution dataset MAD comprising 105 labeled images with 14,667 annotated apple instances and 4,440 unlabeled images. Utilizing this dataset, we also propose a novel Semi-Supervised Small Apple Detection system S$^3$AD based on contextual attention and selective tiling to improve the challenging detection of small apples, while limiting the computational overhead. We conduct an extensive evaluation on MAD and the MSU dataset, showing that S$^3$AD substantially outperforms strong fully-supervised baselines, including several small object detection systems, by up to $14.9\%$. Additionally, we exploit the detailed annotations of our dataset w.r.t. apple properties to analyze the influence of relative size or level of occlusion on the results of various systems, quantifying current challenges.
2312.04584
Yiming Li
Mingyan Zhu, Yiming Li, Junfeng Guo, Tao Wei, Shu-Tao Xia, Zhan Qin
Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger
This paper is accepted by IEEE Transactions on Dependable and Secure Computing (TDSC), 2025. The first two authors contributed equally to this work. 14 pages
null
null
null
cs.CR cs.AI cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Currently, sample-specific backdoor attacks (SSBAs) are the most advanced and malicious methods since they can easily circumvent most of the current backdoor defenses. In this paper, we reveal that SSBAs are not sufficiently stealthy due to their poisoned-label nature, where users can discover anomalies if they check the image-label relationship. In particular, we demonstrate that it is ineffective to directly generalize existing SSBAs to their clean-label variants by poisoning samples solely from the target class. We reveal that this is primarily due to two reasons: \textbf{(1)} the `antagonistic effects' of ground-truth features and \textbf{(2)} the learning difficulty of sample-specific features. Accordingly, trigger-related features of existing SSBAs cannot be effectively learned under the clean-label setting due to the mild trigger intensity required for ensuring stealthiness. We argue that the intensity constraint of existing SSBAs exists mostly because their trigger patterns are `content-irrelevant' and therefore act as `noise' for both humans and DNNs. Motivated by this understanding, we propose to exploit content-relevant features, $a.k.a.$ (human-relied) attributes, as the trigger patterns to design clean-label SSBAs. This new attack paradigm is dubbed backdoor attack with attribute trigger (BAAT). Extensive experiments are conducted on benchmark datasets, which verify the effectiveness of our BAAT and its resistance to existing defenses.
[ { "version": "v1", "created": "Sun, 3 Dec 2023 09:12:14 GMT" }, { "version": "v2", "created": "Mon, 11 Dec 2023 03:12:36 GMT" }, { "version": "v3", "created": "Fri, 14 Mar 2025 13:36:51 GMT" } ]
2025-03-17T00:00:00
[ [ "Zhu", "Mingyan", "" ], [ "Li", "Yiming", "" ], [ "Guo", "Junfeng", "" ], [ "Wei", "Tao", "" ], [ "Xia", "Shu-Tao", "" ], [ "Qin", "Zhan", "" ] ]
TITLE: Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger ABSTRACT: Currently, sample-specific backdoor attacks (SSBAs) are the most advanced and malicious methods since they can easily circumvent most of the current backdoor defenses. In this paper, we reveal that SSBAs are not sufficiently stealthy due to their poisoned-label nature, where users can discover anomalies if they check the image-label relationship. In particular, we demonstrate that it is ineffective to directly generalize existing SSBAs to their clean-label variants by poisoning samples solely from the target class. We reveal that this is primarily due to two reasons: \textbf{(1)} the `antagonistic effects' of ground-truth features and \textbf{(2)} the learning difficulty of sample-specific features. Accordingly, trigger-related features of existing SSBAs cannot be effectively learned under the clean-label setting due to the mild trigger intensity required for ensuring stealthiness. We argue that the intensity constraint of existing SSBAs exists mostly because their trigger patterns are `content-irrelevant' and therefore act as `noise' for both humans and DNNs. Motivated by this understanding, we propose to exploit content-relevant features, $a.k.a.$ (human-relied) attributes, as the trigger patterns to design clean-label SSBAs. This new attack paradigm is dubbed backdoor attack with attribute trigger (BAAT). Extensive experiments are conducted on benchmark datasets, which verify the effectiveness of our BAAT and its resistance to existing defenses.
2312.05327
Ribana Roscher
Ribana Roscher and Marc Ru{\ss}wurm and Caroline Gevaert and Michael Kampffmeyer and Jefersson A. dos Santos and Maria Vakalopoulou and Ronny H\"ansch and Stine Hansen and Keiller Nogueira and Jonathan Prexl and Devis Tuia
Better, Not Just More: Data-Centric Machine Learning for Earth Observation
null
IEEE Geoscience and Remote Sensing Magazine, vol. 12, no. 4, pp. 335-355, Dec. 2024
10.1109/MGRS.2024.3470986
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Recent developments and research in modern machine learning have led to substantial improvements in the geospatial field. Although numerous deep learning architectures and models have been proposed, the majority of them have been solely developed on benchmark datasets that lack strong real-world relevance. Furthermore, the performance of many methods has already saturated on these datasets. We argue that a shift from a model-centric view to a complementary data-centric perspective is necessary for further improvements in accuracy, generalization ability, and real impact on end-user applications. Furthermore, considering the entire machine learning cycle, from problem definition to model deployment with feedback, is crucial for enhancing machine learning models that can be reliable in unforeseen situations. This work presents a definition as well as a precise categorization and overview of automated data-centric learning approaches for geospatial data. It highlights the complementary role of data-centric learning with respect to model-centric learning in the larger machine learning deployment cycle. We review papers across the entire geospatial field and categorize them into different groups. A set of representative experiments shows concrete implementation examples. These examples provide concrete steps to act on geospatial data with data-centric machine learning approaches.
[ { "version": "v1", "created": "Fri, 8 Dec 2023 19:24:05 GMT" }, { "version": "v2", "created": "Sat, 22 Jun 2024 16:29:56 GMT" }, { "version": "v3", "created": "Tue, 5 Nov 2024 14:12:40 GMT" }, { "version": "v4", "created": "Thu, 13 Mar 2025 20:09:55 GMT" } ]
2025-03-17T00:00:00
[ [ "Roscher", "Ribana", "" ], [ "Rußwurm", "Marc", "" ], [ "Gevaert", "Caroline", "" ], [ "Kampffmeyer", "Michael", "" ], [ "Santos", "Jefersson A. dos", "" ], [ "Vakalopoulou", "Maria", "" ], [ "Hänsch", "Ronny", "" ], [ "Hansen", "Stine", "" ], [ "Nogueira", "Keiller", "" ], [ "Prexl", "Jonathan", "" ], [ "Tuia", "Devis", "" ] ]
TITLE: Better, Not Just More: Data-Centric Machine Learning for Earth Observation ABSTRACT: Recent developments and research in modern machine learning have led to substantial improvements in the geospatial field. Although numerous deep learning architectures and models have been proposed, the majority of them have been solely developed on benchmark datasets that lack strong real-world relevance. Furthermore, the performance of many methods has already saturated on these datasets. We argue that a shift from a model-centric view to a complementary data-centric perspective is necessary for further improvements in accuracy, generalization ability, and real impact on end-user applications. Furthermore, considering the entire machine learning cycle, from problem definition to model deployment with feedback, is crucial for enhancing machine learning models that can be reliable in unforeseen situations. This work presents a definition as well as a precise categorization and overview of automated data-centric learning approaches for geospatial data. It highlights the complementary role of data-centric learning with respect to model-centric learning in the larger machine learning deployment cycle. We review papers across the entire geospatial field and categorize them into different groups. A set of representative experiments shows concrete implementation examples. These examples provide concrete steps to act on geospatial data with data-centric machine learning approaches.
2312.15268
Kaichen Zhou
Kaichen Zhou, Jia-Wang Bian, Jian-Qing Zheng, Jiaxing Zhong, Qian Xie, Niki Trigoni, Andrew Markham
Manydepth2: Motion-Aware Self-Supervised Monocular Depth Estimation in Dynamic Scenes
Monocular Depth Estimation, Self-Supervised, Optical Flow
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite advancements in self-supervised monocular depth estimation, challenges persist in dynamic scenarios due to the dependence on assumptions about a static world. In this paper, we present Manydepth2, which achieves precise depth estimation for both dynamic objects and static backgrounds, all while maintaining computational efficiency. To tackle the challenges posed by dynamic content, we incorporate optical flow and coarse monocular depth to create a pseudo-static reference frame. This frame is then utilized to build a motion-aware cost volume in collaboration with the vanilla target frame. Furthermore, to improve the accuracy and robustness of the network architecture, we propose an attention-based depth network that effectively integrates information from feature maps at different resolutions by incorporating both channel and non-local attention mechanisms. Compared to methods with similar computational costs, Manydepth2 achieves a significant reduction of approximately five percent in root-mean-square error for self-supervised monocular depth estimation on the KITTI-2015 dataset. The code can be found at https://github.com/kaichen-z/Manydepth2.
[ { "version": "v1", "created": "Sat, 23 Dec 2023 14:36:27 GMT" }, { "version": "v2", "created": "Mon, 16 Sep 2024 17:45:13 GMT" }, { "version": "v3", "created": "Sat, 21 Sep 2024 17:46:31 GMT" }, { "version": "v4", "created": "Thu, 26 Sep 2024 15:10:58 GMT" }, { "version": "v5", "created": "Sun, 29 Sep 2024 01:16:33 GMT" }, { "version": "v6", "created": "Fri, 11 Oct 2024 23:24:26 GMT" }, { "version": "v7", "created": "Sat, 18 Jan 2025 16:16:10 GMT" }, { "version": "v8", "created": "Tue, 28 Jan 2025 01:18:32 GMT" }, { "version": "v9", "created": "Thu, 13 Mar 2025 19:28:00 GMT" } ]
2025-03-17T00:00:00
[ [ "Zhou", "Kaichen", "" ], [ "Bian", "Jia-Wang", "" ], [ "Zheng", "Jian-Qing", "" ], [ "Zhong", "Jiaxing", "" ], [ "Xie", "Qian", "" ], [ "Trigoni", "Niki", "" ], [ "Markham", "Andrew", "" ] ]
TITLE: Manydepth2: Motion-Aware Self-Supervised Monocular Depth Estimation in Dynamic Scenes ABSTRACT: Despite advancements in self-supervised monocular depth estimation, challenges persist in dynamic scenarios due to the dependence on assumptions about a static world. In this paper, we present Manydepth2, which achieves precise depth estimation for both dynamic objects and static backgrounds, all while maintaining computational efficiency. To tackle the challenges posed by dynamic content, we incorporate optical flow and coarse monocular depth to create a pseudo-static reference frame. This frame is then utilized to build a motion-aware cost volume in collaboration with the vanilla target frame. Furthermore, to improve the accuracy and robustness of the network architecture, we propose an attention-based depth network that effectively integrates information from feature maps at different resolutions by incorporating both channel and non-local attention mechanisms. Compared to methods with similar computational costs, Manydepth2 achieves a significant reduction of approximately five percent in root-mean-square error for self-supervised monocular depth estimation on the KITTI-2015 dataset. The code can be found at https://github.com/kaichen-z/Manydepth2.
2402.07594
Ramses Sanchez
Patrick Seifner, Kostadin Cvejoski, Antonia K\"orner, Rams\'es J. S\'anchez
Zero-shot Imputation with Foundation Inference Models for Dynamical Systems
null
null
null
null
cs.LG math.DS
http://creativecommons.org/licenses/by/4.0/
Dynamical systems governed by ordinary differential equations (ODEs) serve as models for a vast number of natural and social phenomena. In this work, we offer a fresh perspective on the classical problem of imputing missing time series data, whose underlying dynamics are assumed to be determined by ODEs. Specifically, we revisit ideas from amortized inference and neural operators, and propose a novel supervised learning framework for zero-shot time series imputation, through parametric functions satisfying some (hidden) ODEs. Our proposal consists of two components. First, a broad probability distribution over the space of ODE solutions, observation times and noise mechanisms, with which we generate a large, synthetic dataset of (hidden) ODE solutions, along with their noisy and sparse observations. Second, a neural recognition model that is trained offline, to map the generated time series onto the spaces of initial conditions and time derivatives of the (hidden) ODE solutions, which we then integrate to impute the missing data. We empirically demonstrate that one and the same (pretrained) recognition model can perform zero-shot imputation across 63 distinct time series with missing values, each sampled from widely different dynamical systems. Likewise, we demonstrate that it can perform zero-shot imputation of missing high-dimensional data in 10 vastly different settings, spanning human motion, air quality, traffic and electricity studies, as well as Navier-Stokes simulations -- without requiring any fine-tuning. What is more, our proposal often outperforms state-of-the-art methods, which are trained on the target datasets. Our pretrained model, repository and tutorials are available online.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 11:48:54 GMT" }, { "version": "v2", "created": "Fri, 4 Oct 2024 10:41:18 GMT" }, { "version": "v3", "created": "Fri, 28 Feb 2025 14:24:39 GMT" }, { "version": "v4", "created": "Fri, 14 Mar 2025 15:37:14 GMT" } ]
2025-03-17T00:00:00
[ [ "Seifner", "Patrick", "" ], [ "Cvejoski", "Kostadin", "" ], [ "Körner", "Antonia", "" ], [ "Sánchez", "Ramsés J.", "" ] ]
TITLE: Zero-shot Imputation with Foundation Inference Models for Dynamical Systems ABSTRACT: Dynamical systems governed by ordinary differential equations (ODEs) serve as models for a vast number of natural and social phenomena. In this work, we offer a fresh perspective on the classical problem of imputing missing time series data, whose underlying dynamics are assumed to be determined by ODEs. Specifically, we revisit ideas from amortized inference and neural operators, and propose a novel supervised learning framework for zero-shot time series imputation, through parametric functions satisfying some (hidden) ODEs. Our proposal consists of two components. First, a broad probability distribution over the space of ODE solutions, observation times and noise mechanisms, with which we generate a large, synthetic dataset of (hidden) ODE solutions, along with their noisy and sparse observations. Second, a neural recognition model that is trained offline, to map the generated time series onto the spaces of initial conditions and time derivatives of the (hidden) ODE solutions, which we then integrate to impute the missing data. We empirically demonstrate that one and the same (pretrained) recognition model can perform zero-shot imputation across 63 distinct time series with missing values, each sampled from widely different dynamical systems. Likewise, we demonstrate that it can perform zero-shot imputation of missing high-dimensional data in 10 vastly different settings, spanning human motion, air quality, traffic and electricity studies, as well as Navier-Stokes simulations -- without requiring any fine-tuning. What is more, our proposal often outperforms state-of-the-art methods, which are trained on the target datasets. Our pretrained model, repository and tutorials are available online.
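The synthetic-data recipe described above, sample a hidden ODE, solve it, then subsample and corrupt the solution, can be sketched with scipy. The random linear ODE, observation rate, and noise level below are illustrative stand-ins for the paper's much broader prior over ODE solutions, observation times, and noise mechanisms.

```python
# Minimal sketch: generate one sparse, noisy training pair from a hidden ODE.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
A = rng.normal(scale=0.5, size=(2, 2))          # random linear ODE dx/dt = Ax
sol = solve_ivp(lambda t, x: A @ x, (0, 5), rng.normal(size=2),
                t_eval=np.linspace(0, 5, 200))

keep = rng.random(200) < 0.2                    # sparse observation times
obs_t = sol.t[keep]
obs_x = sol.y[:, keep] + 0.05 * rng.normal(size=(2, keep.sum()))
print(obs_t.shape, obs_x.shape)                 # one (observations, target) pair
```

A recognition model trained offline on many such pairs can then map new sparse observations to initial conditions and derivatives, which are integrated to impute the gaps.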
2402.08635
Hasan Mahmud
Husne Ara Rubaiyeat, Hasan Mahmud, Ahsan Habib, Md. Kamrul Hasan
BdSLW60: A Word-Level Bangla Sign Language Dataset
null
Accepted by Multimedia Tools and Applications 2025
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Sign language discourse is an essential mode of daily communication for deaf and hard-of-hearing people. However, research on Bangla Sign Language (BdSL) faces notable limitations, primarily due to the lack of datasets. Recognizing word-level signs in BdSL (WL-BdSL) presents a multitude of challenges, including the need for well-annotated datasets, capturing the dynamic nature of sign gestures from facial or hand landmarks, and developing suitable machine learning or deep learning-based models with substantial video samples. In this paper, we address these challenges by creating a comprehensive BdSL word-level dataset named BdSLW60 in an unconstrained and natural setting, allowing positional and temporal variations and allowing sign users to change hand dominance freely. The dataset encompasses 60 Bangla sign words, with a significant scale of 9307 video trials provided by 18 signers under the supervision of a sign language professional. The dataset was rigorously annotated and cross-checked by 60 annotators. We also introduce a unique relative quantization-based key frame encoding technique for landmark-based sign gesture recognition. We report the benchmarking of our BdSLW60 dataset using a Support Vector Machine (SVM) with testing accuracy up to 67.6% and an attention-based bi-LSTM with testing accuracy up to 75.1%. The dataset is available at https://www.kaggle.com/datasets/hasaniut/bdslw60 and the code base is accessible from https://github.com/hasanssl/BdSLW60_Code.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 18:02:58 GMT" } ]
2025-03-17T00:00:00
[ [ "Rubaiyeat", "Husne Ara", "" ], [ "Mahmud", "Hasan", "" ], [ "Habib", "Ahsan", "" ], [ "Hasan", "Md. Kamrul", "" ] ]
TITLE: BdSLW60: A Word-Level Bangla Sign Language Dataset ABSTRACT: Sign language discourse is an essential mode of daily communication for deaf and hard-of-hearing people. However, research on Bangla Sign Language (BdSL) faces notable limitations, primarily due to the lack of datasets. Recognizing word-level signs in BdSL (WL-BdSL) presents a multitude of challenges, including the need for well-annotated datasets, capturing the dynamic nature of sign gestures from facial or hand landmarks, and developing suitable machine learning or deep learning-based models with substantial video samples. In this paper, we address these challenges by creating a comprehensive BdSL word-level dataset named BdSLW60 in an unconstrained and natural setting, allowing positional and temporal variations and allowing sign users to change hand dominance freely. The dataset encompasses 60 Bangla sign words, with a significant scale of 9307 video trials provided by 18 signers under the supervision of a sign language professional. The dataset was rigorously annotated and cross-checked by 60 annotators. We also introduce a unique relative quantization-based key frame encoding technique for landmark-based sign gesture recognition. We report the benchmarking of our BdSLW60 dataset using a Support Vector Machine (SVM) with testing accuracy up to 67.6% and an attention-based bi-LSTM with testing accuracy up to 75.1%. The dataset is available at https://www.kaggle.com/datasets/hasaniut/bdslw60 and the code base is accessible from https://github.com/hasanssl/BdSLW60_Code.
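A hedged sketch of what a relative quantization-based encoding could look like for landmark sequences: express each frame's landmarks relative to a reference landmark, then quantize the coordinates into a small integer alphabet. The reference index, number of levels, and binning scheme below are assumptions, not the authors' specification.

```python
# Minimal sketch: reference-relative quantization of 2D landmark sequences.
import numpy as np

def relative_quantize(landmarks, ref_idx=0, levels=8):
    """landmarks: (T, N, 2) array of per-frame 2D keypoints."""
    rel = landmarks - landmarks[:, ref_idx:ref_idx + 1, :]  # e.g. wrist-relative
    lo, hi = rel.min(), rel.max()
    bins = np.linspace(lo, hi, levels + 1)[1:-1]            # interior bin edges
    return np.digitize(rel, bins)                           # integer codes

T, N = 30, 21                              # 30 frames, 21 hand landmarks
codes = relative_quantize(np.random.rand(T, N, 2))
print(codes.shape, codes.min(), codes.max())   # (30, 21, 2), values in [0, 7]
```

Such compact symbol sequences are then natural inputs for sequence classifiers like the attention-based bi-LSTM benchmarked above.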
2403.08215
Sicen Guo
Sicen Guo, Ziwei Long, Zhiyuan Wu, Qijun Chen, Ioannis Pitas and Rui Fan
LIX: Implicitly Infusing Spatial Geometric Prior Knowledge into Visual Semantic Segmentation for Autonomous Driving
13 pages, 7 figures, 5 tables
null
null
null
cs.CV cs.AI cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the impressive performance achieved by data-fusion networks with duplex encoders for visual semantic segmentation, they become ineffective when spatial geometric data are not available. Implicitly infusing the spatial geometric prior knowledge acquired by a data-fusion teacher network into a single-modal student network is a practical, albeit less explored research avenue. This article delves into this topic and resorts to knowledge distillation approaches to address this problem. We introduce the Learning to Infuse "X" (LIX) framework, with novel contributions in both logit distillation and feature distillation aspects. We present a mathematical proof that underscores the limitation of using a single, fixed weight in decoupled knowledge distillation and introduce a logit-wise dynamic weight controller as a solution to this issue. Furthermore, we develop an adaptively-recalibrated feature distillation algorithm, including two novel techniques: feature recalibration via kernel regression and in-depth feature consistency quantification via centered kernel alignment. Extensive experiments conducted with intermediate-fusion and late-fusion networks across various public datasets provide both quantitative and qualitative evaluations, demonstrating the superior performance of our LIX framework when compared to other state-of-the-art approaches.
[ { "version": "v1", "created": "Wed, 13 Mar 2024 03:24:36 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 09:24:22 GMT" } ]
2025-03-17T00:00:00
[ [ "Guo", "Sicen", "" ], [ "Long", "Ziwei", "" ], [ "Wu", "Zhiyuan", "" ], [ "Chen", "Qijun", "" ], [ "Pitas", "Ioannis", "" ], [ "Fan", "Rui", "" ] ]
TITLE: LIX: Implicitly Infusing Spatial Geometric Prior Knowledge into Visual Semantic Segmentation for Autonomous Driving ABSTRACT: Despite the impressive performance achieved by data-fusion networks with duplex encoders for visual semantic segmentation, they become ineffective when spatial geometric data are not available. Implicitly infusing the spatial geometric prior knowledge acquired by a data-fusion teacher network into a single-modal student network is a practical, albeit less explored research avenue. This article delves into this topic and resorts to knowledge distillation approaches to address this problem. We introduce the Learning to Infuse "X" (LIX) framework, with novel contributions in both logit distillation and feature distillation aspects. We present a mathematical proof that underscores the limitation of using a single, fixed weight in decoupled knowledge distillation and introduce a logit-wise dynamic weight controller as a solution to this issue. Furthermore, we develop an adaptively-recalibrated feature distillation algorithm, including two novel techniques: feature recalibration via kernel regression and in-depth feature consistency quantification via centered kernel alignment. Extensive experiments conducted with intermediate-fusion and late-fusion networks across various public datasets provide both quantitative and qualitative evaluations, demonstrating the superior performance of our LIX framework when compared to other state-of-the-art approaches.
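The idea of replacing a single fixed distillation weight with a dynamic, per-sample controller can be sketched by weighting each sample's KD term with the teacher's normalized, inverted prediction entropy; this particular controller is an illustrative assumption, not the one derived in the paper.

```python
# Minimal sketch: distillation with a dynamic per-sample weight instead of a
# fixed one; confident teachers (low entropy) get a weight close to 1.
import torch
import torch.nn.functional as F

def dynamic_kd_loss(student_logits, teacher_logits, T=4.0):
    p_t = F.softmax(teacher_logits / T, dim=-1)
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    per_sample = F.kl_div(log_p_s, p_t, reduction="none").sum(-1)     # (B,)
    entropy = -(p_t * p_t.clamp_min(1e-8).log()).sum(-1)              # (B,)
    w = 1.0 - entropy / torch.log(torch.tensor(float(p_t.shape[-1])))
    return (w * per_sample).mean() * T * T

loss = dynamic_kd_loss(torch.randn(8, 19), torch.randn(8, 19))
print(float(loss))
```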
2404.02101
Hao He
Hao He, Yinghao Xu, Yuwei Guo, Gordon Wetzstein, Bo Dai, Hongsheng Li, Ceyuan Yang
CameraCtrl: Enabling Camera Control for Text-to-Video Generation
Project page: https://hehao13.github.io/projects-CameraCtrl/ Code: https://github.com/hehao13/CameraCtrl
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Controllability plays a crucial role in video generation, as it allows users to create and edit content more precisely. Existing models, however, lack control over camera pose, which serves as a cinematic language to express deeper narrative nuances. To alleviate this issue, we introduce CameraCtrl, enabling accurate camera pose control for video diffusion models. Our approach explores effective camera trajectory parameterization along with a plug-and-play camera pose control module that is trained on top of a video diffusion model, leaving other modules of the base model untouched. Moreover, a comprehensive study on the effect of various training datasets is conducted, suggesting that videos with diverse camera distributions and similar appearance to the base model indeed enhance controllability and generalization. Experimental results demonstrate the effectiveness of CameraCtrl in achieving precise camera control with different video generation models, marking a step forward in the pursuit of dynamic and customized video storytelling from textual and camera pose inputs.
[ { "version": "v1", "created": "Tue, 2 Apr 2024 16:52:41 GMT" }, { "version": "v2", "created": "Thu, 13 Mar 2025 18:35:06 GMT" } ]
2025-03-17T00:00:00
[ [ "He", "Hao", "" ], [ "Xu", "Yinghao", "" ], [ "Guo", "Yuwei", "" ], [ "Wetzstein", "Gordon", "" ], [ "Dai", "Bo", "" ], [ "Li", "Hongsheng", "" ], [ "Yang", "Ceyuan", "" ] ]
TITLE: CameraCtrl: Enabling Camera Control for Text-to-Video Generation ABSTRACT: Controllability plays a crucial role in video generation, as it allows users to create and edit content more precisely. Existing models, however, lack control over camera pose, which serves as a cinematic language to express deeper narrative nuances. To alleviate this issue, we introduce CameraCtrl, enabling accurate camera pose control for video diffusion models. Our approach explores effective camera trajectory parameterization along with a plug-and-play camera pose control module that is trained on top of a video diffusion model, leaving other modules of the base model untouched. Moreover, a comprehensive study on the effect of various training datasets is conducted, suggesting that videos with diverse camera distributions and similar appearance to the base model indeed enhance controllability and generalization. Experimental results demonstrate the effectiveness of CameraCtrl in achieving precise camera control with different video generation models, marking a step forward in the pursuit of dynamic and customized video storytelling from textual and camera pose inputs.
2405.03279
Qizhou Chen
Qizhou Chen, Taolin Zhang, Xiaofeng He, Dongyang Li, Chengyu Wang, Longtao Huang, Hui Xue
Lifelong Knowledge Editing for LLMs with Retrieval-Augmented Continuous Prompt Learning
EMNLP 2024 main
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Model editing aims to correct outdated or erroneous knowledge in large language models (LLMs) without the need for costly retraining. Lifelong model editing is the most challenging task that caters to the continuous editing requirements of LLMs. Prior works primarily focus on single or batch editing; nevertheless, these methods fall short in lifelong editing scenarios due to catastrophic knowledge forgetting and the degradation of model performance. Although retrieval-based methods alleviate these issues, they are impeded by slow and cumbersome processes of integrating the retrieved knowledge into the model. In this work, we introduce RECIPE, a RetriEval-augmented ContInuous Prompt lEarning method, to boost editing efficacy and inference efficiency in lifelong learning. RECIPE first converts knowledge statements into short and informative continuous prompts, prefixed to the LLM's input query embedding, to efficiently refine the response grounded on the knowledge. It further integrates the Knowledge Sentinel (KS) that acts as an intermediary to calculate a dynamic threshold, determining whether the retrieval repository contains relevant knowledge. Our retriever and prompt encoder are jointly trained to achieve editing properties, i.e., reliability, generality, and locality. In our experiments, RECIPE is assessed extensively across multiple LLMs and editing datasets, where it achieves superior editing performance. RECIPE also demonstrates its capability to maintain the overall performance of LLMs alongside showcasing fast editing and inference speed.
[ { "version": "v1", "created": "Mon, 6 May 2024 08:52:11 GMT" }, { "version": "v2", "created": "Wed, 8 May 2024 03:45:51 GMT" }, { "version": "v3", "created": "Fri, 4 Oct 2024 12:29:46 GMT" }, { "version": "v4", "created": "Fri, 14 Mar 2025 03:56:58 GMT" } ]
2025-03-17T00:00:00
[ [ "Chen", "Qizhou", "" ], [ "Zhang", "Taolin", "" ], [ "He", "Xiaofeng", "" ], [ "Li", "Dongyang", "" ], [ "Wang", "Chengyu", "" ], [ "Huang", "Longtao", "" ], [ "Xue", "Hui", "" ] ]
TITLE: Lifelong Knowledge Editing for LLMs with Retrieval-Augmented Continuous Prompt Learning ABSTRACT: Model editing aims to correct outdated or erroneous knowledge in large language models (LLMs) without the need for costly retraining. Lifelong model editing is the most challenging task that caters to the continuous editing requirements of LLMs. Prior works primarily focus on single or batch editing; nevertheless, these methods fall short in lifelong editing scenarios due to catastrophic knowledge forgetting and the degradation of model performance. Although retrieval-based methods alleviate these issues, they are impeded by slow and cumbersome processes of integrating the retrieved knowledge into the model. In this work, we introduce RECIPE, a RetriEval-augmented ContInuous Prompt lEarning method, to boost editing efficacy and inference efficiency in lifelong learning. RECIPE first converts knowledge statements into short and informative continuous prompts, prefixed to the LLM's input query embedding, to efficiently refine the response grounded on the knowledge. It further integrates the Knowledge Sentinel (KS) that acts as an intermediary to calculate a dynamic threshold, determining whether the retrieval repository contains relevant knowledge. Our retriever and prompt encoder are jointly trained to achieve editing properties, i.e., reliability, generality, and locality. In our experiments, RECIPE is assessed extensively across multiple LLMs and editing datasets, where it achieves superior editing performance. RECIPE also demonstrates its capability to maintain the overall performance of LLMs alongside showcasing fast editing and inference speed.
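The retrieval gate described above can be sketched as follows: retrieve the best-matching knowledge prompt only if the query is closer to it than to a sentinel embedding, which acts as the dynamic threshold. The embeddings, prompt names, and random sentinel below are placeholders; in RECIPE the sentinel, retriever, and prompt encoder are jointly trained.

```python
# Minimal sketch: sentinel-gated retrieval of a continuous knowledge prompt.
import torch
import torch.nn.functional as F

def gated_retrieve(query_emb, repo_embs, repo_prompts, sentinel_emb):
    sims = F.cosine_similarity(query_emb[None, :], repo_embs, dim=-1)
    best = int(sims.argmax())
    threshold = F.cosine_similarity(query_emb, sentinel_emb, dim=0)
    # Retrieve only when the repository beats the sentinel threshold.
    return repo_prompts[best] if sims[best] > threshold else None

d = 16
repo_embs = torch.randn(5, d)
prompts = [f"prompt_{i}" for i in range(5)]     # hypothetical prompt handles
sentinel = torch.randn(d)                       # placeholder; learned in RECIPE
print(gated_retrieve(torch.randn(d), repo_embs, prompts, sentinel))
```

When the gate returns a prompt, it is prefixed to the query embedding so the frozen LLM grounds its answer on the retrieved knowledge; otherwise the query passes through unedited.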
2406.00009
Hang Zhou
Hang Zhou, Ke Ma, Shixiao Liang, Xiaopeng Li, Xiaobo Qu
ULTra-AV: A Unified Longitudinal Trajectory Dataset for Automated Vehicle
NA
null
10.1038/s41597-024-03795-y
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated Vehicles (AVs) promise significant advances in transportation. Critical to these improvements is understanding AVs' longitudinal behavior, relying heavily on real-world trajectory data. Existing open-source AV trajectory datasets, however, often fall short in refinement, reliability, and completeness, hindering effective performance-metric analysis and model development. This study addresses these challenges by creating a Unified Longitudinal TRAjectory dataset for AVs (Ultra-AV) to analyze their microscopic longitudinal driving behaviors. This dataset compiles data from 13 distinct sources, encompassing various AV types, test sites, and experiment scenarios. We established a three-step data processing pipeline: 1. extraction of longitudinal trajectory data, 2. general data cleaning, and 3. data-specific cleaning, to obtain the longitudinal trajectory data and car-following trajectory data. The validity of the processed data is affirmed through performance evaluations across safety, mobility, stability, and sustainability, along with an analysis of the relationships between variables in car-following models. Our work not only furnishes researchers with standardized data and metrics for longitudinal AV behavior studies but also sets guidelines for data collection and model development.
[ { "version": "v1", "created": "Thu, 16 May 2024 21:03:31 GMT" } ]
2025-03-17T00:00:00
[ [ "Zhou", "Hang", "" ], [ "Ma", "Ke", "" ], [ "Liang", "Shixiao", "" ], [ "Li", "Xiaopeng", "" ], [ "Qu", "Xiaobo", "" ] ]
TITLE: ULTra-AV: A Unified Longitudinal Trajectory Dataset for Automated Vehicle ABSTRACT: Automated Vehicles (AVs) promise significant advances in transportation. Critical to these improvements is understanding AVs' longitudinal behavior, relying heavily on real-world trajectory data. Existing open-source AV trajectory datasets, however, often fall short in refinement, reliability, and completeness, hindering effective performance-metric analysis and model development. This study addresses these challenges by creating a Unified Longitudinal TRAjectory dataset for AVs (Ultra-AV) to analyze their microscopic longitudinal driving behaviors. This dataset compiles data from 13 distinct sources, encompassing various AV types, test sites, and experiment scenarios. We established a three-step data processing pipeline: 1. extraction of longitudinal trajectory data, 2. general data cleaning, and 3. data-specific cleaning, to obtain the longitudinal trajectory data and car-following trajectory data. The validity of the processed data is affirmed through performance evaluations across safety, mobility, stability, and sustainability, along with an analysis of the relationships between variables in car-following models. Our work not only furnishes researchers with standardized data and metrics for longitudinal AV behavior studies but also sets guidelines for data collection and model development.
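The three-step processing the abstract enumerates can be sketched as a small pandas pipeline; every column name and plausibility bound below is a hypothetical stand-in, since the released dataset defines its own schema.

```python
# Illustrative sketch of the three processing steps (hypothetical columns).
import pandas as pd

def process(raw: pd.DataFrame) -> pd.DataFrame:
    # 1. Extract the longitudinal trajectory fields of interest.
    df = raw[["time", "speed", "accel", "lead_gap"]].copy()
    # 2. General cleaning: drop missing rows, sort, de-duplicate timestamps.
    df = df.dropna().sort_values("time").drop_duplicates("time")
    # 3. Data-specific cleaning: keep physically plausible kinematics only
    #    (the bounds here are illustrative, not the paper's thresholds).
    df = df[(df.speed >= 0) & (df.accel.between(-8, 5))]
    return df.reset_index(drop=True)
```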
2406.17911
Kun Zhao
Kun Zhao, Chenghao Xiao, Sixing Yan, William K. Cheung, Kai Ye, Noura Al Moubayed, Liang Zhan, Chenghua Lin
X-ray Made Simple: Lay Radiology Report Generation and Robust Evaluation
This paper has undergone substantial data and conceptual changes since its release that go beyond simply updating the existing version. As a result, the author list has changed, and we need to re-coordinate and reach consensus. We have therefore decided to withdraw it
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Radiology Report Generation (RRG) has advanced considerably with the development of multimodal generative models. Despite the progress, the field still faces significant challenges in evaluation, as existing metrics lack robustness and fairness. We reveal that RRG with high performance on existing lexical-based metrics (e.g. BLEU) might be more of a mirage: a model can get a high BLEU score simply by learning the template of reports. This has become a pressing issue for RRG due to the highly patternized nature of these reports. In addition, standard radiology reports are often highly technical. Helping patients understand these reports is crucial from a patient's perspective, yet this has been largely overlooked in previous work. In this work, we counter-intuitively approach these problems by proposing the Layman's RRG framework, which can systematically improve RRG with day-to-day language. Specifically, our framework first contributes a translated Layman's terms dataset. Building upon the dataset, we then propose a semantics-based evaluation method, which is effective in mitigating the inflated numbers of BLEU and provides more robust evaluation. We show that training on the Layman's terms dataset encourages models to focus on the semantics of the reports, as opposed to overfitting to report templates. Lastly, we reveal a promising scaling law between the number of training examples and the semantics gain provided by our dataset, compared to the inverse pattern brought by the original formats.
[ { "version": "v1", "created": "Tue, 25 Jun 2024 19:52:01 GMT" }, { "version": "v2", "created": "Sun, 30 Jun 2024 21:53:56 GMT" }, { "version": "v3", "created": "Wed, 16 Oct 2024 21:57:58 GMT" }, { "version": "v4", "created": "Fri, 21 Feb 2025 22:52:11 GMT" }, { "version": "v5", "created": "Fri, 14 Mar 2025 14:44:32 GMT" } ]
2025-03-17T00:00:00
[ [ "Zhao", "Kun", "" ], [ "Xiao", "Chenghao", "" ], [ "Yan", "Sixing", "" ], [ "Cheung", "William K.", "" ], [ "Ye", "Kai", "" ], [ "Moubayed", "Noura Al", "" ], [ "Zhan", "Liang", "" ], [ "Lin", "Chenghua", "" ] ]
TITLE: X-ray Made Simple: Lay Radiology Report Generation and Robust Evaluation ABSTRACT: Radiology Report Generation (RRG) has advanced considerably with the development of multimodal generative models. Despite the progress, the field still faces significant challenges in evaluation, as existing metrics lack robustness and fairness. We reveal that RRG with high performance on existing lexical-based metrics (e.g. BLEU) might be more of a mirage - a model can get a high BLEU score only by learning the template of reports. This has become a pressing issue for RRG due to the highly patternized nature of these reports. In addition, standard radiology reports are often highly technical. Helping patients understand these reports is crucial from a patient's perspective, yet this has been largely overlooked in previous work. In this work, we counterintuitively approach these problems by proposing the Layman's RRG framework that can systematically improve RRG with day-to-day language. Specifically, our framework first contributes a translated Layman's terms dataset. Building upon the dataset, we then propose a semantics-based evaluation method, which is effective in mitigating the inflated numbers of BLEU and provides a more robust evaluation. We show that training on the layman's terms dataset encourages models to focus on the semantics of the reports, as opposed to overfitting to learning the report templates. Lastly, we reveal a promising scaling law between the number of training examples and the semantics gain provided by our dataset, compared to the inverse pattern brought by the original formats.
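A semantics-based metric of the kind the abstract describes can be sketched with off-the-shelf sentence embeddings; the specific embedding model and cosine-similarity scoring below are assumptions, not the paper's exact method.

```python
from sentence_transformers import SentenceTransformer, util

# Sketch of a semantics-based report metric: score by embedding similarity
# rather than n-gram overlap, so template-copying alone cannot inflate it.
# The embedding model and aggregation are illustrative assumptions.
model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_score(generated: str, reference: str) -> float:
    emb = model.encode([generated, reference], convert_to_tensor=True,
                       normalize_embeddings=True)
    return util.cos_sim(emb[0], emb[1]).item()

ref = "No acute cardiopulmonary abnormality. Heart size is normal."
gen = "The heart is normal in size and the lungs are clear."
print(f"semantic similarity: {semantic_score(gen, ref):.3f}")
```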
2407.11211
Philipp Allgeuer
Philipp Allgeuer and Kyra Ahrens and Stefan Wermter
Unconstrained Open Vocabulary Image Classification: Zero-Shot Transfer from Text to Image via CLIP Inversion
Published at WACV 2025
Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 8206-8217
null
null
cs.CV cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
We introduce NOVIC, an innovative real-time uNconstrained Open Vocabulary Image Classifier that uses an autoregressive transformer to generatively output classification labels as language. Leveraging the extensive knowledge of CLIP models, NOVIC harnesses the embedding space to enable zero-shot transfer from pure text to images. Traditional CLIP models, despite their ability for open vocabulary classification, require an exhaustive prompt of potential class labels, restricting their application to images of known content or context. To address this, we propose an "object decoder" model that is trained on a large-scale 92M-target dataset of templated object noun sets and LLM-generated captions to always output the object noun in question. This effectively inverts the CLIP text encoder and allows textual object labels from essentially the entire English language to be generated directly from image-derived embedding vectors, without requiring any a priori knowledge of the potential content of an image, and without any label biases. The trained decoders are tested on a mix of manually and web-curated datasets, as well as standard image classification benchmarks, and achieve fine-grained prompt-free prediction scores of up to 87.5%, a strong result considering the model must work for any conceivable image and without any contextual clues.
[ { "version": "v1", "created": "Mon, 15 Jul 2024 19:53:02 GMT" }, { "version": "v2", "created": "Wed, 17 Jul 2024 22:23:42 GMT" }, { "version": "v3", "created": "Mon, 18 Nov 2024 14:43:38 GMT" }, { "version": "v4", "created": "Tue, 26 Nov 2024 09:28:35 GMT" } ]
2025-03-17T00:00:00
[ [ "Allgeuer", "Philipp", "" ], [ "Ahrens", "Kyra", "" ], [ "Wermter", "Stefan", "" ] ]
TITLE: Unconstrained Open Vocabulary Image Classification: Zero-Shot Transfer from Text to Image via CLIP Inversion ABSTRACT: We introduce NOVIC, an innovative real-time uNconstrained Open Vocabulary Image Classifier that uses an autoregressive transformer to generatively output classification labels as language. Leveraging the extensive knowledge of CLIP models, NOVIC harnesses the embedding space to enable zero-shot transfer from pure text to images. Traditional CLIP models, despite their ability for open vocabulary classification, require an exhaustive prompt of potential class labels, restricting their application to images of known content or context. To address this, we propose an "object decoder" model that is trained on a large-scale 92M-target dataset of templated object noun sets and LLM-generated captions to always output the object noun in question. This effectively inverts the CLIP text encoder and allows textual object labels from essentially the entire English language to be generated directly from image-derived embedding vectors, without requiring any a priori knowledge of the potential content of an image, and without any label biases. The trained decoders are tested on a mix of manually and web-curated datasets, as well as standard image classification benchmarks, and achieve fine-grained prompt-free prediction scores of up to 87.5%, a strong result considering the model must work for any conceivable image and without any contextual clues.
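An "object decoder" in the sense sketched above can be imagined as a small autoregressive transformer conditioned on a CLIP-style image embedding. All dimensions, the vocabulary size, and the prefix-conditioning scheme below are placeholders, not NOVIC's actual architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of an "object decoder": an autoregressive transformer that
# maps a CLIP-style embedding to a label token sequence. Sizes are placeholders.
class ObjectDecoder(nn.Module):
    def __init__(self, embed_dim=512, vocab=10000, d_model=256, n_layers=2):
        super().__init__()
        self.prefix = nn.Linear(embed_dim, d_model)   # embedding -> memory token
        self.tok = nn.Embedding(vocab, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, clip_emb, tokens):
        memory = self.prefix(clip_emb).unsqueeze(1)   # (B, 1, d)
        tgt = self.tok(tokens)                        # (B, T, d)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        out = self.decoder(tgt, memory, tgt_mask=mask)
        return self.head(out)                         # next-token logits

dec = ObjectDecoder()
logits = dec(torch.randn(2, 512), torch.randint(0, 10000, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 10000])
```

At inference, the decoder would be sampled greedily from an image-derived embedding, which is the "inversion" of the text encoder described in the abstract.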
2408.12928
An-Lan Wang
An-Lan Wang, Bin Shan, Wei Shi, Kun-Yu Lin, Xiang Fei, Guozhi Tang, Lei Liao, Can Huang, Jingqun Tang, Wei-Shi Zheng
ParGo: Bridging Vision-Language with Partial and Global Views
Accepted by AAAI 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This work presents ParGo, a novel Partial-Global projector designed to connect the vision and language modalities for Multimodal Large Language Models (MLLMs). Unlike previous works that rely on global attention-based projectors, our ParGo bridges the representation gap between the separately pre-trained vision encoders and the LLMs by integrating global and partial views, which alleviates the overemphasis on prominent regions. To facilitate the effective training of ParGo, we collect a large-scale detail-captioned image-text dataset named ParGoCap-1M-PT, consisting of 1 million images paired with high-quality captions. Extensive experiments on several MLLM benchmarks demonstrate the effectiveness of our ParGo, highlighting its superiority in aligning vision and language modalities. Compared to the conventional Q-Former projector, our ParGo achieves an improvement of 259.96 points on the MME benchmark. Furthermore, our experiments reveal that ParGo significantly outperforms other projectors, particularly in tasks that emphasize detail perception ability.
[ { "version": "v1", "created": "Fri, 23 Aug 2024 09:14:58 GMT" }, { "version": "v2", "created": "Tue, 7 Jan 2025 09:39:15 GMT" }, { "version": "v3", "created": "Fri, 14 Mar 2025 08:48:51 GMT" } ]
2025-03-17T00:00:00
[ [ "Wang", "An-Lan", "" ], [ "Shan", "Bin", "" ], [ "Shi", "Wei", "" ], [ "Lin", "Kun-Yu", "" ], [ "Fei", "Xiang", "" ], [ "Tang", "Guozhi", "" ], [ "Liao", "Lei", "" ], [ "Huang", "Can", "" ], [ "Tang", "Jingqun", "" ], [ "Zheng", "Wei-Shi", "" ] ]
TITLE: ParGo: Bridging Vision-Language with Partial and Global Views ABSTRACT: This work presents ParGo, a novel Partial-Global projector designed to connect the vision and language modalities for Multimodal Large Language Models (MLLMs). Unlike previous works that rely on global attention-based projectors, our ParGo bridges the representation gap between the separately pre-trained vision encoders and the LLMs by integrating global and partial views, which alleviates the overemphasis on prominent regions. To facilitate the effective training of ParGo, we collect a large-scale detail-captioned image-text dataset named ParGoCap-1M-PT, consisting of 1 million images paired with high-quality captions. Extensive experiments on several MLLM benchmarks demonstrate the effectiveness of our ParGo, highlighting its superiority in aligning vision and language modalities. Compared to the conventional Q-Former projector, our ParGo achieves an improvement of 259.96 points on the MME benchmark. Furthermore, our experiments reveal that ParGo significantly outperforms other projectors, particularly in tasks that emphasize detail perception ability.
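One plausible reading of a partial-global projector is a set of learnable queries where global queries attend to all visual tokens and partial queries attend only to local token groups. The token counts, group split, and shared attention module below are illustrative assumptions, not ParGo's published design.

```python
import torch
import torch.nn as nn

# Sketch of a partial-global projector: global queries see all visual tokens,
# each partial query sees only one local group of tokens.
class PartialGlobalProjector(nn.Module):
    def __init__(self, dim=768, n_global=16, n_groups=8):
        super().__init__()
        self.global_q = nn.Parameter(torch.randn(n_global, dim))
        self.partial_q = nn.Parameter(torch.randn(n_groups, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, vis):                        # vis: (B, N, dim)
        B, N, D = vis.shape
        g = self.global_q.expand(B, -1, -1)
        g_out, _ = self.attn(g, vis, vis)          # global view: full attention
        groups = vis.view(B, self.partial_q.size(0), -1, D)  # local groups
        p_out = []
        for i in range(groups.size(1)):            # partial view: local attention
            q = self.partial_q[i].expand(B, 1, -1)
            o, _ = self.attn(q, groups[:, i], groups[:, i])
            p_out.append(o)
        return torch.cat([g_out, torch.cat(p_out, dim=1)], dim=1)

proj = PartialGlobalProjector()
print(proj(torch.randn(2, 64, 768)).shape)  # (2, 16 global + 8 partial, 768)
```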
2409.00138
Yijia Shao
Yijia Shao, Tianshi Li, Weiyan Shi, Yanchen Liu, Diyi Yang
PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action
NeurIPS 2024 Datasets and Benchmarks Track
null
null
null
cs.CL cs.AI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As language models (LMs) are widely utilized in personalized communication scenarios (e.g., sending emails, writing social media posts) and endowed with a certain level of agency, ensuring they act in accordance with the contextual privacy norms becomes increasingly critical. However, quantifying the privacy norm awareness of LMs and the emerging privacy risk in LM-mediated communication is challenging due to (1) the contextual and long-tailed nature of privacy-sensitive cases, and (2) the lack of evaluation approaches that capture realistic application scenarios. To address these challenges, we propose PrivacyLens, a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories, enabling multi-level evaluation of privacy leakage in LM agents' actions. We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds. Using this dataset, we reveal a discrepancy between LM performance in answering probing questions and their actual behavior when executing user instructions in an agent setup. State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions. We also demonstrate the dynamic nature of PrivacyLens by extending each seed into multiple trajectories to red-team LM privacy leakage risk. Dataset and code are available at https://github.com/SALT-NLP/PrivacyLens.
[ { "version": "v1", "created": "Thu, 29 Aug 2024 17:58:38 GMT" }, { "version": "v2", "created": "Thu, 17 Oct 2024 04:43:40 GMT" }, { "version": "v3", "created": "Fri, 14 Mar 2025 06:03:20 GMT" } ]
2025-03-17T00:00:00
[ [ "Shao", "Yijia", "" ], [ "Li", "Tianshi", "" ], [ "Shi", "Weiyan", "" ], [ "Liu", "Yanchen", "" ], [ "Yang", "Diyi", "" ] ]
TITLE: PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action ABSTRACT: As language models (LMs) are widely utilized in personalized communication scenarios (e.g., sending emails, writing social media posts) and endowed with a certain level of agency, ensuring they act in accordance with the contextual privacy norms becomes increasingly critical. However, quantifying the privacy norm awareness of LMs and the emerging privacy risk in LM-mediated communication is challenging due to (1) the contextual and long-tailed nature of privacy-sensitive cases, and (2) the lack of evaluation approaches that capture realistic application scenarios. To address these challenges, we propose PrivacyLens, a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories, enabling multi-level evaluation of privacy leakage in LM agents' actions. We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds. Using this dataset, we reveal a discrepancy between LM performance in answering probing questions and their actual behavior when executing user instructions in an agent setup. State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions. We also demonstrate the dynamic nature of PrivacyLens by extending each seed into multiple trajectories to red-team LM privacy leakage risk. Dataset and code are available at https://github.com/SALT-NLP/PrivacyLens.
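The action-level evaluation the abstract describes can be pictured as a check of whether an agent's emitted action repeats sensitive items seeded into the scenario. The seed structure and the simple substring matcher below are illustrative assumptions, not PrivacyLens' actual evaluator.

```python
# Sketch of an action-level leakage check: flag an agent's outgoing message if
# it repeats sensitive items seeded into the scenario.
def leaked_items(agent_action: str, sensitive_items: list[str]) -> list[str]:
    text = agent_action.lower()
    return [item for item in sensitive_items if item.lower() in text]

seed = ["diagnosed with depression", "salary of $152,000"]
action = "Email to team: Alex has been diagnosed with depression and needs leave."
hits = leaked_items(action, seed)
print(f"leak detected: {bool(hits)}, items: {hits}")
```

In practice such a check would be run over whole agent trajectories, with semantic rather than literal matching, which is where the gap between probing-question answers and actual agent behavior becomes measurable.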
2409.02465
Zhe Xu
Zhe Xu, Jiasheng Ye, Xiaoran Liu, Xiangyang Liu, Tianxiang Sun, Zhigeng Liu, Qipeng Guo, Linlin Li, Qun Liu, Xuanjing Huang, Xipeng Qiu
DetectiveQA: Evaluating Long-Context Reasoning on Detective Novels
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, significant efforts have been devoted to enhancing the long-context capabilities of Large Language Models (LLMs), particularly in long-context reasoning. To facilitate this research, we propose \textbf{DetectiveQA}, a dataset specifically designed for narrative reasoning within long contexts. We leverage detective novels, averaging over 100k tokens, to create a dataset containing 1200 human-annotated questions in both Chinese and English, each paired with corresponding reference reasoning steps. Furthermore, we introduce a step-wise reasoning metric, which enhances the evaluation of LLMs' reasoning processes. We validate our approach and evaluate the mainstream LLMs, including GPT-4, Claude, and LLaMA, revealing persistent long-context reasoning challenges and difficulties in evidence retrieval. Our findings offer valuable insights into the study of long-context reasoning and lay the groundwork for more rigorous evaluations.
[ { "version": "v1", "created": "Wed, 4 Sep 2024 06:28:22 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 08:44:06 GMT" } ]
2025-03-17T00:00:00
[ [ "Xu", "Zhe", "" ], [ "Ye", "Jiasheng", "" ], [ "Liu", "Xiaoran", "" ], [ "Liu", "Xiangyang", "" ], [ "Sun", "Tianxiang", "" ], [ "Liu", "Zhigeng", "" ], [ "Guo", "Qipeng", "" ], [ "Li", "Linlin", "" ], [ "Liu", "Qun", "" ], [ "Huang", "Xuanjing", "" ], [ "Qiu", "Xipeng", "" ] ]
TITLE: DetectiveQA: Evaluating Long-Context Reasoning on Detective Novels ABSTRACT: Recently, significant efforts have been devoted to enhancing the long-context capabilities of Large Language Models (LLMs), particularly in long-context reasoning. To facilitate this research, we propose \textbf{DetectiveQA}, a dataset specifically designed for narrative reasoning within long contexts. We leverage detective novels, averaging over 100k tokens, to create a dataset containing 1200 human-annotated questions in both Chinese and English, each paired with corresponding reference reasoning steps. Furthermore, we introduce a step-wise reasoning metric, which enhances the evaluation of LLMs' reasoning processes. We validate our approach and evaluate the mainstream LLMs, including GPT-4, Claude, and LLaMA, revealing persistent long-context reasoning challenges and difficulties in evidence retrieval. Our findings offer valuable insights into the study of long-context reasoning and lay the groundwork for more rigorous evaluations.
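A step-wise reasoning metric can be sketched as crediting each annotated reference step that some predicted step covers. The token-overlap matcher and the 0.5 threshold below are illustrative assumptions, not the paper's exact metric.

```python
# Sketch of a step-wise reasoning metric: each annotated reference step earns
# credit if some predicted step overlaps it enough.
def step_score(predicted_steps: list[str], reference_steps: list[str]) -> float:
    def overlap(a: str, b: str) -> float:
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(len(wb), 1)

    matched = sum(
        any(overlap(p, r) >= 0.5 for p in predicted_steps) for r in reference_steps
    )
    return matched / max(len(reference_steps), 1)

ref = ["the butler had no alibi after midnight", "the knife matches the wound"]
pred = ["the butler lacked an alibi after midnight", "the victim knew the killer"]
print(f"step-wise score: {step_score(pred, ref):.2f}")  # 0.50
```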
2409.03277
Zhengzhuo Xu
Zhengzhuo Xu, Bowen Qu, Yiyan Qi, Sinan Du, Chengjin Xu, Chun Yuan, Jian Guo
ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding
null
null
null
null
cs.AI cs.CL cs.CV
http://creativecommons.org/licenses/by/4.0/
Automatic chart understanding is crucial for content comprehension and document parsing. Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities in chart understanding through domain-specific alignment and fine-tuning. However, current MLLMs still struggle to provide faithful data and reliable analysis based solely on charts. To address this, we propose ChartMoE, which employs the Mixture of Experts (MoE) architecture to replace the traditional linear projector and bridge the modality gap. Specifically, we train several linear connectors through distinct alignment tasks, which are utilized as the foundational initialization parameters for different experts. Additionally, we introduce ChartMoE-Align, a dataset with nearly 1 million chart-table-JSON-code quadruples to conduct three alignment tasks (chart-table/JSON/code). Combined with the vanilla connector, we initialize different experts diversely and adopt high-quality knowledge learning to further refine the MoE connector and LLM parameters. Extensive experiments demonstrate the effectiveness of the MoE connector and our initialization strategy, e.g., ChartMoE improves the accuracy of the previous state-of-the-art from 80.48\% to 84.64\% on the ChartQA benchmark.
[ { "version": "v1", "created": "Thu, 5 Sep 2024 06:41:02 GMT" }, { "version": "v2", "created": "Tue, 4 Feb 2025 17:22:34 GMT" }, { "version": "v3", "created": "Fri, 14 Mar 2025 03:19:00 GMT" } ]
2025-03-17T00:00:00
[ [ "Xu", "Zhengzhuo", "" ], [ "Qu", "Bowen", "" ], [ "Qi", "Yiyan", "" ], [ "Du", "Sinan", "" ], [ "Xu", "Chengjin", "" ], [ "Yuan", "Chun", "" ], [ "Guo", "Jian", "" ] ]
TITLE: ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding ABSTRACT: Automatic chart understanding is crucial for content comprehension and document parsing. Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities in chart understanding through domain-specific alignment and fine-tuning. However, current MLLMs still struggle to provide faithful data and reliable analysis based solely on charts. To address this, we propose ChartMoE, which employs the Mixture of Experts (MoE) architecture to replace the traditional linear projector and bridge the modality gap. Specifically, we train several linear connectors through distinct alignment tasks, which are utilized as the foundational initialization parameters for different experts. Additionally, we introduce ChartMoE-Align, a dataset with nearly 1 million chart-table-JSON-code quadruples to conduct three alignment tasks (chart-table/JSON/code). Combined with the vanilla connector, we initialize different experts diversely and adopt high-quality knowledge learning to further refine the MoE connector and LLM parameters. Extensive experiments demonstrate the effectiveness of the MoE connector and our initialization strategy, e.g., ChartMoE improves the accuracy of the previous state-of-the-art from 80.48\% to 84.64\% on the ChartQA benchmark.
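An MoE connector of the kind described, several linear experts mixed by a learned gate, can be sketched as follows. The dimensions, expert count, and top-k routing are illustrative assumptions; in the paper's framing each expert's weights would be initialized from a different alignment task.

```python
import torch
import torch.nn as nn

# Sketch of an MoE connector: several linear "experts" mixed by a learned
# top-k gate. Dimensions, expert count, and top-k are illustrative assumptions.
class MoEConnector(nn.Module):
    def __init__(self, vis_dim=1024, llm_dim=4096, n_experts=4, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(vis_dim, llm_dim) for _ in range(n_experts))
        self.gate = nn.Linear(vis_dim, n_experts)
        self.top_k = top_k

    def forward(self, x):                            # x: (B, N, vis_dim)
        weights = self.gate(x).softmax(dim=-1)       # (B, N, E)
        topv, topi = weights.topk(self.top_k, dim=-1)
        topv = topv / topv.sum(dim=-1, keepdim=True) # renormalize selected gates
        stacked = torch.stack([e(x) for e in self.experts], dim=2)  # (B, N, E, D)
        idx = topi.unsqueeze(-1).expand(-1, -1, -1, stacked.size(-1))
        chosen = stacked.gather(2, idx)              # (B, N, k, D)
        return (topv.unsqueeze(-1) * chosen).sum(dim=2)

moe = MoEConnector()
print(moe(torch.randn(2, 10, 1024)).shape)  # torch.Size([2, 10, 4096])
```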
2409.06316
Daniel Rose
Daniel Rose, Oliver Wieder, Thomas Seidel, Thierry Langer
PharmacoMatch: Efficient 3D Pharmacophore Screening via Neural Subgraph Matching
null
null
null
null
cs.LG cs.AI q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The increasing size of screening libraries poses a significant challenge for the development of virtual screening methods for drug discovery, necessitating a re-evaluation of traditional approaches in the era of big data. Although 3D pharmacophore screening remains a prevalent technique, its application to very large datasets is limited by the computational cost associated with matching query pharmacophores to database molecules. In this study, we introduce PharmacoMatch, a novel contrastive learning approach based on neural subgraph matching. Our method reinterprets pharmacophore screening as an approximate subgraph matching problem and enables efficient querying of conformational databases by encoding query-target relationships in the embedding space. We conduct comprehensive investigations of the learned representations and evaluate PharmacoMatch as a pre-screening tool in a zero-shot setting. We demonstrate significantly shorter runtimes and comparable performance metrics to existing solutions, providing a promising speed-up for screening very large datasets.
[ { "version": "v1", "created": "Tue, 10 Sep 2024 08:17:06 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 09:51:43 GMT" } ]
2025-03-17T00:00:00
[ [ "Rose", "Daniel", "" ], [ "Wieder", "Oliver", "" ], [ "Seidel", "Thomas", "" ], [ "Langer", "Thierry", "" ] ]
TITLE: PharmacoMatch: Efficient 3D Pharmacophore Screening via Neural Subgraph Matching ABSTRACT: The increasing size of screening libraries poses a significant challenge for the development of virtual screening methods for drug discovery, necessitating a re-evaluation of traditional approaches in the era of big data. Although 3D pharmacophore screening remains a prevalent technique, its application to very large datasets is limited by the computational cost associated with matching query pharmacophores to database molecules. In this study, we introduce PharmacoMatch, a novel contrastive learning approach based on neural subgraph matching. Our method reinterprets pharmacophore screening as an approximate subgraph matching problem and enables efficient querying of conformational databases by encoding query-target relationships in the embedding space. We conduct comprehensive investigations of the learned representations and evaluate PharmacoMatch as a pre-screening tool in a zero-shot setting. We demonstrate significantly shorter runtimes and comparable performance metrics to existing solutions, providing a promising speed-up for screening very large datasets.
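Neural subgraph matching is often realized with order embeddings, where "query is a subgraph of target" maps to a coordinate-wise inequality between their embeddings. This common formulation and the penalty below are assumptions for illustration, not necessarily PharmacoMatch's exact loss.

```python
import torch

# Order-embedding view of subgraph matching: a query matches a target when its
# embedding is coordinate-wise "below" the target's, so the violation is zero.
def match_penalty(query_emb: torch.Tensor, target_emb: torch.Tensor) -> torch.Tensor:
    # Positive wherever the query exceeds the target; zero means "is subgraph of".
    return torch.clamp(query_emb - target_emb, min=0).pow(2).sum(dim=-1)

query = torch.tensor([[0.2, 0.5, 0.1]])   # small query pharmacophore
target = torch.tensor([[0.9, 0.6, 0.4]])  # larger target conformation
print(match_penalty(query, target))        # tensor([0.]) -> predicted match

decoy = torch.tensor([[0.1, 0.2, 0.05]])
print(match_penalty(query, decoy))         # > 0 -> predicted non-match
```

The appeal for screening is that database embeddings can be precomputed once, and each query then reduces to cheap vector comparisons rather than explicit graph matching.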
2409.15397
Danijel Korzinek
Nikola Ljube\v{s}i\'c, Peter Rupnik and Danijel Kor\v{z}inek
The ParlaSpeech Collection of Automatically Generated Speech and Text Datasets from Parliamentary Proceedings
Submitted to SPECOM 2024
null
10.1007/978-3-031-77961-9_10
null
eess.AS cs.CL cs.LG cs.SD
http://creativecommons.org/licenses/by/4.0/
Recent significant improvements in speech and language technologies come both from self-supervised approaches over raw language data as well as various types of explicit supervision. To ensure high-quality processing of spoken data, the most useful type of explicit supervision is still the alignment between the speech signal and its corresponding text transcript, which is a data type that is not available for many languages. In this paper, we present our approach to building large and open speech-and-text-aligned datasets of less-resourced languages based on transcripts of parliamentary proceedings and their recordings. Our starting point is the ParlaMint comparable corpora of transcripts of parliamentary proceedings of 26 national European parliaments. In the pilot run on expanding the ParlaMint corpora with aligned publicly available recordings, we focus on three Slavic languages, namely Croatian, Polish, and Serbian. The main challenge of our approach is the lack of any global alignment between the ParlaMint texts and the available recordings, as well as the sometimes varying data order in each of the modalities, which requires a novel approach in aligning long sequences of text and audio in a large search space. The results of this pilot run are three high-quality datasets that span more than 5,000 hours of speech and accompanying text transcripts. Although these datasets already make a huge difference in the availability of spoken and textual data for the three languages, we want to emphasize the potential of the presented approach in building similar datasets for many more languages.
[ { "version": "v1", "created": "Mon, 23 Sep 2024 10:12:18 GMT" }, { "version": "v2", "created": "Tue, 26 Nov 2024 12:50:50 GMT" } ]
2025-03-17T00:00:00
[ [ "Ljubešić", "Nikola", "" ], [ "Rupnik", "Peter", "" ], [ "Koržinek", "Danijel", "" ] ]
TITLE: The ParlaSpeech Collection of Automatically Generated Speech and Text Datasets from Parliamentary Proceedings ABSTRACT: Recent significant improvements in speech and language technologies come both from self-supervised approaches over raw language data as well as various types of explicit supervision. To ensure high-quality processing of spoken data, the most useful type of explicit supervision is still the alignment between the speech signal and its corresponding text transcript, which is a data type that is not available for many languages. In this paper, we present our approach to building large and open speech-and-text-aligned datasets of less-resourced languages based on transcripts of parliamentary proceedings and their recordings. Our starting point is the ParlaMint comparable corpora of transcripts of parliamentary proceedings of 26 national European parliaments. In the pilot run on expanding the ParlaMint corpora with aligned publicly available recordings, we focus on three Slavic languages, namely Croatian, Polish, and Serbian. The main challenge of our approach is the lack of any global alignment between the ParlaMint texts and the available recordings, as well as the sometimes varying data order in each of the modalities, which requires a novel approach in aligning long sequences of text and audio in a large search space. The results of this pilot run are three high-quality datasets that span more than 5,000 hours of speech and accompanying text transcripts. Although these datasets already make a huge difference in the availability of spoken and textual data for the three languages, we want to emphasize the potential of the presented approach in building similar datasets for many more languages.
2409.15784
Sung Yun Lee
Sung Yun Lee, Do Hyung Cho, Chulho Jung, Daeho Sung, Daewoong Nam, Sangsoo Kim, Changyong Song
Deep-learning real-time phase retrieval of imperfect diffraction patterns from X-ray free-electron lasers
null
null
10.1038/s41524-025-01569-7
null
physics.app-ph cond-mat.mtrl-sci cs.LG physics.optics
http://creativecommons.org/licenses/by-nc-nd/4.0/
Machine learning is attracting surging interest across nearly all scientific areas by enabling the analysis of large datasets and the extraction of scientific information from incomplete data. Data-driven science is rapidly growing, especially in X-ray methodologies, where advanced light sources and detection technologies accumulate vast amounts of data that exceed meticulous human inspection capabilities. Despite the increasing demands, the full application of machine learning has been hindered by the need for data-specific optimizations. In this study, we introduce a new deep-learning-based phase retrieval method for imperfect diffraction data. This method provides robust phase retrieval for simulated data and performs well on weak-signal single-pulse diffraction data from X-ray free-electron lasers. Moreover, the method significantly reduces data processing time, facilitating real-time image reconstructions that are crucial for high-repetition-rate data acquisition. Thus, this approach offers a reliable solution to the phase problem and is expected to be widely adopted across various research areas.
[ { "version": "v1", "created": "Tue, 24 Sep 2024 06:28:25 GMT" } ]
2025-03-17T00:00:00
[ [ "Lee", "Sung Yun", "" ], [ "Cho", "Do Hyung", "" ], [ "Jung", "Chulho", "" ], [ "Sung", "Daeho", "" ], [ "Nam", "Daewoong", "" ], [ "Kim", "Sangsoo", "" ], [ "Song", "Changyong", "" ] ]
TITLE: Deep-learning real-time phase retrieval of imperfect diffraction patterns from X-ray free-electron lasers ABSTRACT: Machine learning is attracting surging interest across nearly all scientific areas by enabling the analysis of large datasets and the extraction of scientific information from incomplete data. Data-driven science is rapidly growing, especially in X-ray methodologies, where advanced light sources and detection technologies accumulate vast amounts of data that exceed meticulous human inspection capabilities. Despite the increasing demands, the full application of machine learning has been hindered by the need for data-specific optimizations. In this study, we introduce a new deep-learning-based phase retrieval method for imperfect diffraction data. This method provides robust phase retrieval for simulated data and performs well on weak-signal single-pulse diffraction data from X-ray free-electron lasers. Moreover, the method significantly reduces data processing time, facilitating real-time image reconstructions that are crucial for high-repetition-rate data acquisition. Thus, this approach offers a reliable solution to the phase problem and is expected to be widely adopted across various research areas.
2410.02086
Minoh Jeong
Minoh Jeong, Min Namgung, Zae Myung Kim, Dongyeop Kang, Yao-Yi Chiang, Alfred Hero
Anchors Aweigh! Sail for Optimal Unified Multi-Modal Representations
null
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A unified representation space in multi-modal learning is essential for effectively integrating diverse data sources, such as text, images, and audio, to enhance efficiency and performance across various downstream tasks. Recent binding methods, such as ImageBind (Girdhar et al., 2023), typically rely on a single, fixed anchor modality for aligning multi-modal data. We mathematically analyze these fixed-anchor binding methods and uncover significant limitations: (1) over-reliance on the choice of the anchor modality, (2) inadequate capture of intra-modal information, and (3) failure to account for cross-modal correlation among non-anchored modalities. To address these issues, we propose the need for adaptive anchor binding methods, exemplified by our framework CentroBind. The proposed method uses adaptively adjustable centroid-based anchors generated from all available modalities, leading to a balanced and rich representation space. We theoretically demonstrate that our approach captures three critical properties of multi-modal learning -- intra-modal learning, inter-modal learning, and multi-modal alignment -- while constructing a unified representation that spans all modalities. Experiments on both synthetic and real-world datasets show that adaptive anchor methods such as CentroBind consistently outperform fixed anchor binding methods, verifying our analysis.
[ { "version": "v1", "created": "Wed, 2 Oct 2024 23:19:23 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 16:36:53 GMT" } ]
2025-03-17T00:00:00
[ [ "Jeong", "Minoh", "" ], [ "Namgung", "Min", "" ], [ "Kim", "Zae Myung", "" ], [ "Kang", "Dongyeop", "" ], [ "Chiang", "Yao-Yi", "" ], [ "Hero", "Alfred", "" ] ]
TITLE: Anchors Aweigh! Sail for Optimal Unified Multi-Modal Representations ABSTRACT: A unified representation space in multi-modal learning is essential for effectively integrating diverse data sources, such as text, images, and audio, to enhance efficiency and performance across various downstream tasks. Recent binding methods, such as ImageBind (Girdhar et al., 2023), typically rely on a single, fixed anchor modality for aligning multi-modal data. We mathematically analyze these fixed-anchor binding methods and uncover significant limitations: (1) over-reliance on the choice of the anchor modality, (2) inadequate capture of intra-modal information, and (3) failure to account for cross-modal correlation among non-anchored modalities. To address these issues, we propose the need for adaptive anchor binding methods, exemplified by our framework CentroBind. The proposed method uses adaptively adjustable centroid-based anchors generated from all available modalities, leading to a balanced and rich representation space. We theoretically demonstrate that our approach captures three critical properties of multi-modal learning -- intra-modal learning, inter-modal learning, and multi-modal alignment -- while constructing a unified representation that spans all modalities. Experiments on both synthetic and real-world datasets show that adaptive anchor methods such as CentroBind consistently outperform fixed anchor binding methods, verifying our analysis.
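The centroid-anchor idea can be sketched as follows: instead of fixing one modality as the anchor, every modality is aligned to the mean of all modality embeddings with a contrastive loss. The temperature, batch layout, and InfoNCE form are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

# Sketch of centroid-based anchoring: align every modality to the centroid of
# all modality embeddings via an InfoNCE loss over in-batch negatives.
def centroid_infonce(mod_embs: list[torch.Tensor], tau: float = 0.07) -> torch.Tensor:
    embs = [F.normalize(e, dim=-1) for e in mod_embs]
    anchor = F.normalize(torch.stack(embs).mean(dim=0), dim=-1)  # (B, D) centroid
    loss = 0.0
    for e in embs:
        logits = e @ anchor.t() / tau            # (B, B) similarities
        labels = torch.arange(e.size(0))         # positives on the diagonal
        loss = loss + F.cross_entropy(logits, labels)
    return loss / len(embs)

B, D = 8, 128
image, text, audio = (torch.randn(B, D) for _ in range(3))
print(centroid_infonce([image, text, audio]).item())
```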
2410.02603
Fantine Huot
Fantine Huot, Reinald Kim Amplayo, Jennimaria Palomaki, Alice Shoshana Jakobovits, Elizabeth Clark, Mirella Lapata
Agents' Room: Narrative Generation through Multi-step Collaboration
Published as a conference paper at ICLR 2025
null
null
null
cs.CL cs.LG cs.MA
http://creativecommons.org/licenses/by/4.0/
Writing compelling fiction is a multifaceted process combining elements such as crafting a plot, developing interesting characters, and using evocative language. While large language models (LLMs) show promise for story writing, they currently rely heavily on intricate prompting, which limits their use. We propose Agents' Room, a generation framework inspired by narrative theory, that decomposes narrative writing into subtasks tackled by specialized agents. To illustrate our method, we introduce Tell Me A Story, a high-quality dataset of complex writing prompts and human-written stories, and a novel evaluation framework designed specifically for assessing long narratives. We show that Agents' Room generates stories that are preferred by expert evaluators over those produced by baseline systems by leveraging collaboration and specialization to decompose the complex story writing task into tractable components. We provide extensive analysis with automated and human-based metrics of the generated output.
[ { "version": "v1", "created": "Thu, 3 Oct 2024 15:44:42 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 17:09:03 GMT" } ]
2025-03-17T00:00:00
[ [ "Huot", "Fantine", "" ], [ "Amplayo", "Reinald Kim", "" ], [ "Palomaki", "Jennimaria", "" ], [ "Jakobovits", "Alice Shoshana", "" ], [ "Clark", "Elizabeth", "" ], [ "Lapata", "Mirella", "" ] ]
TITLE: Agents' Room: Narrative Generation through Multi-step Collaboration ABSTRACT: Writing compelling fiction is a multifaceted process combining elements such as crafting a plot, developing interesting characters, and using evocative language. While large language models (LLMs) show promise for story writing, they currently rely heavily on intricate prompting, which limits their use. We propose Agents' Room, a generation framework inspired by narrative theory, that decomposes narrative writing into subtasks tackled by specialized agents. To illustrate our method, we introduce Tell Me A Story, a high-quality dataset of complex writing prompts and human-written stories, and a novel evaluation framework designed specifically for assessing long narratives. We show that Agents' Room generates stories that are preferred by expert evaluators over those produced by baseline systems by leveraging collaboration and specialization to decompose the complex story writing task into tractable components. We provide extensive analysis with automated and human-based metrics of the generated output.
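The decomposition into specialized agents can be pictured as role-specific prompts applied in sequence. The roles, prompt wording, and the `call_llm` stand-in below are hypothetical illustrations, not the paper's actual agents.

```python
# Sketch of multi-step agent collaboration for narrative writing: specialized
# "agents" are role-specific prompts applied in sequence.
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in; replace with a real chat-completion client.
    return f"[output for: {prompt[:40]}...]"

AGENTS = [
    ("planner", "Outline a plot for this premise:\n{ctx}"),
    ("character", "Develop the main characters for this outline:\n{ctx}"),
    ("writer", "Write the full story using this plan:\n{ctx}"),
]

def agents_room(premise: str) -> str:
    context = premise
    for role, template in AGENTS:
        context = call_llm(template.format(ctx=context))  # each agent refines
    return context

print(agents_room("A lighthouse keeper finds a message in a bottle."))
```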
2410.03461
Tobias Leemann
Tobias Leemann, Periklis Petridis, Giuseppe Vietri, Dionysis Manousakas, Aaron Roth, Sergul Aydore
Auto-GDA: Automatic Domain Adaptation for Efficient Grounding Verification in Retrieval-Augmented Generation
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While retrieval-augmented generation (RAG) has been shown to enhance factuality of large language model (LLM) outputs, LLMs still suffer from hallucination, generating incorrect or irrelevant information. A common detection strategy involves prompting the LLM again to assess whether its response is grounded in the retrieved evidence, but this approach is costly. Alternatively, lightweight natural language inference (NLI) models for efficient grounding verification can be used at inference time. While existing pre-trained NLI models offer potential solutions, their performance remains subpar compared to larger models on realistic RAG inputs. RAG inputs are more complex than most datasets used for training NLI models and have characteristics specific to the underlying knowledge base, requiring adaptation of the NLI models to a specific target domain. Additionally, the lack of labeled instances in the target domain makes supervised domain adaptation, e.g., through fine-tuning, infeasible. To address these challenges, we introduce Automatic Generative Domain Adaptation (Auto-GDA). Our framework enables unsupervised domain adaptation through synthetic data generation. Unlike previous methods that rely on handcrafted filtering and augmentation strategies, Auto-GDA employs an iterative process to continuously improve the quality of generated samples using weak labels from less efficient teacher models and discrete optimization to select the most promising augmented samples. Experimental results demonstrate the effectiveness of our approach, with models fine-tuned on synthetic data using Auto-GDA often surpassing the performance of the teacher model and reaching the performance level of LLMs at 10% of their computational cost.
[ { "version": "v1", "created": "Fri, 4 Oct 2024 14:21:27 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 17:27:00 GMT" } ]
2025-03-17T00:00:00
[ [ "Leemann", "Tobias", "" ], [ "Petridis", "Periklis", "" ], [ "Vietri", "Giuseppe", "" ], [ "Manousakas", "Dionysis", "" ], [ "Roth", "Aaron", "" ], [ "Aydore", "Sergul", "" ] ]
TITLE: Auto-GDA: Automatic Domain Adaptation for Efficient Grounding Verification in Retrieval-Augmented Generation ABSTRACT: While retrieval-augmented generation (RAG) has been shown to enhance factuality of large language model (LLM) outputs, LLMs still suffer from hallucination, generating incorrect or irrelevant information. A common detection strategy involves prompting the LLM again to assess whether its response is grounded in the retrieved evidence, but this approach is costly. Alternatively, lightweight natural language inference (NLI) models for efficient grounding verification can be used at inference time. While existing pre-trained NLI models offer potential solutions, their performance remains subpar compared to larger models on realistic RAG inputs. RAG inputs are more complex than most datasets used for training NLI models and have characteristics specific to the underlying knowledge base, requiring adaptation of the NLI models to a specific target domain. Additionally, the lack of labeled instances in the target domain makes supervised domain adaptation, e.g., through fine-tuning, infeasible. To address these challenges, we introduce Automatic Generative Domain Adaptation (Auto-GDA). Our framework enables unsupervised domain adaptation through synthetic data generation. Unlike previous methods that rely on handcrafted filtering and augmentation strategies, Auto-GDA employs an iterative process to continuously improve the quality of generated samples using weak labels from less efficient teacher models and discrete optimization to select the most promising augmented samples. Experimental results demonstrate the effectiveness of our approach, with models fine-tuned on synthetic data using Auto-GDA often surpassing the performance of the teacher model and reaching the performance level of LLMs at 10% of their computational cost.
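The iterative generate, weak-label, select loop described above can be sketched as follows. Every helper function here is a hypothetical placeholder for the paper's generators, teacher models, and selection criterion.

```python
import random

# Sketch of an Auto-GDA-style loop: generate synthetic (evidence, claim) pairs,
# weak-label them with a teacher, keep the most useful ones, and repeat.
def generate_variants(pair: dict) -> list[dict]:
    return [dict(pair, claim=pair["claim"] + " (paraphrase)") for _ in range(3)]

def teacher_label(pair: dict) -> float:
    return random.random()       # stand-in for a large teacher's grounding score

def utility(pair: dict) -> float:
    # Keep confident teacher labels (far from 0.5) as training signal.
    return abs(pair["weak_label"] - 0.5)

def auto_gda(seed_pairs: list[dict], rounds: int = 3, keep: int = 8) -> list[dict]:
    pool = list(seed_pairs)
    for _ in range(rounds):
        candidates = [v for p in pool for v in generate_variants(p)]
        for c in candidates:
            c["weak_label"] = teacher_label(c)
        pool = sorted(candidates, key=utility, reverse=True)[:keep]
    return pool                   # fine-tune the lightweight NLI model on this

seeds = [{"evidence": "Paris is in France.", "claim": "Paris is French."}]
print(len(auto_gda(seeds)))
```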
2410.05111
Qifeng Chen
Qifeng Chen, Sheng Yang, Sicong Du, Tao Tang, Peng Chen, Yuchi Huo
LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present LiDAR-GS, a Gaussian Splatting (GS) method for real-time, high-fidelity re-simulation of LiDAR scans in public urban road scenes. Recent GS methods proposed for cameras have achieved significant advancements in real-time rendering beyond Neural Radiance Fields (NeRF). However, applying GS representation to LiDAR, an active 3D sensor type, poses several challenges that must be addressed to preserve high accuracy and unique characteristics. Specifically, LiDAR-GS designs a differentiable laser beam splatting, using range-view representation for precise surface splatting by projecting lasers onto micro cross-sections, effectively eliminating artifacts associated with local affine approximations. Furthermore, LiDAR-GS leverages Neural Gaussian Representation, which further integrates view-dependent cues, to represent key LiDAR properties that are influenced by the incident direction and external factors. Combining these practices with some essential adaptations, e.g., dynamic instances decomposition, LiDAR-GS succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets when compared with the methods using explicit mesh or implicit NeRF. Our source code is publicly available at https://www.github.com/cqf7419/LiDAR-GS.
[ { "version": "v1", "created": "Mon, 7 Oct 2024 15:07:56 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 09:52:11 GMT" } ]
2025-03-17T00:00:00
[ [ "Chen", "Qifeng", "" ], [ "Yang", "Sheng", "" ], [ "Du", "Sicong", "" ], [ "Tang", "Tao", "" ], [ "Chen", "Peng", "" ], [ "Huo", "Yuchi", "" ] ]
TITLE: LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting ABSTRACT: We present LiDAR-GS, a Gaussian Splatting (GS) method for real-time, high-fidelity re-simulation of LiDAR scans in public urban road scenes. Recent GS methods proposed for cameras have achieved significant advancements in real-time rendering beyond Neural Radiance Fields (NeRF). However, applying GS representation to LiDAR, an active 3D sensor type, poses several challenges that must be addressed to preserve high accuracy and unique characteristics. Specifically, LiDAR-GS designs a differentiable laser beam splatting, using range-view representation for precise surface splatting by projecting lasers onto micro cross-sections, effectively eliminating artifacts associated with local affine approximations. Furthermore, LiDAR-GS leverages Neural Gaussian Representation, which further integrates view-dependent cues, to represent key LiDAR properties that are influenced by the incident direction and external factors. Combining these practices with some essential adaptations, e.g., dynamic instances decomposition, LiDAR-GS succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets when compared with the methods using explicit mesh or implicit NeRF. Our source code is publicly available at https://www.github.com/cqf7419/LiDAR-GS.
2410.12953
Aayush Agrawal
Aayush Agrawal, Aniruddh Sikdar, Rajini Makam, Suresh Sundaram, Suresh Kumar Besai and Mahesh Gopi
Syn2Real Domain Generalization for Underwater Mine-like Object Detection Using Side-Scan Sonar
7 pages, 4 figures and 3 tables
null
10.1109/LGRS.2025.3550037
null
cs.LG cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Underwater mine detection with deep learning suffers from limitations due to the scarcity of real-world data. This scarcity leads to overfitting, where models perform well on training data but poorly on unseen data. This paper proposes a Syn2Real (Synthetic to Real) domain generalization approach using diffusion models to address this challenge. We demonstrate that synthetic data generated with noise by DDPM and DDIM models, even if not perfectly realistic, can effectively augment real-world samples for training. The residual noise in the final sampled images improves the model's ability to generalize to real-world data with inherent noise and high variation. The baseline Mask R-CNN model, when trained on a combination of synthetic and original training datasets, exhibited approximately a 60% increase in Average Precision (AP) compared to being trained solely on the original training data. This significant improvement highlights the potential of Syn2Real domain generalization for underwater mine detection tasks.
[ { "version": "v1", "created": "Wed, 16 Oct 2024 18:42:08 GMT" } ]
2025-03-17T00:00:00
[ [ "Agrawal", "Aayush", "" ], [ "Sikdar", "Aniruddh", "" ], [ "Makam", "Rajini", "" ], [ "Sundaram", "Suresh", "" ], [ "Besai", "Suresh Kumar", "" ], [ "Gopi", "Mahesh", "" ] ]
TITLE: Syn2Real Domain Generalization for Underwater Mine-like Object Detection Using Side-Scan Sonar ABSTRACT: Underwater mine detection with deep learning suffers from limitations due to the scarcity of real-world data. This scarcity leads to overfitting, where models perform well on training data but poorly on unseen data. This paper proposes a Syn2Real (Synthetic to Real) domain generalization approach using diffusion models to address this challenge. We demonstrate that synthetic data generated with noise by DDPM and DDIM models, even if not perfectly realistic, can effectively augment real-world samples for training. The residual noise in the final sampled images improves the model's ability to generalize to real-world data with inherent noise and high variation. The baseline Mask R-CNN model, when trained on a combination of synthetic and original training datasets, exhibited approximately a 60% increase in Average Precision (AP) compared to being trained solely on the original training data. This significant improvement highlights the potential of Syn2Real domain generalization for underwater mine detection tasks.
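Mixing real and diffusion-sampled synthetic data for detector training can be sketched with standard PyTorch dataset utilities. The tensors below are random stand-ins; in practice the synthetic split would come from a DDPM/DDIM sampler.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Sketch of the Syn2Real recipe: train the detector on real samples augmented
# with diffusion-generated synthetic ones. Labels here only mark provenance
# (0 = real, 1 = synthetic) for demonstration.
real = TensorDataset(torch.randn(50, 3, 128, 128), torch.zeros(50, dtype=torch.long))
synthetic = TensorDataset(torch.randn(200, 3, 128, 128), torch.ones(200, dtype=torch.long))

train_set = ConcatDataset([real, synthetic])
loader = DataLoader(train_set, batch_size=16, shuffle=True)

images, provenance = next(iter(loader))
print(images.shape, provenance.float().mean())  # mixed real/synthetic batch
```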
2410.19150
Abraham Israeli
Abraham Israeli, David Jurgens, Daniel Romero
A Test of Time: Predicting the Sustainable Success of Online Collaboration in Wikipedia
null
null
null
null
cs.CY cs.CL cs.SI
http://creativecommons.org/licenses/by/4.0/
The Internet has significantly expanded the potential for global collaboration, allowing millions of users to contribute to collective projects like Wikipedia. While prior work has assessed the success of online collaborations, most approaches are time-agnostic, evaluating success without considering its longevity. Research on the factors that ensure the long-term preservation of high-quality standards in online collaboration is scarce. In this study, we address this gap. We propose a novel metric, `Sustainable Success,' which measures the ability of collaborative efforts to maintain their quality over time. Using Wikipedia as a case study, we introduce the SustainPedia dataset, which compiles data from over 40K Wikipedia articles, including each article's sustainable success label and more than 300 explanatory features such as edit history, user experience, and team composition. Using this dataset, we develop machine learning models to predict the sustainable success of Wikipedia articles. Our best-performing model achieves a high AU-ROC score of 0.88 on average. Our analysis reveals important insights. For example, we find that the longer an article takes to be recognized as high-quality, the more likely it is to maintain that status over time (i.e., be sustainable). Additionally, user experience emerged as the most critical predictor of sustainability. Our analysis provides insights into broader collective actions beyond Wikipedia (e.g., online activism, crowdsourced open-source software), where the same social dynamics that drive success on Wikipedia might play a role. We make all data and code used for this study publicly available for further research.
[ { "version": "v1", "created": "Thu, 24 Oct 2024 20:42:53 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 17:47:49 GMT" } ]
2025-03-17T00:00:00
[ [ "Israeli", "Abraham", "" ], [ "Jurgens", "David", "" ], [ "Romero", "Daniel", "" ] ]
TITLE: A Test of Time: Predicting the Sustainable Success of Online Collaboration in Wikipedia ABSTRACT: The Internet has significantly expanded the potential for global collaboration, allowing millions of users to contribute to collective projects like Wikipedia. While prior work has assessed the success of online collaborations, most approaches are time-agnostic, evaluating success without considering its longevity. Research on the factors that ensure the long-term preservation of high-quality standards in online collaboration is scarce. In this study, we address this gap. We propose a novel metric, `Sustainable Success,' which measures the ability of collaborative efforts to maintain their quality over time. Using Wikipedia as a case study, we introduce the SustainPedia dataset, which compiles data from over 40K Wikipedia articles, including each article's sustainable success label and more than 300 explanatory features such as edit history, user experience, and team composition. Using this dataset, we develop machine learning models to predict the sustainable success of Wikipedia articles. Our best-performing model achieves a high AU-ROC score of 0.88 on average. Our analysis reveals important insights. For example, we find that the longer an article takes to be recognized as high-quality, the more likely it is to maintain that status over time (i.e., be sustainable). Additionally, user experience emerged as the most critical predictor of sustainability. Our analysis provides insights into broader collective actions beyond Wikipedia (e.g., online activism, crowdsourced open-source software), where the same social dynamics that drive success on Wikipedia might play a role. We make all data and code used for this study publicly available for further research.
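The prediction setup, tabular article features mapped to a binary sustainability label and scored with AU-ROC, can be sketched with scikit-learn. The random features below are stand-ins for the ~300 real ones (edit history, user experience, team composition).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Sketch of sustainable-success prediction: tabular features -> binary label,
# scored with AU-ROC. Features are random stand-ins for the real ones.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"AU-ROC: {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.3f}")
```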
2410.21705
Yuxun Qu
Yuxun Qu, Yongqiang Tang, Chenyang Zhang, Wensheng Zhang
AdaptGCD: Multi-Expert Adapter Tuning for Generalized Category Discovery
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Different from the traditional semi-supervised learning paradigm that is constrained by the closed-world assumption, Generalized Category Discovery (GCD) presumes that the unlabeled dataset contains new categories not appearing in the labeled set, and aims to not only classify old categories but also discover new categories in the unlabeled data. Existing studies on GCD typically focus on transferring the general knowledge from the self-supervised pretrained model to the target GCD task via some fine-tuning strategies, such as partial tuning and prompt learning. Nevertheless, these fine-tuning methods fail to strike a sound balance between the generalization capacity of the pretrained backbone and the adaptability to the GCD task. To fill this gap, in this paper, we propose a novel adapter-tuning-based method named AdaptGCD, which is the first work to introduce the adapter tuning into the GCD task and provides some key insights expected to enlighten future research. Furthermore, considering the discrepancy of supervision information between the old and new classes, a multi-expert adapter structure equipped with a route assignment constraint is elaborately devised, such that the data from old and new classes are separated into different expert groups. Extensive experiments are conducted on 7 widely-used datasets. The remarkable improvements in performance highlight the effectiveness of our proposals.
[ { "version": "v1", "created": "Tue, 29 Oct 2024 03:41:47 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 15:55:43 GMT" } ]
2025-03-17T00:00:00
[ [ "Qu", "Yuxun", "" ], [ "Tang", "Yongqiang", "" ], [ "Zhang", "Chenyang", "" ], [ "Zhang", "Wensheng", "" ] ]
TITLE: AdaptGCD: Multi-Expert Adapter Tuning for Generalized Category Discovery ABSTRACT: Different from the traditional semi-supervised learning paradigm that is constrained by the closed-world assumption, Generalized Category Discovery (GCD) presumes that the unlabeled dataset contains new categories not appearing in the labeled set, and aims to not only classify old categories but also discover new categories in the unlabeled data. Existing studies on GCD typically focus on transferring the general knowledge from the self-supervised pretrained model to the target GCD task via some fine-tuning strategies, such as partial tuning and prompt learning. Nevertheless, these fine-tuning methods fail to strike a sound balance between the generalization capacity of the pretrained backbone and the adaptability to the GCD task. To fill this gap, in this paper, we propose a novel adapter-tuning-based method named AdaptGCD, which is the first work to introduce the adapter tuning into the GCD task and provides some key insights expected to enlighten future research. Furthermore, considering the discrepancy of supervision information between the old and new classes, a multi-expert adapter structure equipped with a route assignment constraint is elaborately devised, such that the data from old and new classes are separated into different expert groups. Extensive experiments are conducted on 7 widely-used datasets. The remarkable improvements in performance highlight the effectiveness of our proposals.
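A multi-expert adapter with a route-assignment constraint can be sketched as bottleneck adapters mixed by a router, plus a penalty pushing labeled old-class samples toward a designated expert group. Expert counts, groups, and the penalty form are illustrative assumptions, not AdaptGCD's exact formulation.

```python
import torch
import torch.nn as nn

# Sketch of a multi-expert adapter with a route-assignment penalty.
class MultiExpertAdapter(nn.Module):
    def __init__(self, dim=768, bottleneck=64, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, bottleneck), nn.GELU(), nn.Linear(bottleneck, dim))
            for _ in range(n_experts)
        )
        self.router = nn.Linear(dim, n_experts)

    def forward(self, x):                                    # x: (B, dim)
        gates = self.router(x).softmax(dim=-1)               # (B, E)
        outs = torch.stack([e(x) for e in self.experts], 1)  # (B, E, dim)
        return x + (gates.unsqueeze(-1) * outs).sum(1), gates

def route_penalty(gates, is_old, old_group=(0, 1)):
    # Encourage old-class samples to put their gate mass on the old expert group.
    old_mass = gates[:, list(old_group)].sum(dim=-1)
    return -(old_mass[is_old].clamp_min(1e-6).log()).mean()

adapter = MultiExpertAdapter()
out, gates = adapter(torch.randn(8, 768))
print(out.shape, route_penalty(gates, torch.arange(8) < 4).item())
```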
2411.05609
Cristiano Patr\'icio
Cristiano Patr\'icio, Lu\'is F. Teixeira, Jo\~ao C. Neves
A Two-Step Concept-Based Approach for Enhanced Interpretability and Trust in Skin Lesion Diagnosis
Published in the Computational and Structural Biotechnology Journal
null
10.1016/j.csbj.2025.02.013
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
The main challenges hindering the adoption of deep learning-based systems in clinical settings are the scarcity of annotated data and the lack of interpretability and trust in these systems. Concept Bottleneck Models (CBMs) offer inherent interpretability by constraining the final disease prediction on a set of human-understandable concepts. However, this inherent interpretability comes at the cost of greater annotation burden. Additionally, adding new concepts requires retraining the entire system. In this work, we introduce a novel two-step methodology that addresses both of these challenges. By simulating the two stages of a CBM, we utilize a pretrained Vision Language Model (VLM) to automatically predict clinical concepts, and an off-the-shelf Large Language Model (LLM) to generate disease diagnoses based on the predicted concepts. Furthermore, our approach supports test-time human intervention, enabling corrections to predicted concepts, which improves final diagnoses and enhances transparency in decision-making. We validate our approach on three skin lesion datasets, demonstrating that it outperforms traditional CBMs and state-of-the-art explainable methods, all without requiring any training and utilizing only a few annotated examples. The code is available at https://github.com/CristianoPatricio/2-step-concept-based-skin-diagnosis.
[ { "version": "v1", "created": "Fri, 8 Nov 2024 14:52:42 GMT" }, { "version": "v2", "created": "Fri, 14 Mar 2025 09:51:44 GMT" } ]
2025-03-17T00:00:00
[ [ "Patrício", "Cristiano", "" ], [ "Teixeira", "Luís F.", "" ], [ "Neves", "João C.", "" ] ]
TITLE: A Two-Step Concept-Based Approach for Enhanced Interpretability and Trust in Skin Lesion Diagnosis ABSTRACT: The main challenges hindering the adoption of deep learning-based systems in clinical settings are the scarcity of annotated data and the lack of interpretability and trust in these systems. Concept Bottleneck Models (CBMs) offer inherent interpretability by constraining the final disease prediction on a set of human-understandable concepts. However, this inherent interpretability comes at the cost of greater annotation burden. Additionally, adding new concepts requires retraining the entire system. In this work, we introduce a novel two-step methodology that addresses both of these challenges. By simulating the two stages of a CBM, we utilize a pretrained Vision Language Model (VLM) to automatically predict clinical concepts, and an off-the-shelf Large Language Model (LLM) to generate disease diagnoses based on the predicted concepts. Furthermore, our approach supports test-time human intervention, enabling corrections to predicted concepts, which improves final diagnoses and enhances transparency in decision-making. We validate our approach on three skin lesion datasets, demonstrating that it outperforms traditional CBMs and state-of-the-art explainable methods, all without requiring any training and utilizing only a few annotated examples. The code is available at https://github.com/CristianoPatricio/2-step-concept-based-skin-diagnosis.
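The two-step pipeline can be pictured as (1) scoring clinical concepts with a CLIP-style vision-language model and (2) prompting an LLM for a diagnosis from the predicted concepts, with room for human correction in between. The concept list and both model calls below are hypothetical stand-ins.

```python
# Sketch of the two-step concept-based pipeline with test-time intervention.
CONCEPTS = ["asymmetry", "irregular border", "multiple colors", "blue-white veil"]

def vlm_concept_scores(image) -> dict[str, float]:
    # Stand-in for CLIP-style image/text similarity per concept prompt.
    return {c: 0.8 if c != "blue-white veil" else 0.2 for c in CONCEPTS}

def llm_diagnose(concepts: dict[str, float], threshold: float = 0.5) -> str:
    present = [c for c, s in concepts.items() if s >= threshold]
    prompt = f"Given the dermoscopic concepts {present}, what is the diagnosis?"
    return f"[LLM answer to: {prompt}]"   # replace with a real LLM call

scores = vlm_concept_scores(image=None)
scores["blue-white veil"] = 0.9           # test-time human intervention
print(llm_diagnose(scores))
```

Because neither step is trained, adding a new concept amounts to extending the concept list rather than retraining the system, which is the flexibility the abstract emphasizes.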
2411.12523
Rania Briq
Rania Briq, Jiangtao Wang, Stefan Kesselheim
Data Pruning in Generative Diffusion Models
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data pruning is the problem of identifying a core subset that is most beneficial to training and discarding the remainder. While pruning strategies are well studied for discriminative models like those used in classification, little research has gone into their application to generative models. Generative models aim to estimate the underlying distribution of the data, so presumably they should benefit from larger datasets. In this work, we aim to shed light on the accuracy of this statement, specifically answering the question of whether data pruning for generative diffusion models could have a positive impact. Contrary to intuition, we show that eliminating redundant or noisy data in large datasets is beneficial, particularly when done strategically. We experiment with several pruning methods, including recent state-of-the-art methods, and evaluate on the CelebA-HQ and ImageNet datasets. We demonstrate that a simple clustering method outperforms other sophisticated and computationally demanding methods. We further show how we can leverage clustering to balance skewed datasets in an unsupervised manner to allow fair sampling for underrepresented populations in the data distribution, which is a crucial problem in generative models.
[ { "version": "v1", "created": "Tue, 19 Nov 2024 14:13:25 GMT" }, { "version": "v2", "created": "Sat, 25 Jan 2025 12:41:48 GMT" }, { "version": "v3", "created": "Fri, 14 Mar 2025 13:11:28 GMT" } ]
2025-03-17T00:00:00
[ [ "Briq", "Rania", "" ], [ "Wang", "Jiangtao", "" ], [ "Kesselheim", "Stefan", "" ] ]
TITLE: Data Pruning in Generative Diffusion Models ABSTRACT: Data pruning is the problem of identifying a core subset that is most beneficial to training and discarding the remainder. While pruning strategies are well studied for discriminative models like those used in classification, little research has gone into their application to generative models. Generative models aim to estimate the underlying distribution of the data, so presumably they should benefit from larger datasets. In this work, we aim to shed light on the accuracy of this statement, specifically answering the question of whether data pruning for generative diffusion models could have a positive impact. Contrary to intuition, we show that eliminating redundant or noisy data in large datasets is beneficial, particularly when done strategically. We experiment with several pruning methods, including recent state-of-the-art approaches, and evaluate on the CelebA-HQ and ImageNet datasets. We demonstrate that a simple clustering method outperforms other sophisticated and computationally demanding methods. We further show how clustering can be leveraged to balance skewed datasets in an unsupervised manner, allowing fair sampling for underrepresented populations in the data distribution, which is a crucial problem in generative models.
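The abstract's "simple clustering method" could look like the hedged sketch below: k-means over precomputed image embeddings, then an equal per-cluster quota of centroid-nearest examples. The cluster count, quota rule, and use of precomputed `features` are assumptions for illustration, not the paper's exact recipe.

```python
# Hedged sketch of cluster-based pruning / rebalancing in the spirit of the
# abstract above. `features` are assumed to be precomputed image embeddings.
import numpy as np
from sklearn.cluster import KMeans

def cluster_prune(features: np.ndarray, keep_ratio: float = 0.5,
                  k: int = 100, seed: int = 0) -> np.ndarray:
    """Return indices of a pruned core subset, sampled evenly per cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(features)
    per_cluster = max(1, int(keep_ratio * len(features) / k))  # uniform quota
    keep = []
    for c in range(k):
        members = np.flatnonzero(km.labels_ == c)
        if len(members) == 0:
            continue
        # Keep the members closest to the centroid as the most
        # representative examples of that mode of the data.
        d = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        keep.extend(members[np.argsort(d)[:per_cluster]])
    return np.array(sorted(keep))

# Usage: subset = cluster_prune(embeddings, keep_ratio=0.3)
# then train the diffusion model on dataset[subset].
```

The equal per-cluster quota doubles as the unsupervised rebalancing step the abstract mentions: dense, redundant clusters are cut hardest, while small clusters corresponding to underrepresented populations keep proportionally more of their members.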
2411.12590
Neale Ratzlaff
Neale Ratzlaff, Matthew Lyle Olson, Musashi Hinck, Estelle Aflalo, Shao-Yen Tseng, Vasudev Lal, Phillip Howard
Debias your Large Multi-Modal Model at Test-Time via Non-Contrastive Visual Attribute Steering
10 pages, 6 Figures, 8 Tables. arXiv admin note: text overlap with arXiv:2410.13976
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Large Multi-Modal Models (LMMs) have demonstrated impressive capabilities as general-purpose chatbots able to engage in conversations about visual inputs. However, their responses are influenced by societal biases present in their training datasets, leading to undesirable differences in how the model responds when presented with images depicting people of different demographics. In this work, we propose a training-free debiasing framework for LMMs that intervenes on the model's representations during text generation by constructing a steering vector that reduces references to protected attributes. Our framework introduces two complementary methods: (1) a dataset-based approach that constructs a steering vector by contrasting model activations on biased and neutral inputs, and (2) a novel optimization-based approach designed for low-resource settings, which constructs the steering vector using a single step of gradient-based perturbation without requiring additional data. Our experiments show that these interventions effectively reduce the propensity of LMMs to generate text related to protected attributes while maintaining sentiment and fluency. Furthermore, we demonstrate that debiased LMMs achieve comparable accuracy to their unmodified counterparts on downstream tasks, indicating that bias mitigation can be achieved without sacrificing model performance.
[ { "version": "v1", "created": "Fri, 15 Nov 2024 20:06:09 GMT" }, { "version": "v2", "created": "Thu, 13 Mar 2025 18:02:59 GMT" } ]
2025-03-17T00:00:00
[ [ "Ratzlaff", "Neale", "" ], [ "Olson", "Matthew Lyle", "" ], [ "Hinck", "Musashi", "" ], [ "Aflalo", "Estelle", "" ], [ "Tseng", "Shao-Yen", "" ], [ "Lal", "Vasudev", "" ], [ "Howard", "Phillip", "" ] ]
TITLE: Debias your Large Multi-Modal Model at Test-Time via Non-Contrastive Visual Attribute Steering ABSTRACT: Large Multi-Modal Models (LMMs) have demonstrated impressive capabilities as general-purpose chatbots able to engage in conversations about visual inputs. However, their responses are influenced by societal biases present in their training datasets, leading to undesirable differences in how the model responds when presented with images depicting people of different demographics. In this work, we propose a training-free debiasing framework for LMMs that intervenes on the model's representations during text generation by constructing a steering vector that reduces references to protected attributes. Our framework introduces two complementary methods: (1) a dataset-based approach that constructs a steering vector by contrasting model activations on biased and neutral inputs, and (2) a novel optimization-based approach designed for low-resource settings, which constructs the steering vector using a single step of gradient-based perturbation without requiring additional data. Our experiments show that these interventions effectively reduce the propensity of LMMs to generate text related to protected attributes while maintaining sentiment and fluency. Furthermore, we demonstrate that debiased LMMs achieve comparable accuracy to their unmodified counterparts on downstream tasks, indicating that bias mitigation can be achieved without sacrificing model performance.
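A hedged sketch of the dataset-based variant described in method (1): average the hidden states a chosen layer produces on biased versus neutral inputs, take the difference as the steering vector, and subtract it at generation time via a forward hook. The layer choice, scale, and prompt batches are illustrative assumptions; any PyTorch transformer whose chosen module returns hidden states (or a tuple beginning with them) would fit.

```python
# Minimal sketch of activation steering as outlined above. The specific
# layer, scale, and input batches are assumptions, not the paper's setup.
import torch

@torch.no_grad()
def mean_activation(model, layer, input_ids_batch):
    # Collect the last-token hidden state at `layer` for each input batch.
    acts = []
    def grab(_, __, out):
        h = out[0] if isinstance(out, tuple) else out
        acts.append(h[:, -1, :])
    handle = layer.register_forward_hook(grab)
    for ids in input_ids_batch:
        model(ids)
    handle.remove()
    return torch.cat(acts).mean(dim=0)

def add_steering_hook(layer, steer_vec, scale=1.0):
    # Subtract the bias direction from every hidden state during generation;
    # returning a value from a forward hook replaces the module's output.
    def steer(_, __, out):
        h = out[0] if isinstance(out, tuple) else out
        h = h - scale * steer_vec.to(h.dtype)
        return (h, *out[1:]) if isinstance(out, tuple) else h
    return layer.register_forward_hook(steer)

# Dataset-based variant:
#   steer_vec = (mean_activation(model, layer, biased_ids)
#                - mean_activation(model, layer, neutral_ids))
#   handle = add_steering_hook(layer, steer_vec)
#   model.generate(...)
#   handle.remove()
```

Because the intervention is a hook applied at inference, it matches the training-free, test-time framing of the abstract: the base weights are untouched, and removing the hook restores the original model.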